
Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI features for its Ray-Ban Meta smart glasses in France, Italy, and Spain, marking a significant step in the rollout of its wearable technology across Europe. 

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant using only their voice, asking general questions and receiving answers through the glasses. 

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Meta Fined €91 Million by EU Privacy Regulator for Improper Password Storage

 

On Friday, Meta was fined €91 million ($101.5 million) by the European Union's primary privacy regulator for accidentally storing some user passwords without proper encryption or protection.

The investigation began five years ago when Meta informed Ireland's Data Protection Commission (DPC) that it had mistakenly saved certain passwords in plaintext format. At the time, Meta publicly admitted to the issue, and the DPC confirmed that no external parties had access to the passwords.

"It is a widely accepted practice that passwords should not be stored in plaintext due to the potential risk of misuse by unauthorized individuals," stated Graham Doyle, Deputy Commissioner of the Irish DPC.

A Meta spokesperson mentioned that the company took swift action to resolve the error after it was detected during a 2019 security audit. Additionally, there is no evidence suggesting the passwords were misused or accessed inappropriately.

Throughout the investigation, Meta cooperated fully with the DPC, the spokesperson added in a statement on Friday.

Given that many major U.S. tech firms base their European operations in Ireland, the DPC serves as the leading privacy regulator in the EU. To date, Meta has been fined a total of €2.5 billion for violations under the General Data Protection Regulation (GDPR), which was introduced in 2018. This includes a record €1.2 billion penalty issued in 2023, which Meta is currently appealing.

The Rising Threat of Payment Fraud: How It Impacts Businesses and Ways to Counter It

 

Payment fraud continues to be a significant and evolving threat to businesses, undermining their profitability and long-term sustainability. The FBI reports that between 2013 and 2022, companies lost around $50 billion to business email compromise, showing how prevalent this issue is. In 2022 alone, 80% of enterprises faced at least one payment fraud attempt, with 30% of affected businesses unable to recover their losses. These attacks can take various forms, from email interception to more advanced methods like deep fakes and impersonation scams. 

Cybercriminals exploit vulnerabilities, manipulating legitimate transactions to steal funds, often without immediate detection. Financial losses from payment fraud can be devastating, impacting a company’s ability to pay suppliers, employees, or even invest in growth opportunities. Investigating such incidents can be time-consuming and costly, further straining resources and leading to operational disruptions. Departments like finance, IT, and legal must shift focus to tackle the issue, slowing down core business activities. For example, time spent addressing fraud issues can cause delays in projects, damage employee morale, and disrupt customer services, affecting overall business performance. 

Beyond financial impact, payment fraud can severely damage a company’s reputation. Customers and partners may lose trust if they feel their financial information isn’t secure, leading to lost sales, canceled contracts, or difficulty attracting new clients. Even a single fraud incident can have long-lasting effects, making it difficult to regain public confidence. Businesses also face legal and regulatory consequences when payment fraud occurs, especially if they have not implemented adequate protective measures. Non-compliance with data protection regulations like the General Data Protection Regulation (GDPR) or penalties from the Federal Trade Commission (FTC) can lead to fines and legal actions, causing additional financial strain. Payment fraud not only disrupts daily operations but also poses a threat to a company’s future. 

End-to-end visibility across payment processes, AI-driven fraud detection systems, and regular security audits are essential to prevent attacks and build resilience. Companies that invest in these technologies and foster a culture of vigilance are more likely to avoid significant losses. Staff training on recognizing potential threats and improving security measures can help businesses stay one step ahead of cybercriminals. Mitigating payment fraud requires a proactive approach, ensuring businesses are prepared to respond effectively if an attack occurs. 

By investing in advanced fraud detection systems, conducting frequent audits, and adopting comprehensive security measures, organizations can minimize risks and safeguard their financial health. This preparation helps prevent financial loss, operational disruption, reputational damage, and legal consequences, thereby ensuring long-term resilience and sustainability in today’s increasingly digital economy.

Is Google Spying on You? EU Investigates AI Data Privacy Concerns



Google is currently under investigation in Europe over privacy concerns about how the search giant has used personal data to train its generative AI tools. The inquiry is led by Ireland's Data Protection Commission (DPC), which oversees the company's compliance with the European Union's strict data protection laws. The investigation will establish whether Google followed the required legal process, such as carrying out a Data Protection Impact Assessment (DPIA), before using people's personal information to develop its AI models.

Data Collection for AI Training Causes Concerns

Generative AI technologies such as Google's Gemini have made headlines for producing false information and leaking personal data. This raises the question of whether Google's AI training methods, which necessarily involve enormous amounts of data, comply with the GDPR and adequately protect individuals' privacy and rights when their data is used to develop AI.

The issue at the heart of the probe is whether Google should have carried out a DPIA, a Data Protection Impact Assessment, which evaluates the risks that data processing activities may pose to individuals' privacy rights. A DPIA is meant to ensure those rights are protected when companies like Google process vast amounts of personal data to build AI models. The investigation focuses specifically on how Google has used its PaLM 2 model, which underpins various AI features such as chatbots and search enhancements.

Fines Over Privacy Breaches

If the DPC finds that Google did not comply with the GDPR, the consequences could be serious: fines can reach up to 4% of a company's global annual revenue. Given that Google generates billions of dollars in revenue every year, such a penalty could be enormous.

Other tech companies, including OpenAI and Meta, have faced similar privacy questions about the data practices behind their AI development.

More broadly, the case touches on general issues around the processing of personal data in the fast-emerging sphere of artificial intelligence.

Google Response to Investigation

Google has so far declined to answer questions about the specific sources of data used to train its generative AI tools. A company spokesperson said Google remains committed to complying with the GDPR and will continue cooperating with the DPC throughout the investigation. The company maintains it has done nothing unlawful, and being under investigation does not in itself imply wrongdoing; the inquiry is part of a broader effort to ensure that technology companies account for how personal information is used.

Data Protection in the AI Era

The DPC's questioning of Google is part of a broader effort by EU regulators to ensure that generative AI technologies adhere to the bloc's high data-privacy standards. Concerns over how personal information is used are growing as more companies build AI into their operations. The GDPR has been among the most important tools for protecting citizens against the misuse of their data, especially where sensitive or personal information is involved.

In recent years, other tech companies have also faced scrutiny over their data practices in AI development. OpenAI, the developer of ChatGPT, and Elon Musk's X (formerly Twitter) have both faced investigations and complaints under the GDPR. This reflects the growing tension between rapid technological advancement and the seriousness with which privacy must be protected.

The Future of AI and Data Privacy

Firms developing AI technologies need to strike a balance between innovation and privacy. While innovation has brought numerous benefits, from better search capabilities to more efficient processes, it has also exposed the risks that arise when personal data is not handled carefully.

Moving forward, regulators including the DPC will be watching how companies like Google handle data. The outcome is likely to produce clearer rules on what constitutes permissible use of personal information for AI development, better protecting individuals' rights and freedoms in the digital age.

Ultimately, the outcome of this investigation may shape how AI technologies are designed and deployed in the European Union, and it will certainly inform tech businesses around the world.


X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information into its artificial intelligence systems without their consent, according to Noyb, a Vienna-based privacy campaign group that filed the complaints.

In early September, Ireland's Data Protection Commission (DPC) brought legal proceedings against X over the data collection practices used to train its artificial intelligence systems. A series of privacy complaints have been filed against X, the company formerly known as Twitter, after it emerged that the platform had been using data from European users to train its Grok AI chatbot without their consent. 

The issue came to light when a social media user discovered that X had quietly begun processing European users' posts for AI training purposes late last month. TechCrunch reported that the Irish Data Protection Commission (DPC), responsible for ensuring that X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that its users can choose whether Grok, the platform's AI chatbot, can access their public posts. 

Users who wish to opt out must uncheck a box in their privacy settings. However, Judge Leonie Reynolds observed that it appeared clear X had begun processing EU users' data to train its AI systems on May 7, while the opt-out option was only offered from July 16. She added that not all users had access to the feature when it was first introduced. 

Noyb, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Max Schrems, the head of Noyb, is the privacy activist who successfully challenged Meta's transfer of EU data to the US as a violation of the EU's stringent GDPR rules in a case he brought against Meta in 2017. That case resulted in a €1.2 billion fine for Meta as well as logistical headaches; in June, following complaints from Noyb, Meta was forced to pause the use of EU users' data to train its AI systems. 

Noyb's central complaint is that X did not obtain the consent of European Union users before using their data to train Grok. A Noyb spokesperson told The Daily Upside that the company could face a fine of up to 4% of its annual revenue as a result of these complaints. Such a penalty would sting all the more because X has far less money to play with than Meta does:  

X is no longer a publicly traded company, which makes it difficult to gauge the state of its cash reserves. What is known is that when Musk bought the company in 2022, it took on roughly $25 billion in debt at a very high leverage ratio. In the years since, the banks that helped finance the transaction have found it increasingly difficult to offload that debt, and Fidelity recently marked down the value of its stake, giving a hint as to how the firm might now be valued. 

As of last March, Fidelity valued its stake at 67% less than at the time of the acquisition. Even before Musk bought it, Twitter had struggled for years to remain consistently profitable, a small fish in a big tech pond. 

A key goal of Noyb is to secure a full-scale investigation into how X was able to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems told The Information, noting that they already do this regularly for many other purposes, so obtaining consent for AI training would be entirely feasible. 

The legal action comes only days before Grok 2 is scheduled to launch in beta. In recent years, major tech companies have repeatedly faced ethical challenges over training AI models on large amounts of user data. In June 2024, it was widely reported that Meta faced complaints in 11 European countries over new privacy policies that signalled its intent to use the data generated by each account to train machine learning models. 

At its core, the GDPR is intended to protect European citizens against unexpected uses of their data, particularly uses that could affect their right to privacy and freedom from intrusion. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is mandatory in comparable cases involving data used to target ads. 

The complaints also raise concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access the personal data that has been collected. OpenAI's ChatGPT has drawn much of the same GDPR-related criticism.

Digital Afterlife: Are We Ready for Virtual Resurrections?


 

Imagine receiving a message that your deceased father's "digital immortal" bot is ready to chat. This scenario, once confined to science fiction, is becoming a reality as the digital afterlife industry evolves. Virtual reconstructions of loved ones, created using their digital footprints, offer a blend of comfort and disruption, blurring the lines between memory and reality.

The Digital Afterlife Industry

The digital afterlife industry leverages VR and AI technologies to create virtual personas of deceased individuals. Companies like HereAfter allow users to record stories and messages during their lifetime, accessible to loved ones posthumously. MyWishes offers pre-scheduled messages from the deceased, maintaining their presence in the lives of the living. Hanson Robotics has developed robotic busts that interact using the memories and personality traits of the deceased, while Project December enables text-based conversations with those who have passed away.

Generative AI plays a crucial role in creating realistic and interactive digital personas. However, the high level of realism can blur the line between reality and simulation, potentially causing emotional and psychological distress.

Ethical and Emotional Challenges

As comforting as these technologies can be, they also present significant ethical and emotional challenges. The creation of digital immortals raises concerns about consent, privacy, and the psychological impact on the living. For some, interacting with a digital version of a loved one can aid the grieving process by providing a sense of continuity and connection. However, for others, it may exacerbate grief and cause psychological harm.

One of the major ethical concerns is consent. The deceased may not have agreed to their data being used for a digital afterlife. There’s also the risk of misuse and data manipulation, with companies potentially exploiting digital immortals for commercial gain or altering their personas to convey messages the deceased would never have endorsed.

Need for Regulation

To address these concerns, there is a pressing need to update legal frameworks. Issues such as digital estate planning, the inheritance of digital personas, and digital memory ownership need to be addressed. The European Union's General Data Protection Regulation (GDPR) recognizes post-mortem privacy rights but faces challenges in enforcement due to social media platforms' control over deceased users' data.

Researchers have recommended several ethical guidelines and regulations, including obtaining informed and documented consent before creating digital personas, implementing age restrictions to protect vulnerable groups, providing clear disclaimers to ensure transparency, and enforcing strong data privacy and security measures. A 2018 study suggested treating digital remains as integral to personhood, proposing regulations to ensure dignity in re-creation services.

The dialogue between policymakers, industry, and academics is crucial for developing ethical and regulatory solutions. Providers should offer ways for users to respectfully terminate their interactions with digital personas. Through careful, responsible development, digital afterlife technologies can meaningfully and respectfully honour our loved ones.

As we navigate this new frontier, it is essential to balance the benefits of staying connected with our loved ones against the potential risks and ethical dilemmas. By doing so, we can ensure that the digital afterlife industry develops in a way that respects the memory of the deceased and supports the emotional well-being of the living.


EU Accuses Microsoft of Secretly Harvesting Children's Data

 

Noyb (None of Your Business), also known as the European Centre for Digital Rights, has filed two complaints against Microsoft under Article 77 of the GDPR, alleging that the tech giant breached schoolchildren's privacy rights through the Microsoft 365 Education service it provides to educational institutions. 

Noyb claims that Microsoft used its contracts to shift its GDPR responsibilities onto those institutions, even though the institutions have no reasonable means of meeting such obligations because they have no real control over the data being collected. 

The non-profit argued that as schools and educational institutions in the European Union became more dependent on digital services during the pandemic, large tech businesses took advantage of the trend to attract a new generation of committed customers. While noyb supports the modernization of education, it believes Microsoft has breached various data protection rights in providing educational institutions with access to its 365 Education services, leaving students, parents, and institutions with few options. 

Noyb voiced concern about the market power of software vendors like Microsoft, which allows them to dictate the terms and conditions of their contracts with schools. The organisation claims that this power has enabled IT companies to transfer most of their legal obligations under the General Data Protection Regulation (GDPR) to educational institutions and municipal governments. 

In reality, according to noyb, neither local governments nor educational institutions have any power to affect how Microsoft handles user data. Instead, they are frequently faced with a "take it or leave it" scenario in which Microsoft holds all the commercial power and decision-making authority while the schools are required to bear all the associated risks.

“This take-it-or-leave-it approach by software vendors such as Microsoft is shifting all GDPR responsibilities to schools,” stated Maartje de Graaf, a data protection lawyer at noyb. “Microsoft holds all the key information about data processing in its software, but is pointing the finger at schools when it comes to exercising rights. Schools have no way of complying with the transparency and information obligations.” 

Two complaints 

Noyb filed the two complaints on behalf of plaintiffs over suspected infringements of data privacy rules. The first concerns a father who, in accordance with the GDPR, requested the personal data that Microsoft's 365 Education service had collected about his daughter. 

However, Microsoft had redirected the concerned parent to the "data controller," and after confirming with Microsoft that the school was the data controller, the parent contacted the school, which responded that they only had access to the student's email addresses used for sign-up. 

The second complaint states that, although the complainant never consented to cookies or tracking technologies, Microsoft 365 Education installed cookies that, according to Microsoft's own documentation, analyse user behaviour and collect browser data for advertising purposes. The non-profit alleged that this kind of invasive profiling was carried out without the school's knowledge or approval. 

noyb has requested that the Austrian data protection authority (DSB) investigate and analyse the data collected and processed by Microsoft 365 Education, as neither Microsoft's own privacy documentation, the complainant's access requests, nor the non-profit's own research could shed light on this process, which it believes violates the GDPR's transparency provisions.

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations


The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data. 

The story spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of the data collection. 

One UK-based user, Phillip Bloom, publicly shared the message, alerting others to the impending changes, which appear to also affect Instagram users.

The AI training process

These changes give Meta permission to use your information and personal content from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language model and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will allow Meta to automatically pull information from the affected types of content. To prevent Meta from using your content, you can opt out now by going to the relevant Facebook help page. 

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with data protection regulations strong enough to require Meta to provide an opt-out, go to the support page mentioned above, fill out the form, and submit it. 

You'll need to select your nation and explain why you're opting out in a text box, and you'll have the option to offer more information below that. You should receive a response indicating whether Meta will honor your request to opt out of having your data utilized. 

Prepare to push back: some users report that their requests have been denied, even though in countries covered by legislation such as the European Union's GDPR, Meta should be required to honor them.

Challenges for users outside the EU and UK

There are a few caveats to consider. Opting out protects your own posts, but it does not guarantee protection for content featuring you that is shared by friends or family members who have not opted out of having their data used for AI training. 

Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms. 

Given how much user data flows across Meta's services, the use of that data was always to be expected; it is simply too valuable as training material for the company's numerous AI programs to pass up.

Meta to Train AI with Public Facebook and Instagram Posts

 


 

Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.

Utilising Public Data for AI

European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.

GDPR Compliance and Legitimate Interest

Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.

Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.

Regulatory Concerns and Delays

The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has addressed. Meta assured that only public data from EU users would be utilized and confirmed that data from minors would not be included.

Meta’s AI Development Efforts

Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.

In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.

Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.

European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.

This development represents a striking move in Meta's AI journey and accentuates the critical role of transparency and regulatory oversight in handling personal data responsibly.


Slack Faces Backlash Over AI Data Policy: Users Demand Clearer Privacy Practices

 

In February, Slack introduced its AI capabilities, positioning itself as a leader in the integration of artificial intelligence within workplace communication. However, recent developments have sparked significant controversy. Slack's current policy, which collects customer data by default for training AI models, has drawn widespread criticism and calls for greater transparency and clarity. 

The issue gained attention when Gergely Orosz, an engineer and writer, pointed out that Slack's terms of service allow the use of customer data for training AI models, despite reassurances from Slack engineers that this is not the case. Aaron Maurer, a Slack engineer, acknowledged the need for updated policies that explicitly detail how Slack AI interacts with customer data. This discrepancy between policy language and practical application has left many users uneasy. 

Slack's privacy principles state that customer data, including messages and files, may be used to develop AI and machine learning models. In contrast, the Slack AI page asserts that customer data is not used to train Slack AI models. This inconsistency has led users to demand that Slack update its privacy policies to reflect the actual use of data. The controversy intensified as users on platforms like Hacker News and Threads voiced their concerns. Many felt that Slack had not adequately notified users about the default opt-in for data sharing. 

The backlash prompted some users to opt out of data sharing, a process that requires contacting Slack directly with a specific request. Critics argue that this process is cumbersome and lacks transparency. Salesforce, Slack's parent company, has acknowledged the need for policy updates. A Salesforce spokesperson stated that Slack would clarify its policies to ensure users understand that customer data is not used to train generative AI models and that such data never leaves Slack's trust boundary. 

However, these changes have yet to address the broader issue of explicit user consent. Questions about Slack's compliance with the General Data Protection Regulation (GDPR) have also arisen. GDPR requires explicit, informed consent for data collection, which must be obtained through opt-in mechanisms rather than default opt-ins. Despite Slack's commitment to GDPR compliance, the current controversy suggests that its practices may not align fully with these regulations. 

As more users opt out of data sharing and call for alternative chat services, Slack faces mounting pressure to revise its data policies comprehensively. This situation underscores the importance of transparency and user consent in data practices, particularly as AI continues to evolve and integrate into everyday tools. 

The recent backlash against Slack's AI data policy highlights a crucial issue in the digital age: the need for clear, transparent data practices that respect user consent. As Slack works to update its policies, the company must prioritize user trust and regulatory compliance to maintain its position as a trusted communication platform. This episode serves as a reminder for all companies leveraging AI to ensure their data practices are transparent and user-centric.

Websites Engage in Deceptive Practices to Conceal the Scope of Data Collection and Sharing

 

Websites frequently conceal the extent to which they share our personal data, employing tactics to obscure their practices and prevent consumers from making fully informed decisions about their privacy. This lack of transparency has prompted governmental responses, such as the European Union's GDPR and California's CCPA, which require websites to seek permission before tracking user activity.

Despite these regulations, many users remain unaware of how their data is shared and manipulated. A recent study delves into the strategies employed by websites to hide the extent of data sharing and the reasons behind such obfuscation.

The research, focusing on online privacy regulations in Canada, reveals that websites often employ deception to mislead users and increase the difficulty of monitoring their activities. Notably, websites dealing with sensitive information, like medical or banking sites, tend to be more transparent about data sharing due to market constraints and heightened privacy sensitivity.

During the COVID-19 pandemic, as online activity surged, instances of privacy abuses also increased. The study shows that popular websites are more likely to obscure their data-sharing practices, potentially to maximize profits by exploiting uninformed consumers.

Third-party data collection by websites is pervasive, with numerous tracking mechanisms used for advertising and other purposes. This extensive surveillance raises concerns about privacy infringement and the commodification of personal data. Dark patterns and lack of transparency further exacerbate the issue, making it difficult for users to understand and control how their information is shared.

Efforts to protect consumer privacy, such as GDPR and CCPA, have limitations, as websites continue to manipulate and profit from user data despite opt-in and opt-out regulations. Consumer responses, including the use of VPNs and behavioral obfuscation, offer some protection, but the underlying information asymmetry remains a significant challenge.

EU AI Act to Impact US Generative AI Deployments

 



In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to attend to the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



Hays Research Reveals Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has significantly increased, according to a new survey by recruitment firm Hays. The study, which is based on a poll with almost 15,000 replies from professionals and employers—including 886 from Scotland—shows a significant rise in the percentage of companies using AI in their operations over the previous six months, from 26% to 32%.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI adoption, the study reveals that professionals have differing opinions on how AI will affect their jobs. Even though 80% of Scottish professionals do not currently use AI in their work, 21% think AI tools will improve their ability to do their jobs. Interestingly, over the past six months the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study also indicates concern among employees, with 61% believing that their companies are not doing enough to prepare them for the expanding use of AI in the workplace. This raises questions about the workforce's readiness to adopt and take full advantage of AI technologies. Justin Black, a director of Hays' technology business, stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

One of the notable challenges impeding wider adoption is the reluctance of enterprises to expose their data and intellectual property to AI systems, citing concerns about compliance with the General Data Protection Regulation (GDPR). This reluctance is also influenced by questions of trust. According to Black, demand for AI capabilities has outpaced the supply of skilled individuals in the sector, highlighting a skills deficit in the AI space.

The wariness about exposing sensitive data and intellectual property stems largely from GDPR compliance concerns: businesses are cautious about the potential dangers of disclosing confidential data to AI systems, while professionals' scepticism about the security and reliability of those systems adds to the trust issues. 

Given the growing role AI is playing in Scottish workplaces, the study suggests that employers should prioritize tackling skills shortages, fostering employee readiness, and improving communication about AI integration. By doing so, businesses can ease GDPR and trust concerns while creating an environment that lets employees take full advantage of AI technology's benefits.  

Unlocking Data Privacy: Mine's No-Code Approach Nets $30 Million in Funding

 


Israeli data privacy company Mine Inc. has announced the completion of a $30 million Series B funding round led by Battery Ventures and PayPal Ventures, with participation from the investment arm of US insurance giant Nationwide. Existing investors Saban Ventures, MassMutual Ventures, Headline Ventures, and Google's AI fund Gradient Ventures also joined the round.

Using artificial intelligence, and specifically natural language processing, Mine can scan your inbox to identify which companies hold your personal information and lets you ask them to delete data they have no good reason to keep. 

Widespread concern about GDPR meant the product sparked a lot of interest: initially free, the startup racked up about 5 million users in just a few weeks. The company then expanded its user base to business users and enterprise applications. 

By scanning a user's inbox and sign-on activity, Mine can map out where customer or business data is being stored and used. That capability struck a chord with privacy officers, who are responsible for keeping companies in compliance with privacy rules.

Some 150 clients, including Reddit, HelloFresh SE, Fender, Guesty, Snappy, and Data.ai, use Mine's data privacy and disclosure solutions. The new capital will fund ongoing operations in the coming years and support global expansion, including bringing the company's MineOS B2B platform to the US and extending its offerings to the enterprise market. 

The company has 35 employees and is in the process of hiring dozens of developers, QA professionals, and machine learning specialists to be based in Israel. Founded in 2019 and headquartered in Tel Aviv, Mine was started by CEO Gal Ringel, CTO Gal Golan, and CPO Kobi Nissan.

Since its founding, Mine's vision has been to make privacy regulations easy to navigate for both companies and individuals. Over the past two years that vision has sharpened around the MineOS B2B platform, which aims to give each customer a single source of truth for the data in its organization, enabling it to identify which systems, assets, and data it holds. 

This process, known as data mapping, is one of the most important building blocks in any organization, serving as a foundation for a variety of teams, including legal and privacy, data, engineering, IT, and security teams. As Ringel said, "The funding was complete at the end of the second week of October, just one week after the war had begun." 

Ringel added that, because of the difficult market conditions of the past year, the company has been managed carefully and with discipline since March of last year, reducing monthly expenses while significantly boosting revenue to an annualized rate of millions of dollars (4x growth in 2023), extraordinary metrics that attracted many investors to the company. 

MineOS, the company's B2B platform, now serves hundreds of enterprise customers, including Reddit, HelloFresh SE, FIFA, and Data.ai, and the $30 million Series B will fund its continued development. The round was co-led by Battery Ventures and PayPal Ventures, the investment arm of the payments giant, with participation from all of the previous backers, including Saban Ventures, Gradient Ventures (Google's AI fund), MassMutual Ventures, and Headline Ventures. 

Although Mine has not disclosed its valuation, co-founder and CEO Gal Ringel said in a recent interview that the company's valuation has increased threefold since its last fundraising in 2020. (That previous round was $9.5 million, raised when the company had only 100,000 users and no revenue.) Mine has now raised over $42.5 million in funding. 

Part of the new funding will go toward sales development around Mine's current offerings, with the rest earmarked for R&D. Mine intends to launch two new products in Q1 to cater to the explosion of interest in artificial intelligence, one of them designed for data privacy officers preparing to comply with the AI laws regulators plan to adopt in the near future. Mine is far from alone in the data protection tools market. 

Because its features sit close to other data protection activities, Mine faces competition from companies in the same arena, for instance OneTrust, which offers GDPR and consent-management solutions for websites, and BigID, which provides a comprehensive set of compliance tools for data usage. Ringel said Mine's competitive advantage is its emphasis on being user-friendly, so that it can be adopted and used even by people with no technical background.

TikTok Faces Massive €345 Million Penalty for Mishandling Kids' Data Privacy

 


TikTok has been fined €345 million (£296 million) for mishandling children's accounts, after failing to shield underage users' content from public view and breaching EU data laws. 

Ireland's data watchdog, which oversees the Chinese-owned video app across the EU, found that TikTok had violated multiple GDPR rules. The investigation concluded that TikTok breached the GDPR by placing users' accounts on a public setting by default; failing to give transparent information to child users; allowing an adult using the "family pairing" option to enable direct messaging for children over 16; and failing to properly consider the risks to children placed on the platform with public settings. 

Children's personal information was not sufficiently protected because the app made accounts public by default and did not adequately address the risks posed by under-13 users being able to access its platform, according to a decision published by the Irish Data Protection Commission (DPC). 

In a statement released on Tuesday, the Irish Data Protection Commission (DPC), TikTok's lead regulator in the EU, said the company had violated eight articles of the GDPR. These provisions cover several aspects of data processing, from the lawful use of personal data to protecting it from unlawful use. 

Most children's accounts had their profile settings set to public by default, so that anyone could see the content posted there. The Family Pairing feature, intended to let parents link to an older child's account and manage settings such as Direct Messages, in practice allowed any adult, not just a parent or guardian, to pair with a child's account.  

TikTok gave no indication that this feature could put children at risk. During registration and when posting videos, TikTok also failed to provide child users with the information it should have, and instead relied on what are known as "dark patterns" to nudge users toward more privacy-invasive options. 

Separately, in April the UK data regulator fined TikTok £12.7m after finding that the company had illegally processed the data of 1.4 million children under the age of 13 who were using its platform without parental consent. 

Despite being one of the most popular social media platforms, TikTok was said to have done "very little or nothing, if anything" to ensure the safety of its users from illicit activity. TikTok said the investigation examined the privacy setup the company had in place between 31 July and 31 December 2020, and that it has since addressed all of the issues raised.

Since 2021, all new and existing TikTok accounts belonging to 13- to 15-year-olds have been set to private, meaning only people the user has approved can view their content. The DPC also noted that certain aspects of its draft decision had been overruled by the European Data Protection Board (EDPB), a body made up of data protection regulators from the EU member states. 

The German regulator had proposed a finding that the use of "dark patterns" (deceptive website and app design that steers users toward certain behaviours or choices) violated the GDPR's provisions on the fair processing of personal data, and the EDPB required the DPC to include this finding in its decision. 

According to the Irish privacy regulator, TikTok unlawfully made the accounts of users aged 13 to 17 public by default between July and December 2020, which effectively meant anyone could watch and comment on the videos those users posted. 

Moreover, the company failed to adequately assess the risks posed by users under the age of 13 gaining access to its platform. The regulator also found that TikTok still steers teenagers who join the platform toward sharing their videos and accounts publicly through manipulative pop-up prompts. 

The regulator has ordered the company to change these misleading designs, known as dark patterns, within three months to prevent further harm to consumers. As early as the second half of 2020, minors' accounts could also be linked to unverified adult accounts. 

The video platform was also found to have failed to explain to teenagers, before their content and accounts were made public, the consequences of that visibility. The board of European regulators likewise said it had serious doubts about the effectiveness of TikTok's measures to keep under-13s off its platform in the latter half of 2020. 

The EDPB found that TikTok was not checking the ages of existing users "in a sufficiently systematic manner" and that its mechanisms could be easily circumvented. However, it said a lack of information available during the cooperation process meant it was unable to establish an infringement on this point.

The United Kingdom's data regulator fined TikTok £12.7 million (€14.8 million) in April for allowing children under 13 to use the platform and processing their data. The company was also fined €750,000 by the Dutch privacy authority in 2021 for failing to provide a privacy policy in the Dutch language, a requirement meant to protect Dutch children.

New Cyber Threat: North Korean Hackers Exploit npm for Malicious Intent

 


GitHub has issued an updated threat warning about a North Korean attack campaign that uses malicious npm package dependencies to compromise victims. A blog post published by the development platform earlier this week said the attacks targeted employees of blockchain, cryptocurrency, online gambling, and cybersecurity companies.   

Alexis Wales, VP of GitHub security operations, said the attacks often begin with attackers posing as developers or recruiters, using fake GitHub, LinkedIn, Slack, or Telegram profiles; in some cases, legitimate accounts have been hijacked. 

A separate, highly targeted campaign against the npm package registry aims to entice developers into installing malicious modules. According to Hacker News, this attack wave, which exhibits behaviour similar to one uncovered in June, has been linked to North Korean threat actors by the supply chain security firm Phylum. 

Nine packages were identified as having been uploaded to npm between August 9 and August 12, 2023: ws-paso-jssdk, pingan-vue-floating, srm-front-util, cloud-room-video, progress-player, ynf-core-loader, ynf-core-renderer, ynf-dx-scripts, and ynf-dx-webpack-plugins. The attackers typically initiate a conversation with the target and then attempt to move it to another platform. 

The attack chain begins with a postinstall hook in the package's package.json file, which runs an index.js file as soon as the package is installed. That script launches a daemon process, in this instance named Android, by pulling in the legitimate pm2 module as a dependency; the daemon in turn executes a JavaScript file named app.js. 
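To illustrate the mechanism only (the names below are hypothetical, not the actual malicious packages), a package abusing npm's lifecycle scripts in this way would carry a manifest roughly like the following:

    {
      "name": "example-compromised-package",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node index.js"
      },
      "dependencies": {
        "pm2": "^5.3.0"
      }
    }

Because npm runs the postinstall script automatically during installation, installing dependencies with the --ignore-scripts flag, or at least auditing any package that declares install scripts, is a common precaution.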

The app.js script initiates encrypted two-way communications with a remote server roughly 45 seconds after the package is installed, contacting "ql.rustdesk[.]net", a spoofed domain posing as the authentic RustDesk remote desktop software. The information sent includes details about the compromised host. 

The malware then pings the server every 45 seconds to check for further instructions, which are decoded and executed in turn. According to the Phylum research team, the attackers appear to be monitoring the GUIDs of compromised machines and selectively sending additional payloads, in the form of encoded JavaScript, to machines of interest. 

In recent months, several typosquatted versions of popular Ethereum packages have also been discovered in the npm repository; they attempt to make HTTP requests to a Chinese server, wallet.cba123[.]cn, targeting wallet encryption keys.

Additionally, the highly popular NuGet package Moq has come under fire after new versions released last week shipped with a dependency named SponsorLink, which extracted SHA-256 hashes of developers' email addresses from their local Git configurations and sent them to a cloud service without their knowledge.
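
To illustrate what that data collection amounts to, the minimal sketch below reads the locally configured Git email and computes its SHA-256 hash, mirroring the reported behaviour. It is not SponsorLink's actual code, and the file name is hypothetical.

```typescript
// email-hash.ts — illustrative only, not SponsorLink's actual code: reads the
// locally configured Git email and hashes it with SHA-256, mirroring the
// behaviour attributed to the dependency.
import { execSync } from "child_process";
import { createHash } from "crypto";

const email = execSync("git config --get user.email").toString().trim();
const digest = createHash("sha256").update(email.toLowerCase()).digest("hex");

console.log(digest); // the kind of value that was reportedly sent to a cloud service
```

A hashed email address is generally still considered personal data under the GDPR when it can be tied back to an individual, which is part of why the practice drew criticism.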

The controversial changes, which raise GDPR compliance concerns, were rolled back in version 4.20.2. Even so, Bleeping Computer reported that Amazon Web Services (AWS) had withdrawn its support for the project, and the episode may have seriously damaged the project's reputation.

There are also reports that organizations are increasingly vulnerable to dependency confusion attacks, in which developers unwittingly pull malicious or vulnerable code into their projects, potentially opening the door to large-scale supply chain attacks.

Several mitigations can reduce the risk of dependency confusion: for example, publishing internal packages under scopes assigned to your organization and registering internal package names as placeholders in the public registry so that they cannot be claimed by others. A simple check along these lines is sketched below.
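
As a rough illustration of the second point, the following sketch (the package names are hypothetical) queries the public npm registry for a list of internal package names, so a team can see which names are unclaimed and could be reserved as placeholders, or spot look-alikes that already exist.

```typescript
// check-public-names.ts — hypothetical helper: checks whether internal
// package names (examples below are made up) are already claimed on the
// public npm registry, so unclaimed names can be reserved as placeholders.
const internalPackages = ["@acme/billing-core", "acme-internal-utils"];

async function existsOnPublicRegistry(name: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.status === 200; // 404 means the name is not registered publicly
}

async function main(): Promise<void> {
  for (const name of internalPackages) {
    const taken = await existsOnPublicRegistry(name);
    console.log(`${name}: ${taken ? "already exists publicly" : "unclaimed on the public registry"}`);
  }
}

main().catch(console.error);
```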

The recent North Korean campaign exploiting npm packages is a stark reminder that the threat landscape keeps evolving and that attackers are adopting ever more sophisticated tactics. Safeguarding sensitive data and preventing further breaches demands proactive, vigilant defense: organizations should prioritize verifying the identities of the people who contact them, validating the packages they install, and carefully managing their internal packages.

Safeguarding Your Work: What Not to Share with ChatGPT

 

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.
Potential Risks and Concerns:
  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as social security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches (see the sketch after this list).
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
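
As a rough illustration of point 2, the sketch below strips obvious email addresses and US SSN-like patterns from text before it is sent to any external AI service. The patterns and labels are illustrative only; a real deployment would need much broader coverage and review.

```typescript
// redact.ts — a hypothetical pre-submission filter: masks obvious email
// addresses and US SSN-like patterns before text leaves the organization.
const patterns: Array<[RegExp, string]> = [
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "[EMAIL]"],
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],
];

function redact(text: string): string {
  // Apply each pattern in turn, replacing matches with a neutral label.
  return patterns.reduce((out, [re, label]) => out.replace(re, label), text);
}

console.log(redact("Contact jane.doe@example.com, SSN 123-45-6789"));
// -> "Contact [EMAIL], SSN [SSN]"
```
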
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

 

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
  1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
  2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups. Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
  3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
As facial recognition technology spreads, biometric vendors must address these concerns and foster trust in their products and services. By embracing privacy protection, transparency, and accountability, they can help ease public unease; adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that balance innovation with the defense of individual rights. Ultimately, the moral and ethical standards upheld by the biometrics sector will strongly shape how facial recognition technology develops.






Facebook Shares Private Information With NHS Trusts

 


A report published by The Observer has revealed that NHS trusts have been sharing private information with Facebook. The newspaper's investigation found that the websites of 20 NHS trusts were using a covert tracking tool to collect browsing data and share it with the tech giant, a major breach of patient privacy.

The trusts had assured visitors that their personal information would not be collected, and no consent was obtained for the tracking. Yet data were collected showing the pages people visited, the buttons they clicked, and the keywords they searched for.

As part of the system, that data was matched with the user's IP address and, often, with their Facebook account details.

Once these details are matched to an individual's medical information, they can reveal a person's medical conditions, the doctor's appointments they have made, and the treatments they have received.

Facebook could then use that information for advertising campaigns in line with its business objectives.

News of the Meta Pixel findings this weekend has caused alarm across the NHS trust community: 17 of the 20 trusts that used the tracking tool have taken drastic measures, with some even apologizing for the incident.

How does a Meta Pixel tracker work? What is it all about? 

Meta's advertising tracking tool allows companies to track visitor activity on their web pages and gain a deeper understanding of their actions. 

The Meta Pixel has been identified on 33 hospital websites, where, whenever someone clicks a button to book an appointment, Facebook receives "a packet of data" from the pixel. That data is associated with an IP address, which in turn can be linked to a specific individual or household.
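
In practical terms, the pixel is a small piece of JavaScript embedded in the page: once initialised with an advertiser's pixel ID, each call to its track function sends an event, along with browser metadata, back to Meta. The fragment below is a simplified, hypothetical illustration of that pattern (the pixel ID and button selector are placeholders), not the code found on the trust websites.

```typescript
// pixel-example.ts — a simplified, hypothetical illustration of how a page
// typically fires Meta Pixel events; "1234567890" and "#book-appointment"
// are placeholders. fbq is defined by Meta's fbevents.js script at runtime.
declare const fbq: (command: string, ...args: unknown[]) => void;

fbq("init", "1234567890");   // ties events on this page to an advertiser's pixel ID
fbq("track", "PageView");    // fired on every page load

// A click handler like this is what turns a "book appointment" button
// into a data point shared with Meta.
document.querySelector("#book-appointment")?.addEventListener("click", () => {
  fbq("track", "Schedule");  // "Schedule" is one of Meta's standard events
});
```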

Eight of the trusts have reportedly apologized to their patients, and several said they had been unaware that patient data was being sent to Facebook at all. The Information Commissioner's Office (ICO) is nonetheless pressing ahead with its investigation, and privacy experts have voiced their concerns in unison.

As a result of the research findings, the Meta Pixel has been removed from the Friedrich Hospital website. 

Piedmont Healthcare, for example, used the Meta Pixel on its patient portal to collect data about patients' upcoming doctor appointments, including their names and the dates and times of the appointments.

Privacy experts have expressed concern over these findings, which they believe point to widespread potential breaches of patient confidentiality and data protection that are, in their view, "completely unacceptable".

There is a possibility that the company is receiving special category health data, which enjoys extra legal protection. Under the law, health information covers anything relating to an individual's health status, such as medical conditions, tests, and treatments.

It is impossible to determine exactly how the data is used once it reaches Facebook's servers. The company states that sending it sensitive medical data is prohibited and that it has filters in place to weed out such information if it is received accidentally.

Several of the trusts involved explained that they originally implemented the tracking pixel to monitor recruitment or charity campaigns and had no idea that patient information was being sent to Facebook in the process.

BHNHST, a healthcare trust in Buckinghamshire, has removed the tracking tool from its website, commenting that the presence of the Meta Pixel was an unintentional error on its part.

When users accessed a patient handbook about HIV medication on BHNHST's website, the trust appears to have shared information with Facebook as a result, including the name of the drug, the trust's name, the user's IP address, and details of their Instagram account.

The trust's privacy policy explicitly states that any consumer health information it collects will not be used for marketing purposes without the consumer's explicit consent.

Alder Hey Children's Trust in Liverpool likewise shared information with Facebook whenever a user accessed a webpage related to a sexual development issue, a crisis mental health service, or an eating disorder.

Professor David Leslie, director of ethics at the Alan Turing Institute, warned that the transfer of patient information to third parties would erode the "delicate relationship of trust" between the NHS and its patients. "When accessing an NHS website, we have a reasonable expectation that our personal information will not be extracted and shared with third-party advertising companies or companies that might use it to target ads or link our personal information to health conditions," he said.

Wolfie Christl, a data privacy expert who researches the ad tech industry, said regulators should have stopped this practice long ago, describing it as irresponsible, negligent, and unacceptable, and insisting it must stop immediately.

The 20 NHS trusts in England found to be using the tracking tool together serve a population of 22 million, stretching from Devon to the Pennines, and several had used it for years before it was discontinued.

Moreover, Meta is facing litigation over allegations that it knowingly received sensitive health information, including data taken from health portals, and took no steps to prevent it. Several plaintiffs have sued the company, alleging that it violated their medical privacy by intercepting and selling individually identifiable health information gathered from its partner websites.

Meta said it had contacted the trusts to remind them of its policies, which prohibit the sharing of health information with the company.

"Our corporate communication department educates advertisers on the proper use of business tools to avoid this kind of situation," the spokesperson added. The group added that it was the owner's responsibility to make sure that the website complied with all applicable data protection laws and that consent was obtained before sending any personal information. 

Questions have been raised about the effectiveness of the company's filters for weeding out potentially sensitive data and about what types of information from hospital websites would actually be blocked; Meta also declined to explain why NHS trusts were able to send the data in the first place.

According to the company, advertisers can use its business tools, including health-related advertising, to help achieve their business goals. Its website offers several guides on how it can show users ads that "might be of interest" by leveraging data collected through those tools: someone who browses travel websites, for instance, might later see ads for hotel deals.

According to the DPC, Meta failed to comply with part of the GDPR (General Data Protection Regulation) by moving Facebook users' data from one country to another without permission.

Meta Ireland was subsequently handed a record fine and ordered to suspend any future transfers of personal data to the US within five months, a penalty the company has described as unjustified.