
Irish Data Protection Commission Halts AI Data Practices at X

 

The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under the powers granted to it by the Data Protection Act 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and committed to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, the outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok’s training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed over the brief processing period from May 7 to August 1. 

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.

ChatGPT: Researcher Develops Data-Stealing Malware Using AI


Since its introduction last year, ChatGPT has created a buzz among tech enthusiasts around the world with its ability to create articles, poems, movie scripts, and much more. The AI can even generate functional code if provided with well-written, clear instructions. 

Despite the security measures put in place by OpenAI, and although most developers use the tool for harmless purposes, a new analysis suggests that threat actors can still use the AI to create malware. 

According to a cybersecurity researcher, ChatGPT was used to create zero-day malware capable of collecting data from a compromised device. Alarmingly, the malware avoided detection by every vendor on VirusTotal. 

Forcepoint researcher Aaron Mulgrew says he decided early in the malware development process not to write any code himself, relying instead on cutting-edge approaches typically used by highly skilled threat actors, such as rogue nation-states. 

Mulgrew, who called himself a "novice" at developing malware, said he selected Go as the implementation language not just because it was simple to use but also because he could manually debug the code if necessary. To escape detection, he also used steganography, a technique that conceals sensitive information within an ordinary file or message. 

Creating Dangerous Malware Through ChatGPT 

Mulgrew found a loophole in ChatGPT's safeguards that allowed him to have the malware code written line by line and function by function. 

After compiling the separate functions, he produced an executable that steals data discreetly and which, he believes, is comparable to nation-state malware. The alarming part is that Mulgrew developed such dangerous malware without advanced coding experience and without the help of any hacking team. 

According to Mulgrew, the malware poses as a screensaver app that launches itself automatically on Windows devices. Once launched, it searches for files such as Word documents, images, and PDFs and steals any data it finds. 

The malware then fragments the stolen data and conceals it within images on the device. Because those images are subsequently uploaded to a Google Drive folder, the exfiltration is difficult to detect. 
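Mulgrew has not published his code, but the least-significant-bit (LSB) embedding idea behind this kind of steganography can be sketched benignly in a few lines of Python. The sketch below operates on a plain byte buffer standing in for pixel data rather than a real image file, and the function names and carrier are illustrative, not taken from the research:

```python
# Minimal LSB steganography sketch: hide a short message in the
# least-significant bits of a carrier byte sequence. Real image
# steganography applies the same idea to pixel channel values.

def embed(carrier: bytes, message: bytes) -> bytearray:
    # One carrier byte stores one message bit in its lowest bit.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # clear LSB, then set it
    return out

def extract(carrier: bytes, length: int) -> bytes:
    # Reassemble `length` bytes from the carrier's low bits.
    bits = [b & 1 for b in carrier[:length * 8]]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    cover = bytes(range(256)) * 2   # stand-in for raw pixel data
    stego = embed(cover, b"hidden")
    print(extract(stego, 6))        # b'hidden'
```

Because flipping only the lowest bit changes each carrier byte by at most one, the modified file remains visually and statistically close to the original, which is why the technique is attractive for hiding exfiltrated data inside otherwise ordinary images.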

Latest from OpenAI 

According to a report by Reuters, the European Data Protection Board (EDPB) has recently established a task force to address privacy issues relating to artificial intelligence (AI), with a focus on ChatGPT. 

The action comes after recent decisions by Italy and Germany's commissioner for data protection to regulate ChatGPT, raising the possibility that other nations may follow suit.