Italy's data protection regulator, Garante per la Protezione dei Dati Personali, has cautioned GEDI, a leading Italian media group, to comply with EU data protection laws in its collaboration with OpenAI. Reuters reports that the regulator highlighted the risk of non-compliance if personal data from GEDI's archives were shared under a proposed agreement with OpenAI, the creator of ChatGPT.
The partnership, formed in September, would allow OpenAI to use Italian-language content from GEDI’s publications, including La Repubblica and La Stampa, to enhance its chatbot services. The regulator warned that the use of personal and sensitive data stored in digital archives requires stringent safeguards. “The digital archives of newspapers contain the stories of millions of people, with information, details, and even extremely sensitive personal data that cannot be licensed without due care for use by third parties to train artificial intelligence,” stated the Garante.
GEDI clarified that its agreement with OpenAI does not involve selling personal data. “The project has not been launched,” said GEDI. “No editorial content has been made available to OpenAI at the moment and will not be until the reviews underway are completed.” The company expressed hope for ongoing constructive dialogue with the Italian data protection authority.
The case highlights growing tension between European regulators and major AI developers. The EU’s Artificial Intelligence Act (EU AI Act), effective from August 2024, sets strict guidelines for AI systems based on their risk levels. While the Act aims to ensure transparency and data privacy, critics argue it imposes burdensome constraints that could hamper innovation.
AI industry leaders have voiced frustration over Europe's regulatory environment. OpenAI’s CEO, Sam Altman, warned in 2023 that the company might "cease operating" in the EU if compliance proved too difficult. In September 2024, executives from Meta and other firms cautioned in an open letter that the EU’s strict tech policies risk undermining Europe’s competitiveness in AI development.
The Italian regulator’s scrutiny of the GEDI-OpenAI partnership reflects broader EU attitudes toward AI regulation. Such interventions, aimed at ensuring GDPR compliance, exemplify Europe’s cautious approach to AI innovation. Critics argue that this caution could slow progress in a field where other regions, such as the US and China, are advancing more aggressively.
AI technologies have the potential to revolutionise sectors from healthcare and finance to transportation and education. That power, however, carries commensurate responsibility: the misuse or unintended consequences of AI can lead to significant ethical, legal, and social challenges. Bias in AI algorithms, data privacy concerns, and the potential for job displacement are just a few of the risks associated with unchecked AI development.
Regulators outside Europe are moving in the same direction. Australia’s proposed guardrails are designed to address these concerns by establishing a clear regulatory framework that promotes transparency, accountability, and ethical AI practices. The guardrails are not just about mitigating risks but also about fostering public trust and giving businesses the regulatory certainty they need to innovate responsibly. They include:
Accountability Processes: Organizations must establish clear accountability mechanisms to ensure that AI systems are used responsibly. This includes defining roles and responsibilities for AI governance and oversight.
Risk Management: Implementing comprehensive risk management strategies is crucial. This involves identifying, assessing, and mitigating potential risks associated with AI applications.
Data Protection: Ensuring the privacy and security of data used in AI systems is paramount. Organizations must adopt robust data protection measures to prevent unauthorized access and misuse.
Human Oversight: AI systems should not operate in isolation. Human oversight is essential to monitor AI decisions and intervene when necessary to prevent harm.
Transparency: Transparency in AI operations is vital for building public trust. Organizations should provide clear and understandable information about how AI systems work and the decisions they make.
Bias Mitigation: Addressing and mitigating bias in AI algorithms is critical to ensure fairness and prevent discrimination. This involves regular audits of model outputs and updates to AI models to reduce bias (a minimal audit sketch follows this list).
Ethical Standards: Adhering to ethical standards in AI development and deployment is non-negotiable. Organizations must ensure that their AI practices align with societal values and ethical principles.
Public Engagement: Engaging with the public and stakeholders is essential for understanding societal concerns and expectations regarding AI. This helps in shaping AI policies that are inclusive and reflective of public interests.
Regulatory Compliance: Organizations must comply with existing laws and regulations related to AI. This includes adhering to industry-specific standards and guidelines.
Continuous Monitoring: AI systems should be continuously monitored and evaluated to ensure they operate as intended and do not pose unforeseen risks.
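To make one of these guardrails concrete: the Bias Mitigation item calls for regular audits of model outputs. Below is a minimal, hypothetical sketch of one such audit check, a demographic parity comparison of approval rates across groups. The function name, data format, and the 0.10 review threshold are illustrative assumptions, not requirements drawn from any actual regulation.

```python
# Hypothetical bias audit: compute the demographic parity gap, i.e. the
# spread in approval rates across groups. All names and the 0.10
# threshold are illustrative assumptions, not taken from any regulation.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group_label, decision) pairs, where
    decision is 1 (approve) or 0 (deny). Returns (gap, per-group rates)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        approvals[group] += decision
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy decision log: group "A" is approved 2 of 3 times, group "B" 1 of 3.
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(sample)
    print(f"approval rates: {rates}, parity gap: {gap:.2f}")
    if gap > 0.10:  # illustrative review threshold
        print("FLAG: model flagged for review and retraining")
```

In practice, an audit programme would combine several fairness metrics with domain review; no single statistic establishes that a model is unbiased.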
The European Union’s landmark AI law took effect on Thursday, 1 August 2024, bringing significant implications for American technology companies.
The AI Act, first put forward by the European Commission in 2020, seeks to address the harmful effects of artificial intelligence by establishing a comprehensive, harmonized regulatory framework for AI across the EU. It will largely affect large U.S. technology companies, which are currently the main architects and developers of the most advanced AI systems, but its rules apply to a far wider range of enterprises, including non-technology firms.
Tanguy Van Overstraeten, head of law firm Linklaters’ technology, media, and telecommunications practice in Brussels, described the EU AI Act as “the first of its kind in the world.” It is expected to affect many enterprises, he said, particularly those building AI systems, but also those deploying or simply using them in certain scenarios.
Under the Act, high-risk AI systems include autonomous vehicles, medical devices, loan-decisioning systems, educational scoring, and remote biometric identification systems.
The regulation also bans AI applications deemed to pose “unacceptable” risk. These include “social scoring” systems that rank citizens based on aggregated and analyzed data, predictive policing, and the use of emotion-recognition technology in workplaces or schools.
Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.
Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.
In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.
The EU AI Act classifies generative AI as “general-purpose” artificial intelligence, a label for tools designed to perform a wide range of tasks on a par with, if not better than, a human.
General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.
The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.
The intersection of wargames and artificial intelligence (AI) has become a key subject in the rapidly evolving landscape of warfare and technology. As nations adopt AI to enhance their military capabilities, experts are calling for ethical oversight to reduce the potential hazards.
In the US, the administration’s executive order aims to create new guidelines for the security and safety of AI use. Invoking the Defense Production Act, the order directs businesses to provide US regulators with safety test results and other crucial data whenever they are developing AI that could present a “serious risk” to US military, economic, or public security. It remains unclear, however, who will monitor these risks and to what extent.
However, the National Institute of Standards and Technology (NIST) will shortly establish safety requirements that must be met before any such AI programs are released to the public.
Regarding the order, Ben Buchanan, the White House Senior Advisor for AI, said, “I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do[…]We have to do safety and security, we have to do civil rights and equity, we have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”
“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development, and that they share the tests of those systems in accordance with those standards[…]Before it goes out to the public, it needs to be safe, secure, and trustworthy,” Mr. Buchanan added.
In an announcement on Monday, President Biden urged Congress to enact bipartisan data privacy legislation to “protect all Americans, especially kids,” from AI risks.
While several US states, including Massachusetts, California, Virginia, and Colorado, have passed or advanced their own data privacy laws, the US still lacks comprehensive federal safeguards akin to the EU’s General Data Protection Regulation (GDPR).
The GDPR, which took effect in 2018, strictly limits how businesses may collect and use their customers’ personal data, and companies found violating the law can face hefty fines.
However, according to Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University, the White House's most recent requests for data privacy laws "are unlikely to be answered[…]Both sides concur that action is necessary, but they cannot agree on how it should be carried out."