Google is under investigation in Europe over privacy concerns about how the search giant has used personal data to train its generative AI tools. The inquiry is led by Ireland's Data Protection Commission (DPC), which oversees the company's compliance with the European Union's strict data protection laws. The investigation will establish whether Google followed the required legal process, such as carrying out a Data Protection Impact Assessment (DPIA), before using people's private information to develop its AI models.
Data Collection for AI Training Raises Concerns
Generative AI technologies such as Google's Gemini have made headlines because they can produce false information and leak personal data. This raises the question of whether Google's AI training methods, which necessarily involve enormous amounts of data, comply with the GDPR, the EU regulation that protects individuals' privacy and rights when their data is used to develop AI.
The issue at the heart of the probe is whether Google should have carried out a DPIA, a Data Protection Impact Assessment: a review of the risks that data processing activities may pose to individuals' privacy rights. A DPIA is meant to ensure those rights are protected when companies like Google process vast amounts of personal data to build AI models. The investigation focuses specifically on how Google has used its PaLM 2 model to power different AI applications, such as chatbots and enhanced search features.
Fines Over Privacy Breaches
If the DPC finds that Google failed to comply with the GDPR, the consequences could be severe: GDPR fines can reach up to 4% of a company's annual global revenue. Given that Google generates billions of dollars in revenue every year, a penalty on that scale would be enormous.
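For a sense of scale, the GDPR (Article 83) caps such fines at €20 million or 4% of total worldwide annual turnover, whichever is higher. Here is a minimal sketch of that arithmetic, using a purely hypothetical revenue figure rather than Google's actual turnover:

```python
# Illustration of the GDPR Article 83(5) fine cap:
# EUR 20 million or 4% of worldwide annual turnover, whichever is higher.
# The turnover figure below is hypothetical, not Google's actual revenue.

def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

hypothetical_turnover = 250_000_000_000  # EUR 250 billion (assumed for illustration)
print(f"Maximum fine: EUR {gdpr_fine_cap(hypothetical_turnover):,.0f}")
# Maximum fine: EUR 10,000,000,000
```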
Other tech companies, including OpenAI and Meta, have faced similar privacy questions about their data practices in AI development. The broader issue is how personal data is processed in the fast-emerging field of artificial intelligence.
Google's Response to the Investigation
The firm has so far declined to answer questions about the specific sources of data used to train its generative AI tools. A company spokesperson said Google remains committed to GDPR compliance and will continue cooperating with the DPC throughout the investigation. The company maintains it has done nothing illegal, and an investigation does not by itself imply wrongdoing; the inquiry is part of a broader effort to ensure that technology companies are accountable for how personal information is used.
Data Protection in the AI Era
The DPC's questioning of Google is part of a broader effort by EU regulators to ensure that generative AI technologies meet the bloc's high data-privacy standards. As more companies build AI into their operations, concerns over how personal information is used are growing. The GDPR has been one of the most important tools for protecting citizens against data misuse, especially in cases involving sensitive personal data.
In recent years, other tech companies have faced scrutiny over their data practices in AI development. OpenAI, the developer of ChatGPT, and Elon Musk's X (formerly Twitter) have both faced investigations and complaints under the GDPR. This reflects the growing tension between rapid technological advancement and the serious business of protecting privacy.
The Future of AI and Data Privacy
Firms developing AI technologies need to strike a balance between innovation and privacy. While innovation has brought numerous benefits, from better search capabilities to more efficient processes, it has also exposed risks where personal data is not handled carefully.
Moving forward, regulators including the DPC will be tracking how companies like Google handle data. The outcome is likely to produce clearer rules on the permissible use of personal information in AI development, better protecting individuals' rights and freedoms in the digital age.
Ultimately, the outcome of this investigation may shape how AI technologies are designed and deployed in the European Union, and it will certainly inform tech businesses around the world.
At a recent EU meeting in Luxembourg, Poland supported a European Commission proposal to shorten the time new drugs are protected by data exclusivity rules. Health Minister Izabela Leszczyna said Poland prefers one year of market protection over longer periods of data protection.
In April 2023, the European Commission suggested reducing the data exclusivity period for drugs from eight to six years. Minister Leszczyna agreed, saying this would help people access new treatments more quickly without adding extra paperwork. She also proposed one year of market protection for new uses of existing drugs instead of extending data protection.
Balancing Incentives and Access
Minister Leszczyna emphasised that Poland supports measures to ensure all EU countries have access to modern treatments. She suggested that incentives should focus on market protection and not last longer than a year. For drugs treating rare diseases, extending protection could be considered, but for other drugs, different solutions should be found.
Challenges in Generic Drug Production
Krzysztof Kopeć, President of the Polish Association of Pharmaceutical Industry Employers, highlighted issues with drug shortages, especially for generic drugs. He explained that producing drugs in Europe is becoming less profitable, leading to shortages. Although the European Commission wants to boost drug production in Europe, current regulations do not support this, and production costs are higher in Europe than in Asia.
Concerns from Innovative Drug Companies
Innovative drug companies argue that changing existing intellectual property rules is not the answer to drug access problems. They believe the current rules should continue to support innovation and ensure EU patients can access new treatments. Michał Byliniak, General Director of INFARMA, stressed the need for EU reforms to improve drug supply security, availability, and affordability while also supporting new drug development.
INFARMA is discussing potential risks of shorter protection periods with the Ministry of Health and other stakeholders. They warn that reducing protection could limit access to advanced treatments. INFARMA supports keeping current data protection levels and creating incentives to promote innovation, address unmet medical needs, and encourage research in the EU.
Poland's support for a shorter data exclusivity period shows its commitment to balancing access to new treatments, innovation, and economic realities in the EU drug industry. As discussions continue, the goal remains to create rules that ensure safe, effective, and affordable medicines are available to everyone in Europe.
Privacy and security in financial transactions are becoming increasingly important in our digital age. The Consumer Finance Group's recent call for stricter privacy protections for the digital Euro is a proactive step to ensure that people's financial information is protected.
The Consumer Finance Group, a prominent advocate for consumer rights, has raised concerns about the potential privacy vulnerabilities associated with the digital Euro, which is currently under development by the European Central Bank. As reported by ThePrint and Reuters, the group emphasizes the need for robust privacy protections.
One of the key concerns highlighted by the Consumer Finance Group is the risk of digital Euro transactions being traced and monitored without adequate safeguards. This could lead to an invasion of financial privacy, as every transaction could potentially be linked to an individual, raising concerns about surveillance and misuse of data.
To address these concerns, the group has proposed several protective measures.
While these measures are important for safeguarding privacy, it is essential to strike a balance between privacy and security: stringent privacy protections must still allow authorities to combat financial crimes such as money laundering and terrorism financing.
The European Central Bank and policymakers should carefully consider the recommendations put forth by the Consumer Finance Group. Finding the right balance between privacy and security in the digital Euro's design will be crucial in gaining public trust and ensuring the widespread adoption of this digital currency.
The need for stronger privacy protections in the digital Euro is a reminder of the importance of safeguarding personal financial data in our increasingly digitalized society. Regulators and financial institutions must prioritize addressing these privacy issues as digital currencies become more widely used.
After Italy on Friday became the first Western country to block the advanced chatbot ChatGPT over a lack of transparency in its data use, Europe is wondering who will follow. Several neighboring countries have already expressed interest in the decision.
“In the space of a few days, specialists from all over the world and a country, Italy, are trying to slow down the meteoric progression of this technology, which is as prodigious as it is worrying,” writes the French daily Le Parisien.
Many cities in France have already begun their own research “to assess the changes brought about by ChatGPT and the consequences of its use in the context of local action,” reports Ouest-France.
“The city of Montpellier wants to ban ChatGPT for municipal staff, as a precaution,” the paper reports. “The ChatGPT software should be banned within municipal teams considering that its use could be detrimental.”
According to the BBC, the Irish data protection commission is following up with the Italian regulator to understand the basis for its action and "will coordinate with all E.U. (European Union) data protection authorities" in relation to the ban.
The Information Commissioner's Office, the United Kingdom's independent data regulator, also told the BBC that it would "support" AI developments while also "challenging non-compliance" with data protection laws.
ChatGPT is already restricted in several countries, including China, Iran, North Korea, and Russia. The E.U. is in the process of preparing the Artificial Intelligence Act, legislation “to define which AIs are likely to have societal consequences,” explains Le Parisien. “This future law should in particular make it possible to fight against the racist or misogynistic biases of generative artificial intelligence algorithms and software (such as ChatGPT).”
The Artificial Intelligence Act also proposes appointing one regulator in charge of artificial intelligence in each country.
The Italian Situation
The Italian data protection authority explained that it was banning and investigating ChatGPT over privacy concerns. The model was developed by U.S. start-up OpenAI, which is backed by billions of dollars in investment from Microsoft.
The decision "with immediate effect" announced by the Italian National Authority for the Protection of Personal data was taken because “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users,” Le Point reported.
“The move by the agency, which is independent from the government, made Italy the first Western country to take action against a chatbot powered by artificial intelligence,” wrote Reuters.
The Italian data protection authority stated that it would not only block OpenAI's chatbot, but would also investigate whether it complied with the EU's General Data Protection Regulation.
It goes on to say that the new technology "exposes minors to completely inappropriate answers in comparison to their level of development and awareness."
According to the press release from the Italian Authority, on March 20, ChatGPT "suffered a loss of data ('data breach') concerning user conversations and information relating to the payment of subscribers to the paid service."
It also mentions the "lack of a legal basis justifying the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the platform's operation."
ChatGPT was released to the public in November and was quickly adopted by millions of users who were impressed by its ability to answer difficult questions clearly, mimic writing styles, write sonnets and papers, and even pass exams. ChatGPT can also be used without any technical knowledge to write computer code.
“Since its release last year, ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products,” writes Reuters.
"On Friday, OpenAI, which disabled ChatGPT for users in Italy in response to the agency's request, said it is actively working to reduce the use of personal data in training its AI systems like ChatGPT."
According to Euronews, the Italian watchdog has now asked OpenAI to "communicate within 20 days the measures undertaken" to remedy the situation, or face a fine of €20 million ($21.7 million) or up to 4% of annual worldwide turnover.
The announcement comes after Europol, the European police agency, warned on Monday that criminals were ready to use AI chatbots like ChatGPT to commit fraud and other cybercrimes. The rapidly evolving capabilities of chatbots, from phishing to misinformation and malware, are likely to be quickly exploited by those with malicious intent, Europol warned in a report.
Authorities in Europe, led by French and Dutch forces, disclosed how the EncroChat network had been compromised several months earlier. More than 100 million messages were siphoned out by malware the police covertly inserted into the encrypted system, exposing the inner workings of the criminal underworld. People openly discussed drug deals, coordinated kidnappings, premeditated killings, and worse.
The hack, considered one of the largest ever conducted by police, was an intelligence gold mine. It led to hundreds of arrests, home raids, and the seizure of thousands of kilograms of drugs. Two years on, thousands of EncroChat users are imprisoned across Europe, including in the UK, Germany, France, and the Netherlands.
The EncroChat phone network, established in 2016, had about 60,000 users when it was uncovered by law enforcement. According to EncroChat's company website, subscribers paid hundreds of dollars to use a customized Android phone that could "guarantee anonymity." The phone's security features included the ability to "panic wipe" everything on the device, live customer assistance, and encrypted conversations, notes, and phone calls using a version of the Signal protocol. Its GPS chip, microphone, and camera could all be removed.
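As background, here is a minimal, hypothetical sketch of the building blocks behind Signal-style end-to-end encryption: a Diffie-Hellman key agreement (X25519) plus authenticated encryption. This is not EncroChat's actual code, which has never been published, and the full Signal protocol adds prekeys and a "double ratchet" for forward secrecy:

```python
# Illustrative sketch only (assumes the 'cryptography' package is installed).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private, peer_public):
    """Derive a shared 256-bit key from an X25519 exchange."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo handshake").derive(shared)

# Each party generates a key pair; only public keys are exchanged.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Both sides derive the same symmetric key independently.
alice_key = derive_key(alice, bob.public_key())
bob_key = derive_key(bob, alice.public_key())
assert alice_key == bob_key

# Messages are encrypted and authenticated with AES-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(alice_key).encrypt(nonce, b"message text", None)
print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))  # b'message text'
```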
Rather than breaking the network's encryption, the police who hacked it appear to have compromised the EncroChat servers in Roubaix, France, and then distributed malware to devices from there.
According to court filings, 32,477 of EncroChat's 66,134 users across 122 countries were affected, although little is known about how the breach occurred or what kind of malware was deployed.
Documents obtained by Motherboard indicated that investigators could potentially collect all of the data on the phones, and that this information was shared among the law enforcement agencies participating in the inquiry. (EncroChat claimed to be a legitimate business before shutting down as a result of the breach.)
The hack has triggered several legal challenges across Europe. Courts in many countries have ruled that the hacked EncroChat messages can be used as evidence, but those decisions are now being disputed.
According to a report by Computer Weekly, many of these cases are complex: every country has its own legal system, with distinct rules about what kinds of evidence may be used and the procedures prosecutors must follow. Germany, for instance, places strict restrictions on installing malware on mobile devices, while the UK generally forbids the use of "intercepted" evidence in court.
The best-known challenge to date comes from German lawyers. In October, a regional court in Berlin referred an EncroChat appeal to the Court of Justice of the European Union (CJEU), one of the continent's top courts.
The judge asked the court to rule on 14 issues relating to the use of the data in criminal cases and how it was moved across Europe. The Berlin court emphasized how covert the investigation was: a machine translation of its decision states that "technical specifics on the operation of the trojan software and the storage, assignment, and filtering of the data by the French authorities and Europol are not known," and that "French military secrecy inherently affects how the trojan software functions."
Despite the legal issues, police departments all around Europe have praised the EncroChat breach and how it has assisted in locking up criminals. In massive coordinated policing operations that began as soon as the hack was revealed in June 2020, hundreds of people were imprisoned. In the Netherlands, police found criminals using shipping containers as "torture chambers."
Since then, a steady stream of EncroChat cases has come before the courts, and individuals have been imprisoned for some of the most serious crimes. The EncroChat data has been a tremendous help to law enforcement: following the police raids, organized crime arrests in Germany rose by 17%, and at least 2,800 people have been detained in the UK.
While police have been lauded for capturing criminals, defense lawyers argue that the investigative method was flawed and that its results should not be presented as evidence in court. They emphasize that the secrecy surrounding the hack means suspects have not received fair trials. A German case was referred to Europe's top court toward the end of 2022.
If successful, the appeal could jeopardize convictions across Europe, and analysts say the consequences could affect end-to-end encryption globally.
“Even bad people have rights in our jurisdictions because we are so proud of our rule of law […] We’re not defending criminals or defending crimes. We are defending the rights of accused people,” says Christian Lödden, a German criminal defense lawyer.
US-based company Evolv, known for selling artificial intelligence (AI) weapons scanners, claims its systems detect all weapons.
However, the research firm IPVM says Evolv's scanners might fail to detect various types of knives and some bombs and components.
Evolv says it has told venues about all of its systems' "capabilities and limitations." Marion Oswald, of the government's Centre for Data Ethics and Innovation, said there should be more public knowledge and independent evaluation of these systems before they are launched in the UK, because they will replace tried-and-tested methods of metal detection and physical searches.
AI and machine learning allow the scanners to build unique "signatures" of weapons that distinguish them from everyday items like laptops or keys, reducing the need for manual checks and preventing long queues.
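The "signature" idea is essentially a classification problem: represent each scanned object as a feature vector and train a model to separate weapons from benign items. Here is a minimal sketch using invented features (Evolv's real features, data, and models are proprietary and not public):

```python
# Hypothetical illustration of signature-based classification.
# Feature vectors are made up: [metallic_mass, elongation, fragmentation].
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [0.90, 0.80, 0.10],  # large tactical knife (hypothetical signature)
    [0.70, 0.60, 0.05],  # handgun
    [0.30, 0.20, 0.00],  # laptop
    [0.10, 0.10, 0.00],  # keys
]
y_train = ["weapon", "weapon", "benign", "benign"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# A new scan yields a signature; the model flags it without a manual search.
print(model.predict([[0.85, 0.75, 0.08]]))  # expected: ['weapon']
```

In practice the hard part, as the IPVM findings below suggest, is the boundary cases: items whose signatures sit close to the benign cluster, such as certain knives.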
"Metallic composition, shape, fragmentation - we have tens of thousands of these signatures, for all the weapons that are out there. All the guns, all the bombs, and all the large tactical knives," said Peter George, chief executive, in 2021. For years, independent security experts have raised concerns over some of Evolv's claims.
In the past, the company did not allow IPVM to test its Evolv Express technology. Last year, however, Evolv allowed the National Center for Spectator Sports Safety and Security (NCS4) to test the system.
NCS4's public report, released last year, gave Evolv a score of 2.84 out of 3; most guns were detected 100% of the time.
However, NCS4 also produced a separate private report, which IPVM obtained via a Freedom of Information request. That report found that Evolv identified large knives only 42% of the time, and said the system failed to detect every knife at the sensitivity level observed during the exercise.
Based on the data collected, the report recommended full transparency with potential customers. ASM Global, owner of the Manchester Arena, said its use of Evolv Express is the "first such deployment at the arena in Europe," and it plans to introduce the technology at other venues.
In 2017, a man detonated a bomb at an Ariana Grande concert at the arena, killing 22 people and injuring hundreds more, many of them children.
Evolv did not dispute the private report's findings. The company says it believes in communicating sensitive security information, including the capabilities and limitations of its systems, so that security experts can make informed decisions for their specific venues.
NCS4's report deserves attention, as there is little public information about how Evolv's technology works.