
EU Bans AI Systems Deemed ‘Unacceptable Risk’

The European Union's (EU) Artificial Intelligence Act (AI Act) establishes a common regulatory and legal framework for the development and use of artificial intelligence. The European Commission (EC) proposed the law in April 2021, and the European Parliament passed it in May 2024.

EC guidelines introduced this week specify that AI practices assessed as posing an "unacceptable" risk are now prohibited. The AI Act sorts AI systems into four categories, each subject to a different degree of oversight. Minimal-risk AI, such as spam filters and recommendation algorithms, remains largely unregulated, while limited-risk AI, such as customer service chatbots, must meet basic transparency requirements.
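The four-tier model described above can be sketched as a simple lookup table. This is purely illustrative: the tier names and example systems follow this post's descriptions, not any official EU taxonomy or compliance tool.

```python
# Illustrative sketch of the AI Act's four-tier risk model as described
# in this post. Not an official taxonomy or a compliance tool.
RISK_TIERS = {
    "minimal": {
        "oversight": "largely unregulated",
        "examples": ["spam filters", "recommendation algorithms"],
    },
    "limited": {
        "oversight": "basic transparency requirements",
        "examples": ["customer service chatbots"],
    },
    "high": {
        "oversight": "strict compliance measures, including mandatory risk assessments",
        "examples": ["medical diagnostics", "autonomous vehicles"],
    },
    "unacceptable": {
        "oversight": "prohibited in the EU",
        "examples": ["social scoring", "manipulative or exploitative AI"],
    },
}

def oversight_for(tier: str) -> str:
    """Return the oversight regime for a given risk tier."""
    return RISK_TIERS[tier]["oversight"]
```

The point of the structure is that oversight scales with risk: each tier upward adds obligations, and only the top tier is banned outright.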

Artificial intelligence considered high-risk, such as in medical diagnostics or autonomous vehicles, is subject to stricter compliance measures, including legally mandated risk assessments. The AI Act aims to let Europeans benefit from artificial intelligence while protecting them from the risks its application can pose. Most AI systems present minimal to no risk and can help address societal challenges, but certain applications need regulation to prevent harmful outcomes.

A major concern is the lack of transparency in AI decision-making, which makes it difficult to determine whether individuals have been unfairly disadvantaged, for instance when applying for jobs or public benefits. Existing laws offer some protection, but they are insufficient to address the unique challenges posed by AI, which is why the EU has now enacted a new set of regulations.

AI systems that pose unacceptable risks, meaning a clear threat to people's safety, livelihoods, and rights, are banned in the EU. These include social scoring, the scraping of internet or CCTV footage to build facial recognition databases, and AI algorithms that manipulate, deceive, or exploit vulnerabilities in harmful ways. Applications categorised as "high risk" are not forbidden, but the EC will monitor them: these are applications that appear to have been developed in good faith, yet could have catastrophic consequences if something were to go wrong.

High-risk uses include AI in critical infrastructure, such as transportation, where failure could put citizens' lives at risk; AI in educational institutions, which can directly affect a person's access to education and career path, for example through exam scoring; AI-based products such as surgical robots; and AI in law enforcement, such as the evaluation of evidence, where it has the potential to override people's fundamental rights.

The AI Act is the first legislation of its kind to be enforced in the European Union, marking an important milestone in the region's approach to artificial intelligence regulation. Although the European Commission has not yet released comprehensive compliance guidance, organizations are already required to follow the newly established rules on prohibited AI applications and AI literacy requirements.

The Act explicitly prohibits AI systems deemed to pose an "unacceptable risk," including those that manipulate human behaviour in harmful ways, exploit vulnerabilities associated with age, disability, or socioeconomic status, or enable government social scoring. It also bans the use of real-time biometric identification in public places, except under narrowly specified circumstances, as well as the creation of facial recognition databases from online images or scraped surveillance footage.

The use of artificial intelligence for emotion recognition in workplaces or educational institutions is also restricted, along with predictive policing software. Companies found using these banned AI systems within the EU face severe fines of up to 7% of their global annual turnover or 35 million euros, whichever is greater. In the wake of these regulations, companies in the AI sector must focus on compliance while awaiting further guidance from EU authorities on how to achieve it.

The Act also prohibits AI systems that use information about an individual's background, skin colour, or social media behaviour to rank their likelihood of defaulting on a loan or defrauding a social welfare program. Law enforcement agencies must follow strict guidelines and may not use AI to predict criminal behaviour based solely on facial features or personal characteristics, without taking objective, verifiable facts into account.

Moreover, the legislation forbids AI tools that indiscriminately scrape facial images from the internet or CCTV footage to build large-scale databases accessible to surveillance agencies, a form of mass surveillance. Organizations may not use AI-driven webcams or voice recognition to detect their employees' emotions, and subliminal or deceptive AI interfaces that manipulate users into making purchases are likewise forbidden.

As a further measure, it is prohibited to introduce AI-based toys or systems that target children, the elderly, or other vulnerable individuals in ways likely to induce harmful behaviour. The Act also forbids AI systems that infer political opinions or sexual orientation from facial analysis, ensuring stricter protection of individuals' privacy rights.