Cybersecurity News that Matters

EU reaches deal on comprehensive regulation of AI

by Kuksung Nam

Dec. 13, 2023
11:30 AM GMT+9

The world’s first comprehensive body of rules on artificial intelligence has taken a big step toward becoming reality as European Union negotiators clinched a political deal on the regulation of AI technologies.

On December 9, EU policymakers announced that they had reached a provisional agreement on governing the use of AI technologies after three days of intensive debate. Lawmakers from the EU’s three primary governing bodies—the European Parliament, the Council of the European Union, and the European Commission—agreed on the flagship legislation, named the AI Act. The new rules are intended to protect EU citizens from potential harms caused by AI technologies.

According to the press release from the EU governing bodies, the AI Act seeks to regulate AI systems according to a risk-based assessment approach: the greater the potential harm an AI technology poses to society, the stricter the restrictions it will face. Specifically, the EU lawmakers established a three-tier classification system for the level of threat an AI technology might pose: minimal risk, high risk, and unacceptable risk.

Technologies that fall into the minimal risk category will not be subject to restrictions. The AI Act states that the majority of AI technologies will be classed in this lowest tier, as they present minimal harm to citizens’ rights and safety.

High-risk technologies, such as AI systems used in critical infrastructure, including water, gas, electricity, and medical devices, as well as systems that could be used to influence elections, immigration, education, and employment, will face stricter requirements and obligations to gain access to the EU market. Additionally, according to the statement of the European Parliament, the deal gives citizens the right to lodge complaints and request explanations of decisions made by high-risk AI systems that they believe affect their rights.

Furthermore, the EU lawmakers agreed to ban several AI applications, including technologies that manipulate human behavior to circumvent people’s free will. These include, for example, toys equipped with voice assistants able to mislead minors into behaving badly. AI technologies that could allow governments or companies to implement social credit scores, a grading system that could be used to penalize individuals based on their trustworthiness, are also prohibited under the new rules.

When the blueprint of the EU’s AI Act was first revealed to the public, experts expressed both interest in and concerns about its political implications. AlgorithmWatch, a digital rights group based in Berlin, noted the importance of the inclusion of rights for citizens to request explanations for and lodge complaints about AI applications.

“Without a right to explanation and a right to lodge complaints, affected people would have limited or no means to challenge AI-based decisions or to hold deployers of AI systems accountable,” said Kilian Vieth-Ditlmann, the Deputy Head of Policy & Advocacy at AlgorithmWatch, to The Readable in an email statement.

They also shared concerns about a “loophole” in the AI Act which would allow law enforcement agencies to use real-time remote biometric identification systems in publicly accessible spaces under three specific conditions: 1) when searching for crime victims; 2) when searching for dangerous people suspected of serious crimes; and 3) when preventing genuine, present, and foreseeable threats such as terrorist attacks. The concern arises from the fact that, while these uses may be legitimate in themselves, they can easily be misapplied or abused in ways that violate fundamental human rights, such as the right to be free from excessive surveillance and the right to a reasonable degree of privacy in public.

“Whether or not you are in the database, the knowledge that you may be scanned has a profound ‘chilling effect’ on your rights and freedoms,” said the Deputy Head of Policy & Advocacy. “Public spaces are the last open spaces that are not constantly surveilled. We need to protect them in order to protect our fundamental rights such as freedom of expression, religion, and assembly.”

The emergence of AI technologies has raised concerns about their misuse by both cybercriminals and law enforcement agencies. According to a report issued by Graphika in December, services that create deepfake images depicting real individuals nude without their consent have moved from niche internet forums to online platforms such as X. The Internet Watch Foundation released a report in October on how AI technology is increasingly being used to create child sexual abuse imagery online.

Regarding the misuse of AI by law enforcement agencies, human rights advocacy groups such as Amnesty International and European Digital Rights (EDRi) issued separate statements after the announcement of the AI Act, criticizing EU policymakers’ decision to partially allow the use of live facial recognition technology in public spaces by law enforcement agencies.

Cormac Callanan, the cybersecurity coordinator from Enhancing Security Cooperation In and With Asia (ESIWA), stressed the importance of balance. “There are separate concerns about safeguarding and supporting human rights when such systems are also widely adopted by law enforcement agencies,” said Callanan. “We need to focus on the correct balance between the rights of the citizens who are not involved with criminal activities and the rights of law enforcement to investigate crime.”

Furthermore, according to the AI Act, strict regulations will not apply to AI systems used solely for research and innovation or for military and defense purposes. The policymakers also adjusted the requirements for high-risk AI technologies to make them more technically feasible and less burdensome for such entities and companies to comply with.

Following the provisional agreement, the policymakers will work together to finalize the details of the new regulation. The AI Act is expected to go into effect in 2026.


