By finalizing the AI Act this year, the EU has legally defined which AI use cases pose “unacceptable risk.” However, the world’s first comprehensive AI legislation still calls for practical input on prohibited AI use cases, including controversial applications such as live facial recognition in public spaces for law enforcement.
The European AI Office has launched a consultation that aims to clarify the definition of an AI system and the prohibited AI practices established in the AI Act. The Office, which plays a key role in implementing the AI Act, hopes the consultation will yield additional practical examples and use cases from providers of AI systems, businesses, public authorities, academics, civil society, and the general public.
Aside from the use of real-time remote biometric identification by law enforcement, the consultation should create a more detailed picture of AI practices such as untargeted scraping of internet or CCTV material for facial recognition databases, emotion recognition in the workplace or in education, and biometric categorization software used to infer sensitive categories.
Other AI use cases under scrutiny include systems that use harmful subliminal, manipulative, or deceptive techniques; unacceptable social scoring; and individual crime risk assessment and prediction.
Stakeholders are invited to provide their input by December 11, 2024. The insights will feed into the European Commission’s guidelines for national authorities and for AI providers and deployers, which are set to be released in early 2025.
EU institutions cannot prevent harmful AI exports
Although the European AI Office has just begun to operate, the agency is already facing criticism.
Despite its ambition to regulate AI, the EU’s AI Act left significant loopholes, among them the freedom to export risky AI technology, including biometric identification and emotion recognition systems, to non-EU countries. According to science policy advisor William Burns, this loophole could lead to a proliferation of dangerous AI systems, especially in the Global South.
Burns argues that EU institutions, including the AI Office, lack effective checks and balances on AI exports, particularly as they are often under-resourced and influenced by industry lobbies.
“It is implausible that the office would take on activist tasks outside the legislation, such as gathering data on harmful sales overseas,” writes Burns. “Perhaps it could dabble in lower-key horizon scanning activities that would meet some of these needs. But in the absence of export controls, there is no solid way to measure what is going on, let alone halt it.”