By Zoe Kleinman, Philippa Wain & Ashleigh Swan
Technology team
Discrimination from advancing artificial intelligence is a more pressing concern than human extinction, says the EU’s competition chief.
Margrethe Vestager told the BBC “guardrails” were needed to counter the technology’s biggest risks.
She said this was key where AI is being used to help make decisions that can affect someone’s livelihood, such as whether they can get a mortgage.
The European Parliament will vote on its proposed AI rules on Wednesday.
The AI Act is being considered by politicians amid warnings over developing the tech – which enables computers to perform tasks typically requiring human intelligence – too quickly.
In an exclusive interview with the BBC, Ms Vestager said the more pressing concern was AI’s potential to amplify the bias or discrimination that can be contained in the vast amounts of data sourced from the internet and used to train models and tools.
“Probably [the risk of extinction] may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are.
“If it’s a bank using it to decide whether I can get a mortgage or not, or if it’s social services in your municipality, then you want to make sure that you’re not being discriminated [against] because of your gender or your colour or your postal code,” she said.
On Tuesday, Ireland’s data protection authority said it had put Google’s planned EU roll-out of its AI chatbot Bard on hold.
It said it had been informed by Google that its ChatGPT competitor would be introduced in the EU this week, but was yet to receive details or information showing how the firm had identified and minimised data protection risks to prospective users.
Deputy Commissioner Graham Doyle said the DPC was seeking the information “as a matter of urgency” and had raised further data protection enquiries about it with Google.
‘A UN approach’
Ms Vestager, who is the European Commission’s executive vice president, said AI regulation needs to be a “global affair”.
She insisted a consensus among “like-minded” countries should be prioritised before getting more jurisdictions, such as China, on board.
“Let’s start working on a UN approach. But we shouldn’t hold our breath,” she said.
“We should do what we can here and now.”
Ms Vestager is spearheading EU efforts to create a voluntary code of conduct with the US government, which would see companies using or developing AI sign up to a set of standards that are not legally binding.
Being ‘pragmatic’
The current draft of the AI Act seeks to categorise applications of AI into levels of risk to consumers, with AI-enabled video games or spam filters falling into the lowest risk category.
High-risk AI systems include those used to evaluate credit scores or access to loans and housing. These will be the focus of the strictest controls on the tech.
But as AI continues to develop quickly, Ms Vestager said there was a need to be pragmatic when it comes to fine-tuning rules around this technology.
“It’s better to get, let’s say 80% now than 100% never, so let’s get started and then return when we learn and then correct with others,” she said.
Ms Vestager said there was “definitely a risk” that AI could be used to influence the next elections.
She said the challenge for police and intelligence services would be to stay “fully on top” of criminals, who risk getting ahead in the race to utilise the tech.
“If your social feed can be scanned to get a thorough profile of you, the risk of being manipulated is just enormous,” she said, “and if we end up in a situation where we believe nothing, then we have undermined our society completely.”
But Ms Vestager said calls to pause the development of AI were not realistic.
“No-one can enforce it. No-one can make sure that everyone is on board,” she said, pointing out that a pause could be used by some as an opportunity to get ahead of competitors.
“What I think is important is that every developer knows that everyone has signed up for the same guardrails so that no-one takes excessive risks.”
Facial recognition
The European Parliament’s proposals for the AI Act seek to restrict the use of biometric identification systems and the indiscriminate collection of user data from social media or CCTV footage for purposes such as facial recognition.
However, Ms Vestager said: “We want to put in strict guardrails so that it’s not used in real-time, but only in specific circumstances where you’re looking for a missing child or there’s a terrorist fleeing.
“The Parliament has a much more principled position that they will vote on tomorrow to basically ban it completely.”
Before the AI Act can be finalised as the world’s first rulebook on the use and development of AI systems, the EU’s three branches of power - the Commission, the Parliament and the Council - will all have to agree on its final version.
It is not expected to come into effect before 2025.