By Chris Vallance
Technology reporter, BBC News
The prime minister’s plan for the UK to take the lead in AI regulation is at risk unless a new law is introduced in November, MPs have warned.
The EU could overtake the UK in efforts to make AI safe unless action is taken, members of the Commons Technology Committee said.
The UK will host an international AI summit at the start of November.
The government told the BBC it is willing to consider further steps if needed.
But it did not say whether it agreed that a new law should be put forward so rapidly. Instead, a spokesperson highlighted the summit and a £100m initial investment in a taskforce to encourage the safe development of AI models.
That is “more funding dedicated to AI safety than any other government in the world”, the UK government said.
If legislation is not introduced in the King’s Speech on 7 November, the earliest it could become law is 2025, the committee says in a report published on Thursday.
The report argues not bringing in legislation for two years risks the UK “being left behind by other legislation—like the EU AI Act—that could become the de facto standard and be hard to displace”.
The situation could mirror data protection rules, where UK laws followed the EU lead, the report argues.
But although the government’s white paper on AI regulation has recognised a new law may be needed at some point, Rishi Sunak has previously argued that, initially, “we can probably do lots of this without legislation”.
A key part of his plan is the November summit which the government says will be the “world’s first major global summit on AI safety”.
The committee argued as wide a range of countries as possible should be invited, which would include China.
Imitation game
The report also highlights twelve “challenges” that the UK government must address, including:
- Bias: For example, AI employment tools might associate women’s names with traditionally female roles
- Privacy: AI tools can be used to identify people in ways that are controversial. For example, police use of live facial recognition systems that scan faces and compare them to watchlists of suspects
- Employment: AI systems will replace some jobs and the economic impact of this will need to be addressed
The use of copyrighted material to train AI systems is also one of the challenges.
So-called generative AI systems can now create new works in the style of famous artists, actors and musicians.
But to pull off this feat AI is trained on huge amounts of copyrighted material. Many authors, actors, artists and musicians argue that AI should not be trained on their works without permission and compensation.
There are already steps to develop a voluntary agreement that would allow AI firms access to copyrighted works, while at the same time supporting artists, the report notes.
A planned exemption to copyright for AI firms was abandoned by the government in February.
AI’s power to imitate people could also be used to spread misinformation, commit fraud, or fool bank voice-recognition security systems, MPs said.
No failsafe
The report follows a warning on Wednesday from the National Cyber Security Centre, which said that large language models – a type of AI that powers popular chatbots – could not be protected from certain types of attacks designed to persuade them to do malicious things. There were at present “no failsafe measures” that would remove the risk, the centre wrote.
MPs broadly supported the government’s approach to keeping AI safe, which does not require the creation of a new AI regulator but instead hands oversight to existing regulators depending on what the AI does.
Some witnesses, including Hugh Milward of Microsoft UK, preferred this approach to the EU’s, which he told the committee was “a model of how not to do it”.
But he has also previously told the BBC that care needs to be taken with any UK legislation too. There was a danger that a single piece of legislation would try to do everything, “and then it becomes a bit like a Christmas tree and everybody tries to hang their own personal issues on it”, he said.