I RECALL warning of the danger of artificial intelligence (AI) in this column some years ago.
With Pandora’s Box well and truly open, there’s an understandable rush to regulate it.
In the wrong hands, AI could be as apocalyptic as nuclear war, according to several declassified intelligence reports.
The dangers they cite include devastating bioweapons, terrorism, cyber-attacks and political disinformation capable of undermining democracies.
While it sounds like science fiction, some of it is already fact.
Ciaran Martin, former head of the UK’s National Cyber Security Centre, says a ‘deep-fake’ audio file, falsely depicting a politician discussing how to rig a parliamentary election in Slovakia, circulated last month.
Similarly, ‘deep-fake’ videos of Joe Biden and his dog were so disturbingly accurate that on Monday the President issued an Executive Order bringing AI development under national-security oversight.
It’s timely, therefore, that the Prime Minister hosted a global AI safety summit at Bletchley Park this week.
Mr Sunak has already established an AI Safety Institute; now he hopes to create an international expert panel with a shared understanding of the risks.
The EU, too, is passing an AI Act imminently, while the UN is setting up an advisory board.
For all players, the question is how to harness the power and possibilities of AI, without stifling it with over-regulation.
A superintelligence, capable of thinking in quantum leaps, can transform developments in medicine, food production and future power generation.
However, it is notable that the leaders of France, Germany and Canada chose not to attend the safety summit, while China’s inclusion has caused controversy.
Beijing already uses 500 million surveillance cameras – just over half the world’s total – to watch its population, using AI to further suppress and manipulate.
However, for this safety regime to work, China has to be a player.