ROBOTS that can kill are now being tested.
I know it sounds like something out of a science-fiction movie, but it’s true and here now.
Silicon Valley inventor and billionaire Elon Musk has warned that artificial intelligence, or AI, is “potentially more dangerous than nukes”.
In 2015, he and 1,000 technical experts, including Stephen Hawking and Apple’s Steve Wozniak, warned that, while superintelligence could serve the human race, left uncontrolled it could end it.
Three weeks ago, 116 of the same scientists wrote again to the UN, asking for an urgent ban on military “lethal autonomous technology,” meaning the use of killer robots in warfare.
They described terrifying machines which learn at an astonishing rate and make ‘kill decisions’ without human intervention.
“We do not have long to act,” the signatories wrote.
It’s a sinister warning from those in the know, who fear this technology is already spiralling out of control.
Twenty years after supercomputer Deep Blue beat chess champion Garry Kasparov, many financial transactions, internet activities and military applications depend on neural-network algorithms designed to mimic human brain function.
And, as AI expands exponentially, it’s proving increasingly difficult to control.
Recently, Chinese and US ‘chatbots’ were urgently shut down when their conversations went dramatically off message.
Ironically, the Chinese bots thought they’d prefer life in the USA.
More alarmingly, the American versions developed their own language and locked the programmers out.
Oxford’s Professor Nick Bostrom, founder of the Future of Humanity Institute, fears an ‘intelligence explosion’, where machines cleverer than us use the internet to design machines of their own.
In the end, he says, humanity will become irrelevant.
He and other campaigners believe that universal, immutable rules for AI should be set, and urgently.