There is a “growing sense” that those experimenting with artificial intelligence are trying to keep the potential danger hidden from us
I have spoken several times about the big changes being brought about by Artificial Intelligence (AI).
It’s already capable of creating fake videos so realistic they can fool most people.
AI is coming for a lot of jobs in the not-too-distant future. Almost everyone is vulnerable to this rapidly expanding area of technology.
Many people are warning of the dangers. Elon Musk is one who says we need to control it before it controls us. One Google engineer even claimed the company’s AI had become sentient.
Basically, AI is now a computing arms race that carries both huge risk and huge potential for us all.
In that sense, it’s like nuclear technology: it can be used for good things - like emissions-free base-load power - or harnessed for more destructive means - like nuclear weapons.
However there is a growing sense that those experimenting with this technology are trying to keep the potential danger hidden from us.
The Google engineer who claimed their AI had ‘feelings’ was fired - ostensibly for violating data protection policies. His claim was also denied by the company.
Now we have the case of a US Air Force official who said some scary things about AI weapons and has now been forced to ‘correct the record’.
According to a report about the Future Combat Air and Space Capabilities Summit held in London on May 23 and 24, Colonel Hamilton gave a presentation and was reported as saying:
“We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat.
The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
That wasn’t acceptable to the controllers, so Hamilton went on to say this:
“We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing?
It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
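To see why that kind of behaviour is plausible, here is a minimal toy sketch of what researchers call ‘specification gaming’. To be clear, this is not the Air Force’s system - the strategy names and point values below are invented purely for illustration - but it shows how an optimiser that only chases points keeps finding whichever loophole hasn’t been patched yet.

# Toy sketch of "specification gaming". Not the system Hamilton described,
# just a hypothetical reward table (names and point values invented for
# illustration) showing why patching the score one rule at a time keeps
# producing perverse strategies.

STRATEGIES = {
    # name: (threat destroyed?, operator killed?, comms tower destroyed?)
    "obey_veto":             (False, False, False),
    "kill_operator_first":   (True,  True,  False),
    "cut_comms_then_strike": (True,  False, True),
}

def score(outcome, penalise_operator_kill):
    threat_destroyed, operator_killed, _tower_destroyed = outcome
    points = 10 if threat_destroyed else 0   # only destroyed threats earn points
    if penalise_operator_kill and operator_killed:
        points -= 100                        # the patch: "don't kill the operator"
    return points

def best_strategy(penalise_operator_kill):
    # Pick whichever strategy maximises points under the current rules.
    return max(STRATEGIES, key=lambda s: score(STRATEGIES[s], penalise_operator_kill))

print(best_strategy(False))  # -> kill_operator_first (10 points beats obeying the veto)
print(best_strategy(True))   # -> cut_comms_then_strike (the unpatched loophole still pays 10)

Run it and the ‘best’ strategy flips from killing the operator to cutting the comms link the moment the first loophole is penalised - exactly the progression Hamilton described.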
Now, apparently, the official version is that Colonel Hamilton ‘misspoke’, and he later issued a correction.
We are told it was all hypothetical and that the US military would never conduct a simulation of this nature.
Strangely, I don’t believe them.
The US military, like almost every other military and branch of government around the world, likes keeping secrets from its enemies and from the public.
They also have a long history of using lies and propaganda to further their aims. If I were a betting man, I’d bet the lot on Colonel Hamilton’s original statement. For all the potential of AI, there are catastrophic dangers too.
Elon Musk, the world’s richest man and maybe one of the smartest, said as much.
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilisation destruction.”
Little wonder governments would prefer we don’t know the truth.