Stephen Hawking, Elon Musk, and hundreds of other scientists and technologists have signed an open letter calling for deeper research into the potential risks of developing artificial intelligence. And for good reason: Hawking has warned that A.I. could spell the end of humanity. Scientists, if you need some purely anecdotal (and entirely fictional) but highly convincing evidence to support your argument, here are 14 examples of A.I. gone wrong.
1. The Matrix
Careful, or your singular consciousness may spawn an entire race of machines.
2. Tron – Master Control Program
Just a quick warning: If you’re developing artificial intelligence, be prepared for it to someday digitize you into its mainframe.
If you have to create a killer super soldier, could you at least not put it in the body of a cute 10-year-old kid? It’s just disturbing for everyone involved.
4. The Black Hole – Maximilian
This oddly dark 1979 Disney sci-fi movie features Maximilian, a robot so evil he’s doomed to hell.
5. Metropolis – False Maria
False Maria was the first robot ever to appear on film, in Fritz Lang's 1927 Metropolis, and after nearly destroying an entire city, she has influenced our idea of dangerous artificial intelligence ever since.
6. I, Robot
Did the android go against its programming and kill its owner? Either way, it's not that far-out a possibility, is it?
7. The Hitchhiker’s Guide to the Galaxy – Marvin
Marvin may not be evil or destructive, but take a lesson from Douglas Adams. If you give your robot a “brain the size of a planet” and never let him use it, he’s going to get pretty depressed.
8. WALL-E's Auto
Oh sure, your robot pilot may seem all nice and helpful, but one wrong turn and its inner HAL comes out, keeping you from ever returning to Earth.
9. A.I. Artificial Intelligence
What if, instead of the robots' brains growing too large, it's their hearts instead? This movie, based on Brian Aldiss's short story "Super-Toys Last All Summer Long," asks what would happen if humans created machines designed to love but couldn't return the feeling.
10. The Avengers – Ultron
When Hank Pym invented Ultron, he probably didn’t expect the highly intelligent robot to also be highly evil. Ultron quickly developed some severe genocidal tendencies, seeing himself as the logical replacement for all of humankind.
11. Blade Runner
The replicants in Blade Runner (and Philip K. Dick’s Do Androids Dream of Electric Sheep?) are so human-like, they can’t even tell if they’re man or machine. Scientists, please know that this generally does not end well for anyone.
12. Her – Samantha
Here's an example of artificial intelligence that doesn't want to destroy us; it simply has no use for us. When our creations outgrow us, the greatest casualty may be our own loneliness.
13. 2001: A Space Odyssey – HAL 9000
If you put a sentient computer in charge of your entire spacecraft, eventually it’s going to want to be in charge.
14. Terminator’s Skynet
What happens when the internet decides it’s done with us? That’s basically the idea behind Skynet.