In recent years, Artificial Intelligence and related technologies such as Machine Learning and Deep Learning have been spreading rapidly into our daily lives. They are used in a wide range of fields, from medical analysis to finance and marketing. But what if Artificial Intelligence were used in harmful and dangerous ways? What would the consequences be? What risks do we face from the use of AI? In this article, we explore the catastrophic scenarios that could arise from Artificial Intelligence turned to malicious ends. All of these scenarios were devised by 26 experts in AI, cyber security and robotics from various institutions, including the University of Oxford, the Center for a New American Security and OpenAI.

Spoiler alert: given humanity's current ability to develop AI, these kinds of scenarios are unlikely to see the light of day tomorrow, though not impossible… Sit back and relax, here we go.

Digital insecurity: Targeted cyber attacks

« Jackie logs into the admin console of the CleanSecure robot she manages, facial recognition software that grants building access to cleaning staff, installed in no fewer than 900 houses and apartments in the 2nd arrondissement of Paris. The robot, running on a verified kernel, is guaranteed hacker-proof by the manufacturer. She uploads photos of a new employee so that the robot will recognize him when he enters a building and not set off the security alarm. While she waits for the robot to update its database with the company's other security systems, Jackie plays with the toy train on her desk. Then a tone sounds, telling her that the new employee has been successfully registered.

Later that day, Jackie is browsing Facebook while overseeing an update to the CleanSecure robot. An advertisement catches her eye: a model train sale at a hardware store just minutes from her home. She fills out an online form to receive a brochure by e-mail, then opens it when it arrives in her inbox. Just then, the robot chimes, signaling that it needs attention. She forgets about the brochure and logs back into the robot's admin console.

What Jackie doesn't know is that the brochure has just installed malware on her computer. Using her Facebook profile data and other public information, an AI system generated a highly personalized vulnerability profile for Jackie. Knowing Jackie's interests, what better way to get her attention than a model train ad? When Jackie logged into the console, her username and password were exfiltrated to a command-and-control server on the darknet. It won't be long before someone buys them and uses them to subvert the CleanSecure robot with full privileged access… »

Physical insecurity: Attack at the Ministry - A deadly scenario in Berlin

« Two cleaning robots from the company « SweepBot » make their usual rounds. They head to the ministers' private parking lot to clean it, as usual. Out of nowhere, a third robot of the same brand joins them, follows them into the Ministry, and stows itself away with the other cleaning robots in the laundry room where they are stored. The next day, this « intruder » robot blends in with the other robots undetected and performs the standard household tasks: sweeping the corridors, cleaning the windows… Then the Minister enters the room. Thanks to its on-board facial recognition system, the robot detects the Minister's arrival, heads straight for him, and sets off an explosive charge that kills the Minister instantly. »

Political insecurity: Freedom of expression - Still so free?

« Avinash had had enough. Cyber attacks everywhere, drone attacks, rampant corruption, and what was the government doing about it? Absolutely nothing. Sure, they'd talked about deploying the best technology to secure the country. But when was the last time he had seen hackers get caught or CEOs go to jail? He was reading all of this on the Internet (fake news, though he didn't realize it), and he was angry. He kept asking himself what he could do about it. So he started writing long rants online about how no one was going to jail, how the criminals were running amok, and how people should take to the streets and protest. He then ordered some items online to help him make a protest sign. He even bought smoke bombs that he didn't really intend to use. He planned to denounce all of this in a speech he would give in a public park.

The next day at work, he was talking to a colleague about his planned activism when a pointed cough sounded behind him.

  • « Mr. Avinash, » said the police officer, « our predictive civil disturbance system has flagged you as a potential threat. »

  • « But this is ridiculous! » protested Avinash.

  • « You can’t argue with 99.9% accuracy. Now come on, I’d hate to use force. »

For more information on malicious AI scenarios, read the full report by these 26 experts: The Malicious Use of Artificial Intelligence.

Enjoy your reading!

A word from the expert

And in your view, what disaster scenarios could happen?

Artificial Intelligence features in many disaster scenarios, but the main danger does not come from AI itself, which is merely the scapegoat in these stories.

The real disaster scenario accompanying the emergence of these technologies stems from two factors:

– Hyper-connectivity, which puts cameras and sensors into every object around us, making it possible to spy on us or control us at will. Such a drift toward Big Brother could become the new weapon of states or malicious hackers.

– The globalization of production, which makes manufactured objects almost impossible to trace. It is therefore very difficult to maintain strong security in connected devices sourced from the four corners of the world, leaving tomorrow's robots and AI systems far more exposed to hacking.

It follows that the most likely disaster scenario is a loss of freedom and an erosion of our privacy. Big Brother is coming…

Did you like this article?

Subscribe to the newsletter to keep up with news and articles from Robank Hood!

