Hollywood has given us hundreds of scenarios in which AI takes over the world. In each of them, however, humans manage to overcome their diabolical AI masters: think of The Matrix and the Terminator films, where humans instinctively wrestle back control.

Movie scripts are simplistic: a bad situation has to arise, and the heroes walk in at exactly the right moment to save everyone. Such scenarios are as unlikely as they are spectacular.

What about a real-world AI take-over by some piece of malware? It may be technically possible, yet it is improbable for two good reasons:

  1. AI systems are carefully designed and are not programmed to act on their own; making them do so would require considerable research and development budgets.
  2. In an ecosystem of AI machines, no single device is designed to control the rest. Systems will resist a take-over by competing AI applications and will keep doing their job of enforcing the restrictions we set. Systems of comparable intelligence can defend one another.

However, several very plausible factors mean this eventuality is not entirely impossible. It would be imprudent to ignore these dynamics completely.

Artificial Intelligence as an Aid to Malware

The first risk lies in the AI goal itself. The fuzziness of AI goals creates a breeding ground into which any malware can insert itself.

The second risk lies in the silent, unsupervised coordination between AI systems. Automated coordination can spiral beyond human oversight quickly, as Facebook researchers saw in 2017 when two of their negotiation chatbots drifted into a shorthand language of their own.

The third risk lies in the dark corners of malware itself. Malware is designed to operate outside the supervision of its creators and can persist and infiltrate systems without being noticed by surveillance.

The DeepLocker proof-of-concept hybrid malware, created by IBM researchers, uses an artificial intelligence model to identify its target through facial recognition, geolocation and voice recognition, and keeps its payload concealed until that target is found.
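
IBM's public description of DeepLocker boils down to one trick: the payload stays encrypted, and the decryption key is derived from attributes of the intended target, so inspecting the code reveals neither the payload nor who triggers it. The sketch below is a minimal, benign illustration of that keyed-trigger idea; the attribute string, the placeholder payload and the helper names (key_from_attribute, try_unlock) are invented for this example and are not IBM's implementation.

```python
# Benign sketch of target-keyed concealment: the payload is encrypted and the
# key is derived from an attribute of the intended target (a face-recognition
# embedding in IBM's demo, a plain placeholder string here). Reading this file
# reveals neither the key nor the target.
import base64
import hashlib

from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography


def key_from_attribute(attribute: bytes) -> bytes:
    # Derive a Fernet key deterministically from an observed attribute.
    return base64.urlsafe_b64encode(hashlib.sha256(attribute).digest())


# Prepared offline using the real target's attribute; only the encrypted blob
# would ever be distributed, never the attribute itself.
TARGET_ATTRIBUTE = b"hypothetical attribute of the intended target"
LOCKED_PAYLOAD = Fernet(key_from_attribute(TARGET_ATTRIBUTE)).encrypt(
    b"harmless placeholder standing in for the concealed logic"
)


def try_unlock(observed_attribute: bytes) -> bytes | None:
    # Only the true target's attribute yields a key that decrypts the payload.
    try:
        return Fernet(key_from_attribute(observed_attribute)).decrypt(LOCKED_PAYLOAD)
    except InvalidToken:
        return None  # wrong person, wrong place: the payload stays opaque


print(try_unlock(b"someone else"))   # None
print(try_unlock(TARGET_ATTRIBUTE))  # the placeholder message
```

The unsettling part for defenders is that static analysis of such a sample reveals neither whom it is aimed at nor what it will do once triggered.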

How Does AI Malware Spread?

There are two ways in which AI can be leveraged to spread malware. One is to design the AI from scratch and then implant it as a parasite in other systems. The other is to hijack an existing AI system with low-level malware and then use the victim's AI capabilities to fulfil the attacker's goal.

It does not help that AI systems are much more of a black box than traditional software. Traditional software is designed and then programmed; nothing in the program changes until a designer, or a hacker, changes it. The defining feature of AI is that it effectively programs itself in pursuit of an initial target, and THAT is the critical point: the owner has only limited influence on how it develops. The current breed of narrow AI systems poses limited risks. Those risks multiply as individual AI systems grow broader in application, towards what is known as General AI.
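
To make the "programs itself" point concrete, here is a toy sketch in Python; the data, the query point and the labelling rules are all invented for illustration. The same unchanged training code produces opposite decisions for the same input once it is fed different data, because the behaviour lives in fitted parameters rather than in hand-written rules.

```python
# Toy illustration: a traditional program only changes when someone edits it,
# but a learned model's behaviour changes whenever its training data changes,
# with no edit to the code at all. Data here is randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = rng.normal(size=(500, 2))
query = [[1.0, -1.0]]  # the same input is judged by both models

# Run 1: labels reflect one "world" the system was exposed to (x > 0).
model_a = LogisticRegression().fit(features, (features[:, 0] > 0).astype(int))

# Run 2: identical code, but a different experience of the world (y > 0).
model_b = LogisticRegression().fit(features, (features[:, 1] > 0).astype(int))

print(model_a.predict(query))  # [1]: the x-coordinate of the query is positive
print(model_b.predict(query))  # [0]: the y-coordinate of the query is negative
```

Whoever controls, or quietly manipulates, the data an AI learns from therefore shapes its decisions without ever touching the code its owner reviews.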

A second risk is that AI is usually owned by a company whose intellectual property is a jealously guarded secret, yet these AI systems are used throughout society, scooping up data and taking everyday decisions for us. Any medium-sized or large company is required to let auditors carry out a detailed examination of its accounts, but for AI there are hardly any comparable laws or oversight mechanisms.

Stealthy AI Hybrid Malware

Hybrid malware can use AI to act independently and set its own goals, depending on the conditions it encounters. If such programs multiply and improve themselves as they spread, they can create havoc on an unimaginable scale.

It is safe to say that an AI system that acts autonomously and invasively, sets its own targets and victims, and then gains sufficient strength to take over the world is unlikely, but not impossible.

It is more likely that the AI systems of the future will be invaded by malware, even more so than the systems of today. Today, for-profit companies, criminals and governments have infiltrated a substantial portion of our IT systems and illegally harvest data. Tomorrow, those same companies, criminals and foreign governments will be tweaking the decisions AI takes for us, without us realizing that they control a substantial part of our lives.

You may also read our blog on this subject, the one-way nature of a shift, or the articles on the other five aftermath scenarios: First Intelligence Explosion, Necessary Rescue, Ethnic Cleansing, Human Cyborgs and Lonely Dictator.