According to Factor Daily, nearly 1 billion people were affected by cyberattacks in 2017, at a cost of around $172 billion. One of the main culprits behind these attacks was malware, “one of modern day’s biggest threats,” which becomes more ominous as it takes shape inside deep learning systems.
To show the ferocity of what future cyberattacks might look like within deep learning, Factor Daily focused on Deeplocker, a proof-of-concept AI-powered malware that hints at what such attacks might one day look like in the wild. Deeplocker’s danger lies in two capabilities: “obfuscation” and “targeting”:
Obfuscation: occurs when malware writers go to great lengths to muddy their code, burying it within deep learning systems so that the threat looks like a “harmless piece of functioning software.” While companies may invest in detection software and other antivirus tools, obfuscated code can be difficult to detect.
As a result, with AI-based malware in deep learning, we could be “seeing the commercial emergence of virus and malware that spreads undetected across millions of computers around the world for long periods of time.”
This gets even more dangerous for companies when combined with targeting.
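To make the obfuscation idea concrete, here is a minimal, deliberately benign sketch of why signature-based scanning struggles with obfuscated content. A naive scanner looks for a known byte pattern; once those same bytes are XOR-encoded, the pattern no longer appears. The signature, key, and “payload” here are all hypothetical stand-ins, and real-world obfuscation (as in Deeplocker) is far more elaborate:

```python
# Toy illustration: obfuscated content evades a naive signature scanner.
# Benign strings stand in for malicious code; nothing here is operational.

SIGNATURE = b"EVIL_PAYLOAD"  # hypothetical pattern the scanner knows about

def naive_scan(blob: bytes) -> bool:
    """Return True if the known signature appears anywhere in the blob."""
    return SIGNATURE in blob

def xor_encode(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR encoding: applying it twice recovers the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"...header..." + SIGNATURE + b"...footer..."
encoded = xor_encode(plain, key=b"\x5a\xc3\x11")

print(naive_scan(plain))    # True  - the plaintext matches the signature
print(naive_scan(encoded))  # False - same content, but the pattern is gone
```

The content is unchanged, only its representation is, yet pattern matching alone no longer sees it, which is why detection increasingly leans on behavior rather than signatures.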
Targeting: when malware is programmed to wait for the right target before it “unleashes havoc,” according to Factor Daily. Targeting options multiply as technology advances, now including visual recognition systems, voice identification solutions, and more. In the case of deep learning, a targeted attack might fire at a carefully selected time, when a specific person uses a certain device, or even when a device is brought into a specific area – all pre-programmed by malware writers.
Autonomous vehicles, drones, and other technologies of the future will only expand cybercriminals’ list of targets, too, Factor Daily says.
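The targeting trick can be sketched in a few lines. In the Deeplocker concept, the payload key is derived from an attribute of the intended target (the output of a face- or voice-recognition model, for instance), and the code ships only a one-way hash of the trigger, so an analyst inspecting it cannot tell who the target is. The sketch below substitutes a plain string for the recognition output; every name and value is hypothetical and the “payload” is a harmless string:

```python
# Toy sketch of concealed targeting: the payload unlocks only when the
# observed attribute matches a trigger known solely by its hash.
import hashlib

TRIGGER_DIGEST = hashlib.sha256(b"target-attribute-42").hexdigest()

def derive_key(observed):
    """Derive a decryption key from an observed target attribute."""
    return hashlib.sha256(observed).digest()

def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Author" side: lock a benign stand-in payload to the real target's key.
locked = xor(b"simulated payload", derive_key(b"target-attribute-42"))

def try_unlock(observed):
    """Release the payload only if the observed attribute matches the trigger."""
    if hashlib.sha256(observed).hexdigest() != TRIGGER_DIGEST:
        return None                          # wrong target: stay dormant
    return xor(locked, derive_key(observed))

print(try_unlock(b"some-other-machine"))   # None - nothing for an analyst to see
print(try_unlock(b"target-attribute-42"))  # b'simulated payload'
```

Because the key exists only when the right target is observed, static analysis of the dormant code reveals neither the payload nor the trigger condition.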
On the bright side:
While the risk of a cyberattack grows with the number of new technologies and machine learning advancements, so does the help machine learning can provide. In fact, Factor Daily says that “we have to rely on machine learning to make our defense powerful.” For example, deep learning can help systems learn what’s “normal” in a network in order to identify potential threats. Or, machine learning can pick out a “malicious event” from the slew of security-violation events security analysts face, ultimately making those jobs easier.
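The defensive idea of learning what’s “normal” can be shown with a minimal sketch: establish a baseline from past network measurements, then flag readings that deviate too far from it. A simple z-score over requests per minute stands in for a real deep-learning detector, and the numbers are made up for illustration:

```python
# Minimal anomaly-detection sketch: learn "normal" from a baseline, then
# flag readings that sit far outside it. Values are hypothetical.
from statistics import mean, stdev

baseline = [102, 98, 110, 95, 105, 99, 108, 101]  # normal requests/minute

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(reading - mu) / sigma > threshold

print(is_anomalous(104))  # False - within the normal band
print(is_anomalous(450))  # True  - likely worth an analyst's attention
```

A production system would model far richer features than a single rate, but the shape is the same: the model encodes “normal,” and analysts only review what falls outside it.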
The key to keeping up with cyberattacks in deep learning is to keep studying coding patterns and trends, along with previous instances of obfuscation and targeting. And, “with time and more pervasive use,” end users might see a day when cyberattacks can be predicted before they strike, and when cybersecurity measures become automated and able to respond to threats at a larger scale, keeping sensitive data safe.