Artificial Intelligence: Cybersecurity Friend or Foe?

Winter Soldier

Level 25
Thread author
Verified
Top Poster
Well-known
Feb 13, 2017
1,486
The next generation of situation-aware malware will use AI to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.

Second of a two-part series.

Just as organizations can use artificial intelligence to enhance their security posture, cybercriminals may begin to use it to build smarter malware. This is precisely why a security fabric approach is needed — security solutions for network, endpoint, application, data center, cloud and access working together as an integrated and collaborative whole — combined with actionable intelligence to hold a strong position on autonomous security and automated defense.

In the future, attacker-versus-defender AI scenarios will play out. At first they will employ simple mechanics; later they will become intricate contests with millions of data points to analyze and address. At the end of the day, however, there is only one output: whether or not a compromise occurred.

Threats are getting smarter and are increasingly able to operate autonomously. In the coming year, we expect to see malware designed with adaptive, success-based learning to improve the success and efficacy of attacks. This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next. In many ways, malware will begin to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.

This next generation of malware uses code that is a precursor to artificial intelligence, replacing traditional “if not this, then that” code logic with more complex decision-making trees. Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed. A branch predictor keeps track of whether or not a branch is taken; when it encounters a conditional jump it has seen before, it makes a prediction, and over time the software becomes more efficient.
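The branch-predictor analogy refers to concrete, well-documented hardware behaviour, so it is easy to sketch. Below is a toy 2-bit saturating-counter predictor (a standard textbook scheme; the Python framing, branch address and sample history are my own) showing how simply tracking past outcomes makes future guesses more accurate without a single anomaly flipping a stable pattern:

```python
# Toy 2-bit saturating-counter branch predictor. Each branch address maps to
# a counter in {0,1,2,3}; values >= 2 predict "taken". The counter moves one
# step toward the observed outcome, so one stray result can't flip a
# well-established pattern.

class TwoBitPredictor:
    def __init__(self):
        self.counters = {}  # branch address -> saturating counter (0..3)

    def predict(self, addr):
        # Unseen branches default to "weakly not taken" (counter = 1).
        return self.counters.get(addr, 1) >= 2

    def update(self, addr, taken):
        c = self.counters.get(addr, 1)
        self.counters[addr] = min(c + 1, 3) if taken else max(c - 1, 0)

# A branch that is almost always taken: after a short warm-up the predictor
# locks on, and the single "not taken" outcome does not change its mind.
p = TwoBitPredictor()
history = [True, True, True, False, True, True]
hits = 0
for outcome in history:
    hits += (p.predict(0x400A10) == outcome)
    p.update(0x400A10, outcome)
print(f"correct predictions: {hits}/{len(history)}")  # prints 4/6
```

The hysteresis of the two-bit counter is the point of the analogy: a learner that resists noise while adapting to persistent behaviour, which is exactly the property adaptive malware would want.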

Autonomous malware, as with intelligent defensive solutions, is guided by the collection and analysis of offensive intelligence, such as types of devices deployed in a network segment, traffic flow, applications being used, transaction details, or time of day transactions occur. The longer a threat can persist inside a host, the better able it will be to operate independently, blend into its environment, select tools based on the platform it is targeting and, eventually, take countermeasures based on the security tools in place.

A New Threat: Transformers
We as an industry also will see the growth of cross-platform autonomous malware designed to operate on and between a variety of mobile devices. These cross-platform tools, or “transformers,” include a variety of exploit and payload tools that can operate across different environments. This new variant of autonomous malware includes a learning component that gathers offensive intelligence about where it has been deployed, including the platform on which it has been loaded, then selects, assembles and executes an attack against its target using the appropriate payload.

Transformer malware is being used to target cross-platform applications with the goal of infecting and spreading across multiple platforms, thereby expanding the threat surface and making detection and resolution more difficult. Once a vulnerable target has been identified, these tools can also cause code failure and then exploit that vulnerability to inject code, collect data and persist undetected.

The Big Picture
Autonomous malware, including transformers that are designed to proactively spread between platforms, can have a devastating effect on our increasing reliance on connected devices to automate and perform everyday tasks. Efforts to analyze data for competitive business insights will be hampered. Overcoming these challenges will require highly integrated and intelligent security technologies that can see across platforms, correlate threat intelligence and automatically synchronize a coordinated response. Artificial intelligence and machine learning will prove invaluable in this role, ultimately enabling the vision of Intent-Based Network Security (IBNS) that can automatically translate business requirements and apply them to the entire infrastructure.

In part one of the series, Extreme Makeover: AI & Network Cybersecurity, Derek describes how artificial intelligence and machine learning are playing a vital role in the way security professionals consume and analyze data.
 

Parsh

Level 25
Verified
Honorary Member
Top Poster
Malware Hunter
Well-known
Dec 27, 2016
1,480
This
This next generation of malware uses code that is a precursor to artificial intelligence, replacing traditional “if not this, then that” code logic with more complex decision-making trees. Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed. A branch predictor keeps track of whether or not a branch is taken; when it encounters a conditional jump it has seen before, it makes a prediction, and over time the software becomes more efficient.

Autonomous malware, as with intelligent defensive solutions, is guided by the collection and analysis of offensive intelligence, such as types of devices deployed in a network segment, traffic flow, applications being used, transaction details, or time of day transactions occur. The longer a threat can persist inside a host, the better able it will be to operate independently, blend into its environment, select tools based on the platform it is targeting and, eventually, take countermeasures based on the security tools in place.
and this
A New Threat: Transformers
We as an industry also will see the growth of cross-platform autonomous malware designed to operate on and between a variety of mobile devices. These cross-platform tools, or “transformers,” include a variety of exploit and payload tools that can operate across different environments. This new variant of autonomous malware includes a learning component that gathers offensive intelligence about where it has been deployed, including the platform on which it has been loaded, then selects, assembles and executes an attack against its target using the appropriate payload.

Transformer malware is being used to target cross-platform applications with the goal of infecting and spreading across multiple platforms, thereby expanding the threat surface and making detection and resolution more difficult. Once a vulnerable target has been identified, these tools can also cause code failure and then exploit that vulnerability to inject code, collect data and persist undetected.
I didn't expect this level of depth and motivation behind the use of AI techniques in cybercrime!
A few words here and there hint at how this differs from traditional attack vectors.
Just as the AI industry is blooming, in an environment not yet able to fully adopt it given its current limitations, it is unnerving to see such techniques applied in an entirely different way to carry out attacks as described here.
Some of the details mentioned are very complex to implement and perfect, but if collectively achieved, they could take the threatscape to a whole new level. People wanting to learn white-hat hacking will soon have to start on AI concepts and models right after the basics :eek:.
This has to be one of the coolest (yet chilling) shares, @Winter Soldier!
 

Winter Soldier

Level 25
Thread author
Verified
Top Poster
Well-known
Feb 13, 2017
1,486
It's just a continuation of the same old cat-and-mouse game:
an AI quantum cat chasing an AI quantum mouse.
Right now these systems are built on a model of the current reality, but once autonomous algorithms designed for that reality are deployed into it, the reality itself changes and is no longer the same.
 

ElectricSheep

Level 14
Verified
Top Poster
Well-known
Aug 31, 2014
655
This pic says it all.....
[image: 2e8FR6d.jpg]
 

jamescv7

Level 85
Verified
Honorary Member
Mar 15, 2011
13,070
Well, even back before AI was widely known, there was a clear path by which malware could defend itself and know which targets it was headed for:

  • It can kill itself when it detects it is running in an isolated environment.
  • It can take an alternative route when run in an isolated environment (e.g. swap payloads or phishing sites).
  • It can disguise itself as a legitimate program, and much more.
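Read from the defender's side, the evasion behaviours in that list reduce to simple environment fingerprinting. Here is a hypothetical sketch of the kind of checks sandbox-aware malware performs; the profile fields and thresholds are invented for illustration, not taken from any real sample:

```python
# Sketch of sandbox fingerprinting: score a host profile on traits that
# analysis VMs commonly exhibit, and treat a high score as "isolated
# environment". All fields and thresholds below are illustrative.

def looks_like_sandbox(profile):
    """Return True if the host profile resembles an analysis sandbox."""
    score = 0
    if profile.get("cpu_count", 0) < 2:            # sandboxes often expose 1 vCPU
        score += 1
    if profile.get("ram_gb", 0) < 2:               # minimal RAM is another tell
        score += 1
    if profile.get("uptime_minutes", 0) < 10:      # freshly booted VM snapshot
        score += 1
    if profile.get("has_user_documents") is False: # bare, unused desktop
        score += 1
    return score >= 2

bare_vm = {"cpu_count": 1, "ram_gb": 1, "uptime_minutes": 2,
           "has_user_documents": False}
real_host = {"cpu_count": 8, "ram_gb": 16, "uptime_minutes": 5000,
             "has_user_documents": True}
print(looks_like_sandbox(bare_vm), looks_like_sandbox(real_host))  # True False
```

A sample that gets `True` here could kill itself or switch to a decoy payload, which is exactly the "kill itself / alternative route" behaviour described above; defenders counter by making analysis VMs look lived-in.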
With the powerful tools and equipment available nowadays, it is no surprise that a well-developed AI malware can indeed make serious decisions.

"With great power comes great responsibility."

So, as usual, it can be used by both sides; the main advantage is that it can gather immediate information without relying on other sources.

Frankly speaking, security companies are supposed to have better prevention capabilities against these enormous threats, yet it seems the black-hat programmers and enthusiasts who produce malware often outpace the security industry.
 

Emmanuellws

Level 3
Verified
Mar 11, 2017
132
I am not an expert, but I think a hybrid AV approach is needed: AI/ML plus traditional signature-based AV, anti-exploit, and continuous traffic monitoring. AI malware is created by humans, and because of that it has limitations, for the time being at least. IMO, building a strong AI malware would require very long, complex code full of SELECT, IF-ELSE, or FOR statements, along with a database of exploits served from multiple C&C servers.
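The layered idea in this post can be sketched in a few lines. The toy "hybrid verdict" below combines a signature lookup, an import-based heuristic, and an ML score, so a sample that slips past one layer can still be caught by another; all hashes, API names and thresholds are illustrative, not any vendor's actual engine:

```python
# Minimal sketch of hybrid detection: signature + heuristic + ML score,
# any one of which can flag a sample on its own.

import hashlib

# Stand-in signature database (a real one holds millions of hashes).
KNOWN_BAD_SHA256 = {hashlib.sha256(b"known bad sample").hexdigest()}

# A classic process-injection API combination used as a heuristic tell.
SUSPICIOUS_APIS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def verdict(sample_bytes, imported_apis, ml_score):
    """Return 'block' if any detection layer fires, else 'allow'."""
    sig_hit = hashlib.sha256(sample_bytes).hexdigest() in KNOWN_BAD_SHA256
    heuristic_hit = len(SUSPICIOUS_APIS & set(imported_apis)) >= 2
    ml_hit = ml_score >= 0.8  # score from a separately trained model
    return "block" if (sig_hit or heuristic_hit or ml_hit) else "allow"

print(verdict(b"hello world", ["CreateFileW"], 0.1))        # allow
print(verdict(b"known bad sample", [], 0.0))                # block (signature)
print(verdict(b"new dropper",
              ["VirtualAllocEx", "WriteProcessMemory"], 0.3))  # block (heuristic)
```

The design point is redundancy: a never-seen-before sample defeats the hash lookup, but its behaviour or model score can still trip the other layers.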
 

Winter Soldier

Level 25
Thread author
Verified
Top Poster
Well-known
Feb 13, 2017
1,486
We know machine learning is used to detect new malware, while malware authors have strong motivation to attack its algorithms.
They usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems, and therefore they can only perform black-box attacks.
But if malware authors can probe a system while frequently changing their attack strategy, they can learn the stable patterns the machine learning system relies on and then crack them.
This process can render machine learning based algorithms unable to work.
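As a toy illustration of that probing attack, consider a linear "detector" that scores feature vectors against a threshold; the feature names, weights and threshold here are all invented for the example. An attacker who can repeatedly query the model learns which feature shift drops a sample below the threshold without changing the payload's behaviour:

```python
# Toy linear malware detector and a feature-perturbation evasion.
# Real detectors use thousands of features; this is a two-line caricature
# of the stable-pattern attack described above.

WEIGHTS = {"entropy": 2.0, "packed": 3.0, "suspicious_imports": 1.5,
           "signed": -4.0}  # a valid code signature strongly lowers the score
THRESHOLD = 3.0

def detector_score(features):
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def is_flagged(features):
    return detector_score(features) >= THRESHOLD

sample = {"entropy": 1.0, "packed": 1.0, "suspicious_imports": 1.0,
          "signed": 0.0}
print(is_flagged(sample))   # True: the raw sample is detected

# By probing, the attacker learns that the "signed" feature has a large
# negative weight, so signing the binary (e.g. with a stolen certificate)
# slips it under the threshold with no change to its behaviour.
evasive = dict(sample, signed=1.0)
print(is_flagged(evasive))  # False: same payload, no longer flagged
```

This is why the surrounding discussion matters: once the stable pattern (here, "signed binaries are trusted") is learned, it can be exploited indefinitely until the model is retrained or hardened.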

I don't want to give the impression that it is trivial to fool these systems, only to highlight that there are safety issues: as in any system, there are vulnerabilities.
Techniques exist to mitigate them, and from those mitigations springs research into new exploits, and so on, in an eternal match between the shield and the sword, as it always has been.
 
