Machine Learning and malware: The next big thing in cybersecurity?

silversurfer

Level 85
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Malware Hunter
Well-known
Aug 17, 2014
10,148

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,400
I don't think AI TODAY is capable of dealing with fresh malware on its own. It's as simple as knowing that cybercriminals can play with AI products in real time and test how to bypass them; AI simply can't do the same in return. AI will learn over time and adapt to the techniques cybercriminals use, but the criminals will just keep evolving. Machine learning will evolve too, but it will always be a step behind. Default-deny seems to be the most trustworthy approach to security.
 

ForgottenSeer 58943

I see a few different possibilities.

I see a place for default-deny at the doorstep, but also, behind that, AI-based anti-malware prowling the system to guard against anything that could arrive over back channels and/or be allowed past the default-deny.

Another place where it will all likely evolve is situations like Chromebook/ChromeOS, where there isn't any user space available in which a threat can execute. Fortinet appliances secure themselves in a similar fashion: there simply isn't any user space accessible at all, so the result is a secure ecosystem on the appliance.

Those are the two scenarios where I think all of this is heading. I like the idea of a default-deny and/or restricted execution environment combined with something like Cylance keeping tabs on activity within the closed ecosystem 'just in case' and to improve awareness of file activity and changes. Even in a default-deny situation you may have to update products at some point, and that's where the AI products might prove handy.
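For anyone unfamiliar with what "default-deny" means in practice, here is a minimal Python sketch of the idea: execution is refused unless a file's hash is on an explicit allow-list. This is purely illustrative, not how Cylance or any particular product implements it; the hash value and function names are made up.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list: SHA-256 digests of executables that have been
# explicitly approved. In a real product this would come from a managed policy.
ALLOWED_HASHES = {
    "d2f0a1...",  # placeholder digest of an approved installer
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Default-deny: refuse anything whose hash is not on the allow-list."""
    return sha256_of(path) in ALLOWED_HASHES
```

Anything not already known and approved is blocked by default, which is exactly why the AI/ML layer is only needed for what slips in around that policy.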
 

RejZoR

Level 15
Verified
Top Poster
Well-known
Nov 26, 2016
699
The word "Ai" is so overused and overblown it's actually hilarious. Something has few rules to make decisions and everyone plasters "Ai" on it. Give me a break. Ai is when you throw a problem at the system and it's able to figure out the solution on its own. And NONE of systems we have now can do any of that. They all need to be taught in advance which just means they have specific rules. You can achieve most of this stuff by using IF commands. You can't just call everything "Ai" because of it. An neither is similarity search or comparison of things and so on. That's not "Ai".
 

AtlBo

Level 28
Verified
Top Poster
Content Creator
Well-known
Dec 29, 2014
1,711
The white paper is great. This way of looking at the subject will be useful for the next 15 years when it comes to communicating with developers working on ML/AI programming projects. The PDF is a keeper for me; thanks for the link @silversurfer.
 

Kubla

Level 8
Verified
Jan 22, 2017
355
RoboMan said:
I don't think AI TODAY is capable of dealing with fresh malware on its own. It's as simple as knowing that cybercriminals can play with AI products in real time and test how to bypass them; AI simply can't do the same in return. AI will learn over time and adapt to the techniques cybercriminals use, but the criminals will just keep evolving. Machine learning will evolve too, but it will always be a step behind. Default-deny seems to be the most trustworthy approach to security.

I still see an advantage for the home user in using an AI-based AV like Cylance or others that are also used in corporate environments. Corporate environments are where most of the really bad malware is found first and mitigated. As such, logic would suggest that these apps will have learned how to deal with it well before home signature-based AVs do.
 

509322

Kubla said:
I still see an advantage for the home user in using an AI-based AV like Cylance or others that are also used in corporate environments. Corporate environments are where most of the really bad malware is found first and mitigated. As such, logic would suggest that these apps will have learned how to deal with it well before home signature-based AVs do.

Default-allow is built upon the premise that users don't know how to manage their own security and won't put forth the effort to find out how.

The fallacy that the industry perpetuates is that by installing a security soft, all your security problems are solved.

"Install our Next-Gen Ai\ML soft and you are protected."

Nope. It's a lie. (At it's most basic level without any considerations of the gray-scale. What they really mean is that you are protected figuratively and not absolutely.)

And the greater lie being shilled over the past few years is that Ai\ML can and will do everything better.

"Let the Ai\ML do everything for you. It will do it better for you than you could ever do for yourself."

Nope. That's a lie too. (See previous note, above. When it comes to IT security, people have got to somehow come to understand that there are things that only they can do for themselves.)

The sad fact is that most people believe the lies (or half-truths or whatever one wishes to call them.)

It's all fine when talking within the context of gray-scale, but use absolutes (black and white) as the standard, then everything falls apart.
 
