I can tell you that one. When companies are targeted, the criminals gain access to the systems prior to installing ransomware. They spend weeks gaining a foothold, moving laterally, stealing data, deleting backups, and disabling security software. Only after that do they deploy the ransomware to encrypt everything.
On consumer systems, the vast majority of ransomware infections arrive via pirated software on systems that have their antivirus disabled (because many AVs detect cracks, patches, keygens, etc.). E.g., on id-ransomware as well as in the UNITE forums, approximately 2/3 of the requests for help with ransomware infections are for STOP/DJVU ransomware, which is almost exclusively distributed via cracks and pirated software.
I am not saying that these ransomware threat actors couldn't bypass antivirus if they wanted to. I am saying they don't have to in order to be successful.
Generally, ransomware is the last malware in the infection chain and only appears after the system has already been compromised by other malware. That's only natural, because ransomware is by its very nature visible to the user, whereas other malware prefers to stay hidden. An attacker loses access to a system after deploying ransomware because the user becomes aware of the infection.
For any defender it makes more sense to concentrate on protecting the start of the infection chain rather than the end. Or, in the case of companies, to secure their infrastructure.
This question is not ransomware-specific anymore, as it pertains to all kinds of malware.
It is not as easy as you imagine. Many smart people are working on that: writing papers, doing research, inventing new detection mechanisms.
But malware detection is a difficult problem because it is impossible to do perfectly. This has been mathematically proven by Fred Cohen. You can read up on it in
this paper or see a short version in
this video.
Years ago I also wrote a proof-of-concept detection engine for my master's thesis, based on Portable Executable file format anomalies. But I could only get high detection rates if I accepted a certain rate of false positives.
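To give a rough idea of what such header-anomaly checks can look like, here is a minimal sketch in Python. This is not the actual engine from the thesis; the specific checks and thresholds (e.g., "more than 40 sections") are made-up examples of the kind of heuristic involved:

```python
import struct

def pe_anomalies(data: bytes) -> list[str]:
    """Return a list of PE header anomalies found in raw file bytes.

    Illustrative heuristics only -- real engines check many more fields
    (section names, entry point location, overlay data, and so on).
    """
    findings = []
    if len(data) < 0x40 or data[:2] != b"MZ":
        return ["not a PE file (missing MZ signature)"]
    # e_lfanew: offset of the PE header, stored at 0x3C in the DOS header
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if e_lfanew + 24 > len(data):
        return ["e_lfanew points outside the file"]
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        return ["missing PE\\0\\0 signature"]
    # COFF header follows the signature: Machine (2 bytes),
    # NumberOfSections (2 bytes), TimeDateStamp (4 bytes), ...
    num_sections = struct.unpack_from("<H", data, e_lfanew + 6)[0]
    if num_sections == 0:
        findings.append("zero sections")
    elif num_sections > 40:
        findings.append(f"unusually many sections ({num_sections})")
    timestamp = struct.unpack_from("<I", data, e_lfanew + 8)[0]
    if timestamp == 0:
        findings.append("zeroed compile timestamp")
    return findings
```

The catch, as described below, is that every one of these "anomalies" also shows up in some legitimate files (packed installers, hand-crafted tooling), which is exactly where the false positives come from.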
I still remember very clearly how I talked to a malware analyst about this very topic (Ange Albertini, my thesis was based on his work) and asked him what false positive rate would be acceptable. I was taken aback by his response:
He said 0%!
That was not possible. When I aimed for 0%, the detection rate plummeted to around 20%, whereas it had been above 90% with a fairly small false positive rate before.
But the reality is: For antivirus software, only 0% is acceptable.
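The underlying problem is the classic threshold tradeoff: the same score cutoff controls both the detection rate and the false positive rate, so you cannot push one without dragging the other. A toy illustration with made-up anomaly scores (not the thesis data):

```python
def rates(benign_scores, malware_scores, threshold):
    """Detection and false-positive rates for a given score threshold.

    A sample is flagged as malicious when its anomaly score
    meets or exceeds the threshold.
    """
    detected = sum(s >= threshold for s in malware_scores)
    false_pos = sum(s >= threshold for s in benign_scores)
    return detected / len(malware_scores), false_pos / len(benign_scores)

# Made-up score distributions: benign files mostly score low,
# but a few legitimate packed/protected binaries look anomalous too.
benign  = [0, 1, 1, 2, 2, 2, 3, 3, 4, 9]
malware = [4, 5, 6, 6, 7, 8, 8, 9, 9, 10]

# A moderate threshold catches everything, but flags benign outliers:
det, fp = rates(benign, malware, threshold=4)    # det = 1.0, fp = 0.2
# Pushing the threshold high enough for 0% false positives
# also throws away most detections:
det0, fp0 = rates(benign, malware, threshold=10)  # det0 = 0.1, fp0 = 0.0
```

The shape is the same as in my thesis numbers, just exaggerated: the only way to get the false positive rate to exactly zero is to raise the bar past every benign outlier, and real malware hides below that bar.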
It is easy to point at antivirus software and say "I could do better" because, e.g., PEStudio flagged this sample and hybrid-analysis flagged that sample, while those systems never have to deal with any repercussions if they flag something legitimate.
With that said, I don't know how easy it is to write malware that goes undetected. However, in most cases that's not even the goal. The goal of an attacker is to stay undetected for a long period of time, so they don't have to rewrite everything every other week. I doubt that this is easy, given that criminals and criminal organizations are working on quite complicated techniques to achieve it.
I can imagine it is fairly easy to write an entirely new piece of malware that no one detects the first time you deploy it. I cannot imagine the same for staying undetected for weeks or months.
I also believe many people confuse a zero detection rate on VirusTotal (or any other multiscanner) with being undetected by all those antivirus products. The AV testers here on MalwareTips will confirm that this is not the case.
Edit: I found the relevant part of the thesis. You can see that even if you accept that the detection rate plummets from 98.47% down to 37.80%, the false positive rate is still at 0.15%, which is too much. (Note: on the graph it looks like 0% at the end, which is why I attached part of the raw numbers.)