Want to know how TechRadar tests popular security packages against ransomware? Read on and find out.
Ransomware may not make the headlines quite as often as it did in the past, but it hasn’t gone away. In December 2018, for instance, a new threat apparently created by a single hacker managed to infect at least 100,000 computers in China, encrypting files, stealing passwords and generally trashing users’ systems.
Antivirus companies like to claim they’ll keep you safe, with vague but impressive-sounding talk about ‘multi-layered protection’, ‘sophisticated behavior monitoring’ and the new big thing: ‘machine learning’. But do they really deliver?
The easiest way to get an idea is to check the latest reports from the independent testing labs. AV-Comparatives’ Real-World Protection Tests and AV-Test’s reports, for instance, are an invaluable way to compare the accuracy and reliability of the top antivirus engines.
The problem is that the test reports only give you a very general indicator of performance with malware as a whole. They won’t tell you how an engine performs specifically with ransomware, how quickly it can respond, how many files you might lose before a threat is stopped, and other nuances. That’s exactly the sort of information we really want to know, and that’s why we’ve devised our own anti-ransomware test.
It’s possible to test anti-ransomware software by pitting it against known real-world threats, but the results often aren’t very useful. Typically, the antivirus will detect the threat by its file signature, ensuring it never reaches any specialist anti-ransomware layer.
What we decided to do, instead, was write our own custom ransomware simulator. This would act very much like regular ransomware, spidering through a folder tree, detecting common user files and documents and encrypting them. But because we had developed it ourselves, we could be sure that no antivirus package would recognize our simulator from its file signature alone: any detection would have to come from behavior monitoring.
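The simulator itself isn’t published, but in outline it needs very little: walk a dedicated test folder, pick out common document types, encrypt each one in place, and count how many files it gets through before the security package reacts (if it reacts at all). The Python sketch below shows roughly what such a test harness could look like; the folder path, the extension list and the use of the cryptography library’s Fernet cipher are our own illustrative assumptions rather than details of the actual tool, and the key is written to disk so every test file can be restored after a run.

```python
# Rough sketch of a sandboxed ransomware simulator for behavior-monitoring tests.
# TEST_ROOT, the extension list and the Fernet cipher are illustrative assumptions,
# not details of TechRadar's real tool. It only touches a dedicated test folder
# seeded with copies of documents, and keeps the key so everything is reversible.
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

TEST_ROOT = Path(r"C:\ransomware-test\user-files")  # sandbox folder, never real user data
TARGET_EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".jpg", ".png", ".pdf", ".txt"}

def run_simulation() -> int:
    key = Fernet.generate_key()
    (TEST_ROOT / "recovery.key").write_bytes(key)  # saved key, so test files can be restored
    cipher = Fernet(key)

    encrypted = 0
    # Spider the folder tree and encrypt common user documents, as real ransomware does
    for path in TEST_ROOT.rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
            path.write_bytes(cipher.encrypt(path.read_bytes()))  # encrypt the file in place
            encrypted += 1
    return encrypted

if __name__ == "__main__":
    count = run_simulation()
    # The number of files encrypted before the antivirus intervenes is the
    # figure that matters in the review.
    print(f"Encrypted {count} test files")
```

The useful output isn’t pass or fail but the count: a package that kills the process after two or three encrypted files has performed very differently from one that only reacts after hundreds, even though both technically ‘detected’ the threat.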
There are weaknesses in this approach. Most obviously, our own simple, unsophisticated code could never provide as effective or reliable an indicator as using real, undiscovered ransomware samples for each review.
But there are plus points, too. Using different real-world ransomware for one-off reviews would mean some anti-ransomware packages faced very simple and basic threats, while others got truly dangerous and stealthy examples, depending on what we could find at review time.