Today, NSS Labs released a report detailing how well several vendors’ products detect advanced attacks. We declined to participate in this test because we believe the NSS methodology is severely flawed. In fact, the FireEye product they used was not even fully functional: it ran an old version of our software and did not have access to our threat intelligence (unlike our customers’ deployments). We did participate in the BDS test in 2013, and at that time we also commented on the flaws in the testing methodology. We insisted that the only way to test properly is to run in a REAL environment. NSS declined to change its methodology, so we declined to participate in the most recent test, the results of which were published today.

When NSS tested our product a year ago, it used a sample set of 348 total samples. FireEye detected 201 of the 348. Of the 147 “missed” samples:
- 11 were non-malicious.
- 19 were corrupted (the fact that some vendors nonetheless detected these and scored close to 100% suggests their detection engines are hash-based, and a hash will match regardless of whether the sample is malicious or can even execute).
- 117 were duplicates (we asked NSS why FireEye did not receive credit for detecting these, and never received a response).
In other words, once the non-malicious, corrupted, and duplicate samples are accounted for, FireEye detected every valid sample in the set. Clearly, no one could take this approach seriously; it bears little resemblance to what we see in the wild.
Understanding advanced threats remains a black hole for much of today’s security industry. This test unfortunately perpetuates a widespread failure to understand and appreciate the inner workings of the advanced threats that continue to plague organizations despite the millions invested in legacy security technologies. The test contained a number of flaws that security professionals should thoroughly understand before taking its results at face value.
Issue #1: Poor sample selection. Specifically:
- NSS relied mostly on VirusTotal to download payloads (clear-text executable files). The NSS sample set does not include unknowns, complex malware (encoded/encrypted exploit code and payloads), or APTs. Almost by definition, APTs use new or updated code to bypass detection; yet NSS used a known corpus of malware. Advanced threats are in, out, and cleaned up in minutes. In past NSS tests, the malware samples used were available on VirusTotal (an aside: the oldest sample on VirusTotal is from 2006, and the median sample age is 17.2 months). By contrast, when tests specifically use malware samples that are new and unknown, antivirus detection rates fall dramatically; the Imperva study, for example, found that antivirus detected only 5% of malware. The other vendors in the NSS report are built for detecting known malware. By relying on VirusTotal, NSS missed the AK-47s and spent its time analyzing pea shooters.
- Even for payloads, NSS performed no forensic analysis to determine whether a sample was malicious, benign, or corrupt (unable to execute). NSS awards a positive score as long as a vendor flags the sample on the wire, even if the sample is not actually malicious.
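The corrupted-sample problem above can be made concrete with a small sketch. This is a hypothetical illustration of ours (the sample bytes, database, and function names are not from NSS or any vendor): a hash-based engine gets credit for “detecting” a corrupted file whose hash it has catalogued, even though the file could never execute, while even a minimal viability check would flag the mismatch.

```python
import hashlib

# A corrupted sample: truncated bytes missing any valid executable header.
corrupted_sample = b"\x00\x00truncated-payload-missing-its-pe-header"

# The engine's signature database was populated from the same public
# corpus (e.g. VirusTotal), so the corrupted file's hash is already known.
signature_db = {hashlib.sha256(corrupted_sample).hexdigest()}

def hash_based_verdict(sample: bytes) -> bool:
    """Score a 'detection' on a pure hash match, ignoring viability."""
    return hashlib.sha256(sample).hexdigest() in signature_db

def is_viable_pe(sample: bytes) -> bool:
    """Minimal forensic check: a Windows executable must begin with 'MZ'."""
    return sample[:2] == b"MZ"

print(hash_based_verdict(corrupted_sample))  # True: hash engine gets credit
print(is_viable_pe(corrupted_sample))        # False: the file can never run
```

The point of the sketch is simply that hashing is indifferent to whether a file can run, so a test that scores on-the-wire hash matches rewards exactly the corrupted samples described above.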
Issue #2: Differing definitions of advanced malware. Vendors and test agencies differ in how they define advanced malware. The NSS test conflated adware, spyware, and APTs, counting adware and spyware as APTs. For instance, some of the NSS tests expected adware to be classified as malware: adware that changes the browser’s home page, but does not infect the system in any other way, had to be flagged as malware for a product to receive a positive score. FireEye solutions wait for truly malicious behavior in order to avoid false alerts; in this case, the page load of the new home page would be analyzed to determine whether the change was actually malicious.
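The classification policy described above can be sketched as a simple decision rule. The event names and categories here are our own illustrative assumptions, not FireEye product internals: the verdict is withheld until genuinely malicious behavior is observed, so a bare home-page change yields an adware label rather than a malware alert.

```python
# Hypothetical behavior labels for illustration only.
MALICIOUS_BEHAVIORS = {"drops_executable", "exploits_browser", "beacons_c2"}

def classify(observed_events: set) -> str:
    """Return 'malware' only on truly malicious behavior; otherwise treat
    benign-but-annoying changes (e.g. a new home page) as adware."""
    if observed_events & MALICIOUS_BEHAVIORS:
        return "malware"
    if "changes_home_page" in observed_events:
        return "adware"
    return "clean"

print(classify({"changes_home_page"}))                # adware, not malware
print(classify({"changes_home_page", "beacons_c2"}))  # malware
```

Under a scoring scheme that demands a malware verdict for the first case, a product following this policy is penalized for avoiding a false alert.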
Issue #3: Poor test methodology. Specifically, the NSS test:
- Does not account for zero-day exploits: there were none in the test sample. Admittedly, this is difficult; testing for zero days requires having one on hand or developing one yourself, which is expensive. Finding new malware that uses zero-day exploits is where FireEye thrives. In 2013, we found 11 exploitable zero days, along with countless malware campaigns used in cyber espionage, warfare, and crime. This year, we have already uncovered two more.
- Did not give the product access to our security intelligence in the cloud. Unlike our customers’ appliances, the FireEye appliances under test were NOT connected to our Dynamic Threat Intelligence cloud to receive the latest content updates, virtual machines, and detection capabilities.
We respect NSS and the work they do, especially for IPS; indeed, their BDS testing methodology is better suited to testing IPS products. However, we believe the issues we identified in their evaluation of advanced threats are indicative of the security industry’s broader lack of knowledge about sophisticated attacks. FireEye is designed to supplement legacy signature- and reputation-based technologies to protect against advanced threats, and the NSS tests did not properly gauge those capabilities. Our product’s efficacy is proven by how well we protect customers in real-world deployments. Consider that in 2013, FireEye:
- Found 11 exploitable zero-day vulnerabilities, with two more uncovered so far in 2014. (By comparison, among the top 10 cyber security companies ranked by security-related revenue, only two other zero-day vulnerabilities were reported in 2013.)
- Tracked more than 40 million callbacks.
- Tracked more than 300 separate APT campaigns.
- Deployed more than 2 million virtual machines globally.
Any lab test is fundamentally unable to replicate the targeted, advanced attacks launched by sophisticated criminal networks and nation-states. The best way to evaluate FireEye is to deploy our technology in your own environment; organizations that do will understand why we are the market leader in stopping advanced attacks. We also believe it is erroneous for NSS to compare security efficacy, performance, and cost in a single graphic, because doing so assumes all three buying criteria are equally important. In our experience, security efficacy matters far more than the others. In fact, most users and vendors are moving toward a malware prevention, detection, and response architecture.
In August 2013, IDC issued a report, Worldwide Specialized Threat Analysis and Protection 2013–2017 Forecast and 2012 Vendor Shares, which identified and ranked vendors claiming to stop advanced malware attacks. FireEye was listed as the top vendor by market share (38%), well ahead of the nearest competitor at 14%. The market is voting with its dollars, based on real-world experience under real-world attack from advanced threats.