Endpoint Detection and Response Assessment: virtually all current leading endpoint detection and response (EDR) solutions fail to detect a range of 'known' attack methods.
Throughout this work, we went through a series of attack vectors used by advanced threat actors to infiltrate organizations. Using them, we evaluated state-of-the-art EDR solutions to assess their reactions, as well as the produced telemetry. In this context, we provided an overview for each EDR and the measures used to detect and respond to an incident. Quite alarmingly, we illustrate that no EDR can efficiently detect and prevent the four attack vectors we deployed. In fact, the DLL sideloading attack is the most successful attack as most EDRs fail to detect, let alone block, it. Moreover, we show that one may efficiently blind the EDRs by attacking their core, which lies within their drivers at the kernel level. In future work, we plan to assess positive, false negative, and false positive results produced by different EDRs to measure the noise that blue teams face in real-world scenarios. Moreover, the response time of EDRs will be measured as some EDRs may report attacks with huge delays, even if they have mitigated them. These aspects may significantly impact the work of blue teams and have not received the needed coverage in the literature. (Journal of Cybersecurity and Privacy)
An Empirical Assessment of Endpoint Detection and Response Systems against Advanced Persistent Threats Attack Vectors is a research article I recently came across that tests a number of leading endpoint detection and response solutions against well-known attack types. The results are surprising given the marketing around EDR solutions, but not at all unexpected.
The MITRE ATT&CK evaluations paint a different picture as far as efficacy against known attacks is concerned, but not in terms of relative product capability: this research paper demonstrates the same ordering of detection, with the leading endpoint detection and response vendor lacking the ability to detect and respond to the various kinds of cyber attack method pitted against it.
I have commented before on the 'use' of Artificial Intelligence and Machine Learning in cyber security software. This paper continues to support my opinion that EDR software has a long way to go before it can accurately use the terms Artificial Intelligence or Machine Learning, because the products as they stand today are not capable of doing very much more than smart security operations analysts performing threat hunting activities themselves.
Within the cyber security industry there are two terms that are used: false positives and false negatives.
- False positives occur when a platform, in this case EDR, raises too many alerts for activities that are not actually malicious, e.g. legitimate user activity that the EDR's detection logic flags as suspicious. These are bad because they overwhelm security operations analysts with benign events, so genuinely malicious behaviour can be missed or overlooked.
- False negatives occur when a platform does not alert while a malicious event is occurring. A false negative is arguably much more serious, because security operations centre analysts assume that if their EDR (or other security solution) is not alerting, there is nothing to be concerned about.
Most EDR software solutions are tuned to minimise false positives, which in turn increases their false negatives, because the tuning methods are almost always a matter of sensitivity: reducing sensitivity suppresses alerts on benign and malicious activity alike.
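The trade-off above can be illustrated with a minimal sketch (not from the article): a single alert threshold applied to hypothetical suspicion scores, where raising the threshold to cut noise also silently drops real attacks. The scores and events are entirely made up for illustration.

```python
# Illustrative sketch: how tuning a single sensitivity threshold trades
# false positives against false negatives. All scores are hypothetical.

def alert_counts(events, threshold):
    """Count (false positives, false negatives) at a given alert threshold.

    events: list of (suspicion_score, is_malicious) pairs.
    An alert fires when suspicion_score >= threshold.
    """
    false_positives = sum(1 for score, malicious in events
                          if score >= threshold and not malicious)
    false_negatives = sum(1 for score, malicious in events
                          if score < threshold and malicious)
    return false_positives, false_negatives

# Hypothetical scored endpoint events: (suspicion score, actually malicious?)
events = [
    (0.2, False), (0.4, False), (0.6, False),  # benign admin activity
    (0.5, True), (0.7, True), (0.9, True),     # real attacks
]

# A sensitive threshold is noisy but misses nothing...
print(alert_counts(events, 0.4))   # -> (2, 0)
# ...while "tuning out the noise" silently misses two real attacks.
print(alert_counts(events, 0.75))  # -> (0, 2)
```

The point is that a single sensitivity dial cannot reduce one error type without inflating the other; only richer telemetry and human analysis can.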
In this case the research paper demonstrates well that these tools need to provide much more information to the analyst, or to a SIEM, for real analysis of behaviour by human operators.
EDR software should not replace real human security operations within organisations, whether internal or via an MSSP. EDR can be a useful additional tool against Advanced Persistent Threats (APTs) and other forms of malicious cyber attack, but given the inadequacy of most endpoint detection and response solutions at even triggering alarms for known attack scenarios, they cannot be considered a good cyber security detection solution on their own.
Be wary when any company makes claims about Artificial Intelligence, and specifically Machine Learning, in its technologies, and ask very specific questions to understand the basis of the claim.
- No software is AI based or ‘driven’
- No software has advanced AI or is "beyond AI", as I read recently from one vendor
- If the software is mature (old), be sure to understand where Machine Learning is being utilised, because it is likely not in threat hunting or detection.
Artificial Intelligence and Machine Learning technologies are not going to replace human security operations professionals in the short to mid term; the technology is simply not good enough. Mathematical algorithms are no match for the human brain, so be wary when any vendor claims otherwise.
The research article was originally published here: https://doi.org/10.3390/jcp1030021