On the fragility of EDR assumptions
The dashboard problem in endpoint detection is a confidence problem. EDR tools surface what they detect — which is, by construction, only the set of things the vendor's research team has seen, modelled, and decided to alert on. The set of things that did not generate an alert is invisible on the dashboard. The operational reflex is to read that silence as safety. This reflex is how persistent threats persist: not because they are undetectable in principle, but because the detection infrastructure is designed to respond, not to hunt.
Living-off-the-land attacks are the clearest illustration of the detection boundary. When an adversary avoids introducing new binaries and instead runs PowerShell, WMI, certutil, or bitsadmin — signed Microsoft tools doing things Microsoft tools sometimes legitimately do — the EDR heuristics face a categorisation problem with no clean answer. The behaviour is normal in some contexts and malicious in others. Distinguishing between them requires understanding what normal looks like for your specific environment: your user population, your software estate, your administration patterns. This is not something a vendor signature set knows. It is built from your own telemetry, your own baselines, and your own analysis.
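Building that baseline from your own telemetry can start very simply. The sketch below, a minimal illustration rather than a production detector, counts parent/child process pairs from process-creation events and flags pairs that are rare in your environment. The event schema and the 5% threshold are assumptions for the example; real telemetry (e.g. Sysmon process-creation logs) carries far more context.

```python
from collections import Counter

def rare_pairs(events, threshold=0.05):
    """Return parent/child process pairs whose share of all observed
    events falls below `threshold` -- i.e. pairs that are unusual
    for THIS environment, whatever a vendor baseline says."""
    counts = Counter(events)
    total = sum(counts.values())
    return {pair for pair, n in counts.items() if n / total < threshold}

# Hypothetical process-creation telemetry: (parent, child) pairs.
events = (
    [("explorer.exe", "powershell.exe")] * 40   # common: interactive use
    + [("services.exe", "svchost.exe")] * 58    # common: OS internals
    + [("winword.exe", "powershell.exe")] * 2   # rare here: Office spawning a shell
)

flagged = rare_pairs(events)
# ("winword.exe", "powershell.exe") is flagged; the same child process
# launched by explorer.exe is not -- context, not the binary, decides.
```

The point of the toy is the asymmetry: `powershell.exe` appears in both a flagged and an unflagged pair. No signature on the binary itself can make that distinction; only frequency data from your own estate can.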
Clarity about what EDR purchases is an operational question, not a theological one. It is effective against commodity attacks: ransomware-as-a-service loaders that have not been custom-compiled, known malware families, stealers and droppers that behave the way malware in a sandbox behaves. It meaningfully raises the cost of unsophisticated attacks. It provides endpoint telemetry that would otherwise require custom instrumentation. What it does not provide is coverage against an adversary who studied your environment before touching a keyboard — who logs in with valid credentials obtained through phishing, escalates using a legitimate vulnerability in a way that mimics normal admin behaviour, and moves carefully enough to stay below the anomaly threshold for weeks.
The most useful reframe: treat EDR as the instrument that tells you the noise floor has changed. The alert tells you something deviated from baseline enough for the vendor's model to flag it. The interesting question is what happened in the period before the deviation became legible. Blue teams that wait for the dashboard to light up are always responding to day-N of an incident. Blue teams that build hypotheses about what patient adversary behaviour looks like — in their specific logs, against their specific assets, with their specific user population — have a chance at day-one or day-two detection. Assume the alert is late. Ask what the quiet period looked like and whether you would have noticed it.
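One concrete form that question can take: given an alert timestamp, pull the telemetry from a look-back window before it fired and check whether anything in that quiet period should have stood out on its own. The sketch below is a minimal, assumption-laden version — the event schema, the 14-day window, and the business-hours definition are all placeholders you would tune to your environment.

```python
from datetime import datetime, timedelta

def quiet_period(events, alert_time, lookback_days=14):
    """Events in the window preceding an alert -- the period before
    the deviation became legible to the vendor's model.
    Assumed schema: (timestamp, account, action) tuples."""
    start = alert_time - timedelta(days=lookback_days)
    return [e for e in events if start <= e[0] < alert_time]

def off_hours(events, start_hour=8, end_hour=18):
    """Crude 'would we have noticed this?' filter: activity
    outside assumed business hours."""
    return [e for e in events if not (start_hour <= e[0].hour < end_hour)]

# Hypothetical logon telemetry leading up to an alert on 15 March.
alert = datetime(2024, 3, 15, 9, 0)
events = [
    (datetime(2024, 3, 10, 3, 12), "svc-backup", "logon"),  # 03:12 logon
    (datetime(2024, 3, 12, 14, 0), "alice", "logon"),       # business hours
    (datetime(2024, 2, 1, 3, 0), "svc-backup", "logon"),    # outside look-back
]

window = quiet_period(events, alert)
suspicious = off_hours(window)
```

The same two functions run as a hypothesis, on a schedule, with no alert anchoring them, are the difference between day-N response and day-one hunting: the code does not change, only whether you wait for the dashboard before running it.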