Alert fatigue is talked about constantly in the security industry. It's discussed at conferences, cited in vendor whitepapers, and used to justify countless product purchases. But the actual numbers are rarely published: how many alerts are being ignored, how often those ignored alerts turn out to be real threats, and how this correlates with breach outcomes. We spent 18 months gathering that data. Here's what we found.
Key takeaways:
- The average enterprise SOC receives 4,227 alerts per day, up 31% from two years ago, but analyst headcount has grown only 8% in the same period.
- 72% of alerts are dismissed without investigation due to backlog, compared to 45% three years ago.
- Security teams with alert backlogs exceeding 1,000 alerts experience significantly higher breach rates, not because of skill gaps but because of volume.
- The root cause isn't bad detection. It's a detection model built for a world where alert volumes were 100x lower.
The Raw Numbers
Across the 47 enterprise SOC environments in our research cohort, we measured alert volumes, analyst capacity, and disposition rates over 18 months. The findings are worse than the industry narrative suggests.
- Average daily alert volume: 4,227 alerts per SOC (range: 890 to 18,400)
- Average analyst capacity: ~68 meaningful investigations per analyst per day
- Average team size: 12 analysts (range: 4 to 47)
- Average proportion of alerts dismissed without investigation: 72%
- Three years ago, the dismissal rate was 45%; it has grown every year since
At the average enterprise SOC, more than 3,000 security alerts are dismissed every single day without any investigation. If one of them is a real intrusion, the attacker has days, not hours, to establish a foothold before anyone notices.
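As a back-of-envelope check, the snippet below recomputes that daily gap from the cohort averages listed above. The arithmetic is ours, but every input is a published figure from this section.

```python
# Daily triage gap implied by the cohort averages above.
daily_alerts = 4_227         # average alerts per SOC per day
analysts = 12                # average team size
capacity_per_analyst = 68    # meaningful investigations per analyst per day

daily_capacity = analysts * capacity_per_analyst   # 816 investigations/day
gap = daily_alerts - daily_capacity                # 3,411 alerts/day

print(f"Investigative capacity: {daily_capacity} alerts/day")
print(f"Alerts beyond capacity: {gap:,} per day")
# -> 3,411 per day, consistent with the "more than 3,000
#    dismissed daily" figure above.
```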
Why Volume Has Grown Faster Than Teams
Three factors are driving the gap between alert volumes and analyst capacity:
- Cloud adoption: cloud environments generate 3–5x more security events than on-premises equivalents at the same scale.
- Identity sprawl: the average enterprise now manages 4.7x more service accounts than three years ago, each generating telemetry.
- Detection rule proliferation: well-intentioned detection engineering teams have added new rules without retiring old ones; total rule count in the average SOC is up 40% since 2023.
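To see how quickly that gap compounds, the quick calculation below combines the two growth figures from the key takeaways (alert volume up 31%, headcount up 8% over the same two years); the arithmetic is ours, using only those published averages.

```python
# Per-analyst alert load implied by the two growth figures above.
volume_growth = 1.31      # alert volume, two-year growth
headcount_growth = 1.08   # analyst headcount, same period

load_growth = volume_growth / headcount_growth
print(f"Per-analyst alert load is up ~{load_growth - 1:.0%} in two years")
# -> Per-analyst alert load is up ~21% in two years
```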
The Real Cost: Missed Threats
Dismissed alerts don't just represent wasted detection capacity; some of them are real threats that go uninvestigated. In our cohort, we were able to retrospectively analyze dismissed alerts against confirmed breach data.
Of the confirmed security incidents in our cohort that resulted in a breach or significant security event, 68% had at least one precursor alert that was dismissed without investigation due to backlog. These weren't missed by detection systems; they were caught by detection systems and then never investigated by humans.
“The problem isn't that we don't see the threat. The problem is that we see it, note it, and then move on because there are 3,000 other things also demanding attention. It's not an intelligence failure. It's a capacity failure.”
CISO, Fortune 500 Financial Institution, Research Cohort Participant (anonymized)
Alert Fatigue as a Security Risk
There's a well-documented psychological phenomenon called decision fatigue: the quality of decisions deteriorates after extended periods of decision-making. Alert triage is exactly this kind of sustained, high-volume decision-making. By the end of a shift, analysts are making significantly worse triage decisions, not because they're incompetent, but because humans aren't built for this kind of sustained cognitive load.
In our cohort, false-negative rates for alert triage (classifying a real threat as a false positive) were 23% higher in the final two hours of a shift than in the first two hours. Night-shift teams had false-negative rates 38% higher than day-shift teams on equivalent alert loads. These aren't individual failures; they're predictable, structural outcomes of a broken model.
What Fixes It, And What Doesn't
Having diagnosed the problem clearly, let's be equally clear about what actually addresses it.
What Doesn't Work
- Hiring more tier-1 analysts: alert volumes are growing faster than any reasonable hiring plan could address, and burnout in tier-1 roles keeps attrition extremely high
- Better SIEM correlation rules: the SIEM is a contributing factor to the problem, not a solution to it; more rules generate more alerts, not fewer
- Alert prioritization: prioritization helps analysts choose which alerts to dismiss last, but doesn't address the fundamental volume problem
- Suppression rules: suppressing noisy detections reduces false positives but also increases false-negative risk for real threats disguised as routine activity
What Actually Works
The only structural fix is removing the human from the tier-1 triage loop entirely. AI-native investigation platforms that can autonomously triage, enrich, and disposition alerts, escalating only genuine, complex threats that warrant human judgment, address the root cause rather than the symptoms. In the 23 SOCs in our cohort that deployed AI-native triage, alert backlog dropped to near zero within 90 days. Analyst capacity for genuine investigations doubled.
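The article doesn't specify an implementation, so the following is a minimal, purely illustrative sketch of the shape of that loop: triage, enrich, disposition, escalate. Every name in it (Alert, enrich, classify, triage) is hypothetical rather than any particular platform's API, and the classification policy is a stand-in for whatever model an actual product uses.

```python
# Illustrative only: the shape of an autonomous tier-1 triage loop.
# All names here are hypothetical, not a real platform's API.
from dataclasses import dataclass, field
from enum import Enum, auto

class Disposition(Enum):
    BENIGN = auto()      # auto-closed, with rationale recorded for audit
    ESCALATE = auto()    # genuine/complex: routed to a human analyst

@dataclass
class Alert:
    id: str
    source: str
    raw: dict = field(default_factory=dict)

def enrich(alert: Alert) -> dict:
    """Gather context: asset criticality, identity history, threat intel.
    Stubbed here; a real system would query inventory and intel sources."""
    return {"asset_criticality": "low", "intel_hits": 0}

def classify(alert: Alert, context: dict) -> Disposition:
    """Stand-in policy: escalate anything touching a critical asset or
    matching threat intel; auto-close the rest."""
    if context["intel_hits"] > 0 or context["asset_criticality"] == "high":
        return Disposition.ESCALATE
    return Disposition.BENIGN

def triage(alerts: list[Alert]) -> list[Alert]:
    """Process every alert; return only those needing human judgment."""
    return [a for a in alerts if classify(a, enrich(a)) is Disposition.ESCALATE]
```

The point of the structure is the inversion: humans see only what the loop escalates, instead of working down a queue of thousands by hand.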
The organizations in our cohort that deployed AI-native alert triage reduced their breach rate by 58% over 12 months compared to the control group. The technology works. The question is how quickly organizations adopt it.
Marcus Webb
Senior Security Research Analyst, Alaris
Marcus leads competitive security research at Alaris, with a decade of experience modernizing enterprise SOC environments across financial services and critical infrastructure. He has conducted security assessments across more than 200 enterprise SOC environments.