
ASO vs AI SOC

David Colombo

March 17, 2026 · 12 min read

Every vendor at RSA this year told you they have an AI SOC. Every single one. The largest platform vendors said it. The funded startups said it. Thirty companies you have never heard of said it. And here is the problem: they were all describing completely different things using the same two words. A SIEM with a chatbot is now an "AI SOC." A triage agent that handles phishing alerts is now an "AI SOC." A full platform that autonomously operates across detection, investigation, hunting, containment, remediation, and reporting? Also an "AI SOC." When a term means everything, it means nothing. I do think we owe it to security leaders to be more precise than this. CISOs are making million-dollar purchasing decisions based on categories that do not actually describe what the product does. So let me walk through the differences the way I see them, and give credit where it is due to the people who saw these distinctions before the rest of the industry caught up.

Key takeaways:

  • "AI SOC" has become a catch-all that covers everything from a chatbot bolted onto your SIEM to a fully autonomous security operations platform. The term tells you almost nothing.
  • Anton Chuvakin, Security Advisor at Google Cloud's Office of the CISO, drew a critical line years ago between "AI in SOC" and "AI SOC." Most vendors are still on the wrong side of it.
  • Chuvakin's original vision, Autonomic Security Operations, was about applying DevOps and SRE principles to security. Adaptive. Agile. People operating at dramatically higher effectiveness. Not a world without analysts.
  • Autonomous Security Operations (ASO) takes that further: one unified platform covering all seven stages of the security operations lifecycle with AI making real decisions, not suggestions.
  • The fundamental difference is architectural. You cannot get to autonomy by stacking agents on top of a broken foundation.

Chuvakin Got Here First. And He Used a Different Word.

Back in 2021, Anton Chuvakin at Google Cloud published a paper that fundamentally reframed how we should think about security operations. He called it Autonomic Security Operations.

Not autonomous. Autonomic.

That word choice was deliberate, and I think most people missed the distinction. Chuvakin was not describing a SOC that runs without people. He was describing a SOC that operates like a well-engineered system: adaptive, self-regulating, and built on the same principles that transformed software development through DevOps and SRE.

His core thesis was that the SOC had a toil problem. If you read the SRE book (and every security leader should), you will find that nearly everything a typical SOC analyst does on a daily basis fits the textbook definition of toil: manual, repetitive, automatable, reactive, and with no enduring value. Chuvakin looked at that and said the same revolution that happened in IT operations needed to happen in security operations.

The key ideas in his original Autonomic Security Operations framework included Detection-as-Code with version control and CI/CD pipelines. Machine-readable workflows instead of tribal knowledge. Analysts retrained as engineers and "agent supervisors" rather than alert processors. And a culture shift where leadership accepts probabilistic outcomes rather than demanding a zero-error rate from automated systems.

This was not about throwing AI at the problem. It was about rebuilding the foundation so that any form of automation, AI or otherwise, could actually work.
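To make the Detection-as-Code idea concrete, here is a minimal sketch of what it looks like when a detection rule is versioned, machine-readable data that a CI pipeline can validate on every pull request. The schema and field names below are illustrative assumptions, not Chuvakin's or any vendor's actual format:

```python
from dataclasses import dataclass, field

# Illustrative Detection-as-Code sketch: the rule is data under version
# control, not tribal knowledge in an analyst's head. Schema is hypothetical.
@dataclass
class DetectionRule:
    rule_id: str
    title: str
    query: str                      # detection logic as code, not a GUI config
    severity: str                   # "low" | "medium" | "high" | "critical"
    version: str = "1.0.0"          # bumped through normal code review
    tags: list[str] = field(default_factory=list)

def validate(rule: DetectionRule) -> list[str]:
    """A CI-pipeline style gate that runs before a rule ships to production."""
    errors = []
    if not rule.query.strip():
        errors.append(f"{rule.rule_id}: empty query")
    if rule.severity not in {"low", "medium", "high", "critical"}:
        errors.append(f"{rule.rule_id}: unknown severity {rule.severity!r}")
    return errors

rule = DetectionRule(
    rule_id="win-proc-001",
    title="Suspicious LSASS memory access",
    query='process.target == "lsass.exe" and access contains "PROCESS_VM_READ"',
    severity="high",
    tags=["credential-access"],
)
print(validate(rule))  # [] -> the rule passes the gate
```

The payoff is the same one DevOps got from infrastructure-as-code: rules become diffable, testable, and reviewable, and an AI agent has structured artifacts to read and write instead of a GUI to screen-scrape.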

Then Came "AI SOC," and the Nuance Got Lost

Fast forward to 2025 and 2026. Every vendor saw what triage agents could do and decided the entire SOC should be "AI-powered." The term "AI SOC" exploded. And Chuvakin, to his credit, pushed back immediately.

He drew a line that I think is the single most important distinction in the market right now: the difference between "AI in SOC" and "AI SOC."

"AI in SOC" means you are adding AI features to an existing SOC. A copilot that summarizes alerts. An agent that does triage enrichment. AI-generated risk scores on every ticket. This is useful work. But it does not change the operating model. The human is still carrying context between tools. The human is still making every decision. The human is still the connective tissue.

"AI SOC" means AI is the operating model. The system handles the work, the human governs the outcomes.

Chuvakin's position, and I respect him enormously for saying this plainly, is that almost nobody is actually doing the second thing. After RSA 2025 he wrote that he "definitely wants AI in SOC" but that a true "AI SOC" handling every task was "absolutely insane" at that stage. He compared the current crop of AI SOC vendors to SOAR circa 2015: new tools, same trajectory toward the same ditch.

He is right about most of the market. Where we diverge is on what is possible when you start from the right architecture.

The Detection Engineering Problem Nobody Wants to Talk About

One of Chuvakin's most important contributions is his insistence that the detection engineering problem comes before the AI problem.

Think about it like trying to build a self-driving car on roads with no lane markings, no traffic signals, and no consistent rules. You can put the most sophisticated AI in the world behind the steering wheel, but if the infrastructure is not there, it will fail.

That is exactly what most SOCs look like today. Detection logic lives in somebody's head. Playbooks are Word documents that nobody has opened since 2022. Tribal knowledge is the glue holding everything together. And then a vendor walks in, drops an AI triage agent on top of that, and wonders why it does not scale past the proof-of-value.

Chuvakin calls this "pilot purgatory." A proof-of-value converts to a small production deployment. The AI handles enrichment and summarization. Human analysts retain all decision authority. And expansion into anything higher-stakes just never happens. He and Oliver Rochford found this pattern repeated across 30+ vendor briefings.

The readiness framework Chuvakin published lays out what needs to be true before AI can work at all: security data queryable by machines via API or MCP, Detection-as-Code with version control, machine-intelligible workflows replacing ad-hoc communication, analysts reskilled as supervisors, and leadership that has signed off on a quantified "AI error budget."

If your detection logic is not codified, your AI agent has nothing to interact with except a brittle GUI. And that is, as Chuvakin puts it, a recipe for failure.
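The "AI error budget" idea transfers directly from SRE practice, and a quantified version is simple to express. The threshold below is an invented example, not a number from Chuvakin's framework:

```python
# Hedged sketch of a quantified "AI error budget", borrowed from SRE error
# budgets. The 2% budget_rate is an illustrative assumption.
def error_budget_remaining(decisions: int, errors: int,
                           budget_rate: float = 0.02) -> float:
    """Fraction of the agreed error budget still unspent.

    budget_rate: the leadership-approved tolerable error rate for autonomous
    decisions (e.g. 2% of triage verdicts may be wrong before autonomy
    gets dialed back).
    """
    allowed = decisions * budget_rate
    if allowed == 0:
        return 1.0
    return max(0.0, (allowed - errors) / allowed)

# 10,000 autonomous triage verdicts, 120 later overturned by a human:
remaining = error_budget_remaining(10_000, 120)
print(f"{remaining:.0%} of the error budget left")  # 40% of the error budget left
```

The point of signing off on a number like this in advance is cultural: leadership has agreed, before the first incident, how much imperfection the automated system is allowed, instead of demanding a zero-error rate and killing autonomy at the first mistake.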

Where Autonomous Security Operations Is Different

Here is where we get to what we are building at Alaris, and why we use the word "Autonomous" rather than "Autonomic" or "AI SOC."

Chuvakin's Autonomic Security Operations was a philosophy. An approach. A set of principles about how security operations should modernize. It was brilliant, and we took it seriously. But it was deliberately agnostic about the technology stack. The point was: fix the process, fix the culture, fix the data foundations, and then the tools can do their job.

We started from the other end. What if you built the platform that embodies those principles from day one? What if detection engineering, alert triage, investigation, threat hunting, containment, remediation, and reporting all lived in one system, on one architecture, with one data model? What if the AI did not have to interact with a brittle GUI because there was no brittle GUI? What if Detection-as-Code was not something you had to retrofit, but how the system works out of the box?

That is Autonomous Security Operations. Not a philosophy applied to existing tools. A unified platform built to operate the full security operations lifecycle with AI making real decisions across all seven stages.

The difference from what most vendors call "AI SOC" comes down to three things:

Unified Architecture

Not separate agents assembled on top of separate tools. One platform, one data model, one decision engine. The Security Graph connects every entity, every event, every relationship so the AI operates with full context rather than the narrow view you get when you stitch point products together.
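A toy example shows why a unified graph matters: once hosts, users, and alerts are nodes and edges in one structure, "what else does this alert touch?" becomes a traversal rather than five browser tabs. The entities and edges here are invented for illustration; this is not the actual Alaris Security Graph schema:

```python
from collections import defaultdict, deque

# Toy entity graph. Names are invented for the example.
graph = defaultdict(set)

def connect(a: str, b: str) -> None:
    graph[a].add(b)
    graph[b].add(a)

connect("alert:4821", "host:web-01")
connect("host:web-01", "user:svc-deploy")
connect("user:svc-deploy", "host:db-02")
connect("host:db-02", "asset:customer-db")

def related_entities(start: str, max_hops: int = 2) -> set[str]:
    """Everything within max_hops of an entity -- a crude blast-radius query."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return seen - {start}

print(sorted(related_entities("alert:4821")))
# ['host:web-01', 'user:svc-deploy']
```

When point products are stitched together instead, each tool holds only its own slice of this graph, and the connective traversal happens in an analyst's head.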

End-to-End Autonomy

Not just triage. Not just investigation. AI that handles detection engineering, alert triage, investigation, threat hunting, containment and response, remediation, and reporting. The entire lifecycle, not the two stages that are easiest to demo.

Full Operational Scope

This is a complete security operations function, not a feature you add to your existing stack. You plug it in, watch it operate, and expand what Alaris handles at your own pace. But the platform does not stop at triage and say "now hand it to a human to open five more browser tabs."

Most of the Market Is Stuck in the Middle

If you map out where everyone actually sits, the picture becomes clear.

The platform giants have massive portfolios and enormous customer bases. They are adding AI features to their existing products. Copilots, triage agents, AI-powered playbook authoring. All impressive engineering. But the underlying architecture is still built on the assumption that a human connects the stages. Adding AI to a legacy architecture does not produce autonomy. It produces a slightly faster version of the same broken workflow.

A wave of AI SOC startups proved that AI can handle alert triage autonomously. That was a genuine breakthrough. The best of them are expanding into investigation, hunting, and response. Real progress. But expanding from two stages to five by stacking separate agents onto existing toolchains is not the same as building one platform for all seven from the ground up.

And then there is Chuvakin's observation that most of these deployments are stuck in pilot purgatory. The vendor claims "50% faster investigations" and Chuvakin asks: faster than what baseline, under what conditions, measured how? He wants falsifiable claims. Something specific enough that a buyer can hold the vendor to it.

He is right to demand that. The industry should too.

The Gap

Chuvakin gave us the framework. The philosophy. The readiness criteria. The honest assessment of where the market actually is versus where vendors say it is.

The AI SOC startups gave us the proof that AI can make real triage decisions at production quality.

The gap that nobody else is building for: full scope, full autonomy, unified architecture. All seven stages. One platform. AI that operates, not assists.

That is Autonomous Security Operations. That is what we are building.

Further Reading

Anton Chuvakin's work on security operations has been foundational to this space. For those who want to go deeper:

  • Autonomic Security Operations: 10X Transformation of the Security Operations Center (2021, the original paper)
  • Kill SOC Toil, Do SOC Eng (2021, applying SRE toil concepts to SOC)
  • RSA 2025: AI's Promise vs. Security's Past (2025, the "AI in SOC" vs "AI SOC" distinction)
  • Beyond "Is Your SOC AI Ready?" Plan the Journey (2026, the five-pillar readiness framework)
  • AI SOC Vendors Are Selling a Future That Production Deployments Haven't Reached Yet (2026, the Chuvakin and Rochford report on vendor claims vs. reality)
  • Learn Modern SOC and D&R Practices Using ASO Principles (2024, free Coursera course based on the ASO framework)
  • Google Cloud Security Podcast (ongoing, covers AI SOC, detection engineering, and more)
David Colombo

CEO & Co-Founder, Alaris

David Colombo is the CEO and Co-Founder of Alaris, the company pioneering Autonomous Security Operations. Before founding Alaris, David gained international recognition for his cybersecurity research, including the discovery of vulnerabilities affecting Tesla vehicles worldwide. He is based in San Francisco.

Frequently Asked Questions

What is the difference between AI SOC and Autonomous Security Operations (ASO)?

AI SOC typically refers to adding AI capabilities to one or two stages of security operations, most commonly alert triage. Autonomous Security Operations (ASO) is a fundamentally different architecture: a unified platform that covers all seven stages of the security operations lifecycle (detection engineering, triage, investigation, threat hunting, containment, remediation, and reporting) with AI making real decisions across the entire workflow, not just assisting at a single layer.

What did Anton Chuvakin mean by "Autonomic Security Operations"?

In 2021, Chuvakin published a framework at Google Cloud called Autonomic Security Operations. It applied DevOps and SRE principles to the SOC: Detection-as-Code, machine-readable workflows, analysts reskilled as engineers and supervisors, and leadership willing to accept probabilistic outcomes. It was a philosophy about modernizing operations, not a specific technology stack.

Why are most AI SOC deployments stuck in "pilot purgatory"?

According to Chuvakin and Rochford's research across 30+ vendor briefings, most organizations deploy AI SOC tools for enrichment and summarization but never expand into higher-stakes workflows. The root cause is architectural: bolting AI onto a broken workflow produces a slightly less broken workflow. Without codified detection logic, machine-queryable data, and proper foundations, the AI has nothing meaningful to work with.

What are the seven stages of the security operations lifecycle?

The seven stages are: (1) Detection Engineering, writing and tuning detection logic; (2) Alert Triage, determining true vs. false positives; (3) Investigation, understanding full scope and impact; (4) Threat Hunting, proactively searching for undetected threats; (5) Containment and Response, stopping active threats; (6) Remediation, fixing underlying vulnerabilities; and (7) Reporting, documenting outcomes and driving improvements.

How does ASO differ from SOAR?

SOAR relies on brittle, pre-scripted playbooks that break when an incident deviates from the expected pattern. ASO uses AI agents that can reason about context, asset criticality, blast radius, and business impact to determine the right response dynamically. SOAR automates predefined steps; ASO automates decision-making across the full lifecycle.
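The contrast can be sketched in a few lines. The first function is a SOAR-style branch per alert type; the second weighs context to pick a response. All field names and thresholds below are invented for illustration, not how either class of product is actually implemented:

```python
def soar_playbook(alert: dict) -> str:
    """SOAR-style: one pre-scripted branch per anticipated alert type.
    Anything the playbook author did not foresee falls through to a human."""
    if alert["type"] == "phishing":
        return "quarantine_email"
    if alert["type"] == "malware":
        return "isolate_host"
    return "escalate_to_human"   # the playbook breaks on novelty

def aso_decision(alert: dict) -> str:
    """ASO-style (simplified): weigh context instead of matching a pattern.
    The weights and cutoffs here are illustrative assumptions."""
    risk = (alert["asset_criticality"] * 0.5
            + alert["blast_radius"] * 0.3
            + alert["confidence"] * 0.2)
    if risk > 0.7:
        return "contain_now"
    if risk > 0.4:
        return "investigate_further"
    return "close_with_evidence"

novel = {"type": "living-off-the-land", "asset_criticality": 0.9,
         "blast_radius": 0.8, "confidence": 0.7}
print(soar_playbook(novel))   # escalate_to_human
print(aso_decision(novel))    # contain_now
```

A real agent reasons over far richer context than a weighted sum, but the structural point holds: the static branch has no answer for an unanticipated alert type, while the context-driven model still produces a decision.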

Does ASO eliminate the need for security analysts?

No. ASO elevates analysts from alert processors to strategic operators. Humans supervise AI agents, handle edge cases that require institutional knowledge and creativity, make judgment calls on high-stakes decisions, and focus on genuinely novel threats. The goal is to remove humans from repetitive, time-sensitive toil, not from security operations entirely.

What does Gartner say about AI SOC adoption?

Gartner places AI SOC agent adoption at 1 to 5 percent, the Innovation Trigger stage of the Hype Cycle. The market is approaching what Gartner calls the Trough of Disillusionment, where early expectations meet deployment reality.

What prerequisites does an organization need before adopting AI in security operations?

Chuvakin's readiness framework identifies five prerequisites: security data must be queryable by machines via API or MCP, detection logic must be codified (Detection-as-Code with version control), workflows must be machine-intelligible rather than ad-hoc, analysts must be reskilled as agent supervisors, and leadership must accept a quantified "AI error budget" for automated decisions.
