
How We Built Self-Learning Threat Detection at Alaris

Dr. Sana Rashid

February 2026 · 11 min read

Every security detection system degrades over time. Attackers evolve. Environments change. New tools get deployed. The rule you wrote last quarter to catch a specific lateral movement technique becomes irrelevant when the adversary switches TTPs. Self-learning detection is our answer to this fundamental problem: a system that continuously adapts to your environment so you don't have to manually keep up. Here's how we built it.

Key takeaways:

  • Static detection rules decay within 90 days as attacker TTPs evolve; self-learning systems maintain detection efficacy without manual intervention.
  • Alaris's detection engine builds a continuously updated behavioral baseline for every entity in your environment: users, devices, processes, and network flows.
  • The system uses a three-layer architecture: unsupervised anomaly detection, supervised classification, and LLM-based context reasoning.
  • False positive rates drop by 60–80% compared to rule-based detection because the system understands what's normal for your specific environment.

The Problem With Static Rules

Detection engineering in most enterprise SOCs is an arms race with a lagging indicator. Security teams write rules based on observed attack patterns. Attackers observe what gets caught and change their approach. Teams update their rules. Repeat.

The deeper problem is that static rules are inherently context-blind. A rule that fires on "PowerShell executing an encoded command" will generate thousands of false positives in an enterprise with legitimate automation. A rule calibrated to avoid those false positives will miss real attacks. There's no way to write a rule that's simultaneously specific enough to be useful and general enough to catch novel variants.
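To make the context-blindness concrete, here is a toy sketch of the kind of static rule described above. The event field names (`process`, `command_line`) are hypothetical, not any real SIEM schema; the point is that the rule has no notion of who is running the command or whether it's routine.

```python
# A minimal, context-blind detection rule: fires on any PowerShell
# invocation that uses an encoded command, regardless of who runs it.

def encoded_powershell_rule(event: dict) -> bool:
    """Return True if the event looks like encoded PowerShell."""
    proc = event.get("process", "").lower()
    cmd = event.get("command_line", "").lower()
    return "powershell" in proc and ("-enc" in cmd or "-encodedcommand" in cmd)

# The rule fires identically on a real attack and on legitimate automation:
attack = {"process": "powershell.exe",
          "command_line": "powershell.exe -enc SQBFAFgA..."}
automation = {"process": "powershell.exe",
              "command_line": "powershell.exe -EncodedCommand <nightly-backup-task>"}

assert encoded_powershell_rule(attack)
assert encoded_powershell_rule(automation)  # false positive
```

The only way to quiet the automation alert with a static rule is an allowlist, which an attacker can then hide behind. That is the trap behavioral baselining is meant to escape.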

Rule Decay in Practice

In our analysis of 47 enterprise SOCs before they deployed Alaris, the average detection rule set had a 90-day half-life for efficacy. After three months, more than half of the rules in a given SOC were either generating predominantly false positives or had been suppressed entirely because the noise was unmanageable. The teams knew their rule sets were decaying; they just didn't have the bandwidth to continuously maintain them.

In environments running legacy rule-based detection, analysts spend an average of 23% of their time tuning and maintaining detection logic: time that doesn't directly contribute to catching threats.

The Architecture: Three Layers

Alaris's self-learning detection engine uses a three-layer architecture that combines the strengths of different machine learning approaches while mitigating their individual weaknesses.

Layer 1: Unsupervised Behavioral Baselining

The foundation of the system is a continuously updated behavioral model for every entity in the environment: users, devices, processes, service accounts, and network flows. We use a combination of autoencoders and probabilistic models to establish what's "normal" for each entity at multiple time scales: hourly patterns, daily patterns, weekly patterns, and long-term drift.

The key insight is that "normal" is entity-specific. A database administrator running database queries at 2 AM is normal. A finance user doing the same thing is anomalous. A rule can't capture this nuance at scale. A behavioral model can.
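The DBA-versus-finance-user example can be sketched with a toy per-entity, per-hour baseline. This is an illustration of the idea, not Alaris's implementation (which uses autoencoders and probabilistic models, as noted above): each (entity, hour-of-day) pair keeps its own running statistics, so the same raw activity level scores very differently for different entities.

```python
import math
from collections import defaultdict

class EntityBaseline:
    """Per-entity, per-hour running mean/variance (Welford's algorithm),
    queried as a z-score: how unusual is this value for THIS entity now?"""
    def __init__(self):
        # (entity, hour) -> [n, mean, M2]
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, entity: str, hour: int, value: float):
        s = self.stats[(entity, hour)]
        s[0] += 1
        delta = value - s[1]
        s[1] += delta / s[0]
        s[2] += delta * (value - s[1])

    def zscore(self, entity: str, hour: int, value: float) -> float:
        n, mean, m2 = self.stats[(entity, hour)]
        std = math.sqrt(m2 / (n - 1)) if n > 1 else 1.0
        return (value - mean) / max(std, 1e-9)

baseline = EntityBaseline()
# The DBA routinely runs ~100 queries at 2 AM; the finance user runs none.
for _ in range(30):
    baseline.observe("dba", 2, 100.0)
    baseline.observe("finance", 2, 0.0)

# The identical observation is normal for one entity, extreme for the other:
assert abs(baseline.zscore("dba", 2, 100.0)) < 3
assert baseline.zscore("finance", 2, 100.0) > 3
```

A static rule would need one threshold for everyone; the per-entity model gets the nuance for free.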

Layer 2: Supervised Threat Classification

When the behavioral layer detects an anomaly, it's passed to a supervised classification system trained on labeled threat data. This layer answers the question: "Is this anomaly consistent with known attack patterns?" We train on MITRE ATT&CK techniques, our own threat intelligence, and, with appropriate privacy controls, anonymized signal from our customer network.
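As a rough sketch of what Layer 2 does, consider a nearest-centroid classifier over anomaly features. The features and centroid values here are invented for illustration; the technique IDs are real MITRE ATT&CK entries (T1021 Remote Services, T1078 Valid Accounts), but the production model and its training data are not shown in this post.

```python
import math

# Hypothetical centroids learned from labeled threat data.
# Feature vector: (off_hours_score, lateral_hosts_touched, credential_use_score)
CENTROIDS = {
    "T1021 Remote Services": (0.3, 0.9, 0.4),
    "T1078 Valid Accounts": (0.8, 0.2, 0.9),
    "benign anomaly": (0.2, 0.1, 0.1),
}

def classify(features):
    """Map an anomaly's features to the nearest known attack pattern."""
    label = min(CENTROIDS, key=lambda c: math.dist(features, CENTROIDS[c]))
    return label, math.dist(features, CENTROIDS[label])

# An off-hours anomaly with heavy credential use classifies as Valid Accounts:
label, distance = classify((0.75, 0.25, 0.85))
assert label == "T1078 Valid Accounts"
```

The real layer answers the same question this toy does ("is this anomaly consistent with known attack patterns?"), just with a far richer feature space and labeled corpus.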

Layer 3: LLM-Based Context Reasoning

The final layer uses a large language model to synthesize the signal from layers 1 and 2 with broader environmental context. This is where the system answers the question humans ultimately care about: "Is this a threat that requires investigation?" The LLM can reason about relationships between events, consider organizational context, and generate human-readable explanations of its conclusions.
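One way Layer 3 might assemble its input is sketched below. The prompt structure and field names are assumptions for illustration; the post doesn't specify the actual model, interface, or prompt design.

```python
import json

def build_triage_prompt(anomaly: dict, classification: dict, context: dict) -> str:
    """Combine Layer 1 and Layer 2 signal with environmental context
    into a single prompt asking for a verdict plus an explanation."""
    return (
        "You are a SOC triage assistant. Given the behavioral anomaly, the "
        "candidate attack classification, and organizational context, answer: "
        "is this a threat that requires investigation? Reply with a verdict "
        "(investigate / benign) and a one-paragraph explanation.\n\n"
        f"Anomaly (Layer 1): {json.dumps(anomaly)}\n"
        f"Classification (Layer 2): {json.dumps(classification)}\n"
        f"Context: {json.dumps(context)}\n"
    )

prompt = build_triage_prompt(
    {"entity": "svc-backup", "zscore": 6.2, "signal": "off-hours logins"},
    {"technique": "T1078 Valid Accounts", "confidence": 0.81},
    {"note": "maintenance window scheduled for this service account"},
)
assert "T1078" in prompt and "svc-backup" in prompt
```

The value of this layer is exactly the context line: a 6-sigma anomaly during a scheduled maintenance window reads very differently from the same anomaly on a quiet weekend.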

The three-layer approach is what allows Alaris to achieve a 60–80% reduction in false positives compared to rule-based detection, while maintaining higher detection coverage for novel threats.

How the System Learns

Self-learning is a term that gets used loosely. Here's exactly what we mean when we say the Alaris detection engine learns.

  • Continuous baseline updates: behavioral models are updated every 4 hours using new telemetry, with separate update schedules for short-term and long-term patterns
  • Analyst feedback integration: when an analyst marks a detection as true positive or false positive, that signal is incorporated into the classification models within 24 hours
  • Environment change detection: when a major change occurs (new tool deployment, new user population, network reconfiguration), the system automatically triggers a baseline recalibration
  • Cross-environment transfer learning: detection improvements from signals observed across the customer network are propagated as model updates, with all customer data remaining private
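Two of the mechanisms above, continuous baseline updates and analyst feedback integration, can be sketched in a few lines. The update rule (an exponential moving average) and all constants are illustrative, not Alaris's actual parameters or schedule.

```python
class LearningDetector:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha      # adaptation rate for new telemetry
        self.baseline = 0.0     # EMA of the monitored signal
        self.threshold = 3.0    # alert when signal exceeds baseline + threshold

    def update_baseline(self, value: float):
        # "Continuous baseline updates": blend new telemetry into the model.
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value

    def feedback(self, was_true_positive: bool):
        # "Analyst feedback integration": false positives raise the bar,
        # confirmed true positives lower it slightly.
        self.threshold += -0.1 if was_true_positive else 0.2

    def is_alert(self, value: float) -> bool:
        return value > self.baseline + self.threshold

d = LearningDetector()
for v in [1.0, 1.2, 0.9, 1.1]:   # routine telemetry
    d.update_baseline(v)

assert not d.is_alert(2.0)       # within normal range: no alert
assert d.is_alert(10.0)          # clear anomaly: alert
d.feedback(was_true_positive=False)  # analyst marks the 10.0 alert as noise
assert d.threshold > 3.0             # the system demands stronger evidence now
```

The real system does the analogous thing across thousands of models on a 4-hour and 24-hour cadence; the point of the sketch is the feedback loop, not the arithmetic.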

The result is a system that gets better over time, both at understanding your specific environment and at detecting emerging attack patterns across the industry.

The hardest engineering problem wasn't building a system that learns. It was building one that learns without catastrophically forgetting what it already knows. The balance between adaptation and stability is where most self-learning systems fail.

Dr. Sana Rashid, Director of AI Research, Alaris

What This Means in Practice

For security teams, self-learning detection changes the daily workflow in two significant ways.

First, onboarding is faster. Traditional detection systems require weeks of tuning before they're useful. The Alaris detection engine reaches 85% of its steady-state performance within 72 hours of deployment because it's learning your environment in real time rather than waiting for someone to configure rules.

Second, maintenance burden drops to near zero. Teams that used to allocate 20–30% of their engineering capacity to detection rule maintenance redirect that capacity entirely to threat hunting and investigation. This is where the real ROI comes from: not just the detection itself, but the engineering hours recovered.

One of our enterprise customers estimated they had 2,400 active detection rules before Alaris deployment. After six months of running Alaris's self-learning engine, they identified that 1,900 of those rules were either redundant or actively degraded. They retired them all.

Detection engineering will always require human expertise: understanding adversary behavior, designing detection logic for novel threats, and validating system behavior. But the maintenance treadmill of keeping static rules current can be eliminated entirely. That's what self-learning detection actually delivers.

Dr. Sana Rashid

Director of AI Research, Alaris

Dr. Sana Rashid leads AI research at Alaris, specializing in anomaly detection, adversarial machine learning, and large-scale behavioral analysis. She holds a PhD in machine learning from Carnegie Mellon and previously led threat intelligence research at a major cloud security provider.
