WTF Is Security Context
Same structure, wildly different sources. The boundary of “security data” is defined by the investigation, not the tooling.
The industry uses “context” as a synonym for “more data.” It’s not. And confusing the two is why SOCs are drowning despite having more integrations, more dashboards, and more telemetry than ever before.
Ask what “security data” means and you’ll get a list of sources: firewall logs, EDR telemetry, SIEM events, cloud audit trails. But that’s not a definition - that’s an artifact of which products the industry happened to build first.
An employee asks in Slack “can you share the AWS root credentials?” That’s security data. A Salesforce export shows a departing rep downloaded their entire book of business. That’s security data. A threat intel report says APT29 is using OAuth persistence in M365 tenants - something that happened to someone else that might explain what’s happening in yours. None of these came from a security tool. All of them could change the outcome of an investigation.
There is no natural boundary around “security data.” The boundary of relevant data is defined by the investigation, not the tooling. Anything that could change the meaning of an event is security data. And most “context” architectures draw exactly the wrong line - they’ll enrich your alert with data from security tools, but they won’t tell you that the user in your alert posted on Slack last week asking about the exact system they’re now accessing at 2 AM.
So let’s start from the bottom. Every piece of data - firewall log, Slack message, threat intel report - reduces to the same atom: an actor performed an action on a target at a time. A single event has no meaning. A login is not malicious. A file download is not suspicious. Meaning doesn’t live inside events. It lives between them.
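That atom can be written down directly. A minimal sketch (the field names and the sample Slack event are my own illustration, not a standard schema) - note that the same shape fits a firewall log and a chat message alike:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    """The atom: an actor performed an action on a target at a time."""
    actor: str          # user, service account, device
    action: str         # login, download, connect, message
    target: str         # file, host, SaaS tenant, channel
    timestamp: datetime
    source: str         # where it came from: EDR, Slack, CRM, ...

# A "non-security" source reduces to the same atom as a firewall log.
slack = Event(
    actor="jdoe",
    action="message",
    target="#it-help: asked for AWS root credentials",
    timestamp=datetime(2024, 3, 1, 14, 2, tzinfo=timezone.utc),
    source="slack",
)
```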
From Events to Sequences
An analyst’s job begins when events start clustering.
A user logs in at 2:30 AM. Then they access a SharePoint site they’ve never touched before. Then they download a 200MB file. Then they connect to an external IP.
None of these are individually malicious. But arranged in sequence - same actor, compressed timeframe, escalating sensitivity - they start forming a shape.
A sequence is multiple events, bound together by a shared identity, ordered in time. But a sequence still isn’t enough. “User did A, then B, then C” is a chronology, not a conclusion.
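The definition above is mechanical enough to sketch: group by shared identity, order by time. This toy version (plain tuples of `(actor, action, target, timestamp)`, my own convention) shows how little structure a sequence actually adds - which is exactly why it isn't a conclusion:

```python
from collections import defaultdict
from datetime import datetime

def to_sequences(events):
    """Bind events into sequences: shared identity, ordered in time.

    `events` is an iterable of (actor, action, target, timestamp) tuples.
    Returns {actor: [events sorted by timestamp]}.
    """
    by_actor = defaultdict(list)
    for e in events:
        by_actor[e[0]].append(e)        # e[0] is the shared identity
    for seq in by_actor.values():
        seq.sort(key=lambda e: e[3])    # e[3] is the timestamp
    return dict(by_actor)
```

Feeding in the 2:30 AM example yields "login, access, download, connect" for that one user - a chronology, nothing more.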
Attack Chains and Benign Chains
Here’s a mental model that makes the analyst’s real problem visible.
At any given moment, chains of events are unfolding across your environment. Some are benign chains - an employee VPNs in late because they’re in a different time zone, accesses a shared drive they were just added to, downloads files for a presentation, uploads them to a partner portal.
Some are attack chains - a compromised credential is used to log in off-hours, the attacker enumerates accessible resources, finds a sensitive file, exfiltrates it to an external endpoint.
These chains are made of the same events.
A late-night login. A file access. A large download. An external connection. The individual events are identical. The chains they belong to are completely different.
Alerts fire at the intersection. A detection rule can’t see chains - it sees events that match a pattern. It flags the intersection point: this event could belong to an attack chain. But the same event could just as easily belong to a benign chain.
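A detection rule in this frame is just a predicate over a single event. This sketch (a made-up off-hours login rule, not any product's logic) makes the blindness concrete - the rule fires identically on the attacker and the night-owl employee:

```python
from datetime import datetime

def rule_offhours_login(event):
    """A rule matches a pattern on ONE event - it cannot see the chain.

    `event` is an (actor, action, target, timestamp) tuple.
    Fires on any login outside 06:00-22:00, benign or malicious alike.
    """
    actor, action, target, ts = event
    return action == "login" and not (6 <= ts.hour < 22)
```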
This reframes what an analyst actually does. They’re not asking “is this event bad?” - they’re asking “which chain does this event belong to?”
The analyst has to reconstruct enough of each possible chain to determine which one is more plausible. That’s a reasoning problem, not a lookup.
From Sequences to Storylines
To determine which chain an event belongs to, the analyst looks at what happened relative to what’s normal, what’s possible, and what else is going on.
Has this user logged in at 2 AM before? Is that downloaded file something they normally access? Is the external IP a known SaaS service or a command-and-control server?
A storyline is a sequence enriched with relationships - the analyst’s reconstruction of what might have happened and why, tested against competing hypotheses.
This is what security operations actually is. Not alert triage. Not enrichment. Abductive reasoning under uncertainty. Constructing the most plausible explanation from incomplete evidence, knowing that the wrong call means either a missed breach or a wasted escalation.
So Where Does “Context” Actually Fit?
Now we can talk about context with precision. “Context” isn’t one thing - it’s four different things, operating at different layers of the investigation.
Most vendors stop at Layer 2. The real work is Layers 3 and 4.
Layer 1: Enrichment Context
What do I know about the entities in this event?
- The user is in the Finance department
- The device is a managed laptop
- The IP geolocates to Chicago
- The file is classified as confidential
Enrichment is a fact sheet about the nouns in your alert. Useful, necessary, but table stakes. It doesn’t tell you which chain you’re looking at.
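Mechanically, enrichment is a join: look up the nouns in static reference data and attach what you find. A minimal sketch (the directory and device tables are hypothetical stand-ins for AD, MDM, or a CMDB):

```python
# Hypothetical reference tables - in practice these would be queries
# against a directory service, an MDM, and a CMDB.
USER_DIR = {"jdoe": {"dept": "Finance", "title": "Analyst"}}
DEVICES = {"LT-4821": {"managed": True, "os": "macOS"}}

def enrich(alert):
    """Layer 1: attach fact sheets to the nouns in the alert.

    Returns a copy of the alert with entity attributes joined on.
    Tells you nothing about which chain the event belongs to.
    """
    return {
        **alert,
        "user_info": USER_DIR.get(alert.get("user"), {}),
        "device_info": DEVICES.get(alert.get("device"), {}),
    }
```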
Layer 2: Behavioral Context
Is this normal for this actor?
- Does this user typically log in at 2 AM?
- Have they accessed this SharePoint site before?
- What’s their usual download volume?
Behavioral context compares the current event against a historical baseline. This is where UEBA products live. More useful than enrichment - but fundamentally vulnerable. If an attacker establishes persistence during the baseline learning window, “normal” includes the intrusion. You’re detecting deviations from a corrupted reference point.
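The core move is a comparison against a per-actor baseline. A toy z-score version (a deliberately simplified stand-in for what UEBA products do with far more sophistication) - note the caveat from the text baked into the docstring:

```python
from statistics import mean, stdev

def is_anomalous(value, history, z_threshold=3.0):
    """Layer 2: compare the current value against a historical baseline.

    Caveat: if the attacker was active during the window that produced
    `history`, the baseline is corrupted - deviations are measured
    against a "normal" that already includes the intrusion.
    """
    if len(history) < 2:
        return True  # no usable baseline yet: treat as anomalous
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold
```

A 200 MB download against a history of single-digit-MB downloads flags; a download inside the usual range doesn't. That's all this layer can say.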
Layer 3: Relational Context
How does this event connect to other events across the environment?
This is the layer most SOCs are starving for. Not this event and this actor - but whether this event is part of a larger pattern spanning multiple identities, systems, data sources, and timeframes.
- Did anyone else access this file recently?
- Did the same external IP appear in connection logs from other hosts?
- Is there a phishing email in this user’s inbox from last week?
- Did the user ask about this system in Slack three days ago?
Remember: security data has no natural boundary. Relational context means querying across everything - not just security telemetry, but anything that could reveal which chain this event belongs to. And it’s architecturally hard because it requires crossing sources, time, and entity types in real time.
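The shape of the operation is a pivot: take the entities in the event and fan out across every store that might mention them. This sketch runs over in-memory lists purely for illustration - the architectural hardness is that real systems need this same pivot across a graph or data lake, in real time:

```python
def related(event, stores):
    """Layer 3: fan out across every source, pivoting on shared entities.

    `event` is an (actor, action, target, timestamp) tuple.
    `stores` maps source name -> list of event tuples. Returns
    (source, event) pairs that share an actor or target with the input.
    """
    pivots = {event[0], event[2]}  # actor and target as pivot points
    hits = []
    for source, events in stores.items():
        for e in events:
            if e != event and pivots & {e[0], e[2]}:
                hits.append((source, e))
    return hits
```

The point of the sketch: the Slack store and the EDR store are queried identically, because relevance is defined by the entities, not by which tool produced the record.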
Layer 4: Temporal Context
How does the meaning of past events change when new information arrives?
The layer almost nobody talks about.
Six months ago, this same user ran an unusual script and connected to an external service. It was investigated and dismissed as routine admin work. Today, that user’s credentials appeared in a dark web dump.
Suddenly the meaning of those past events changes. They weren’t routine maintenance. They might have been initial access.
I’ve written about this before as the Event A Problem: the “maliciousness” of an event is not an inherent property of the event. It’s a property of its relationship to future information. Every dismissed alert is a provisional judgment that could be overturned by context that doesn’t exist yet.
Temporal context is the ability to re-evaluate the past in light of the present.
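One way to make that ability concrete: when new intelligence arrives, sweep the archive of dismissed alerts for any whose entities the new information touches. A minimal sketch (the alert shape and the dark-web-dump trigger are my own illustration):

```python
def reopen_candidates(dismissed_alerts, new_intel_entities):
    """Layer 4: new information arrives - which past judgments does it touch?

    Every dismissed alert is a provisional judgment. When fresh intel
    names an entity (e.g. credentials surface in a dump), re-surface
    any closed alert whose entities intersect it.
    """
    return [
        alert for alert in dismissed_alerts
        if new_intel_entities & set(alert["entities"])
    ]
```

The prerequisite is architectural, not algorithmic: dismissed alerts and their entities have to be retained and queryable, or there is nothing to re-evaluate.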
The Real Problem
When a vendor tells you they “provide context,” they almost always mean Layer 1. Maybe Layer 2 if they have a UEBA component. And they draw the traditional boundary around “security data” - only reasoning over what security tools collected.
But the analyst’s real job is chain attribution, and the bottleneck is Layers 3 and 4 - operating across a boundary of data that extends far beyond security tools.
That’s not a feature you bolt on. It’s not an integration you enable. It’s an architectural decision about how you store, relate, and reason over data - all data, not just what happened to come from a security product.
The next time someone tells you they “provide context,” ask them:
- What data can you reason over - just security tools, or anything relevant to the investigation?
- Can you connect an alert to related events across different identities and systems from the past 90 days?
- Can you re-evaluate a dismissed alert from three months ago when new information surfaces today?
- Do your investigations end with evidence of absence - or just absence of evidence?
If they can’t answer all four, they’re not providing context.
They’re providing a richer alert.