Why Every Dismissed Alert Is Technical Debt
Maliciousness isn't an inherent property of an event - it's a property of its relationship to future context. Every dismissed alert is a liability on your balance sheet.
Longform writing on security, AI, and the systems that protect everything else.
Every time you tune a detection rule to silence a noisy alert, you're hard-coding a blind spot - trading false positives for false negatives.
AI's expanding context windows sound like a defensive breakthrough. In reality, they're structurally easier for attackers to exploit - attackers save the whole board state while defenders rebuild from fragments every move.
Security is theoretically simple. But SecOps in practice is a war against entropy - where the real task isn't correlation, it's intent recognition.
Teams build elaborate state machines to compensate for model limitations. The result benchmarks well - and doesn't think. Your architecture is a commitment, not a snapshot.
We spent two years perfecting our RAG pipeline. Then our AI started reading markdown files from a filesystem instead of querying our vector store. This isn't a bug - it's the future.
AI SOCs inherit and amplify human bias - favoring detections that reduce workload over ones that maximize threat coverage. Without a sparring partner, even AI-generated rules drift conservative.
You can't explain intelligence by inserting another intelligent agent. If GPT-5's router needs sophisticated judgment to route intelligence, who's routing the router?
Howard Marks argues you can't quantify risk even after the fact. We've built an entire vulnerability management industry around measuring the unmeasurable.
Traditional user-first thinking breaks down when technology constraints shift weekly. Start with rigorous backend validation, then design the minimal human interface around what actually works.
False Positive Rates were never about detection accuracy - they were always about human capacity. AI triage changes the equation entirely.
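A back-of-the-envelope sketch of why FPR is really a human-capacity constraint. All the volumes and rates below are hypothetical placeholders, not figures from the post:

```python
# Hypothetical numbers: even a modest alert rate turns a "good" FPR
# into a staffing problem, because every false positive costs human minutes.
daily_events = 1_000_000      # events screened per day (assumed)
alert_rate = 0.001            # fraction of events that fire an alert (assumed)
false_positive_rate = 0.95    # fraction of fired alerts that are benign (assumed)
minutes_per_triage = 5        # human time to dismiss one alert (assumed)
analyst_minutes_per_day = 8 * 60

alerts = daily_events * alert_rate                # 1,000 alerts/day
false_positives = alerts * false_positive_rate   # 950 benign alerts/day
analysts_needed = false_positives * minutes_per_triage / analyst_minutes_per_day
print(round(analysts_needed, 1))                 # analysts consumed purely by noise
```

Under these assumed numbers, roughly ten full-time analysts do nothing but dismiss noise - which is why AI triage changes the equation: it attacks the capacity term, not the detection term.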
Sutton's Bitter Lesson says intelligence emerges from scalable learning, not encoded knowledge. Does that mean we'll outgrow MITRE ATT&CK?
AI agentic architectures are a modern manifestation of the oldest problem-solving paradigm in CS - divide and conquer, but with reasoning, adaptation, and evolution.
Every Gen AI product fundamentally excels at one core function: context management. The more precisely you infer and enrich the prompt, the better the application performs.
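A minimal sketch of what "context management" looks like in code. The function name, enrichment sources, and schema here are all hypothetical illustrations, not a real API:

```python
def build_prompt(user_query: str,
                 retrieved_docs: list[str],
                 session_facts: dict[str, str]) -> str:
    """Assemble an enriched prompt. The application's real work is deciding
    which context to infer and inject - the model call itself is generic."""
    facts = "\n".join(f"- {key}: {value}" for key, value in session_facts.items())
    docs = "\n---\n".join(retrieved_docs)
    return (
        f"Known session facts:\n{facts}\n\n"
        f"Relevant documents:\n{docs}\n\n"
        f"User question: {user_query}"
    )

prompt = build_prompt(
    "Why did alert 4711 fire?",
    ["Rule R12 matches repeated failed logins."],
    {"user_role": "SOC analyst", "tenant": "acme"},
)
```

Everything upstream of the model - retrieval, session state, role inference - exists to make this one string more precise.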
SecOps reasoning is graph-based but our data arrives as time-series logs. This impedance mismatch - like ORMs bridging objects and tables - is the core data engineering problem in security.
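A sketch of the mismatch: logs arrive as ordered rows, but the analyst's question ("what did this process touch?") is a graph traversal. The log schema and entity names below are hypothetical:

```python
from collections import defaultdict

# Time-series view: events as they arrive (hypothetical schema).
logs = [
    {"ts": 1, "src": "proc:word.exe", "action": "spawned",   "dst": "proc:cmd.exe"},
    {"ts": 2, "src": "proc:cmd.exe",  "action": "connected", "dst": "ip:10.0.0.5"},
    {"ts": 3, "src": "proc:cmd.exe",  "action": "wrote",     "dst": "file:/tmp/x"},
]

# Graph view: the shape SecOps reasoning actually needs.
graph = defaultdict(list)
for event in logs:
    graph[event["src"]].append((event["action"], event["dst"]))

def reachable(node, g):
    """Everything transitively touched by `node` - a traversal,
    not a time-range scan over rows."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for _, dst in g.get(current, []):
            if dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

print(sorted(reachable("proc:word.exe", graph)))
# ['file:/tmp/x', 'ip:10.0.0.5', 'proc:cmd.exe']
```

The `for` loop is the ORM-like translation layer: it pivots row-shaped data into the entity-relationship shape the question is asked in.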