A technical overview for buyers who want to see the methodology before they sign up. No marketing abstraction — just what runs, how it runs, and why it's built this way.
Cairn doesn't run a single scanner. Each engagement deploys a coordinated set of OSS tools selected for the target type, running in parallel. Cairn's own agentic engine handles reasoning, chaining, and decision-making — the OSS tools contribute breadth and specialist coverage that no single tool provides alone.
When a tool finds something Cairn's native engine didn't, that discrepancy is logged as a training signal. The miss log is how Cairn gets better over time.
No Burp Suite Pro in the pipeline. We have it on the lab machine and use it to benchmark Cairn — but we don't have a license that covers automated SaaS use. Our entire production stack is OSS: zero licensing risk for you or us. What you see above is what runs.
Every tool produces raw findings in its own format, with its own severity scale and its own duplicate-handling logic. Before any finding reaches a report — or you — it passes through Cairn's LLM normalization layer.
This is not marketing. It's a Claude-powered step that runs on every engagement at every tier, doing work that rule-based pipelines cannot:
Key-based dedup (same URL + same title) catches obvious duplicates. LLM dedup catches the hard ones: "SQL Injection in login form" from ZAP and "SQLi via POST parameter username" from SQLmap are the same finding. The LLM knows this. Rule-based dedup doesn't.
Scanners generate noise. The LLM layer reviews each finding against its evidence, evaluates plausibility, and flags likely false positives before they reach you. Findings that don't survive review are suppressed — not hidden, but excluded from the active report with a suppression reason logged.
Every normalized finding is scored: CVSS base score, an AI-assigned triage priority (Critical / High / Medium / Low), and a confidence rating. You receive a ranked, actionable finding set — not an unsorted wall of scanner output.
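One way the ranking described above could look in code. This is a sketch, not Cairn's actual data model; the field names and sort order are illustrative assumptions.

```python
from dataclasses import dataclass

# Lower number sorts first: triage priority dominates the ranking.
PRIORITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}


@dataclass
class ScoredFinding:
    title: str
    cvss: float        # CVSS base score, 0.0-10.0
    priority: str      # AI-assigned triage priority
    confidence: float  # 0.0-1.0, how sure the reviewer model is


def rank(findings):
    """Highest triage priority first; ties broken by CVSS, then confidence."""
    return sorted(
        findings,
        key=lambda f: (PRIORITY_ORDER[f.priority], -f.cvss, -f.confidence),
    )
```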
Normalized findings are automatically mapped to relevant compliance frameworks (PCI-DSS Req 11.4, SOC 2 CC7.1, HIPAA §164.308). The report your auditor receives is pre-mapped — no post-processing required on your end.
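In spirit, the mapping is a lookup from finding category to framework controls. The sketch below is illustrative only: the category keys and which control they map to are assumptions, and the real mapping is far larger. The control IDs are the ones this document names.

```python
# Illustrative category -> control mapping; not the production table.
COMPLIANCE_MAP = {
    "injection": ["PCI-DSS Req 11.4", "SOC 2 CC7.1"],
    "access_control": ["HIPAA §164.308"],
}


def map_to_frameworks(category):
    """Return the framework controls a finding category maps to,
    or an empty list for unmapped categories."""
    return COMPLIANCE_MAP.get(category, [])
```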
AI auth note: Cairn's normalization layer runs Claude via Anthropic's API. Credentials are managed through our vault system — never hardcoded, never in shell profiles. The same security hygiene we enforce for clients, we enforce for ourselves.
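The credential pattern amounts to this: the vault injects the key into the process environment at deploy time, and code only ever reads it from there. A minimal sketch, assuming the standard `ANTHROPIC_API_KEY` environment variable:

```python
import os


def anthropic_api_key():
    """Fetch the API key from the environment, which the vault populates
    at deploy time. Nothing is hardcoded or read from shell profiles."""
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError("ANTHROPIC_API_KEY not provisioned by vault")
    return key
```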
The Premium tier is not premium-priced automation. It's a review session — Brendon McCaulley, CISSP, plus a purpose-built team of Claude-powered AI agents, working through your findings together every month.
Brendon and the AI agent team review the normalized finding set. High-confidence false positives are confirmed and removed. Findings that are technically correct but require context to interpret are annotated.
Brendon drives a live Claude Code session against the target environment. The AI agents assist with pattern recognition and correlation across findings, and surface areas the automated scanner didn't reason about. This is active security testing, not just report review.
The final report carries Brendon's CISSP signature and a DocuSeal-signed attestation. This is a professional attestation that a human security practitioner reviewed the findings — not just an automated scan certificate.
1 hour with Brendon each month. Walk through critical findings, prioritize remediation, and plan the next scan cycle. This is a working session, not a sales call.
Why "AI agent team" and not just "Brendon"? The AI agents aren't background tools — they actively participate in the review session. Pattern recognition at scale, correlation across hundreds of findings, real-time OSINT lookups, CVE cross-referencing. Brendon provides judgment and direction; the AI team provides speed and coverage. As AI models improve, your review team's capabilities improve automatically.
Every Premium engagement generates a category of data that Starter and Pro don't produce: human-identified attack methods. When Brendon and the AI team find something during the interactive crawl that the automated stack missed, that discrepancy is structured and logged.
The miss log captures: which tool (if any) found it, which category of vulnerability it represents, what the attack pattern or payload looked like, and what Cairn would need to detect it next time. These logs are reviewed, refined, and fed back into Cairn's detection logic.
After AI normalization, findings are tagged by source. Any finding that came from an OSS tool or human review with no corresponding Cairn-native detection is flagged as a Cairn miss.
Each miss is structured: tool that found it, vulnerability category, attack pattern or payload, target context, engagement ID. No client data. No PII. Attack methods only.
Logged misses are reviewed weekly. High-signal patterns are worked into Cairn's detection logic — new Nuclei templates, new Cairn native checks, new heuristics. The fix is verified by re-running against the same class of target.
Every Premium client's engagement improves Cairn for every client on the platform. A detection gap found in a fintech environment closes a blind spot in a healthcare environment. The platform learns across all engagements.
Privacy boundary: The learning loop extracts attack methods — the technical pattern of what worked. Never target identity, finding content, or any client data. The boundary is enforced at the miss log schema level: the only fields that exist are fields that could not contain client data.
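"Enforced at the schema level" means the record type has no free-form client fields to begin with. A sketch of what such a schema could look like; the field names are hypothetical, chosen to match the five items this document lists.

```python
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class MissLogEntry:
    """Only these fields exist; there is nowhere to put client data."""
    found_by_tool: str   # e.g. "nuclei", or "human-review"
    vuln_category: str   # e.g. "ssrf"
    attack_pattern: str  # generalized payload shape, not captured traffic
    target_context: str  # coarse target type, e.g. "rest-api"
    engagement_id: str   # opaque internal identifier

ALLOWED = {"found_by_tool", "vuln_category", "attack_pattern",
           "target_context", "engagement_id"}


def schema_is_clean(entry):
    """Guard for tests/CI: the schema still matches the privacy boundary."""
    return {f.name for f in fields(entry)} == ALLOWED
```

Freezing the dataclass also blocks ad-hoc attribute additions after construction, so the boundary can't erode quietly at runtime.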
In April 2026, Anthropic previewed Claude Mythos — a new AI model that found thousands of previously unknown zero-days across every major operating system and browser. Mythos is described as "extremely autonomous" with the reasoning capabilities of an advanced security researcher. It is currently in limited preview through Project Glasswing, available only to select infrastructure partners.
Trailhead Security is not a Project Glasswing partner. We don't have Mythos access today, and we won't claim otherwise. What we do have is an architecture built to use it the day it's available.
Cairn's AI layer is model-agnostic by design. The normalization, triage, and reasoning components reference the model through a configuration layer rather than hardcoded calls to any specific model. The day Mythos is GA, swapping Claude Sonnet for Claude Mythos is a configuration change, not a rewrite.
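The shape of that configuration layer can be sketched in a few lines. The aliases and model-ID strings below are illustrative assumptions, not real Anthropic API identifiers.

```python
import os

# Alias -> model ID. Strings here are placeholders, not real model IDs.
MODEL_REGISTRY = {
    "sonnet": "claude-sonnet",
    "mythos": "claude-mythos",
}


def resolve_model():
    """The pipeline asks config (here, an env var) which model to use.
    Upgrading models is a one-line config change, not a code change."""
    return MODEL_REGISTRY[os.environ.get("CAIRN_MODEL", "sonnet")]
```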
Mythos's strength is autonomous multi-step reasoning on complex security tasks — exactly the workload Cairn's normalization and triage pipeline runs.
We're monitoring Mythos's Project Glasswing rollout. When API access opens, Cairn will be in the first wave of security platforms to test and deploy it.
Mythos is an Anthropic product. Trailhead Security is not affiliated with or endorsed by Anthropic for Mythos testing. The above reflects our architectural readiness, not an access claim.
Starter subscriptions begin at $299/month. No sales call. Sign up via API and run your first scan within the hour.