Platform · Integrations
Plug into the stack your client already runs.
Six observability sources live today. Four alert channels. MCP, REST, and webhooks for the long tail. Custom integrations on request, usually within a week.
Observability sources
Where findings come from.
Point Runtime at a client's AI stack with cross-account IAM, native connectors, a Logfire read token, or LangChain OpenTelemetry. All six sources live today. Model-agnostic: every source covers any LLM your client runs, from Claude and OpenAI to Gemini, Llama, Mistral, DeepSeek, and custom fine-tunes. Custom sources on request.
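As a rough sketch of what the cross-account IAM path looks like on the AWS side, the Python snippet below assumes a read-only role in the client's account via STS and pulls Bedrock invocation logs from CloudWatch. The role ARN, external ID, and log group name are placeholders for illustration, not values the product prescribes.

```python
# Illustrative only: a cross-account read of Bedrock invocation logs.
# Role ARN, external ID, and log group name are placeholders.
import boto3

sts = boto3.client("sts")

# Assume a read-only role the client provisions in their own account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/runtime-readonly",  # placeholder
    RoleSessionName="runtime-ingest",
    ExternalId="client-supplied-external-id",                   # placeholder
)["Credentials"]

logs = boto3.client(
    "logs",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Pull recent Bedrock model-invocation events from CloudWatch.
page = logs.filter_log_events(
    logGroupName="/aws/bedrock/modelinvocations",  # depends on the client's logging config
    limit=50,
)
for event in page["events"]:
    print(event["timestamp"], event["message"][:120])
```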
AWS Bedrock
CloudWatch log ingestion, cross-account IAM
AWS SageMaker
Endpoint monitoring, full observability
Azure OpenAI
Native connector
Google Vertex AI
Native connector
Logfire
Pydantic AI, OpenAI, Anthropic SDK tracing
LangChain
Agent framework traces via OpenTelemetry or Logfire integration
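For the Logfire and LangChain sources, tracing is configured in the client's own code and Runtime reads the results with a read token; LangChain traces can also arrive over plain OpenTelemetry. The snippet below is a minimal client-side sketch using the Logfire SDK; the token value is a placeholder.

```python
# Client-side tracing sketch: emit OpenAI / Anthropic SDK spans to Logfire,
# which Runtime then reads with a project read token. Token is a placeholder.
import logfire

logfire.configure(token="pylf_v1_...")  # the project's write token (placeholder)

# One-line instrumentation of the SDKs most clients already use.
logfire.instrument_openai()
logfire.instrument_anthropic()

# From here, ordinary SDK calls produce traces that Logfire records
# and Runtime can query without any proprietary data format.
```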
Workflow destinations
Where findings land.
Alerts, issues, and reports flow into the queue your SOC already runs. No new dashboard for your analyst to babysit.
Jira
Ticket sync, issue lifecycle, status round-trip
Slack
Alert notifications, per-client channel routing
Microsoft Teams
Alert notifications, per-client webhook
Email
SMTP-compatible, configurable per tenant
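Per-client channel routing amounts to one destination per tenant and one message per alert. The sketch below shows the idea for Slack incoming webhooks; the webhook URLs and alert fields are invented for illustration.

```python
# Sketch of per-client Slack routing: one incoming-webhook URL per tenant,
# one POST per alert. URLs and the alert shape are illustrative.
import requests

SLACK_WEBHOOKS = {
    "acme-corp": "https://hooks.slack.com/services/T000/B000/xxxx",  # placeholder
    "globex":    "https://hooks.slack.com/services/T111/B111/yyyy",  # placeholder
}

def notify(tenant: str, finding: dict) -> None:
    """Post a finding summary to the tenant's alert channel."""
    url = SLACK_WEBHOOKS[tenant]
    text = f":rotating_light: {finding['severity'].upper()}: {finding['title']}"
    requests.post(url, json={"text": text}, timeout=10).raise_for_status()

notify("acme-corp", {"severity": "high", "title": "Prompt injection detected in agent run"})
```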
Developer surfaces
For everything else.
Claude Code via MCP. Any HTTP client via REST. Any custom system via webhook. OpenTelemetry throughout, no proprietary data format.
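As an illustration of the REST surface, the call below lists open findings for a client. The base URL, path, parameters, and auth header are assumptions made for the example, not the documented API.

```python
# Hypothetical REST call shape -- base URL, path, and auth header are
# illustrative, not the documented API surface.
import os
import requests

BASE = "https://api.example-runtime.io/v1"  # placeholder
HEADERS = {"Authorization": f"Bearer {os.environ['RUNTIME_API_TOKEN']}"}  # placeholder token

# e.g. list open findings for one client, newest first
resp = requests.get(
    f"{BASE}/findings",
    headers=HEADERS,
    params={"status": "open", "sort": "-created_at", "limit": 25},
    timeout=30,
)
resp.raise_for_status()
for finding in resp.json()["items"]:
    print(finding["severity"], finding["title"])
```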
MCP Server
Native Claude Code support. Natural-language queries over your findings and traces
REST API
Full programmatic access to scans, findings, agents, and reports
Webhooks
Push alerts and events to any HTTP endpoint. Custom channels in 24h
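On the receiving end, a webhook consumer is just an HTTP endpoint that accepts the pushed JSON. The sketch below uses FastAPI and verifies an HMAC-SHA256 signature, a common webhook pattern; the path, payload fields, signature header, and shared secret are assumptions, not the published contract.

```python
# Receiving-side sketch: a minimal FastAPI endpoint for pushed alerts.
# Path, payload fields, and signature header are assumptions, not the
# platform's published webhook contract.
import hashlib
import hmac
import os

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
SECRET = os.environ["WEBHOOK_SECRET"].encode()  # shared secret (assumed)

@app.post("/hooks/runtime")
async def receive(request: Request, x_signature: str = Header(default="")):
    body = await request.body()
    # Verify an HMAC-SHA256 signature -- a common webhook pattern, assumed here.
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, x_signature):
        raise HTTPException(status_code=401, detail="bad signature")
    event = await request.json()
    # Route into whatever the SOC already runs: SIEM, ticketing, pager.
    print(event.get("type"), event.get("finding", {}).get("title"))
    return {"ok": True}
```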
Need an integration that isn't listed?
Tell us the client's stack. For MSSP partners, we ship custom connectors in about a week and add them to the roadmap. European MSSPs: NDA on request.