Platform · Integrations

Plug into the stack your client already runs.

Six observability sources live today. Four alert channels. MCP, REST, and webhooks for the long tail. Custom integrations on request, usually within a week.

Observability sources

Where findings come from.

Point Runtime at a client's AI stack with cross-account IAM, native connectors, a Logfire read token, or OpenTelemetry traces from LangChain. All six sources live today. Model-agnostic: every source covers whichever LLM your client runs, from Claude and OpenAI to Gemini, Llama, Mistral, DeepSeek, and custom fine-tunes. Custom sources on request.
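The Logfire route, as a minimal sketch: a Python stack on the Anthropic SDK needs two extra lines, and Runtime reads the resulting traces with a separate read token. Token and model values below are placeholders, not defaults.

    import logfire
    from anthropic import Anthropic

    # Client-side instrumentation; the write token stays with the client.
    # Runtime reads the same Logfire project with a read-only token.
    logfire.configure(token="pylf_...")     # placeholder write token
    logfire.instrument_anthropic()          # traces every Anthropic SDK call

    client = Anthropic()
    client.messages.create(
        model="claude-sonnet-4-20250514",   # whatever model the client already runs
        max_tokens=256,
        messages=[{"role": "user", "content": "ping"}],
    )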

AWS Bedrock

Live

CloudWatch log ingestion, cross-account IAM

AWS SageMaker

Live

Endpoint monitoring, full observability

Azure OpenAI

Live

Native connector

Google Vertex AI

Live

Native connector

Logfire

Live

Pydantic AI, OpenAI, Anthropic SDK tracing

LangChain

Live

Agent framework traces via OpenTelemetry or Logfire integration
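The cross-account route listed above for Bedrock works roughly like the sketch below: the client grants a read-only role, Runtime assumes it and pulls invocation logs from CloudWatch. Role ARN, region, and log group are placeholders for however the client's Bedrock invocation logging is configured.

    import boto3

    # The client grants a read-only role in their account; Runtime assumes it.
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::123456789012:role/runtime-readonly",  # placeholder
        RoleSessionName="runtime-ingest",
    )["Credentials"]

    logs = boto3.client(
        "logs",
        region_name="eu-central-1",                                 # placeholder
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Pull recent Bedrock model-invocation logs for analysis.
    page = logs.filter_log_events(logGroupName="/aws/bedrock/modelinvocations")
    for event in page["events"]:
        print(event["message"][:200])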

Workflow destinations

Where findings land.

Alerts, issues, and reports flow into the queue your SOC already runs. No new dashboard for your analyst to babysit.

Jira

Live

Ticket sync, issue lifecycle, status round-trip

Slack

Live

Alert notifications, per-client channel routing

Microsoft Teams

Live

Alert notifications, per-client webhook

Email

Live

SMTP-compatible, configurable per tenant
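Per-client channel routing, illustrated. The routing map and payload below are not Runtime's configuration format, just the shape of the behaviour: each tenant's findings land in their own channel.

    import requests

    # Illustrative routing map; in the product this is per-tenant configuration.
    SLACK_WEBHOOKS = {
        "acme-corp": "https://hooks.slack.com/services/T000/B000/xxxx",
        "globex":    "https://hooks.slack.com/services/T000/B001/yyyy",
    }

    def notify(client_id: str, finding: dict) -> None:
        """Push a finding summary into the client's dedicated channel."""
        requests.post(
            SLACK_WEBHOOKS[client_id],
            json={"text": f"[{finding['severity'].upper()}] {finding['title']}"},
            timeout=5,
        )

    notify("acme-corp", {"severity": "high", "title": "Prompt injection in support agent"})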

Developer surfaces

For everything else.

Claude Code via MCP. Any HTTP client via REST. Any custom system via webhook. OpenTelemetry throughout, no proprietary data format.

MCP Server

Live

Native Claude Code support. Natural-language queries over your findings and traces

REST API

Live

Full programmatic access to scans, findings, agents, and reports

Webhooks

Live

Push alerts and events to any HTTP endpoint. Custom channels in 24h
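A sketch of the REST surface. Base URL, routes, and response shape below are placeholders to show the pattern, not the documented API; the API reference has the real endpoints.

    import requests

    BASE = "https://api.runtime.example/v1"          # placeholder base URL
    HEADERS = {"Authorization": "Bearer <api-key>"}

    # List open findings, newest first (hypothetical route and parameters).
    resp = requests.get(
        f"{BASE}/findings",
        headers=HEADERS,
        params={"status": "open", "sort": "-created_at"},
        timeout=10,
    )
    resp.raise_for_status()
    for finding in resp.json()["findings"]:          # assumed response shape
        print(finding["severity"], finding["title"])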

Need an integration that isn't listed?

Tell us the client's stack. For MSSP partners, custom connectors go on the roadmap and usually ship within a week. European MSSPs: NDA available on request.