Foggy connects to your existing observability stack and infrastructure. When an investigation starts, Foggy queries your systems to gather evidence, correlates findings across data sources, and delivers results in the web interface or Slack.

Documentation Index
Fetch the complete documentation index at: https://docs.foggyhq.com/llms.txt
Use this file to discover all available pages before exploring further.
Investigation Flow
- Trigger — an alert, a question in the Console, or a scheduled automation.
- Context — loads Knowledge Base entries, accepted Memory, and prior thread history.
- Plan — picks which tools to query first.
- Gather — queries connected integrations for metrics, logs, dashboards, cluster state.
- Correlate — links deploys to error spikes, metrics to log patterns.
- Iterate — refines the plan based on what comes back.
- Answer — evidence-backed summary with follow-up suggestions.
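The seven steps above can be sketched as a simple loop. This is an illustrative sketch only: every function name, data shape, and value here is a hypothetical stand-in, not Foggy's actual API or internals.

```python
# Hypothetical sketch of the investigation flow: Trigger -> Context -> Plan
# -> Gather -> Correlate -> Iterate -> Answer. All names are illustrative.

def investigate(trigger: str, max_rounds: int = 3) -> dict:
    # Context: load Knowledge Base entries, accepted Memory, thread history
    context = {"knowledge_base": ["runbook: checkout-service"],
               "memory": [], "thread_history": []}
    evidence = []
    plan = ["metrics", "logs"]              # Plan: which tools to query first
    for _ in range(max_rounds):             # Iterate: refine until satisfied
        for tool in plan:                   # Gather: query each integration
            evidence.append(f"{tool}: result for '{trigger}'")
        # Correlate: link findings across sources (trivial stand-in check)
        correlated = [e for e in evidence if "metrics" in e or "logs" in e]
        if correlated:
            break
        plan = ["dashboards", "cluster-state"]  # widen the search next round
    # Answer: evidence-backed summary with the gathered material attached
    return {"summary": f"{len(evidence)} pieces of evidence for '{trigger}'",
            "evidence": evidence, "context": context}

result = investigate("error spike after deploy")
```

The point of the structure is that Gather and Correlate sit inside the Iterate loop: each round's findings can change which tools are queried next.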
What Foggy Connects To
| Category | Integrations |
|---|---|
| Metrics & Logs | Grafana, Prometheus, OpenSearch |
| Infrastructure | Kubernetes |
| Alerts | Alert Webhook (generic — accepts any alerting tool) |
| Chat | Slack |
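Because the Alert Webhook is generic, any alerting tool that can send an HTTP POST can trigger an investigation. A minimal sketch of what such a payload might look like follows; the field names and values are assumptions for illustration, not Foggy's documented schema.

```json
{
  "alert_name": "HighErrorRate",
  "severity": "critical",
  "service": "checkout",
  "description": "5xx rate above 5% for 10 minutes",
  "fired_at": "2025-01-01T12:00:00Z"
}
```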
Learning
Foggy builds knowledge of your environment through two mechanisms:
- Knowledge Base — You provide architecture docs, service ownership maps, and troubleshooting runbooks. Foggy references these during every investigation, using your team’s procedures instead of generic advice.
- Conversation context — Within a thread, Foggy remembers previous findings and builds on them. Multi-turn investigations get progressively more targeted.
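A Knowledge Base runbook entry might look like the following. The service name, ownership details, and mitigation steps are invented for illustration and are not from Foggy's docs.

```markdown
# Runbook: checkout-service 5xx spikes
- Owner: payments team (#payments-oncall)
- First check: recent deploys in the last hour
- Dashboards: Grafana "Checkout Overview"
- Known issue: connection pool exhaustion under flash-sale load;
  mitigate by scaling the API deployment to more replicas
```

Entries like this let an investigation follow your team's actual procedure (check deploys first, then the named dashboard) instead of generic troubleshooting advice.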
Architecture
Foggy has two components: the Console (web UI for chat, connectors, automations, history) and the Agent (stateless AI engine that runs each investigation). Both run in every deployment mode — only their location changes.

Deployment modes
- Cloud — Managed by Foggy. Connect observability tools via read-only tokens. Fastest setup.
- Self-hosted — Helm chart in your Kubernetes cluster. Data and credentials stay in your perimeter. See Install Foggy.
- Satellite — Outbound-only agent in a cluster you cannot expose to Foggy directly. Pairs with Cloud or self-hosted. See Satellite.
Data Flow
- Integrations — Foggy authenticates using scoped credentials you provide (API keys, service account tokens). All credentials are encrypted at rest.
- Queries — Observability queries are read-only. Foggy never modifies your infrastructure. Slack is the only integration where Foggy writes (sending investigation results).
- LLM processing — Investigation prompts and query results are sent to the selected LLM (Claude Sonnet 4.6, Claude Opus 4.7, and more). Credentials and connection strings are never sent to LLM providers.
- Storage — Investigation history (questions, reasoning, answers) is stored in your project. Raw observability data (logs, metrics) is queried at runtime and not persisted by Foggy.
Next steps
Quick Start
Your first investigation in under 5 minutes.
Knowledge Base
Teach Foggy about your infrastructure with architecture docs and runbooks.
Automations
Set up scheduled and alert-triggered investigations.
Integrations
Browse supported data sources.