Investigation Flow
- Trigger — An alert fires, you ask a question in the web interface, or a scheduled automation runs
- Context retrieval — Foggy loads your Knowledge Base entries (architecture docs, runbooks) and any conversation history from the thread
- Planning — Foggy analyzes the problem and decides which tools to query first (visible in the chain-of-thought panel)
- Data gathering — Foggy queries your connected integrations for relevant metrics, logs, dashboards, and cluster state
- Correlation — Foggy connects evidence across systems — correlating deployment timestamps with error spikes, metrics with log patterns
- Iteration — Based on findings, Foggy decides what to investigate next. This loop repeats until it has enough evidence.
- Results — Foggy delivers a synthesized answer with evidence, recommended actions, and follow-up suggestions
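The loop above — plan, gather, correlate, iterate, then synthesize — can be sketched in a few lines. This is a minimal illustration, not Foggy's actual implementation: the function names, the planner signature, and the tool dictionary are all assumptions.

```python
from typing import Callable, Optional

def investigate(
    question: str,
    tools: dict,                       # hypothetical: tool name -> query function
    plan: Callable,                    # hypothetical: (question, evidence) -> next step or None
    max_steps: int = 5,
) -> dict:
    """Iteratively gather evidence until the planner decides it has enough."""
    evidence: list = []
    for _ in range(max_steps):
        step = plan(question, evidence)       # planning: pick the next tool to query
        if step is None:                      # enough evidence: stop iterating
            break
        tool_name, query = step
        evidence.append(tools[tool_name](query))  # read-only data gathering
    # Results: in Foggy this is an LLM-synthesized answer; here, a stub.
    return {"question": question, "evidence": evidence,
            "answer": f"synthesized from {len(evidence)} findings"}
```

A toy planner might query metrics first, then logs, then stop — mirroring how each iteration's findings shape the next query.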
What Foggy Connects To
| Category | Integrations |
|---|---|
| Metrics & Logs | Grafana (Prometheus, Loki, and other data sources) |
| Infrastructure | Kubernetes |
| Alerts | Grafana Alerts, Alertmanager |
| Chat | Slack |
Learning
Foggy builds knowledge of your environment through two mechanisms:
- Knowledge Base — You provide architecture docs, service ownership maps, and troubleshooting runbooks. Foggy references these during every investigation, using your team’s procedures instead of generic advice.
- Conversation context — Within a thread, Foggy remembers previous findings and builds on them. Multi-turn investigations get progressively more targeted.
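One way to picture the two mechanisms above is as a context-assembly step that runs before each investigation. This is a hypothetical sketch — the function name and section labels are assumptions, not Foggy's actual prompt format.

```python
def build_context(kb_entries: list, thread_history: list, question: str) -> str:
    """Combine Knowledge Base docs and prior thread findings with the question."""
    sections = []
    if kb_entries:        # your architecture docs and runbooks
        sections.append("Knowledge Base:\n" + "\n".join(kb_entries))
    if thread_history:    # earlier findings in this thread
        sections.append("Previous findings:\n" + "\n".join(thread_history))
    sections.append("Question:\n" + question)
    return "\n\n".join(sections)
```

Because prior findings ride along with each turn, a follow-up question starts from the evidence already gathered rather than from scratch.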
Architecture
Foggy has two components:
- Web interface — Handles chat, connector configuration, automations, and investigation history
- Agent — The stateless AI engine that receives questions, reasons about them, executes tool calls, and streams answers back
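Because the agent is stateless, the web interface must send everything the agent needs on every call. A hypothetical request shape (field names are assumptions, not Foggy's API) makes the consequence concrete: nothing is stored between requests.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    """Illustrative payload from the web interface to the stateless agent."""
    question: str
    kb_entries: list = field(default_factory=list)      # Knowledge Base docs
    thread_history: list = field(default_factory=list)  # prior findings in the thread
    connectors: list = field(default_factory=list)      # enabled integrations
```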
Data Flow
- Integrations — Foggy authenticates using scoped credentials you provide (API keys, service account tokens). All credentials are encrypted at rest.
- Queries — Observability queries are read-only. Foggy never modifies your infrastructure. Slack is the only integration where Foggy writes (sending investigation results).
- LLM processing — Investigation prompts and query results are sent to the selected LLM (Claude Sonnet 4.5, Claude Opus 4.6, and more). Credentials and connection strings are never sent to LLM providers.
- Storage — Investigation history (questions, reasoning, answers) is stored in your project. Raw observability data (logs, metrics) is queried at runtime and not persisted by Foggy.
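One way to enforce the guarantee that credentials never reach LLM providers is a redaction pass over tool output before it is forwarded to the model. This is a hedged sketch under that assumption; the key names are illustrative, not Foggy's implementation.

```python
# Fields that look like secrets are replaced before LLM processing;
# observability data (metrics, log lines) passes through untouched.
SECRET_KEYS = {"api_key", "token", "password", "connection_string"}

def redact_for_llm(payload: dict) -> dict:
    """Strip credential fields from a tool result; keep the data."""
    return {k: "[REDACTED]" if k.lower() in SECRET_KEYS else v
            for k, v in payload.items()}
```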