Foggy connects to your existing observability stack and infrastructure. When an investigation starts, Foggy queries your systems to gather evidence, correlates findings across data sources, and delivers results in the web interface or Slack.

Investigation Flow

  1. Trigger — An alert fires, you ask a question in the web interface, or a scheduled automation runs
  2. Context retrieval — Foggy loads your Knowledge Base entries (architecture docs, runbooks) and any conversation history from the thread
  3. Planning — Foggy analyzes the problem and decides which tools to query first (visible in the chain-of-thought panel)
  4. Data gathering — Foggy queries your connected integrations for relevant metrics, logs, dashboards, and cluster state
  5. Correlation — Foggy connects evidence across systems — correlating deployment timestamps with error spikes, metrics with log patterns
  6. Iteration — Based on findings, Foggy decides what to investigate next, looping until it has gathered enough evidence
  7. Results — Foggy delivers a synthesized answer with evidence, recommended actions, and follow-up suggestions
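The gather-correlate-iterate loop above can be sketched in a few lines of Python. This is an illustrative model only: the tool functions, the evidence format, and the stopping heuristic are assumptions for the sketch, not Foggy's real API.

```python
# Hypothetical sketch of an iterative investigation loop.
# Tool names and return values are illustrative stand-ins.

def query_grafana(question):
    # Stand-in for a read-only Grafana query (metrics/logs).
    return {"error_rate_spike": "14:02 UTC"}

def query_kubernetes(question):
    # Stand-in for reading cluster state (read-only).
    return {"deploy_rollout": "14:01 UTC"}

TOOLS = [query_grafana, query_kubernetes]

def investigate(question, max_iterations=5):
    evidence = []
    for _ in range(max_iterations):
        # Planning: pick the next tool based on what is still unknown
        # (here, trivially round-robin).
        tool = TOOLS[len(evidence) % len(TOOLS)]
        evidence.append(tool(question))  # data gathering
        if len(evidence) >= 2:           # crude "enough evidence" check
            break
    # Correlation + synthesis across the evidence would happen here.
    return {"question": question, "evidence": evidence}

result = investigate("Why did checkout error rates spike?")
```

The key property the sketch captures is that each iteration's findings (a deploy at 14:01, an error spike at 14:02) feed the decision about what to query next.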

What Foggy Connects To

Category          Integrations
Metrics & Logs    Grafana (Prometheus, Loki, and other data sources)
Infrastructure    Kubernetes
Alerts            Grafana Alerts, Alertmanager
Chat              Slack
Observability and infrastructure integrations are read-only — Foggy queries data but never modifies your systems. Slack is the only integration where Foggy sends messages (investigation results and alert notifications). See Integrations for setup guides.
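Read-only access can be enforced at the credential level rather than trusted to the agent. For the Kubernetes integration, a minimal sketch of such a scoped credential is an RBAC role limited to read verbs; the names below are illustrative, not manifests shipped by Foggy.

```yaml
# Illustrative read-only ClusterRole for an observability agent's
# service account. Resource names and the role name are examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foggy-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "events", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]  # no create, update, or delete
```

Binding the agent's service account to a role like this guarantees that queries stay read-only regardless of what the agent requests.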

Learning

Foggy builds knowledge of your environment through two mechanisms:
  • Knowledge Base — You provide architecture docs, service ownership maps, and troubleshooting runbooks. Foggy references these during every investigation, using your team’s procedures instead of generic advice.
  • Conversation context — Within a thread, Foggy remembers previous findings and builds on them. Multi-turn investigations get progressively more targeted.
See Knowledge Base for details.

Architecture

Foggy has two components:
  • Web interface — The UI for chat, connector configuration, automations, and investigation history
  • Agent — The stateless AI engine that receives questions, reasons about them, executes tool calls, and streams answers back
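"Stateless" here means the agent keeps nothing between calls: the question, conversation history, and Knowledge Base context all arrive in each request, and the web interface owns the thread. A minimal sketch, assuming a hypothetical request shape (not Foggy's actual API):

```python
# Illustrative sketch of a stateless agent call.
# The request dict shape and answer format are assumptions.

def agent_handle(request):
    # No state survives between calls: everything the agent needs
    # (question, prior findings) is passed in the request itself.
    history = request.get("history", [])
    return f"answer #{len(history) + 1} to: {request['question']}"

history = []
for q in ["Why is p99 latency up?", "Which deploy caused it?"]:
    answer = agent_handle({"question": q, "history": history})
    history.append((q, answer))  # the caller, not the agent, stores the thread
```

Because the caller replays the history on every turn, the agent can be scaled or restarted freely without losing an in-progress investigation.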

Data Flow

  • Integrations — Foggy authenticates using scoped credentials you provide (API keys, service account tokens). All credentials are encrypted at rest.
  • Queries — Observability queries are read-only; Foggy never modifies your infrastructure. Slack is the only integration Foggy writes to (sending investigation results).
  • LLM processing — Investigation prompts and query results are sent to the selected LLM (Claude Sonnet 4.5, Claude Opus 4.6, and more). Credentials and connection strings are never sent to LLM providers.
  • Storage — Investigation history (questions, reasoning, answers) is stored in your project. Raw observability data (logs, metrics) is queried at runtime and not persisted by Foggy.
See Security for full details.

Next steps