Documentation Index

Fetch the complete documentation index at: https://docs.foggyhq.com/llms.txt

Use this file to discover all available pages before exploring further.

Foggy connects to your existing observability stack and infrastructure. When an investigation starts, Foggy queries your systems to gather evidence, correlates findings across data sources, and delivers results in the web interface or Slack.

Investigation Flow

  1. Trigger — an alert, a question in the Console, or a scheduled automation.
  2. Context — loads Knowledge Base entries, accepted Memory, and prior thread history.
  3. Plan — picks which tools to query first.
  4. Gather — queries connected integrations for metrics, logs, dashboards, cluster state.
  5. Correlate — links deploys to error spikes, metrics to log patterns.
  6. Iterate — refines the plan based on what comes back.
  7. Answer — evidence-backed summary with follow-up suggestions.
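The loop above can be sketched in a few lines of Python. This is purely illustrative: the planner, correlator, and tool-runner interfaces below are hypothetical stand-ins, not Foggy's actual engine or API.

```python
# Illustrative sketch of the investigation flow described above.
# Every name here is hypothetical -- Foggy's real engine is not public.

def investigate(trigger, context, tools, llm, max_rounds=3):
    """Run a bounded plan/gather/correlate loop and return a summary."""
    evidence = []
    findings = None
    plan = llm.plan(trigger, context)                 # 3. Plan
    for _ in range(max_rounds):
        for query in plan:                            # 4. Gather
            evidence.append(tools.run(query))
        findings = llm.correlate(evidence, context)   # 5. Correlate
        plan = llm.refine(plan, findings)             # 6. Iterate
        if not plan:                                  # nothing left to check
            break
    return llm.summarize(findings, evidence)          # 7. Answer
```

The bounded round count mirrors the iterate step: each pass refines the plan based on what the previous queries returned, and the loop stops once no further queries are worth running.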

What Foggy Connects To

  • Metrics & Logs: Grafana, Prometheus, OpenSearch
  • Infrastructure: Kubernetes
  • Alerts: Alert Webhook (generic — accepts any alerting tool)
  • Chat: Slack
Observability and infrastructure integrations are read-only — Foggy queries data but never modifies your systems. Slack is the only integration where Foggy sends messages (investigation results and alert notifications). See Integrations for setup guides.
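Because the Alert Webhook is generic, any alerting tool can forward alerts to it. The sketch below shows one way such a forwarder might look; the endpoint URL, field names, and auth header are assumptions for illustration, not Foggy's documented contract — check the Integrations setup guides for the actual payload format.

```python
import json
import urllib.request

# Hypothetical payload for a generic alert webhook. Field names and the
# bearer-token auth scheme are assumptions, not Foggy's documented API.

def build_alert(source, title, severity, labels):
    """Normalize an alert from any alerting tool into one JSON payload."""
    return {
        "source": source,          # e.g. "alertmanager", "pagerduty"
        "title": title,
        "severity": severity,      # e.g. "critical", "warning"
        "labels": labels,          # arbitrary key/value context
    }

def send_alert(endpoint, token, alert):
    """POST the alert; receipt would trigger an investigation."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

alert = build_alert("alertmanager", "High 5xx rate on checkout",
                    "critical", {"service": "checkout", "env": "prod"})
# send_alert("https://example.invalid/hooks/alerts", "TOKEN", alert)
```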

Learning

Foggy builds knowledge of your environment through two mechanisms:
  • Knowledge Base — You provide architecture docs, service ownership maps, and troubleshooting runbooks. Foggy references these during every investigation, using your team’s procedures instead of generic advice.
  • Conversation context — Within a thread, Foggy remembers previous findings and builds on them. Multi-turn investigations get progressively more targeted.
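To make the Knowledge Base mechanism concrete, here is a toy sketch of matching runbook entries to a question by keyword overlap. Foggy's actual retrieval is not described in this doc; this scoring is purely illustrative.

```python
# Toy sketch: rank runbook entries by word overlap with the question.
# This is NOT how Foggy retrieves Knowledge Base entries -- it is an
# assumption-free stand-in to illustrate the idea of grounding answers
# in your team's own docs rather than generic advice.

def match_runbooks(question, runbooks):
    """Return runbook names ordered by word overlap with the question."""
    terms = set(question.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), name)
        for name, text in runbooks.items()
    ]
    return [name for score, name in sorted(scored, reverse=True) if score]

runbooks = {
    "checkout-5xx": "checkout service 5xx errors: check recent deploys",
    "disk-full": "node disk pressure: rotate logs, expand volume",
}
hits = match_runbooks("why is checkout returning 5xx errors?", runbooks)
```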

Architecture

Foggy has two components: the Console (web UI for chat, connectors, automations, history) and the Agent (stateless AI engine that runs each investigation). Both run in every deployment mode — only their location changes.

Deployment modes

  • Cloud — Managed by Foggy. Connect observability tools via read-only tokens. Fastest setup.
  • Self-hosted — Helm chart in your Kubernetes cluster. Data and credentials stay in your perimeter. See Install Foggy.
  • Satellite — Outbound-only agent in a cluster you cannot expose to Foggy directly. Pairs with Cloud or self-hosted. See Satellite.
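The Satellite mode's outbound-only constraint can be sketched as a polling loop: the agent inside the private cluster fetches work over outbound connections and pushes results back, so nothing inbound is ever exposed. The transport here is injected and every name is hypothetical — this does not reflect Foggy's actual pairing protocol.

```python
import time

# Illustrative outbound-only loop: the satellite polls for tasks rather
# than accepting inbound connections. All interfaces are hypothetical.

def satellite_loop(fetch_task, run_task, post_result, rounds, idle_sleep=0.0):
    """Poll for tasks, execute them locally, push results back out."""
    completed = 0
    for _ in range(rounds):
        task = fetch_task()            # outbound poll (e.g. HTTPS)
        if task is None:
            time.sleep(idle_sleep)     # nothing to do yet
            continue
        result = run_task(task)        # query local cluster state
        post_result(task, result)      # outbound push of findings
        completed += 1
    return completed
```

The design point is that both legs — fetching work and reporting results — originate inside the cluster, which is what lets a Satellite pair with Cloud or a self-hosted Console without any inbound exposure.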

Data Flow

  • Integrations — Foggy authenticates using scoped credentials you provide (API keys, service account tokens). All credentials are encrypted at rest.
  • Queries — Observability queries are read-only. Foggy never modifies your infrastructure. Slack is the only integration where Foggy writes (sending investigation results).
  • LLM processing — Investigation prompts and query results are sent to the selected LLM (Claude Sonnet 4.6, Claude Opus 4.7, and more). Credentials and connection strings are never sent to LLM providers.
  • Storage — Investigation history (questions, reasoning, answers) is stored in your project. Raw observability data (logs, metrics) is queried at runtime and not persisted by Foggy.
See Security for full details.
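The doc states that credentials and connection strings are never sent to LLM providers, without specifying how. As one way such scrubbing could work, here is a regex-based redaction sketch; the patterns are examples only and say nothing about Foggy's actual enforcement.

```python
import re

# Hypothetical illustration of scrubbing secrets from query results
# before they reach an LLM provider. Patterns are examples only.

def redact(text):
    """Replace credential-shaped substrings with a placeholder."""
    # Bearer tokens in auth headers
    text = re.sub(r"(?i)(bearer\s+)\S+", r"\1[REDACTED]", text)
    # key=value / key: value style secrets
    text = re.sub(r"(?i)\b(api[_-]?key|token|password)(\s*[=:]\s*)\S+",
                  r"\1\2[REDACTED]", text)
    # user:password@ in connection strings
    text = re.sub(r"://[^/\s@:]+:[^@\s]+@", "://[REDACTED]@", text)
    return text
```

For example, `redact("postgres://admin:s3cret@db:5432/app")` keeps the host and database visible while masking the embedded credentials.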

Next steps

Quick Start

Your first investigation in under 5 minutes.

Knowledge Base

Teach Foggy about your infrastructure with architecture docs and runbooks.

Automations

Set up scheduled and alert-triggered investigations.

Integrations

Browse supported data sources.