Every conversation with Foggy is an investigation. Here’s what the experience looks like from start to finish.

Starting an investigation

Type a natural-language question in the chat input. You can:
  • Ask anything — “Why is the payment service returning 500 errors?”
  • @Mention resources — Type @ to reference specific Kubernetes services, deployments, or Grafana dashboards. Mentioned resources get injected as structured context for more precise investigations.
  • Choose a model — Select from Claude Sonnet 4.5, Claude Opus 4.6, or GPT-4o using the model selector in the chat input. Your selection persists within the session.
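
When a message with mentions is submitted, the mentioned resources travel alongside the prose as structured context. The exact payload is internal to Foggy, but a hypothetical shape (all field names here are illustrative assumptions, not Foggy’s actual schema) could look like:

```json
{
  "message": "Why is the payment service returning 500 errors?",
  "model": "claude-sonnet-4.5",
  "mentions": [
    {
      "type": "kubernetes/deployment",
      "name": "payment-service",
      "namespace": "production"
    }
  ]
}
```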
Chat input with @mention dropdown and model selector

What you see during an investigation

A collapsible panel labeled “Thought for Xm Ys” shows Foggy’s reasoning in real time:
  • Reasoning steps — What Foggy is thinking and what it plans to check next
  • Tool calls — The exact Grafana queries, Prometheus PromQL expressions, Loki log searches, and kubectl commands being executed
  • Plan cards — High-level investigation strategy when the question is complex
This gives you full transparency into how Foggy reached its conclusions.
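To make the tool-call panel concrete: an investigation into 500 errors might surface a PromQL expression along these lines (the metric and label names are illustrative, not taken from a real Foggy run):

```promql
# Per-second rate of 5xx responses from the payment service over the last 5 minutes
sum(rate(http_requests_total{service="payment", status=~"5.."}[5m]))
```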
After the answer, a Sources section shows all tools that were invoked during the investigation. Click any source to see the raw query and result.
The final answer streams in rich Markdown:
  • Tables summarizing findings
  • Code blocks with relevant configs or commands
  • Mermaid diagrams visualizing dependencies or timelines
  • Inline evidence linking conclusions to specific data points
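
As an example of the Mermaid output, a dependency diagram in an answer might render from source like this (the services shown are illustrative):

```mermaid
graph LR
  gateway[API gateway] --> payment[payment-service]
  payment --> orders[(orders DB)]
  payment -->|timeouts| cache[(Redis cache)]
```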
Chain-of-thought expanded during an investigation

Follow-up suggestions

After every answer, Foggy generates three contextual follow-up suggestions based on the investigation results. Click any suggestion to continue the conversation and dig deeper. For example, after a latency investigation:
  • “Show the deployment diff that caused this”
  • “What other services are affected?”
  • “How do I prevent this in the future?”
Follow-up suggestions below an answer

Conversation history

All investigations are saved as threads in the sidebar. You can:
  • Pick up where you left off — previous messages provide context for multi-turn conversations
  • Search through past investigations
  • Share thread links with teammates

Message actions

Each message has actions available:
  • Regenerate — Re-runs the investigation with fresh data from your connectors
  • Stop — Cancels a running investigation mid-stream
  • Feedback — Thumbs up/down to help improve Foggy’s answers

@Mention autocomplete

Type @ in the chat input to see an autocomplete dropdown of available resources:
  • Kubernetes services and deployments
  • Grafana dashboards
  • Infrastructure components
Mentioned resources are injected as structured context, giving Foggy precise information about what you want to investigate.

Model selector showing available LLMs

Model selection

Choose the right model for each question:
  • Claude Sonnet 4.5 — Default; fast, reliable investigations
  • Claude Opus 4.6 — Complex multi-step reasoning and deep analysis
  • GPT-4o — Alternative perspective, broad general knowledge
Selection is per-message — you can switch models mid-conversation.

Next steps