Documentation Index

Fetch the complete documentation index at: https://docs.foggyhq.com/llms.txt

Use this file to discover all available pages before exploring further.

Every non-Slack integration is read-only — Foggy delivers findings, not actions. Credentials are encrypted at rest and never sent to LLM providers. Investigation data is never used to train models.

Read-only access

Foggy runs with read-only permissions for observability and infrastructure integrations — no agents, no sidecars, no writes to production. All non-Slack integrations are enforced as read-only at the connector level, regardless of the API token permissions you grant. The exception is Slack, where Foggy sends investigation results and alert notifications.

Credential encryption

All connector credentials (API tokens, URLs) are encrypted at rest using Fernet symmetric encryption:
  • Credentials are encrypted the moment you save them
  • Decrypted only when making API calls to your tools
  • Never logged, cached, or exposed in the UI after initial entry
  • Never sent to LLM providers — only query results are sent for analysis

Project isolation

All data is scoped to projects (workspaces). Users can only access projects they belong to.
Role   | Permissions
Owner  | Full control — settings, connectors, knowledge base, automations, member management
Member | Chat and view — can investigate and view results, but can’t change project configuration
See Teams and access for invite and workspace-setup details.

Integration scopes

Foggy requests the narrowest scopes each integration supports. Setup details are in each integration’s page under Integrations.
Integration   | Access                                                                                      | Direction
Grafana       | Read-only Viewer token (optional for public dashboards)                                     | Query only
Prometheus    | HTTP read; bearer token optional                                                            | Query only
Kubernetes    | Read-only ClusterRole (self-hosted in-cluster) or via Satellite for any other cluster       | Query only
OpenSearch    | Read-only bearer token                                                                      | Query only
GitHub        | Fine-grained PAT: read on Contents, Actions, Issues, Pull requests                          | Query only (writes require explicit opt-in)
Sentry        | Personal token with org:read, project:read, team:read, event:read, alerts:read              | Query only
Datadog       | API key + Application key with mcp_read and read on Monitors, Logs, APM, Metrics, Incidents | Query only
Linear        | Restricted (read-only) API key; full-access only when you want Foggy to file tickets        | Query only by default
Slack         | OAuth app with chat:write plus channel and mention read scopes                              | Bidirectional — Foggy reads mentions and writes replies
Alert Webhook | Inbound HTTPS with a Bearer token per workspace                                             | Inbound only
Satellite     | Outbound-only TLS from your cluster with ingest token; credentials stay local               | Outbound from your network
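
For the Kubernetes row, a read-only ClusterRole typically grants only get/list/watch verbs. The manifest below is an illustrative sketch — the resource list is an assumption, not the exact manifest Foggy ships; see the Kubernetes integration page for the authoritative version.

```yaml
# Illustrative read-only ClusterRole for an in-cluster deployment.
# Resource list is an assumption; consult the Kubernetes integration page.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foggy-readonly
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "pods/log", "events", "nodes", "services",
                "deployments", "replicasets", "jobs"]
    verbs: ["get", "list", "watch"]  # no create, update, patch, or delete
```

Because no write verbs are granted, the cluster enforces read-only access even independently of Foggy's own connector-level enforcement.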

Data handling

  • No model training — Investigation data is never used to train models or shared across projects.
  • Encryption in transit and at rest — Connector credentials use Fernet symmetric encryption.
  • Credential isolation from LLMs — API tokens and connection strings are never sent to LLM providers. Only query results (metrics, logs, events) are sent for analysis.

Bring Your Own Model

Use your own LLM API keys to control which provider processes your data. Choose the model per message via the model selector in the chat input.

LLM data flow

What data goes where:
Data type                 | Sent to LLM? | Details
Investigation prompts     | Yes          | Your question is sent to the selected LLM for reasoning
Tool query results        | Yes          | Metrics, logs, and events from your tools are sent for analysis
API tokens / credentials  | Never        | Credentials are used locally to query your tools, never included in LLM requests
Connection strings / URLs | Never        | Used locally for connector API calls only
Knowledge Base entries    | Yes          | Enabled entries are included as context for investigations
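
The separation in the table can be sketched as two distinct steps: credentials are consumed locally by the connector, and only the query results are placed in the LLM request. Every name below (`query_prometheus`, `build_llm_request`, the URLs and tokens) is hypothetical — a minimal sketch of the data flow, not Foggy's implementation.

```python
# Hypothetical sketch: credentials stay in the connector; only results
# and the user's prompt reach the LLM payload.

def query_prometheus(base_url: str, bearer_token: str, promql: str) -> dict:
    # The token is consumed here, inside the connector, and goes no further.
    # (Real code would perform an authenticated HTTP GET against the tool.)
    return {"status": "success", "data": {"result": [{"value": [0, "0.93"]}]}}

def build_llm_request(prompt: str, tool_results: dict) -> dict:
    # Only the prompt and tool output appear in the payload — no credentials.
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "tool", "content": str(tool_results)},
    ]}

token = "prom-bearer-token"
results = query_prometheus("https://prom.internal", token, "up")
payload = build_llm_request("Why is checkout latency up?", results)
assert token not in str(payload)  # credentials never reach the LLM request
```

The invariant worth noting is structural: `build_llm_request` never receives the token as an argument, so no code path can leak it into the payload.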
Questions about security? Contact us — we’re happy to discuss your specific compliance requirements.