Every non-Slack integration is read-only: Foggy delivers findings, not actions. Credentials are encrypted at rest and never sent to LLM providers. Investigation data is never used to train models.

Documentation Index
Fetch the complete documentation index at: https://docs.foggyhq.com/llms.txt
Use this file to discover all available pages before exploring further.
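A minimal sketch of pulling and parsing that index with the Python standard library. The line-per-entry parsing is an assumption about the file's plain-text format:

```python
import urllib.request

def parse_index(text: str) -> list[str]:
    """Return the non-empty lines of an llms.txt index."""
    return [line for line in text.splitlines() if line.strip()]

def fetch_index(url: str = "https://docs.foggyhq.com/llms.txt") -> list[str]:
    """Fetch the documentation index and return its entries."""
    with urllib.request.urlopen(url) as resp:
        return parse_index(resp.read().decode("utf-8"))
```

Each returned entry points at a documentation page you can explore next.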
Read-only access
Foggy runs with read-only permissions for observability and infrastructure integrations: no agents, no sidecars, no writes to production. All non-Slack integrations are enforced as read-only at the connector level, regardless of the API token permissions you grant. The exception is Slack, where Foggy sends investigation results and alert notifications.

Credential encryption
All connector credentials (API tokens, URLs) are encrypted at rest using Fernet symmetric encryption:
- Credentials are encrypted the moment you save them
- Decrypted only when making API calls to your tools
- Never logged, cached, or exposed in the UI after initial entry
- Never sent to LLM providers — only query results are sent for analysis
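The encrypt-on-save, decrypt-on-use flow described above can be sketched with the `cryptography` package's Fernet implementation. The function names and in-memory key are illustrative; real deployments manage the key separately:

```python
from cryptography.fernet import Fernet

# One symmetric key per deployment (illustrative; key management is out of scope here).
key = Fernet.generate_key()
fernet = Fernet(key)

def save_credential(token: str) -> bytes:
    """Encrypt a connector token the moment it is saved."""
    return fernet.encrypt(token.encode("utf-8"))

def use_credential(ciphertext: bytes) -> str:
    """Decrypt only at the moment an API call to the tool is made."""
    return fernet.decrypt(ciphertext).decode("utf-8")

stored = save_credential("grafana-viewer-token")
assert stored != b"grafana-viewer-token"  # never stored in plaintext
```

Only the decrypted value is used for the outbound API call; the plaintext is never persisted or logged.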
Project isolation
All data is scoped to projects (workspaces). Users can only access projects they belong to.

| Role | Permissions |
|---|---|
| Owner | Full control — settings, connectors, knowledge base, automations, member management |
| Member | Chat and view — can investigate and view results, but can’t change project configuration |
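The two-role model above amounts to a simple permission check. The role and action names here are illustrative, not Foggy's internal identifiers:

```python
# Illustrative mapping of the Owner/Member roles to allowed actions.
ROLE_PERMISSIONS = {
    "owner": {"chat", "view", "settings", "connectors", "knowledge_base",
              "automations", "members"},
    "member": {"chat", "view"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Members can investigate and view results, but configuration actions resolve to False for them.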
Integration scopes
Foggy requests the narrowest scopes each integration supports. Setup details are in each integration's page under Integrations.

| Integration | Access | Direction |
|---|---|---|
| Grafana | Read-only Viewer token (optional for public dashboards) | Query only |
| Prometheus | HTTP read; bearer token optional | Query only |
| Kubernetes | Read-only ClusterRole (self-hosted in-cluster) or via Satellite for any other cluster | Query only |
| OpenSearch | Read-only bearer token | Query only |
| GitHub | Fine-grained PAT: read on Contents, Actions, Issues, Pull requests | Query only (writes require explicit opt-in) |
| Sentry | Personal token with org:read, project:read, team:read, event:read, alerts:read | Query only |
| Datadog | API key + Application key with mcp_read and read on Monitors, Logs, APM, Metrics, Incidents | Query only |
| Linear | Restricted (read-only) API key; full access only when you want Foggy to file tickets | Query only by default |
| Slack | OAuth app with chat:write plus channel and mention read scopes | Bidirectional — Foggy reads mentions and writes replies |
| Alert Webhook | Inbound HTTPS with a Bearer token per workspace | Inbound only |
| Satellite | Outbound-only TLS from your cluster with ingest token; credentials stay local | Outbound from your network |
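The Alert Webhook row describes an inbound endpoint authenticated by a per-workspace Bearer token. A minimal sketch of that check; the token store and workspace IDs are hypothetical:

```python
import hmac

# Illustrative per-workspace tokens (in practice these would come from storage).
WORKSPACE_TOKENS = {"ws_123": "s3cret-webhook-token"}

def authorize(workspace_id: str, authorization_header):
    """Accept only an 'Authorization: Bearer <token>' header matching the workspace."""
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    presented = authorization_header[len("Bearer "):]
    expected = WORKSPACE_TOKENS.get(workspace_id, "")
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(presented, expected)
```

Requests without the header, or with the wrong token, are rejected before any alert payload is processed.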
Data handling
- No model training — Investigation data is never used to train models or shared across projects.
- Encryption in transit and at rest — Connector credentials use Fernet symmetric encryption.
- Credential isolation from LLMs — API tokens and connection strings are never sent to LLM providers. Only query results (metrics, logs, events) are sent for analysis.
Bring Your Own Model
Use your own LLM API keys to control which provider processes your data. Choose the model per message via the model selector in the chat input.

LLM data flow
What data goes where:

| Data type | Sent to LLM? | Details |
|---|---|---|
| Investigation prompts | Yes | Your question is sent to the selected LLM for reasoning |
| Tool query results | Yes | Metrics, logs, and events from your tools are sent for analysis |
| API tokens / credentials | Never | Credentials are used locally to query your tools, never included in LLM requests |
| Connection strings / URLs | Never | Used locally for connector API calls only |
| Knowledge Base entries | Yes | Enabled entries are included as context for investigations |
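The contract in the table — query results reach the model, credentials never do — can be sketched as a two-step flow. The function and field names are hypothetical, not Foggy's actual API:

```python
def query_tool(connector: dict, query: str) -> dict:
    """Use the credential locally to fetch results (stubbed here).

    connector["token"] authenticates this call and goes no further;
    it is never placed in any outbound LLM request.
    """
    return {"query": query, "series": [1.2, 1.8, 1.7]}

def build_llm_payload(prompt: str, results: list) -> dict:
    """Assemble what is sent to the LLM: the prompt and query results only."""
    return {"prompt": prompt, "context": results}

connector = {"url": "https://grafana.internal", "token": "viewer-secret-token"}
results = [query_tool(connector, "latency_p99")]
payload = build_llm_payload("Why is p99 latency up?", results)
```

Note that `build_llm_payload` never receives the connector at all, so tokens and connection URLs cannot leak into the request by construction.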
Questions about security? Contact us — we’re happy to discuss your specific compliance requirements.