Read-only access
Foggy runs with strict read-only permissions for observability and infrastructure integrations — no agents, no sidecars, and zero ability to modify production systems. It queries data but never creates, updates, or deletes resources in your infrastructure. The exception is Slack, where Foggy sends investigation results and alert notifications. All other integrations are enforced as read-only at the connector level, regardless of the permissions granted by your API tokens.

Credential encryption
All connector credentials (API tokens, URLs) are encrypted at rest using Fernet symmetric encryption:
- Credentials are encrypted the moment you save them
- Decrypted only when making API calls to your tools
- Never logged, cached, or exposed in the UI after initial entry
- Never sent to LLM providers — only query results are sent for analysis
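For reference, Fernet is an authenticated symmetric scheme (AES-128-CBC plus an HMAC-SHA256 tag), so a tampered ciphertext fails to decrypt rather than yielding garbage. A minimal sketch of the encrypt-on-save / decrypt-on-use flow above, assuming the Python `cryptography` package; how Foggy actually stores and rotates its key is not shown here:

```python
from cryptography.fernet import Fernet

# Key generated once and kept out of the database (illustrative only)
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypted the moment the credential is saved
stored = fernet.encrypt(b"grafana-api-token-abc123")

# Decrypted only at API-call time; plaintext is never persisted or logged
plaintext = fernet.decrypt(stored)
```

The ciphertext is what lands at rest; the plaintext exists only in memory for the duration of the outbound API call.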
Project isolation
All data in Foggy is scoped to projects. Users can only access projects they’re members of.

| Role | Permissions |
|---|---|
| Owner | Full control — settings, connectors, knowledge base, automations, member management |
| Member | Chat and view — can investigate and view results, but can’t change project configuration |
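The two-role model above boils down to a simple action allowlist per role. A hypothetical sketch — the names and structure are illustrative, not Foggy's actual API:

```python
# Illustrative role-to-action mapping for the table above (hypothetical names)
OWNER_ACTIONS = {"chat", "view", "settings", "connectors",
                 "knowledge_base", "automations", "members"}
MEMBER_ACTIONS = {"chat", "view"}

ROLE_ACTIONS = {"owner": OWNER_ACTIONS, "member": MEMBER_ACTIONS}

def can(role: str, action: str) -> bool:
    # Unknown roles get no permissions
    return action in ROLE_ACTIONS.get(role, set())
```

So `can("member", "chat")` holds, while `can("member", "connectors")` does not: members investigate and view, owners also manage project configuration.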
Data handling
- No model training — Investigation data is never used to train AI models or shared across projects. Your data is yours.
- Encryption in transit and at rest — All data is encrypted at rest and in transit. Connector credentials use Fernet symmetric encryption.
- Credential isolation from LLMs — API tokens and connection strings are never sent to LLM providers. Only query results (metrics, logs, events) are sent for analysis.
Bring Your Own Model
Optionally use your own LLM API keys so you control which provider processes your data. You choose which model to use per message via the model selector in the chat input.

LLM data flow
Understanding exactly what data goes where:

| Data type | Sent to LLM? | Details |
|---|---|---|
| Investigation prompts | Yes | Your question is sent to the selected LLM for reasoning |
| Tool query results | Yes | Metrics, logs, and events from your tools are sent for analysis |
| API tokens / credentials | Never | Credentials are used locally to query your tools, never included in LLM requests |
| Connection strings / URLs | Never | Used locally for connector API calls only |
| Knowledge Base entries | Yes | Enabled entries are included as context for investigations |
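The table above amounts to a hard boundary when the LLM request is assembled: prompts, query results, and enabled Knowledge Base entries go in; credentials and connection URLs never do. A hypothetical sketch of that boundary — the function and field names are illustrative, not Foggy's code:

```python
def build_llm_request(prompt, query_results, knowledge_entries, connector):
    # `connector` holds the token and URL; they are used locally to run the
    # queries that produced `query_results`, and are never copied into the
    # request body below.
    return {
        "prompt": prompt,                                    # sent
        "context": {
            "tool_results": query_results,                   # sent
            "knowledge": [e for e in knowledge_entries
                          if e.get("enabled")],              # sent if enabled
        },
    }

req = build_llm_request(
    prompt="Why did p95 latency spike at 14:00?",
    query_results=[{"metric": "p95_latency", "values": [120, 480]}],
    knowledge_entries=[{"enabled": True, "text": "Service A owns checkout"},
                       {"enabled": False, "text": "Old runbook"}],
    connector={"token": "secret-token", "url": "https://grafana.internal"},
)
```

Nothing from `connector` appears anywhere in `req`, which is the property the "Never" rows in the table describe.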
Questions about security? Contact us — we’re happy to discuss your specific compliance requirements.