Foggy is designed with SOC 2 and GDPR requirements in mind. Your privacy is our top priority — we use read-only access to observability tools, encrypt all credentials at rest, and never modify your infrastructure. Slack is the only integration where Foggy sends messages. You have full control over what data we see, and we never use your data to train models for others.

Read-only access

Foggy runs with strict read-only permissions for observability and infrastructure integrations — no agents, no sidecars, and zero ability to modify production systems. It queries data but never creates, updates, or deletes resources in your infrastructure. The exception is Slack, where Foggy sends investigation results and alert notifications. All other integrations are enforced as read-only at the connector level, regardless of the permissions granted by your API tokens.
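Connector-level read-only enforcement can be sketched as a thin wrapper that rejects mutating HTTP verbs before a request ever leaves the process. This is an illustrative sketch, not Foggy's actual implementation; the names (`ReadOnlyConnector`, `ReadOnlyViolation`) are assumptions, and a real connector would likely use per-endpoint allowlists, since some query APIs accept read queries over POST.

```python
# Hypothetical sketch: enforce read-only at the connector layer,
# regardless of what the underlying API token would permit.
MUTATING_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

class ReadOnlyViolation(Exception):
    """Raised when a connector attempts a mutating request."""

class ReadOnlyConnector:
    """Wraps a request function so only read requests reach the backend."""

    def __init__(self, send):
        self._send = send  # underlying HTTP request function

    def request(self, method, url, **kwargs):
        # Block mutating verbs before the request leaves the process.
        if method.upper() in MUTATING_METHODS:
            raise ReadOnlyViolation(f"{method} {url} blocked: connector is read-only")
        return self._send(method, url, **kwargs)
```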

Credential encryption

All connector credentials (API tokens, URLs) are encrypted at rest using Fernet symmetric encryption:
  • Credentials are encrypted the moment you save them
  • Decrypted only when making API calls to your tools
  • Never logged, cached, or exposed in the UI after initial entry
  • Never sent to LLM providers — only query results are sent for analysis
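The encrypt-on-save / decrypt-on-use lifecycle above can be sketched with Fernet from the Python `cryptography` package. This is a minimal illustration of the pattern, not Foggy's code; the function names and key handling are assumptions (in practice the key would come from a secrets manager, not be generated inline).

```python
# Illustrative sketch of credential encryption at rest with Fernet.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumption: in production, loaded from a secrets manager
fernet = Fernet(key)

def save_credential(token: str) -> bytes:
    """Encrypt the credential the moment it is saved."""
    return fernet.encrypt(token.encode())

def use_credential(ciphertext: bytes) -> str:
    """Decrypt only at the point of making an API call to your tools."""
    return fernet.decrypt(ciphertext).decode()
```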

Project isolation

All data in Foggy is scoped to projects. Users can only access projects they’re members of.
| Role | Permissions |
| --- | --- |
| Owner | Full control — settings, connectors, knowledge base, automations, member management |
| Member | Chat and view — can investigate and view results, but can’t change project configuration |
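The project-scoped access model above can be sketched as a membership lookup plus a role-to-action map. The role names match the table; everything else (the `can` helper, the action names) is a hypothetical illustration, not Foggy's actual authorization code.

```python
# Hypothetical sketch: all access is scoped to projects, and a user's
# role within a project determines which actions are allowed.
OWNER_ACTIONS = {"chat", "view", "settings", "connectors",
                 "knowledge_base", "automations", "members"}
MEMBER_ACTIONS = {"chat", "view"}
ROLE_ACTIONS = {"owner": OWNER_ACTIONS, "member": MEMBER_ACTIONS}

def can(memberships: dict, user: str, project: str, action: str) -> bool:
    """Allow an action only if the user is a member of the project
    and their role grants that action."""
    role = memberships.get((user, project))  # None if not a member
    return role is not None and action in ROLE_ACTIONS[role]
```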

Data handling

  • No model training — Investigation data is never used to train AI models or shared across projects. Your data is yours.
  • Encryption in transit and at rest — All data is encrypted at rest and in transit. Connector credentials use Fernet symmetric encryption.
  • Credential isolation from LLMs — API tokens and connection strings are never sent to LLM providers. Only query results (metrics, logs, events) are sent for analysis.

Bring Your Own Model

Optionally use your own LLM API keys so you control which provider processes your data. You choose which model to use per-message via the model selector in the chat input.

LLM data flow

Understanding exactly what data goes where:
| Data type | Sent to LLM? | Details |
| --- | --- | --- |
| Investigation prompts | Yes | Your question is sent to the selected LLM for reasoning |
| Tool query results | Yes | Metrics, logs, and events from your tools are sent for analysis |
| API tokens / credentials | Never | Credentials are used locally to query your tools, never included in LLM requests |
| Connection strings / URLs | Never | Used locally for connector API calls only |
| Knowledge Base entries | Yes | Enabled entries are included as context for investigations |
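The data-flow rules above amount to one invariant: the LLM payload carries the prompt, tool query results, and enabled knowledge-base entries, while credentials stay in the connector layer. The sketch below illustrates that invariant; the field names and `build_llm_request` helper are assumptions, not Foggy's actual schema.

```python
# Illustrative sketch: assemble an LLM request that includes only the
# data types marked "Yes" in the table, and nothing marked "Never".
def build_llm_request(prompt, query_results, kb_entries):
    """Build the payload sent to the LLM provider for analysis."""
    # Only enabled knowledge-base entries are included as context.
    context = [e["text"] for e in kb_entries if e.get("enabled")]
    return {
        "prompt": prompt,            # the user's investigation question
        "tool_results": query_results,  # metrics, logs, events
        "knowledge_base": context,
        # Deliberately no credential or connection-string fields:
        # tokens and URLs are used locally by connectors only.
    }
```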
Questions about security? Contact us — we’re happy to discuss your specific compliance requirements.