Configuration Reference
Complete reference for the omni.toml configuration file.
File Location
Omni reads its configuration from a TOML file. The file is created automatically on first launch with sensible defaults. You can edit it manually or through the Settings panel in the UI.
- Linux: `~/.config/omni/omni.toml`
- macOS: `~/Library/Application Support/Omni/omni.toml`
- Windows: `%APPDATA%\Omni\omni.toml`
All sections are optional. Omni uses defaults for any missing values.
[general]
Top-level application settings like logging and data storage.
```toml
[general]
telemetry = false
log_level = "info"
max_history = 1000
```
[providers.<name>]
Configure one or more LLM providers. Each provider is identified by a unique key (e.g., providers.openai). When multiple providers are configured, Omni automatically falls back between them.
API keys are stored securely in the OS keychain (or via environment variables), not in the config file.
```toml
[providers.openai]
provider_type = "openai"
default_model = "gpt-4o"
max_tokens = 4096
temperature = 0.7

[providers.anthropic]
provider_type = "anthropic"
default_model = "claude-opus-4-6"
temperature = 0.8

[providers.ollama]
provider_type = "ollama"
default_model = "llama3.1"
endpoint = "http://localhost:11434"
```
[agent]
Controls the AI agent's behavior — the system prompt, how many tool-use iterations it can perform, and the overall timeout.
```toml
[agent]
system_prompt = "You are a helpful assistant."
max_iterations = 25
timeout_secs = 120
```
[guardian]
Configure the Guardian anti-injection pipeline that scans all inputs and outputs for prompt injection attacks.
```toml
[guardian]
enabled = true
sensitivity = "balanced"
allow_override = true
```
[permissions]
Default behavior for the permission system. Individual extensions can request specific permissions through their manifests.
```toml
[permissions]
default_policy = "deny"
trust_verified = false
audit_enabled = true
```
[ui]
Customize the appearance of the desktop application. All settings here can also be changed from Settings → Appearance in the UI.
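For example (the same keys that appear in the full example at the end of this page):

```toml
[ui]
theme = "system"
accent_color = "#3b82f6"
message_style = "bubbles"
```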
[channels]
Pre-configure channel instances and bindings. Instances define which messaging platforms to connect to; bindings route incoming messages to specific extensions.
Channel Instances
Each instance is keyed by a compound ID in the format {type}:{instance_id}.
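Splitting such a compound ID is straightforward; a small sketch in Python (an illustration of the key format, not Omni's actual code):

```python
def parse_instance_key(key: str) -> tuple[str, str]:
    """Split a compound ID like "discord:production" into (channel_type, instance_id)."""
    channel_type, sep, instance_id = key.partition(":")
    if not sep or not channel_type or not instance_id:
        raise ValueError(f"invalid instance key: {key!r}")
    return channel_type, instance_id

print(parse_instance_key("discord:production"))  # ('discord', 'production')
```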
Channel Bindings
Bindings route incoming messages from a channel instance to an extension. Use glob patterns in peer_filter and group_filter to match specific senders or groups.
```toml
[channels.instances."discord:production"]
channel_type = "discord"
display_name = "Main Server"
auto_connect = true

[channels.instances."telegram:alerts"]
channel_type = "telegram"
display_name = "Alert Bot"

[[channels.bindings]]
channel_instance = "discord:production"
extension_id = "com.example.support-bot"
peer_filter = "*"
group_filter = "support-*"
priority = 100
```
[marketplace]
Marketplace connection settings.
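This page does not document individual marketplace keys. Purely as an illustrative placeholder, a section might look like the following (the key names registry_url and auto_update are hypothetical and not part of Omni's documented schema):

```toml
[marketplace]
# Hypothetical keys for illustration only.
registry_url = "https://example.com/registry"
auto_update = false
```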
Full Example
A complete configuration file combining the sections above with typical values.
```toml
[general]
telemetry = false
log_level = "info"
max_history = 1000

[providers.openai]
provider_type = "openai"
default_model = "gpt-4o"
max_tokens = 4096
temperature = 0.7

[providers.anthropic]
provider_type = "anthropic"
default_model = "claude-opus-4-6"

[providers.ollama]
provider_type = "ollama"
default_model = "llama3.1"
endpoint = "http://localhost:11434"

[agent]
system_prompt = "You are a helpful assistant."
max_iterations = 25
timeout_secs = 120

[guardian]
enabled = true
sensitivity = "balanced"
allow_override = true

[permissions]
default_policy = "deny"
trust_verified = false
audit_enabled = true

[ui]
theme = "system"
accent_color = "#3b82f6"
message_style = "bubbles"

[channels.instances."discord:production"]
channel_type = "discord"
display_name = "Main Server"
auto_connect = true

[[channels.bindings]]
channel_instance = "discord:production"
extension_id = "com.example.bot"
priority = 100
```