# Configuration Reference

Complete reference for the omni.toml configuration file.

## File Location

Omni reads its configuration from a TOML file. The file is created automatically on first launch with sensible defaults. You can edit it manually or through the Settings panel in the UI.

| Platform | Path |
| --- | --- |
| Linux | `~/.config/omni/omni.toml` |
| macOS | `~/Library/Application Support/Omni/omni.toml` |
| Windows | `%APPDATA%\Omni\omni.toml` |

All sections are optional. Omni uses defaults for any missing values.

## `[general]`

Top-level application settings like logging and data storage.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `data_dir` | string? | OS default | Override the data directory path. Uses the OS-specific default if not set. |
| `telemetry` | bool | `false` | Enable anonymous usage telemetry. |
| `log_level` | string | `"info"` | Log verbosity: `trace`, `debug`, `info`, `warn`, or `error`. |
| `max_history` | integer | `1000` | Maximum number of messages to keep in session history. |
omni.toml

```toml
[general]
telemetry = false
log_level = "info"
max_history = 1000
```
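The `data_dir` key from the table above can be set the same way. A minimal sketch — the path below is only an illustrative placeholder, not a recommended location:

```toml
[general]
# Store Omni's data on a secondary drive (example path)
data_dir = "/mnt/storage/omni-data"
```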

## `[providers.<name>]`

Configure one or more LLM providers. Each provider has a unique key (e.g., `providers.openai`). You can configure multiple providers, and Omni will automatically fall back between them.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `provider_type` | string | required | Provider type: `openai`, `anthropic`, `ollama`, `gemini`, `bedrock`, or `custom`. |
| `default_model` | string? | — | Default model to use (e.g., `"gpt-4o"`, `"claude-opus-4-6"`). |
| `endpoint` | string? | — | Custom API endpoint URL. Required for `ollama` and `custom` providers. |
| `max_tokens` | integer? | — | Maximum tokens per response. |
| `temperature` | float? | — | Sampling temperature (0.0–2.0). Lower values are more deterministic. |
| `enabled` | bool | `true` | Whether this provider is active. |

API keys are stored securely in the OS keychain (or via environment variables), not in the config file.

omni.toml

```toml
[providers.openai]
provider_type = "openai"
default_model = "gpt-4o"
max_tokens = 4096
temperature = 0.7

[providers.anthropic]
provider_type = "anthropic"
default_model = "claude-opus-4-6"
temperature = 0.8

[providers.ollama]
provider_type = "ollama"
default_model = "llama3.1"
endpoint = "http://localhost:11434"
```

## `[agent]`

Controls the AI agent's behavior — the system prompt, how many tool-use iterations it can perform, and the overall timeout.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `system_prompt` | string? | — | Custom system prompt prepended to every conversation. |
| `max_iterations` | integer | `25` | Max tool-use iterations per turn before the agent stops. |
| `timeout_secs` | integer | `120` | Maximum seconds for a single agent turn. |
omni.toml

```toml
[agent]
system_prompt = "You are a helpful assistant."
max_iterations = 25
timeout_secs = 120
```

## `[guardian]`

Configure the Guardian anti-injection pipeline that scans all inputs and outputs for prompt injection attacks.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `true` | Enable or disable the Guardian pipeline. |
| `sensitivity` | string | `"balanced"` | Detection sensitivity: `strict`, `balanced`, or `permissive`. |
| `custom_signatures` | string? | — | Path to a custom regex signatures JSON file. |
| `allow_override` | bool | `true` | Allow users to override blocked content from the UI. |
omni.toml

```toml
[guardian]
enabled = true
sensitivity = "balanced"
allow_override = true
```

## `[permissions]`

Default behavior for the permission system. Individual extensions can request specific permissions through their manifests.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `default_policy` | string | `"deny"` | What happens when no rule matches: `deny` (block silently) or `prompt` (ask the user). |
| `trust_verified` | bool | `false` | Auto-approve permissions for marketplace-verified extensions. |
| `audit_enabled` | bool | `true` | Log all permission decisions to the audit trail. |
omni.toml

```toml
[permissions]
default_policy = "deny"
trust_verified = false
audit_enabled = true
```

## `[ui]`

Customize the appearance of the desktop application. All settings here can also be changed from Settings → Appearance in the UI.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `theme` | string | `"system"` | Color theme: `light`, `dark`, or `system`. |
| `accent_color` | string | `"#3b82f6"` | Primary accent color as a hex value. |
| `font_family` | string | `"system"` | Font family: `system`, `Inter`, `JetBrains Mono`, `Fira Code`, or `Source Sans 3`. |
| `font_size` | integer | `14` | Base font size in pixels. |
| `line_height` | string | `"normal"` | Line spacing: `normal`, `relaxed`, or `loose`. |
| `ui_density` | string | `"comfortable"` | UI spacing: `compact`, `comfortable`, or `spacious`. |
| `sidebar_width` | integer | `250` | Sidebar width in pixels. |
| `message_style` | string | `"bubbles"` | Chat message layout: `bubbles`, `flat`, or `compact`. |
| `max_message_width` | integer | `75` | Maximum message width as a percentage of the chat area. |
| `code_theme` | string | `"dark"` | Code block theme: `light`, `dark`, or `auto`. |
| `show_timestamps` | bool | `false` | Show timestamps on chat messages. |
| `border_radius` | integer | `8` | Corner radius in pixels for UI elements. |
| `reduce_animations` | bool | `false` | Disable animations for accessibility. |
| `high_contrast` | bool | `false` | Increase text contrast for readability. |
| `auto_update` | bool | `true` | Automatically check for and install updates. |
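Like the other sections, `[ui]` can be edited directly in the config file. A sketch using a few of the keys above; the values chosen here are merely examples drawn from the allowed options, not recommendations:

```toml
[ui]
theme = "dark"              # light, dark, or system
accent_color = "#3b82f6"    # hex color
font_size = 14              # pixels
ui_density = "comfortable"  # compact, comfortable, or spacious
show_timestamps = true
```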

## `[channels]`

Pre-configure channel instances and bindings. Instances define which messaging platforms to connect to; bindings route incoming messages to specific extensions.

### Channel Instances

Each instance is keyed by a compound ID in the format `{type}:{instance_id}`.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `channel_type` | string | required | Channel type: `discord`, `telegram`, `slack`, `whatsapp_web`, etc. |
| `display_name` | string? | — | Human-readable label shown in the UI. |
| `auto_connect` | bool | `false` | Connect automatically on startup. |

### Channel Bindings

Bindings route incoming messages from a channel instance to an extension. Use glob patterns in `peer_filter` and `group_filter` to match specific senders or groups.

omni.toml

```toml
[channels.instances."discord:production"]
channel_type = "discord"
display_name = "Main Server"
auto_connect = true

[channels.instances."telegram:alerts"]
channel_type = "telegram"
display_name = "Alert Bot"

[[channels.bindings]]
channel_instance = "discord:production"
extension_id = "com.example.support-bot"
peer_filter = "*"
group_filter = "support-*"
priority = 100
```

## `[marketplace]`

Marketplace connection settings.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `api_url` | string | `"https://omniapp.org/api/v1/marketplace"` | Marketplace API endpoint. Override for self-hosted instances. |
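For a self-hosted instance, point `api_url` at your own deployment. A minimal sketch — the hostname below is a hypothetical placeholder:

```toml
[marketplace]
# Hypothetical self-hosted marketplace endpoint
api_url = "https://marketplace.example.com/api/v1/marketplace"
```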

## Full Example

A complete configuration file showing all sections with typical values.

omni.toml

```toml
[general]
telemetry = false
log_level = "info"
max_history = 1000

[providers.openai]
provider_type = "openai"
default_model = "gpt-4o"
max_tokens = 4096
temperature = 0.7

[providers.anthropic]
provider_type = "anthropic"
default_model = "claude-opus-4-6"

[providers.ollama]
provider_type = "ollama"
default_model = "llama3.1"
endpoint = "http://localhost:11434"

[agent]
system_prompt = "You are a helpful assistant."
max_iterations = 25
timeout_secs = 120

[guardian]
enabled = true
sensitivity = "balanced"
allow_override = true

[permissions]
default_policy = "deny"
trust_verified = false
audit_enabled = true

[ui]
theme = "system"
accent_color = "#3b82f6"
message_style = "bubbles"

[channels.instances."discord:production"]
channel_type = "discord"
display_name = "Main Server"
auto_connect = true

[[channels.bindings]]
channel_instance = "discord:production"
extension_id = "com.example.bot"
priority = 100
```
