Getting Started
Set up Omni, connect your first channel, and install extensions in under 10 minutes.
What is Omni?
Omni is a privacy-first AI agent that runs locally on your desktop. It connects to your preferred LLM provider (OpenAI, Anthropic, Google Gemini, Ollama, AWS Bedrock, or any custom endpoint) and gives your AI agent real capabilities through a secure extension system.
With Omni, your AI agent can browse the web, read and write files, send messages across 21+ communication channels, schedule tasks, analyze images, search your memory, and much more — all controlled by a fine-grained permission system that keeps you in charge.
Extensions are written in Rust, compiled to WebAssembly (WASM), and run in isolated sandboxes. Every extension published to the marketplace is scanned by our 4-layer antivirus pipeline before it reaches users.
Installation
Omni is a desktop application available for Windows, macOS, and Linux.
System Requirements
- Windows 10/11 (x64), macOS 12+ (Apple Silicon or Intel), or Linux (x64)
- 4 GB RAM minimum (8 GB recommended)
- 500 MB free disk space
- Internet connection for LLM API access (not required for Ollama local models)
Download & Install
# macOS (Apple Silicon)
curl -L https://github.com/omni-platform/omni/releases/latest/download/omni-macos-arm64.dmg -o omni.dmg
# macOS (Intel)
curl -L https://github.com/omni-platform/omni/releases/latest/download/omni-macos-x64.dmg -o omni.dmg
# Windows — download the .msi installer from the releases page
# Linux (Debian/Ubuntu)
curl -L https://github.com/omni-platform/omni/releases/latest/download/omni-linux-x64.deb -o omni.deb
sudo dpkg -i omni.deb
On macOS, open the .dmg file and drag Omni to your Applications folder. On Windows, run the .msi installer. On Linux, install the .deb package or extract the .tar.gz archive.
First Launch
When you first open Omni, you'll be guided through the initial setup.
1. Connect an LLM Provider
Go to Settings → Providers and add your API key for one of the supported providers:
# For local models with Ollama:
ollama pull llama3.1
# Then in Omni Settings → Providers, add Ollama
# URL: http://localhost:11434
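The Ollama entry above is just a standard local HTTP endpoint. As a rough illustration, this is the shape of a request a client sends to Ollama's /api/chat endpoint (the JSON field names are Ollama's public API; the helper function is illustrative, not Omni's internals):

```python
import json

# Build the kind of request body a client sends to Ollama's /api/chat
# endpoint. Field names ("model", "messages", "stream") are Ollama's;
# the function itself is just a sketch.
def build_ollama_chat_request(model: str, prompt: str) -> str:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(body)

payload = build_ollama_chat_request("llama3.1", "Hello!")
print(payload)
```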
2. Start Chatting
Once connected, type a message in the chat input and press Enter. The agent will respond using your configured LLM and any activated tools.
Connecting Channels
Channels let your Omni agent communicate through external messaging platforms. Go to Settings → Channels to configure.
Example: Connect Discord
1. Create a Discord bot at discord.com/developers
2. Copy the bot token
3. In Omni, go to Channels → Add Instance → Discord
4. Paste the bot token and click Connect
5. Invite the bot to your Discord server
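The final step, inviting the bot to a server, uses Discord's standard OAuth2 authorize URL. A small sketch of how that link is assembled (the URL format and `scope=bot` are Discord's; the client ID and permission value here are placeholders):

```python
from urllib.parse import urlencode

# Assemble a Discord bot invite link (OAuth2 authorize URL).
# client_id comes from your application page at discord.com/developers;
# permissions is Discord's permission bitfield (2048 = Send Messages).
def bot_invite_url(client_id: str, permissions: int = 2048) -> str:
    query = urlencode({
        "client_id": client_id,
        "scope": "bot",
        "permissions": permissions,
    })
    return f"https://discord.com/oauth2/authorize?{query}"

url = bot_invite_url("123456789012345678")
print(url)
```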
Each channel type has its own authentication method. Some channels (like WhatsApp Web and Signal) use QR code pairing, while others use bot tokens or API keys. You can create multiple instances of the same channel type.
Installing Extensions
Extensions add new capabilities to your AI agent. Browse the marketplace to find extensions for web scraping, file management, scheduling, and more.
1. Open the Extensions tab in Omni or visit the marketplace website
2. Browse or search for an extension
3. Click Install and review the permissions it requests
4. Approve the permissions you're comfortable with
5. The extension downloads, activates, and its tools become available
# Install via CLI:
omni ext install com.example.weather-tool
omni ext list
omni ext uninstall com.example.weather-tool
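Extension IDs follow a reverse-DNS convention, as in `com.example.weather-tool`. If you script around the CLI, a quick shape check can catch typos before calling `omni ext install` (hypothetical helper; the exact character rules Omni enforces are an assumption):

```python
import re

# Rough reverse-DNS check for IDs like "com.example.weather-tool".
# NOTE: the precise rules Omni enforces are an assumption here.
ID_PATTERN = re.compile(r"^[a-z][a-z0-9]*(\.[a-z][a-z0-9-]*)+$")

def looks_like_extension_id(ext_id: str) -> bool:
    return bool(ID_PATTERN.fullmatch(ext_id))

print(looks_like_extension_id("com.example.weather-tool"))  # True
print(looks_like_extension_id("weather tool"))              # False
```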
Basic Usage
Chat naturally
Ask your agent to do things in plain language. It will automatically choose the right tools.
Build visual workflows
Use the Flowchart Builder to create no-code automations with 19 node types. Flowcharts can call LLMs, make HTTP requests, send messages, and use all 29 native tools.
Channel routing
Use channel bindings to route incoming messages from specific channels to specific extensions.
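Conceptually, a binding is a lookup from a channel instance to the extension that should handle its messages; anything unbound falls through to the default agent. A minimal sketch (the data structure and instance names are illustrative, not Omni's internal schema):

```python
# Map channel instances to handling extensions (illustrative schema).
bindings = {
    "discord:support-server": "com.example.ticket-bot",
    "whatsapp:personal": "com.example.daily-digest",
}

def route(channel_instance: str, default: str = "agent") -> str:
    """Return the extension bound to a channel instance, or the
    default agent when no binding exists."""
    return bindings.get(channel_instance, default)

print(route("discord:support-server"))  # com.example.ticket-bot
print(route("telegram:alerts"))         # agent (unbound -> default)
```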
Permission prompts
When a tool needs a permission you haven't pre-approved, Omni will prompt you. Allow once, always, or deny.
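The three responses have different lifetimes: "always" persists across calls, "once" grants a single use, and "deny" blocks the call. A sketch of that decision logic (illustrative only, not Omni's implementation):

```python
# Permission store sketch: "always" grants persist, "once" grants are
# consumed on first use, anything else is denied. Illustrative only.
class PermissionStore:
    def __init__(self):
        self.always: set[str] = set()
        self.once: set[str] = set()

    def grant(self, perm: str, scope: str) -> None:
        if scope == "always":
            self.always.add(perm)
        elif scope == "once":
            self.once.add(perm)

    def check(self, perm: str) -> bool:
        if perm in self.always:
            return True
        if perm in self.once:
            self.once.discard(perm)  # single-use grant is consumed
            return True
        return False

store = PermissionStore()
store.grant("fs:read", "once")
print(store.check("fs:read"))  # True (consumes the grant)
print(store.check("fs:read"))  # False (already used)
```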
Multiple providers
Configure multiple LLM providers and Omni will automatically rotate between them, retrying with exponential backoff when one fails.
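A rough sketch of what rotation with exponential backoff means in practice (the retry policy, delays, and provider callables here are illustrative assumptions, not Omni's actual scheduler):

```python
import time

def call_with_rotation(providers, prompt, max_rounds=3, base_delay=1.0):
    """Try each provider in turn; after a full failed round, wait
    base_delay * 2**round before the next round. Illustrative only."""
    for round_num in range(max_rounds):
        for provider in providers:
            try:
                return provider(prompt)
            except Exception:
                continue  # this provider failed; try the next one
        time.sleep(base_delay * (2 ** round_num))  # 1s, 2s, 4s, ...
    raise RuntimeError("all providers failed")

# Fake providers for demonstration: the first always fails,
# the second succeeds.
def flaky(prompt):
    raise ConnectionError("rate limited")

def healthy(prompt):
    return f"echo: {prompt}"

print(call_with_rotation([flaky, healthy], "hello", base_delay=0.01))
```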