
Overview

AgentOS is a Rust engine that brokers between Python skills (that write to the graph) and apps (that read from it). The engine is the matchmaker: apps ask for capabilities, the engine picks a skill that provides them, and neither side learns the other’s name.

The entire system is built from a small number of primitives. Learn these and the rest of the docs fall into place.

Three kinds of thing live in SQLite at ~/.agentos/data/agentos.db:

  • Nodes — bare identities. A node has an ID and timestamps; nothing else.
  • Edges — labeled, directional links between two nodes (tagged_with, replied_to, parent).
  • Values — keyed fields on a node or an edge (name = "Joe", done = true).

That’s the whole schema. There is no “tasks” table, no “messages” table, no type column on nodes. Semantic types are defined by shapes — YAML files loaded into the graph at engine startup. The engine is shape-aware but entity-agnostic: it can coerce priority (integer) without knowing what “priority” means.
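The three primitives can be sketched in a few lines of Python. This is an illustrative in-memory model, not the engine's actual Rust/SQLite implementation; all names here are invented:

```python
from dataclasses import dataclass

@dataclass
class Node:
    # A node is only an identity: an ID plus timestamps, nothing else.
    id: int
    created_at: float = 0.0

class Graph:
    def __init__(self):
        self.nodes = {}    # id -> Node
        self.edges = []    # (label, src_id, dst_id)
        self.values = {}   # (owner_id, key) -> value

    def add_node(self, nid):
        self.nodes[nid] = Node(nid)
        return nid

    def add_edge(self, label, src, dst):
        self.edges.append((label, src, dst))

    def set_value(self, owner, key, value):
        self.values[(owner, key)] = value

# A "task" is not a table or a type column -- it is just a node
# whose meaning lives entirely in its values and edges:
g = Graph()
task = g.add_node(1)
tag = g.add_node(2)
g.set_value(task, "name", "Joe")
g.set_value(task, "done", True)
g.add_edge("tagged_with", task, tag)
```

Shape-aware coercion then operates on values alone: the engine can turn "3" into the integer 3 because a shape declares the field's type, without knowing what the field is for.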

Read more: Memex & the graph · Identity & change

Data moves across four inter-process boundaries. Understanding these is how you reason about security, failure modes, and where code belongs.

1. MCP over STDIO

AI clients (Cursor, Claude Code, Claude Desktop) speak the Model Context Protocol over STDIO to agentos-mcp, a thin proxy. The proxy translates MCP calls to JSON-RPC and forwards them over the engine’s Unix socket.
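The proxy's job can be approximated as: wrap each call as a JSON-RPC 2.0 request and write it to the engine socket as one newline-delimited line. A minimal sketch — the method names and framing here are illustrative, not the real wire protocol:

```python
import json
import socket

def to_jsonrpc(method, params, req_id=1):
    # Wrap a call as a JSON-RPC 2.0 request, newline-framed.
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params}) + "\n"

def forward(line, sock_path):
    # Send one request line over the engine's Unix socket and
    # read back one newline-delimited response.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(line.encode())
        return json.loads(s.makefile().readline())
```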

2. Engine Unix socket (~/.agentos/engine.sock)


The engine daemon is a single Rust binary. One engine per machine, enforced by a flock on ~/.agentos/engine.lock. The socket accepts JSON-RPC from MCP, from the agentos CLI, and from the web bridge. Everything funnels through here.
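The one-engine-per-machine guarantee is just an exclusive lock on the lock file. A sketch of the mechanism in Python (the real engine does this in Rust):

```python
import fcntl
import os

def acquire_singleton(lock_path):
    # Hold an exclusive, non-blocking flock for the process lifetime.
    # A second caller gets None and should exit instead of starting.
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        return None   # another engine already holds the lock
    return fd         # keep fd open; the lock releases when the process dies
```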

3. Skill worker subprocess

Skills are Python. When a skill is called, the engine spawns a fresh Python subprocess, loads the skill module, and runs the SOP. The SDK in the worker (from agentos import http, secrets, sql) forwards requests back to the engine over a wire protocol — every outbound HTTP call, every credential lookup, every graph write returns to the engine for brokering.

Per-call subprocess, not a long-lived daemon. Clean exit, no shared state between skill runs.
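The per-call worker model is "spawn, run, exit". A sketch with an inline script standing in for a skill module — the real worker loads a module and speaks a wire protocol back to the engine:

```python
import json
import subprocess
import sys

def run_skill(payload):
    # Spawn a fresh Python process per call: no state survives between
    # skill runs, and a crash only kills this one call.
    script = ("import sys, json; "
              "d = json.load(sys.stdin); "
              "print(json.dumps({'echo': d}))")
    out = subprocess.run([sys.executable, "-c", script],
                         input=json.dumps(payload),
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)
```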

4. Web bridge (HTTP :3456)

Optional. For apps that want a browser UI, the web bridge serves /graph, /observer/stream (SSE), /user, and /shapes from a read-only SQLite connection. The engine retains write monopoly.
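A read-only connection can be enforced at the database layer with SQLite's URI mode, so a bridge bug cannot turn into a write. A sketch using a throwaway database (the real path is ~/.agentos/data/agentos.db):

```python
import os
import sqlite3
import tempfile

# Writer (the engine) creates the database.
path = os.path.join(tempfile.mkdtemp(), "agentos.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY)")
rw.commit()

# Reader (the bridge) opens it read-only via a URI; any write raises.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = ro.execute("SELECT count(*) FROM nodes").fetchone()
try:
    ro.execute("INSERT INTO nodes DEFAULT VALUES")
except sqlite3.OperationalError as e:
    print("write rejected:", e)   # the engine keeps the write monopoly
```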

flowchart LR
AI["AI client<br/>(Cursor, etc)"] -->|STDIO| MCP["agentos-mcp<br/>(proxy)"]
MCP -->|JSON-RPC<br/>Unix sock| ENG["Engine daemon<br/>(Rust, Tokio)"]
ENG -->|subprocess| PY["Python skill worker<br/>(reads, writes, http, auth)"]
ENG -->|SQLite pool| DB["~/.agentos/data/agentos.db<br/>(graph + encrypted creds)"]
ENG -->|HTTP :3456| BR["Web bridge"]
BR --> APPS["apps (React, TS)"]

Skills never name apps. Apps never name skills. They meet through capabilities.

A skill declares what it offers with a decorator:

@provides("llm")
def chat(messages): ...

An app asks for a capability by name:

await agentos.capability("llm").invoke({ messages })

The engine picks the skill. If you install five LLM skills, the engine resolves by user preference and freshness — no hardcoded provider order. If a skill is uninstalled, dependent apps keep working as long as some skill provides the capability.
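Resolution can be modeled as a registry keyed by capability name, with the broker choosing among providers. A toy model — the real engine weighs user preference and freshness, and none of these names are the actual SDK:

```python
registry = {}   # capability name -> list of (skill_name, handler)

def provides(cap):
    # Skill side: register a handler under a capability name.
    def wrap(fn):
        registry.setdefault(cap, []).append((fn.__name__, fn))
        return fn
    return wrap

def invoke(cap, payload, prefer=None):
    # App side: ask by capability name only; the broker picks the skill.
    providers = registry.get(cap, [])
    if not providers:
        raise LookupError(f"no skill provides {cap!r}")
    if prefer:
        # Sort the preferred provider to the front, if installed.
        providers = sorted(providers, key=lambda p: p[0] != prefer)
    return providers[0][1](payload)

@provides("llm")
def chat(messages):
    return f"chat saw {len(messages)} messages"

@provides("llm")
def other_chat(messages):
    return "other"
```

Uninstalling `chat` would just drop its registry entry; apps calling invoke("llm", ...) keep working as long as any provider remains.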

This is the decoupling law. Installing or uninstalling an app has zero impact on skills, and vice versa. The engine is the sole broker.

Security explains why this matters for trust and auth.

~/.agentos/
  data/agentos.db          The graph + encrypted credentials (one SQLite file)
  logs/                    engine.log, mcp.log, engine-io.jsonl
  engine.sock, mcp.sock    IPC endpoints
  engine.pid, engine.lock  Singleton guards

One directory. Portable. Back it up, copy it, nuke it. Local-first explains what is and isn’t committed to that directory.