LOCAL-FIRST · MCP NATIVE · OPEN SOURCE

Knowledge infrastructure
for AI and humans

Built on Teide—an ultrafast columnar engine. Teidelum syncs your work tools, indexes everything into searchable and queryable storage, and serves it to AI agents via MCP and applications via REST API. Single binary. Zero config. Data never leaves your machine.

Single Binary No Docker, no database server, no config files
Local-First Data never leaves your machine. Zero cloud dependency
Open Source MIT licensed. Fork it, extend it, self-host it
Zero Config Set a data directory and start. That's it

AI agents deserve
better than file scanning

Most "knowledge base" setups point an LLM at a folder of files. No ranking. No structure. No relationships. Every query re-reads everything.

WITH TEIDELUM
  • Indexed BM25 search with fuzzy matching
  • Ranked results with source attribution
  • SQL analytics across all structured data
  • Graph traversal over entity relationships
  • MCP for any AI agent, REST for any app
  • Stays fast regardless of data volume
WITHOUT INFRASTRUCTURE
  • LLM scans raw files on every query
  • No relevance ranking
  • Can't aggregate or join data
  • No entity relationships
  • Locked to one tool, one LLM
  • Slows down as knowledge grows

Six capabilities in one binary

Everything your AI agent and your applications need to work with organizational knowledge.

Indexed Search

BM25 ranking with fuzzy matching and highlighted snippets. Your AI agent asks a question and gets the most relevant results—not a token-heavy dump of everything.
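Ranked retrieval is the difference in a nutshell. The sketch below is the standard BM25 scoring formula over a toy corpus, purely to illustrate why ranking beats raw file scanning; it is not Teidelum's internal implementation, and the document names are invented.

```python
import math

# Toy corpus standing in for synced documents. Illustration of the
# standard BM25 formula (default k1/b), not Teidelum's internals.
docs = {
    "notion/roadmap": "q3 roadmap search latency goals and sync milestones",
    "zulip/outage": "postgres outage postmortem and sync retry fix",
    "notion/onboarding": "new hire onboarding checklist",
}

def bm25_scores(query, docs, k1=1.2, b=0.75):
    tokenized = {d: text.split() for d, text in docs.items()}
    n = len(docs)
    avgdl = sum(len(t) for t in tokenized.values()) / n
    scores = {}
    for doc_id, terms in tokenized.items():
        score = 0.0
        for q in query.split():
            df = sum(1 for t in tokenized.values() if q in t)
            if df == 0:
                continue  # term appears nowhere: contributes nothing
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            tf = terms.count(q)
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(terms) / avgdl)
            )
        scores[doc_id] = score
    return scores

ranked = sorted(bm25_scores("sync outage", docs).items(), key=lambda kv: -kv[1])
print(ranked[0][0])  # the postmortem ranks first: it matches both terms
```

The agent receives the top-ranked hit with its source, instead of every file's full text.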

SQL Analytics

Powered by Teide—an ultrafast columnar engine. Filter, group, sort, and join structured data from any synced source.

Graph Traversal

FK relationships create a navigable graph. Ask "what's connected to this customer?" and get answers without teaching your LLM a custom schema.
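The idea can be sketched as a breadth-first walk over FK edges. The schema and edge list below are invented for illustration; they show the shape of a "what's connected to this customer?" answer, not Teidelum's actual traversal code.

```python
from collections import deque

# Hypothetical FK edges discovered from a synced schema:
# (table, row id) -> related (table, row id). Invented for illustration.
edges = {
    ("customers", 1): [("orders", 10), ("tickets", 7)],
    ("orders", 10): [("invoices", 99)],
    ("tickets", 7): [],
    ("invoices", 99): [],
}

def connected_to(start):
    """Breadth-first walk: every row reachable from one starting row."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(edges.get(node, []))
    return seen - {start}

print(sorted(connected_to(("customers", 1))))
```

Because the edges come from foreign keys already in the data, the LLM never needs the schema explained to it.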

Universal API

MCP over stdio for AI agents. REST over HTTP for apps and scripts. Same data, same tools, two transports. Connect Claude, GPT, your dashboard, or a cron job.
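"Same data, same tools, two transports" can be made concrete. The sketch below frames one search for each transport: `tools/call` is the standard MCP JSON-RPC method, and the `search` tool name appears elsewhere on this page, but the REST path and exact payload shapes are assumptions, not Teidelum's documented API.

```python
import json

# One search, framed for each transport. REST path and payload shapes
# are illustrative assumptions, not Teidelum's documented API.
arguments = {"query": "onboarding checklist", "limit": 5}

# MCP: a JSON-RPC 2.0 tools/call message sent over stdio.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search", "arguments": arguments},
}

# REST: the same arguments as an HTTP POST body (hypothetical endpoint).
rest_request = {"method": "POST", "path": "/search", "body": arguments}

# Both transports carry identical arguments to the same underlying tool.
print(json.dumps(mcp_request, indent=2))
```

A Claude session and a cron job end up invoking the same tool with the same arguments; only the envelope differs.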

Work Tool Sync

Pull from Notion pages, Zulip messages, and live databases. Incremental sync transfers only changed records on each run. Your knowledge base stays current.

Single Binary

One Rust binary with the Teide columnar engine embedded. No Docker. No database server. No config files. Data never touches a third-party cloud.

Sync, index, serve

Data flows from your tools through dual storage engines into 11 tools exposed over both MCP and HTTP.

SOURCES: Notion · Zulip · Databases
  → TEIDELUM: search · sql · graph · sync (full-text + columnar + catalog)
  → TRANSPORTS: MCP stdio · MCP HTTP · REST API
  → CONSUMERS: AI Agents · Applications · Scripts

Synced content splits into a full-text index for search and the Teide columnar engine for ultrafast SQL analytics. The query router dispatches automatically. Applications push data in via REST or MCP write tools.
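The dispatch described above can be reduced to a few lines. This is a toy illustration of the routing idea only; which engine actually serves each operation is an assumption, not a statement about Teidelum's router.

```python
# Toy sketch of the dispatch idea: route each request to the full-text
# index or the columnar engine by operation type. Purely illustrative;
# the real router's rules are not documented here.
def route(request: dict) -> str:
    op = request.get("op")
    if op == "search":          # ranked full-text lookup
        return "full-text index"
    if op in ("sql", "graph"):  # scans, joins, FK traversal
        return "columnar engine"
    raise ValueError(f"unknown operation: {op}")

print(route({"op": "search", "query": "roadmap"}))
print(route({"op": "sql", "query": "SELECT count(*) FROM orders"}))
```

The caller never names a storage engine; it names an operation, and the router picks the backend.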

Connect and query

Start the server, search content, run SQL, and traverse relationships.


Same operations available as MCP tools—AI agents call search, sql, graph, create_table, and 7 more directly.

Built for teams that
use AI seriously

AI Agent Infrastructure

Give MCP-enabled agents 11 tools for reading, writing, and traversing knowledge. They create tables, index documents, and discover relationships autonomously.

Automation & Pipelines

Scripts and cron jobs push data in via REST, query it with SQL, and pull results out. Automate knowledge workflows without any SDK—just curl and JSON.
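A nightly job's request bodies might look like the sketch below. The table name, endpoint paths, and payload shapes are assumptions for illustration, not Teidelum's documented REST API; the point is that plain JSON is the whole interface.

```python
import json

# Payloads a nightly cron job might send. Endpoint paths and shapes are
# illustrative assumptions, not Teidelum's documented REST API.
push = {
    "table": "deploys",
    "rows": [{"service": "api", "date": "2024-06-01", "ok": True}],
}
query = {"sql": "SELECT service, count(*) FROM deploys GROUP BY service"}

# e.g.  curl -X POST localhost:8080/... -d @push.json
#       curl -X POST localhost:8080/... -d @query.json
for name, payload in [("push.json", push), ("query.json", query)]:
    print(name, json.dumps(payload))
```

No client library, no codegen: anything that can emit JSON over HTTP can feed and query the knowledge base.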

Knowledge Consolidation

Notion pages, Zulip threads, database tables—fragmented across tools. Teidelum syncs it all into one index. Search, query, and connect everything.

Start building
in 30 seconds

$ cargo install teidelum
$ teidelum --port 8080
 HTTP server on http://127.0.0.1:8080
 MCP stdio ready · 11 tools registered