LLMRing
One interface to run them all …
LLMRing is an open-source, provider-agnostic Python library for talking to LLMs. It lets you manage which model you use for any task with aliases, use a single interface for all providers, and track usage and cost via an optional server. Your aliases are stored in a version-controlled llmring.lock file, making your model choices explicit, easy to change, and easy to share.
Your API calls go directly to OpenAI, Anthropic, Google, or Ollama. Each call's metadata can optionally be logged to a server you manage.
Components
- Library (llmring) - Python package for unified LLM access with built-in MCP support
- Server (llmring-server) - Optional backend for usage tracking, receipts, and MCP persistence
- Registry - Versioned, human-validated database of model capabilities and pricing
Quick Start
Install and create a basic lockfile:
uv add llmring
llmring lock init
This creates llmring.lock with sensible defaults and pinned registry versions. For intelligent, conversational configuration that analyzes the live registry and recommends optimal aliases (e.g., fast, balanced, deep), use:
llmring lock chat
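Once the lockfile exists, a quick smoke test can confirm everything is wired up. This is a sketch, assuming your lockfile defines a balanced alias (as in the examples below) and the matching provider key is exported:
import asyncio
from llmring import LLMRing, Message

async def main():
    ring = LLMRing()  # loads aliases from llmring.lock
    response = await ring.chat("balanced", messages=[
        Message(role="user", content="Say hello in one sentence.")
    ])
    print(response.content)

asyncio.run(main())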
Lockfile + Aliases
Your configuration lives in llmring.lock, a version-controlled file that makes your AI stack reproducible:
# llmring.lock (excerpt)
# Registry version pinning (optional)
[registry_versions]
openai = 142
anthropic = 89
google = 27
# Default bindings
[[bindings]]
alias = "summarizer"
models = ["anthropic:claude-3-haiku"]
[[bindings]]
alias = "pdf_converter"
models = ["openai:gpt-4o-mini"]
[[bindings]]
alias = "balanced"
models = ["anthropic:claude-3-5-sonnet", "openai:gpt-4o"] # With fallback
Use aliases in your code:
from llmring import LLMRing, Message
ring = LLMRing() # Loads from llmring.lock
response = await ring.chat("summarizer", messages=[
    Message(role="user", content="Summarize this document...")
])
Unified Structured Output
LLMRing provides one interface for structured output across all providers. Use a JSON Schema with response_format, and LLMRing adapts it per provider:
from llmring import LLMRing
from llmring.schemas import LLMRequest, Message
ring = LLMRing()
request = LLMRequest(
    model="balanced",
    messages=[Message(role="user", content="Generate a person")],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"}
                },
                "required": ["name", "age"]
            }
        },
        "strict": True
    }
)
response = await ring.chat(request)
print(response.content) # valid JSON
print(response.parsed) # dict
How it works per provider:
- OpenAI: Native JSON Schema strict mode
- Anthropic: Tool-based extraction with validation
- Google Gemini: FunctionDeclaration with schema mapping
- Ollama: Best-effort JSON with automatic repair
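Whichever provider serves the request, the result has the same shape on your side: response.content holds the JSON text and response.parsed the dict. A small user-side sanity check on the example above (ordinary application code, not part of LLMRing's API):
person = response.parsed  # dict produced from the model's JSON output
assert isinstance(person, dict)
assert {"name", "age"} <= person.keys()  # fields marked required in the schema
print(f"{person['name']} is {person['age']} years old")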
CLI Commands
All of this configuration lives in your lockfile:
# Create basic lockfile with defaults
llmring lock init
# Intelligent conversational configuration (recommended)
llmring lock chat
# Bind aliases locally (escape hatch)
llmring bind pdf_converter openai:gpt-4o-mini
# Validate against registry
llmring lock validate
# Update registry versions
llmring lock bump-registry
Two Modes of Operation
1. Lockfile-Only (No Backend)
Works completely standalone with just your llmring.lock file. Safe, explicit configuration per codebase. No cost tracking, no logging, no MCP persistence.
2. With Server (Self-Hosted)
Add receipts, usage tracking, and MCP tool/resource persistence by connecting to your own llmring-server instance.
See Server Docs for endpoints, headers, and deployment.
The Open Registry
Model information comes from versioned, per-provider registries:
- Current: https://llmring.github.io/registry/openai/models.json
- Versioned: https://llmring.github.io/registry/openai/v/142/models.json
Each provider’s registry is versioned independently. Your lockfile records these versions to track drift:
[registry_versions]
openai = 142 # Registry snapshot when you last updated
anthropic = 89 # What the registry knew at version 89
Note: These versions track what the registry knew at that point, not the actual model behavior. Providers can change prices and limits at any time; the registry helps you detect when things have drifted from your expectations.
See Registry Docs for schema and curation workflow.
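The registry itself is plain JSON served over HTTPS, so you can inspect a snapshot directly. A minimal sketch; the exact field layout is described in the Registry Docs, so this only fetches the current OpenAI snapshot and prints the beginning of it:
import json
import urllib.request

# Current snapshot; use .../openai/v/142/models.json for a pinned version
url = "https://llmring.github.io/registry/openai/models.json"
with urllib.request.urlopen(url) as resp:
    registry = json.load(resp)

print(json.dumps(registry, indent=2)[:500])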
Profiles for Different Environments
Support multiple configurations in one lockfile:
# llmring.lock (profiles excerpt)
# Production: High quality with fallbacks
[profiles.prod]
[[profiles.prod.bindings]]
alias = "summarizer"
models = ["anthropic:claude-3-haiku"]
[[profiles.prod.bindings]]
alias = "analyzer"
models = ["openai:gpt-4", "anthropic:claude-3-5-sonnet"]
# Development: Cheaper models
[profiles.dev]
[[profiles.dev.bindings]]
alias = "summarizer"
models = ["openai:gpt-4o-mini"]
[[profiles.dev.bindings]]
alias = "analyzer"
models = ["openai:gpt-4o-mini"]
Switch profiles via environment:
export LLMRING_PROFILE=prod
python app.py
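You can also select the profile from Python before constructing the client; a minimal sketch, assuming LLMRING_PROFILE is read when LLMRing() loads the lockfile:
import os
os.environ["LLMRING_PROFILE"] = "dev"  # assumption: read when the lockfile is loaded

from llmring import LLMRing
ring = LLMRing()  # resolves aliases from the dev profile's bindings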
CLI Workflow
Core lockfile management:
# Create basic lockfile with defaults
llmring lock init
# Intelligent conversational configuration (recommended)
llmring lock chat
# Bind aliases (updates lockfile)
llmring bind summarizer anthropic:claude-3-haiku
# List aliases from lockfile
llmring aliases
# Validate against registry
llmring lock validate
# Update registry versions
llmring lock bump-registry
MCP operations (requires backend):
# Connect to any MCP server for interactive chat
llmring mcp chat --server "stdio://python -m your_mcp_server"
# List registered MCP servers
llmring mcp servers list
# Register new MCP server
llmring mcp register calculator http://calculator-mcp:8080
# List available tools
llmring mcp tools
# Execute a tool
llmring mcp execute calculator.add '{"a": 5, "b": 3}'
With a server connected:
# View usage stats (requires server)
llmring stats
# Export receipts (requires server)
llmring export
Environment Variables
# LLM provider keys (required)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
# Gemini supports any of these
export GEMINI_API_KEY="..." # or
export GOOGLE_API_KEY="..." # or
export GOOGLE_GEMINI_API_KEY="..."
# Optional profile selection
export LLMRING_PROFILE="prod"
# Optional server connection
export LLMRING_API_URL="http://localhost:8000"
Why LLMRing
- Lockfile: Version control your AI configuration with reproducible deployments
- Task-oriented: Think in terms of tasks, not model IDs
- No vendor lock-in: Works completely without any backend
- Drift detection: Track when models change from your expectations
- MCP Integration: Full Model Context Protocol support for tool orchestration
- Flexible: Use standalone or with optional self-hosted server for receipts and tracking
Source Code
Everything is open source on GitHub:
- llmring - Python package and CLI
- llmring-server - Optional API server
- registry - Model registry source
License
MIT License. Use it however you want.
One registry to find them
One API to track them all
And with aliases bind them