LLMRing

One interface to run them all

LLMRing is an open-source, provider-agnostic Python library for talking to LLMs. It lets you manage which model you use for each task, sync configuration across services, and track usage and costs. Your aliases live in a version-controlled llmring.lock file, making your model choices explicit and shareable.

Your API calls go directly to OpenAI, Anthropic, Google, or Ollama. We never see your prompts or responses.

Quick Start

Install and initialize with sensible defaults:

uv add llmring
llmring lock init

This creates llmring.lock with auto-suggested aliases based on your available API keys.

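The key detection behind those defaults can be sketched in a few lines. This is an illustrative guess at the mechanism, not llmring's actual logic; the Google environment variable name in particular is an assumption.

```python
import os

# Hypothetical mapping of provider -> API key variable; llmring's real
# detection logic and variable names may differ.
PROVIDER_ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",  # assumed variable name
}

def available_providers(environ=os.environ):
    """Return the providers whose API key is set in the environment."""
    return [name for name, var in PROVIDER_ENV_KEYS.items() if environ.get(var)]

print(available_providers({"OPENAI_API_KEY": "sk-test"}))  # ['openai']
```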
Core Concept: Lockfile + Aliases

Your configuration lives in llmring.lock, a version-controlled file that makes your AI stack reproducible:

# llmring.lock
[registry]
openai = 142
anthropic = 89
google = 27

[aliases]
summarizer = "anthropic:claude-3-haiku"
pdf_converter = "openai:gpt-4o-mini"
deep_analysis = "anthropic:claude-3-opus"

Use aliases in your code:

from llmring import LLMRing, Message

ring = LLMRing()  # Loads aliases from llmring.lock

# chat() is a coroutine: await it inside async code,
# or wrap the call with asyncio.run() in a script.
response = await ring.chat("summarizer", messages=[
    Message(role="user", content="Summarize this document...")
])

Fully Functional Without Backend

The core workflow runs entirely locally; no server is required:

# Initialize lockfile with defaults
llmring lock init

# Bind aliases locally
llmring bind pdf_converter openai:gpt-4o-mini

# Validate against registry
llmring lock validate

# Update registry versions
llmring lock bump-registry

All configuration is in your lockfile. No accounts, no tracking, just reproducible AI.

Optional Server Features

Add a server for receipts and team sync:

# With server for receipts and logging
ring = LLMRing(api_url="http://localhost:8000")

# Sync aliases with team
llmring push  # Upload bindings
llmring pull  # Download bindings

The server adds receipts, request logging, and team alias sync.

See Server Docs for endpoints, headers, and deployment.

The Registry

Model information comes from versioned, per-provider registries.

Each provider’s registry is versioned independently. Your lockfile records these versions to track drift:

[registry]
openai = 142      # Registry snapshot when you last updated
anthropic = 89    # What the registry knew at version 89

Note: These versions track what the registry knew at that point in time, not actual model behavior. Providers can change prices and limits at any moment; the registry helps you detect when reality has drifted from your pinned expectations.
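The drift check amounts to comparing pinned registry versions against the latest ones. A minimal sketch of the idea (the function name is hypothetical; `llmring lock validate` presumably performs a richer comparison):

```python
def registry_drift(pinned, current):
    """Return providers whose registry version moved past the pinned one,
    mapped to (pinned_version, current_version)."""
    return {
        provider: (version, current[provider])
        for provider, version in pinned.items()
        if provider in current and current[provider] > version
    }

pinned = {"openai": 142, "anthropic": 89}
current = {"openai": 145, "anthropic": 89}
print(registry_drift(pinned, current))  # {'openai': (142, 145)}
```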

See Registry Docs for schema and curation workflow.

Profiles for Different Environments

Support multiple configurations in one lockfile:

# llmring.lock
[profiles.prod.aliases]
summarizer = "anthropic:claude-3-haiku"
analyzer = "openai:gpt-4"

[profiles.dev.aliases]
summarizer = "openai:gpt-3.5-turbo"
analyzer = "openai:gpt-3.5-turbo"

Switch profiles via environment:

export LLMRING_PROFILE=prod
python app.py

CLI Workflow

Core lockfile management:

# Initialize with smart defaults
llmring lock init

# Bind aliases (updates lockfile)
llmring bind summarizer anthropic:claude-3-haiku

# List aliases from lockfile
llmring aliases

# Validate lockfile against registry
llmring lock validate

# Update registry versions
llmring lock bump-registry

Optional server sync:

# Push bindings to server
llmring push

# Pull bindings from server
llmring pull

# View usage stats (requires server)
llmring stats --by-alias

Environment Variables

# LLM provider keys (required)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Optional profile selection
export LLMRING_PROFILE="prod"

# Optional server connection
export LLMRING_API_URL="http://localhost:8000"
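Since at least one provider key is required, it can be worth failing fast at startup. A small sketch of such a guard (the function and variable list are this example's own, not part of llmring):

```python
import os

# Any one of these is enough to talk to its provider; the Google
# variable name is an assumption.
PROVIDER_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"]

def check_env(environ=os.environ):
    """Raise early if no provider API key is configured."""
    if not any(environ.get(key) for key in PROVIDER_KEYS):
        raise RuntimeError(
            "No provider API key found; set at least one of: "
            + ", ".join(PROVIDER_KEYS)
        )
    return True
```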

Source Code

Everything is open source on GitHub.

License

MIT License. Use it however you want.


One interface to run them all
One registry to find them
One API to sync them all
And with aliases bind them