# LLMRing Open Registry
Public, versioned, human-validated registry of model capabilities and pricing, hosted on GitHub Pages. Models are keyed as `provider:model`.
Base URL: https://llmring.github.io/registry/
Curation Philosophy: All published registry files are reviewed and validated by humans. Automation is used only to generate draft candidates; nothing is auto-published, which ensures data accuracy and trustworthiness.
## Files
- Current per provider: `/[provider]/models.json`
- Archived versions: `/[provider]/v/[n]/models.json`
- Manifest: `/manifest.json`
## Schema (per provider)
```json
{
  "version": 2,
  "updated_at": "2025-08-20T00:00:00Z",
  "models": {
    "openai:gpt-4o-mini": {
      "provider": "openai",
      "model_name": "gpt-4o-mini",
      "display_name": "GPT-4o Mini",
      "max_input_tokens": 128000,
      "max_output_tokens": 16384,
      "dollars_per_million_tokens_input": 0.15,
      "dollars_per_million_tokens_output": 0.60,
      "supports_vision": true,
      "supports_function_calling": true,
      "supports_json_mode": true,
      "supports_parallel_tool_calls": true,
      "tool_call_format": "json_schema",
      "is_active": true
    }
  }
}
```
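The per-million pricing fields lend themselves directly to cost estimation. A sketch of what a client might do with a registry entry (the `estimate_cost` helper is illustrative, not the llmring API):

```python
def estimate_cost(entry: dict, input_tokens: int, output_tokens: int) -> float:
    """Estimate dollar cost for a request from a registry model entry."""
    return (
        input_tokens / 1_000_000 * entry["dollars_per_million_tokens_input"]
        + output_tokens / 1_000_000 * entry["dollars_per_million_tokens_output"]
    )

gpt_4o_mini = {
    "dollars_per_million_tokens_input": 0.15,
    "dollars_per_million_tokens_output": 0.60,
}
# Cost of a call with 1,000 input tokens and 500 output tokens:
print(estimate_cost(gpt_4o_mini, 1000, 500))
```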
## Curation Workflow (Human-Validated, Canonical)
LLMRing’s registry prioritizes accuracy through manual review:
- Gather sources (recommended): collect pricing/docs HTML and PDFs from each provider for an audit trail
- Generate draft: use automation to create a best-effort draft from the sources (automation is allowed for drafts only)
- Review changes: compare the draft against the current published file, field by field; adjust manually as needed
- Promote: bump the per-provider `version`, set `updated_at`, archive the previous file under `v/<n>/models.json`, and replace the current `models.json`
Critical: Published `models.json` files are always human-reviewed. Automation generates candidates only; humans make the final decisions to ensure accuracy.
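The promote step above (bump, archive, replace) can be sketched as follows, assuming a local checkout with one directory per provider. This is illustrative only, not the actual `llmring-registry promote` implementation:

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def promote(provider_dir: Path, draft: dict) -> None:
    """Archive the current models.json and publish a reviewed draft."""
    current_path = provider_dir / "models.json"
    current = json.loads(current_path.read_text())
    # Archive the previous version under v/<n>/models.json.
    archive_dir = provider_dir / "v" / str(current["version"])
    archive_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(current_path, archive_dir / "models.json")
    # Bump the version, stamp updated_at, and replace the current file.
    draft["version"] = current["version"] + 1
    draft["updated_at"] = datetime.now(timezone.utc).isoformat()
    current_path.write_text(json.dumps(draft, indent=2))
```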
## CLI (from the `registry` package)
```bash
# Install browser for PDF fetching (first time only)
uv run playwright install chromium

# Fetch documentation from all providers
uv run llmring-registry fetch --provider all

# Extract model information to create drafts
uv run llmring-registry extract --provider all --timeout 120

# Review draft changes for each provider
uv run llmring-registry review-draft --provider openai
uv run llmring-registry review-draft --provider anthropic
uv run llmring-registry review-draft --provider google

# Accept all changes (after review)
uv run llmring-registry review-draft --provider openai --accept-all

# Promote reviewed file to production and archive
uv run llmring-registry promote --provider openai
```
Single-provider update example:

```bash
uv run llmring-registry fetch --provider openai
uv run llmring-registry extract --provider openai
uv run llmring-registry review-draft --provider openai --accept-all
uv run llmring-registry promote --provider openai
```
## Clients
- The `llmring` library fetches current models and uses the registry for cost calculation and limit validation.
- The server proxies the registry and may cache responses.
## Client-side Lookup Rules
- The models map is a dictionary keyed by `provider:model`. Clients should prefer O(1) lookups by that key.
- When only `model` is available, clients may attempt the fallback keys `models[model]` or `models[f"{provider}/{model}"]` for legacy data.
## Links
- Live data: https://llmring.github.io/registry/
- Source: https://github.com/llmring/registry