# One interface for LLMs, optional database, clean Python

`llmbridge` is a small, no-nonsense Python library that gives you a single API to talk to OpenAI, Anthropic, Google (Gemini), and Ollama. It can run with or without a database. With a DB, it logs usage and costs, and keeps a model registry you control.
- One request/response schema across providers
- Optional logging to SQLite (dev) or PostgreSQL (prod)
- Model registry with capabilities and pricing
- Simple helpers for images and documents
- A CLI to refresh models and initialize the DB quickly
https://github.com/juanre/llmbridge
## Installation

```bash
# SQLite only
uv add llmbridge-py

# With PostgreSQL support
uv add "llmbridge-py[postgres]"
```

Or clone for development:

```bash
git clone https://github.com/juanre/llmbridge.git
cd llmbridge
uv sync
```
## Quick start (no DB)

```python
import asyncio

from llmbridge.service import LLMBridge
from llmbridge.schemas import LLMRequest, Message


async def main():
    service = LLMBridge(enable_db_logging=False)
    request = LLMRequest(
        messages=[Message(role="user", content="Say hello.")],
        model="gpt-4o-mini",
    )
    response = await service.chat(request)
    print(response.content)


asyncio.run(main())
```
## With SQLite (local logging)

```python
import asyncio

from llmbridge.service_sqlite import LLMBridgeSQLite
from llmbridge.schemas import LLMRequest, Message


async def main():
    service = LLMBridgeSQLite(db_path="llmbridge.db")
    request = LLMRequest(
        messages=[Message(role="user", content="Hello!")],
        model="claude-3-5-haiku-20241022",
    )
    response = await service.chat(request)
    print(response.content)


asyncio.run(main())
```
## With PostgreSQL (production logging)

```python
import asyncio

from llmbridge.service import LLMBridge


async def main():
    service = LLMBridge(db_connection_string="postgresql://user:pass@localhost/dbname")
    # Use service.chat(...) exactly as in the examples above;
    # calls are logged to the database.


asyncio.run(main())
```
## CLI: initialize and refresh

```bash
# PostgreSQL (use DATABASE_URL)
export DATABASE_URL=postgresql://user:pass@localhost/dbname
llmbridge init-db

# SQLite
llmbridge --sqlite ./llmbridge.db init-db

# Load curated JSONs into the registry
llmbridge json-refresh

# Or with SQLite
llmbridge --sqlite ./llmbridge.db json-refresh
```
## Files and images

```python
from llmbridge.file_utils import analyze_image
from llmbridge.schemas import LLMRequest, Message

# Build a multimodal message from an image plus a question
image_content = analyze_image("path/to/image.png", "What's in this image?")

request = LLMRequest(messages=[Message(role="user", content=image_content)], model="gpt-4o")
response = await service.chat(request)  # inside an async function, as above
```
Notes:

- OpenAI PDF analysis is routed via the Assistants API for now (stable behavior across SDK versions). Tools and custom response formats are not supported in this PDF path.
- OpenAI reasoning models (`o1`, `o1-mini`) use the Responses API. They do not support tools or custom response formats.
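These restrictions can be caught before a request ever leaves your app. The helper below is a hypothetical pre-flight check, not part of llmbridge: it assumes a request shaped like a dict mirror of `LLMRequest` with optional `tools` and `response_format` fields, and rejects combinations the notes above say the Responses API will refuse.

```python
# Hypothetical pre-flight check; not part of llmbridge.
REASONING_MODELS = ("o1", "o1-mini")  # assumed list, per the notes above


def check_request(request: dict) -> None:
    """Raise ValueError if a reasoning model is combined with unsupported options."""
    model = request.get("model", "")
    if model in REASONING_MODELS or model.startswith("o1-"):
        if request.get("tools"):
            raise ValueError(f"{model} does not support tools")
        if request.get("response_format"):
            raise ValueError(f"{model} does not support custom response formats")


# A plain chat request to o1-mini passes the check silently:
check_request({"model": "o1-mini", "messages": [{"role": "user", "content": "Hi"}]})
```

Failing fast like this is cheaper than waiting for the provider to reject the call.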
## Keeping model data up to date (JSON from PDFs)

Providers change models, prices, and context windows often. `llmbridge` includes a small pipeline to generate curated JSONs from the official provider PDFs and then load them into your database.
The flow is simple:

1. Download the official PDFs to the `res/` directory (the filenames include dates):
   - Anthropic: pricing and model overview
   - OpenAI: pricing and models comparison
   - Google: Gemini API pricing and models
2. Generate JSON from those PDFs:

   ```bash
   # Uses an LLM prompt to extract a clean model list with prices/capabilities
   llmbridge extract-from-pdfs generate
   ```

3. Load the JSONs into your registry (SQLite or PostgreSQL):

   ```bash
   llmbridge json-refresh

   # Or with SQLite
   llmbridge --sqlite ./llmbridge.db json-refresh
   ```

There's also a "download-instructions" mode that prints where to get the PDFs:

```bash
llmbridge extract-from-pdfs download-instructions
```
Why this matters: keeping prices and context sizes right is hard if you do it by hand. This workflow gives you reproducible, timestamped JSONs you can review and re‑apply.
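Because the JSONs are plain files, you can sanity-check them before re-applying. The snippet below is only an illustration: the file shape shown (a `models` list with `name`, `context_window`, and per-million-token prices) is an assumed example, not llmbridge's actual schema, and the figures are placeholders.

```python
import json

# Assumed shape for a curated model file; llmbridge's real schema may differ.
curated = json.loads("""
{
  "provider": "anthropic",
  "extracted_at": "2025-01-15",
  "models": [
    {"name": "claude-3-5-haiku-20241022", "context_window": 200000,
     "input_price_per_1m": 0.80, "output_price_per_1m": 4.00}
  ]
}
""")

# Review entries before loading them into the registry
for model in curated["models"]:
    assert model["context_window"] > 0
    assert model["input_price_per_1m"] < model["output_price_per_1m"]
    print(model["name"], "ok")
```

A quick review step like this catches extraction mistakes (a dropped digit in a price, a swapped input/output column) before they reach cost accounting.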
## Why

Calling multiple providers should not require rewriting your app. `llmbridge` keeps a clean, typed Python API and adds database logging only if you want it: SQLite for demos and local dev, PostgreSQL for production, with a schema dedicated to usage tracking and a model registry you can curate from JSON or provider discovery.
## What's in the box

- Providers: OpenAI, Anthropic, Google (Gemini), Ollama
- Schemas: `Message`, `LLMRequest`, `LLMResponse`, `LLMModel`
- DB (optional): usage logs, per-user stats, curated model list with prices and capabilities
- CLI: `init-db`, `json-refresh`, and utilities to inspect models
## Version

llmbridge v0.2.0

- CLI renamed to `llmbridge` (was `llm-models`)
- Added `init-db` command (SQLite & PostgreSQL)
- OpenAI PDFs stable via Assistants (deprecation warnings suppressed)
- OpenAI `o1*` via Responses API (no tools/response_format)
- PostgreSQL migrations use `pgcrypto` (`gen_random_uuid()`)