# pgdbm

A PostgreSQL async library for modular applications.

pgdbm is a small, pragmatic library on top of asyncpg that gives you:
- Connection pooling with retries and backoff
- Schema-aware SQL templating (`{{tables.users}}`) for multi-tenant/multi-module apps
- A simple migration system with per-module isolation and checksums
- Optional monitoring (slow queries, pool stats, EXPLAIN, health checks)
- First-class testing utilities (ephemeral DBs, fixtures, isolation)
It is designed to be used by standalone services or by libraries that need to bring their own schema and migrations.
https://github.com/juanre/pgdbm
## Installation

```bash
uv add pgdbm
```

or using pip:

```bash
pip install pgdbm
```
## Core ideas

- Keep it asyncpg-fast and SQL-centric
- Make modules portable: the same code can own its DB or use a shared pool
- Isolate schemas: `{{tables.name}}` expands safely to `schema.name` with identifier validation
- Make migrations simple and safe: checksums, module-scoped history, advisory-locked application
- Provide production guardrails: TLS, statement/idle/lock timeouts
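The `{{tables.name}}` expansion is, at heart, identifier-validated string substitution. A minimal sketch of the idea (illustrative only, not pgdbm's actual implementation; the function and regex here are assumptions):

```python
import re

# Conservative identifier rule: letters/underscore first, then letters, digits, underscores.
_IDENT = re.compile(r"^[a-z_][a-z0-9_]*$", re.IGNORECASE)

def expand_tables(sql: str, schema: str) -> str:
    """Rewrite {{tables.name}} placeholders to schema-qualified names,
    rejecting anything that is not a plain SQL identifier."""
    if not _IDENT.match(schema):
        raise ValueError(f"invalid schema identifier: {schema!r}")

    def repl(m: re.Match) -> str:
        table = m.group(1)
        if not _IDENT.match(table):
            raise ValueError(f"invalid table identifier: {table!r}")
        return f"{schema}.{table}"

    return re.sub(r"\{\{\s*tables\.(\w+)\s*\}\}", repl, sql)

print(expand_tables("SELECT * FROM {{tables.users}} WHERE id = $1", "app"))
# SELECT * FROM app.users WHERE id = $1
```

Validating both sides of the expansion is what lets the same SQL run unchanged whether a module owns its database or lives in a shared one.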
## Quick usage

### Standalone (module owns the DB)

```python
from pgdbm import AsyncDatabaseManager, DatabaseConfig, AsyncMigrationManager

config = DatabaseConfig(
    connection_string="postgresql://localhost/app",
    schema="users",
    # Production guardrails
    ssl_enabled=False,  # Enable with 'verify-full' in production
    statement_timeout_ms=60_000,
    idle_in_transaction_session_timeout_ms=60_000,
    lock_timeout_ms=5_000,
)

db = AsyncDatabaseManager(config)
await db.connect()

migrations = AsyncMigrationManager(db, migrations_path="./migrations", module_name="users")
await migrations.apply_pending()

user_id = await db.execute_and_return_id(
    "INSERT INTO {{tables.users}} (email) VALUES ($1)",
    "user@example.com",
)
```
### Shared pool (module uses a DB owned by another)

```python
# The host application owns the pool
shared_pool = await AsyncDatabaseManager.create_shared_pool(
    DatabaseConfig(connection_string="postgresql://localhost/app")
)

# Your module uses the pool but owns its schema and migrations
user_db = AsyncDatabaseManager(pool=shared_pool, schema="users")
await AsyncMigrationManager(user_db, "./migrations", module_name="users").apply_pending()
```
## Migrations

```sql
-- migrations/001_initial.sql
CREATE TABLE IF NOT EXISTS {{tables.users}} (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

- Use `{{tables.name}}` in DDL/DML; it is expanded safely at runtime
- Migrations are tracked in `schema_migrations` with checksums
- Set a unique `module_name` per module/library
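Checksum tracking can be pictured as hashing each migration file's contents and comparing against what was recorded when it was applied; a changed file then surfaces as drift instead of silently diverging. A hypothetical sketch of that mechanism (function names are illustrative, not pgdbm's API):

```python
import hashlib
from pathlib import Path

def migration_checksum(path: Path) -> str:
    """SHA-256 of the migration file's bytes; editing the file changes the digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_drift(applied: dict[str, str], migrations_dir: Path) -> list[str]:
    """Return migrations whose on-disk contents no longer match the
    checksum recorded at apply time."""
    drifted = []
    for name, recorded in applied.items():
        if migration_checksum(migrations_dir / name) != recorded:
            drifted.append(name)
    return drifted
```

This is why applied migration files should be treated as immutable: fix forward with a new numbered file rather than editing an old one.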
## Monitoring (optional)

```python
from pgdbm import MonitoredAsyncDatabaseManager

db = MonitoredAsyncDatabaseManager(config)
await db.connect()

metrics = await db.get_metrics()  # pool sizes, query stats
slow = db.get_slow_queries(200)   # queries over 200ms
plan = await db.explain_query("SELECT * FROM {{tables.users}} WHERE id = $1", 42)
```
## Testing utilities

```python
from pgdbm.fixtures.conftest import test_db

async def test_user_flow(test_db):
    await test_db.execute("CREATE TABLE {{tables.users}} (id serial primary key, email text)")
    uid = await test_db.execute_and_return_id(
        "INSERT INTO {{tables.users}} (email) VALUES ($1)", "a@b.com"
    )
    assert uid == 1
```
## Production notes

- Enable TLS with certificate verification: `ssl_enabled=True`, `ssl_mode='verify-full'`, and `ssl_ca_file` as needed
- Use server-side timeouts: `statement_timeout_ms`, `idle_in_transaction_session_timeout_ms`, `lock_timeout_ms`
- Migrations are serialized per `module_name` via advisory locks
- Keep debug logging off in production (args are masked, but prefer redaction)
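Per-module serialization maps naturally onto PostgreSQL's `pg_advisory_lock`, which takes a 64-bit key. One plausible way to derive a stable key from a `module_name` (illustrative only; pgdbm may compute its lock keys differently):

```python
import hashlib
import struct

def advisory_lock_key(module_name: str) -> int:
    """Derive a stable signed 64-bit key for pg_advisory_lock from a module name."""
    digest = hashlib.sha256(module_name.encode("utf-8")).digest()
    # Interpret the first 8 bytes as a signed big-endian integer,
    # matching PostgreSQL's bigint range for advisory lock keys.
    return struct.unpack(">q", digest[:8])[0]

# Sketch of use with asyncpg:
#   await conn.execute("SELECT pg_advisory_lock($1)", advisory_lock_key("users"))
#   ... apply this module's migrations ...
#   await conn.execute("SELECT pg_advisory_unlock($1)", advisory_lock_key("users"))
```

Because the key is a pure function of the module name, two processes deploying the same module contend for the same lock, while different modules migrate concurrently.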
## When to use pgdbm

- You want SQL and asyncpg performance without an ORM
- You build libraries/services that must run standalone or inside a larger app
- You need clean migrations per module and safe schema isolation
- You want strong testing utilities and pragmatic observability