Postgres gets called “not suitable for web-scale” by people who’ve never benchmarked it. In reality, a well-tuned Postgres instance on modern hardware handles tens of thousands of transactions per second and terabytes of data. Most teams will never outgrow it. This article is about getting the most from a single node before reaching for sharding.

What modern Postgres can do

Rough numbers on a reasonable cloud box (16 cores, 64GB RAM, fast SSD):

  • Point queries: 50,000+ / sec
  • Simple writes: 20,000+ / sec
  • Complex joins: workload-dependent, often thousands per second
  • Dataset: 2-5 TB comfortable; 10+ TB with care

These are not exotic. They’re achievable with standard tuning.

Tuning the basics

Postgres ships with conservative defaults sized for small machines, not modern servers. Adjust:

# postgresql.conf
shared_buffers = 16GB          # ~25% of RAM
effective_cache_size = 48GB    # ~75% of RAM
work_mem = 32MB                # per sort/hash operation; one query can use several
maintenance_work_mem = 2GB     # for VACUUM, index builds
wal_buffers = 16MB
max_wal_size = 8GB
checkpoint_completion_target = 0.9
random_page_cost = 1.1         # for SSD
effective_io_concurrency = 200 # for SSD
max_connections = 200          # usually lower than you think

These alone typically give a 2-5× improvement over the defaults on modern hardware.
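
If you prefer not to hand-edit postgresql.conf, the same settings can be applied from psql with ALTER SYSTEM; a sketch (shared_buffers still requires a restart, most of the rest only a reload):

-- Written to postgresql.auto.conf, which overrides postgresql.conf
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET work_mem = '32MB';
ALTER SYSTEM SET shared_buffers = '16GB';   -- needs a restart to take effect

-- Apply the reloadable settings without a restart
SELECT pg_reload_conf();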

Indexing discipline

An index that isn’t used is pure cost. Every index slows every write. Audit:

SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0 AND schemaname NOT IN ('pg_catalog','information_schema');

Zero scans over a week = drop it. Saves write time, disk, cache.

Always use CREATE INDEX CONCURRENTLY in production: plain CREATE INDEX blocks writes to the table for the entire build. CONCURRENTLY takes longer and cannot run inside a transaction block, but the table stays writable.
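
Dropping is the same story. A sketch, with a hypothetical index name from the audit above:

-- Cannot run inside a transaction block
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy_status;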

For complex WHERE clauses, consider partial or expression indexes:

CREATE INDEX CONCURRENTLY idx_orders_pending
    ON orders (customer_id)
    WHERE status = 'PENDING';

This indexes only the subset of rows you actually query: smaller, faster, less bloat.
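
Expression indexes cover the other case: queries that filter on a computed value. A sketch, assuming a hypothetical users table looked up by case-insensitive email:

CREATE INDEX CONCURRENTLY idx_users_email_lower
    ON users (lower(email));

-- Used only when the query repeats the expression:
-- WHERE lower(email) = lower($1)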

Connection pooling

Postgres is process-per-connection. Connections are expensive (~10MB each). 500 app-server connections to Postgres = 5GB RAM overhead.

Use PgBouncer in transaction mode between the app and Postgres: thousands of app connections multiplex onto 50-100 Postgres backends. Cheap, reliable, battle-tested. The trade-off: transaction mode drops session state (SET, session-scoped prepared statements, advisory locks held across transactions), so check what your driver relies on.
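
A minimal pgbouncer.ini sketch; database names, paths, and pool sizes here are placeholders to adapt to your own measurements:

[databases]
; placeholder name; point at your real database
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; one server connection per transaction, released at commit/rollback
pool_mode = transaction
; Postgres backends per database/user pair
default_pool_size = 50
; app-side connections PgBouncer will accept
max_client_conn = 2000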

For the app side, size Hikari pools to match Postgres capacity, not app concurrency (see the Hikari tuning article).

Partitioning

For tables > 100GB, declarative partitioning helps:

CREATE TABLE events (
    id bigint,
    customer_id uuid,
    occurred_at timestamptz,
    payload jsonb
) PARTITION BY RANGE (occurred_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

Benefits:

  • Queries with partition key hit only relevant partitions
  • Retention = drop old partitions (instant; see the snippet after this list)
  • VACUUM runs per partition, not per whole table
  • Index sizes stay manageable
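
Retention in particular turns into a metadata operation instead of a huge DELETE. A sketch against the events table above (DETACH ... CONCURRENTLY is Postgres 14+ and can't run inside a transaction block):

-- Postgres 14+: CONCURRENTLY avoids blocking queries on the parent
ALTER TABLE events DETACH PARTITION events_2024_01 CONCURRENTLY;
DROP TABLE events_2024_01;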

Partitioning is one of the biggest single-node scaling levers, and it's often overlooked.

Autovacuum tuning

Postgres accumulates bloat from updates and deletes. Autovacuum cleans it up, but defaults are tuned for small databases. Increase aggressiveness on hot tables:

ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.01,   -- vacuum at 1% dead tuples
    autovacuum_analyze_scale_factor = 0.01,
    autovacuum_vacuum_cost_limit = 2000      -- work harder when running
);

Default scale factor is 20%. On a billion-row table, that’s waiting until 200M dead tuples accumulate. Tables slow down long before then.
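
To see which tables need this treatment, dead-tuple counts live in pg_stat_user_tables:

SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;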

Query optimization

Run EXPLAIN ANALYZE on any query before it goes to production against a large table. Common wins:

  • Seq Scan on a big table → add an index that matches the predicate
  • Nested Loop over thousands of rows → check statistics and join conditions; a larger work_mem can let the planner choose a hash join
  • Sort spilling to disk (external merge in the plan) → increase work_mem
  • Fetching 500 rows to use 20 → push the LIMIT into the query
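
For example, to verify the pending-orders lookup from earlier actually hits the partial index (the customer id is a placeholder; BUFFERS shows how many pages the plan touched):

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE status = 'PENDING'
  AND customer_id = '00000000-0000-0000-0000-000000000000';  -- placeholder id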

Track slow queries with pg_stat_statements. The top 20 are usually 80% of the DB load; fixing those reshapes the whole system’s performance.
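
A starting point, assuming pg_stat_statements is in shared_preload_libraries; the column names below are for Postgres 13+ (older versions use total_time and mean_time):

-- Once per database
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Heaviest queries by cumulative execution time
SELECT calls,
       round(total_exec_time::numeric, 0) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows,
       left(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;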

Read scaling

Single node reaches limits on read-heavy workloads. Solutions:

Read replicas. Stream WAL to replicas and route read-only transactions there; 3-5 replicas is typical. Replication is asynchronous by default, so replicas lag behind the primary: don't read your own writes from a replica without accounting for that staleness.
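
Lag is visible on the primary via pg_stat_replication; replay_lag and the LSN functions below exist on Postgres 10+:

SELECT client_addr, state, replay_lag,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;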

Hot caching. Redis or in-memory cache in front of Postgres. Point lookups and small result sets cache well.

Materialized views. For expensive aggregations that don’t need to be real-time.
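
A sketch, assuming a daily rollup over the events table from the partitioning section; REFRESH ... CONCURRENTLY needs a unique index on the view:

CREATE MATERIALIZED VIEW daily_event_counts AS
SELECT customer_id,
       date_trunc('day', occurred_at) AS day,
       count(*) AS events
FROM events
GROUP BY customer_id, day;

-- Required for REFRESH ... CONCURRENTLY
CREATE UNIQUE INDEX ON daily_event_counts (customer_id, day);

-- Run on a schedule; the view stays readable during the refresh
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_event_counts;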

Reads scale almost linearly with replicas for read-dominated workloads.

Write scaling

Writes are harder. Options in order:

  1. Vertical scaling: a bigger instance. Surprisingly effective, up to hundreds of thousands of simple writes/sec on large machines.
  2. Partitioning: split hot tables so lock contention and index bloat stay localized.
  3. Application-level sharding: route customers or regions to different DB instances.
  4. Citus: a Postgres extension that shards tables across nodes behind a single SQL interface.
  5. Alternative stores: Cassandra or DynamoDB for write-dominated workloads with simple access patterns.

Most teams exhaust options 1-2 before ever reaching 3.

Signs you’ve hit a limit

  • Average query latency climbing even after index optimization
  • Autovacuum struggling to keep up (table bloat growing)
  • Replication lag sustained > a few seconds
  • Connection count maxed out even with PgBouncer
  • CPU pegged even though no individual query is left to optimize

When multiple signs appear together, scaling the instance usually buys 2-3× more runway. Beyond that, architectural changes.

Closing note

A well-tuned Postgres handles far more than most teams need. Investing in indexing discipline, connection pooling, partitioning, and query analysis keeps a single instance viable for years. Reach for sharding only when you’ve confirmed a single node can’t handle the load — not because of architectural fashion. The boring, well-operated Postgres is still the database that wins most real systems.