HikariCP is the default JDBC connection pool in Spring Boot. It’s also one of the fastest connection pools ever written that doesn’t trade correctness for speed. But its defaults, tuned for an average case, often miss for real workloads. This article covers the four knobs that make the difference, and the symptoms of misconfiguration.
The mental model
A connection pool is a bounded queue of reusable DB connections. Threads borrow, use, return. The pool protects the DB from unbounded load and saves the cost of opening new connections.
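In code, borrowing and returning is just getConnection() plus close(); with a pool, close() hands the connection back rather than tearing down the socket. A minimal sketch (the injected DataSource is Hikari under Spring Boot; the orders table is a placeholder):

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderLookup {

    private final DataSource dataSource; // a HikariDataSource under Spring Boot

    public OrderLookup(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public long countOrders() throws SQLException {
        // Borrow, use, return: close() gives the connection back to the pool,
        // it does not close the physical connection.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("select count(*) from orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
```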
Two failure modes:
- Pool too small — threads wait; latency climbs; throughput stalls
- Pool too large — DB overloaded by concurrent work; everything slows
Tuning is about matching pool size to what the DB can actually do.
The counterintuitive truth
Your pool should be smaller than you think. The Postgres formula (works for most DBs):
```
connections = ((cores × 2) + effective_spindles)
```

For cloud Postgres on SSD: usually 10-20 connections per service instance, period. Not 50. Not 100.
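Worked example: a 4-core cloud Postgres instance on a single SSD volume (call effective_spindles 1) gives (4 × 2) + 1 = 9, so a pool of roughly 10; even a 16-core box only justifies the low thirties.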
More connections don’t help because:
- Postgres backend processes are heavy (roughly 10 MB each)
- More concurrent queries = more lock contention
- Context switching hurts cache efficiency
- Disk bandwidth doesn’t scale with connections
If you need more concurrency, the answer is PgBouncer in front of Postgres (transaction pooling), not a bigger Hikari pool.
The four settings
1. maximum-pool-size
Start at 15. Measure. Tune based on actual DB capacity.
```
spring.datasource.hikari.maximum-pool-size: 15
```

For small services: 5-10. For large services with high concurrency: up to 30. Almost never above that per service instance.
2. connection-timeout
How long a thread waits for a connection before giving up. Default 30s is too long — a user has already given up by then.
```
spring.datasource.hikari.connection-timeout: 2000 # 2s
```

Better: fail fast with a clear error. The caller can retry or serve a degraded response. Hanging for 30 seconds serves no one.
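Failing fast only pays off if the caller does something with the failure. When acquisition times out, Hikari’s getConnection() throws SQLTransientConnectionException; a sketch of catching it and serving a degraded answer (the prices table and the last-known-value fallback are placeholders):

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLTransientConnectionException;

public class PriceService {

    private final DataSource dataSource;        // Hikari, with connection-timeout: 2000
    private volatile long lastKnownPriceCents;  // stand-in for a real fallback cache

    public PriceService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public long priceCents(String sku) {
        // If no connection is free within 2s, getConnection() throws instead of
        // hanging for the default 30s; answer from the last known value instead.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "select price_cents from prices where sku = ?")) {
            ps.setString(1, sku);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                lastKnownPriceCents = rs.getLong(1);
                return lastKnownPriceCents;
            }
        } catch (SQLTransientConnectionException poolTimeout) {
            return lastKnownPriceCents;          // degraded but fast
        } catch (SQLException other) {
            throw new IllegalStateException(other);
        }
    }
}
```

If you go through JdbcTemplate or Spring Data instead of raw JDBC, expect the same failure wrapped by Spring’s exception translation as a DataAccessException.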
3. max-lifetime
How long a connection can live before forced retirement. Defaults to 30 minutes. Matches typical network/firewall policies that drop idle connections.
```
spring.datasource.hikari.max-lifetime: 1800000 # 30 min
```

Should be shorter than the shortest idle timeout anywhere in your network path (DB, firewall, load balancer). Otherwise you get mysterious “connection closed” errors.
4. leak-detection-threshold
The underrated one. Logs a stack trace for any connection held longer than N ms.
```
spring.datasource.hikari.leak-detection-threshold: 20000 # 20s
```

First time you enable it in a non-trivial codebase, you will find a bug. A forgotten close(), a transaction that escaped its boundary, a lock held too long. Fix, rinse, repeat.
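For reference, the same four knobs are available programmatically on HikariConfig, which is how you’d set them when building a DataSource by hand (multi-pool setups, tests). A sketch; the JDBC URL and credentials are placeholders:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public final class PoolFactory {

    public static HikariDataSource ordersPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db.internal:5432/orders"); // placeholder
        config.setUsername("orders_app");                               // placeholder
        config.setPassword(System.getenv("DB_PASSWORD"));

        config.setMaximumPoolSize(15);            // start at 15, tune from measurements
        config.setConnectionTimeout(2_000);       // fail fast: 2s, not the 30s default
        config.setMaxLifetime(1_800_000);         // 30 min; keep below network idle timeouts
        config.setLeakDetectionThreshold(20_000); // stack trace for anything held > 20s

        return new HikariDataSource(config);
    }
}
```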
Other settings that matter less often
- `minimum-idle` — defaults to `maximum-pool-size`. Leave it alone.
- `idle-timeout` — only meaningful if `minimum-idle < maximum-pool-size`. Otherwise ignored.
- `validation-timeout` — time to validate a connection before use. Default 5s is reasonable.
Symptoms and diagnoses
Latency spikes, threads blocked on `HikariDataSource`. Pool too small. Either add capacity to the DB or add Hikari connections, but check the DB first.
DB CPU pegged, many slow queries. Pool too large. Reduce Hikari size, or add a query-limiter before DB.
Periodic “connection is closed” errors. max-lifetime longer than network idle timeout. Reduce.
Occasional hangs during startup. DB under load during warmup. Increase initialization-fail-timeout or reduce warmup concurrency.
Leak warnings in logs. Find the leak. Don’t suppress the warnings.
Per-service vs per-DB
Each service instance has its own pool. Run 20 instances × 15 connections each and that’s 300 DB connections total. Postgres’s default `max_connections` is 100. Trouble.
Solutions:
- Reduce per-instance pool size
- Run PgBouncer (transaction mode) between services and Postgres — multiplexes thousands of app connections to 100 DB backends
- Increase `max_connections` (carefully; each connection costs memory)
PgBouncer is the standard answer at scale. If you have more than 5 service instances hitting the same Postgres, install it.
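From the application side, adopting PgBouncer is usually just a URL change: Hikari stays small and points at PgBouncer instead of Postgres. A sketch, with the hostname and PgBouncer’s conventional 6432 listen port as assumptions:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public final class PgBouncerPool {

    public static HikariDataSource viaPgBouncer() {
        HikariConfig config = new HikariConfig();
        // Point at PgBouncer rather than Postgres directly; hostname and port
        // are assumptions for illustration.
        config.setJdbcUrl("jdbc:postgresql://pgbouncer.internal:6432/orders");
        config.setUsername("orders_app");
        config.setPassword(System.getenv("DB_PASSWORD"));
        config.setMaximumPoolSize(15); // still small per instance; PgBouncer multiplexes to the DB
        return new HikariDataSource(config);
    }
}
```

Transaction pooling does have caveats (server-side prepared statements in particular); check the PgBouncer docs for your version before flipping it on.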
Metrics to monitor
Hikari exposes Micrometer metrics:
- `hikaricp.connections.active` — currently in use
- `hikaricp.connections.idle` — available
- `hikaricp.connections.pending` — threads waiting (non-zero = pool too small)
- `hikaricp.connections.acquire` — time to acquire a connection (p99 climbing = pool under pressure)
- `hikaricp.connections.usage` — time a connection is held (long = long queries or leaks)
Dashboard these. Alert on sustained pending > 0 and on acquire p99 above 100 ms.
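If you want a quick look without wiring a dashboard, the same numbers are reachable in code through Hikari’s pool MXBean; a sketch assuming you can get at the HikariDataSource:

```java
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

public final class PoolHealth {

    // Same signals as the Micrometer gauges: pending > 0 means threads are waiting.
    public static void logPoolState(HikariDataSource dataSource) {
        HikariPoolMXBean pool = dataSource.getHikariPoolMXBean();
        System.out.printf("active=%d idle=%d pending=%d total=%d%n",
                pool.getActiveConnections(),
                pool.getIdleConnections(),
                pool.getThreadsAwaitingConnection(),
                pool.getTotalConnections());
    }
}
```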
Multiple pools
For services with mixed workloads, split pools:
- Primary pool: 15 connections — for transactional writes
- Reader pool: 30 connections (to replica) — for read-only queries
- Reporting pool: 5 connections (to reporting DB) — for slow aggregations

Keeps slow reporting queries from starving transactional operations. Effectively a bulkhead at the DB level.
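In Spring Boot this means defining one DataSource bean per pool and routing repositories to the right one (for example via @Qualifier). A sketch with placeholder hostnames and credentials:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        return pool("jdbc:postgresql://primary.internal:5432/app", 15);  // transactional writes
    }

    @Bean
    public DataSource readerDataSource() {
        return pool("jdbc:postgresql://replica.internal:5432/app", 30);  // read-only queries
    }

    @Bean
    public DataSource reportingDataSource() {
        return pool("jdbc:postgresql://reporting.internal:5432/app", 5); // slow aggregations
    }

    private HikariDataSource pool(String jdbcUrl, int maxPoolSize) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);             // hostnames above are placeholders
        config.setUsername("app");
        config.setPassword(System.getenv("DB_PASSWORD"));
        config.setMaximumPoolSize(maxPoolSize);
        return new HikariDataSource(config);
    }
}
```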
Closing note
HikariCP is one of those tools where the default is nearly right, but “nearly” hides real production issues. Fifteen minutes tuning the four knobs above, then leak detection on, then dashboard the metrics — and you’ve handled 95% of what goes wrong with connection pooling. Beyond that, the right move is usually architectural (read replicas, PgBouncer, sharding), not more Hikari tweaking.