Virtual threads became stable in Java 21 (October 2023). After running them in production for two years across several services, the summary is: for I/O-bound Java backends, they’re a free win, with a few sharp edges worth knowing about. This article is the field report.
The big idea, briefly
Platform threads (the traditional kind) are OS threads. Expensive — 1MB stack each, limited to thousands before the OS balks. Virtual threads are managed by the JVM, stacks grow dynamically, millions can coexist. Blocking I/O no longer pins a thread.
For I/O-bound services — the typical Java backend — this means thread-per-request scales from ~200 concurrent requests to tens of thousands without code changes.
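To make the scaling claim concrete, here is a minimal sketch (class and task names are illustrative, not from any real service) showing a virtual-thread-per-task executor parking thousands of blocking tasks cheaply:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Runs `count` blocking tasks, each on its own virtual thread.
    public static int runBlockingTasks(int count) {
        AtomicInteger completed = new AtomicInteger();
        // Each submitted task gets a fresh virtual thread; the blocking
        // sleep releases the carrier thread instead of holding it.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // stands in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        // 10,000 concurrent blockers would exhaust a platform-thread pool;
        // with virtual threads this is cheap.
        System.out.println(runBlockingTasks(10_000));
    }
}
```

The same code with `Executors.newFixedThreadPool(200)` would serialize into batches of 200; the only change here is the executor.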
Enabling in Spring Boot
One line:
```yaml
spring:
  threads:
    virtual:
      enabled: true
```

Tomcat now runs each request on a virtual thread. Blocking operations (JDBC, HTTP clients, Kafka) don’t block platform threads. Throughput improves without any other changes.
The wins I’ve actually measured
- I/O-bound service throughput 3-10×. Payment service handling 2k RPS peak now handles 15k. Same code.
- Simpler code. “Just write blocking code” is a legitimate answer now. No reactive backflips.
- Better stack traces. Request flow is one stack; no async context loss.
- Memory savings on high-concurrency services. Thousands of waiting HTTP requests cost far less memory than with platform threads.
Where virtual threads don’t help
CPU-bound work. If your service computes more than it waits, virtual threads add overhead without benefit. Stick with a regular thread pool of ~cores.
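For contrast, a CPU-bound sketch that sticks with a fixed platform-thread pool sized to the core count (the `parallelSum` helper is hypothetical, just to show the pattern):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CpuBoundPool {
    // Parallel sum over a fixed pool of ~core-count platform threads.
    // Virtual threads would add scheduling overhead with no benefit here.
    public static long parallelSum(long[] values) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        try (ExecutorService pool = Executors.newFixedThreadPool(cores)) {
            int chunk = Math.max(1, values.length / cores);
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int start = 0; start < values.length; start += chunk) {
                final int from = start;
                final int to = Math.min(values.length, start + chunk);
                tasks.add(() -> {
                    long sum = 0;
                    for (int i = from; i < to; i++) sum += values[i];
                    return sum;
                });
            }
            long total = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();
            return total;
        }
    }
}
```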
Synchronized blocks (historically). Before Java 24, synchronized blocks pinned the virtual thread to a platform thread. If your hot path used synchronized, you lost most of the benefit. Java 24+ fixed this; check your JDK version.
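If you are stuck on Java 21-23, swapping a hot `synchronized` block for a `ReentrantLock` avoids the pinning. A minimal sketch (the `Counter` class is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count;

    // On Java 21-23 a synchronized block that parks while holding the
    // monitor pins the virtual thread to its carrier; ReentrantLock
    // does not, so the carrier thread stays free to run other work.
    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```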
Native method calls. JNI code pins the thread. Rare in app code but present in some libraries (native crypto, old JDBC drivers).
ThreadLocal-heavy code. Memory cost — each virtual thread still gets its own ThreadLocal storage, so thousands of virtual threads with many ThreadLocals means memory bloat. Prefer Scoped Values (preview since Java 21, finalized in Java 25).
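A minimal Scoped Value sketch, assuming Java 25 (or `--enable-preview` on 21-24); the `RequestContext` name and request id are made up for illustration:

```java
public class RequestContext {
    // Bound per dynamic scope rather than per thread: nothing lingers
    // after the scope exits, unlike a ThreadLocal that must be removed.
    static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static String handle() throws Exception {
        // REQUEST_ID is readable only inside this call's dynamic extent.
        return ScopedValue.where(REQUEST_ID, "req-42")
                .call(() -> "handling " + REQUEST_ID.get());
    }
}
```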
Sharp edge: connection pool sizing
This one bit every team I’ve worked with.
With platform threads, the HTTP thread pool was implicitly the concurrency limit. 200 threads = max 200 concurrent DB queries. Hikari at 20 connections = some waiting but not catastrophic.
With virtual threads, 10,000 concurrent requests all try to hit the DB. Hikari at 20 connections = 9,980 threads waiting. Worse, the DB sees requests piling up and may slow down.
Fix: add explicit concurrency limiting before expensive operations. A semaphore around DB calls, or a bounded executor for specific work. The implicit concurrency cap is gone; you need an explicit one.
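One way to restore an explicit cap is a semaphore sized near the connection pool, so excess requests queue cheaply in the application instead of piling up inside Hikari. A sketch with assumed numbers (the 20-permit figure mirrors the Hikari example above; `DbGate` is a made-up name):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class DbGate {
    // Cap concurrent DB work at roughly the connection pool size.
    // Waiting virtual threads are cheap; a saturated DB is not.
    private static final Semaphore DB_PERMITS = new Semaphore(20);

    public static <T> T withDbPermit(Callable<T> query) throws Exception {
        DB_PERMITS.acquire();
        try {
            return query.call();
        } finally {
            DB_PERMITS.release();
        }
    }
}
```

Usage is a one-line wrap: `var user = DbGate.withDbPermit(() -> userDao.findById(id));`.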
Sharp edge: synchronized pinning (pre-Java 24)
If you’re on Java 21-23, check for pinning:
```
-Djdk.tracePinnedThreads=short
```

This prints stack traces where virtual threads pinned. Common culprits:
- Lock classes using `synchronized`
- Some older HTTP clients
- Some JDBC drivers (MySQL Connector/J had this; Postgres JDBC was cleaner)
Java 24 eliminates the pinning for synchronized. Upgrading solves most of this.
Sharp edge: debugging
A service with 10,000 active virtual threads is hard to debug with traditional tooling: jstack only lists platform threads, so virtual threads don’t appear in its output at all.
JDK 21+ has `jcmd <pid> Thread.dump_to_file -format=json <file>`, which includes virtual threads and groups them by state. More useful.
Also, distributed tracing matters more than ever — individual threads are harder to follow when there are thousands. Invest in OpenTelemetry if you haven’t.
Sharp edge: metrics that were per-thread
Some older metrics frameworks assumed “threads” meant “platform threads”. A metric tracking thread-pool utilization is meaningless when there are unlimited virtual threads. Replace with:
- Requests in flight
- Operations pending at each external dependency
- Active DB connections
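An in-flight gauge can be as simple as an atomic counter wrapped around each request; a real service would export it via Micrometer or OpenTelemetry (this `InFlight` helper is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class InFlight {
    private static final AtomicInteger IN_FLIGHT = new AtomicInteger();

    // Wrap any operation to count it while it runs.
    public static <T> T tracked(Supplier<T> op) {
        IN_FLIGHT.incrementAndGet();
        try {
            return op.get();
        } finally {
            IN_FLIGHT.decrementAndGet();
        }
    }

    // Export this value as a gauge instead of thread-pool utilization.
    public static int current() {
        return IN_FLIGHT.get();
    }
}
```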
When the old model still wins
Some situations where I still prefer explicit thread pools:
Strict concurrency limits. Batch job processing 100 items in parallel with bounded pool = easy to reason about. Virtual threads with a semaphore works too, just less explicit.
CPU-bound work. A fixed pool of about cores + 1 threads; whether those are virtual or platform barely matters here, so default to platform.
Very short-lived services. Virtual thread infrastructure has ~5-15% startup overhead. For short-lived batch jobs, platform is simpler.
Migration checklist
For an existing Spring Boot service:
- Upgrade to Java 21+ (preferably 24+ for synchronized fix)
- Enable `spring.threads.virtual.enabled=true`
- Audit connection pools (Hikari, Redis, HTTP clients) — add explicit limits where needed
- Audit for `synchronized` usage in hot paths; replace with `ReentrantLock` if on an older JDK
- Scan logs for `tracePinnedThreads` output during a load test
- Load test with realistic concurrency
- Deploy with observation; roll back if regressions
Most services see immediate wins with minimal changes. The audit step matters — without it, latent bugs surface under higher concurrency.
Structured concurrency
Java 25 ships structured concurrency as a preview feature (JEP 505) with a redesigned API. With virtual threads, fan-out becomes trivial:
```java
try (var scope = StructuredTaskScope.open()) {
    Subtask<A> a = scope.fork(() -> serviceA.call());
    Subtask<B> b = scope.fork(() -> serviceB.call());
    Subtask<C> c = scope.fork(() -> serviceC.call());
    scope.join();
    return combine(a.get(), b.get(), c.get());
}
```

Three parallel service calls, first-failure cancellation, clean error propagation. Cleaner than CompletableFuture juggling.
Closing note
Virtual threads are the best thing to happen to server-side Java in a decade. For an I/O-bound service, turning them on is usually a 3-10× throughput gain with minor configuration work. For CPU-bound work, they’re neutral. The sharp edges — connection pool sizing, early pinning issues, metric assumptions — are real but manageable. If you’re still running thread-per-request on Java 8 in 2025, the upgrade to Java 21+ and virtual threads is probably the single highest-leverage change available.