GraalVM’s native image compiles a Java application ahead of time into a standalone executable. No JVM to start, no warmup. For the right use cases, it’s transformative; for the wrong ones, it’s a productivity tax with no benefit. This article covers where the line is.
## What you get
A typical Spring Boot service, JVM vs native:
| Metric | JVM | Native |
|---|---|---|
| Startup time | 2-5 seconds | 30-100 ms |
| Memory at rest (RSS) | 300-500 MB | 50-100 MB |
| Peak throughput | High (JIT) | Lower (10-30%) |
| Latency tail | JIT warmup spike | Consistent from start |
| Build time | 30 seconds | 3-10 minutes |
| Image size | 250 MB (JDK) | 80 MB |
| Debug / profile tooling | Rich | Limited |
The shape: native wins startup and memory, loses peak throughput and tooling.
## When native wins big
**Serverless and FaaS.** Lambda or Cloud Run scales to zero, so cold start dominates user latency. Native's ~50 ms cold start versus the JVM's ~3 seconds is the difference between usable and unusable.

**CLI tools and batch jobs.** Short-lived invocations mean JIT warmup never pays off. Native's instant start plus low memory is a perfect fit.

**High-density deployments.** Many service instances per host. Native's ~80 MB versus the JVM's ~400 MB means roughly 5× the density.

**Edge compute.** Low-latency startup on resource-constrained hardware is where native earns its keep.
## When JVM still wins
**Long-running services with sustained traffic.** The JIT warms up, and peak throughput exceeds native's. Startup is a one-time cost amortized over hours of runtime.

**Code with heavy reflection.** Native image needs to know at build time which classes are accessed reflectively. Dynamic reflection without configuration breaks at runtime.

**Libraries not yet native-friendly.** Some libraries still don't play well with native image, and checking compatibility is extra work.

**Rich observability needs.** JFR, heap dumps, and profilers all work on the JVM. Native image support is growing but limited.
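To make the reflection constraint concrete, here is a minimal sketch (class and method names are illustrative) of the pattern that trips up native image: the class name only exists at runtime, so the build-time closed-world analysis cannot see it without a reflection hint.

```java
import java.lang.reflect.Method;
import java.util.List;

public class DynamicCall {
    // The class name arrives at runtime, so native image's
    // closed-world analysis cannot include it automatically.
    static int addViaReflection(String className, Object element) throws Exception {
        Class<?> cls = Class.forName(className);
        Object list = cls.getDeclaredConstructor().newInstance();
        Method add = cls.getMethod("add", Object.class);
        add.invoke(list, element);
        return ((List<?>) list).size();
    }

    public static void main(String[] args) throws Exception {
        // Works on the JVM; in a native image this may fail at runtime
        // unless the target class is registered for reflection.
        System.out.println(addViaReflection("java.util.ArrayList", "hello"));
    }
}
```

On the JVM this just works; under native image the same call throws unless the class is covered by reachability metadata.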
## The friction
Every team considering native image hits these:
**Reflection configuration.** Every class accessed via reflection must be declared at build time. Spring Boot generates most of this automatically, but third-party libraries often need manual config.

**Dynamic proxies.** Interfaces used with Proxy.newProxyInstance must be known at build time. Some libraries assume the JVM's dynamic capabilities.

**Class loading tricks.** Anything that loads classes at runtime via Class.forName needs registration.

**Build times.** 3-10 minutes for a medium service. This slows iteration, and CI must be set up accordingly.

**Testing in native mode.** Some bugs only surface in native mode, so you need a dedicated test suite that runs against the native binary.
Spring Native (now called Spring Boot AOT) handles most of this automatically since Spring Boot 3. For new services starting in 2025, it’s less painful than it was two years ago.
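The dynamic-proxy friction point looks like this in miniature (interface and names are illustrative): the `Greeter` proxy class is generated at runtime on the JVM, which is exactly what native image cannot do unless the interface is declared at build time.

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Interface implemented at runtime by a JDK dynamic proxy.
    interface Greeter { String greet(String name); }

    static Greeter makeGreeter() {
        // The proxy class is generated on the fly on the JVM; a native
        // image must be told about Greeter in its proxy configuration.
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> "hello, " + args[0]);
    }

    public static void main(String[] args) {
        System.out.println(makeGreeter().greet("native"));
    }
}
```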
## A minimal native-ready service
pom.xml:

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
</plugin>
```

Build:

```shell
./mvnw native:compile -Pnative
```

Run:

```shell
./target/orders-service
# Started in 52 ms, RSS 67 MB
```

For Gradle:

```shell
./gradlew nativeCompile
```

Spring Boot AOT handles Spring-internal reflection. For application-specific classes using reflection:

```java
@RegisterReflectionForBinding({MyDto.class, OtherDto.class})
class NativeHints {}
```

Most Spring Boot services compile to native without much fuss in 2025. Non-Spring Java apps need more manual hinting.
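Outside Spring, GraalVM's own reachability metadata serves the same purpose. A minimal sketch of a reflect-config.json (the class name is a placeholder), conventionally placed under `src/main/resources/META-INF/native-image/`:

```json
[
  {
    "name": "com.example.MyDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```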
## Frameworks tailored for native
- Quarkus — native-first from day one, smallest footprint, fastest build
- Micronaut — native-first, AOT-focused
- Spring Boot AOT — added later, solid but not as optimized as Quarkus or Micronaut
If you’re starting a new service with native-first intent, Quarkus is worth considering. For existing Spring services, Spring Boot AOT is the incremental path.
## Monitoring native services
JVM tools often don’t apply. Replace:
- Heap dump → `jmap` doesn't work; GraalVM has separate tooling
- JFR → partially supported; improving but not full parity
- Prometheus metrics → work fine (Micrometer is JDK-agnostic)
- Distributed tracing → OpenTelemetry works with additional configuration
Plan monitoring before you go native, not after.
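For the tracing point, OpenTelemetry's standard environment variables apply to a native binary the same way they do to a JVM process; the service name and collector endpoint below are placeholders, not values from this article:

```shell
# Standard OTel env vars; values are illustrative placeholders.
export OTEL_SERVICE_NAME=orders-service
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
export OTEL_TRACES_EXPORTER=otlp
```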
## Build pipeline
Native builds are slow. Typical mitigations:
- Multi-stage Docker — build once, cache layers
- Builder images — pre-warmed GraalVM in CI
- Parallel builds — native image parallelizes well with RAM
- Conditional native — JVM build for dev + PR, native only for release
Keep iteration fast with JVM builds; ship native only for production artifacts.
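The multi-stage approach can be sketched as a Dockerfile; the image tags and the orders-service name are assumptions for illustration, not a canonical setup:

```dockerfile
# Stage 1: build the native binary in a GraalVM image (tag is illustrative).
FROM ghcr.io/graalvm/native-image-community:21 AS build
WORKDIR /app
COPY . .
RUN ./mvnw -Pnative native:compile

# Stage 2: copy only the binary into a minimal runtime image.
FROM gcr.io/distroless/base-debian12
COPY --from=build /app/target/orders-service /orders-service
ENTRYPOINT ["/orders-service"]
```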
## Honest assessment
Native image delivers big wins for specific workloads. It’s not the default answer for “make Java faster.”
My rule:
- Serverless / short-lived: native
- Traditional service running 24/7: JVM with virtual threads; native if startup or memory is a genuine constraint
- CLI tool: native
- Library: JVM-friendly, with native hints so downstream apps can choose
## Closing note
The Java ecosystem has two mature runtimes in 2025: the HotSpot JVM and GraalVM native. Both are excellent. The choice depends on your workload’s characteristics more than ideology. For most backend services, JVM with virtual threads is still the simpler, faster-iteration, richer-tooling choice. Native earns its slot where cold start or memory density genuinely matters. Know when each applies, and the tool chest becomes richer instead of confusing.