The REST vs gRPC question comes up on every new service. The honest answer is “depends”, but the conversation usually collapses into religion. This article lays out the real trade-offs so the choice becomes less about taste and more about the actual requirements of the call site.

Quick refresher

REST — HTTP/1.1 or HTTP/2, JSON payloads, URL-based endpoints, verbs (GET/POST/PUT/DELETE), status codes. Human-readable, easy to curl, universally supported.

gRPC — HTTP/2 only, Protocol Buffers (binary) payloads, RPC-style method calls from .proto schemas, code generation in every language. Faster, stronger contracts, less human-readable.

Both solve the same problem: call a function on another service. They make different trade-offs.
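As a concrete point of comparison, here is what that "call a function on another service" contract looks like in gRPC terms — a hypothetical user-lookup service, with all names and fields invented for illustration:

```proto
// user.proto — hypothetical contract for a user-lookup service.
syntax = "proto3";

package users.v1;

service UserService {
  // The equivalent of REST's GET /users/{id}.
  rpc GetUser (GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  int64 id = 1;
}

message GetUserResponse {
  int64 id = 1;
  string name = 2;
  string email = 3;
}
```

The REST version of the same call would be `GET /users/123` returning a JSON body — no schema file required, which is both its convenience and its risk.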

The honest trade-offs

| Dimension | REST | gRPC |
| --- | --- | --- |
| Payload size | Larger (JSON text) | Smaller (binary Protobuf) |
| Latency | Higher per call | Lower per call |
| Schema enforcement | Optional (OpenAPI) | Mandatory (.proto) |
| Code generation | Optional | Built-in |
| Streaming | Limited (SSE, WebSockets) | Native (uni/bi-directional) |
| Browser support | Native | Needs gRPC-Web proxy |
| Debug ergonomics | curl, Postman | grpcurl, BloomRPC |
| Load balancer support | Universal | L7 only (needs HTTP/2-aware) |
| Tooling maturity | Mature everywhere | Growing, strong in some langs |
| Learning curve | Everyone knows it | Moderate |
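The payload-size row is easy to demonstrate with nothing but the standard library. This is not real Protobuf — `struct.pack` is just a stand-in for "a fixed binary wire format" — but it shows the text-vs-binary gap the table is describing:

```python
import json
import struct

# The same record serialized two ways. struct.pack mimics a binary
# encoding with a fixed layout; real Protobuf adds field tags but
# stays in the same ballpark.
record = {"id": 123456, "active": True, "balance_cents": 1999}

as_json = json.dumps(record).encode("utf-8")

# Binary layout: 8-byte id, 1-byte bool, 8-byte balance = 17 bytes.
as_binary = struct.pack("<q?q", record["id"], record["active"],
                        record["balance_cents"])

print(len(as_json), len(as_binary))  # 53 17
```

Field names travel in the JSON bytes on every call; in a binary schema-based format they live in the schema instead, which is where most of the savings come from.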

When REST wins

  • Public or partner APIs. Anyone can hit REST with curl. gRPC requires special clients.
  • Browser clients. gRPC from the browser needs a gRPC-Web proxy — it’s possible but adds friction.
  • Small teams or early-stage products. The time saved on tooling and debugging outweighs performance gains you don’t yet need.
  • Human-debuggable traffic is important. When on-call debugging in production, seeing JSON in logs and being able to replay with curl is a real superpower.
  • Heterogeneous infrastructure. Any load balancer, any gateway, any CDN handles REST. gRPC needs specific support.

When gRPC wins

  • High-throughput internal services. Protobuf is 2-5× smaller than JSON on the wire. At scale, that’s real bandwidth and CPU savings.
  • Strong contracts across many teams. The .proto file is the contract. Breaking changes fail the build before they hit production.
  • Streaming. Bidirectional streams are first-class in gRPC. In REST, you’re bolting on SSE or WebSockets.
  • Polyglot environments. Proto-generated clients for every language beat writing HTTP clients by hand.
  • Low-latency requirements. For tight inner loops (e.g. service-to-service in a critical path), the lower per-call latency adds up.
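The streaming point is easier to see in code. In gRPC's Python bindings, a bidirectional stream is modeled as an iterator of requests in and an iterator of responses out. Here is a dependency-free sketch of that shape — the handler and messages are made up, and there is no network involved:

```python
from typing import Iterable, Iterator

# Sketch of gRPC's bidirectional-streaming programming model: the
# handler consumes an iterator of requests and yields responses as
# they become ready, interleaved with the incoming stream.
def echo_chat(requests: Iterable[str]) -> Iterator[str]:
    for msg in requests:
        yield f"ack: {msg}"

# Client side: send a stream, consume a stream.
replies = list(echo_chat(iter(["hello", "world"])))
print(replies)  # ['ack: hello', 'ack: world']
```

Getting the same interaction over REST means reaching for SSE or WebSockets, both of which sit outside the request/response model rather than inside it.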

The real world: use both

Most production systems I’ve seen have this shape:

   [Browser / Mobile / Partner]

              ▼  REST / JSON
        [API Gateway]

              ▼  gRPC / Protobuf
  [internal services mesh]

              ▼  (various)
         [databases, queues]

  • REST at the edge — because clients, browsers, and partners speak it.
  • gRPC internally — because the performance and contract guarantees matter more when services call each other.

The gateway translates between the two. This is the default for most large platforms.
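A minimal sketch of what that translation layer does, with stdlib only — the method name and field names are hypothetical, and a real gateway would build a generated Protobuf message and call a stub rather than return a dict:

```python
import json

# Edge-to-internal translation: accept a REST/JSON body, validate it,
# and produce the typed request an internal (gRPC-style) client expects.
def rest_to_internal(raw_body: bytes) -> dict:
    payload = json.loads(raw_body)
    if "user_id" not in payload:
        raise ValueError("missing user_id")
    # In a real gateway this would be something like
    # user_stub.GetUser(GetUserRequest(id=...)).
    return {"method": "UserService.GetUser", "id": int(payload["user_id"])}

req = rest_to_internal(b'{"user_id": "123"}')
print(req)  # {'method': 'UserService.GetUser', 'id': 123}
```

The useful property is that validation and coercion happen once, at the edge, so internal services only ever see well-typed requests.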

What most teams get wrong

“gRPC is faster, so it’s better.” Faster per call, yes. But if your call site is idle 99% of the time, the extra 2 ms you save is meaningless. REST’s simplicity wins when performance isn’t the bottleneck.

“REST is simpler, so it’s better.” True until the contract drifts. An undocumented REST API is a nightmare at 30 services. gRPC forces the contract.

“We must pick one and use it everywhere.” You really don’t. It’s fine to expose a REST API externally and use gRPC between three specific services that call each other in a hot path.

“REST just means HTTP + JSON.” Not quite — REST implies a resource-oriented design (GET /users/123, not GET /getUser?id=123). Most “REST” APIs are actually RPC-over-HTTP. That’s fine, but if you’re going RPC-style anyway, gRPC might be a better fit.
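To make the distinction concrete, the same three operations in each style:

```
Resource-oriented (REST proper):     RPC-over-HTTP (what most APIs are):
  GET    /users/123                    POST /getUser     {"id": 123}
  POST   /users                        POST /createUser  {...}
  DELETE /users/123                    POST /deleteUser  {"id": 123}
```

The right-hand column is already an RPC interface in everything but name — which is the argument for considering gRPC once you find yourself there.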

Performance numbers that matter

Rough order-of-magnitude on typical backend calls:

  • REST JSON: 1-5 ms p50, 10-30 ms p99, payload 500 bytes - 5 KB
  • gRPC Protobuf: 0.5-2 ms p50, 5-15 ms p99, payload 200 bytes - 2 KB

Real savings appear at thousands of RPS per service. Below that, infrastructure latency (DNS, TCP setup, load balancing) dominates and the protocol choice barely matters.
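A back-of-the-envelope using the rough numbers above — a 3 KB JSON payload versus a ~1 KB binary one at 5,000 RPS (both figures are illustrative midpoints, not measurements):

```python
# Bandwidth saved by the smaller payload at a given request rate.
rps = 5_000
json_bytes, proto_bytes = 3_000, 1_000

saved_per_sec = rps * (json_bytes - proto_bytes)   # bytes/second
saved_mb_per_sec = saved_per_sec / 1_000_000

print(saved_mb_per_sec)  # 10.0 MB/s of wire traffic saved
```

At 50 RPS the same arithmetic yields 0.1 MB/s — noise compared to everything else on the wire, which is the point of the paragraph above.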

Practical decision framework

Answer these, in order:

  1. Is this an external API? → REST. Move on.
  2. Does a browser call it directly? → REST. Move on.
  3. Is throughput > 5k RPS or latency budget < 10 ms? → gRPC is worth considering.
  4. Do multiple teams / languages consume it? → gRPC’s generated clients save time.
  5. Otherwise → use whichever your team already knows. The marginal technical difference is small; the human difference is real.
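The checklist above, encoded literally. The thresholds come straight from the framework; the parameter names are mine, and this is a decision aid rather than a rule engine:

```python
# Walks the five questions in order and returns at the first match.
def pick_protocol(external_api: bool, browser_client: bool,
                  rps: int, latency_budget_ms: float,
                  polyglot_consumers: bool, team_default: str) -> str:
    if external_api:
        return "REST"        # 1. external API
    if browser_client:
        return "REST"        # 2. browser calls it directly
    if rps > 5_000 or latency_budget_ms < 10:
        return "gRPC"        # 3. throughput or latency pressure
    if polyglot_consumers:
        return "gRPC"        # 4. many teams / languages
    return team_default      # 5. the human factor wins

print(pick_protocol(False, False, 200, 50.0, False, "REST"))  # REST
```

Note that question 1 short-circuits everything else: an external API at 50k RPS is still REST, which matches how the framework is ordered.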

Closing note

Neither protocol is a silver bullet. The mistake is treating the choice as ideological. Pick per call site. Use both when it makes sense. Keep the internal contract strong and the external surface friendly — and the REST vs gRPC question becomes a small implementation detail instead of an architecture debate.