Retries happen. Networks drop responses. Message brokers deliver at-least-once. Users double-click buttons. If your write endpoint isn’t idempotent, all of these cause duplicates — charged twice, shipped twice, emailed twice. This article is about preventing that.
Definition
An operation is idempotent if executing it multiple times produces the same result as executing it once. GET is naturally idempotent. PUT and DELETE are supposed to be. POST usually isn’t — and that’s where duplicates come from.
Why it matters now more than ever
Modern distributed systems retry aggressively. Gateways retry. Service meshes retry. Kafka delivers at-least-once. A single user click might result in 2-5 backend attempts, and the only sane strategy is to make the write idempotent so duplicates don’t materialize.
The standard design — idempotency keys
Client generates a unique key per logical request. Sends it with the request. Server stores the result keyed by that value. If a duplicate arrives, server returns the stored result without re-executing.
```
POST /payments
Idempotency-Key: req-7a9b-2024-01-15-orderA
Body: { "amount": 4999, "currency": "USD", ... }
```

Server logic:
```java
@PostMapping("/payments")
public ResponseEntity<PaymentResponse> charge(
        @RequestHeader("Idempotency-Key") String key,
        @RequestBody ChargeRequest req) {
    var existing = idempotencyStore.findByKey(key);
    if (existing.isPresent()) {
        return ResponseEntity.ok(existing.get().response());
    }
    PaymentResponse result = paymentService.charge(req);
    idempotencyStore.save(key, req, result);
    return ResponseEntity.ok(result);
}
```

Two subtle requirements:
- The lookup + save must be atomic. Two concurrent requests with the same key must not both proceed. Use `INSERT ... ON CONFLICT DO NOTHING` plus a check of the returned row, or a distributed lock per key.
- Validate the request hash. Same idempotency key + different body = probably a bug in the client. Return 422, not the original result.
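The atomic claim can be sketched in memory with `ConcurrentHashMap.putIfAbsent`, which plays the same role as `INSERT ... ON CONFLICT DO NOTHING`: exactly one caller wins the key, every other caller sees the existing entry. The class and method names here are illustrative, not from any framework.

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory sketch: putIfAbsent is the atomic "claim" step,
// standing in for INSERT ... ON CONFLICT DO NOTHING on the keys table.
public class IdempotencyStore {
    private final ConcurrentHashMap<String, String> results = new ConcurrentHashMap<>();

    // Returns null if we won the claim and should execute the operation;
    // otherwise returns whatever is stored for the duplicate request.
    public String claimOrGet(String key, String pendingMarker) {
        return results.putIfAbsent(key, pendingMarker);
    }

    public void saveResult(String key, String response) {
        results.put(key, response);
    }

    public static void main(String[] args) {
        var store = new IdempotencyStore();
        String first = store.claimOrGet("req-1", "PENDING");   // null: we won the claim
        String second = store.claimOrGet("req-1", "PENDING");  // "PENDING": duplicate blocked
        store.saveResult("req-1", "charged:4999");
        String third = store.claimOrGet("req-1", "PENDING");   // replays stored result
        System.out.println(first + "|" + second + "|" + third);
    }
}
```

A real implementation would also handle the "claimed but crashed before saving" case, e.g. by expiring PENDING entries.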
A sound schema
```sql
CREATE TABLE idempotency_keys (
    key             TEXT PRIMARY KEY,
    request_hash    TEXT NOT NULL,
    response_body   JSONB,
    response_status INTEGER,
    created_at      TIMESTAMPTZ NOT NULL DEFAULT now(),
    expires_at      TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_idempotency_expires ON idempotency_keys(expires_at);
```

`expires_at` lets you clean up old keys periodically. Typical retention: 24-48 hours.
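Assuming the schema above, the periodic cleanup can be a single statement run on a schedule:

```sql
-- Run from cron, pg_cron, or similar; removes only keys past retention.
DELETE FROM idempotency_keys WHERE expires_at < now();
```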
Key generation strategies
Client-supplied UUID. Simplest and most flexible. Client creates a UUID per action, reuses it on retries. Standard approach in SDKs (Stripe, Twilio).
Derived from request. hash(userId + actionType + naturalBusinessKey). Works when the client has no retry-safe state but you have a business key that identifies the action uniquely. Careful: parameter order matters, any unrelated change creates a new key.
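A minimal sketch of a request-derived key: hash a canonical string with a fixed field order and an explicit separator, so the same logical action always yields the same key and fields cannot bleed into each other. The field names and separator are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Derive an idempotency key from the request's identifying fields.
public class DerivedKey {
    static String idempotencyKey(String userId, String actionType, String businessKey) {
        try {
            // Fixed order + newline separator: canonical form of the action.
            String canonical = userId + "\n" + actionType + "\n" + businessKey;
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(canonical.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String a = idempotencyKey("user-42", "charge", "order-12345");
        String b = idempotencyKey("user-42", "charge", "order-12345");
        String c = idempotencyKey("user-42", "refund", "order-12345");
        System.out.println(a.equals(b)); // same action, same key
        System.out.println(a.equals(c)); // different action type, different key
    }
}
```

Note how the action type is part of the hash input: that is what prevents the key-reuse-across-actions mistake described later.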
Per-button-click UUID. Frontends generate a UUID when a button is mounted, send it on submission. Double-click = same key = one operation.
Idempotent event handlers
Same principle, different mechanism. Kafka consumers check “have I processed this event ID?” before acting:
```java
@KafkaListener(topics = "order.placed", groupId = "fulfillment")
@Transactional
public void on(OrderPlaced event) {
    if (processedRepo.existsByEventId(event.id())) return;
    shipmentService.create(event.orderId());
    processedRepo.save(new ProcessedEvent(event.id(), Instant.now()));
}
```

The check and the action are in the same transaction. Duplicates short-circuit.
Operations that are naturally idempotent
Some designs are idempotent by nature, no key needed:
- PUT to a resource with a client-chosen ID. `PUT /orders/order-12345` creates or updates that specific order. Retries land on the same record.
- Merge operations. “Set balance to X” is idempotent. “Add X to balance” is not.
- Set operations. `SET role = 'admin'` is idempotent. `INCREMENT visits` is not.
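The set-versus-increment distinction can be made concrete in a few lines (toy methods, not a real API):

```java
// "Set to X" converges under retries; "add X" accumulates duplicates.
public class SetVsAdd {
    static int setBalance(int balance, int x) { return x; }             // idempotent
    static int addToBalance(int balance, int x) { return balance + x; } // not idempotent

    public static void main(String[] args) {
        int a = 0, b = 0;
        for (int i = 0; i < 3; i++) {  // simulate three deliveries of one message
            a = setBalance(a, 100);
            b = addToBalance(b, 100);
        }
        System.out.println(a); // 100: same result as a single delivery
        System.out.println(b); // 300: the duplicates materialized
    }
}
```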
Design toward these when you can.
Common mistakes
Assuming message brokers don’t re-deliver. They do. Always. Design for it.
Key reuse across different actions. Same idempotency key for a refund and a charge = chaos. Keys must be per-operation-type.
Forgetting to store failed results. If the first attempt failed with “insufficient funds”, the retry should also return “insufficient funds” — not try to charge again. Store both success and business-failure responses.
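A sketch of why failed results must be stored, using a hypothetical in-memory endpoint: the retry replays the original “insufficient funds” outcome even though circumstances have changed since.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The store keeps business failures alongside successes, so a retry
// replays the recorded outcome instead of re-attempting the charge.
public class FailureReplay {
    record StoredResponse(int status, String body) {}
    private final Map<String, StoredResponse> store = new ConcurrentHashMap<>();

    StoredResponse charge(String key, int amount, int balance) {
        StoredResponse existing = store.get(key);
        if (existing != null) return existing;          // replay, success or failure
        StoredResponse result = (amount > balance)
                ? new StoredResponse(402, "insufficient funds")
                : new StoredResponse(200, "charged");
        store.put(key, result);                         // store BOTH outcomes
        return result;
    }

    public static void main(String[] args) {
        var api = new FailureReplay();
        System.out.println(api.charge("k1", 5000, 1000).body()); // first attempt fails
        // The balance has grown since, but the retry with the same key
        // must replay the original outcome, not charge anew:
        System.out.println(api.charge("k1", 5000, 9999).body());
    }
}
```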
Storing too short a window. User reloads 2 days later, retries — no record, duplicate charge. Default to 24-48 hours, longer for financial operations.
Lock timeout bugs. Two concurrent requests: the first acquires the lock, the operation runs slow, the client times out and retries, and the retry then blocks behind the still-held lock. Set reasonable lock timeouts with an explicit fallback.
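One shape the explicit fallback can take, sketched with `ReentrantLock.tryLock`: bound the wait and return a retry signal instead of queuing indefinitely behind a slow first attempt. The handler and return values are illustrative.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Bounded lock acquisition: fail fast instead of blocking forever.
public class BoundedLock {
    private final ReentrantLock lock = new ReentrantLock();

    String handle(String key) throws InterruptedException {
        if (!lock.tryLock(2, TimeUnit.SECONDS)) {
            // The first attempt is still running: tell the client to retry
            // later rather than piling threads up behind the lock.
            return "409 retry-later";
        }
        try {
            return "200 processed " + key;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        var h = new BoundedLock();
        System.out.println(h.handle("req-1"));
    }
}
```

In a distributed setup the same idea applies to the lock's TTL: it must outlive the slowest legitimate operation, or a retry can acquire the lock while the original is still executing.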
Idempotency and compensation
Sagas rely on compensations. Those must be idempotent too. Releasing an already-released reservation should be a no-op, not an error. Refunding an already-refunded charge should succeed.
Without this, saga retries cause double-compensations — a whole class of nasty bugs.
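An idempotent compensation can be as small as a set membership check; this sketch (names are illustrative) makes a second release a no-op rather than an error:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Releasing an already-released reservation is a no-op, not an error.
public class ReservationService {
    private final Set<String> released = ConcurrentHashMap.newKeySet();

    // Returns true only on the first release; later calls change nothing.
    boolean release(String reservationId) {
        return released.add(reservationId);
    }

    public static void main(String[] args) {
        var svc = new ReservationService();
        System.out.println(svc.release("res-1")); // true: released now
        System.out.println(svc.release("res-1")); // false: already released, no error
    }
}
```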
Testing idempotency
Every write endpoint test should include:
- Call with key K, assert success
- Call again with key K and same body, assert same response, verify no duplicate effect
- Call with key K and different body, assert 422
- Call concurrently with same key twice, assert only one succeeded
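The first three checks above can be sketched without a test framework, against a hypothetical in-memory endpoint; the concurrent case additionally needs threads hitting the same key. The endpoint and field names here are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java sketch of the duplicate-request checks.
public class IdempotencyChecks {
    private final Map<String, String> responses = new ConcurrentHashMap<>();
    private final Map<String, String> bodies = new ConcurrentHashMap<>();
    private int effects = 0; // how many times the real operation ran

    String post(String key, String body) {
        String knownBody = bodies.putIfAbsent(key, body);
        if (knownBody != null && !knownBody.equals(body)) return "422";
        return responses.computeIfAbsent(key, k -> { effects++; return "200 charged"; });
    }

    public static void main(String[] args) {
        var api = new IdempotencyChecks();
        String first = api.post("K", "{amount:4999}");
        String retry = api.post("K", "{amount:4999}");
        String mismatch = api.post("K", "{amount:1}");
        assert first.equals("200 charged");
        assert retry.equals(first);    // same response on retry, and...
        assert api.effects == 1;       // ...no duplicate effect
        assert mismatch.equals("422"); // same key, different body
        System.out.println("all checks passed");
    }
}
```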
If your test suite doesn’t cover these, idempotency is a hope, not a guarantee.
Closing note
Idempotency sounds like a niche concern until you watch a duplicate charge incident unfold. The pattern is cheap to add, costs nothing at scale, and prevents a whole category of data integrity bugs. Make it part of every write endpoint from day one — not because of principle, but because the alternative is eventually explaining to a customer why they were charged twice.