First-Come-First-Served Systems Explained: 6 Implementation Strategies and How to Choose


Introduction

In the previous post, we covered the limits of FOR UPDATE. When 100 users hit the endpoint simultaneously, 99 wait in line, the connection pool gets exhausted, and deadlock risk follows. DB locks alone can’t handle a high-traffic first-come-first-served system.

This post steps back to see the big picture — what problems does a FCFS system need to solve, what options exist, and when should you use each one?


1. What Is a First-Come-First-Served System?

Concert ticket sales, limited-edition sneaker drops, flash deals — these are all systems that allocate a fixed quantity to whoever requests first. It sounds simple, but when thousands of requests flood in simultaneously, three core problems emerge.

The 3 Core Problems

| Problem | Description | What happens if unsolved? |
| --- | --- | --- |
| Concurrency control | Thousands try to deduct the same stock at the same instant | 100 items in stock, 150 people purchase successfully (overselling) |
| Accurate stock deduction | Non-atomic deduction operations cause miscounts | Stock goes negative, or two requests read the same value and only one deduction registers |
| Duplicate prevention | Same user submits multiple requests and wins multiple times | One person takes 10 items |

Analogy: A bakery selling 100 limited loaves. Without a line (no concurrency control), people scramble and grab. Without counting (deduction errors), person #101 gets a loaf too. Without checking tickets (no duplicate prevention), one person lines up multiple times.


2. The 6 Implementation Strategies

2.1 DB Pessimistic Lock (SELECT FOR UPDATE)

Core principle: Lock the stock row first, deduct, then release — processing one request at a time.

| Step | Client | Server (DB) | Status |
| --- | --- | --- | --- |
| 1 | Purchase request | SELECT stock FOR UPDATE → row locked | 🔒 |
| 2 | | Check stock > 0 → UPDATE stock = stock - 1 | ✅ Deducted |
| 3 | | COMMIT → lock released | 🔓 |
| 4 | Next request | Can now acquire lock → repeat | |

| Pros | Cons |
| --- | --- |
| No additional infrastructure (just the DB) | Concurrent requests are serialized (99 wait) |
| Simple implementation | Deadlock risk |
| Strong data consistency | DB connection pool exhaustion risk |

Best for: Under a few dozen concurrent users, early-stage services without extra infrastructure
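The lock → check → deduct → commit flow above can be sketched in a few lines. This is a minimal illustration, not the post's reference implementation: it uses SQLite, which has no SELECT ... FOR UPDATE, so BEGIN IMMEDIATE (a database-wide write lock) stands in for the row lock. On PostgreSQL or MySQL the SELECT would read `SELECT stock FROM items WHERE id = ? FOR UPDATE`. Table and column names are illustrative.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we can
# manage the transaction boundaries (BEGIN/COMMIT/ROLLBACK) explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 2)")

def purchase(item_id):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock (stand-in for FOR UPDATE)
    try:
        (stock,) = conn.execute(
            "SELECT stock FROM items WHERE id = ?", (item_id,)).fetchone()
        if stock <= 0:
            conn.execute("ROLLBACK")  # sold out: release the lock, change nothing
            return False
        conn.execute("UPDATE items SET stock = stock - 1 WHERE id = ?", (item_id,))
        conn.execute("COMMIT")        # releases the lock for the next waiter
        return True
    except Exception:
        conn.execute("ROLLBACK")
        raise

print([purchase(1) for _ in range(3)])  # stock of 2 → [True, True, False]
```

Because every request must pass through the same lock, correctness is easy to reason about, but throughput is bounded by one-transaction-at-a-time, exactly as the cons column says.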

2.2 Redis Atomic Operations (DECR)

Core principle: Redis DECR executes atomically on a single thread, so stock deduction is safe without locks.

| Step | Client | Server (Redis → DB) | Status |
| --- | --- | --- | --- |
| 1 | Purchase request | DECR stock_key → check result | |
| 2 | | Result ≥ 0 → purchase success, save order to DB | ✅ |
| 3 | | Result < 0 → INCR stock_key (rollback) → sold out response | ❌ |

| Pros | Cons |
| --- | --- |
| Tens of thousands of TPS | Data loss risk on Redis failure |
| Atomic without locks | Need to manage Redis-DB data consistency |
| Relatively simple implementation | Rollback logic needed (when result < 0) |

Best for: Hundreds to thousands of concurrent users, services where fast response matters
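Here is the DECR → check → rollback flow as a sketch. To keep it runnable without a Redis server, the FakeRedis class below is an illustrative in-memory stand-in whose per-command lock emulates Redis running commands one at a time on a single thread; with real Redis you would call redis-py's `r.decr` / `r.incr` on the same key.

```python
import threading

class FakeRedis:
    """Illustrative stand-in: each command runs under one lock, mirroring
    how real Redis executes commands one at a time."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}
    def set(self, key, value):
        with self._lock:
            self._data[key] = value
    def decr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) - 1
            return self._data[key]
    def incr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

r = FakeRedis()
r.set("stock:item42", 3)   # key name is illustrative
orders = []

def try_purchase(user_id):
    remaining = r.decr("stock:item42")  # atomic: no two requests see the same value
    if remaining < 0:
        r.incr("stock:item42")          # went past zero → roll the counter back
        return False
    orders.append(user_id)              # success: persist the order (DB in real life)
    return True

threads = [threading.Thread(target=try_purchase, args=(f"user{i}",)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(len(orders))  # exactly 3 of the 5 concurrent requests succeed
```

Note the rollback branch: plain DECR happily drives the counter negative, which is why the check-and-INCR step is mandatory — and why the Lua approach in the next section exists.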

2.3 Redis + Lua Script

Core principle: Bundle stock check and deduction into a single Lua script that executes atomically in Redis. Unlike plain DECR, “check → deduct” runs as one unit, eliminating negative stock entirely.

| Step | Client | Server (Redis → DB) | Status |
| --- | --- | --- | --- |
| 1 | Purchase request | Lua script execution starts (atomic) | 🔒 |
| 2 | | Check stock > 0 + check duplicate + DECR → all at once | |
| 3 | | Result: success → save order to DB, failure → immediate response | ✅ / ❌ |

| Pros | Cons |
| --- | --- |
| Stock check + deduction + duplicate check in one atomic op | Lua script debugging is difficult |
| Negative stock is impossible | Data loss risk on Redis failure |
| No rollback logic needed (check-then-deduct) | Long scripts can block Redis |

Best for: High traffic + want to handle duplicate prevention at the Redis level
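The "check duplicate → check stock → deduct" unit can look roughly like the Lua below, which would be run via redis-py's `r.eval(LUA, 2, stock_key, buyers_key, user_id)`. The key names and the -1/-2 return codes are our own illustrative convention, not a standard. Since the script needs a live Redis server, an in-memory Python mirror of the same logic follows, with a lock standing in for Redis executing the script atomically.

```python
import threading

LUA = """
if redis.call('SISMEMBER', KEYS[2], ARGV[1]) == 1 then return -2 end
local stock = tonumber(redis.call('GET', KEYS[1]))
if stock <= 0 then return -1 end
redis.call('DECR', KEYS[1])
redis.call('SADD', KEYS[2], ARGV[1])
return stock - 1
"""

# In-memory mirror of the script, runnable without Redis.
_lock = threading.Lock()
stock = {"stock:item42": 2}
buyers = set()

def lua_purchase(user_id):
    with _lock:                          # stands in for atomic script execution
        if user_id in buyers:
            return -2                    # duplicate request from the same user
        if stock["stock:item42"] <= 0:
            return -1                    # sold out — stock never goes negative
        stock["stock:item42"] -= 1
        buyers.add(user_id)
        return stock["stock:item42"]     # remaining stock on success

print(lua_purchase("alice"), lua_purchase("alice"), lua_purchase("bob"))  # 1 -2 0
```

Because the check happens before the decrement inside one atomic unit, there is nothing to roll back: a request either wins cleanly or is rejected with a reason code.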

2.4 Message Queue (Kafka / RabbitMQ)

Core principle: Push purchase requests into a queue, and consumers process them one by one in order. The requests themselves are serialized.

| Step | Client | Server (Queue → Consumer → DB) | Status |
| --- | --- | --- | --- |
| 1 | Purchase request | Publish message to queue → immediate “accepted” response | 📨 |
| 2 | | Consumer processes messages in order | |
| 3 | | Check stock → deduct → create order | ✅ / ❌ |
| 4 | Check result (polling / websocket) | Deliver processing result | 📬 |

| Pros | Cons |
| --- | --- |
| Handles traffic spikes (acts as a buffer) | Asynchronous response (no immediate result) |
| Server load distribution | High implementation complexity (queue + consumer + result delivery) |
| Scalable by adding consumers | Additional infrastructure required (Kafka / RabbitMQ) |

Best for: Massive traffic, cases where immediate response isn’t critical (coupon issuance, event entries)
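The four steps above can be sketched in-process with Python's `queue.Queue` standing in for Kafka/RabbitMQ; all names (`handle_request`, `results`, the sentinel shutdown) are illustrative demo plumbing, not a real broker setup. `handle_request` answers "accepted" immediately, a single consumer deducts stock in arrival order, and clients would later poll `results` for the outcome.

```python
import queue
import threading

orders = queue.Queue()
results = {}          # request_id -> "success" / "sold_out" (what polling returns)
stock = 2

def handle_request(request_id):
    orders.put(request_id)          # publish to the queue
    return "accepted"               # respond before the purchase is processed

def consumer():
    global stock
    while True:
        request_id = orders.get()
        if request_id is None:      # shutdown sentinel for this demo
            break
        if stock > 0:
            stock -= 1              # one consumer → deductions are serialized
            results[request_id] = "success"
        else:
            results[request_id] = "sold_out"

worker = threading.Thread(target=consumer)
worker.start()
for rid in ("r1", "r2", "r3"):
    assert handle_request(rid) == "accepted"
orders.put(None)
worker.join()
print(results)  # {'r1': 'success', 'r2': 'success', 'r3': 'sold_out'}
```

Notice that all three clients got "accepted" instantly even though only two could win — that immediate-ack-then-async-result split is both the strategy's strength (buffering) and its main UX cost.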

2.5 Waiting Queue

Core principle: Place users in a queue with assigned positions, then admit them to the purchase page when their turn comes. The “N people ahead of you” pattern seen on Ticketmaster and similar platforms.

| Step | Client | Server (Redis Sorted Set) | Status |
| --- | --- | --- | --- |
| 1 | Connect | ZADD queue timestamp userId → enter queue | 🕐 Waiting |
| 2 | “342 people ahead” display | ZRANK queue userId → check current position | |
| 3 | Turn reached | Remove from queue → allow entry to purchase page | 🎫 Admitted |
| 4 | Proceed to purchase | Stock deduction (Redis or DB) | ✅ / ❌ |

| Pros | Cons |
| --- | --- |
| Keeps server load constant | User waiting experience (UX cost) |
| Fair ordering guaranteed | High implementation complexity (queue + admission control + expiry) |
| Distributes traffic into two phases | Need to handle abandoned slots |

Best for: Tens of thousands of concurrent users, ticketing/booking systems where fair ordering matters
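A rough in-memory sketch of the sorted-set mechanics: `join_queue`, `position`, and `admit_next` below mirror ZADD, ZRANK, and ZPOPMIN respectively (the function names are our own). With real Redis the score would be the arrival timestamp rather than a counter, and admission control plus slot expiry would sit on top of this core.

```python
import itertools

_arrival = itertools.count()
waiting = {}                         # user_id -> score (arrival order)

def join_queue(user_id):             # ~ ZADD queue <timestamp> <user_id>
    waiting.setdefault(user_id, next(_arrival))

def position(user_id):               # ~ ZRANK: how many joined before me
    mine = waiting[user_id]
    return sum(1 for score in waiting.values() if score < mine)

def admit_next():                    # ~ ZPOPMIN: admit the earliest waiter
    user_id = min(waiting, key=waiting.get)
    del waiting[user_id]
    return user_id                   # this user may now enter the purchase page

for user in ("alice", "bob", "carol"):
    join_queue(user)
print(position("carol"))   # 2 people ahead
print(admit_next())        # alice — first in, first admitted
print(position("carol"))   # now 1 person ahead
```

The queue only controls *who may enter when*; the actual stock deduction in step 4 still needs one of the earlier strategies behind it.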

2.6 Token Issuance

Core principle: Issue entry tokens first, and only token holders can purchase. Traffic is completely separated into “token issuance” and “actual purchase” phases.

| Step | Client | Server | Status |
| --- | --- | --- | --- |
| 1 | Request token | Token server: issue tokens up to stock quantity → reject when exceeded | 🎟️ / ❌ |
| 2 | Purchase with token | Purchase server: validate token → deduct stock → create order | |
| 3 | | Mark token as expired/used | 🔒 |

| Pros | Cons |
| --- | --- |
| Purchase server doesn’t get traffic spikes | Requires separate token server + purchase server |
| Token count = stock count → overselling impossible | Need token expiry/fraud prevention logic |
| Load distribution via server separation | User experience split into 2 steps |

Best for: Limited-edition sales, pre-orders, large-scale systems needing physical traffic separation
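The two-phase flow reduces to a small invariant: never issue more tokens than stock, and never honor a token twice. The sketch below shows that invariant in one process; in production the two sides would be separate services with token expiry and signing, and the names (`issue_token`, `purchase`, `STOCK`) are illustrative.

```python
import secrets

STOCK = 2
issued = set()        # valid, not-yet-used tokens
used = set()

def issue_token():
    if len(issued) + len(used) >= STOCK:
        return None                        # tokens exhausted == stock exhausted
    token = secrets.token_hex(8)           # opaque, unguessable token
    issued.add(token)
    return token

def purchase(token):
    if token not in issued:
        return False                       # invalid, expired, or already used
    issued.remove(token)
    used.add(token)                        # mark used → no double spend
    return True

t1, t2 = issue_token(), issue_token()
print(issue_token())     # None — only as many tokens as there is stock
print(purchase(t1))      # True
print(purchase(t1))      # False — token already consumed
```

Because the purchase server sees only as many requests as there are tokens, overselling is ruled out structurally rather than by a runtime stock check.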


3. Side-by-Side Comparison

| Strategy | Throughput (TPS) | Complexity | Extra Infra | Data Consistency | Scale |
| --- | --- | --- | --- | --- | --- |
| DB Pessimistic Lock | Low (tens to hundreds) | ⭐ | None | High | Small |
| Redis DECR | High (tens of thousands) | ⭐⭐ | Redis | Medium | Medium |
| Redis + Lua | High (tens of thousands) | ⭐⭐⭐ | Redis | High | Medium to Large |
| Message Queue | High (scalable) | ⭐⭐⭐⭐ | Kafka/RabbitMQ | High | Large |
| Waiting Queue | High (controllable) | ⭐⭐⭐⭐ | Redis | High | Large |
| Token Issuance | High (distributed) | ⭐⭐⭐⭐ | Token server | High | Large |

Throughput isn’t a simple number comparison. DB locks are low because they process “one at a time, in order.” Redis strategies are high because they’re “atomic without locks.” Message queues and waiting queues are “scalable” because you can add consumers to adjust throughput.


4. How to Choose?

Decision Flow

| Step | Question | Answer → Direction |
| --- | --- | --- |
| 1 | How many concurrent users? | A few dozen → DB pessimistic lock is enough |
| 2 | Can you add infrastructure (Redis)? | No → DB pessimistic lock |
| 3 | Is immediate response required? | Yes → Redis DECR or Redis + Lua |
| 4 | Want duplicate prevention at Redis level? | Yes → Redis + Lua |
| 5 | Tens of thousands of users, async response OK? | Yes → Message queue |
| 6 | Is fair waiting order important? | Yes → Waiting queue |
| 7 | Need to physically separate purchase traffic? | Yes → Token issuance |

Real Systems Combine Strategies

Production systems rarely use just one approach. Examples:

| System | Combination |
| --- | --- |
| Flash deals | Waiting queue (admission control) + Redis (stock deduction) |
| Concert ticketing | Waiting queue (ordering) + Token (entry rights) + DB lock (final payment) |
| Limited-edition sneakers | Token issuance (bot prevention) + Redis + Lua (stock deduction) |
| Small event coupons | Redis DECR alone is enough |

The key question isn’t “which strategy is best” but “which fits our situation.” Bringing in Kafka for 50 concurrent users is over-engineering. Using only DB locks for 100,000 concurrent ticket buyers will crash your servers.


Summary

| Key Point | Details |
| --- | --- |
| 3 core problems of FCFS | Concurrency control, accurate stock deduction, duplicate prevention |
| 6 strategies | DB lock, Redis DECR, Redis + Lua, message queue, waiting queue, token issuance |
| Choose by scale | Small → DB lock, Medium → Redis, Large → queues/waiting/tokens |
| Real systems combine | Waiting queue + Redis, token + DB lock, etc. |
| Avoid over-engineering | Start with the simplest approach that fits your traffic and infrastructure |

The next post covers Part 4: Implementing FCFS with DB Locks. We’ll start with the simplest approach, build it in code, and verify its limits with concurrency tests.

This post is part of the Coupang Partners program, and a commission is earned from qualifying purchases.