Lightweight Python message queuing with Redis and built-in publish-side deduplication. Deduplicate publishes within a TTL window, with optional crash recovery — across any number of producers and consumers.
```shell
pip install "redis-message-queue>=3.0.0,<4.0.0"
```

Requires Redis server >= 6.2.
```python
from redis import Redis
from redis_message_queue import RedisMessageQueue

client = Redis.from_url("redis://localhost:6379/0")
queue = RedisMessageQueue("my_queue", client=client, deduplication=True)

queue.publish("order:1234")      # returns True
queue.publish("order:1234")      # returns False (deduplicated)
queue.publish({"user": "alice"}) # dicts work too
```

```python
from redis import Redis
from redis_message_queue import RedisMessageQueue

client = Redis.from_url("redis://localhost:6379/0", decode_responses=True)
queue = RedisMessageQueue("my_queue", client=client)

while True:
    with queue.process_message() as message:
        if message is not None:
            print(f"Processing: {message}")
            # Auto-acknowledged on success; cleaned up on exception
```

The problem: You're sending messages between services or workers and need guarantees. Simple Redis LPUSH/BRPOP loses messages on crashes, doesn't deduplicate, and gives you no visibility into what succeeded or failed.
The solution: Atomic Lua scripts for publish + dedup, a processing queue for in-flight tracking (with optional crash recovery via visibility timeouts), and optional success/failure logs for observability.
| Feature | Details |
|---|---|
| Deduplicated publish | Lua-scripted atomic SET NX + LPUSH prevents duplicate enqueues within a configurable TTL window (default: 1 hour), even with producer retries. Supports custom key functions for content-based deduplication. Note: deduplication is publish-side only and does not prevent duplicate delivery under at-least-once visibility-timeout reclaim |
| Visibility-timeout redelivery | Crashed or stalled consumers' messages are reclaimed and redelivered when a visibility timeout is configured |
| Success & failure logs | Optional completed/failed queues for auditing and reprocessing, with configurable max length to prevent unbounded growth |
| Dead-letter queue | Poison messages that exceed a configurable delivery count are automatically routed to a dead-letter queue instead of being redelivered indefinitely |
| Graceful shutdown | Built-in interrupt handler lets consumers finish current work before stopping |
| Lease heartbeats | Optional background lease renewal keeps long-running handlers from being redelivered prematurely |
| Connection retries | Exponential backoff with jitter for Redis operations (deduplicated publish, ack, lease renewal). Publish and cleanup paths use replay markers so retryable connection drops preserve the original result within the same call. Message-claim paths use idempotent Lua claim IDs plus persisted claim metadata so retryable errors can recover the original claim safely, either in the same wait call or on the next call from the same gateway instance if the original wait had to give up before Redis became reachable again. Active waits keep their in-flight claim IDs private until they exit, so a concurrent caller on the same gateway instance cannot recover the same claim twice. Timed waits also stay bounded: once the configured wait window expires, the queue only replays persisted state for that same claim attempt and will not claim fresh work after the deadline. If a graceful interrupt arrives during claim recovery, the wait call stops instead of taking fresh work. Non-deduplicated publish is not retried — the exception propagates so the caller can decide whether to retry (accepting potential duplicates) |
| Async support | Drop-in async variant with identical API |
All features are optional and can be enabled or disabled as needed.
| Configuration | Delivery guarantee |
|---|---|
| Default (no visibility timeout) | At-most-once — a consumer crash loses the in-flight message |
| With `visibility_timeout_seconds` | At-least-once — expired messages are reclaimed and redelivered |
See Crash recovery with visibility timeout for details and tradeoffs.
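Under the at-least-once configuration, the same payload can reach your handler more than once after a reclaim, so handlers should be idempotent. A minimal stand-alone sketch of that pattern (plain Python, no Redis; `processed` stands in for whatever durable store you actually use):

```python
# Illustrative only: simulates at-least-once redelivery without Redis.
processed = set()    # stand-in for a durable store (Redis SET, DB table, ...)
side_effects = []

def handle(message: str) -> None:
    """Idempotent handler: a redelivered message is detected and skipped."""
    if message in processed:
        return  # duplicate delivery after a reclaim; safe to drop
    side_effects.append(f"charged {message}")
    processed.add(message)

# Simulate a crash-and-reclaim: the same message is delivered twice,
# but the side effect runs exactly once.
handle("order:1234")
handle("order:1234")
```

In production the dedup record must live somewhere durable and shared across consumers; an in-process set only illustrates the shape of the check.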
```python
# Default: deduplicate by full message content (1-hour TTL)
queue = RedisMessageQueue("q", client=client, deduplication=True)

# Custom dedup key (e.g., deduplicate by order ID only)
queue = RedisMessageQueue(
    "q", client=client,
    deduplication=True,
    get_deduplication_key=lambda msg: msg["order_id"],
)
```
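For large payloads, returning the full message as the dedup key stores the whole payload in a Redis key name. One option is to hash a canonical serialization instead; this helper is an illustration, not part of the library:

```python
import hashlib
import json

def hashed_dedup_key(msg: dict) -> str:
    """Content-based dedup key that stays small even for large payloads.

    Serializes the message canonically (sorted keys, no whitespace) and
    hashes it, so the Redis key used for dedup tracking is a fixed
    64-character digest instead of the full payload.
    """
    canonical = json.dumps(msg, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Key order does not matter: both spellings map to the same dedup key.
a = hashed_dedup_key({"order_id": 42, "user": "alice"})
b = hashed_dedup_key({"user": "alice", "order_id": 42})
assert a == b and len(a) == 64
```

With the built-in queue you would pass it as `get_deduplication_key=hashed_dedup_key`.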
```python
# Disable deduplication entirely
queue = RedisMessageQueue("q", client=client, deduplication=False)
```

```python
queue = RedisMessageQueue(
    "q", client=client,
    enable_completed_queue=True,  # track successful messages
    enable_failed_queue=True,     # track failed messages for reprocessing
)
```

To prevent unbounded growth, cap the queue lengths:
```python
queue = RedisMessageQueue(
    "q", client=client,
    enable_completed_queue=True,
    enable_failed_queue=True,
    max_completed_length=10000,  # keep only the most recent 10,000
    max_failed_length=1000,      # keep only the most recent 1,000
)
```

When set, `LTRIM` is called after each message is moved to the completed/failed queue. This is best-effort cleanup — if the trim fails, the queue is slightly longer until the next successful trim.
```python
queue = RedisMessageQueue(
    "q",
    client=client,
    visibility_timeout_seconds=300,
    heartbeat_interval_seconds=60,
)
```

This enables lease-based redelivery for messages left in processing by a crashed worker and renews the lease while a healthy long-running handler is still working.
Tradeoffs:
- delivery becomes at-least-once after lease expiry
- the timeout must be longer than your normal processing time if you do not use heartbeats
- if you do use heartbeats, the heartbeat interval must be less than half of the visibility timeout
- recovery happens on consumer polling cadence rather than instantly
- heartbeats add background renewal work for active messages
- if a heartbeat fails (network error or stale lease), the heartbeat stops silently; the consumer continues processing but may find at ack time that the message was reclaimed by another consumer
Pass `on_heartbeat_failure` to receive a best-effort callback when the heartbeat stops because renewal failed:
```python
queue = RedisMessageQueue(
    "q", client=client,
    visibility_timeout_seconds=300,
    heartbeat_interval_seconds=60,
    on_heartbeat_failure=lambda: log.warning("heartbeat failed; lease may be stale"),
)
```

The callback is advisory — it may fire briefly after a successful `process_message` exit when a final renewal coincided with the success path. Use it for metrics or alerting, not as a correctness signal. For the async queue (`redis_message_queue.asyncio`), the callback may also be `async def`.
Without a visibility timeout, messages already moved to processing remain there indefinitely after a consumer crash and are not redelivered, even if the crash happened before your handler started running.
```python
queue = RedisMessageQueue(
    "q",
    client=client,
    visibility_timeout_seconds=300,
    max_delivery_count=5,
)
```

When a message has been delivered more than `max_delivery_count` times (due to consumer crashes causing visibility-timeout reclaim), it is automatically routed to a dead-letter queue (`{name}::dead_letter`) instead of being redelivered. This prevents poison messages from cycling indefinitely.
Notes:
- requires `visibility_timeout_seconds` to be set (poison messages are only a concern with VT reclaim)
- the delivery count is tracked per-message in a Redis HASH and cleaned up on successful ack or move to completed/failed
- the delivery count increments when Redis grants the claim/lease, not when your handler begins running. If a process exits after Redis claims a message, that claim still counts toward `max_delivery_count`
- `max_delivery_count=1` means the message is delivered once; any reclaim routes it to the dead-letter queue
- without `max_delivery_count`, messages are redelivered indefinitely (existing behavior)
- dead-lettered messages contain the raw payload only — the internal envelope (which carries a per-delivery UUID) is stripped before pushing to the DLQ, consistent with how completed/failed queues store messages. Two identical payloads dead-lettered separately are indistinguishable in the DLQ
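The claim-time accounting above can be made concrete with a small simulation. This is plain Python standing in for the Lua script and the Redis structures, not the library's implementation:

```python
from collections import defaultdict

MAX_DELIVERY_COUNT = 5
delivery_counts = defaultdict(int)  # stand-in for the per-message Redis HASH
dead_letter = []                    # stand-in for the {name}::dead_letter list

def claim(message: str):
    """Simulate the claim path: the count increments when the claim is
    granted, before any handler runs; over-limit messages go to the DLQ
    instead of being handed to a consumer."""
    delivery_counts[message] += 1
    if delivery_counts[message] > MAX_DELIVERY_COUNT:
        dead_letter.append(message)   # raw payload only
        del delivery_counts[message]  # tracking entry cleaned up
        return None                   # nothing delivered
    return message

# Five crashes in a row: each claim counts, even though no handler ran.
for _ in range(5):
    assert claim("poison") == "poison"
# The sixth claim routes the message to the dead-letter queue.
assert claim("poison") is None
assert dead_letter == ["poison"]
```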
```python
from redis_message_queue import RedisMessageQueue, GracefulInterruptHandler

interrupt = GracefulInterruptHandler()
queue = RedisMessageQueue("q", client=client, interrupt=interrupt)

while not interrupt.is_interrupted():
    with queue.process_message() as message:
        if message is not None:
            process(message)

# Consumer finishes current message before exiting on Ctrl+C
```

Note: `GracefulInterruptHandler` claims process-global signal handlers for its signals (default: SIGINT, SIGTERM, SIGHUP), but only when those signals are still using Python's default disposition. If another handler is already installed, or if another `GracefulInterruptHandler` already owns the signal, construction raises `ValueError`. A repeated owned signal falls back to the default behavior (for example, a second Ctrl+C raises `KeyboardInterrupt`). If you need multiple shutdown hooks, use a single handler and fan out in your own code.
```python
from redis_message_queue._redis_gateway import RedisGateway

# Tune retry budget, dedup TTL, or wait interval
gateway = RedisGateway(
    redis_client=client,
    retry_budget_seconds=120,          # total retry window (set 0 to disable retry)
    retry_max_delay_seconds=5.0,       # cap on per-attempt backoff
    retry_initial_delay_seconds=0.01,  # first backoff
    message_deduplication_log_ttl_seconds=3600,
    message_wait_interval_seconds=10,
    message_visibility_timeout_seconds=300,
)
queue = RedisMessageQueue("q", gateway=gateway)
```

The retry knobs configure an internal tenacity strategy: exponential backoff with jitter, retrying on transient Redis errors only, capped at `retry_budget_seconds`. The budget is wall-clock time from the first attempt (including attempt duration), not inter-attempt delay; a single attempt that takes longer than the budget results in zero retries. Setting `retry_budget_seconds=0` disables retry entirely (single attempt; exceptions propagate). The library uses `retry_budget_seconds` to size the operation-result cache TTL automatically, so the previous footgun of an over-long retry budget outliving the cache and producing misleading "cleanup was a no-op" warnings is now structurally impossible. Note: tenacity may allow one additional attempt beyond the budget if the budget check passes at attempt start — total wall-clock time can exceed `retry_budget_seconds` by the duration of that final attempt.
To plug in a different retry library (backoff, asyncstdlib.retry, or your
own logic) or fundamentally different semantics, subclass
AbstractRedisGateway from redis_message_queue._abstract_redis_gateway
(or redis_message_queue.asyncio._abstract_redis_gateway) and override the
operation methods directly.
If your custom gateway uses visibility timeouts, it must expose a public
message_visibility_timeout_seconds value and return ClaimedMessage from
wait_for_message_and_move(). The queue now fails closed if a lease-capable
gateway returns plain str/bytes, because cleanup without a lease token can
ack a message that has already been reclaimed by another consumer.
If a lease-capable custom gateway omits message_visibility_timeout_seconds,
the queue cannot detect that lease semantics are in play and will treat the
gateway as a non-lease gateway. In that misconfigured state, lease-token safety
checks and heartbeat validation are bypassed.
When using a custom gateway with dead-letter queue support, configure max_delivery_count
and dead_letter_queue directly on the gateway — do not pass max_delivery_count to
RedisMessageQueue:
```python
gateway = RedisGateway(
    redis_client=client,
    message_visibility_timeout_seconds=300,
    max_delivery_count=3,
    dead_letter_queue="myqueue::dead_letter",
)
queue = RedisMessageQueue("myqueue", gateway=gateway)
```

Use a separate gateway instance per queue when `max_delivery_count` is enabled. Dead-letter routing is gateway-scoped, so reusing the same gateway across different queues is rejected.
Replace the import to use the async variant — the API is identical:

```python
from redis_message_queue.asyncio import RedisMessageQueue
```

All examples work the same way. Remember to close the connection when done:

```python
import redis.asyncio as redis

client = redis.Redis()
# ... your code
await client.aclose()
```

- No metrics or observability hooks. The library logs warnings (stale leases, heartbeat failures, transient errors) via Python's `logging` module but does not expose callbacks, event hooks, or metric counters. To monitor queue health, inspect the underlying Redis keys directly or parse log output.
- Timed waits use polling claim loops. To make claims recoverable after ambiguous connection drops, `wait_for_message_and_move()` uses idempotent Lua claim polling instead of raw blocking list-move commands. This adds a small polling cadence during timed waits.
- Redis Lua is atomic, not rollback-transactional. The built-in scripts now preflight queue key types and fail closed on `WRONGTYPE` before mutating queue state, but Redis does not undo earlier writes if a later script command fails for another reason (for example `OOM` under severe memory pressure).
- Batch reclaim limit of 100. The visibility-timeout reclaim Lua script processes at most 100 expired messages per consumer poll. Under extreme backlog this may delay recovery, but prevents any single poll from blocking Redis.
- Claim-attempt loop limit of 100 per poll. The VT claim Lua script attempts at most 100 LMOVE+delivery-count checks per invocation. Under pathological conditions (>100 consecutive poison messages in pending), a single poll returns no message even though non-poison messages exist deeper in the queue. Subsequent polls drain the poison batch 100 at a time.
- Default dedup key is the full message. Without a custom `get_deduplication_key`, the entire serialized message becomes a Redis key name for dedup tracking. For large messages (>1KB), provide a custom key function to avoid excessive Redis memory usage.
- Cluster detection uses `isinstance(client, RedisCluster)`. Wrapped or instrumented cluster clients that delegate without inheriting will bypass hash-tag validation. Custom gateways should set `is_redis_cluster = True` explicitly.
- Redis Cluster requires hash tags. The built-in queue uses multiple Redis keys per operation. Wrap the queue name in hash tags (for example `{myqueue}`) so every generated key lands in the same slot. When you pass a Redis Cluster client to the built-in queue/gateway path, incompatible names are rejected early.
- Non-ASCII payloads use ~2x storage. The default `ensure_ascii=True` in JSON serialization encodes non-ASCII characters as `\uXXXX` escape sequences. This is a deliberate compatibility choice.
- Client-side `Retry` can duplicate non-deduplicated publishes. If you construct your `redis.Redis` client with `retry=Retry(...)`, redis-py retries `ConnectionError`/`TimeoutError` at the connection layer — below this library. Idempotent operations (deduplicated `publish()`, lease-scoped cleanup) are safe because their Lua scripts replay the original result. `add_message()` (used by `publish()` when `deduplication=False`) is a bare `LPUSH`: this library deliberately does not retry it, but a client-level `Retry` will, and if the server executed the command before the response was lost, the message is enqueued twice. Leave `retry=None` (the default) if you need strict at-most-once semantics for non-deduplicated publishes, or accept the duplication risk. More broadly, any non-idempotent `LPUSH` path is vulnerable if the connection drops after server execution but before the client receives the response; all other built-in operations (deduplicated publish, lease-scoped ack/move, lease renewal) use replay markers and are safe under client-level `Retry`.
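The hash-tag requirement follows Redis Cluster's standard key-hashing rule: when a key contains a non-empty `{...}` segment, only the substring inside the first such braces is hashed into a slot. A quick illustration of the extraction rule (the CRC16 slot computation is omitted, and the derived key names below are hypothetical — only `::dead_letter` is documented):

```python
def hash_tag(key: str) -> str:
    """Return the part of a Redis key that Cluster actually hashes.

    Per the Redis Cluster spec: if the key contains '{' followed later by
    '}' with at least one character between them, only that substring is
    hashed; otherwise the whole key is hashed.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # tag must be non-empty
            return key[start + 1:end]
    return key

# All keys derived from a "{myqueue}" queue name hash to the same slot:
assert hash_tag("{myqueue}::processing") == "myqueue"
assert hash_tag("{myqueue}::dead_letter") == "myqueue"
# Without hash tags, each derived key may land in a different slot:
assert hash_tag("myqueue::processing") == "myqueue::processing"
```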
For a full analysis, see docs/production-readiness.md.
Warning: These changes are destructive on live queues. Drain the queue completely before applying them.
- Do not change `key_separator` on a live queue. All existing Redis keys become invisible to the new key scheme. Drain the queue completely before changing separators.
- Do not switch from no-VT to VT with messages in processing. Messages claimed by non-VT consumers have no lease deadline entries. VT-enabled consumers cannot reclaim them. Drain the processing queue first.
- Reducing `max_delivery_count` retroactively DLQs messages. The delivery count hash persists across restarts. Messages whose accumulated count exceeds the new limit are immediately dead-lettered on next claim.
v3.0.0 replaced the `retry_strategy: Callable` constructor parameter with `retry_budget_seconds`, `retry_max_delay_seconds`, and `retry_initial_delay_seconds`. Users with custom retry strategies should subclass `AbstractRedisGateway` instead (see Custom gateway).
You'll need a Redis server:
```shell
docker run -it --rm -p 6379:6379 redis
```

Try the examples with multiple terminals:

```shell
# Two publishers
poetry run python -m examples.send_messages
poetry run python -m examples.send_messages

# Three consumers
poetry run python -m examples.receive_messages
poetry run python -m examples.receive_messages
poetry run python -m examples.receive_messages
```