Add persistent, crash-safe background job processing to any Rust application. No Redis. No RabbitMQ. Just a library. Or run as a standalone server.
No Redis, no PostgreSQL, no message broker required. Just one binary. Start the server and push your first job in under 30 seconds.
Benchmarks: hybrid backend over TCP with batch_size=50, measured against RabbitMQ, Redis, BullMQ, and Celery (March 2026).
Hybrid storage keeps jobs in DashMap memory with periodic redb snapshots.
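The hybrid model can be sketched with the standard library alone. Here a `Mutex<HashMap>` stands in for DashMap and a serialized string stands in for a redb snapshot; the `Job` struct, `HybridStore` name, and field layout are illustrative, not RustQueue's actual schema:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Illustrative job record; the real store's schema will differ.
#[derive(Clone)]
struct Job { id: u64, payload: String }

struct HybridStore {
    // Hot path: every read and write hits memory first.
    jobs: Mutex<HashMap<u64, Job>>,
}

impl HybridStore {
    fn push(&self, job: Job) {
        self.jobs.lock().unwrap().insert(job.id, job);
    }

    // Called periodically by a background task: serialize the in-memory
    // state so a crash loses at most one snapshot interval of changes.
    fn snapshot(&self) -> String {
        let jobs = self.jobs.lock().unwrap();
        let mut out = String::new();
        for job in jobs.values() {
            out.push_str(&format!("{}\t{}\n", job.id, job.payload));
        }
        out // the real store would commit this as an atomic redb write
    }
}

fn main() {
    let store = HybridStore { jobs: Mutex::new(HashMap::new()) };
    store.push(Job { id: 1, payload: "send-email".into() });
    assert!(store.snapshot().contains("send-email"));
}
```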
TCP pipelining reads all buffered commands before processing, single flush per batch.
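The pipelining idea, sketched with an in-memory buffer standing in for a `TcpStream` (the command names and `OK` replies are placeholders, not RustQueue's wire protocol): every command already buffered is processed before any response is flushed, so a batch costs one syscall instead of one per command.

```rust
use std::io::{BufRead, BufReader, Cursor, Write};

// Drain every command already buffered, respond to each, flush once.
fn handle_batch(input: &[u8]) -> Vec<u8> {
    let reader = BufReader::new(Cursor::new(input));
    let mut out: Vec<u8> = Vec::new();
    for line in reader.lines() {
        let cmd = line.unwrap();
        // One response per pipelined command, no intermediate flush.
        writeln!(out, "OK {}", cmd).unwrap();
    }
    out // a single write_all + flush to the socket in the real server
}

fn main() {
    let responses = handle_batch(b"PUSH a\nPUSH b\nACK 1\n");
    let text = String::from_utf8(responses).unwrap();
    assert_eq!(text.lines().count(), 3);
}
```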
Per-queue BTreeSet index enables O(log N) dequeue instead of scanning all jobs.
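A minimal version of such an index, assuming a `(priority, job_id)` sort key (the key layout is a guess, not RustQueue's actual one): `BTreeSet::first` finds the next job in O(log N), where a flat store would scan every job.

```rust
use std::collections::{BTreeSet, HashMap};

// Per-queue ordered index keyed by (priority, job_id).
struct QueueIndex {
    by_queue: HashMap<String, BTreeSet<(u8, u64)>>,
}

impl QueueIndex {
    fn push(&mut self, queue: &str, priority: u8, job_id: u64) {
        self.by_queue
            .entry(queue.to_string())
            .or_default()
            .insert((priority, job_id));
    }

    // O(log N): take the smallest (priority, id) pair for this queue.
    fn dequeue(&mut self, queue: &str) -> Option<u64> {
        let set = self.by_queue.get_mut(queue)?;
        let entry = set.first().copied()?;
        set.remove(&entry);
        Some(entry.1)
    }
}

fn main() {
    let mut idx = QueueIndex { by_queue: HashMap::new() };
    idx.push("emails", 1, 42);
    idx.push("emails", 0, 7); // lower number = higher priority here
    assert_eq!(idx.dequeue("emails"), Some(7));
    assert_eq!(idx.dequeue("emails"), Some(42));
}
```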
Write coalescing batches individual push/ack operations into grouped flushes for a 60x throughput boost.
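Coalescing can be illustrated with a channel drain: queued operations are flushed in groups rather than one fsync per operation. The batch size of 50 echoes the benchmark setting above but is otherwise illustrative, as is the whole sketch.

```rust
use std::sync::mpsc;

// Drain whatever is queued and persist it as group flushes instead of
// one durable write per push/ack.
fn coalesce(rx: &mpsc::Receiver<String>) -> Vec<Vec<String>> {
    let mut flushes = Vec::new();
    let mut batch = Vec::new();
    while let Ok(op) = rx.try_recv() {
        batch.push(op);
        if batch.len() == 50 { // illustrative batch_size
            flushes.push(std::mem::take(&mut batch));
        }
    }
    if !batch.is_empty() {
        flushes.push(batch); // final partial flush
    }
    flushes
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..120 {
        tx.send(format!("push job-{i}")).unwrap();
    }
    let flushes = coalesce(&rx);
    // 120 queued ops become 3 flushes instead of 120 individual writes.
    assert_eq!(flushes.iter().map(Vec::len).collect::<Vec<_>>(), vec![50, 50, 20]);
}
```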
Zero-copy binary protocol validates payloads without allocating.
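The zero-copy principle in miniature: validate a frame and hand back a slice borrowed from the original buffer, never copying the payload. The 4-byte big-endian length prefix is an assumed wire format for illustration, not RustQueue's documented protocol.

```rust
// Return a borrowed payload slice if the frame is complete; no allocation.
fn parse_frame(buf: &[u8]) -> Option<&[u8]> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    buf.get(4..4 + len) // None if the payload is truncated
}

fn main() {
    let mut wire = 5u32.to_be_bytes().to_vec();
    wire.extend_from_slice(b"hello");
    assert_eq!(parse_frame(&wire), Some(&b"hello"[..]));
    assert_eq!(parse_frame(&wire[..3]), None); // truncated header rejected
}
```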
Production-ready job processing with a complete feature set. No external services required.
Job dependencies with depends_on, BFS cycle detection, cascade DLQ failure, and flow status tracking. Build complex pipelines.
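One standard way to do BFS cycle detection is Kahn's algorithm: peel off jobs with no unmet dependencies, and if some jobs are never reached, they sit on a cycle. A stdlib sketch under that assumption (not RustQueue's actual implementation), with edges written as `(job, depends_on)`:

```rust
use std::collections::{HashMap, VecDeque};

// Kahn's algorithm (BFS topological sort) over the dependency graph.
fn has_cycle(jobs: &[u64], deps: &[(u64, u64)]) -> bool {
    let mut indegree: HashMap<u64, usize> = jobs.iter().map(|&j| (j, 0)).collect();
    let mut dependents: HashMap<u64, Vec<u64>> = HashMap::new();
    for &(job, dep) in deps {
        *indegree.get_mut(&job).unwrap() += 1;
        dependents.entry(dep).or_default().push(job);
    }
    // Start from jobs with no dependencies.
    let mut queue: VecDeque<u64> =
        indegree.iter().filter(|(_, &d)| d == 0).map(|(&j, _)| j).collect();
    let mut visited = 0;
    while let Some(job) = queue.pop_front() {
        visited += 1;
        for &next in dependents.get(&job).into_iter().flatten() {
            let d = indegree.get_mut(&next).unwrap();
            *d -= 1;
            if *d == 0 { queue.push_back(next); }
        }
    }
    visited != jobs.len() // unreachable jobs => a cycle exists
}

fn main() {
    // 2 depends on 1, 3 depends on 2: a valid pipeline.
    assert!(!has_cycle(&[1, 2, 3], &[(2, 1), (3, 2)]));
    // Adding "1 depends on 3" closes a loop: rejected.
    assert!(has_cycle(&[1, 2, 3], &[(2, 1), (3, 2), (1, 3)]));
}
```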
Full schedule engine with cron expressions and interval-based execution. Pause, resume, and set max execution limits.
cron • interval • max_executions
HMAC-SHA256 signed HTTP callbacks with event/queue filtering. Real-time WebSocket stream for live monitoring.
HMAC-SHA256 • WebSocket • retry delivery
redb (ACID default), hybrid memory+disk, in-memory, SQLite, PostgreSQL. Swap via config without changing code.
redb • hybrid • SQLite • Postgres
Use as a standalone server or embed in your Rust application. RustQueue::memory().build() for zero-config usage.
Official Node.js (TypeScript), Python, and Go SDKs. HTTP + TCP transports. Zero runtime dependencies.
TypeScript • Python • Go • zero deps
Download the binary, start the server, push your first job. No configuration required.
One command via cargo install, or build from source.
Zero config needed. Starts HTTP + TCP listeners with redb storage.
Push via HTTP, TCP, or CLI. Pull, process, acknowledge.
TypeScript, Python, or Go. Zero runtime dependencies.