Open Source • MIT / Apache-2.0

Background jobs
without infrastructure

Add persistent, crash-safe background job processing to any Rust application. No Redis. No RabbitMQ. Just a library. Or run as a standalone server.


Download. Run.
Push jobs.

No Redis. No PostgreSQL. No message broker. Just one binary. Start the server and push your first job in under 30 seconds.

Dual protocol: HTTP REST (port 6790) + TCP (port 6789)
5 storage backends: redb, hybrid, memory, SQLite, PostgreSQL
Crash-only design: all state persisted before acknowledging writes
$ rustqueue serve
INFO HTTP listening on 0.0.0.0:6790
INFO TCP listening on 0.0.0.0:6789
INFO Storage: redb (./data)
INFO Scheduler started (tick: 1000ms)
$ curl -X POST localhost:6790/api/v1/queues/emails/jobs \
-H "Content-Type: application/json" \
-d '{"name":"send-welcome","data":{"to":"alice@co.com"}}'
{"ok":true,"id":"019474a1-b2c3-7def-8901-234567890abc"}
$ rustqueue status
Queues: 1 | Jobs: 1 waiting | Workers: 0
emails: 1 waiting, 0 active, 0 completed
$

Fastest end-to-end
throughput

Hybrid TCP backend with batch_size=50. Fresh benchmarks against RabbitMQ, Redis, BullMQ, and Celery (March 2026).

Produce throughput (ops/sec)

RabbitMQ: 42,471
RustQueue: 40,504
Redis: 9,586
BullMQ: 5,238

Consume throughput (ops/sec)

RustQueue: 26,716
Redis: 10,306
RabbitMQ: 5,067
BullMQ: 4,385

Why it's fast

Hybrid storage keeps jobs in an in-memory DashMap with periodic redb snapshots.

TCP pipelining reads all buffered commands before processing, then issues a single flush per batch.

A per-queue BTreeSet index enables O(log N) dequeue instead of scanning all jobs.

Write coalescing batches individual push/ack operations into group flushes for a 60x throughput boost.

The zero-copy binary protocol avoids allocations during payload validation.
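The O(log N) dequeue can be sketched with std types alone. This is a minimal illustration, not RustQueue's actual internals; `QueueIndex`, `JobId`, and the (priority, sequence, id) key shape are assumptions:

```rust
use std::collections::{BTreeSet, HashMap};

type JobId = u64;

// Hypothetical sketch: one ordered index per queue. Jobs are stored once in
// `jobs`; the queue keeps a BTreeSet of (priority, sequence, id) keys, so the
// next job to run is always the smallest element -- no full scan required.
#[derive(Default)]
struct QueueIndex {
    jobs: HashMap<JobId, String>,      // id -> payload
    ready: BTreeSet<(u8, u64, JobId)>, // (priority, enqueue sequence, id)
    seq: u64,
}

impl QueueIndex {
    fn push(&mut self, id: JobId, priority: u8, payload: &str) {
        self.jobs.insert(id, payload.to_string());
        self.ready.insert((priority, self.seq, id)); // O(log N) insert
        self.seq += 1;
    }

    /// Pop the highest-priority (lowest number), oldest job in O(log N).
    fn dequeue(&mut self) -> Option<(JobId, String)> {
        let (_, _, id) = self.ready.pop_first()?;
        let payload = self.jobs.remove(&id)?;
        Some((id, payload))
    }
}

fn main() {
    let mut q = QueueIndex::default();
    q.push(1, 5, "low");
    q.push(2, 0, "urgent");
    q.push(3, 5, "low-2");
    let (id, payload) = q.dequeue().unwrap();
    println!("{id} {payload}"); // the priority-0 job comes out first
}
```

Because BTreeSet keeps its keys sorted, `pop_first` hands back the next job without touching the rest of the queue.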

Everything you need.
Nothing you don't.

Production-ready job processing with a complete feature set. No external services required.

DAG Workflows

Job dependencies with depends_on, BFS cycle detection, cascading failure into the DLQ (dead-letter queue), and flow status tracking. Build complex pipelines.

depends_on • cycle detection • cascade
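BFS cycle detection over a dependency graph is classically Kahn's algorithm: repeatedly remove jobs with no unfinished dependencies, and if any job is never removed, the depends_on graph contains a cycle. A sketch using only std (the function name and graph shape are illustrative, not RustQueue's API):

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical sketch: `depends_on[job]` lists the jobs that must finish
// before `job` may run. Kahn's algorithm visits jobs breadth-first in
// dependency order; leftovers that are never reached form a cycle.
fn has_cycle(depends_on: &HashMap<&str, Vec<&str>>) -> bool {
    let mut indegree: HashMap<&str, usize> = HashMap::new();
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&job, deps) in depends_on {
        indegree.entry(job).or_insert(0);
        for &dep in deps {
            indegree.entry(dep).or_insert(0);
            dependents.entry(dep).or_default().push(job);
        }
        *indegree.get_mut(job).unwrap() += deps.len();
    }

    // Start from jobs with no dependencies at all.
    let mut queue: VecDeque<&str> = indegree
        .iter()
        .filter(|&(_, &deg)| deg == 0)
        .map(|(&job, _)| job)
        .collect();

    let mut visited: usize = 0;
    while let Some(job) = queue.pop_front() {
        visited += 1;
        if let Some(children) = dependents.get(job) {
            for &child in children {
                let deg = indegree.get_mut(child).unwrap();
                *deg -= 1;
                if *deg == 0 {
                    queue.push_back(child);
                }
            }
        }
    }

    // If any job was never reached, its in-degree never hit zero: a cycle.
    visited != indegree.len()
}

fn main() {
    let mut flow: HashMap<&str, Vec<&str>> = HashMap::new();
    flow.insert("deploy", vec!["build", "test"]);
    flow.insert("test", vec!["build"]);
    flow.insert("build", vec![]);
    assert!(!has_cycle(&flow));

    flow.insert("build", vec!["deploy"]); // build -> deploy -> build
    assert!(has_cycle(&flow));
    println!("cycle detection ok");
}
```

Rejecting cyclic flows at submission time is what makes the cascade-to-DLQ behavior safe: every failure can be propagated forward without the scheduler looping forever.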

Cron & Interval Scheduling

Full schedule engine with cron expressions and interval-based execution. Pause, resume, and set max execution limits.

cron • interval • max_executions
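The interval semantics with pause, resume, and max_executions can be sketched in a few lines; `IntervalSchedule` and its methods are illustrative assumptions, not RustQueue's actual scheduler types:

```rust
use std::time::Duration;

// Hypothetical sketch of an interval schedule with an execution cap.
struct IntervalSchedule {
    every: Duration,
    max_executions: Option<u32>,
    executions: u32,
    paused: bool,
}

impl IntervalSchedule {
    fn new(every: Duration, max_executions: Option<u32>) -> Self {
        Self { every, max_executions, executions: 0, paused: false }
    }

    /// Called on each scheduler tick: returns the delay until the next run,
    /// or None when the schedule is paused or its execution cap is reached.
    fn next_delay(&mut self) -> Option<Duration> {
        if self.paused {
            return None;
        }
        if let Some(max) = self.max_executions {
            if self.executions >= max {
                return None; // cap reached: schedule is exhausted
            }
        }
        self.executions += 1;
        Some(self.every)
    }

    fn pause(&mut self) { self.paused = true; }
    fn resume(&mut self) { self.paused = false; }
}

fn main() {
    let mut s = IntervalSchedule::new(Duration::from_secs(60), Some(2));
    assert_eq!(s.next_delay(), Some(Duration::from_secs(60)));
    s.pause();
    assert_eq!(s.next_delay(), None); // paused: no runs scheduled
    s.resume();
    assert_eq!(s.next_delay(), Some(Duration::from_secs(60)));
    assert_eq!(s.next_delay(), None); // max_executions = 2 reached
    println!("schedule exhausted after {} runs", s.executions);
}
```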

Webhooks & Events

HMAC-SHA256 signed HTTP callbacks with event/queue filtering. Real-time WebSocket stream for live monitoring.

HMAC-SHA256 • WebSocket • retry delivery

5 Storage Backends

redb (ACID default), hybrid memory+disk, in-memory, SQLite, PostgreSQL. Swap via config without changing code.

redb • hybrid • SQLite • Postgres

Embeddable Library

Use as a standalone server or embed in your Rust application. RustQueue::memory().build() for zero-config usage.

zero-config • library • no server needed

Client SDKs

Official Node.js (TypeScript), Python, and Go SDKs. HTTP + TCP transports. Zero runtime dependencies.

TypeScript • Python • Go • zero deps

Running in 60 seconds

Download the binary, start the server, push your first job. No configuration required.

1. Install

One command via cargo install, or build from source.

2. Start the server

Zero config needed. Starts HTTP + TCP listeners with redb storage.

3. Push and process jobs

Push via HTTP, TCP, or CLI. Pull, process, acknowledge.

4. Use a client SDK

TypeScript, Python, or Go. Zero runtime dependencies.

install.sh
# Install via cargo
cargo install rustqueue

# Or build from source
git clone https://github.com/ferax564/rustqueue.git
cd rustqueue
cargo build --release

# Or run with Docker
docker compose up -d
# Start with defaults (redb storage, ports 6790/6789)
rustqueue serve

# Or with custom config
rustqueue serve --config rustqueue.toml

# Or with environment variables
RUSTQUEUE_STORAGE_BACKEND=hybrid rustqueue serve

INFO HTTP listening on 0.0.0.0:6790
INFO TCP listening on 0.0.0.0:6789
# Push a job via HTTP
curl -X POST localhost:6790/api/v1/queues/emails/jobs \
  -H "Content-Type: application/json" \
  -d '{"name":"send-welcome","data":{"to":"a@b.com"}}'

# Pull, process, acknowledge
curl localhost:6790/api/v1/queues/emails/jobs
curl -X POST localhost:6790/api/v1/jobs/{id}/ack

# Or use the CLI
rustqueue push --queue emails --name send-welcome
// TypeScript SDK
import { RustQueueClient } from "@rustqueue/client"

const client = new RustQueueClient({
  baseUrl: "http://localhost:6790"
})

const id = await client.push("emails", "send-welcome",
  { to: "alice@co.com" })

const jobs = await client.pull("emails")
await client.ack(jobs[0].id)

Replace your message broker
with a single binary

Open source, zero dependencies, production-ready. Works with any language via HTTP, TCP, or official SDKs.