Launch Post

Background Jobs in Rust Without Redis

March 29, 2026 · RustQueue v0.2.0

What if adding background jobs to your Rust app was as simple as adding SQLite?

Every Rust web application eventually needs background job processing. Send welcome emails. Generate reports. Process uploads. Retry failed API calls. And the standard answer is always the same: install Redis, set up a queue library, run a separate worker process, configure monitoring, add Redis to your Docker Compose, your CI, your staging environment, your production...

For most applications, that's overkill. You don't need a distributed message broker. You need a way to run tasks in the background that won't be lost if your process crashes.

Today we're releasing RustQueue v0.2.0 — and it does exactly that.

Three Lines of Code

let rq = RustQueue::redb("./jobs.db")?.build()?;
rq.push("emails", "send-welcome", json!({"to": "user@a.com"}), None).await?;

// That's it. The job is persisted to disk. It survives crashes.

No Redis. No RabbitMQ. No Docker. No config file. No separate process. Just cargo add rustqueue and you have a persistent, crash-safe job queue embedded in your application.

The job is written to an embedded ACID database (redb). If your process crashes mid-flight, the job is still there when you restart. Retries, exponential backoff, and a dead-letter queue are built in.
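The post doesn't spell out the backoff math, but exponential backoff generally follows a simple doubling schedule. Here's a minimal sketch in plain Rust — the function and parameter names are illustrative, not RustQueue's actual API:

```rust
// Illustration of how an exponential backoff schedule is typically derived.
// `base` is the first delay, `attempt` is zero-indexed, `cap` bounds the wait.
fn backoff_delay_secs(base: u64, attempt: u32, cap: u64) -> u64 {
    // delay = base * 2^attempt, clamped to a maximum
    base.saturating_mul(2u64.saturating_pow(attempt)).min(cap)
}

fn main() {
    // With base = 1s and a 60s cap, five attempts wait 1, 2, 4, 8, 16 seconds.
    let delays: Vec<u64> = (0..5).map(|a| backoff_delay_secs(1, a, 60)).collect();
    println!("{delays:?}"); // [1, 2, 4, 8, 16]
}
```

Once an attempt counter exceeds the configured retry limit, a queue like this would move the job to the dead-letter queue instead of scheduling another delay.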

The Full Example

use rustqueue::RustQueue;
use serde_json::json;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let rq = RustQueue::redb("./jobs.db")?.build()?;

    // Push a job
    let id = rq.push("emails", "send-welcome",
        json!({"to": "user@example.com"}), None).await?;
    println!("Queued: {id}");

    // Pull and process
    let jobs = rq.pull("emails", 1).await?;
    println!("Processing: {}", jobs[0].name);
    rq.ack(jobs[0].id, None).await?;

    Ok(())
}

Run it with cargo run. That's your first background job — processed, acknowledged, done. The jobs.db file persists everything. Kill the process, restart it, and your pending jobs are still there.
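In a real deployment you'd run the pull/ack cycle in a loop rather than once. Here's a self-contained sketch of that worker pattern — a toy in-memory queue stands in for RustQueue so the example compiles on its own (the real API is async and persistent; `ToyQueue` and its fields are stand-ins, not part of the crate):

```rust
use std::collections::VecDeque;

// Stand-in job and queue types; RustQueue's real types differ.
struct Job { id: u64, name: String }

struct ToyQueue { pending: VecDeque<Job>, acked: Vec<u64> }

impl ToyQueue {
    // Pull up to `max` jobs off the front of the queue.
    fn pull(&mut self, max: usize) -> Vec<Job> {
        (0..max).filter_map(|_| self.pending.pop_front()).collect()
    }
    // Acknowledge a job so it is never redelivered.
    fn ack(&mut self, id: u64) { self.acked.push(id); }
}

fn main() {
    let mut q = ToyQueue {
        pending: VecDeque::from([Job { id: 1, name: "send-welcome".into() }]),
        acked: Vec::new(),
    };
    // Worker loop: pull a batch, process each job, ack on success.
    loop {
        let jobs = q.pull(1);
        if jobs.is_empty() { break; } // a real worker would sleep and poll again
        for job in jobs {
            println!("processing {}", job.name);
            q.ack(job.id);
        }
    }
    assert_eq!(q.acked, vec![1]);
}
```

The key property this pattern relies on is that an unacked job stays in the queue: if the worker crashes between pull and ack, the job is redelivered on restart.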

Adding Background Jobs to an Existing Axum App

Most Rust web apps are built on Axum. RustQueue v0.2.0 ships with a native Axum integration — the RqState extractor. Here's what it looks like to add background email processing to an existing web application:

use axum::routing::post;
use axum::{Json, Router};
use rustqueue::axum_integration::RqState;
use rustqueue::RustQueue;
use serde_json::json;
use std::sync::Arc;

async fn enqueue_email(rq: RqState, Json(body): Json<serde_json::Value>)
    -> Json<serde_json::Value>
{
    let to = body["to"].as_str().unwrap_or("unknown");
    let id = rq.push("emails", "send-welcome",
        json!({"to": to}), None).await.unwrap();
    Json(json!({"queued": true, "job_id": id.to_string()}))
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let rq = Arc::new(RustQueue::redb("./jobs.db")?.build()?);
    let app = Router::new()
        .route("/send-email", post(enqueue_email))
        .with_state(rq);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}

The RqState extractor handles everything. Add it as a handler parameter and call .push(), .pull(), .ack() directly. No setup, no wiring, no boilerplate.

Why Not Just Use Redis?

|                    | RustQueue         | Redis + BullMQ | RabbitMQ          | Celery         |
|--------------------|-------------------|----------------|-------------------|----------------|
| External deps      | None              | Redis server   | Erlang + RabbitMQ | Redis/RabbitMQ |
| Time to first job  | ~60 seconds       | 15–30 min      | 15–30 min         | 15–30 min      |
| Deployment         | cargo add         | 2+ services    | 2+ services       | 3+ services    |
| Binary / footprint | 6.8 MB, 15 MB RAM | N/A, ~100 MB+  | N/A, ~100 MB+     | N/A, ~200 MB+  |
| Crash recovery     | ACID              | Configurable   | Yes               | Depends        |

Redis is a fantastic tool. But running an entire Redis server just for background jobs is like renting a warehouse to store your grocery list. For most applications, an embedded queue that lives inside your process is the right abstraction.

Start Embedded. Grow Into a Server.

The best part: you're never locked in. When your application outgrows embedded mode, run the same engine as a standalone server — same data file, same guarantees, zero migration:

# Day 1: embedded in your app (Rust)
let rq = RustQueue::redb("./jobs.db")?.build()?;

# Day 30: standalone server, same data (shell)
$ rustqueue serve --storage ./jobs.db

# Day 60: workers in any language (Node.js)
const rq = new RustQueueClient("http://localhost:6790");
await rq.push("emails", "welcome", { to: "user@a.com" });

Client SDKs for Node.js, Python, and Go are included — all zero-dependency. REST API, TCP protocol, WebSocket events, web dashboard, Prometheus metrics, Grafana dashboards — everything is built in.

Performance

Because we know you'll ask:

| System        | Produce  | Consume  | End-to-end |
|---------------|----------|----------|------------|
| RustQueue TCP | 47,129/s | 38,048/s | 22,685/s   |
| RabbitMQ      | 47,588/s | 5,367/s  | 4,686/s    |
| Redis         | 9,337/s  | 9,511/s  | 4,951/s    |
| BullMQ        | 7,559/s  | 6,690/s  | 1,978/s    |
| Celery        | 3,168/s  | 1,589/s  | 893/s      |

Neck-and-neck with RabbitMQ on produce. 7.1x faster on consume. 4.8x faster end-to-end. With a 6.8 MB binary using 15 MB of RAM.

But performance isn't the point. The point is that you shouldn't need to think about your job queue. It should be invisible infrastructure — like SQLite is for databases.

Think of RustQueue not as a replacement for RabbitMQ, but as a replacement for tokio::spawn.

Production Features

This isn't a toy. RustQueue ships with everything you need for production:

Retries & backoff — fixed, linear, or exponential. Jobs that exhaust retries land in a dead-letter queue for inspection.

Cron & interval scheduling — built-in schedule engine with pause/resume.

DAG workflows — job dependencies with cycle detection and cascade failure.

Progress tracking — 0–100 progress with log messages and worker heartbeats.

Webhooks — HMAC-SHA256 signed HTTP callbacks on job events.

Observability — 15+ Prometheus metrics, pre-built Grafana dashboard, WebSocket event stream.
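To give a sense of the wiring, a minimal Prometheus scrape job might look like this — assuming the standalone server exposes metrics at the conventional /metrics path on its HTTP port (an assumption; check the docs for the actual endpoint):

```yaml
# Hypothetical scrape config; endpoint path and port layout are assumptions.
scrape_configs:
  - job_name: rustqueue
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:6790"]
```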

Get Started

Your First Background Job in 60 Seconds

Add RustQueue to your project and push your first job:

cargo add rustqueue tokio serde_json anyhow

Then check out the examples: basic push/pull/ack, persistent queues, worker loops, and a full Axum web app with background jobs.

Star us on GitHub. We'd love to hear what you build.