Tutorial
Every web app sends emails. Welcome messages, password resets, order confirmations. And every developer learns the same lesson the hard way: don't send emails inside your HTTP handler.
The handler blocks. The SMTP server is slow. It times out. The user sees a spinner. Or worse — the process crashes after charging the customer but before sending the receipt.
The standard solution is a job queue. Push the email to a queue, return immediately, process it in the background. But the standard job queue requires Redis, a separate worker process, Docker Compose entries, monitoring, and 30 minutes of setup.
What if you could have the queue without any of that?
The Setup: One Line
```rust
let rq = RustQueue::redb("/tmp/emails.db")?.build()?;
```

That's your queue. It's backed by an embedded ACID database. No server to run, no connection to manage, no Docker container. The .db file is the queue. It survives process crashes, power failures, and kill -9.
The App: Three Endpoints
We're building a small Axum web app with two job-producing endpoints and a stats endpoint. The full code is in the repo — here are the interesting parts.
Signup: queue a welcome email
```rust
async fn signup(rq: RqState, Json(body): Json<serde_json::Value>) -> Json<serde_json::Value> {
    let email = body["email"].as_str().unwrap_or("unknown");
    let name = body["name"].as_str().unwrap_or("User");
    let id = rq.push(
        "emails", "welcome-email",
        json!({ "to": email, "subject": format!("Welcome, {}!", name),
                "template": "welcome" }),
        None,
    ).await.unwrap();
    Json(json!({"queued": true, "job_id": id.to_string()}))
}
```

The handler returns instantly. The email job is persisted to disk. If the process crashes right now, the job is still there when it restarts.
Password reset: same queue, higher priority
```rust
async fn reset_password(rq: RqState, Json(body): Json<serde_json::Value>)
    -> Json<serde_json::Value>
{
    let email = body["email"].as_str().unwrap_or("unknown");
    let id = rq.push(
        "emails", "password-reset",
        json!({ "to": email, "subject": "Reset your password" }),
        Some(JobOptions {
            priority: Some(10),    // Jump ahead of welcome emails
            max_attempts: Some(5), // More retries — this one matters
            ..Default::default()
        }),
    ).await.unwrap();
    // ...
}
```

Same queue, different JobOptions. Password resets get priority 10 (the default is 0), so they jump ahead of any waiting welcome emails. They also get 5 retry attempts instead of the default 3 — because a user who can't reset their password is a user who leaves.
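The dispatch order is easy to model. Here is a toy sketch in plain Rust using a max-heap, purely illustrative of the observed behavior (RustQueue's actual scheduling internals may differ):

```rust
use std::collections::BinaryHeap;

// Toy model of priority dispatch: a max-heap keyed on priority
// pops higher-priority jobs first. Illustrative only.
fn next_job(jobs: &[(i32, &'static str)]) -> (i32, &'static str) {
    let mut heap: BinaryHeap<(i32, &'static str)> = jobs.iter().copied().collect();
    heap.pop().unwrap()
}

fn main() {
    let queued = [
        (0, "welcome-email"),   // default priority
        (0, "welcome-email-2"),
        (10, "password-reset"), // priority: Some(10)
    ];
    // The reset pops first even though it was queued last.
    assert_eq!(next_job(&queued), (10, "password-reset"));
}
```

A real scheduler would also tie-break equal priorities by enqueue order (for example by pairing the priority with a negated sequence number) so same-priority jobs stay FIFO.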
The Worker: Pull, Send, Ack
```rust
async fn email_worker(rq: Arc<RustQueue>) {
    loop {
        let jobs = rq.pull("emails", 5).await.unwrap();
        for job in &jobs {
            match send_email(&job.data).await {
                Ok(()) => rq.ack(job.id, None).await.unwrap(),
                Err(e) => {
                    println!("Failed: {e} — will retry");
                    rq.fail(job.id, &e).await.unwrap();
                }
            }
        }
        if jobs.is_empty() {
            tokio::time::sleep(Duration::from_millis(200)).await;
        }
    }
}
```

The worker runs in the same process, spawned as a tokio task. It pulls up to 5 jobs at a time, sends the email, and either acks (success) or fails (retry). That's the entire worker. No framework, no trait implementations, no macros.
When fail() is called, RustQueue automatically schedules the job for retry with exponential backoff. After exhausting all attempts, the job lands in a dead-letter queue for inspection.
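The backoff schedule is easy to picture. This sketch assumes a one-second base delay that doubles per attempt; the actual base and factor are RustQueue implementation details:

```rust
// Sketch of exponential backoff (assumed base = 1s, factor = 2;
// the real constants are RustQueue implementation details).
fn retry_delay_secs(attempt: u32) -> u64 {
    // Cap the shift so a pathological attempt count can't overflow.
    1u64 << attempt.saturating_sub(1).min(16)
}

fn main() {
    // attempt 1 -> 1s, attempt 2 -> 2s, attempt 3 -> 4s, attempt 5 -> 16s
    assert_eq!(retry_delay_secs(1), 1);
    assert_eq!(retry_delay_secs(3), 4);
    assert_eq!(retry_delay_secs(5), 16);
}
```

With max_attempts of 5, a password reset gets roughly 1 + 2 + 4 + 8 seconds of cumulative grace before landing in the dead-letter queue, which is usually enough to ride out a transient SMTP outage.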
What Happens in Practice
The example's simulated email sender fails ~20% of the time to demonstrate retry behavior. Run it and watch the log output.
Notice: the password reset was queued last but processed second — right after the job that was already being sent. Priority works.
The failed email to user2 is automatically scheduled for retry with exponential backoff. No code needed — fail() handles everything.
The Crash Test
This is the part that matters. We queued 15 emails, let the worker process some, then killed the process with kill -9.
Zero data loss. All 6 completed emails are still recorded. The 2 failed emails are still in delayed state, waiting for their retry backoff. The database file survived the crash intact.
After restart, we queued 5 more emails. They processed alongside the recovered state — new and old jobs coexisting on the same queue.
The Comparison
Here's what this same feature looks like with common alternatives:
| | RustQueue | tokio::spawn | Redis + BullMQ |
|---|---|---|---|
| Setup | cargo add | Already there | Docker + npm |
| Crash recovery | Yes (ACID) | No | Yes |
| Priority | Built in | Manual | Yes |
| Retry + backoff | Built in | Manual | Yes |
| External services | None | None | Redis server |
| Deployment | Single binary | Single binary | 2+ services |
| DLQ inspection | Built in | No | Yes |
tokio::spawn is fine until the process crashes and your emails vanish. Redis gives you everything but costs you infrastructure. RustQueue gives you everything with nothing to run.
Think of it this way: tokio::spawn is a sticky note. Redis is a filing cabinet in another building. RustQueue is a notebook on your desk.
Try It
```sh
# Clone and run
git clone https://github.com/ferax564/rustqueue.git
cd rustqueue
cargo run --example email_notifications

# In another terminal
curl -X POST http://localhost:3000/signup \
  -H 'Content-Type: application/json' \
  -d '{"email":"you@example.com","name":"You"}'

curl http://localhost:3000/stats
```

Kill it with Ctrl+C, restart it, check /stats. Your jobs are still there.
What's Next
This example is intentionally simple. RustQueue also supports:
- Cron scheduling — send daily digest emails on a schedule
- DAG workflows — chain jobs with dependencies (charge → send receipt → update CRM)
- Progress tracking — report 0–100% progress on long-running jobs
- WebSocket events — stream job lifecycle events to your frontend
- Webhooks — HMAC-signed HTTP callbacks on job completion/failure
When you outgrow embedded mode, run the same engine as a standalone server with SDKs for Node.js, Python, and Go.
Add It to Your Project
Persistent, crash-safe background jobs in one line:
```sh
cargo add rustqueue
```
The full example is at examples/email_notifications.rs. Star us on GitHub if this is useful.