# ☭ comrade

`comrade` is a Rust crate for managing compute work. It provides seamless management of shared work and functions, even across machines.
## Features

- **Parallel Execution**: Dispatch tasks to run concurrently and gather their results.
- **Rally Execution**: Run multiple tasks in parallel and return the result of the fastest one.
- **Service Management**: Manage background services with different operating modes (`Decay`, `Daemon`).
- **Worker Unions**: Delegate tasks using `#[worker]` annotations, locally or as distributed task queues across multiple machines.
- **Background Tasks**: Seamlessly run background tasks without blocking the main logic of your program.
## Core Concepts

### Parallel Execution

`comrade` provides a simple interface for running tasks in parallel, perfect for independent tasks that can be processed concurrently.
```rust
let results: Vec<i32> = parallel(items, |item: &i32| {
    // ...
});
```
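As a mental model (a std-only sketch, not `comrade`'s actual implementation), `parallel` behaves like spawning one thread per item and joining the handles in order:

```rust
use std::thread;

// Hypothetical illustration: run `f` on every item concurrently,
// returning results in the original item order.
fn parallel_sketch<T, R, F>(items: Vec<T>, f: F) -> Vec<R>
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(&T) -> R + Send + Clone + 'static,
{
    let handles: Vec<_> = items
        .into_iter()
        .map(|item| {
            let f = f.clone();
            // Each thread owns its item and its own clone of the closure.
            thread::spawn(move || f(&item))
        })
        .collect();
    // Joining in spawn order keeps results aligned with the input order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = parallel_sketch(vec![1, 2, 3, 4], |x: &i32| x * 2);
    println!("{:?}", results); // [2, 4, 6, 8]
}
```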
### Rally Execution

The `rally` function runs multiple tasks in parallel and returns the result of the first task to finish. This is useful when you want the first available result from several equivalent tasks (for example, downloading from multiple HTTP mirrors).
```rust
let res: (i32, i32) = rally(items, |item: &i32| {
    // ...
});
```
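The first-result-wins behavior can be sketched with std channels alone. This is an illustration, not `comrade`'s implementation, and it returns only the winning result rather than `comrade`'s tuple:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical sketch: run `f` on every item in its own thread and
// return whichever result arrives on the channel first.
fn rally_sketch<T, R, F>(items: Vec<T>, f: F) -> R
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(&T) -> R + Send + Clone + 'static,
{
    let (tx, rx) = mpsc::channel();
    for item in items {
        let tx = tx.clone();
        let f = f.clone();
        thread::spawn(move || {
            // Losers send into a dropped receiver; the error is ignored.
            let _ = tx.send(f(&item));
        });
    }
    drop(tx); // only the worker threads hold senders now
    rx.recv().expect("at least one task must finish")
}

fn main() {
    // The task that sleeps the shortest wins the rally.
    let fastest = rally_sketch(vec![30u64, 5, 80], |ms: &u64| {
        thread::sleep(Duration::from_millis(*ms));
        *ms
    });
    println!("fastest: {fastest}");
}
```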
### Background Tasks

Easily run tasks in the background without blocking the main thread. This is useful for code that needs to run without waiting for a result.
```rust
fn handle() {
    background(|| {
        // Background task logic
        println!("This is a background task!");
    });
}
```
### Service Management

`comrade` provides a way to manage persistent services with different modes. The `Decay` mode allows services to die, while the `Daemon` mode revives them and keeps them running indefinitely.
```rust
use comrade::service::ServiceManager;

fn run_services() {
    let mut manager = ServiceManager::new().mode(comrade::service::ServiceMode::Decay);
    // Register and start services
    manager = manager.register("my_service", |_| {
        // Service logic here
    });
    let thread_handle = manager.spawn();
    thread_handle.join().unwrap();
}
```
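Conceptually, `Daemon` mode is a supervisor loop that revives a service whenever it exits. A std-only sketch of that idea (not `comrade`'s implementation), restarting a panicking closure a fixed number of times for the demo:

```rust
use std::panic;

// Hypothetical supervisor: restart `service` each time it exits or
// panics, up to `max_restarts`. Returns how many times it was started.
fn supervise<F>(service: F, max_restarts: u32) -> u32
where
    F: Fn() + panic::RefUnwindSafe,
{
    let mut starts = 0;
    while starts < max_restarts {
        // Catch a panicking service instead of letting it kill the supervisor.
        let _ = panic::catch_unwind(|| service());
        starts += 1;
    }
    starts
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));
    let runs = supervise(|| panic!("service crashed"), 3);
    println!("service was started {runs} times"); // started 3 times
}
```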
### Worker Unions

You can annotate a function with `#[worker]`, which gives it superpowers: the function can be queued and dispatched by the system, and its result is returned when it completes.
```rust
use comrade::worker;

// Single local worker
#[worker]
pub fn myfn(i: i32) -> i32 {
    i * 2
}

// 4 local worker threads
#[worker(4)]
pub fn multiply(a: i32, b: i32) -> i32 {
    a * b
}
```
After initialization, these functions can be called from anywhere and will eventually be processed by whichever worker picks them up.

Additionally, new functions are derived from your function. See the example below:
```rust
fn main() {
    let mut manager = ServiceManager::new().mode(comrade::service::ServiceMode::Decay);
    // Init worker thread on `ServiceManager`
    manager = multiply_init(manager);
    let manager = manager.spawn();

    // Works like the original function
    let res = multiply(2, 2);

    // Async: returns a handle immediately instead of blocking
    let e = take_time_async(1500);
    println!("This will run right after!");
    // ...
    // Is OUR value ready?
    println!("the value is {}", e.wait());

    // Shut down the worker thread
    multiply_shutdown();
    manager.join().unwrap();
}
```
These tasks can also be distributed with Valkey. Make sure you have a Valkey server running and that the `$VALKEY_URL` environment variable is set for your application:
```yaml
services:
  valkey:
    image: valkey/valkey
    ports:
      - 6379:6379
```
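For local testing against the compose service above, the variable might look like this. The `redis://` scheme is an assumption (Valkey speaks the Redis protocol); adjust host and port to your deployment:

```shell
# Hypothetical value for a local Valkey started by the compose file above
export VALKEY_URL="redis://127.0.0.1:6379"
```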
Then you can spawn worker threads like this:
```rust
fn main() {
    let mut s = ServiceManager::new().mode(comrade::service::ServiceMode::Decay);
    s = multiply_init_union(s);
    s = myfn_init_union(s);
    let s = s.spawn();

    log::info!("Spawned workers. Working for 1 minute");
    std::thread::sleep(Duration::from_secs(60));

    myfn_shutdown();
    multiply_shutdown();
    s.join().unwrap();
}
```
While the workers are running, you can use them like this:
```rust
fn main() {
    // Register workers in the union
    myfn_register_union();
    // Will be computed somewhere else
    let x = myfn(50);
    println!("x is {x}");
}
```