
# Parallel tasks

Scale workers with a single parameter

`.task(fn, count=N)` runs `fn` in N separate processes sharing the same input queue. Each worker runs in its own process, bypassing the Python GIL — ideal for CPU-bound work and I/O calls.

```python
from olympipe import Pipeline
import time

def slow_compute(x: int) -> int:
    time.sleep(0.01)   # simulate a heavy computation
    return x ** 2

# 1 sequential worker
results = Pipeline(range(100)).task(slow_compute).wait_for_result()

# 8 parallel workers: ~8x faster
results = (
    Pipeline(range(100))
    .task(slow_compute, count=8)
    .wait_for_result()
)
```

## Performance

100 items, CPU-bound work (0.01 s/item):

| Configuration | Time   |
| ------------- | ------ |
| Sequential    | 10.0 s |
| 2 workers     | 5.2 s  |
| 4 workers     | 2.7 s  |
| 8 workers     | 1.4 s  |

🚀 8 workers: 7.1× faster than sequential
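You can reproduce this scaling curve with the standard library alone. The sketch below uses `multiprocessing.Pool` as a stand-in for olympipe's workers; `bench` is a hypothetical helper, not part of olympipe's API, and the exact timings will vary with your core count:

```python
import time
from multiprocessing import Pool

def slow_compute(x: int) -> int:
    time.sleep(0.01)  # simulate CPU-bound work, as in the example above
    return x ** 2

def bench(count: int, n: int = 100) -> float:
    """Time how long it takes `count` workers to process `n` items."""
    start = time.perf_counter()
    with Pool(count) as pool:
        pool.map(slow_compute, range(n))
    return time.perf_counter() - start

if __name__ == "__main__":
    for count in (1, 2, 4, 8):
        print(f"{count} worker(s): {bench(count):.2f}s")
```

Speedup stays close to linear until `count` exceeds the number of physical cores, after which scheduling overhead dominates.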

## How it works

Olympipe spawns N child processes via `multiprocessing`. Each item from the source is picked up by the first available worker: no manual coordination, no `Lock`, no `Pool`.
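The fan-out pattern described above can be sketched with the standard library. This is illustrative only, not olympipe's actual implementation; `square`, `worker`, and `run` are hypothetical names introduced here:

```python
from multiprocessing import Process, Queue

def square(x: int) -> int:
    return x ** 2

def worker(in_q: Queue, out_q: Queue) -> None:
    # Each worker pulls from the shared input queue until it sees the
    # sentinel, then re-queues the sentinel so its siblings see it too.
    while True:
        item = in_q.get()
        if item is None:
            in_q.put(None)
            break
        out_q.put(square(item))

def run(items, count: int = 4):
    in_q, out_q = Queue(), Queue()
    workers = [Process(target=worker, args=(in_q, out_q)) for _ in range(count)]
    for w in workers:
        w.start()
    n = 0
    for item in items:
        in_q.put(item)
        n += 1
    in_q.put(None)  # sentinel: no more input
    results = [out_q.get() for _ in range(n)]
    for w in workers:
        w.join()
    return sorted(results)  # workers finish out of order

if __name__ == "__main__":
    print(run(range(10), count=4))
```

Whichever worker calls `in_q.get()` first wins the next item, so the load balances itself without any explicit scheduling.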

## Related examples