Show HN: I built Wool, a lightweight distributed Python runtime (github.com)

I spent a long time working in the payments industry, specifically on a rather niche reporting/aggregation platform with spiky workloads that were not easily parallelized. To pump as much data through our pipeline as possible, we had to rely on complex locking schemes across half a dozen or so not-so-micro services - keeping a clear mental picture of how the services interacted for a given data source was a major headache. This problem kept intriguing me even after I left the company, and led to the development of Wool.

If you've worked with frameworks like Ray or Prefect, you're probably familiar with the promise of going from script to scale in two lines of code (or something along those lines). This is essentially the solution I was looking for: a framework with limited boilerplate that facilitated arbitrary distribution schemes within a single, coherent codebase. What I was hoping for, though, was something a little more focused - I wasn't working on ML pipelines and didn't need much beyond the distribution layer. This is where Wool comes in. While its API is very similar to those of Ray and Prefect, it differentiates itself in its scope and architecture.

First, Wool is not a task orchestrator. It provides push-based, best-effort, at-most-once execution. There is no built-in coordination state, retry logic, or durable task tracking. Those concerns remain application-defined. The beauty of Wool is that it looks and feels like native async Python, allowing you to use purpose-built libraries for your needs as you would for any other Python app (with some caveats).
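As a rough analogy (plain asyncio, not Wool's actual API), "push-based, best-effort, at-most-once" just means a routine is awaited exactly once, with no retries and no durable tracking - any failure surfaces to the caller, which decides what to do:

```python
import asyncio

async def flaky_task(x: int) -> int:
    # Stand-in for a remote routine: fails for negative input.
    if x < 0:
        raise ValueError("bad input")
    return x * 2

async def dispatch_once(coro):
    # At-most-once, best-effort: execute exactly once, no retry,
    # no coordination state; exceptions propagate to the caller.
    return await coro

async def main() -> list:
    results = []
    for x in (1, 2, -1):
        try:
            results.append(await dispatch_once(flaky_task(x)))
        except ValueError:
            results.append(None)  # retry policy is application-defined
    return results

print(asyncio.run(main()))  # [2, 4, None]
```

Retry loops, dead-letter handling, or idempotency keys would all live in application code around `dispatch_once`, which is the "application-defined" part.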

Second, Wool was designed with speed in mind. Because it's not bloated with features, it's actually pretty fast, even in its current nascent state. Wool routines are dispatched directly to a decentralized peer-to-peer network of gRPC workers, which can distribute nested routines amongst themselves in turn. This results in low dispatch latencies and high throughput. I won't make any performance claims until I can assemble some more robust benchmarks, but running local workers on my M4 MacBook Pro (a trivial example, I know), I can easily achieve sub-millisecond dispatch latencies.

Anyway, check it out - any and all feedback is welcome. Regarding docs: the code is the documentation for now, but I promise I'll sort that out soon. I've got plenty of ideas for next steps, but it's always more fun when people actually use what you've built, so I'm open to suggestions for impactful features.

-Conrad

takahitoyoneda 4 hours ago

As a solo dev, I usually avoid distributed Python runtimes entirely because managing the infrastructure overhead of Celery or Ray is a massive time sink. If Wool genuinely abstracts away those complex locking mechanisms without requiring a heavy Redis or Postgres cluster just to manage state, that is a huge win for smaller teams. How does your scheduler handle node failures mid-execution when exactly-once processing is strictly required?

bzurak 4 hours ago

I wouldn't say it abstracts the locking mechanisms away - if you need synchronization in your app, it's probably best to leave how that's achieved up to the user. What it does is make it possible to contain your business logic end-to-end in a single application/codebase without obfuscating it with distribution boundaries (e.g., calls out to other REST APIs or message queues). There are also still worker nodes to manage, BUT the architecture is much simpler in the sense that there are only workers to deal with - no control plane, scheduler, or other services involved.

Regarding failures - Wool workers are simple gRPC services under the hood, and connections are long-lived HTTP/2 connections that persist for the life of the request. Worker-side failures simply manifest as Python exceptions on the client side, with the added nicety of preserving the FULL stack trace across worker boundaries (achieved with tbpickle). A core tenet of Wool is that it makes no assumptions about your workload - I leave it up to you to write a try/except block and handle exceptions in a manner appropriate to your use case. The goal is to keep Wool as unopinionated about this sort of thing as possible.
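To illustrate the pattern with stdlib only (a thread pool standing in for remote gRPC workers): an exception raised inside the worker is captured and re-raised at the call site when the result is collected, so the caller handles it with an ordinary try/except:

```python
import traceback
from concurrent.futures import ThreadPoolExecutor

def worker_routine(x: int) -> int:
    # Runs "remotely" in a worker thread; any exception raised here
    # is captured by the future and re-raised at the call site.
    if x == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return 100 // x

def call(x: int):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(worker_routine, x)
        try:
            return future.result()  # worker exception re-raises here
        except ZeroDivisionError:
            traceback.print_exc()  # stack trace crosses the boundary
            return None  # recovery strategy is up to the application

print(call(4))  # 25
print(call(0))  # None (after printing the worker's traceback)
```

In-process threads preserve tracebacks for free; across real process or network boundaries a traceback has to be pickled and reattached, which is the problem the tbpickle mechanism mentioned above solves for Wool.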

I'm not sure about your specific needs, but I'm considering adding a simple CLI-based worker management tool for users who don't want or need a full service orchestrator like Kubernetes in their stack.

bzurak 4 hours ago

I should add - Wool supports ephemeral worker pools, i.e., pools spawned directly by your application that live for the life of the WorkerPool context. The limitation right now is that there's no remote worker factory - you would need to implement a factory that spawns a remote worker, as well as a truly remote discovery protocol. These are things I plan to add in future updates, but for now only machine-local and LAN discovery is implemented.
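The ephemeral-pool idea - sketched here with plain asyncio as an analogy, not Wool's WorkerPool API - is just a pool whose workers exist exactly as long as the enclosing context:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def ephemeral_pool(n_workers: int):
    # Spawn local "workers" that drain a shared queue; they live only
    # for the duration of this context, analogous to a pool spawned
    # by the application itself rather than a standing cluster.
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []

    async def worker():
        while True:
            fn, arg = await queue.get()
            results.append(fn(arg))
            queue.task_done()

    tasks = [asyncio.create_task(worker()) for _ in range(n_workers)]
    try:
        yield queue, results
        await queue.join()  # drain all submitted work before teardown
    finally:
        for t in tasks:
            t.cancel()  # pool dies when the context exits
        await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    async with ephemeral_pool(2) as (queue, results):
        for x in range(5):
            queue.put_nowait((lambda v: v * v, x))
    return sorted(results)

print(asyncio.run(main()))  # [0, 1, 4, 9, 16]
```

A remote worker factory would replace `asyncio.create_task(worker())` with spawning a process on another machine, plus a discovery step so peers can find it - the two pieces described above as not yet implemented.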