Hi HN, I’m the creator of pycoClaw.
I wanted to run OpenClaw-class, platform-agnostic, autonomous agents on MicroPython hardware, but standard tools couldn't handle the scale of the task.
pycoClaw is the result, which bridges the gap between high-level AI reasoning and bare-metal execution.
The Stack:
- PFC Agent (~26k LOC): A full-featured agent that uses an LLM to 'self-program' its own local MicroPython scripts. Once a task is solved, it runs locally without requiring the LLM.
- ScriptoStudio IDE: A PWA https://scriptostudio.com designed for the iteration speed required by autonomous agents. Since it’s a PWA, it brings a full dev environment (including a real single-step debugger) to any platform, including iPadOS.
- ScriptoHub ( https://scriptohub.ai ): A repository for "Skills" and extensions. Since the agent can generate and execute code, I built a curated hub with automated malware checking to ensure the community can safely share and deploy hardware logic.
- IANA Protocol: To make the IDE fast and reliable, I registered a new WebSocket subprotocol (registry: https://www.iana.org/assignments/websocket/websocket.xhtml ). It’s designed for high-frequency state sync and you can read the spec here: https://jetpax.github.io/webrepl/webrepl_binary_protocol_rfc...
- Custom C Extensions: ~17,900 lines of custom modules that optimize MicroPython's memory usage and fast-path execution.
Stats: 10k LOC platform, 26k LOC PFC agent, and ~18k LOC of custom C extensions to optimize MicroPython’s memory and fast-path execution on the ESP32.
Quick Start: You can flash the runtime to an ESP32S3 or P4 in one click via WebSerial at https://pycoclaw.com. Note that all flashing and serial communication happens entirely client-side in your browser.
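To make the 'self-programming' model concrete, here is a minimal sketch of the idea: the LLM is consulted only the first time a task is seen, the generated script is persisted, and later runs execute the cached code with no LLM in the loop. Every name here is hypothetical (the real PFC agent is ~26k LOC); `ask_llm_for_script` stands in for the actual LLM call.

```python
# Sketch of the 'self-program' model: generate once with the LLM,
# cache the script locally, then run it without the LLM.
import os

SKILL_DIR = "skills"

def ask_llm_for_script(task: str) -> str:
    # Placeholder: in the real agent this is an LLM round-trip that
    # returns MicroPython source solving `task`.
    return "def run():\n    return 'solved: %s'\n" % task

def solve(task: str):
    os.makedirs(SKILL_DIR, exist_ok=True)  # on MicroPython: os.mkdir
    path = "%s/%s.py" % (SKILL_DIR, task)
    if "%s.py" % task not in os.listdir(SKILL_DIR):
        # First encounter: have the LLM write the skill, persist it.
        with open(path, "w") as f:
            f.write(ask_llm_for_script(task))
    # Every later run: execute the cached script, no LLM involved.
    ns = {}
    with open(path) as f:
        exec(f.read(), ns)
    return ns["run"]()
```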
I'd love to hear your thoughts on the 'self-programming' model or the system architecture!
This project looks super cool. I love the idea of having OpenClaw on a low-powered device. I am working on something and should have it out next week. It was designed to run on a Raspberry Pi and would be a great companion to your project. I'll post back when it's live and would love for you to take a look.
Hope it's a cool local LLM.
Also, I'll soon have this running on the RP2350, so stay tuned for that!
pycoClaw has a built-in provider router so that heartbeats etc. can go local to save tokens and $$$.

This sounds interesting, but I have no idea what use case this could have other than a robot that can communicate with an LLM, though it appears it's more than that. Isn't it slow to run scripts rather than have preprogrammed hardware with tight loops written in C? I'm genuinely confused; please don't take it as criticism. What kinds of projects have you done, or what do you have in mind or envisage for the future? Autonomous robots? Cheers
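The provider router mentioned above (heartbeats go local to save tokens) might look roughly like this. This is a sketch, not pycoClaw's actual code; the provider functions and the routing rule are made up for illustration.

```python
# Sketch of a provider router: cheap, frequent traffic (heartbeats,
# acks, status pings) goes to a local model; hard tasks go to a paid
# cloud LLM. All providers and task kinds here are hypothetical.

LOCAL_TASKS = {"heartbeat", "ack", "status"}

def call_local(prompt):
    return "local:" + prompt   # stand-in for an on-LAN model call

def call_cloud(prompt):
    return "cloud:" + prompt   # stand-in for a paid API call

def route(task_kind, prompt):
    # Heartbeats etc. never burn paid tokens.
    if task_kind in LOCAL_TASKS:
        return call_local(prompt)
    return call_cloud(prompt)
```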
As a robot, it can literally program itself: it becomes its own firmware engineer and can adapt its own code.
As an assistant, it remembers everything, up to the limit of its 2 TB SD card, both facts and vectors, but can still reach out to, say, Google Sheets.
As a plaything, it's a Tamagotchi on steroids with its own character that responds to voice.
---
Let's say you had an unknown EV motor on a CAN bus. You want to make it run, but you don't know the right protocols or the right codes. An LLM does, or can work them out. You ask the device, as you would ask an experienced firmware engineer: play with this motor until you make it work. So the LLM does, and when it has the code working, it creates a skill that runs the motor when a pedal on the CAN bus is pressed. No more firmware engineer.
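The motor example would bottom out in a generated "skill" along these lines. The frame IDs, byte layout, and scaling below are all invented; the point is that the LLM discovers them once, and then this code runs with no LLM in the loop.

```python
# Sketch of an agent-generated CAN skill: decode the pedal frame,
# scale it to a throttle percentage, and build the motor-command
# frame. IDs, layout, and scaling are hypothetical.
import struct

PEDAL_ID = 0x1A0   # hypothetical pedal-position frame ID
MOTOR_ID = 0x2B0   # hypothetical motor-command frame ID

def pedal_to_throttle(frame_data: bytes) -> int:
    # Pedal position: unsigned 16-bit big-endian in bytes 0-1, 0..4095.
    raw, = struct.unpack(">H", frame_data[:2])
    return min(100, raw * 100 // 4095)   # percent

def motor_command(throttle_pct: int) -> bytes:
    # Motor frame payload: one throttle byte plus 7 bytes of padding.
    return struct.pack(">B7x", throttle_pct)
```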
Or let's say you want to build a wearable that runs on batteries for a long time, so you need a mW-class processor that can store all your context and goes with you everywhere: the pendant, the clasp, the buckle. You speak to it with your earbuds; it only knows wake words but streams your audio to the LLM. It's not just a chatbot. It is the agent that stores your whole digital life.
Or maybe, because it's a $2 part, it's whimsical, like a Tamagotchi or a bag charm. Take care of me and I respond, but I help you too. 'You tell stuff to me you don't want to tell Google or Apple, or even your best friend.'
---
The key is that this is not a chatbot; it's an embodied OpenClaw-class agent. Take it where you will, on whatever LLM you will. And it's so cheap, you can tie it to your service and give it away.
As for speed, ask yourself: how does numpy work? The answer to that is exactly why pycoClaw works. Its ~20k LOC fast path is in C, but it's exposed to the agent as Python, because it has to be. It has to be runtime-modifiable, because that's what an agent needs, because an agent is 'just' a while loop with a REPL. The key is what the LLM populates the REPL with.
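That "while loop with a REPL" can be stated almost literally. A skeleton, with `next_llm_action` as a hypothetical stand-in for the real LLM round-trip (here it just returns a fixed line of code so the loop is runnable):

```python
# The agent skeleton described above: a loop whose body is a REPL
# that the LLM populates with code.
def next_llm_action(observation):
    # Placeholder: the real harness sends `observation` to the LLM
    # and gets back Python source to execute next.
    return "result = observation + 1"

def agent_loop(observation, steps=3):
    ns = {}
    for _ in range(steps):          # the 'while' loop (bounded here)
        code = next_llm_action(observation)
        ns["observation"] = observation
        exec(code, ns)              # the REPL: run LLM-written code
        observation = ns["result"]  # feed the result back in
    return observation
```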
The LLM is the tokens-in, tokens-out intelligence, but it turns out the agent, or harness, is where the power really lies.
pycoClaw is the LLM harness for hardware.