Hello HN, we're Jeet and Husain from Modulus (https://modulus.so) - a desktop app that lets you run multiple coding agents with shared project memory. We built it to solve two problems we kept running into:
- Cross-repo context is broken. When working across multiple repositories, agents don't understand dependencies between them. Even if we open two repos in separate Cursor windows, we still have to manually explain the backend API schema while making changes in the frontend repo.
- Agents lose context. Switching between coding agents often means losing context and repeating the same instructions.
Modulus shares memory across agents and repositories so they can understand your entire system.
It's an alternative to tools like Conductor for orchestrating AI coding agents, but we focused specifically on multi-repo workflows (e.g., backend repo + client repo + shared library repo + AI agents repo). We built our own Memory and Context Engine from the ground up, specifically for coding agents.
Why build another agent orchestration tool? It came from our own problem. While working on our last startup, Husain and I were constantly working across two repositories, which meant manually pasting API schemas between Cursor windows - telling the frontend agent what the backend API looked like again and again. So we built a small context engine to share knowledge across repos and hooked it up to Cursor via MCP. This later became Modulus.
Soon, Modulus will let teams share knowledge with each other to improve their AI coding workflows - enabling collaboration in the era of AI coding. Our API will let developers switch between coding agents or IDEs without losing any context.
If you wanna see a quick demo before trying it out, here's our launch post: https://x.com/subhajitsh/status/2024202076293841208
We'd greatly appreciate any feedback you have and hope you get the chance to try out Modulus.
the memory engine question is the crux — most 'shared memory' approaches either go vector db (semantic search loses precision on code) or graph (precise but expensive to maintain across repo changes). curious which direction you went. one thing that works surprisingly well for cross-repo context: storing explicit schema contracts as structured facts rather than raw embeddings. agents can retrieve 'what does /api/users return' without semantic fuzziness
Hey, Husain here, cofounder of Modulus. Good point - we do just that: storing the explicit schemas as structured facts. Relevance is based on a similarity threshold over embeddings of each repo's purpose, and schema fetching is based on the structured facts.
How does your memory engine actually work?
Hey, Husain here, cofounder of Modulus. I could talk about this for hours, but here's a summary: every repo the user adds is analyzed for technical specifications, which are stored without the code itself and updated every time a significant change is made to the codebase. At retrieval time, we check the connected repos for relevance and extract their specs as context for the ongoing task. Hope that answers your question!
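One way to read the flow described in this thread as code - a sketch under assumptions, not Modulus's implementation. Every name is hypothetical, and `embed` is a bag-of-words stand-in for a real embedding model: each repo gets a stored spec plus an embedding of its stated purpose, and at retrieval time the specs of repos whose purpose clears a similarity threshold against the task are pulled in as context.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class MemoryEngine:
    """Hypothetical sketch of per-repo specs plus threshold-based retrieval."""

    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold
        self.repos: dict[str, dict] = {}

    def analyze_repo(self, name: str, purpose: str, spec: str) -> None:
        # The spec is stored without the code itself; in the described flow,
        # this step would re-run whenever a significant change lands.
        self.repos[name] = {"spec": spec, "vec": embed(purpose)}

    def context_for(self, task: str) -> list[str]:
        # Pull specs from connected repos whose purpose is similar enough
        # to the ongoing task.
        tvec = embed(task)
        return [
            r["spec"]
            for r in self.repos.values()
            if cosine(tvec, r["vec"]) >= self.threshold
        ]


engine = MemoryEngine()
engine.analyze_repo("backend", "user accounts REST API service",
                    "GET /api/users returns {id, email, created_at}")
engine.analyze_repo("infra", "terraform deployment scripts",
                    "modules for AWS ECS deployment")
ctx = engine.context_for("add an email field to the user accounts page")
```

With these toy inputs, only the backend repo's spec is surfaced for the frontend task; the unrelated infra repo stays out of the context window.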