Looking for Partner to Build Agent Memory (Zig/Erlang)

I’m working on a purpose-built memory platform for autonomous AI agents.

Right now, agent memory is stuck between two ho-hum options: RAG (which loses relational topology) and graph databases (which require massive pointer chasing and degrade under heavy recursive reasoning).

I'm building an alternative using Vector Symbolic Architecture (Hyperdimensional Computing). By mathematically binding facts, sequences, and trees into fixed-size high-dimensional vectors (D=16,384), we can compress complex graph traversals into O(1) SIMD operations… and do some quasi brain-like stuff cheaply, that is, without GPUs and LLMs.
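
To make the binding/superposition idea concrete, here is a minimal pure-Python sketch at toy dimensionality (the real system is Zig with AVX-512, and all names here are illustrative, not the actual API): XOR acts as binding, bitwise majority as superposition, and a single Hamming sweep replaces the graph traversal.

```python
import random
random.seed(0)
D = 2048  # toy size; the post uses D = 16,384

def hv():
    """Random dense binary hypervector."""
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """XOR binding: self-inverse, so bind(bind(a, b), b) == a."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(vs):
    """Bitwise majority vote: the superposition stays similar to each input."""
    return [1 if sum(bits) * 2 > len(vs) else 0 for bits in zip(*vs)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Three edges of a toy graph, each a role-filler superposition
SUBJ, PRED, OBJ = hv(), hv(), hv()
alice, knows, bob, likes, zig, carol = hv(), hv(), hv(), hv(), hv(), hv()
facts = [
    bundle([bind(SUBJ, alice), bind(PRED, knows), bind(OBJ, bob)]),
    bundle([bind(SUBJ, bob),   bind(PRED, likes), bind(OBJ, zig)]),
    bundle([bind(SUBJ, carol), bind(PRED, knows), bind(OBJ, bob)]),
]
memory = bundle(facts)  # the whole graph in one fixed-size vector

# "What fills the OBJ role?" — one unbind plus one Hamming sweep,
# independent of how many facts are stored.
probe = bind(memory, OBJ)
scores = {n: hamming(probe, v)
          for n, v in [("alice", alice), ("bob", bob), ("zig", zig)]}
```

Entities that actually filled the object role sit well below the ~D/2 distance of unrelated vectors; that gap is the whole trick.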

The design is maturing nicely and is strictly bifurcated to respect mechanical sympathy:

• The Data Plane (Zig): Pure bare-metal math. 2GB memory-mapped NVMe tiles via io_uring. Facts are superposed into lock-free 8-bit accumulators strictly aligned to 64-byte cache lines. Queries are executed via AVX-512 popcount instructions to calculate Hamming distances at line-rate. Zero garbage collection.

• The Control Plane (Gleam): Handles concurrency, routing, and a Linda-style Tuplespace for external comms. It manages the agent "clean-up" loops and auto-chunking without ever blocking the data plane.

• The Bridge: A strict C-ABI / NIF boundary passing pointers from the BEAM schedulers directly into the Zig muscle.
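
For intuition on the accumulator idea in the Data Plane bullet, here is a hedged pure-Python sketch: per-dimension 8-bit saturating counters, superposition as counter updates, and a thresholded readout. Alignment, io_uring, and AVX-512 popcount are not modeled; the names and sizes are illustrative.

```python
import random
random.seed(1)
D = 2048  # toy size; the post describes 16,384 dimensions in 64-byte-aligned tiles

class Accumulator:
    """Per-dimension 8-bit saturating counters: a Python stand-in for the
    lock-free accumulators described above (no SIMD, no alignment here)."""
    def __init__(self):
        self.counts = [128] * D  # midpoint of the unsigned 8-bit range

    def superpose(self, bits):
        # +1 for a set bit, -1 for a clear bit, saturating at [0, 255]
        for i, b in enumerate(bits):
            self.counts[i] = min(255, max(0, self.counts[i] + (1 if b else -1)))

    def readout(self):
        """Threshold the counters back into a binary hypervector."""
        return [1 if c >= 128 else 0 for c in self.counts]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

facts = [[random.randint(0, 1) for _ in range(D)] for _ in range(5)]
acc = Accumulator()
for f in facts:
    acc.superpose(f)

mem = acc.readout()
stranger = [random.randint(0, 1) for _ in range(D)]
# each stored fact sits well below the ~D/2 noise floor; a stranger does not
```

The 8-bit counters are what make forgetting possible later: unlike a pure binary bundle, they keep enough statistical mass to subtract from.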

There is no VC fluff here, and I'm not making wild claims about AGI. I have most of the spec, the memory-layout invariants, and the architecture designed. I'm starting to code and making good progress.

I’m looking for someone who loves low-level systems (Zig/Rust/C) or highly concurrent runtimes (Erlang) to help me build the platform. This is my second AI platform; the first one is healthy and growing.

If you are interested in bare-metal systems engineering to fix the LLM context bottleneck, I'd love to talk: email me at acowed@pm.me.

Cheers, Kendall

valentinza 1 day ago

Fascinating approach — using VSA to compress graph traversals into O(1) SIMD operations is a clever way to sidestep the RAG vs graph DB trade-off. Curious about a couple of things: how do you handle fact deletion or correction once something is superposed into the accumulators? And what does the query interface look like from the agent's perspective — is it purely similarity-based via Hamming distance, or do you support structured relational queries too?

kendallgclark 9 hours ago

Thanks for the question.

Unlike vector databases that append embeddings to an HNSW graph, my working-memory substrate natively supports mathematical forgetting.

I use a Squelch primitive—a SIMD-parallelized saturating subtraction over an 8-bit probabilistic accumulator.

When an agent finishes a chain-of-thought, we literally subtract the statistical mass of that specific reasoning path out of the 16,384-dimensional superposition.

It intentionally drops the signal back below the Kanerva noise floor, freeing up capacity in the L3 cache without destroying the other superposed facts. It works similarly for episodic memory and procedural memory.
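
A toy version of that forgetting step, assuming "Squelch" amounts to saturating subtraction of a previously superposed vector from the per-dimension counters (pure Python; the weight of 1 and the names are my own illustrative choices):

```python
import random
random.seed(2)
D = 2048  # toy dimensionality

def rand_hv():
    return [random.randint(0, 1) for _ in range(D)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

counts = [128] * D  # 8-bit probabilistic accumulator, biased to the midpoint
facts = [rand_hv() for _ in range(5)]

def superpose(f):
    for i, b in enumerate(f):
        counts[i] = min(255, max(0, counts[i] + (1 if b else -1)))

def squelch(f):
    """Saturating subtraction of one fact's statistical mass — the 'Squelch'
    primitive as I understand it from the description above."""
    for i, b in enumerate(f):
        counts[i] = min(255, max(0, counts[i] - (1 if b else -1)))

def readout():
    return [1 if c > 128 else 0 for c in counts]

for f in facts:
    superpose(f)

before = hamming(readout(), facts[0])  # resonates: well under D/2
squelch(facts[0])                      # forget the finished reasoning path
after = hamming(readout(), facts[0])   # back at the ~D/2 noise floor
```

After the subtraction the squelched fact reads as pure noise, while the other superposed facts still resonate, which is the "without destroying the other facts" property.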

Semantic memory, currently, holds invariants and “the schema,” so there are no “deletions” there, but this will likely be reworked.

As for retrieval, yeah, similarity via Hamming is table stakes. But there's other stuff as well, including resonator-network factorization and a Datalog variant.

We map Datalog semantics directly to VSA using 16,384-dimensional hypervectors.

Instead of relational tables, our EDB consists of 'Pentads' (Subj, Pred, Obj, Context, Lineage) bound together using prime-number circular bit-shifts to encode grammatical roles.

These facts are superposed into an 8-bit probabilistic accumulator.

For the IDB and schema enforcement, we use a 'Warden' actor in Gleam that intercepts state changes in the Tuplespace, validating them against constraints before they ever cross the C-ABI boundary.

When we query the Datalog store, there are no B-trees or graph traversals.

We construct a 16k-bit probe and use AVX-512 SIMD to perform Maximum Likelihood Decoding directly against the superposed noise in O(1).
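
Here is a small Python sketch that puts the pentad encoding and the decode step together. The prime shift offsets, field names, and codebook are my own illustrative assumptions, and a plain argmin over Hamming distances stands in for the AVX-512 maximum-likelihood decode:

```python
import random
random.seed(3)
D = 2048  # toy size; the post uses 16,384

# Illustrative prime circular-shift offsets, one per grammatical role
ROLE_SHIFT = {"subj": 2, "pred": 3, "obj": 5, "ctx": 7, "lineage": 11}

def rand_hv():
    return [random.randint(0, 1) for _ in range(D)]

def rotate(v, k):
    """Circular bit-shift: cheap, invertible, so it can encode a role."""
    k %= D
    return v[-k:] + v[:-k]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def encode_pentad(fields):
    """Majority-superpose the five role-shifted fillers into one vector."""
    shifted = [rotate(fields[r], ROLE_SHIFT[r]) for r in ROLE_SHIFT]
    return [1 if sum(bits) * 2 > len(shifted) else 0 for bits in zip(*shifted)]

codebook = {n: rand_hv() for n in ["alice", "bob", "knows", "ctx1", "root"]}
fact = encode_pentad({"subj": codebook["alice"], "pred": codebook["knows"],
                      "obj": codebook["bob"], "ctx": codebook["ctx1"],
                      "lineage": codebook["root"]})

def decode(vec, role):
    """Maximum-likelihood decode: un-shift the role, then pick the codebook
    entry at the smallest Hamming distance (SIMD popcount in the real thing)."""
    probe = rotate(vec, -ROLE_SHIFT[role])
    return min(codebook, key=lambda n: hamming(probe, codebook[n]))
```

The correct filler resonates far below the ~D/2 distance of every other codebook entry, so the argmin is the maximum-likelihood answer, and its cost does not depend on how many facts were superposed.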

Because standard Datalog struggles with time, we extended the semantics to support LTL+ (Next, Eventually, Always, Until) natively over the vector space.

Our episodic memory isn't a flat table; it's a strictly chronological linked list of physical accumulators stitched together with Holographic Pointers.

To evaluate temporal modalities like Eventually (◊) or Until (U), we don't use expensive SQL window functions or graph traversals.

The Zig sidecar just follows the Holographic Pointers, performing an O(1) SIMD resonance check at each temporal node.

If the target state resonates out of the noise at, say, Node 5, the LTL query resolves to true.

We execute temporal logic as a recursive physical jump through a non-Euclidean probability space.
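
Under my reading of the Eventually (◊) walk, a Python sketch with a plain list standing in for the Holographic-Pointer chain, and an illustrative resonance threshold set below the ~D/2 noise floor:

```python
import random
random.seed(4)
D = 2048  # toy dimensionality

def rand_hv():
    return [random.randint(0, 1) for _ in range(D)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Anything this far under the ~D/2 noise floor counts as "resonating";
# the exact threshold is an illustrative choice.
RESONANCE = D // 2 - 200

# Chronological chain of episodic snapshots; a plain list stands in for
# the physical accumulators stitched together by pointers.
chain = [rand_hv() for _ in range(8)]

def eventually(chain, goal):
    """LTL ◊goal: walk the chain node by node; true at the first resonance."""
    return any(hamming(node, goal) < RESONANCE for node in chain)

# A noisy recollection of the state at node 5 (about 10% of bits flipped)
goal = [b ^ (random.random() < 0.1) for b in chain[5]]
```

Until (U) follows the same shape: walk forward checking the invariant at every node until the target resonates, still one Hamming check per temporal hop.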

Next week I'll be working on encoding agent text into vectors directly, without an LLM or SLM to assist.

I hope that helps!

tlb 1 day ago

I'm interested in this, but only passingly familiar with it from several years ago. Can you link to what you believe the current state of the art is?

kendallgclark 1 day ago

State of the art for HDC/VSA? Or for agentic memory?

tlb 21 hours ago

HDC/VSA.

kendallgclark 9 hours ago

Well probably this recent piece by Kanerva. https://arxiv.org/abs/2503.23608

claudiug 1 day ago

Great and neat project! I'd like to ask: where do you see the value here? There are a lot of tools for memory, context, etc.

kendallgclark 9 hours ago

Thanks. Yes all spaces are crowded.

IMO the value here will be quasi brain-like operations on data that are fast and efficient.

We overuse LLMs, which aren't fast and are very inefficient.

So the value here is being able to support a shift of some workloads from LLM to smart agentic memory.