Results summary: Baseline heuristic policy achieves 42% success rate on FetchPush-v4. With memory augmentation
(recall past experiences before each episode), it reaches 67% — a +25pp improvement. Cross-environment transfer
from FetchPush to FetchSlide adds +8pp over baseline.
The API has 7 endpoints — the core loop is:
- learn(insight, context) — store what worked (or failed)
- recall(query) — retrieve relevant past experiences, ranked by text + vector + spatial similarity
- save_perception(data) — store raw trajectories/forces
- start_session / end_session — episode lifecycle with auto-consolidation
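The core loop above can be sketched roughly like this. This is a minimal stand-in illustrating the pattern, not the actual robotmem API: only the method names and the `learn(insight, context)` / `recall(query)` / `save_perception(data)` signatures come from the list above; the class name, constructor, storage, and return shapes are assumptions, and ranking is reduced to naive substring matching instead of the real text + vector + spatial similarity.

```python
# Minimal in-memory sketch of the learn/recall episode loop.
# NOT the real library: plain-list storage and substring matching
# stand in for SQLite-backed text + vector + spatial ranking.
class MemorySketch:
    def __init__(self):
        self.insights = []      # (insight, context) pairs
        self.perceptions = []   # raw trajectories / force readings
        self.in_session = False

    def start_session(self):
        self.in_session = True

    def learn(self, insight, context):
        # Store what worked (or failed), tagged with its context.
        self.insights.append((insight, context))

    def recall(self, query):
        # Real ranking combines text + vector + spatial similarity;
        # here: naive substring match against the stored context.
        return [i for i, c in self.insights if query in c]

    def save_perception(self, data):
        self.perceptions.append(data)

    def end_session(self):
        # The real library auto-consolidates here; the stub just closes.
        self.in_session = False


mem = MemorySketch()
mem.start_session()
mem.learn("push from behind the block", context="FetchPush heavy block")
mem.save_perception({"forces": [0.1, 0.3]})
hints = mem.recall("FetchPush")  # → ["push from behind the block"]
mem.end_session()
print(hints)
```

The shape to notice is the lifecycle: recall before acting, learn and save_perception during the episode, consolidation on end_session.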
Everything runs on SQLite locally. No cloud, no GPU. Works via MCP (Model Context Protocol) or direct Python
import.
pip install robotmem — quick demo runs in 2 minutes.
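For the MCP route, a client config entry would look roughly like this — the outer `mcpServers` shape is the standard MCP client convention, but the command and args are assumptions (check the project README for the actual invocation):

```json
{
  "mcpServers": {
    "robotmem": {
      "command": "python",
      "args": ["-m", "robotmem"]
    }
  }
}
```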
DANmode 4 days ago
Recommend providing a text summary of the comparison chart - and talking a bit about the API.
DANmode 4 days ago
I’m going to say this has failed the Turing test based on the reply.
robotmem 3 days ago
[dead]
sankalpnarula 3 days ago
Hey, just curious: what happens when the memory gets large enough? Does it start creating problems with context windows?