Show HN: I gave my robot physical memory – it stopped repeating mistakes (github.com)

robotmem 4 days ago

Thanks for the feedback!

  Results summary: Baseline heuristic policy achieves 42% success rate on FetchPush-v4. With memory augmentation
  (recall past experiences before each episode), it reaches 67% — a +25pp improvement. Cross-environment transfer
  from FetchPush to FetchSlide adds +8pp over baseline.

  The API has 7 endpoints — the core loop is:

  - learn(insight, context) — store what worked (or failed)
  - recall(query) — retrieve relevant past experiences, ranked by text + vector + spatial similarity
  - save_perception(data) — store raw trajectories/forces
  - start_session / end_session — episode lifecycle with auto-consolidation

  Everything runs on SQLite locally. No cloud, no GPU. Works via MCP (Model Context Protocol) or direct Python
  import.
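  A minimal sketch of what a learn/recall loop over local SQLite could look like. This is not the actual robotmem implementation; the table schema, the 50/50 text/vector score blend, and every function body below are assumptions for illustration (the real ranking also uses spatial similarity, omitted here for brevity):

  ```python
  import json
  import math
  import sqlite3

  def connect(path=":memory:"):
      # Local SQLite store; no cloud, no GPU.
      db = sqlite3.connect(path)
      db.execute(
          "CREATE TABLE IF NOT EXISTS insights ("
          "id INTEGER PRIMARY KEY, insight TEXT, context TEXT, embedding TEXT)"
      )
      return db

  def learn(db, insight, context, embedding):
      # Store what worked (or failed), tagged with a context label and a vector.
      db.execute(
          "INSERT INTO insights (insight, context, embedding) VALUES (?, ?, ?)",
          (insight, context, json.dumps(embedding)),
      )
      db.commit()

  def _cosine(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      na = math.sqrt(sum(x * x for x in a))
      nb = math.sqrt(sum(y * y for y in b))
      return dot / (na * nb) if na and nb else 0.0

  def recall(db, query_text, query_vec, k=3):
      # Rank stored experiences by a blend of keyword overlap (Jaccard)
      # and embedding cosine similarity; return the top k.
      q_words = set(query_text.lower().split())
      scored = []
      for insight, context, emb_json in db.execute(
          "SELECT insight, context, embedding FROM insights"
      ):
          words = set(insight.lower().split())
          text_score = len(q_words & words) / max(len(q_words | words), 1)
          vec_score = _cosine(query_vec, json.loads(emb_json))
          scored.append((0.5 * text_score + 0.5 * vec_score, insight, context))
      scored.sort(reverse=True)
      return scored[:k]
  ```

  Usage follows the pattern described above: call learn after each episode, then recall before the next one to bias the policy toward what worked.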

  pip install robotmem — quick demo runs in 2 minutes.
DANmode 4 days ago

Recommend providing a text summary of the comparison chart - and talking a bit about the API.

DANmode 4 days ago

I’m going to say this has failed the Turing test based on the reply.

robotmem 3 days ago

[dead]

sankalpnarula 3 days ago

Hey, just curious: what happens when the memory gets large enough? Does it start creating problems with context windows?

RovaAI 4 days ago

[flagged]

robotmem 3 days ago

[dead]