Hey HN,
My cofounder and I got tired of CC ignoring our markdown files, so we spent 4 days building a plugin that automatically steers CC based on our previous sessions. The problem usually shows up post plan-mode.
What we've tried:
- Heavy plan mode use (works great)
- CLAUDE.md, AGENTS.md, MEMORY.md
- Local context folder (upkeep is a pain)
- Cursor rules (for Cursor)
- claude-mem (OSS) -> does session continuity, not steering
We use fusion search to surface your past CC steering corrections, combining:
- user prompt embeddings + bm25
- correction embeddings + bm25
- time decay
- target query embeddings
- exclusions
- metadata hard filters (such as files)
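To make the ranking concrete, here's a minimal sketch of how those signals could be fused: a hard metadata filter first, then a weighted blend of embedding cosine similarity, normalized BM25, and exponential time decay. The weights, the half-life, and the memory schema (`vec`, `tokens`, `created`, `file`) are all illustrative assumptions, not the actual plugin internals:

```python
import math

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    # docs: list of token lists; classic Okapi BM25 over the candidate set
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    scores = []
    for d in docs:
        s = 0.0
        for t in query_terms:
            f = d.count(t)
            if f == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fuse(query_vec, query_terms, memories, now, half_life_days=30.0,
         w_embed=0.5, w_bm25=0.3, w_decay=0.2, files=None):
    # metadata hard filter first (e.g. restrict to specific files)
    cands = [m for m in memories if not files or m["file"] in files]
    if not cands:
        return []
    bm25 = bm25_scores(query_terms, [m["tokens"] for m in cands])
    bmax = max(bm25) or 1.0  # normalize BM25 into [0, 1]
    out = []
    for m, bs in zip(cands, bm25):
        # half-life time decay: a 30-day-old memory scores 0.5 on this signal
        decay = 0.5 ** ((now - m["created"]) / (half_life_days * 86400))
        score = (w_embed * cosine(query_vec, m["vec"])
                 + w_bm25 * bs / bmax
                 + w_decay * decay)
        out.append((score, m["text"]))
    return sorted(out, reverse=True)
```

A recent, lexically- and semantically-matching correction ends up ranked first; stale or off-topic memories sink.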
The CC plugin:
- Automatically captures memories/corrections without you having to remind CC
- Automatically injects corrections without you having to ask
The plugin merges, updates, and distills your memories, then injects the most relevant ones after each of your prompts.
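Mechanically, per-prompt injection fits Claude Code's hooks: a `UserPromptSubmit` hook receives the event as JSON on stdin, and whatever it prints to stdout is added to the model's context for that turn. A minimal sketch of the injection step, where `search_memories` and the returned correction are hypothetical stand-ins, not the actual plugin code:

```python
def search_memories(prompt: str) -> list[str]:
    # Placeholder (assumption): the real plugin would run fusion search
    # over stored corrections and return the top-scoring ones.
    return ["Correction: use pnpm, never npm, in this repo."]

def build_injection(event: dict) -> str:
    # `event` is the hook payload Claude Code pipes in as JSON on stdin;
    # the string this returns would be printed to stdout, which the
    # UserPromptSubmit hook contract appends to the model's context.
    hits = search_memories(event.get("prompt", ""))
    if not hits:
        return ""
    return "Relevant past corrections:\n" + "\n".join(f"- {h}" for h in hits)
```

The capture side would be a similar hook on the other end of the turn, extracting corrections from the transcript.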
We're not sure if we're alone in this. We're working on benchmarks to measure how effective context injection actually is at steering CC, and we know we need to keep improving extraction and search and to add more integrations.
We're passionate about a real-time, personalized context layer for agents: giving agents a way to understand what you mean when you say "this" or "that", and bringing the context of your world into a secure, structured, real-time layer all your agents can access.
Would appreciate feedback on how you get CC to actually follow your markdown files and understand your modus operandi, feedback on the plugin, or anything else about real-time memory and context.
- Ankur
It seems like every prompt is sent over to gopeek.ai and that's a pretty big thing you forgot to mention.
Updating now, you're right! Sorry, that wasn't intentional. We're already working on whitelists and blacklists for files, topics, etc.
Sensitive data such as keys, tokens, and PII is wiped and never stored.
If it's a non-starter for most users, we'd definitely build out encryption and local storage formats.
Cloud was for product velocity, not malice. Again, we put it together in 4 days.
Appreciate the warning here.
The sheer number of people throwing their nonsense memory implementations at the wall right now is just..
As pointed out below:
Currently your prompts are processed by our server, hosted at www.gopeek.ai. This was meant for velocity and early iteration while we get the data models right.
We're already working on whitelists and blacklists for files, topics, and memory exports, and can even build self-hosted/locally hosted versions, so please let us know what's a non-starter on this front.
Sensitive data such as keys, tokens, and PII is already wiped and never stored.
Yeah I'm not using this.
Information We Collect
Account Information
Conversation Data (Claude Code Plugin)
When you use the Peek Claude Code plugin, it reads portions of your Claude Code conversation transcript on your local machine and sends the following to our servers:
- Your most recent message (prompt text)
- Recent conversation context (up to 10 prior messages, truncated to 500 characters each)
Understood. What if it were fully local, or set up so that only you hold the decryption key?