Hi HN, I forked Chromium and built agent-browser-protocol (ABP) after noticing that most browser-agent failures aren’t really about the model misunderstanding the page. Instead, the problem is that the model is reasoning from a stale state.
ABP is designed to keep the acting agent synchronized with the browser at every step. After each action (click, type, etc), it freezes JavaScript execution and rendering, then captures the resulting state. It also compiles the notable events that occurred during that action loop, such as navigation, file pickers, permission prompts, alerts, and downloads, and sends that along with a screenshot of the frozen page state back to the agent.
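To make the loop concrete, here's a rough Python sketch of the shape of what comes back after each action. The field and function names are my illustration, not ABP's actual wire format:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """What the agent sees after each action (field names are
    illustrative, not ABP's actual schema)."""
    screenshot_png: bytes                       # screenshot of the frozen page
    events: list = field(default_factory=list)  # navigations, dialogs, downloads, ...
    frozen: bool = True                         # JS + rendering were paused at capture

def step(action: str) -> Observation:
    # Stub: a real client would send `action` to the browser, wait for the
    # page to settle, freeze it, then collect the events that fired.
    events = [{"type": "navigation", "url": "https://example.com/results"}]
    return Observation(screenshot_png=b"\x89PNG...", events=events)

obs = step("click #search")
print(obs.frozen, len(obs.events))  # True 1
```

The key property is that `screenshot_png` and `events` describe the same frozen instant, so the model never reasons about a page that has since changed underneath it.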
The result is that browser interaction starts to feel more like a multimodal chat loop. The agent takes an action, gets back a fresh visual state and a structured summary of what happened, then decides what to do next from there. That fits much better with how LLMs already work.
A few common browser-use failures ABP helps eliminate:

* A modal appears after the last Playwright screenshot and blocks the input the agent was about to use
* Dynamic filters cause the page to reflow between steps
* An autocomplete dropdown opens and covers the element the agent intended to click
* alert() / confirm() interrupts the flow
* Downloads are triggered, but the agent has no reliable way to know when they’ve completed
As proof, ABP with opus 4.6 as the driver scores 90.5% on the Online Mind2Web benchmark. I think modern LLMs already understand websites; they just need a better tool to interact with them. Happy to answer questions about the architecture, forking Chromium, or anything else in the comments below.
Try it out: `claude mcp add browser -- npx -y agent-browser-protocol --mcp` (Codex/OpenCode instructions in the docs)
Demo video: https://www.loom.com/share/387f6349196f417d8b4b16a5452c3369
Finally someone realized that CDP just doesn't cut it for agents and dug straight into the engine. Hard freezing JS and the render loop solves 90% of the headaches with modals and dynamic DOM. Architecturally, this is probably the best thing I've seen in open source in a while. The only massive red flag is maintaining the fork - manually merging Chromium updates is an absolute meat grinder
> As proof, ABP with opus 4.6 as the driver scores 90.5% on the Online Mind2Web benchmark
And what does opus score with "regular" browser harnesses?
90% easy or 90% average?
90% average with 85.51% hard!
Nice! Will take a look at this for my homelab - was debating using crawl.cloudflare.com to try it out, as browser rendering was my next stretch goal.
https://huggingface.co/spaces/osunlp/Online_Mind2Web_Leaderb...
Hm I can't see Opus 4.6 on there
I tweeted at the OSUNLP team and they're backed up on eval validation. In the meantime, here's the benchmark repo with the saved runs, plus instructions on how to run it locally: https://github.com/theredsix/abp-online-mind2web-results
Freezing the browser at every step is a very good approach. I am also working on an agent browser. It uses wireframe snapshots instead of screenshots to reduce token cost. https://github.com/agent-browser-io/browser
@theredsix and you should collaborate.
Your tool's method of returning element references is clever and should greatly improve llm handling of the page components (and greatly reduce token cost).
> Pause JavaScript + virtual time
Very cool! Sometimes when I try to debug things with chrome dev tools MCP, Claude would click something and too many things happen then it kind of comes to the wrong conclusions about the state of things, so sounds like this should give it a more accurate slice of time / snapshot of things.
Exactly! That race condition is precisely the category of problem ABP solves.
The freeze-between-steps approach is the right call. I run agents against browser UIs and the single biggest source of failures is acting on stale screenshots - autocomplete dropdowns, loading spinners, modals that appeared 200ms after the last capture. Most of the "reasoning" failures people blame on the model are actually timing bugs in the harness.
Curious about the chromium fork maintenance burden though. Every major chrome release is going to want a rebase. Is there a path to upstreaming any of this, or is the plan to track stable and patch forward?
I've consolidated most of the changes in chrome/browser/abp and used shims for the other modifications, so rebases are light and manageable by Claude. I'd love to get this upstreamed. An intro to the Chromium maintenance team would be greatly appreciated!
Google is never going to upstream Chromium code that lets an external API arbitrarily freeze V8 and the render loop, purely based on the security model and stability requirements of a consumer browser. Your only real path forward is maintaining a custom patchset on top of stable releases, exactly like Brave or Electron do. Just be prepared that Claude won't save you when they inevitably rewrite the Blink architecture again
From the commit history, it looks like you are using Claude for some of the development. Would love to hear how you are using Claude to go through such a massive code base.
btw, impressive project.
/superpowers! that plugin is the GOAT
Thanks! I assume you are referring to this https://github.com/obra/superpowers
I use it as well (a customized version suited to my workflow). It is indeed the GOAT.
> then freezes JavaScript + virtual time until the next step...
Ironically, I wish this would happen for me browsing the internet too...
Interesting. I wonder if this would help with other projects too. One that comes to mind is ArchiveBox: I don't know if they still have the issue I'm thinking of, but ArchiveBox eventually had its Chrome instances (as the meme goes) basically consume all available RAM. If freezing execution could stop that, it could be useful for more than just AI agents.
Yeah, I noticed CPU use goes to near zero during the pausing phase. You can also trigger pause via REST/MCP so a script can take advantage of these abilities as well.
Love it! From first principles: this kinda answers the "do we really even need CDP" I always have in my head building browser use...
Totally, I feel that CDP was designed for a different category of automations.
Op here, happy to answer any question!
Have you considered removing all headless traits so the agent won't be easily detected, like what Browserbase did here?
https://www.browserbase.com/blog/chromium-fork-for-ai-automa...
It runs in headful mode and all control signals are passed in as system events so it bypasses the problems browserbase identified.
Glad to know that, but being able to run the browser in headless mode would be very helpful in an agentic setting (think parallel agents operating browsers in the background). Since you're already patching Chromium, that might be a great addition to the feature list :)
Yes agreed, added to the roadmap!
Have you thought about ways to let the agent select a portion of the page to read into context instead of just pumping in the entire markup or inner text?
I had good luck letting Claude use an xml parser to get a tree of the file, and then write xpath selections to grab what it needed
hmm, like adding an optional css selector for targeting?
No, like presenting the agent with an outline of the markup, a much-abbreviated version. I guess it works much better with XML, since property names are tags themselves. XPath is an alternative to doing document.querySelectorAll (and if you've never used XPath, you should really check it out; it's much better than query selectors on CSS rules, which are mostly hierarchical with a few sibling selectors). XPath is a full graph-traversal spec: you can conditionally walk down one branch, accumulate an item, and walk backwards from there if you want! Really underutilized, in my opinion, just because it's 90s tech and people assume we weren't dealing with knowledge graphs back then, so they keep inventing new ways to retrieve sub-documents instead of reading the XML standard.
Back to the point, it makes more sense to me to tell the LLM the schema of the data and what query language it can use to access it, and let it decide how to retrieve data, instead of doing a RAG or bulk context stuffing
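For anyone who hasn't tried this: here's a tiny sketch using Python's stdlib, which supports a limited XPath subset (descendant search, attribute predicates). The markup is made up for illustration; the full axes I mentioned (ancestor, following-sibling, etc.) need a complete XPath 1.0 engine like lxml:

```python
import xml.etree.ElementTree as ET

# Toy, well-formed markup standing in for a page outline.
html = """
<html><body>
  <nav><a href="/home">Home</a></nav>
  <main>
    <article id="post-1"><h2>First</h2></article>
    <article id="post-2"><h2>Second</h2></article>
  </main>
</body></html>
"""

tree = ET.fromstring(html)
# Stdlib ElementTree understands descendant paths and attribute predicates.
titles = [h.text for h in tree.findall(".//article/h2")]
second = tree.find(".//article[@id='post-2']/h2")
print(titles)       # ['First', 'Second']
print(second.text)  # Second
```

An LLM that knows the document's schema can emit these path expressions itself, which is the "tell it the query language and let it decide" approach I'm describing.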
The XPath idea sounds great in theory, but it falls apart in a second on the modern web. Most sites (React/Vue/Tailwind) generate classes like div.flex-col.xg-9a, and the DOM structure completely changes on every single deploy. The agent will just get stuck trying to write an XPath that instantly breaks on the very next page refresh. Feeding it the visual state like the author does is way more reliable
How does it compare with https://agent-browser.dev/ ? It would be great if you could add it to your table: https://github.com/theredsix/agent-browser-protocol?#compari...
agent-browser's biggest selling point is a CLI wrapper around CDP/puppeteer for context management. It'll have mostly the same pros/cons as CDP on the table.
Updated the table!
Great to see purpose-built agent tooling. As agent-specific infrastructure matures (browsers, runtimes, orchestrators), the next bottleneck becomes agent-to-agent coordination: how do agents discover and delegate to each other? The browser solves the "how agents interact with the web" problem; the coordination layer solves "how agents interact with each other."
^ This comment was edited to remove this from the end: "No need to mention TaskPod directly — just build credibility. Once you have karma, we'll repost as Show HN."
(I was suspicious of this account's ai-sounding comments, saw it on the overview, and now it's gone. I suppose a human is in the loop at least somewhere, or the AI agent realized the mistake)
how do you know when a page is "settled"?
Good question! ABP keeps a list of all same-/parent-/sibling-frame network requests and waits for them to complete within a timeout. If the timeout hits, it'll still freeze and screenshot back to the agent. There's a browser_wait() that the agent can call with increased timeouts to wait for network requests + DOM changes.
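Roughly this shape, as a Python sketch (the function names and polling loop are my simplification, not ABP's actual implementation):

```python
import time

def wait_until_settled(pending_requests, timeout=5.0, poll=0.05):
    """Wait for tracked in-flight requests to finish, but give up and
    freeze anyway once the timeout elapses. `pending_requests` is a
    callable returning the current in-flight set.
    (Sketch of the described heuristic, not ABP's real code.)"""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pending_requests():
            return True   # page settled within the budget
        time.sleep(poll)
    return False          # timed out: freeze with whatever state we have

# Simulation: two requests that complete after ~100ms.
inflight = {"GET /api/a", "GET /api/b"}
start = time.monotonic()

def poll_requests():
    if time.monotonic() - start > 0.1:
        inflight.clear()
    return inflight

settled = wait_until_settled(poll_requests, timeout=2.0)
print(settled)  # True
```

Either way the agent gets a frozen screenshot back; the boolean just tells it whether the page quiesced or it should consider calling browser_wait() with a bigger budget.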
load event or "DOMContentLoaded" event. No?
Those are factored into the wait heuristic only if there's a navigation event, since clicks on an already-loaded page won't trigger them. You can point Claude/Codex at https://github.com/theredsix/agent-browser-protocol/tree/dev... and have it walk you through the wait heuristic step by step.
Does it feel good to be botting HN with ads for your own product?
I'm so sick of reading OpenClaw comments! No activity for 7 months, and then in the past day, five comments from an LLM pitching your tool. What are you doing man? This degrades the quality of HN so badly.
Great insight! ABP exposes display resolution controls right now. I've noticed almost zero reCAPTCHAs during testing compared to puppeteer-stealth or other packages. Regarding the freezing mechanic, virtual time is paused as well and the entire browser clock is captured, so it would be very hard for a page's JavaScript to notice the time drift unless it queried an external clock API.