Getting AI to work in complex codebases (github.com)

maltalex 1 day ago

Interesting read, and some interesting ideas, but there's a problem with statements like these:

> Sean proposes that in the AI future, the specs will become the real code. That in two years, you'll be opening python files in your IDE with about the same frequency that, today, you might open up a hex editor to read assembly.

> It was uncomfortable at first. I had to learn to let go of reading every line of PR code. I still read the tests pretty carefully, but the specs became our source of truth for what was being built and why.

This doesn't make sense as long as LLMs are non-deterministic. The prompt could be perfect, but there's no way to guarantee that the LLM will turn it into a reasonable implementation.

With compilers, I don't need to crack open a hex editor on every build to check the assembly. The compiler is deterministic and well-understood, not to mention well-tested. Even if there's a bug in it, the bug will be deterministic and debuggable. LLMs are neither.

ozim 1 day ago

The fun part is that specs already are non-deterministic.

If you spend time writing out requirements in English in a way that cannot be misinterpreted in any way, you end up with a programming language.

jimbo808 1 day ago

Humans don't make mistakes nearly as often, the mistakes they do make are far more predictable (and easier to spot in code review), and they don't tend to make the kinds of catastrophic mistakes that could sink a business. LLMs, on the other hand, tend to cause codebases to rapidly deteriorate, since even very disciplined reviewers can miss the kinds of strange and unpredictable stuff an LLM will do. Redundant code isn't evident in a diff, and neither are things like tautological tests, or useless tests that mock everything and only actually test the mocks. They'll also write a bunch of duplicated code, because they aggressively avoid code re-use unless you are very specific.

The real problem is just that they don't have brains, and can't think. They generate text that is optimized to look the most right, but not to be the most right. That means they're deceptive right off the bat. When a human is wrong, it usually looks wrong. When an LLM is wrong, it's generating the most correct looking thing it possibly could while still being wrong, with no consideration for actual correctness. It has no idea what "correctness" even means, or any ideas at all, because it's a computer doing matmul.

They are text summarization/regurgitation, pattern-matching machines. They regurgitate summaries of things seen in their training data, and that training data was written by humans who can think. We just let ourselves get duped into believing the machine is where the thinking is coming from, and not the (likely uncompensated) author(s) whose work was regurgitated for you.

diavolodeejay 1 day ago

So… COBOL?

qcnguy 1 day ago

Not really, code even in high level languages is always lower level than English just for computer nonsense reasons. Example: "read a CSV file and add a column containing the multiple of the price and quantity columns".

That's about 20 words. Show me the programming language that can express that entire feature in 20 words. Even very English-like languages like Python or Kotlin might just about do it; if you're working in something like C++, then no.

In practice, this spec will expand to changes to your dependency lists (and therefore you must know what library is used for CSV parsing in your language, the AI knows this stuff better than you), then there's some file handling, error handling if the file doesn't exist, maybe some UI like flags or other configuration, working out what the column names are, writing the loop, saving it back out, writing unit tests. Any reasonable programmer will produce a very similar PR given this spec but the diff will be much larger than the spec.
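
For what it's worth, the happy path really is close to spec-sized in Python with pandas - a sketch, assuming the columns are literally named price and quantity, that ignores all the error handling, flags, and tests just described:

    import pandas as pd

    df = pd.read_csv("input.csv")               # parse the CSV
    df["total"] = df["price"] * df["quantity"]  # the new column
    df.to_csv("output.csv", index=False)        # write it back out

But the real PR, with the dependency change, missing-file handling, and unit tests, will still be several times the size of the spec.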

jlawson 1 day ago

Specs are ambiguous but not necessarily non-deterministic.

The same entity interpreting the spec in exactly the same way will resolve the ambiguities the same way each time.

Human and current AI interpretation of specs is a non-deterministic process. But if we wanted to build a deterministic AI, we could.

musebox35 1 day ago

"The prompt could be perfect, but there's no way to guarantee that the LLM will turn it into a reasonable implementation."

I think it is worse than that. The prompt, written in natural language, is by its very nature vague and incomplete, which is great if you are aiming for creative artistry. I am also really happy that we are able to search for dates using phrases like "get me something close to a weekend, but not on Tuesdays" on a booking website instead of picking dates from a dropdown box.

However, if natural language were the right tool for software requirements, software engineering would have been a solved problem long ago. We got rightfully excited about LLMs, but now we are trying to solve every problem with them. IMO, for requirements specification, the situation is similar to earlier efforts using formal systems and full verification, but at the exact opposite end. As with formal software verification, I expect this phase to end up as a partially failed experiment that will teach us new ways to think about software development. It will create real value in some domains and be totally abandoned in others. Interesting times...

dcre 1 day ago

“This doesn't make sense as long as LLMs are non-deterministic.”

I think this is a logical error. Non-determinism is orthogonal to probability of being correct. LLMs can remain non-deterministic while being made more and more reliable. I think “guarantee” is not a meaningful standard because a) I don’t think there can be such a thing as a perfect prompt, and b) humans do not meet that standard today.

physicsguy 1 day ago

> With compilers, I don't need to crack open a hex editor on every build to check the assembly.

The tooling is better than just cracking open the assembly but in some areas people do effectively do this, usually to check for vectorization of hot loops, since various things can mean a compiler fails to do it. I used to use Intel VTune to do this in the HPC scientific world.

weego 1 day ago

We also have to pretend that anyone has ever been any good at writing descriptive, detailed, clear and precise specs or documentation. That might be a skillset that appears in the workforce, but absolutely not in 2 years. A technical writer that deeply understands software engineering so they can prompt correctly but is happy not actually looking at code and just goes along with whatever the agent generates? I don't buy it.

This seems like a typical engineer forgets people aren't machines line of thinking.

lysecret 1 day ago

I agree this whole spec based approach is misguided. Code is the spec.

Ozzie_osman 1 day ago

> This doesn't make sense as long as LLMs are non-deterministic.

I think we will find ways around this. Because humans are also non-deterministic. So what do we do? We review our code, test it, etc. LLMs could do a lot more of that. Eg, they could maintain and run extensive testing, among other ways to validate that behavior matches the spec.

maltalex 23 hours ago

If you're reviewing the code, then you're no longer "opening python files with the same frequency that you open up a hex editor to read assembly".

FitchApps 1 day ago

This. Even with junior devs, implementation is always more or less deterministic (based on one's abilities/skills/aptitude). With AI models, you get totally different implementations even when specifically given clear directions via prompt.

phil294 1 day ago

Neither are humans, so this argument doesn't really stand.

maltalex 23 hours ago

> Neither are humans, so this argument doesn't really stand.

Even when we give a spec to a human and tell them to implement it, we scrutinize and test the code they produce. We don't just hand over a spec and blindly accept the result. And that's despite the fact that humans have a lot more common sense, and the ability to ask questions when a requirement is ambiguous.

aabhay 1 day ago

Not only that but they’re lossy. A hex representation is strictly more information as long as comments are included or generated.

beefnugs 1 day ago

sounds like a good nudge to make tests better

oblio 1 day ago

If the tests are written by the AI, who watches the watchers? :-)

apwell23 1 day ago

but ppl writing that already knew that. so why are they writing this kind of stuff. what the fuck is even going on?

fragmede 1 day ago

> there's no way to guarantee that the LLM will turn it into a reasonable implementation.

There's also no way to guarantee that you're not going to get hit by a meteor strike tomorrow. It doesn't have to be provably deterministic at a computer science PhD level for people without PhDs to say eh, it's fine. Okay, it's not deterministic. What does that mean in practice? Given the same spec.md file, at the layer of abstraction where we're no longer writing code by hand, who cares, because of a lack of determinism, if the variable for the filename object is called filename or fname or file or name, as long as the code is doing something reasonable? If it works, if it passes tests, if we presume that the stochastic parrot is going to parrot out its training data sufficiently closely each time, why is it important?

As far as compilers being deterministic, there's a fascinating detail we ran into with Ksplice. They're not. They're only deterministic enough that we trust them to be fine. There was this bug we kept tripping, back in roughly 2006, where GCC would swap registers used for a variable, resulting in the Ksplice patch being larger than it had to be, to include handling the register swap as well. The bug has since been fixed, exposing the details of why it was choosing different registers, but unfortunately I don't remember enough details about it. So don't believe me if you don't want to, but the point is, we trust the C compiler that, given a function that takes in variables a, b, c, d, those variables will be mapped to r0, r1, r2, or r3. We don't actually care what order that mapping goes in, so long as it works.

So the leap, that some have made, and others have not, is that LLMs aren't going to randomly flip out and delete all your data. Which is funny, because that's actually happened on Replit. Despite that, despite the fact that LLMs still hallucinate total bullshit and go off the rails, some people trust LLMs enough to convert a spec to working code. Personally, I think we're not there yet and won't be while GPU time isn't free. (Arguably it is already because anybody can just start typing into chat.com, but that's propped up by VC funding. That isn't infinite, so we'll have to see where we're at in a couple of years.)

That addresses the determinism part. The other part that was raised is debuggability. Again, I don't think we're at a place where we can get rid of generated code any time soon, and as long as code is being generated, then we can debug it using traditional techniques. As far as debugging LLMs themselves, it's not zero. It's not mainstream yet, but it's an active area of research. We can abliterate models and fine-tune them (or whatever) to answer "how do you make cocaine", counter to their training. So they're not total black boxes.

Thus, even if traditional software development dies off, the new field is LLM creation and editing. As with new technologies, porn picks it up first. Llama and other downloadable models (they're not open source https://www.downloadableisnotopensource.org/ ) have been fine-tuned or whatever to generate adult content, despite being trained not to. So that's new jobs being created in a new field.

whilenot-dev 1 day ago

What does "it works" mean to you? For me, that'd be deterministic behavior, and your description about brute forcing LLMs to the desired result through a feedback loop with tests is just that. I mean, sure, if something gives the same result 100% of the time, or 90% of the time, or fuck it, even 80-50% of the time, that's all deterministic in the end, isn't it?

The interesting thing is, for something to be deterministic that thing doesn't need to be defined first. I'd guess we can get an understanding of day/night-cycles without understanding anything about the solar system. In that same vein your Ksplice GCC bug doesn't sound nondeterministic. What did you choose to do in the case of the observed Ksplice behavior? Did you debug and help with the patch, or did you just pick another compiler? It seems that somebody did the investigation to bring GCC closer to the "same result 100% of the time", and I truly have to thank that person.

But here we are and LLMs and the "90% of the time"-approach are praised as the next abstraction in programming, and I just don't get it. The feedback loop is hailed as the new runtime, whereas it should be build time only. LLMs take advantage of the solid foundations we built and provide an NLP-interface on top - to produce code, and do that fast. That's not abstraction in the sense of programming, like Assembly/C++/Blender, but rather abstraction in the sense of distance, like PC/Network/Cloud. We use these "abstractions in distance" to widen reach, design impact and shift responsibilities.

Swiffy0 1 day ago

Having been writing a lot of AWS CDK/IaC code lately, I'm looking at this as the "spec" being the infrastructure code and the implementation being the deployed services based on that infrastructure code.

It would be an absolute clown show if AWS could take the same infrastructure code and perform the deployment of the services somehow differently each time... so non-deterministically. There's already all kinds of external variables other than the infra code which can affect the deployment, such as existing deployed services which sometimes need to be (manually) destroyed for the new deployment to succeed.

faxmeyourcode 2 days ago

I've used this pattern on two separate codebases. One was a ~500k LOC Apache Airflow monolith repo (I am a data engineer). The other was a greenfield Flutter side project (I don't know Dart, Flutter, or really much of anything regarding mobile development).

All I know is that it works. On the greenfield project the code is simple enough to mostly just run `/create_plan` and skip research altogether. You still get the benefit of the agents and everything.

The key is really truly reviewing the documents that the AI spits out. Ask yourself if it covered the edge cases that you're worried about or if it truly picked the right tech for the job. For instance, did it break out of your sqlite pattern and suggest using postgres or something like that. These are very simple checks that you can spot in an instant. Usually chatting with the agent after the plan is created is enough to REPL-edit the plan directly with claude code while it's got it all in context.

At my day job I've got to use github copilot, so I had to tweak the prompts a bit, but the intentional compaction between steps still happens, just not quite as efficiently because copilot doesn't support sub-agents in the same way as claude code. However, I am still able to keep productivity up.

-------

A personal aside.

Immediately before AI assisted coding really took off, I started to feel really depressed that my job was turning into a really boring thing for me. Everything just felt like such a chore. The death by a million paper cuts is real in a large codebase with the interplay and idiosyncrasies of multiple repos, teams, personalities, etc. The main benefit of AI assisted coding for me personally seems to be smoothing over those paper cuts.

I derive pleasure from building things that work. Every little thing that held up that ultimate goal was sucking the pleasure out of the activity that I spent most of my day trying to do. I am much happier now having impressed myself with what I can build if I stick to it.

dhorthy 1 day ago

I appreciate the share. Yes, as I said, it was pretty dang uncomfortable to transition to this new way of working, but now that it's settled we're never going back.

grbsh 1 day ago

The fundamental frustration most engineers have with AI coding is that they are used to the act of _writing_ code being expensive, and the accumulation of _understanding_ happening for free during the former. AI makes the code free, but the understanding part is just as expensive as it always was (although, maybe the 'research' technique can help here).

But let's assume you're much better than average at understanding code by reviewing it -- you have another frustrating experience to get through with AI. Pre-AI, let's say 4 days of the week are spent writing new code, while 1 day is spent fixing unforeseen issues (perhaps from incorrect assumptions) that came up after production integration or showing things to real users. Post-AI, someone might be able to write those 4 days' worth of code in 1 day, but making decisions about unexpected issues after integration doesn't get compressed -- that still takes 1 day.

So post-AI, your time switches almost entirely from the fun, creative act of writing code to the more frustrating experience of figuring out what's wrong with a lot of code that is almost correct. But you're way ahead -- you've tested your assumptions much faster, but unfortunately that means nearly all of your time will now be spent in a state of feeling dumb and trying to figure out why your assumptions are wrong. If your assumptions were right, you'd just move forward without noticing.

iambateman 2 days ago

I built a package which I use for large codebase work[0].

It starts with /feature, and takes a description. Then it analyzes the codebase and asks questions.

Once I’ve answered the questions, it writes a plan in markdown. There will be 8-10 markdown files with descriptions of what it wants to do and full code samples.

Then it does a “code critic” step where it looks for errors. Importantly, this code critic is wrong about 60% of the time. I review its critique and erase a bunch of dumb issues it’s invented.

By that point, I have a concise folder of changes along with my original description, and it’s been checked over. Then all I do is say “go” to Claude Code and it’s off to the races doing each specific task.

This helps it keep from going off the rails, and I’m usually confident that the changes it made were the changes I wanted.

I use this workflow a few times per day for all the bigger tasks and then use regular Claude code when I can be pretty specific about what I want done. It’s proven to be a pretty efficient workflow.

[0] GitHub.com/iambateman/speedrun
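
If you haven't built one of these: Claude Code custom slash commands are just markdown prompt files under .claude/commands/, so /feature boils down to something roughly like this (an illustrative sketch, not the actual file from the package):

    # .claude/commands/feature.md
    Plan the following feature: $ARGUMENTS

    1. Analyze the codebase and ask me clarifying questions before writing anything.
    2. Write the plan as 8-10 markdown files with descriptions and full code samples.
    3. Run a "code critic" pass over the plan and list suspected errors for my review.
    4. Do not touch any source files until I say "go".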

scuff3d 1 day ago

I will never understand why anyone wants to go through all this. I don't believe for a second this is more productive than regular coding with a little help from the LLM.

KaiMagnus 1 day ago

I got access to Kiro from Amazon this week and they’re doing something similar. First a requirements document is written based on your prompt, then a design document and finally a task list.

At first I thought that was pretty compelling, since it includes more edge cases and examples that you otherwise miss.

In the end, all that planning still resulted in a lot of pretty mediocre code that I ended up throwing away most of the time.

Maybe there is a learning curve and I need to tweak the requirements more tho.

For me personally, the most successful approach has been a fast iteration loop with small and focused problems. Being able to generate prototypes based on your actual code and explore different solutions has been very productive. Interestingly, I have a similar workflow where I use Copilot in ask mode for exploration before switching to agent mode for implementation. It sounds similar to Kiro, but somehow it's more successful.

Anyways, trying to generate lots of code at once has almost always been a disaster, and even the most detailed prompt doesn't really help much. I'd love to see what the code and projects of people claiming to run more than 5 LLMs concurrently look like, because with the tools I'm using, that would be a mess pretty fast.

ByteDrifter 1 day ago

Maybe the real question isn’t whether AI is useful, but whether we’ve designed workflows that let humans and AI collaborate effectively.

sneilan1 1 day ago

It’s not necessarily faster to do this for a single task. But it’s faster when you can do 2-3 tasks at the same time. Agentic coding increases throughput.

procaryote 1 day ago

I've always assumed it is because they can't do the regular coding themselves. If you compare spending months on trying to shake a coding agent into not exploding too much with spending years on learning to code, the effort makes more sense

mattigames 1 day ago

There is a chunk of devs using AI not because they believe it makes them more productive in the present, but because it might do so in the near future thanks to advances in AI tech/models. And some do it because they think their bosses might require them to work this way at some point, so they want to show preparedness and give the impression of being up to date with how the field evolves, even if in the end it turns out it doesn't speed things up that much.

the_duke 1 day ago

It absolutely can be, by a huge margin.

You spend a few minutes generating a spec, then agents go off and do their coding, often lasting 10-30 minutes, including running and fixing lints, adding and running tests, ...

Then you come back and review.

But you had 10 of these running at the same time!

You become a manager of AI agents.

For many, this will be a shitty way to spend their time.... But it is very likely the future of this profession.

eddywebs 1 day ago

The biggest challenge I've found with LLMs on a large codebase is that they make the same mistakes again and again. How do you keep track of architecture decisions in the context of every task on a large codebase?

tom_m 1 day ago

Very very clear, unambiguous, prompts and agent rules. Use strong language like "must" and "critical" and "never" etc. I would also try working on smaller sections of a large codebase at a time too if things are too inaccurate.
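
For example, rules along these lines (illustrative only, not from any particular project):

    # CLAUDE.md
    - You MUST run the full test suite after every change and fix failures before moving on.
    - NEVER edit files under generated/ - they are build artifacts.
    - CRITICAL: all database access goes through app/db/queries.py; never write raw SQL in handlers.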

The AI coding tools are going to be looking at other files in the project to help with context. Ambiguity is the death of AI effectiveness. You have to keep things clear and so that may require addressing smaller sections at a time. Unless you can really configure the tools in ways to isolate things.

This is why I like tools that have a lot of control and are transparent. If you ask a tool what the full system and user prompt is and it doesn't tell you? Run away from that tool as fast as you can.

You need to have introspection here. You have to be able to see what causes a behavior you don't want and be able to correct it. Any tool that takes that away from you is one that won't work.

athrowaway3z 1 day ago

`opencode` will read any amount of `!cmd` output.

I start my sessions with something like `!cat ./docs/*` and I can start asking questions. Make sure you regularly ask it to point out any inconsistencies or ambiguity in the docs.

loandbehold 1 day ago

Whenever I see Claude Code make the same mistake multiple times, I add instructions to claude.md to avoid it in the future.

hyperadvanced 1 day ago

In some sense “the same mistakes again and again” is either a prompting problem or a “you” problem insofar as your expectations differ from the machine overlords.

j45 1 day ago

This looks very cool.

I see it has a pseudo code step, was it helpful at all to try to define a workflow, process or procedure beforehand?

I've also heard that keeping each file down to 100 lines is critical before connecting them. Noticed the same but haven't tried it in depth.

dhorthy 1 day ago

File size matters if you don’t have strategically placed “read the entire file” instructions for certain parts of the workflow (we do)

ghm2199 2 days ago

This article is like a bookmark in time of exactly where I gave up (in July) managing context in Claude Code.

I made specs for every part of the code in a separate folder and that had in it logs on every feature I worked on. It was an API server in python with many services like accounts, notifications, subscriptions etc.

It got to the point where managing context became extremely challenging. Claude would not be able to determine the business logic properly, and it can get complex: e.g., a simple RBAC system with accounts and profiles, plus a junction table joining accounts, profiles, and roles. In the end, what kind of worked was giving it UML diagrams of the relationships, with examples, to make it understand and behave better.

dhorthy 2 days ago

i think that was one of the key reasons we built research_codebase.md first - the number one concern is

"what happens if we end up owning this codebase but don't know how it works / don't know how to steer a model on how to make progress"

There are two common problems w/ primarily-AI-written code

1. Unfamiliar codebase -> research lets you get up to speed quickly on flows and functionality

2. Giant PR Reviews Suck -> plans give you ordered context on what's changing and why

Mitchell has praised ampcode for the thread sharing, another good solution to #2 - https://x.com/mitchellh/status/1963277478795026484

svieira 1 day ago

> the number one concern "what happens if we end up owning this codebase but ... don't know how to steer a model on how to make progress"

> Research lets you get up to speed quickly on flows and functionality

This is the _je ne sais quoi_ that people who are comfortable with AI have made peace with and those who are not have not. If you don't know what the code base does or how to make progress, you are effectively trusting the system that built the thing you don't understand to understand the thing and teach you. And then from that understanding you're going to direct the teacher to make changes to the system it taught you to understand. Which suggests a certain _je ne sais quoi_ about human intelligence that isn't present in the system, but which would be necessary to create an understanding of the thing under consideration. Which leaves your understanding questionable, because it was sourced from something that _lacks_ that _je ne sais quoi_. But the timescale of failure here is "lifetimes". Of features, of codebases, of persons.

gloosx 2 days ago

It's strange that the author is bragging that this 35K LOC was researched and implemented in 7 hours, but there are 40 commits spanning 7 days. Was it 1 hour per day or what?

Also quite funny that one of the latest commits is "ignore some tests" :D

dhorthy 2 days ago

if you read further down, I acknowledge this

> While the cancelation PR required a little more love to take things over the line, we got incredible progress in just a day.

daxfohl 2 days ago

FWIW I think your style is better and more honest than most advocates. But I'd really love to see some examples of things that completely failed. Because there have to be some, right? But you hardly ever see an article from an AI advocate about something that failed, nor from an AI skeptic about something that succeeded. Yet I think these would be the types of things that people would truly learn from. But maybe it's not in anyone's financial interest to cross borders like that, for those who are heavily vested in the ecosystem.

gloosx 2 days ago

You do acknowledge this, but that doesn't make the "spent 7 hours and shipped 35k LOC" claim factually correct. It sure sounds good, but it's disingenuous, because shipping != making progress. Shipping code means deploying it to the end users.

potamic 2 days ago

There are a lot of people declaring this, proclaiming that about working with AI, but nobody presents the details. Talk is cheap, show me the prompts. What will be useful is to check in all the prompts along with code. Every commit generated by AI should include a prompt log recording all the prompts that led to the change. One should be able to walkthrough the prompt log just as they may go through the commit log and observe firsthand how the code was developed.

an0malous 1 day ago

I agree, the rare times when someone has shared prompts and AI generated code I have not been impressed at all. It very quickly accrues technical debt and lacks organization. I suspect the people who say it’s amazing are like data engineers who are used to putting everything in one script file, React devs where the patterns and organization are well defined and constrained, or people who don’t code and don’t even understand the issues in their generated code yet.

troupo 1 day ago

This blog post of mine will be evergreen: https://dmitriid.com/everything-around-llms-is-still-magical...

rhetocj23 1 day ago

Moreover, show me the money!!

suninsight 1 day ago

So I can attest to the fact that all of the things proposed in this article actually work. And you can try it out yourself on any arbitrary code base within a few minutes.

This is how: I work for a company called NonBioS.ai - we already implement most of what is mentioned in this article. Actually, we implemented this about 6 months back, and what we have now is an advanced version of the same flow. Every user in NonBioS gets a full Linux VM with root access. You can ask NonBioS to pull in your source code and ask it to implement any feature. The context is all managed automatically through a process we call "Strategic Forgetting", which is in some ways an advanced version of the logic in this article.

Strategic Forgetting handles the context automatically - think of it like automatic compaction. It evaluates information retention based on several key factors:

1. Relevance Scoring: We assess how directly information contributes to the current objective vs. being tangential noise

2. Temporal Decay: Information gets weighted by recency and frequency of use - rarely accessed context naturally fades

3. Retrievability: If data can be easily reconstructed from system state or documentation, it's a candidate for pruning

4. Source Priority: User-provided context gets higher retention weight than inferred or generated content

The algorithm runs continuously during coding sessions, creating a dynamic "working memory" that stays lean and focused. Think of it like how you naturally filter out background conversations to focus on what matters.
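
To make that concrete, here is a toy sketch of how such a retention score might combine the four factors - the names and weights here are illustrative only, not our actual implementation:

    import time

    def retention_score(item, now=None):
        # item: {"relevance": 0..1, "last_used": timestamp,
        #        "reconstructible": bool, "source": "user" | "inferred", "tokens": int}
        now = now or time.time()
        age_hours = (now - item["last_used"]) / 3600
        decay = 0.5 ** (age_hours / 24)  # temporal decay: halve weight per idle day
        retrievability = 0.5 if item["reconstructible"] else 1.0  # easy to rebuild -> prune sooner
        priority = 1.5 if item["source"] == "user" else 1.0  # user-provided context sticks around
        return item["relevance"] * decay * retrievability * priority

    def compact(items, budget_tokens):
        # keep the highest-scoring items that fit within the context budget
        kept, used = [], 0
        for it in sorted(items, key=retention_score, reverse=True):
            if used + it["tokens"] <= budget_tokens:
                kept.append(it)
                used += it["tokens"]
        return kept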

And we have tried it out in very complex code bases and it works pretty well. Once you know how well it works, you will not have a hard time believing that the days of using IDEs to edit code are probably numbered.

Also - you can try it out for yourself very quickly at NonBioS.ai. We have a very generous free tier that will be enough for the biggest code base you can throw at NonBioS. However, big feature implementations or larger refactorings might take longer than what the free tier affords.

varjag 2 days ago

> A few weeks later, @hellovai and I paired on shipping 35k LOC to BAML, adding cancellation support and WASM compilation - features the team estimated would take a senior engineer 3-5 days each.

Sorry, had they effectively estimated that an engineer should produce 4-6KLOC per day (that's before genAI)?

henry2023 2 days ago

The missing detail here is that the senior engineer would probably have shipped it in 2k lines of code

gigel82 1 day ago

Or 1k lines of functional, readable, testable, commented code... but who cares, we'll abstract it all away soon enough.

rsynnott 1 day ago

And note that, as admitted elsewhere, it _actually_ took a week: https://news.ycombinator.com/item?id=45351546

hellovai 2 days ago

if you haven't tried the research -> plan -> implementation approach here, you are missing out on how good LLMs are. it completely changed my perspective.

the key part was really just explicitly thinking about different levels of abstraction at different levels of vibecoding. I was doing it before, but not explicitly in discrete steps, and that was where I got into messes. The prior approach made checkpointing / reverting very difficult.

When I think of everything in phases, I do similar stuff w/ my git commits at "phase" levels, which makes design decisions easier to make.

I also do spend ~4-5 hours cleaning up the code at the very very end once everything works. But it's still way faster than writing hard features myself.

0xblacklight 2 days ago

tbh I think the thing that's making this new approach so hard to adopt for many people is the word "vibecoding"

Like yes vibecoding in the lovable-esque "give me an app that does XYZ" manner is obviously ridiculous and wrong, and will result in slop. Building any serious app based on "vibes" is stupid.

But if you're doing this right, you are not "coding" in any traditional sense of the word, and you are *definitely* not relying on vibes

Maybe we need a new word

simonw 2 days ago

I'm sticking to the original definition of "vibe coding", which is AI-generated code that you don't review.

If you're properly reviewing the code, you're programming.

The challenge is finding a good term for code that's responsibly written with AI assistance. I've been calling it "AI-assisted programming" but that's WAY too long.

dhorthy 2 days ago

alex reibman proposed hyperengineering

i've also heard "aura coding", "spec-driven development" and a bunch of others I don't love.

but we def need a new word cause vibe coding aint it

chickensong 2 days ago

AI is the new pAIr programming.

giancarlostoro 2 days ago

> but not explicitly in discrete steps and that was where i got into messes.

I've said this repeatedly: I mostly use it for boilerplate code, or when I'm having a brain fart of sorts. I still love to solve things for myself, but AI can take me from "I know I want x, y, z" to "oh look, I got to x, y, z" in under 30 minutes, when it could have taken hours. For side projects this is fine.

I think if you do it piecemeal it should almost always be fine. When you try to tell it to do too much, you and the model both don't consider edge cases (ask it for those too!) and are more prone to a rude awakening eventually.

merlincorey 2 days ago

It seems we're still collectively trying to figure out the boundaries of "delegation" versus "abstraction" which I personally don't think are the same thing, though they are certainly related and if you squint a bit you can easily argue for one or the other in many situations.

> We've gotten claude code to handle 300k LOC Rust codebases, ship a week's worth of work in a day, and maintain code quality that passes expert review.

This seems more like delegation just like if one delegated a coding task to another engineer and reviewed it.

> That in two years, you'll be opening python files in your IDE with about the same frequency that, today, you might open up a hex editor to read assembly (which, for most of us, is never).

This seems more like abstraction just like if one considers Python a sort of higher level layer above C and C a higher level layer above Assembly, except now the language is English.

Can it really be both?

dhorthy 2 days ago

I would say its much more about abstraction and the leverage abstractions give you.

You'll also note that while I talk about "spec driven development", most of the tactical stuff we've proven out is downstream of having a good spec.

But in the end a good spec is probably "the right abstraction" and most of these techniques fall out as implementation details. But to paraphrase sandy metz - better to stay in the details than to accidentally build against the wrong abstraction (https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction)

I don't think delegation is right - when vaibhav and I shipped a week's worth of work in a day, we were DEEPLY engaged with the work, we didn't step away from the desk, we were constantly resteering and probably sent 50+ user messages that day, in addition to some point-edits to markdown files along the way.

sarchertech 1 day ago

It’s definitely not abstraction. You don’t watch a compiler output machine code and constantly “resteer” it.

sothatsit 1 day ago

I continue to write codebases in programming languages, not English. LLM agents just help me manipulate that code. They are tools that do work for me. That is delegation, not abstraction.

To write and review a good spec, you also need to understand your codebase. How are you going to do that without reading the code? We are not getting abstracted away from our codebases.

For it to be an abstraction, we would need our coding agents to not only write all of our code, they would also need to explain it all to us. I am very skeptical that this is how developers will work in the near future. Software development would become increasingly unreliable as we won't even understand what our codebases actually do. We would just interact with a squishy lossy English layer.

mercurialsolo 1 day ago

Good pointers on decomposing and looking at implementation or fixing in chunks.

1. Break down the feature or bug report into a technical implementation spec. Add in CoT for the splits.

2. Verify the implementation spec. Feed reviews back to your original agent that created the spec. Edit, merge, integrate feedback.

3. Transform the implementation spec into an implementation plan - logically split into modules, look at the dependency chain.

4. Build, test and integrate continuously with coding agents.

5. Squash the commits if needed into a single one for the whole feature.

Generally has worked well as a process when working on a complex feature. You can add in HITL at each stage if you need more verification.

For larger codebases always maintain an ARCHITECTURE.md and for larger modules a DESIGN.md
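
A skeleton of the kind of ARCHITECTURE.md that works for this (illustrative; adapt to your project):

    # ARCHITECTURE.md
    ## Overview
    One paragraph on what the system does and for whom.
    ## Module map
    - api/    HTTP handlers only, no business logic
    - core/   domain logic, pure functions where possible
    - store/  the only code allowed to touch the database
    ## Invariants
    - All writes go through core/commands.
    ## Decision log
    - Chose SQLite over Postgres: single-node deploys only.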

spike021 1 day ago

I admittedly haven't tried this approach at work yet but at home while working on a side project, I'll make a new feature branch and give CLAUDE a prompt about what the feature is with as much detail as possible. i then have it generate a CLAUDE-feature.md and place an implementation plan along with any supporting information (things we have access to in the codebase, etc.).

i'll then prompt it for more if my reading of the file finds anything missing, or any confusing instructions or details.

usually in-between larger prompts I'll do a full /reset rather than /compact, have it reference the doc, and then iterate some more.

once it's time to try implementing I do one more /reset, then go phase by phase of the plan in increments /reset-ing between each and having it update the doc with its progress.

generally works well enough but not sure i'd trust it at work.

dhorthy 1 day ago

My advice - never use compact; always stash your context to markdown or a wordy git commit message, and then clear context

You want control over and visibility into what’s being compacted, and /compact doesn’t do great on either
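
The stash can be as simple as a commit message the next session can recover with git log - an illustrative example (all names made up):

    git commit -am "wip: cancellation support, scheduler half-plumbed

    Context for next session:
    - done: CancellationToken type; run_loop checks it each tick
    - next: propagate the token into the WASM bindings, then add tests
    - gotcha: run_loop holds the scheduler lock, do not await while holding it"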

afiodorov 2 days ago

> It was uncomfortable at first. I had to learn to let go of reading every line of PR code. I still read the tests pretty carefully, but the specs became our source of truth for what was being built and why.

This is exactly right. Our role is shifting from writing implementation details to defining and verifying behavior.

I recently needed to add recursive uploads to a complex S3-to-SFTP Python operator that had a dozen path manipulation flags. My process was:

* Extract the existing behavior into a clear spec (i.e., get the unit tests passing).

* Expand that spec to cover the new recursive functionality.

* Hand the problem and the tests to a coding agent.

I quickly realized I didn't need to understand the old code at all. My entire focus was on whether the new code was faithful to the spec. This is the future: our value will be in demonstrating correctness through verification, while the code itself becomes an implementation detail handled by an agent.
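
Concretely, the "spec" from step 1 ends up as tests like these - a pytest-style sketch where the operator name, flags, and fixtures are all hypothetical, not the real code:

    from myproject.transfers import S3ToSFTPOperator  # hypothetical import

    def test_flat_upload_keeps_existing_layout(s3_stub, sftp_stub):
        # step 1: pin down the current behavior before changing anything
        S3ToSFTPOperator(prefix="exports/", dest="/inbox").execute()
        assert sftp_stub.listdir("/inbox") == ["report.csv"]

    def test_recursive_upload_mirrors_subdirectories(s3_stub, sftp_stub):
        # step 2: extend the spec to cover the new recursive behavior
        s3_stub.put("exports/2024/jan.csv", b"...")
        S3ToSFTPOperator(prefix="exports/", dest="/inbox", recursive=True).execute()
        assert "/inbox/2024/jan.csv" in sftp_stub.paths()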

lunarcave 2 days ago

> Our role is shifting from writing implementation details to defining and verifying behavior.

I could argue that our main job was always that - defining and verifying behavior. As in, it was a large part of the job. Time spent on writing implementation details has always been on a downward trend via higher-level languages, compilers and other abstractions.

PUSH_AX 1 day ago

Tell that to all the engineers that want to argue over minutiae for days in a PR

nine_k 2 days ago

> My entire focus was on whether the new code was faithful to the spec

This may be true, but see Postel's Law, that says that the observed behavior of a heavily-used system becomes its public interface and specification, with all its quirks and implementation errors. It may be important to keep testing that the clients using the code are also faithful to the spec, and detect and handle discrepancies.

patrickmay 2 days ago

I believe that's Hyrum's Law.

cm2012 2 days ago

Claude Plays Pokemon showed that too. AI is bad at deciding when something is "working" - it will go in circles forever. But an AI combined with a human to occasionally course correct is a powerful combo.

wagwang 1 day ago

If you actually define every inch of behavior, you are pretty much writing code. If there's any line in the PR that you can't instantly grok the meaning of, you probably haven't defined the full breadth of the behavior.

iLoveOncall 2 days ago

> Within an hour or so, I had a PR fixing a bug which was approved by the maintainer the next morning

An hour for 14 lines of code. Not sure how this shows any productivity gain from AI. It's clear that it's not the code writing that is the bottleneck in a task like this.

Looking at the "30K lines" features, the majority of the 30K lines are either auto-generated code (not by AI), or documentation. One of them is also a PoC and not merged...

mwigdahl 1 day ago

The author said he was not a Rust expert and had no prior familiarity with the codebase. An hour for a 14 line fix that works and is acceptable quality to merge is pretty good given those conditions.

fusslo 2 days ago

Maybe I am just misunderstanding. I probably am; seems like it happens more and more often these days

But.. I hate this. I hate the idea of learning to manage the machine's context to do work. This reads like a lecture in an MBA class about managing certain types of engineers, not like an engineering doc.

Never have I wanted to manage people. And never have I even considered my job would be to find the optimum path to the machine writing my code.

Maybe firmware is special (I write firmware)... I doubt it. We have a cursor subscription and are expected to use it on production codebases. Business leaders are pushing it HARD. To be a leader in my job, I don't need to know algorithms, design patterns, C, make, how to debug, how to work with memory mapped io, what wear leveling is, etc.. I need to know 'compaction' and 'context engineering'

I feel like a ship corker inspecting a riveted hull

dolebirchwood 2 days ago

Guess it boils down to personality, but I personally love it. I got into coding later in life, coming from a career that involved reading and writing voluminous amounts of text in English. I got into programming because I wanted to build web applications, not out of any love for the process of programming in and of itself. The less I have to think and write in code, the better. Much happier to be reading it and reviewing it than writing it myself.

skydhash 1 day ago

No one likes programming that much. That's like saying someone loves speaking English. You have an idea and you express it. Sometimes there's additional complexity that gets in the way (initializing the library, memory cleanup, ...), but I put those at the same level as proper greetings in a formal letter.

It also helps starting small, get something useful done and iterate by adding more features overtime (or keeping it small).

jnwatson 2 days ago

I've started to use agents on some very low-level code, and have middling results. For pure algorithmic stuff, it works great. But I asked it to write me some arm64 assembly and it failed miserably. It couldn't keep track of which registers were which.

jmkni 2 days ago

I imagine the LLM's have been trained on a lot less firmware code than say, HTML

qweiopqweiop 2 days ago

Honestly - if it's such a good technique it should be built into the tool itself. I think just waiting for the tools to mature a bit will mean you can ignore a lot of the "just do xyz" crap.

It's not at senior engineer level until it asks relevant questions about lacking context instead of blindly trying to solve problems IMO.

shafyy 2 days ago

> Heck even Amjad was on a lenny's podcast 9 months ago talking about how PMs use Replit agent to prototype new stuff and then they hand it off to engineers to implement for production.

Please kill me now

cube00 1 day ago

I got lectured this week that I wasn't working fast enough because the client had already vibe coded (a broken, non-functional prototype) in under an hour.

They saw the first screen assembled by Replit and figured everything they could see would work with some "small tweaks", which is where I was supposed to come into the picture.

They continued to lecture me about how the app would need Web Workers for maximum client side performance (explanations full of em-dashes so I knew they were pasting in AI slop at me) and it must all be browser based with no servers because "my prototype doesn't need a server"

Meanwhile their "prototype" had a broken Node.js backend running alongside the frontend listening on a TCP port.

When I asked about this backend, they knew nothing about it but assured me their prototype was all browser based with no "servers".

Needless to say I'm never taking on any work from that client again, one of the small joys of being a contractor.

shafyy 1 day ago

Sounds like hell

ath3nd 2 days ago

[dead]

jarek83 8 hours ago

Nice read, and I'm trying to follow and use your tools. But it just seems hard to make Claude Code follow the instructions - it always diverges into researching and proposing fixes, and searching for root causes, which the research_codebase prompt strictly says not to do.

Out of many tries I've had success only twice. Could you give some hints on how to use it?

koakuma-chan 2 days ago

Context has never been the bottleneck for me. AI just stops working when I reach certain things that AI doesn't know how to do.

jmkni 2 days ago

My problem is it keeps working, even when it reaches certain things it doesn't know how to do.

I've been experimenting with Github agents recently, they use GPT-5 to write loads of code, and even make sure it compiles and "runs" before ending the task.

Then you go and run it and it's just garbage, yeah it's technically building and running "something", but often it's not anything like what you asked for, and it's splurged out so much code you can't even fix it.

Then I go and write it myself like the old days.

koakuma-chan 2 days ago

I have the same experience with CC. It loves to comment out code, add a "fallback" implementation that returns mock data, and act like the thing works.

0xblacklight 2 days ago

> Context has never been the bottleneck for me. AI just stops working when I reach certain things that AI doesn't know how to do.

It's context all the way down. That just means you need to find and give it the context to enable it to figure out how to do the thing. Docs, manuals, whatever. Same stuff that you would use to enable a human that doesn't know how to do it to figure out how.

koakuma-chan 2 days ago

At that point it's easier to implement the thing yourself, and then let AI work with that.

lacy_tinpot 2 days ago

Specifically what did you have difficulty implementing where it "just stops working"?

koakuma-chan 2 days ago

Anything it has not been trained on. Try getting AI to use OpenAI's responses API. You will have to try very hard to convince it not to use the chat completions API.
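
For anyone who hasn't hit this, the two APIs look like this - a sketch current as of the openai Python SDK at the time of writing, so check the docs for your version:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # the newer Responses API
    resp = client.responses.create(model="gpt-4o", input="Say hello")
    print(resp.output_text)

    # the older Chat Completions API that models keep reaching for instead
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(chat.choices[0].message.content)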

nicklaf 2 days ago

In my limited experiments with Gemini: it stops working when presented with a program containing fundamental concurrency flaws. Ask it to resolve a race condition or deadlock and it will flail, eventually getting caught in a loop, suggesting the same unhelpful remedies over and over.

I imagine this has to do with concurrency requiring conceptual and logical reasoning, which LLMs are known to struggle with about as badly as they do with math and arithmetic. Now, it's possible that the right language to work with the LLM in these domains is not program code, but a spec language like TLA+. However, at that point, I'd probably spend less effort just writing the potentially tricky concurrent code myself.

saberience 1 day ago

I've had AI totally fail several times on Swift concurrency issues, i.e. threads deadlocking or similar issues. I've also had AI totally fail on memory usage issues in Swift. In both cases I've had to go back to reasoning over the bugs myself and debugging them by hand, fixing the code by hand.

wrs 2 days ago

1. Research -> Plan -> Implement

2. Write down the principles and assumptions behind the design and keep them current

In other words, the same thing successful human teams on complex projects do! Have we become so addicted to “attention-deficit agile” that this seems like a new technique?

Imagine, detailed specs, design documents, and RFC reviews are becoming the new hotness. Who would have thought??

dhorthy 2 days ago

yeah it's kinda funny how some bigger, more sophisticated eng orgs that would be called "slow and ineffective" by smaller teams are actually pretty dang well set up to leverage AI.

All because they have been forced to master technical communication at scale.

but the reason I wrote this (and maybe a side effect of the SF bubble) is MOST of the people I have talked to, from 3-person startups to 1000+ employee public companies, are in a state where this feels novel and valuable, not a foregone conclusion or something happening automatically

afro88 1 day ago

I use a similar pattern but without the subagents. I get good results with it. I review and hand edit "research" and plans. I follow up and hand edit code changes. It makes me faster, especially in unfamiliar codebases.

But the write up troubles me. If I'm reading correctly, he did 1 bugfix (approved and merged) and then 2 larger PRs (1 merged, 1 still in draft over a month later). That's an insanely small sample size to draw conclusions from.

How can you talk like you've just proven the workflow works "for brownfield codebases"? You proved it worked for 2/3 tasks in 2 codebases, one failure (we can't say it works until the code is shipped IMO).

HellDunkel 1 day ago

I am still sceptical of the ROI and the time I am supposed to sink into trying and learning these AI tools, which seem to be replacing each other every week.

madcocomo 1 day ago

For me the biggest difficulty is I find it hard to read unverifiable documentation. It's like dyslexia - if I can't connect the text content with runnable code, I feel lost in 5 minutes.

So with this approach of spending 3 hours on planning without verification in code, that's too hard for me.

I agree the context compaction sounds good. But I'm not sure if an md file is good enough to carry the info from research to plan and implementation. Personally I often find the context is too complex or the problem is too big. I just open a new session to resolve a smaller, more specific problem in source code, then test and review the source code.

jascha_eng 2 days ago

Except for ofc pushing their own product (humanlayer) and some very complex prompt template+agent setups that are probably overkill for most, the basics in this post about compaction and doing human review at the correct level are pretty good pointers. And giving a bit of a framework to think within is also neat

daxfohl 2 days ago

> And yeah sure, let's try to spend as many tokens as possible

It'd be nice if the article included the cost for each project. A 35k LOC change in a 350k codebase with a bunch of back and forth and context rewriting over 7 hours, would that be a regular subscription, max subscription, or would that not even cover it?

daxfohl 2 days ago

Oh, oops it says further down

> oh, and yeah, our team of three is averaging about $12k on opus per month

I'll have to admit, I was intrigued with the workflow at first. But emm, okay, yeah, I'll keep handwriting my open source contributions for a while.

CharlesW 2 days ago

From a cost perspective, you would definitely want a Claude Max subscription for this.

dhorthy 2 days ago

yes - correct. For the record, if spending raw tokens, the 2 prs to baml cost about $650.

but yes we switched off per-token this week because we ran out of anthropic credits, we're on max plan now

robertoallende 1 day ago

Thanks for writing such a detailed article... lots of very well-supported information.

I've been working on something I call Micromanaged Driven Development https://mmdd.dev and wrote about it at https://builder.aws.com/content/2y6nQgj1FVuaJIn9rFLThIslwaJ/...

I'm on a similar search, and I'm stoked to see that many people riding the wave of coding with AI are moving in this direction.

Lots of learning ahead.

malfist 2 days ago

This article bases its argument on the predicate that AI _at worst_ will increase developer productivity by 0-10%. But several studies have found that not to be true at all. AI can, and does, make some people less effective.

telliott1984 2 days ago

There's also the more insidious gap between perceived productivity and actual productivity. Doesn't help that nobody can agree on how to measure productivity even without AI.

simonw 2 days ago

"AI can, and does, make some people less effective"

So those people should either stop using it or learn to use it productively. We're not doomed to live in a world where programmers start using AI, lose productivity because of it and then stay in that less productive state.

bgwalter 2 days ago

If managers are convinced by stakeholders who relentlessly put out pro-"AI" blog posts, then a subset of programmers can be forced to at least pretend to use "AI".

They can be forced to write in their performance evaluation how much (not if, because they would be fired) "AI" has improved their productivity.

CharlesW 2 days ago

Both (1) "AI can, and does, make some people less effective" and (2) "the average productivity boost (~20%) is significant" (per Stanford's analysis) can be true.

The article at the link is about how to use AI effectively in complex codebases. It emphasizes that the techniques described are "not magic", and makes very reasonable claims.

dingnuts 2 days ago

the techniques described sound like just as much work, if not more, than just writing the code. the claimed output isn't even that great, it's comparable to the speed you would expect a skilled engineer to move at in a startup environment

dhorthy 2 days ago

definitely - the Stanford video has a slide about how many cases caused people to be even slower than without AI

keeda 1 day ago

According to the Stanford video the only cases (statistically speaking) where that happened was high-complexity tasks for legacy / low popularity languages, no? I would imagine that is a small minority of projects. Indeed, the video cites the overall productivity boost at 15 - 20% IIRC.

mcny 2 days ago

Question for discussion - what steps can I take as a human to set myself up for success where success is defined by AI made me faster, more efficient etc?

NaN years ago

undefined

NaN years ago

undefined

f59b3743 2 days ago

[flagged]

dingnuts 2 days ago

I have

kkpattern 1 day ago

I use a similar pattern. When asking AI to do a large implementation, I ask gemini-2.5-pro to write a very detailed overview implementation plan, then review it. Then I ask gemini-2.5-pro to split the plan into multiple stages and write a detailed implementation plan for each stage. Then I ask claude sonnet to read the overview plan and implement stage n. I've found that this is the only way to complete a major implementation with a relatively high success rate.
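
Roughly, the loop looks like this (a sketch only - ask() and split_stages() are hypothetical stand-ins for whatever model client and plan parsing you use):

    # Sketch of the staged-plan workflow. ask() and split_stages() are
    # hypothetical helpers, not real APIs.
    def staged_implementation(task: str) -> None:
        overview = ask("gemini-2.5-pro",
                       f"Write a very detailed overview implementation plan for: {task}")
        # review/edit `overview` by hand before continuing
        staged = ask("gemini-2.5-pro",
                     f"Split this plan into stages, with a detailed plan per stage:\n{overview}")
        for n, stage in enumerate(split_stages(staged), start=1):
            ask("claude-sonnet",
                f"Read the overview plan:\n{overview}\n\nNow implement stage {n}:\n{stage}")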

philipp-gayret 2 days ago

Can't agree with the formula for performance, on the "/ size" part. You can have a huge codebase, but if the complexity goes up with size then you are screwed. Wouldn't a huge but simple codebase be practical and fine for AI to deal with?

The hierarchy of leverage concept is great! Love it. (Can't say I like the claim that 1 bad line of CLAUDE.md is 100K lines of bad code; I've had some bad lines in my CLAUDE.md from time to time - I almost always let Claude write its own CLAUDE.md.)

dhorthy 2 days ago

i mean there's also the fact that claude code injects this system message into your claude.md which means that even if your claude.md sucks you will probably be okay:

<system-reminder> IMPORTANT: this context may or may not be relevant to your tasks. You should not respond to this context or otherwise consider it in your response unless it is highly relevant to your task. Most of the time, it is not relevant. </system-reminder>

lots of others have written about this so i won't go deep - it's a clear product decision. but if you don't know what's in your context window, you can't architect the balance between claude.md and /commands well.

klysm 1 day ago

Tasted like a sales pitch the whole way and what do ya know at the very end there it is

rs186 1 day ago

> Sean proposes that in the AI future, the specs will become the real code. That in two years, you'll be opening python files in your IDE with about the same frequency that, today, you might open up a hex editor to read assembly (which, for most of us, is never).

Only if AI code generation is correct 99.9% of the time and almost never hallucinates. We trust compilers and don't read assembly code because we know they're deterministic and the output can never be wrong (barring bugs and certain optimization issues, which are rare/one-time fixes). As long as the generated code is not doing what the original "code" (in this case, specs) says it should, humans need to go back and fix things themselves.

GoatInGrey 2 days ago

Hello, I noticed your privacy policy is a black page with text seemingly set to 1% or so opacity. Can you get the slopless AI to fix that when time permits?

- Mr. Snarky

procaryote 1 day ago

They wanted a transparent privacy policy

dhorthy 2 days ago

thank you for the feedback! themes are hard. Update going out now

ath_ray 1 day ago

I enjoyed the emphasis on optimising the context window itself. I think that's the most important bit.

An abstraction for this that seems promising to me for its completeness and size is a User Story paired with a research plan(?).

This works well for many kinds of applications and emphasizes shipping concrete business value for every unit of work.

I wrote about some of it here: https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/

I also think a lot of coding benchmarks and perhaps even RL environments are not accounting for the messy back and forth of real world software development, which is why there's always a gap between the promise and reality.

dhorthy 1 day ago

I have had a user story and a research plan and only realized deep in the implementation that a fundamental detail about how the code works was missing (specifically, that types and sdks are generated from the OpenAPI spec) - this missing detail meant the plan was wrong (I didn't read carefully enough) and the implementation was a mess

ath_ray 1 day ago

Yeah I agree. There's a lot more needed than just the User Story. One way I'm thinking about it is that the "core" is deliverable business value, and the "shells" are context required for fine-grained details. There will likely need to be a step to verify against the acceptance criteria.

I hope to back up this hypothesis with actual data and experiments!

tschellenbach 2 days ago

I wrote this blogpost on the same topic: https://getstream.io/blog/cursor-ai-large-projects/

It's super effective with the right guardrails and docs. It also works better on languages like Go instead of Python.

dhorthy 2 days ago

why do you think go is better than python (i have some thoughts but curious your take)

mholm 2 days ago

imo:

1. Go's spec and standard practices are more stable, in my experience. This means the training data is tighter and more likely to work.

2. Go's types give the llm more information on how to use something, versus the python model.

3. Python has been an entry-level accessible language for a long time. This means a lot of the code in the training set is by amateurs. Go, ime, is never someone's first language. So you effectively only get code from someone who already has other programming experience.

4. Go doesn't do much 'weird' stuff. It's not hard to wrap your head around.

polishdude20 2 days ago

probably because it's typed?

marcuschong 1 day ago

I'm using GPT Pro and a VS extension that makes it easy to copy code from multiple files at once. I'm architecting the new version of our SaaS and using it to generate everything for me on the backend. It’s a huge help with modeling and coding, though it takes a lot of steering and correction. I think I’ll end up with a better result than if I did it alone, since it knows many patterns and details I’m not aware of (even simple things like RRULE). I’m designing this new project with a simpler, more vertical architecture in the hopes that Codex will be able to create new tables and services easily once the initial structure is ready and well documented.

Edit: typo.

dhorthy 1 day ago

yeah flat, simple code is good to start, but I find I'm still developing instincts around the right balance between "when to let duplicate code sprawl" vs. "when to be the DRY police".

CuriouslyC 1 day ago

Agents get really confused by duplicate code, so I advise DRYing out early and often.

cheschire 2 days ago

As an aside, this single markdown file as an entire GitHub repo is a unique approach to blog posts.

dhorthy 2 days ago

s/unique/lazy

cadamsdotcom 1 day ago

Re the meta of running multiple phases of "document expansion":

Research helps with complex implementations and for brownfield. But it isn't always needed - simple bugfixes can be one-shot!

So all AI workflows could be expressed with some number "N" of "document expansion phases":

N(0): vibe coding.

N(1): "write a spec then implement it while I watch".

N(2): "research then specify". At this point you start to get serious steerability.

What's N(3) and beyond? Strategy docs, industry research, monetization planning? Can AI do these too, all of it ending up in git? Interesting to muse on.

wobblyasp 1 day ago

Verifying behavior is great and all if you can actually exhaustively test the behaviors of your system. If you can't, then not knowing what your code is actually doing is going to set you back when things do go belly up.

ipnon 1 day ago

I love this comment because it makes perfect sense today, it made perfect sense 10 years ago, it would have made perfect sense in 1970. The principles of software engineering are not changed by the introduction of commodified machine intelligence.

dhorthy 1 day ago

i 100% agree - the folks who are best at ai-first engineering, they spend 3 days designing the test harness and then kick off an agent unsupervised for 2+ days and come back to working software.

not exactly valuable as guidance since programming languages are very easy to verify, but the https://ghuntley.com/ralph post is an example of whats possible on the very extreme end of the spectrum

jgilias 2 days ago

I used to do these things manually in Cursor. Then I had to take a few months off programming, and when I came back and updated Cursor I found out that it now automatically does ToDos, as well as keeps track of the context size and compresses it automatically by summarising the history when it reaches some threshold.

With this I find that most of the shenanigans of manual context window management - putting things in markdown files - are kind of unnecessary.

You still need to make it plan things, as well as guide the research it does to make sure it gets enough useful info into the context window, but in general it now seems to me like it does a really good job with preserving the information. This is with Sonnet 4

YMMV

ooopakaj 2 days ago

I’m not an expert in either language, but seeing a 20k LoC PR go up (linked in the article) would be an instant “lgtm, asshole” kind of review.

> I had to learn to let go of reading every line of PR code

Ah. And I’m over here struggling to get my teammates to read lines that aren’t in the PR.

Ah well, if this stuff works out it'll be commoditized like the author said and I'll catch up later. Hard to evaluate the article given the author's financial interest in this succeeding and my lack of domain expertise.

ActionHank 2 days ago

I dunno man, I usually close the PR when someone does that and tell them to make more atomic changes.

Would you trust a colleague who is overconfident, lies all the time, and then pushes a huge PR? I wouldn't.

ooopakaj 2 days ago

[dead]

Our_Benefactors 2 days ago

Closing someone else's PR is an actively hostile move. Opening a 20k LOC PR isn't great either, but going ahead and closing it is rude as hell.

CuriouslyC 2 days ago

If this stuff works out, you'll be behind the curve and people who were on the ball will have your job.

ooopakaj 1 day ago

[dead]

nobunaga 2 days ago

[flagged]

onscreencomb 1 day ago

I created an account to say this: RepoPrompt's 'Context Builder' feature helps a ton with scoping context before you touch any code.

It's kind of like if you could chat with Repomix or Gitingest so they only pull the most relevant parts of your codebase into a prompt for planning, etc

I'm a paying RepoPrompt user but not associated in any other way.

I've used it in conjunction with Codex, Claude Code, and any other code gen tool I have tried so far. It saves a lot of tokens and time (and headaches)

jb2403 2 days ago

It’s refreshing to read a full article this was written by a human. Content +++

lexoj 1 day ago

To minimise context bloat and provide more holistic context, as a first step I extract the important elements from the codebase via the AST, which the LLM then uses to determine which files to pull in full for a given task.

https://github.com/piqoni/vogte
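
For the curious, the heart of that first step can be as small as walking the AST for top-level symbols - a minimal Python sketch (my illustration, not the actual vogte code):

    # Emit a compact "skeleton" of each file: top-level functions and
    # classes only, so an LLM can decide which files to read in full.
    import ast
    from pathlib import Path

    def skeleton(path: Path) -> str:
        tree = ast.parse(path.read_text(), filename=str(path))
        lines = [f"# {path}"]
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                lines.append(f"def {node.name}({args}): ...")
            elif isinstance(node, ast.ClassDef):
                methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
                lines.append(f"class {node.name}:  # methods: {', '.join(methods)}")
        return "\n".join(lines)

    if __name__ == "__main__":
        for f in Path("src").rglob("*.py"):
            print(skeleton(f))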

iagooar 2 days ago

I am working on a project with ~200k LoC, entirely written with AI codegen.

These days I use Codex, with GPT-5-Codex + $200 Pro subscription. I code all day every day and haven't yet seen a single rate limiting issue.

We've come a long way. Just 3-4 months ago, LLMs would make a huge mess when faced with a large codebase. They had massive problems with files over 1k LoC (I know, files should never grow this big).

Until recently, I had to religiously provide the right context to the model to get good results. Codex does not need it anymore.

Heck, even UI seems to be a solved problem now with shadcn/ui + MCP.

My personal workflow when building bigger new features:

1. Describe the problem with lots of detail (often recording 20-60 mins of voice, then transcribing)

2. Prompt the model to create a PRD

3. CHECK the PRD, improve and enrich it - this can take hours

4. Actually have the AI agent generate the code and lots of tests

5. Use AI code review tools like CodeRabbit, or recently the /review function of Codex, iterate a few times

6. Check and verify manually - often there are still a few minor bugs in the implementation, but they can be fixed quickly - sometimes I just create a list of what I found and pass it back for fixing

With this workflow, I am getting extraordinary results.

AMA.

GoatInGrey 2 days ago

And I assume there's no actual product that customers are using that we could also demo? Because only 1 out of every 20 or so claims of awesomeness actually has a demoable product to back up those claims. The 1 who does usually has immediate problems. Like an invisible text box rendered over the submit button on their Contact Us page preventing an onClick event for that button.

In case it wasn't obvious, I have gone from rabidly bullish on AI to very bearish over the last 18 months. Because I haven't found one instance where AI is running the show and things aren't falling apart in not-always-obvious ways.

b_e_n_t_o_n 2 days ago

I'm kind of in the same boat although the timeline is more compressed. People claim they're more productive and that AI is capable of building large systems but I've yet to see any actual evidence of this. And the people who make these claims also seem to end up spending a ton of time prompting to the point where I wonder if it would have been faster for them to write the code manually, maybe with copilot's inline completions.

PhillBluebear 2 days ago

I created these demos using real data and real API connections with real databases, using 100% AI code, at http://betpredictor.io and https://pix2code.com; however, they barely work. At this point, I'm fixing 90% or more of every recommendation the AI gives. With your code base being this large, you can be guaranteed that the AI will not know what needs to be edited - but I haven't written one line of code by hand.

iagooar 2 days ago

It is true AI-generated UIs tend to be... Weird. In weird ways. Sometimes they are consistent and work as intended, but often times they reveal weird behaviors.

Or at least this was true until recently. GPT-5 is consistently delivering more coherent and better working UIs, provided I use it with shadcn or alternative component libraries.

So while you can generate a lot of code very fast, testing UX and UI is still manual work - at least for me.

I am pretty sure, AI should not run the show. It is a sophisticated tool, but it is not a show runner - not yet.

NicoJuicy 2 days ago

It's not the goal to have AI running the show. There's babysitting required, but it works pretty well tbh.

Note: using it for my B2B e-commerce

rhetocj23 1 day ago

Let me summarise your comment in a few words: show me the money. If nobody is buying anything, there is no incremental creation or augmentation of value in the economy beyond what already existed.

nzach 2 days ago

What is your opinion on the "right level of detail" we should use when creating technical documents the LLM will use to implement features?

When I started leaning heavily into LLMs I was using really detailed documentations. Not '20 minutes of voice recordings', but my specification documents would easily hit hundreds of lines even for simple features.

The result was decent, but extremely frustrating. Because it would often deliver 80% to 90% but the final 10% to 20% it could never get right.

So, what I naturally started doing was to care less about the details of the implementation and focus on the behavior I want. And this led me to simpler prompts, to the point that I don't feel the need to create a specification document anymore. I just use the plan mode in Claude Code and it is good enough for me.

One way that I started to think about this was that really specific documentation was almost as if I were 'over-fitting' my solution over other technically viable solutions the model could come up with. One example would be if I want to sort an array, I could either ask to "sort the array" or "merge sort the array". And by forcing a merge sort I may end up with a worse solution. Admittedly sort is a pretty simple and unlikely example, but this could happen with any topic. You may ask the model to use a hash-set when a better solution would be a bloom filter.

Given all that, do you think investing so much time into your prompts provides a good ROI compared with the alternative of not really min-maxing every single prompt?

iagooar 2 days ago

I 100% agree with the over-fitting part.

I tend to provide detailed PRDs, because even if the first couple of iterations of the coding agent are not perfect, it tends to be easier to get there (as opposed to starting with a vague prompt and moving on from there).

What I do sometimes is an experimental run - especially when I am stuck. I express my high-level vision, and just have the LLM code it to see what happens. I do not do it often, but it has sometimes helped me get out of being mentally stuck with some part of the application.

Funnily, I am facing this problem right now, and your post might just have reminded me that sometimes a quick experiment can be better than 2 days of overthinking the problem...

boredtofears 2 days ago

This mirrors my experience with AI so far - I've arrived at mostly using the plan and implement modes in Claude Code with complete but concise instructions about the behavior I want with maybe a few guide rails for the direction I'd like to see the implementation path take. Use cases and examples seem to work well.

I kind of assumed that claude code is doing most of the things described in this document under the hood (but I really have no idea).

apercu 2 days ago

If it's working for you I have to assume that you are an expert in the domain, know the stack inside and out and have built out non-AI automated testing in your deployment pipeline.

And yes Step 3 is what no one does. And that's not limited to AI. I built a 20+ year career mostly around step 3 (after being biomed UNIX/Network tech support, sysadmin and programmer for 6 years).

iagooar 2 days ago

Yes, I have over 2 decades of programming experience, 15 years working professionally. With my co-founder we built an entire B2B SaaS, coding everything from scratch, did product, support, marketing, sales...

Now I am building something new but in a very familiar domain. I agree my workflow would not work for your average "vibe coder".

jrmiii 2 days ago

> Heck, even UI seems to be a solved problem now with shadcn/ui + MCP.

I'm interested in hearing more about this - any resource you can point me at or do you mind elaborating a bit? TIA!

iagooar 2 days ago

Basically, you install the shadcn MCP server as described here: https://ui.shadcn.com/docs/mcp

If you use Codex, convert the config to toml:

    [mcp_servers.shadcn]
    command = "npx"
    args = ["shadcn@latest", "mcp"]

Now with the MCP server, you can instruct the coding agent to use shadcn. I often say "if you need to add new UI elements, make sure to use shadcn and the shadcn component registry to find the best fitting component"

The genius move is that the shadcn components are all based on Tailwind and get COPIED to your project. 95% of the time, the created UI views are just pixel-perfect, spacing is right, everything looks good enough. You can take it from here to personalize it more using the coding agent.

malyk 2 days ago

I've had success here by simply telling Codex which components to use. I initially imported all the shadcn components into my project and then I just say things like "Create a card component that includes a scrollview component and in the scrollview add a table with a dropdown component in the third column"...and Codex just knows how to add the shadcn components. This is without internet access turned on by the way.

giancarlostoro 2 days ago

> 1. Describe problem with lots of details (often recording 20-60 mins of voice, transcribe)

I just ask it to give me instructions for a coding agent and give it a small description of what I want to do; it looks at my code and details what I described as best as it can, and usually I have enough to let Junie (JetBrains AI) run on.

I can't personally justify $200 a month; I would need to see seriously strong results for that much. I use AI piecemeal because it has always been the best way to use it. I still want to understand the codebase. When things break, it's mostly on you to figure out what broke.

iagooar 2 days ago

A small description can be extrapolated to a large feature, but then you have to accept the AI filling in the gaps. Sometimes that is cool, often times it misses the mark. I do not always record that much, but if I have a vague idea that I want to verbalize, I use recording. Then I take the transcript and create the PRD based on it. Then I iterate a few more times on the PRD - which yields much better results.

LeafItAlone 1 day ago

>I am working on a project with ~200k LoC, entirely written with AI codegen.

I’d love to see the codebase if you can share. I’ve tried all of the popular models and tools, though I generally favor Claude Code with Opus and Sonnet, and my time working with them leads me to suspect that your ~200k LoC project could be solved in only about 10k LoC. Their solutions are unnecessarily complex (I’m guessing because they don’t “know” the problem in the way a human does) and that compounds over time. At this point, I would guess my most common instruction to these tools is to simplify the solution. Even when that’s part of the plan.

CuriouslyC 2 days ago

Don't want to come off as combative but if you code every day with codex you must not be pushing very hard, I can hit the weekly quota in <36 hours. The quota is real and if you're multi-piloting you will 100% hit it before the week is over.

wahnfrieden 2 days ago

On the Pro tier? Plus/Team is only suitable for evaluating the tool and occasional help

Btw one thing that helps conserve context/tokens is to use GPT 5 Pro to read entire files (it will read more than Codex will, though Codex is good at digging) and generate plans for Codex to execute. Tools like RepoPrompt help with this (though it also looks pretty complicated)

iagooar 1 day ago

Fair enough. I spend entire days working on the product, but obviously there are lots of times I am not running Codex - when reviewing PRDs, testing, talking to users, even posting on HN is good for the quota ;)

danielbln 2 days ago

I can recommend one more thing: tell the LLM frequently to "ask me clarifying questions". It's simple, but the effect is quite dramatic, it really cuts down on ambiguity and wrong directions without having to think about every little thing ahead of time.

iagooar 2 days ago

When do you do that? You give it the PRD and tell it to ask clarifying questions? Will definitely try that.

Mockapapella 2 days ago

This sounds very similar to my workflow. Do you have pre-commits or CI beyond testing? I’ve started thinking about my codebase as an RL environment with the pre-commits as hyperparameters. It’s fascinating seeing what coding patterns emerge as a result.

joshvm 2 days ago

I think pre-commit is essential. I enforce conventional commits (+ a hook which limits commit subject length to 50 chars) and, for Python, ruff with many options enabled. Perhaps the most important one is enforcing complexity limits. That will catch a lot of basic mistakes. Any sanity checks that you can make deterministic are a good idea. You could even add unit tests to pre-commit, but I think it's fine to have the model run pytest separately.
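
For reference, the complexity limit is just a couple of lines in pyproject.toml (C901 is ruff's mccabe rule; the threshold is just what I'd pick, nothing canonical):

    [tool.ruff.lint]
    extend-select = ["C901"]  # mccabe complexity check

    [tool.ruff.lint.mccabe]
    max-complexity = 10  # C901 fires above this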

The models tend to be very good about syntax, but this sort of linting will often catch dead code like unused variables or arguments.

You do need to rule-prompt that the agent may need to run pre-commit multiple times to verify the changes worked, or to re-add files to the commit. Also, frustratingly, you need to be explicit that pre-commit might fail and that it should fix the errors (otherwise sometimes it'll run and say "I ran pre-commit!"). For commits there are some other guardrails, like blanket denying git add <wildcard>.
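
That last guardrail can live in Claude Code's permission settings - a sketch of .claude/settings.json using its documented Bash() deny rules (the exact command list is just my pick):

    {
      "permissions": {
        "deny": [
          "Bash(git add .)",
          "Bash(git add -A)",
          "Bash(git add --all)"
        ]
      }
    }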

Claude will sometimes complain via its internal monologue when it fails a ton of linter checks and is forced to write complete docstrings for everything. Sometimes you need to nudge it to not give up, and then it will act excited when the number of errors goes down.

iagooar 2 days ago

Yes, I do have automated linting (a bit of a PITA at this scale). On the CI side I am using GitHub Actions - it does the job, but I haven't put much work into it yet.

Generally I have observed that using a statically typed language like Typescript helps catch issues early on. I had much worse results with Ruby.

daxfohl 2 days ago

Which of these steps do you think/wish could be automated further? Most of the latter ones seem like independent AI reviewers could almost fully automate them, maybe with a "notify me" option if there's something they aren't confident about? Could PRD review be made more efficient if it was able to color code by level of uncertainty? For 1, could you point it to a feed of customer feedback or something and just have the day's draft PRD up and waiting for you when you wake up each morning?

iagooar 2 days ago

There is definitely way too much plumbing and going back and forth.

But one thing that MUST get better soon is having the AI agent verify its own code. There are a few solutions in place, e.g. using an MCP server to give access to the browser, but these tend to be brittle and slow. And for some reason, the AI agents do not like calling these tools too much, so you kinda have to force them every time.

PRD review can be done, but AI cannot fill the missing gaps the same way a human can. Usually, when I create a new PRD, it is because I have a certain vision in my head. For that reason, the process of reviewing the PRD can be optimized by maybe 20%. Or maybe I struggle to see how tools could make me faster at reading and commenting on / editing the PRD.

daxfohl 2 days ago

Have you considered or tried adding steps to create / review an engineering design doc? Jumping straight from PRD to a huge code change seems scary. Granted, given that it's fast and cheap to throw code away and start over, maybe engineering design is a thing of the past. But still, it seems like it would be useful to have it delineate the high-level decisions and tradeoffs before jumping straight into code; once the code is generated it's harder to think about alternative approaches.

iagooar 2 days ago

It depends. But let me explain.

Adding an additional layer slows things down. So the tradeoff must be worth it.

Personally, I would go without a design doc, unless you work on a mission-critical feature humans MUST specify or deeply understand. But this is my gut speaking, I need to give it a try!

ants_everywhere 1 day ago

I have LLMs write and review design docs. Usually I prompt to describe the doc, the structure, what tradeoffs are especially important, etc. Then an LLM writes the doc. I spot check it. A separate LLM reviews it according to my criteria. Once everything has been covered in first draft form I review it manually, and then the cycle continues a few times. A lot of this can be done in a few minutes. The manual review is the slowest part.

rubicon33 2 days ago

How does it compare to Cursor with Claude? I’ve been really impressed with how well Cursor works, but I'm always interested in up-leveling if there are better tools, considering how fast this space is moving. Can you comment on how Codex performs vs Cursor?

dhorthy 2 days ago

Claude code is Claude code, whether you use it in cursor or not

Codex and Claude code are neck and neck, but we made the decision to go all in on opus 4, as there are compounding returns in optimizing prompts and building intuition for a specific model

That said I have tested these prompts on codex, amp, opencode, even grok 4 fast via codebuff, and they still work decently well

But they are heavily optimized from our work with opus in particular

mentos 2 days ago

What platform are you developing for, web?

Did you start with Cursor and move to Codex or only ever Codex?

drewnick 2 days ago

Not OP, but I use Codex for back-end, scripting, and SQL, and Claude Code for most front-end. I have found that when one faces a challenge, the other often can punch through and solve the problem. I even have them work together (moving thoughts and markdown plans back and forth) and that works wonders.

My progression: Cursor in '24, Roo code mid '25, Claude Code in Q2 '25, Codex CLI in Q3 `25.

iagooar 2 days ago

Yes, it is a web project with next.js + Typescript + Tailwind + Postgres (Prisma).

I started with Cursor, since it offers a well-rounded IDE with everything you need. It also used to be the best tool for the job. These days Codex + GPT-5-Codex is king. But I sometimes go back to Cursor, especially when reading / editing the PRDs or if I need the occasional 2nd opinion from Claude.

navanchauhan 2 days ago

Hey, this sounds a lot like what we have been doing. We would love to chat with you, and share notes if you are up for it!

Drop us an email at navan.chauhan[at]strongdm.com

foobartab 2 days ago

This just won't work beyond a one-person team

iagooar 2 days ago

Then I will adapt and expand. Have done it before.

I am not giving universal solutions. I am sharing MY solution.

ceedan 2 days ago

What is the % breakdown of LOC for tests vs application code?

iagooar 1 day ago

200k LoC + 80k LoC for tests.

I have roughly 2k tests now, but should probably spend a couple of days before production release to double that.

prisenco 2 days ago

Are you vibe coding or have the 200k LoC been human reviewed?

iagooar 2 days ago

I would not call it vibe coding. But I do not check all changed lines of code either.

In my opinion, and this is really my opinion, in the age of coding with AI, code review is changing as well. If you speed up how much code can be produced, you need to speed up code review accordingly.

I use automated tools most of the time AND I do very thorough manual testing. I am thinking about a more sophisticated testing setup, including integration tests via using a headless browser. It definitely is a field where tooling needs to catch up.

criemen 2 days ago

What does PRD mean? I never heard that acronym before.

iagooar 2 days ago

Product Requirements Document

It is a fairly standardized way of capturing the essence of a new feature. It covers the most important aspects of what the feature is about: the goals, the success criteria, even implementation details where it makes sense.

If there is interest, I can share the outline/template of my PRDs.

upcoming-sesame 2 days ago

can you expand on how you use shadcn UI with MCP?

iagooar 1 day ago

I add the MCP server (https://ui.shadcn.com/docs/mcp)

Then I instruct the coding agent to use shadcn / choose the right component from shadcn component registry

The MCP server has a search / discovery tool, and it can also fetch individual components. If you tell the AI agent to use a specific component, it will fetch it (reference doc here: https://ui.shadcn.com/docs/components)

Retr0id 2 days ago

Can we see it?

bopbopbop7 1 day ago

No, because everyone that claims to have coded some amazing software with AI Code Generator 3000 never seems to share their project. Curious.

iagooar 1 day ago

Book a demo! Really, it will not be self-service just yet, because it requires a bit of holding hands in the beginning.

But I am working on making a solid self-service signup experience - might need a couple of weeks to get it done.

ActionHank 2 days ago

[flagged]

dang 1 day ago

Please don't cross into personal attack. Also, please don't post snark to HN threads. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

iagooar 2 days ago

Programming has always had these steps, but traditionally people with different roles would do different parts of it, like gathering requirements, creating product concept, creating development tickets, coding, testing and so on.

rustystump 2 days ago

[flagged]

iagooar 1 day ago

It is more than 200k lines of slop. 200k lines of code slop, and 80k lines of test slop.

Drunkfoowl 2 days ago

[dead]

asdev 2 days ago

The problem is the research phase will fail because you can't glean tribal product knowledge from just looking at the code

Amaury-El 1 day ago

Using AI to help with code felt like working with a smart but slightly unreliable teammate. If I wasn’t clear, it just couldn’t follow. But once I learned to explain what I wanted clearly and specifically, it actually saved me time and helped me think more clearly too.

aitchnyu 1 day ago

> context management, and keeping utilization in the 40%-60% range (depends on complexity of the problem).

Is this a rule of thumb? Will the cheaper (fewer params) models dumb down at 25%?

pcannons 1 day ago

Nice - here's my experience writing production code in a large codebase, granted it's evolved a lot since: https://philippcannons.com/100x-ai-coding-for-real-work-summ...

Not surprisingly, building really fast is not the silver bullet you'd think it is. It's all about what to build and how to distribute it. Otherwise bigcos/billionaires would have armies of engineers growing their net worth to epic scales.

c54 1 day ago

Regarding billionaires having armies of engineers growing their wealth to massive scale: is that not what they have?

pcannons 1 day ago

My current world view: for monster multiples you need someone who knows how to go 0 to 1, repeatedly. That's almost always only the founder. People after are incremental. If they weren't, they'd just be a founder. Hence why everything is done through acquisitions post-founder. So there's armies of engineers incrementally scaling and maintaining dollars. But not creating that wealth or growing it in a significant % way.

djgrant 1 day ago

How granular are the specs? Is it at the level of "this is the code you must write, and here is how to do it", or are you letting AI work some of that out?

spariev 2 days ago

Thanks for sharing. I wonder how you keep the stylistic and mental alignment of the codebase - does this happen during code review, or are there specific instructions at the plan/implement stages?

alanfranz 1 day ago

> our team of three is averaging about $12k on opus per month

That’s USD 144k per year. Probably low for SF, but may be a lot in other areas.

habinero 1 day ago

You could almost hire a real engineer for that money.

vanillax 2 days ago

Doesn't GitHub's new spec-kit solve this? https://github.com/github/spec-kit

0xblacklight 2 days ago

how does this solve it?

jwpapi 1 day ago

Honestly, I was reading the article and smelled a sales pitch, and then at the end, of course, there it was.

This is not my experience at all.

I also don’t get the line obsession.

Good code has fewer lines, not more

skydhash 1 day ago

It seems like a different universe from the openbsd and the suckless guys.

jrecyclebin 1 day ago

Lots of gold in this article. It's like discovering a basket of cheat codes. This will age well.

Great links, BAML is a crazy rabbithole and just found myself nodding along to frequent /compact. These tips are hard-earned and very generously given. Anyone here can take it or leave it. I have theft on my mind, personally. (ʃƪ¬‿¬)

jongjong 1 day ago

When I read about people dumping 2000 lines of code every few days, I'm extremely skeptical about the quality of this code. All the people I've met who worked at this rate were always going for naive solutions and their code was full of hard-to-see bugs which only reared their ugly heads once in a while and were impossible to debug.

jillesvangurp 1 day ago

We're currently in a transition phase where we're using agentic coding on systems developed with tools and languages designed for humans. Ironically, this makes things unnecessarily hard, as things that are easy for us aren't necessarily easy for them to deal with, or that optimal for agentic coding systems.

People like languages that are expressive and concise. That means they do things like omit types, use type inference, macros, syntactic sugar, allow for ambiguities and all the other stuff that gives us shorter, easier to type code that requires more effort to figure out. A good intuition here might be that the harder the compiler/interpreter has to work to convert it into running/executable code, the harder an LLM will have to work to figure out what that code does.
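
To make that concrete, a contrived Python example (the names are mine, purely illustrative): the first version is what humans like to write; the second spells out what a model would otherwise have to infer.

    from dataclasses import dataclass

    @dataclass
    class OrderRow:
        sku: str
        price: float
        quantity: int
        is_valid: bool

    @dataclass
    class OrderSummary:
        sku: str
        total: float

    # Concise and human-friendly: fine for a person holding context in
    # their head, but a model must chase definitions to learn the types.
    def summarize(rows):
        return [OrderSummary(r.sku, r.price * r.quantity) for r in rows if r.is_valid]

    # Verbose but unambiguous: the signature alone says what goes in and out.
    def summarize_orders(rows: list[OrderRow]) -> list[OrderSummary]:
        return [OrderSummary(row.sku, row.price * row.quantity)
                for row in rows if row.is_valid]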

LLMs don't mind verbosity and spelling things out. Things that are long winded and boring to us are helpful for an LLM. The optimal language for an LLM is going to be different than one that is optimal for a human. And we're not good at actually producing detailed specifications. Programming actually is the job of coming up with detailed specifications. Easy to forget when you are doing that but that's literally what programming is. You write some kind of specification that is then "compiled" into something that actually works as specified.

The solution to agentic coding isn't writing specifications for our specifications. That just moves the problem.

We've had a few decades of practice where we just happen to stuff code into files and use very primitive tools to manipulate those files. Agentic coding uses a few party tricks involving command line tools to manipulate those files and read them one by one into the precious context window. We're probably shoveling too much data around. But since that's the way we store code, there are no better tools to do that.

From having used things like Codex, 99% of what it does is interrogating what's there via tediously slow prodding and poking around the code base using simple command line commands and build tool invocations. It's like watching paint dry. I usually just go off doing something else while it boils the oceans and does god knows what before finally doing the (usually) relatively straightforward thing that I asked it to do. It's easy to see that this doesn't scale that well.

The whole point of a large code base is that it probably won't all fit in the context window. We can try to brute force the problem, or we can try to be more selective. The name of the game here is being able to quickly select just the right stuff to put in there and discard all the rest.

We can either do that manually (tedious and a lot of work, sort of as the article proposes), or make it easier for the LLM to use tools that do that. Possibly a bunch of poorly structured files in some nested directory hierarchy isn't the optimal thing here. Most non AI based automated refactorings require something that more closely resembles the internal data structures of what a compiler would use (e.g. symbol tables, definitions, etc.).

A lot of what an agentic coding system has to do is reconstruct something similar enough to that just so it can build a context in which it can do constructive things. The less ambiguous and more structured that is, the easier the job. The easier we make it to do that, the more it can focus on solving interesting problems rather than getting ready to do that.

I don't have all the answers here but if agentic coding is going to be most of the coding, it makes sense to optimize the tools, languages, etc. for that rather than for us.

pvncher 1 day ago

[dead]

ath3nd 2 days ago

Why though. Why should we do that?

If AI is so groundbreaking, why do we have to have guides and jump through 3000 hoops just so we can make it work?

spaniard89277 2 days ago

Because now your manager will measure you on LOCs against other engineers again, and it's only software engineers who worry about complexity, maintainability and, in short, the health of the very creature that pays your salary.

This is the new world we live in. Anyone who actually likes coding should seriously look for other avenues, because this industry is for another type of people now.

I use AI in my job. I went from tolerable (not doing anything fancy) to unbearable.

I'm actually looking to become a council employee with a boring job and code my own stuff, because if this is what I have to do moving forward, I'd rather go back to non-coding jobs.

dhorthy 2 days ago

i strongly disagree with this - if anything, using AI to write real production code in a real complex codebase is MORE technical than just writing software.

Staff/Principal engineers already spend a lot more time designing systems than writing code. They care a lot about complexity, maintainability, and good architecture.

The best people I know who have been using these techniques are former CTOs, former core Kubernetes contributors, have built platforms for CRDTs at scale, and many other HIGHLY technical pursuits.

ath3nd 2 days ago

[dead]

ej88 2 days ago

why do we have guides and lessons on how to use a chainsaw when we can hack the tree with an axe?

0xblacklight 2 days ago

if nuclear power is so much better than coal, why do we need to learn how to safely operate a reactor just to make it work? Coal is so much easier

logicchains 2 days ago

Even if we had perfectly human-level AI it'd still need management, just like human workers do, and it turns out effective management is actually nontrivial.

Michael_Keller 1 day ago

[dead]

Emma_Schmidt 1 day ago

[dead]

rybosworld 2 days ago

TLDR:

We're taking a profession that attracts people who enjoy a particular type of mental stimulation, and transforming it into something that most members of the profession just fundamentally do not enjoy.

If you're a business leader wondering why AI hasn't super charged your company's productivity, it's at least partly because you're asking people to change the way they work so drastically, that they no longer derive intrinsic motivation from it.

Doesn't apply to every developer. But it's a lot.

hufdr 1 day ago

[dead]

rationalfaith 2 days ago

[dead]

techlatest_net 2 days ago

[dead]

dhorthy 2 days ago

[dead]

r2ob 2 days ago

I refactored CPython using GPT-5, making the compiler bilingual, with both English and Portuguese keywords.

https://github.com/ricardoborges/cpython

What web programming task can't GPT-5 handle?

SafeDusk 1 day ago

OpenAI Codex has an `update_plan` function[0]. I'm wondering whether switching the implementation to this would improve the coding agent's capabilities, or whether the default is better for its simplicity.

[0]: https://blog.toolkami.com/openai-codex-tools/