I disagree with the premise. It made all engineering easier. Bad and good.
I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
nhaehnle12 hours ago
I believe the article exaggerates to make a point. Yes, good engineering can also be assisted with LLM-based agents, but there is a delta.
Good engineering requires that you still pay attention to the result produced by the agent(s).
Bad engineering might skip over that part.
Therefore, via Amdahl's law, LLM-based agents overall provide more acceleration to bad engineering than they do to good engineering.
0xcafefood12 hours ago
The connection to Amdahl's law is totally on point. If you're just using LLMs as a faster way to get _your_ ideas down, but still want to ensure you validate and understand the output, you won't get the mythical 10x improvement so many seem to claim they're getting. And if you do want that 10x speedup, you have to forego the validation and understanding.
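To make the Amdahl's-law point concrete, here's a back-of-the-envelope sketch (the 40% and 10x figures are made up for illustration):

```python
# Amdahl's law: if only the "typing code" fraction p of the job is sped up
# by factor s, the part you still do yourself (review, validation) bounds
# the overall speedup.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# assumption: typing is ~40% of the job and the LLM makes it 10x faster
print(overall_speedup(0.4, 10))  # ~1.56x overall, nowhere near 10x

# the only way to approach 10x is to stop validating (p -> 1.0)
print(overall_speedup(1.0, 10))  # 10.0
```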
koonsolo11 hours ago
I do agree with you, but don't underestimate the projects where you can actually apply this 10x. For example, I wanted to get some analytics out of my database. What would have been a full weekend project was now done in an hour. So for such things there is a huge speed boost.
But once software becomes bigger and more complex, the LLM starts messing up, and the expert has to come in. That basically means your months-long project cannot be done in a week.
My personal prediction: plugins and systems that support plugins will become important. Because a plugin can be written at 10x speed. The system itself, not so much.
judahmeek2 hours ago
I think there will also be a lot of work in how to modularize month long projects into plugin sized pieces.
furyofantares12 hours ago
Well, it's made bad engineering massively easier and good engineering a little easier.
So much so that many people who were doing good engineering before have opted to move to doing three times as much bad engineering instead of doing 10% more good engineering.
myrak12 hours ago
[dead]
bensyverson12 hours ago
In corporate app development, I would see tests to check that the mocks return the expected values. Like, what are we even doing here?
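For anyone lucky enough not to have seen this pattern, a minimal sketch of what such a test looks like (hypothetical names; Python's unittest.mock standing in for whichever mocking library was actually used):

```python
from unittest.mock import Mock

def test_get_user_returns_user():
    # configure a mock repository...
    repo = Mock()
    repo.get_user.return_value = {"id": 1, "name": "Ada"}
    # ...then assert the mock returns what we just configured.
    # No production code is exercised; this only tests the mock itself.
    assert repo.get_user(1) == {"id": 1, "name": "Ada"}
```

It passes, it produces a green checkmark, and nothing about the real system was verified.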
steve_adams_8612 hours ago
# abstract internals for no reason
def do_thing(x: bool) -> bool:
    if x:
        return True
    else:
        return False

# make sure our logic works as expected
assert do_thing(True)
# ???
# profit
It's excellent software engineering because there are tests
anilakar12 hours ago
Someone was asked to test untestable code so verifying mock contents was the best they could come up with.
mvpmvh12 hours ago
No. Someone was asked to meet an arbitrary code coverage threshold. I'm dealing with this malicious compliance/weaponized incompetence at $current_job
dwoldrich11 hours ago
How will you deal with it? I successfully convinced $big_important_group at $day_job to not implement a policy of failing their builds when code coverage dips below their target threshold > 90%. (Insane target, but that's a different conversation.)
I convinced them that if they wanted to treat uncovered lines of code as tech debt, they needed to add an epic with stories to their backlog to write those tests. Artificially setting some high target coverage threshold will produce garbage, because developers will write do-nothing tests in order to get their work done without tripping the alarms. I argued that failing the builds on code coverage would be unfair because the tech debt created by past developers would hinder random current-day devs getting their work done.
Instead, I recommended they pick their current coverage percentage (it was < 10% at the time) and set the threshold to that simply to prevent backsliding as new code was added. Then, as their backlogged, legit tests were implemented, ratchet up the coverage threshold to the new high water mark. This meant all new code would get tests written for it.
And, instead of failing builds, I recommended email blasts to the whole team to indicate there was some recent backsliding in the testing regime and the codebase had grown without accompanying tests. It was not a huge shame event, but a good motivator for the team to keep up the quality. SonarQube was great for long-term tracking of coverage stats.
Finally, I argued the coverage tool needed to have very liberal "ignore" rules that were agreed to by all members of the team (including managers). Anything that did not represent testable logic written by the team: generated code, configurations, tests themselves, should not count against their code coverage percentages.
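A minimal sketch of the ratchet logic described above (function name and numbers are made up; the real thing would read the stored threshold from CI config or SonarQube):

```python
def check_and_ratchet(current_pct: float, stored_threshold: float) -> tuple[bool, float]:
    """Flag backsliding; otherwise ratchet the threshold up to the new high-water mark."""
    if current_pct < stored_threshold:
        # coverage dropped below the floor: send the email blast, don't fail the build
        return False, stored_threshold
    return True, max(stored_threshold, current_pct)

# start at the current (low) coverage, then ratchet as legit tests land
ok, threshold = check_and_ratchet(current_pct=12.5, stored_threshold=10.0)
print(ok, threshold)  # True 12.5 -- 12.5 is the new floor
```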
dwoldrich12 hours ago
You could ask the same thing about tests themselves. And I'm not talking about tests that don't exercise the code in a meaningful manner like your assertions on mocks(?!)
I'm saying you could make the same argument about useful tests themselves. What is testing that the tests are correct?
Uncle Bob would say the production code is testing the tests but only in the limited, one-time, acceptance case where the programmer who watches the test fail, implements code, and then watches it pass (in the ideal test-driven development scenario.)
But what we do all boils down to acceptance: a human user or stakeholder continuing to accept the code as correct equals a job well done.
Of course, this is itself a flawed check because humans are flawed and miss things and they don't know what they want anyhow. The Agile Manifesto and Extreme Programming were all about organizing to make course corrections as cheap as possible to accommodate fickle humanity.
> Like, what are we even doing here?
What ARE we doing? A slapdash job on the whole. And AI is just making slapdash more acceptable and accepted, because it is so clever and the boards of directors are busy running this latest craze into the dirt. "Baffle 'em with bullsh*t" works in every sector of life and lets people get away with all manner of sins.
I think what we SHOULD be doing is plying our craft. We should be using AI as a thinking tool, and not treat it like a replacement for ourselves and our thinking.
RobRivera12 hours ago
I'm trying to wrap my head around this.
So there are tests that leverage mocks. Those mocks help validate software is performing as desired by enabling tests to see the software behaves as desired in varying contexts.
If the software fails, it is because the mocks exposed that under certain inputs, undesired behavior occurs, an assert fails, and a red line flags the test output.
Validating that the mocks return the desired output....
Maybe there is a desire that the mocks return a stream of random numbers and the mock validation tests asserts said stream adheres to a particular distribution?
Maybe someone in the past pushed a bad mock, that mock let a test pass that would have failed given a better mock, and the post mortem, once the bad software in prod was traced back to the bad mock, produced a requirement that all mocks must be validated?
bensyverson10 hours ago
Yeah, seems plausible, or it was just "belt and suspenders." Sure made a lot of pretty green checkmarks.
I think it's easy to forget that the LLM is not a magic oracle. It doesn't always give great answers. What you do with the LLM's output determines whether the engineering you produce is good or bad. There are places where you can plonk in the LLM's output as-is and places you can't, or times when you have to keep nudging for a better output, and times when nothing the LLM produces is worth keeping.
It makes bad engineering easier because it's easy to fall into the trap of "if the LLM said so, it must be right".
onlyrealcuzzo12 hours ago
Even if you agree with the OP, there's a large portion of applications where it simply doesn't matter if the quality of the software is good or terrible as long as it sufficiently works.
tyleo11 hours ago
Yeah, I've seen this too. I like to call them "single-serving apps". I made a flashcard app to study for interviews and one-shot it with Claude Code. I've had it add some features here and there but haven't really looked at the code.
It's just a small CLI app in 3 TypeScript files.
ffsm811 hours ago
> Ive known people at every company who add copious null checks rather than understanding things and fixing them properly.
Y'know "defensive programming" is a thing, yeah? Sorry mate, but that's a statement I'd expect from juniors, who are also often the ones claiming their own technical superiority over others.
retrodaredevil11 hours ago
Adding null checks where they aren't needed means adding branching complexity. It means handling cases that may never need to be handled. Doing all that makes it harder to understand "could this variable ever be null?" If you can't answer that question, it is now harder to write code in the future, often leading to even more unnecessary null checks.
I've seen legacy code bases during code review where someone will ask "should we have a null check there?" and often no-one knows the answer. The solution is to use nullability annotations IMO.
It's easy to just say "oh this is just something a junior would say", but come on, have an actual discussion about it rather than implying anyone who has that opinion is inexperienced.
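To illustrate the annotations point (hypothetical functions, with Python's Optional standing in for whatever nullability annotations your language offers): the signature answers "could this ever be null?", so the check lives exactly where the type says it must, and nowhere else:

```python
from typing import Optional

def greet(name: Optional[str]) -> str:
    # the annotation says name may be None, so a check here is justified
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name}"

def shout(message: str) -> str:
    # the annotation says message is never None: a defensive
    # check here would only add a branch no one could ever hit
    return message.upper()
```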
ffsm87 hours ago
No, the branching complexity exists anyway. You've just made it clearly visible by adding a null check, or you accept that the computation may fail if the assumption is violated.
You never know what changes will be made in the future. The variable may not be nullable in the scenario you're up to date on today, but that doesn't necessarily mean it'll stay that way.
Ultimately, there is a cost associated with null checks everywhere and another by omitting them. The person I responded to just insinuated that people who introduce copious amounts of null checks are inept and lazy.
In response to that I pointed out that that's literally one of the core tenets of defensive programming, and people that make such sweeping statements about other people's capabilities in this way are very often juniors.
I stand by this opinion. You can disagree on specific places where a null check may have been placed unnecessarily, but that's always a discussion about a specific field and cannot be generalized like he did there.
archagon9 hours ago
Except vibe coding is not "engineering," but more akin to project management. Engineering presupposes a deep and thorough understanding of your code. If you ship code that you’ve never even looked at, you are no longer an engineer.
agentultra12 hours ago
100%.
There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.
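For example, a pile of hand-picked asserts can still miss the input that breaks an invariant; checking a property over many sampled inputs, a poor man's property-based test (toy example), at least states what "correct" means:

```python
import random

def clamp(x: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, x))

# the invariant we actually care about, checked across many random inputs
# rather than one or two examples someone happened to think of
for _ in range(1000):
    x = random.randint(-10**6, 10**6)
    lo, hi = sorted(random.sample(range(-10**6, 10**6), 2))
    assert lo <= clamp(x, lo, hi) <= hi
```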
Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.
Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.
There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
carlosjobim11 hours ago
I don't have any idea of what a unit test is, but with AI I can make programs that help me immensely in my real world job.
Snobby programmers would never even return an email offering money for their services.
staticassertion11 hours ago
It's unclear what point you're even trying to make, other than that AI has been helpful to you. But surely you understand that if you don't know what a unit test is you're probably not in a position to comment on the value of unit testing.
> Snobby programmers would never even return an email offering money for their services.
Why would they? I don't respond to the vast majority of emails, and I'm already employed.
carlosjobim11 hours ago
Helpful to me and millions of others. Soon to be billions even.
You are employed because somewhere in the pipeline there are paying customers. They don't care about unit tests, they care about having their problems solved. Beware of AI.
staticassertion11 hours ago
Right... I mean, no engineer is going to tell you that customers care about unit tests, so I think you're arguing against a straw man here. What engineers will tell you is that bugs cost money, support costs money, etc, and that unit tests are one of the ways we cheaply reduce those costs in order to solve problems, which is what we're in the business of doing.
We are all very aware of the fact that customers pay us... it seems totally strange to me that you think we wouldn't be aware of this fact. I suspect this is where the disconnect comes in, much to the point of the article: you seem to think that engineers just write tests and code, but the article points out how silly that is. We spend most of our time thinking about our customers, features, user experience, and how to match those to the technology choices that will allow us to build, maintain, and support systems that meet customer expectations.
I think people outside of engineering might be very confused about this, strangely, but engineers do a ton of product work, support work, etc, and our job is to match that to the right technology choices.
staticassertion8 hours ago
Yeeesh. Looking at other posts from that user, they seem to have a serious grudge against software devs, presumably for not responding to their emails. "You should starve" - words taken from another post.
Look, no one wanted to write code for you, idk what to tell you. Now you can have AI do that for you. Congrats, best of luck. Whatever weird personal issue you have, I doubt anyone refused to work for you out of whatever this perceived snobbery is; it's just like... we all have jobs?
carlosjobim7 hours ago
I don't have a grudge, but there needs to be some balance. Software devs are incredibly well paid compared to other professionals. It is their responsibility to use their talents to benefit themselves, and if they are out-competed then they should work with something else. They don't have a right to a fantastic career.
All other workers have had to go through this when their fields became more automated and efficient. A cargo ship used to have hundreds of crew, now it's a dozen and the amount of cargo on a ship is ten times more.
So I will absolutely not cry for a software dev who has to make changes in the face of AI competition. If they're too precious to adapt or take a different job, then starve.
> Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck.
Me and hundreds of thousands of other organizations who have software needs that were under served by the market. Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on. If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source. Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
staticassertion7 hours ago
> If they're too precious to adapt or take a different job, then starve.
Yeah I mean I think everyone is with you except for the "then starve", this is just weirdly combative and lacking in empathy, I find it totally strange.
> Me and hundreds of thousands of other organizations who have software needs that were under served by the market.
And... you blame software developers for that? You blame software devs for a lack of capacity in the field? So weird.
> Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
Okay, I mean, this has always been an option. I guess it will be more of an option now. There have been consulting agencies or "WYSIWYG" editors like Wix or other "low code/ no code" platforms for ages. No one is going to be upset that you're using them. This hostility is totally one sided lol
> I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on.
We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
> If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source.
I find this funny because hobbyist/ open source projects are by far the ones that are glamorized by the field, not enterprise software.
> Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
I have no idea where you get this impression from. Most software devs I've worked with are motivated heavily by solving real world problems. I think you have very, very little insight into what software development actually looks like or what software engineers are motivated by. Frankly, this comes off as very much "I was snubbed and now I'm happy that the people who I perceive as having snubbed me will be replaced by AI", which I think is quite lame. You definitely seem to have a resentful tone to your posts that I find weird.
carlosjobim4 hours ago
Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force. Asking yourself "who is going to purchase the products?" when software development is being automated is quite silly. Why didn't they ask that question when thousands of other professions suffered the same?
> And... you blame software developers for that?
I don't blame them. They had more lucrative ventures to tend to. Now that under served market segments can be served with the help of AI, then they shouldn't complain.
You mention making web sites, but this is probably the only field in computing where the market has a lot of offerings to customers from all segments. If I need a website I don't have to use Wix, there is an endless supply of freelancers or small, medium, or big studios that offer their services. The same cannot be said of other bespoke software needs.
BTW, you are hearing a lot more hostility in my comments than is actually there.
> We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
Yes it is snobbery. Other skilled professionals generally do not have that option, they have to do the boring stuff as well. And if you only like to work on interesting projects, then why are people complaining that AI is taking their jobs?
Regarding hobbyist / open source, I mean that when software devs aren't working on big enterprise style projects as a job, they tend to work on enterprise style projects as open source, or just play with hobbyist projects. Servicing smaller customers with bespoke software seems to be considered a little bit beneath the programmer dignity.
And it's not my personal experience talking. Consider how many studios are offering bespoke software for small businesses, compared to how many studios are offering websites for small businesses. There's a huge gap, that is probably going to be filled in some way pretty soon.
staticassertion4 hours ago
> Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force.
This is silly equivocation. I'm telling you that your statement lacks empathy, and you're making vague, unclear gestures to an entire field.
Anyway, reading your post it's clear that you have a rather pathetic grudge because software devs weren't interested in working with you, and now you get to grin gleefully as you see AI take away jobs. You obviously have zero insight into software development as a practice, nor how software devs think; this is glaringly obvious from your framing of software devs giving software away for free as somehow snobbery, because they wouldn't work on whatever project you clearly hold a grudge over. Further, your comments from start to finish demonstrate a complete lack of understanding of what the job actually entails.
> BTW, you are hearing a lot more hostility in my comments than is actually there.
Maybe so! I can't tell you what you actually think, but it comes off as really pathetic, so maybe reread your post and consider why I'm hearing it.
Best of luck in your ventures.
carlosjobim7 hours ago
What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
With AI code we might get software that is an ugly mess underneath, but at least we have it. While human programmers are unwilling to provide this software for even a high price.
I could argue that people are better off having nothing to eat rather than having low quality food. But in reality something is better than nothing.
There is a gigantic market and a gigantic need of software in the field between hobbyist and enterprise. And AI code will serve that field. Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Otherwise more and more people will do like me and have AI make their own bespoke solutions.
staticassertion7 hours ago
> What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
You think that good engineering is unrelated to fulfilling needs at a price they pay? I think you're confused. Software engineers are tasked with exactly this problem - determining how to deliver utility at a price point. That's... the whole job. We consider how much it costs to run a database, how to target hardware constraints, how to build software that will scale accordingly as new features are added or new users onboard, etc. That's the job...
That's sort of the whole point of the article. The job isn't "write code", which is what AI does. The job is "understand the customer, understand technology, figure out how to match customer expectations, business constraints, and technologies together in a way that can be maintained at-cost".
> While human programmers are unwilling to provide this software for even a high price.
Sorry, but this is just you whining that people didn't want to work for you. Software engineers obviously are willing to provide software in exchange for money, hence... idk, everyone's jobs.
> And AI code will serve that field.
That may be true, just as no-code solutions have done in the past.
> Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Yes, I agree.
> Otherwise more and more people will do like me and have AI make their own bespoke solutions.
I'm a bit skeptical of this broadly in the long term but it's certainly the case that some portion of the market will be served by this approach.
agentultra5 hours ago
The end user of a bridge doesn’t care about most things the engineer who designed it does. They care that the bridge spans the gap and is safe to use. The company building the bridge and the group financing its construction care about a few more things like how much it will cost to provide those things to the end user, how long it will last. The engineer cares about a few more things: will the tolerances in the materials used in the struts account for the shearing forces of the most extreme weather in the region?
So it is with software.
You might not need a blueprint if you're building a shed in your back yard. This is the kind of software an end user might write or script themselves. If it's kind of off, nobody is going to get hurt.
In many cities in North America you can’t construct a dwelling with plumbing connections to a sewer and a permanent foundation without an engineer. And you need an engineer and a blueprint to get the license to go ahead with your construction.
Because if you get it wrong you can make people in the dwelling sick and damage the surrounding environment and infrastructure.
Software-wise this is where you’re handling other people’s sensitive data. You probably have more than one component that needs to interact with others. If you get it wrong people could lose money, assets could get damaged, etc.
This is where I think the software industry needs to figure out liability and maybe professionalize a bit. Right now the liability is with the company and the deterrents are basically no worse than a speeding ticket in most cases. It’s more profitable to keep speeding and pay off the ticket than to prevent harm from throwing out sloppy code and seeing what sticks.
Then if you are building a sky scraper… well yeah, that’s the large scale stuff you build at big tech.
There are different degrees of software with different requirements. While not engineers by professional accreditation, in practice I would say most software developers are doing engineering… or trying.
What I agree with in the article is that AI tools make bad engineering easier. That is, for people building houses and skyscrapers who should be thinking about blueprints: they are working under the assumption that the AI is "smart" and will build skyscrapers for them. They're not thinking about the things they ought to be, least of all the things that will pass on to the customers: cost, and a product that isn't fit for use.
A bridge that falls down if you drive too fast over it isn’t a useful bridge even though it looks like a bridge.
xg1512 hours ago
> Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively.
As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
But not if the only reward for this would be to be laid off.
So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
neversupervised12 hours ago
The goal has nothing to do with you being employed. Your job security is a consequence of the ultimate goal to build AGI. And software development salaries and employment will be affected before getting there. In my opinion, we're already past the SWE peak as far as yearly salary goes. Yes there are super devs working on AI making a lot of dough, but I consider that a particular specialty. On average the salary of a new grad SWE in the US is past its peak if you consider how many new grads can't get a job.
staticassertion12 hours ago
> if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
I don't really believe this is possible. Or, it's the sort of thing that gets solved at a "product" level. Reality is complicated. People are complicated. The idea that software can exist without complexity feels sort of absurd to me.
That said, to your larger point, I think the goal is basically to eliminate the middle class.
carlosjobim11 hours ago
You can work with something else if there's no longer any demand for your current skills. If you refuse you should starve.
fao_12 hours ago
> So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
Capitalism is reliant on the underclass (the homeless, the below minimum-wage) to add pressure to the broader class of workers in a way that makes them take jobs that they ordinarily wouldn't (Because they may be e.g. physically/emotionally unsafe, unethical, demeaning), for less money than they deserve and for more hours than they should. This is done in order to ensure that the price of work for companies is low, and that they can always draw upon a needy desperate workforce if required. You either comply with company requirements, or you get fired and hope you have enough runway not to starve. This was written about over a hundred years ago and it's especially true today in the modern form of it. Programmers as a field have just been materially insulated from the modern realities of "your job is timing your bathroom breaks, tracking how many hours you spend looking at the internet, your boss verbally abuses you for being slow, and you aren't making enough money to eat properly".
This is also why many places do de-facto 'cleansings' of homeless people by exterminating their shelter or removing their ability to survive off donations, and why the support that is given for people without the means to survive is not only tedious but almost impossible to get. The majority of workers are supposed to look at that and go "well fuck, glad that's not me!" with a little part of their brain going "if i lost my job and things went badly, that could become me."
This is also why immigration enforcement is a thing — so many modern jobs that nobody else in the western world wants to do are taken by immigrants. The employer won't look too closely at the visa, and in return the person gets work. With the benefit being towards the employer — if the person refuses to do something dangerous to themselves or others, or refuses to produce enough output to sustain the exponential growth at great personal cost, well, then the company can just cut the immigrant loose with no recourse, or outright call the authorities on them so they get deported. Significantly less risky to get people to work in intolerable conditions for illegal wages if there is no hope of them suing you for this.
Back in the 1900s there were international conventions to remove passports. Now? Well, they're a convenient underclass for political manoeuvring. Why would you want people to have freedom of movement if your own citizens could just leave when things get bad, and when the benefits are a free workforce that you don't have to obey workers rights laws about?
a_void_sky12 hours ago
"Coding was never the hard part. Typing syntax into a machine has always been the least interesting part of building a system."
and I think these people are benefitting from it the most, people with expertise, who know their way around and knew what and how to build but did not want to do the grunt work
embedding-shape12 hours ago
Slight adjustment, but I'd say "maintaining code" is the same as before: what matters more is the people and their experience and knowledge of how to manage that. But I agree that the literal typing was never the difficult part; knowing what code should and shouldn't be written was a hard part, and still remains a hard part.
Right now, "what code shouldn't be written" seems to have become an even more important part, as it's so easy to spit out huge amounts of code, but that's the lazy and easy way, not the one that let you slowly add in features across a decade, rather than getting stuck with a ball of spaghetti after a weekend of "agent go brrr".
a_void_sky12 hours ago
this is where years of experience working with freshers and junior devs helps. AI is smart enough to do exactly what you want if you clearly tell it what to do and how
unless you understand every inch of the system and can foresee what issues a given kind of change will create, things will break when using AI
j3k312 hours ago
I think that captures a lot of the LLM debate.
There are people who just want an object produced that allows for some outcome to be achieved closer to the present.
And there are other people who want to ensure that object that is produced, will be maintainable, not break other parts of the system etc.
Neither party is wrong in what they want. I think there should naturally be a split of roles - the former can prototype stuff so other individuals in the organisation can critique whether it is a thing of value / worth investing in for production.
a_void_sky12 hours ago
me and my team have wasted so many hours (days) working on product features that were "definitely going viral", only to be forgotten after a few weeks
I believe if we had something like this we could have gone to market early and understood user behaviour, then built a more scalable and robust system once we were sure it was even worth it
j3k312 hours ago
Yeah, that's a good example.
The reality is humans are really bad at knowing what is worth investing in... until the object is there for all to see and critique.
Every idea sounds great until you spend resources getting into the subtleties and nuances.
heliumtera11 hours ago
Hm. People who know what they are doing prefer to do it themselves. This is not a new thing, not because of LLMs; it's how it has always been. People who know more have stronger opinions. Given the option of accepting new code and new functionality, people who know what they are doing will often reject code that functions fine but fails their expectations.
I think those benefiting the most are the people who said syntax was the least interesting part but couldn't program for shit.
Typing into a machine is not the least interesting part. It is the only interesting part. Everything else is a fairy tale
jazz9k12 hours ago
Juniors that are relying too heavily on AI now will pay the price down the line, when they don't even know the fundamentals, because they just copy and pasted it from a prompt.
It only means job security for people with actual experience.
sega_sai12 hours ago
When I see this: "One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was."
I don't think I can take this seriously.
Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And none of us have infinite time on our hands.
So AI not only enables things you just could not afford to do in the past, it also lets you spend more time on 'engineering', or even try multiple approaches, which would have been impossible before.
staticassertion11 hours ago
It's hard to reconcile "I don't think I can take this seriously" followed by an immediate admission that you agree but that there's some nuance.
I think the author's post is far more nuanced than this one sentence that you apparently agree with fundamentally.
anilakar12 hours ago
Agree. Writing code has always been the most time-consuming part that distracts me from actual design. AI just emphasizes the fact that anyone can do the keyboard mashing while reading code is the actual skill that matters.
Give a woodcutter a chainsaw instead of an axe and he'll fell ten times more trees. He'll also likely cause more than ten times the collateral damage.
sshine12 hours ago
It also made good engineering easier.
AI is an amplifier of existing behavior.
staticassertion12 hours ago
I think it's easier for good engineers to be good, perhaps. For example, I think property testing is "good" engineering. But I suspect that if you took a survey of developers the vast majority would not actually know what property testing is. So is AI really of any use then for those devs?
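(For anyone who hasn't run into it: a property test asserts invariants over many generated inputs instead of a single hand-picked example. A minimal hand-rolled sketch in Python, using plain `random` rather than a library like Hypothesis:)

```python
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def test_sort_properties(trials=500):
    # Instead of one example, assert invariants over many random inputs.
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 30))]
        out = sorted(xs)
        assert is_sorted(out)            # output is ordered
        assert len(out) == len(xs)       # no elements gained or lost
        assert sorted(out) == out        # sorting is idempotent
        for v in set(xs):                # output is a permutation of input
            assert out.count(v) == xs.count(v)

test_sort_properties()
```

Libraries like Hypothesis add input generation strategies and shrinking of failing cases on top, but the core idea is just this loop.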
qsera12 hours ago
The internet enabled instant access to all human knowledge, as well as instant chit-chat across the globe. Guess what humanity chose to drown itself in?
So combine both facts here in context, with human nature, and you'll see where this will go.
maplethorpe12 hours ago
I think it depends on your process. Problems that require creative solutions are often solved through the act of doing. For me, it's the act of writing code itself that ignites the pathways and neural connections that I've built up in my brain over the years. Using AI circumvents that process, and those brain circuits go unused.
I love that I don’t have to look thru SO looking for a problem that’s kind of like the one I’m having. I have a problem solved based on my exact code taking my entire code base into account. And if I want a biting sarcastic review of my many faults as a software developer, I can ask for that too.
groundzeros201512 hours ago
I’m an AI skeptic and in no sense is it taking my peers job. But it does save me time. I can do research much better than Google, explore a code base, spit out helper functions, and review for obvious mistakes.
gitaarik1 hour ago
So what do you mean by being a sceptic? I thought you meant you weren't convinced of its usefulness, and therefore don't use it. But you do seem to use it, so what is it you're sceptical about?
coffeefirst12 hours ago
Yep. And the more time I spend with the agents the more I’m convinced that your way is the endgame.
furyofantares12 hours ago
AI Didn't Simplify Blogging: It Just Made Bad Blogging Easier
I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.
polynomial12 hours ago
Robots making fun of us complaining about them.
hyperbovine12 hours ago
I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — 10x ability to try new ideas, if not more. When I find something that has promise, then I go into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in this process is a lot more limited, though it can still be useful.
j3k312 hours ago
That's because an LLM can access a breadth at any given moment that you cannot. That's the advantage it has.
E.g. quite often a sound (e.g. music) brings back memories of a time when it was being listened to etc.
Our brains need something to 'prompt' (ironic, I know) for stuff in the brain to come to the front. But the human is (or should be) the final judge of what is wrong / low quality vs. high quality. A taste element is necessary here too.
woeirua12 hours ago
How many model releases are we away from people like this throwing in the towel? 2? 3?
__MatrixMan__12 hours ago
Naw, I just yesterday caught something in test that would've made it to prod without AI. It happens all the time.
You can't satisfy every single paranoia, eventually you have to deem a risk acceptable and ship it. Which experiments you do run depends on what can be done in what limited time you have. Now that I can bootstrap a for-this-feature test harness in a day instead of a week, I'm catching much subtler bugs.
It's still on you to be a good engineer, and if you're careful, AI really helps with that.
j3k312 hours ago
Change 'good' for 'disciplined'.
Problem is.. discipline is hard for humans. Especially when exposed to a thing that at face-value seems like it is really good and correct.
__MatrixMan__11 hours ago
We can cheat on discipline if we design our workflows with more careful thought about incentives.
I wound up in a role where I throw away 100% of the code that I write within a few months. My job is about discovering cases where people are operating under false assumptions (typically about how some code will or won't be surprising in context with some dataset), and inform them of the discrepancy. "proofs" would be too strong of a word, but I generate a lot of code that then generates evidence which I then use in an argument.
I do try to be disciplined about the code I rely on, but since I have no incentive to sneak through volumes of unreliable code before moving on to the next feature, it's easy to do. When I'm not diligent, the pain comes quickly, and I once again learn to be diligent. At the end of the day I end up looking at a dashboard I had an agent throw together and I decide if the argument I intend to make based on that dashboard is convincing.
Also, agent sycophancy isn't really a problem because the agents are only asked to collect and represent the data. They don't know what I'm hoping to see, so it's very uncommon that they end up generating something deceptive. Their incentives are also aligned.
I think we can structure much of our work this way (I just lucked into it) where there's no conflict of interest and therefore the need to be disciplined is not in opposition to anything else.
_pdp_12 hours ago
Put a bad driver in an F1 car and you won't make them a racer. You will just help them crash faster. Put a great driver in that same car, and they become unstoppable.
Technology was never an equaliser. It just divides more, and yes, ultimately some developers will get paid a lot more because their skills will be in more demand, while other developers will be forced to seek other opportunities.
arty_prof12 hours ago
In terms of tech debt, AI obviously makes it easy to create a lot of it. But this is controllable if you analyse in depth what the AI is doing.
I feel I become more of a Product Engineer than a Software Engineer when constantly reviewing AI code to check it satisfies my needs.
And the benefits provided by AI are too good. It lets you prototype nearly anything on short timescales, which is superb.
Like any tool, in the right hands it can be a game-changer.
yubainu11 hours ago
Ultimately, I believe the most important thing is how we effectively utilize AI. We can't and shouldn't entrust everything to AI in any field, not even after AGI is perfected. Sometimes it's important to mass-produce low-quality code, and other times it's important to create beautifully crafted code.
heliumtera11 hours ago
Exactly, a large portion of software development was rejecting code from an intellectually functional human being.
AGI is not sufficient to achieve minimum-quality code, because intelligence was never sufficient.
rednafi11 hours ago
AI just lowered the cost of replication. Now you can replicate good or bad stuff but that doesn't automatically make AI the enabler of either.
sunir12 hours ago
Simplify? It’s like saying a factory made chair building… what?
It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.
yakattak12 hours ago
Anecdotally I have not seen consistency in quality at all.
sunir12 hours ago
My chairs resemble each other. Have you tried ikea?
If you are talking about code which isn’t what I said, then we aren’t there yet.
groundzeros201512 hours ago
It’s a bad analogy because the benefits of industrial machines were predictable processes done efficiently.
sunir12 hours ago
That came later than the beginning. Workhouses came before the loom. You can see this in the progression of quality of things like dinner plates over time.
Making clay pottery can be simple. But to make “fine china” with increasingly sophisticated ornamentation and strength became more complex over time. Now you can go to ikea and buy plates that would be considered expensive luxuries hundreds of years ago.
j3k312 hours ago
Yeah... nah. As others have said, your analogy does not hold up to scrutiny.
groundzeros201512 hours ago
You’re not addressing any points I made.
Sharlin12 hours ago
Compilers made programming faster, cheaper, and more consistent in quality. They are the proper analogy of machine tools and automation in physical industries. Reusable code libraries also made programming faster, cheaper, and more consistent in quality. They are the proper analogy of prefabricated, modular components in physical industries.
j3k312 hours ago
Consistent in quality.. what?
water_badger12 hours ago
So somewhere here there is a 2x2 or something based on these factors:
1. Programmers viewing programming through career and job security lens
2. Programmers who love the experience of writing code themselves
3. People who love making stuff
4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles.
It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out.
At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward.
20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)
tw-20260303-00112 hours ago
Or it simply made one step over the draft stage faster. It all depends how one uses it.
zackmorris12 hours ago
A -> (expletive) -> B
I think we're all in denial about how bad software engineering has gotten. When I look at what's required to publish a web page today vs in 1996, I'm appalled. When someone asks me how to get started, all I can do is look at them and say "I'm so sorry".
So "coding was always the hard part". All AI does is obfuscate how the sausage gets made. I don't see it fixing the underlying fallacies that turned academic computer science into for-profit software engineering.
Although I still (barely) hold onto hope that some of us may win the internet lottery someday and start fixing the fundamentals. Maybe get back to what we used to have with apps like HyperCard, FileMaker and Microsoft Access but for a modern world where we need more than rolodexes. Back to paradigms where computers work for users instead of the other way around.
Until then, at least we have AI to put lipstick on a pig.
jinko-niwashi11 hours ago
Your "don't fucking touch that file" experience is the exact pattern I kept hitting. After 400+ sessions of full-time pair programming with Claude, I stopped trying to fix it with prompt instructions and started treating it as a permissions problem.
The model drifts because nothing structurally prevents it from drifting. Telling it "don't touch X" is negotiating behavior with a probabilistic system — it works until it doesn't. What actually worked: separating the workflow into phases where certain actions literally aren't available. Design phase? Read and propose only. Implementation phase? Edit, but only files in scope.
Your security example is even more telling — the model folding under minimal pushback isn't a knowledge gap, it's a sycophancy gradient. No amount of system prompting fixes that. You need the workflow to not ask the model for a judgment call it can't be trusted to hold.
jwilliams11 hours ago
There are some interesting points here, but I think this essay is a little too choppy - e.g. the Aircraft Mechanic comparison is a long bow to draw.
The Visual Basic comparison is more salient. I've seen multiple rounds of "the end of programmers", including RAD tools, offshoring, various bubble-bursts, and now AI. Just because we've heard it before though, doesn't mean it's not true now. AI really is quite a transformative technology. But I do agree these tools have resulted in us having more software, and thus more software problems to manage.
The Alignment/Drift points are also interesting, but I think they appeal to SWEs' belief that taste/discernment was stopping this from happening in pre-AI times.
I buy into the meta-point which is that the engineering role has shifted. Opening the floodgates on code will just reveal bottlenecks elsewhere (especially as AI's ability in coding is three steps ahead and accelerating). Rebuilding that delivery pipeline is the engineering challenge.
lowbloodsugar12 hours ago
> and ensuring that the system remains understandable as it grows in complexity.
Feel like only people like this guy, with 4 decades of experience, understand the importance of this.
polynomial11 hours ago
Understandable is, as always, a proxy for predictable.
rvz12 hours ago
Is that why there are so many outages across the many companies adopting AI, including GitHub, Amazon, Cloudflare and even Anthropic, despite all their usage?
Maybe if they "prompted the agent correctly", you'd get your infrastructure above at least five nines.
If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, their agents will introduce seemingly correct code and cause outages like the ones we have already seen, like this one [0].
AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code is, and who waste time, money and tokens reinventing a worse wheel.
This is the under-acknowledged "secret" of this reconfiguration.
It's like the Bill Joy point about mediocre technology taken to the next level.
staticassertion12 hours ago
> Code Was Never the Hard Part
I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
I also think that it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.
Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, for example, and the constraints aren't even that novel - I was writing some lower level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.
This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.
None of this should be surprising! AI is trained on massive amounts of data, it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine can not state things it has not seen. It would be surprising if that were not the case.
I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't see to know how much time engineers spend on problems where the code is irrelevant. AI is an incredible accelerator for a number of things but it is hardly "doing my job".
AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. edit: Actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.
tonyedgecombe11 hours ago
>I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
For you and others here, sure, but you only need to look at the number of people who can't code FizzBuzz to realise there are many who struggle with it.
It’s easy to take your own knowledge for granted. I’ve met a lot of people who know their business inside out but couldn’t translate that knowledge into code.
staticassertion10 hours ago
I mean, of course not everyone can code. I'm not saying that programming is trivial, or anyone can just naturally write code. What I'm saying is that if you're a full time engineer then some part of your day is spent programming but the difficult work is not encompassed by "how do I write the code to do this?" - sometimes it is, but mostly you think about lots of other things surrounding the code.
zer00eyz12 hours ago
"AI" (and calling it that is a stretch) is nothing more than a nail gun.
If you gave an experienced house framer a hammer, hand saw and box of nails and a random person off the street a nail gun and powered saw who is going to produce the better house?
A confident AI and an unskilled human are just a Dunning-Kruger multiplier.
j3k312 hours ago
Nicely put.
There's this mistake Engineers make when using LLMs and loudly proclaiming its coming for their jobs / is magic... you have a lot of knowledge, experience and skill implicitly that allows for you to get the LLM to produce what you want.
Without it... you produce crappy stuff that is inevitably going to get mangled and crushed. As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.
zer00eyz11 hours ago
> As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.
And I keep seeing products and projects banning AI: "My new house fell down because of the nail gun used, therefore I'm banning nail guns going forward." I understand and sympathize with maintainers and owners and the pressure they are under, but the limitation is going to look ridiculous as we see more progress with the tools.
There are whole categories of problems that we're creating that we have no solutions for at present: it isn't a crisis, it's an opportunity.
j3k311 hours ago
I personally think it's better to be cautious and wait for the tooling to improve. It's not always necessary to be the one to take a risk when there are plenty of others willing to do so, whose outcomes can then be assessed.
dgxyz12 hours ago
Not easier but faster. It’s really hard to catch shit now.
I disagree with the premise. It made all engineering easier. Bad and good.
I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
I believe the article exaggerates to make a point. Yes, good engineering can also be assisted with LLM-based agents, but there is a delta.
Good engineering requires that you still pay attention to the result produced by the agent(s).
Bad engineering might skip over that part.
Therefore, via Amdahl's law, LLM-based agents overall provide more acceleration to bad engineering than they do to good engineering.
The connection to Amdahl's law is totally on point. If you're just using LLMs as a faster way to get _your_ ideas down, but still want to ensure you validate and understand the output, you won't get the mythical 10x improvement so many seem to claim they're getting. And if you do want that 10x speedup, you have to forego the validation and understanding.
I do agree with you, but don't underestimate the projects where you can actually apply this 10x. For example, I wanted to get some analytics out of my database. What would have been a full weekend project was now done in an hour. So for such things there is a huge speed boost.
But once software becomes bigger and more complex, the LLM starts messing up, and the expert has to come in. That basically means your months-long project cannot be done in a week.
My personal prediction: plugins and systems that support plugins will become important. Because a plugin can be written at 10x speed. The system itself, not so much.
I think there will also be a lot of work in how to modularize month long projects into plugin sized pieces.
Well, it's made bad engineering massively easier and good engineering a little easier.
So much so that many people who were doing good engineering before have opted to move to doing three times as much bad engineering instead of doing 10% more good engineering.
In corporate app development, I would see tests to check that the mocks return the expected values. Like, what are we even doing here?
Someone was asked to test untestable code so verifying mock contents was the best they could come up with.
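To make the anti-pattern concrete, here's roughly what such a test looks like with Python's `unittest.mock` (names invented). The assertion can only check the value the test itself configured, so it passes no matter what the production code does:

```python
from unittest.mock import Mock

def test_user_lookup():
    # Stub out the repository...
    repo = Mock()
    repo.get_user.return_value = {"id": 1, "name": "alice"}

    # ...then assert on the stub. No production code runs here at all;
    # this only verifies that Mock returns what we just told it to.
    assert repo.get_user(1) == {"id": 1, "name": "alice"}

test_user_lookup()
```

Green checkmark, coverage numbers go up, zero information gained.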
No. Someone was asked to meet an arbitrary code coverage threshold. I'm dealing with this malicious compliance/weaponized incompetence at $current_job
How will you deal with it? I successfully convinced $big_important_group at $day_job to not implement a policy of failing their builds when code coverage dips below their target threshold > 90%. (Insane target, but that's a different conversation.)
I convinced them that if they wanted to treat uncovered lines of code as tech debt, they needed to add epics and stories to their backlog to write tests, and that artificially setting some high target coverage threshold would produce garbage, because developers would write do-nothing tests in order to get their work done and not trip the alarms. I also argued that failing the builds on code coverage would be unfair, because tech debt created by past developers would hinder random current-day devs getting their work done.
Instead, I recommended they pick their current coverage percentage (it was < 10% at the time) and set the threshold to that simply to prevent backsliding as new code was added. Then, as their backlogged, legit tests were implemented, ratchet up the coverage threshold to the new high water mark. This meant all new code would get tests written for them.
And, instead of failing builds, I recommended email blasts to the whole team to indicate there had been some recent backsliding in the testing regime and the codebase had grown without accompanying tests. It was not a huge shame event, but a good motivator for the team to keep up the quality. SonarQube was great for long-term tracking of coverage stats.
Finally, I argued the coverage tool needed to have very liberal "ignore" rules that were agreed to by all members of the team (including managers). Anything that did not represent testable logic written by the team: generated code, configurations, tests themselves, should not count against their code coverage percentages.
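The ratchet described above fits in a few lines. A sketch, assuming a hypothetical `coverage_threshold.json` high-water-mark file and a coverage percentage obtained from whatever tool you use (SonarQube, coverage.py, etc.):

```python
import json
from pathlib import Path

THRESHOLD_FILE = Path("coverage_threshold.json")  # hypothetical location

def ratchet(current_pct: float) -> bool:
    """Return True if the build passes the coverage ratchet.

    The threshold starts at whatever coverage the project has today and
    only ever moves up, so new code must come with tests, but old debt
    doesn't block anyone.
    """
    threshold = 0.0
    if THRESHOLD_FILE.exists():
        threshold = json.loads(THRESHOLD_FILE.read_text())["threshold"]

    if current_pct < threshold:
        # Backsliding: per the scheme above, this triggers an email
        # blast rather than a hard build failure.
        return False

    # New high-water mark: ratchet the threshold up for future runs.
    THRESHOLD_FILE.write_text(json.dumps({"threshold": current_pct}))
    return True
```

Backsliding surfaces as a failed check, while any improvement permanently raises the bar, which is exactly the "prevent backsliding, don't punish history" behaviour.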
You could ask the same thing about tests themselves. And I'm not talking about tests that don't exercise the code in a meaningful manner like your assertions on mocks(?!)
I'm saying you could make the same argument about useful tests themselves. What is testing that the tests are correct?
Uncle Bob would say the production code is testing the tests but only in the limited, one-time, acceptance case where the programmer who watches the test fail, implements code, and then watches it pass (in the ideal test-driven development scenario.)
But what we do all boils down to acceptance. A human user or stakeholder continues to accept the code as correct equals a job well done.
Of course, this is itself a flawed check because humans are flawed and miss things and they don't know what they want anyhow. The Agile Manifesto and Extreme Programming was all about organizing to make course corrections as cheap as possible to accommodate fickle humanity.
> Like, what are we even doing here?
What ARE we doing? A slapdash job on the whole. And, AI is just making slapdash more acceptable and accepted because it is so clever and the boards of directors are busy running this next latest craze into the dirt. "Baffle 'em with bullsh*t" works in every sector of life and lets people get away with all manner of sins.
I think what we SHOULD be doing is plying our craft. We should be using AI as a thinking tool, and not treat it like a replacement for ourselves and our thinking.
I'm trying to wrap my head around here.
So there are tests that leverage mocks. Those mocks help validate software is performing as desired by enabling tests to see the software behaves as desired in varying contexts.
If the software fails, it is because the mocks exposed that under certain inputs, undesired behavior occurs, an assert fails, and a red line flags the test output.
Validating that the mocks return the desired output.... Maybe there is a desire that the mocks return a stream of random numbers and the mock validation tests asserts said stream adheres to a particular distribution?
Maybe someone in the past pushed a bad mock into prod, that mock let a test pass that would have failed given a better mock, and the post mortem, once the bad software now in prod was traced back to the bad mock, derived a requirement that all mocks must be validated?
Yeah, seems plausible, or it was just "belt and suspenders." Sure made a lot of pretty green checkmarks.
we use this https://github.com/auchenberg/volkswagen
I think it's easy to forget that the LLM is not a magic oracle. It doesn't give great answers. What you do with the LLM's output determines whether the engineering you produce is good or bad. There are places where you can plonk in the LLM's output as-is and places you can't, or times when you have to keep nudging for a better output, and times when nothing the LLM produces is worth keeping.
It makes bad engineering easier because it's easy to fall into the trap of "if the LLM said so, it must be right".
Even if you agree with the OP, there's a large portion of applications where it simply doesn't matter if the quality of the software is good or terrible as long as it sufficiently works.
Yeah, I've seen this too. I like to call them "single-serving apps". I made a flashcard app to study for interviews and one-shot it with Claude Code. I've had it add some features here and there but haven't really looked at the code.
It's just a small CLI app in 3 TypeScript files.
> Ive known people at every company who add copious null checks rather than understanding things and fixing them properly.
Y'know "defensive programming" is a thing, yeah? Sorry mate, but that's a statement I'd expect from juniors, who are also often the ones claiming their own technical superiority over others
Adding null checks where they aren't needed means adding branching complexity. It means handling cases that may never need to be handled. Doing all that makes it harder to understand "could this variable ever be null?" If you can't answer that question, it is now harder to write code in the future, often leading to even more unnecessary null checks.
I've seen legacy code bases during code review where someone will ask "should we have a null check there?" and often no-one knows the answer. The solution is to use nullability annotations IMO.
It's easy to just say "oh this is just something a junior would say", but come on, have an actual discussion about it rather than implying anyone who has that opinion is inexperienced.
No, the branching complexity exists anyway. You've just made it clearly visible by adding a null check, or accepted that the computation may fail if the assumption is violated.
You never know what changes are being done in the future, while today the variable may not be nullable in the scenario you're up-to-date on, that doesn't necessarily mean it'll stay like that in the future.
Ultimately, there is a cost associated with null checks everywhere and another with omitting them. The person I responded to just insinuated that people who introduce copious amounts of null checks are inept and lazy.
In response to that I pointed out that that's literally one of the core tenets of defensive programming, and people who make such sweeping statements about other people's capabilities in this way are very often juniors. I stand by this opinion. You can disagree on specific places where a null check may have been placed unnecessarily, but that's always a discussion about a specific field and cannot be generalized like he did there.
Except vibe coding is not "engineering," but more akin to project management. Engineering presupposes a deep and thorough understanding of your code. If you ship code that you’ve never even looked at, you are no longer an engineer.
100%.
There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.
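A toy illustration of that point (a deliberately contrived example): a buggy "sort numbers" helper can pass every spot check its author happened to write, because JavaScript's default `Array.prototype.sort` compares elements as strings.

```typescript
// Deliberately buggy: sort() with no comparator converts numbers
// to strings and sorts them lexicographically.
function sortNumbers(xs: number[]): number[] {
  return [...xs].sort(); // bug: should be .sort((a, b) => a - b)
}

// The unit tests the author thought to write all pass:
//   sortNumbers([3, 1, 2])  gives [1, 2, 3]
// ...while other inputs silently fail:
//   sortNumbers([10, 9, 1]) gives [1, 10, 9], not [1, 9, 10]
```

A hundred tests of single-digit inputs would never catch this; knowing *which* tests are sufficient is exactly the skill in question.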
Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.
Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.
There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
I don't have any idea of what a unit test is, but with AI I can make programs that help me immensely in my real world job.
Snobby programmers would never even return an email offering money for their services.
It's unclear what point you're even trying to make, other than that AI has been helpful to you. But surely you understand that if you don't know what a unit test is you're probably not in a position to comment on the value of unit testing.
> Snobby programmers would never even return an email offering money for their services.
Why would they? I don't respond to the vast majority of emails, and I'm already employed.
Helpful to me and millions of others. Soon to be billions even.
You are employed because somewhere in the pipeline there are paying customers. They don't care about unit tests, they care about having their problems solved. Beware of AI.
Right... I mean, no engineer is going to tell you that customers care about unit tests, so I think you're arguing against a straw man here. What engineers will tell you is that bugs cost money, support costs money, etc, and that unit tests are one of the ways we cheaply reduce those costs in order to solve problems, which is what we're in the business of doing.
We are all very aware of the fact that customers pay us... it seems totally strange to be that you think we wouldn't be aware of this fact. I suspect this is where the disconnect comes in, much to the point of the article - you seem to think that engineers just write tests and code, but the article points out how silly that is, we spend most of our time thinking about our customers, features, user experience, and how to match that to the technology choices that will allow us to build, maintain, and support systems that meet customer expectations.
I think people outside of engineering might be very confused about this, strangely, but engineers do a ton of product work, support work, etc, and our job is to match that to the right technology choices.
Yeeesh. Looking at other posts from that user, they seem to have a serious grudge against software devs, presumably for not responding to their emails. "You should starve" - words taken from another post.
Look, no one wanted to write code for you, idk what to tell you. Now you can have AI do that for you. Congrats, best of luck. Whatever weird personal issue you have, I doubt anyone declined to work for you out of whatever this perceived snobbery is; it's just that... we all have jobs?
I don't have a grudge, but there needs to be some balance. Software devs are incredibly well paid compared to other professionals. It is their responsibility to use their talents to benefit themselves, and if they are out-competed then they should work with something else. They don't have a right to a fantastic career.
All other workers have had to go through this when their fields became more automated and efficient. A cargo ship used to have hundreds of crew, now it's a dozen and the amount of cargo on a ship is ten times more.
So I will absolutely not cry for a software dev who has to make changes in the face of AI competition. If they're too precious to adapt or take a different job, then starve.
> Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck.
Me and hundreds of thousands of other organizations who have software needs that were under served by the market. Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on. If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source. Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
> If they're too precious to adapt or take a different job, then starve.
Yeah I mean I think everyone is with you except for the "then starve", this is just weirdly combative and lacking in empathy, I find it totally strange.
> Me and hundreds of thousands of other organizations who have software needs that were under served by the market.
And... you blame software developers for that? You blame software devs for a lack of capacity in the field? So weird.
> Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.
Okay, I mean, this has always been an option. I guess it will be more of an option now. There have been consulting agencies or "WYSIWYG" editors like Wix or other "low code/ no code" platforms for ages. No one is going to be upset that you're using them. This hostility is totally one sided lol
> I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on.
We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
> If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source.
I find this funny because hobbyist/ open source projects are by far the ones that are glamorized by the field, not enterprise software.
> Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.
I have no idea where you get this impression from. Most software devs I've worked with are motivated heavily by solving real world problems. I think you have very, very little insight into what software development actually looks like or what software engineers are motivated by. Frankly, this comes off as very much "I was snubbed and now I'm happy that the people who I perceive as having snubbed me will be replaced by AI", which I think is quite lame. You definitely seem to have a resentful tone to your posts that I find weird.
Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force. Asking yourself "who is going to purchase the products?" when software development is being automated is quite silly. Why didn't they ask that question when thousands of other professions suffered the same?
> And... you blame software developers for that?
I don't blame them. They had more lucrative ventures to tend to. Now that under served market segments can be served with the help of AI, then they shouldn't complain.
You mention making web sites, but this is probably the only field in computing where the market has a lot of offerings to customers from all segments. If I need a website I don't have to use Wix, there is an endless supply of freelancers or small, medium, or big studios that offer their services. The same cannot be said of other bespoke software needs.
BTW, you are hearing a lot more hostility in my comments than is actually there.
> We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.
Yes it is snobbery. Other skilled professionals generally do not have that option, they have to do the boring stuff as well. And if you only like to work on interesting projects, then why are people complaining that AI is taking their jobs?
Regarding hobbyist / open source, I mean that when software devs aren't working on big enterprise style projects as a job, they tend to work on enterprise style projects as open source, or just play with hobbyist projects. Servicing smaller customers with bespoke software seems to be considered a little bit beneath a programmer's dignity.
And it's not my personal experience talking. Consider how many studios are offering bespoke software for small businesses, compared to how many studios are offering websites for small businesses. There's a huge gap, that is probably going to be filled in some way pretty soon.
> Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force.
This is silly equivocation. I'm telling you that your statement lacks empathy, and you're making vague, unclear gestures to an entire field.
Anyway, reading your post it's clear that you have a rather pathetic grudge because software devs weren't interested in working with you, and now you get to grin gleefully as you see AI take away jobs. You obviously have zero insight into software development as a practice, nor how software devs think - this is glaringly obvious from your claim that software devs who give software away for free are somehow snobs because they wouldn't work on whatever project you clearly hold a grudge over. Further, your comments from start to finish demonstrate a complete lack of understanding of what the job actually entails.
> BTW, you are hearing a lot more hostility in my comments than is actually there.
Maybe so! I can't tell you what you actually think, but it comes off as really pathetic, so maybe reread your post and consider why I'm hearing it.
Best of luck in your ventures.
What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
With AI code we might get software that is an ugly mess underneath, but at least we have it. While human programmers are unwilling to provide this software for even a high price.
I could argue that people are better off having nothing to eat rather than having low-quality food. But in reality something is better than nothing.
There is a gigantic market and a gigantic need of software in the field between hobbyist and enterprise. And AI code will serve that field. Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Otherwise more and more people will do like me and have AI make their own bespoke solutions.
> What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.
You think that good engineering is unrelated to fulfilling needs at a price they pay? I think you're confused. Software engineers are tasked with exactly this problem - determining how to deliver utility at a price point. That's... the whole job. We consider how much it costs to run a database, how to target hardware constraints, how to build software that will scale accordingly as new features are added or new users onboard, etc. That's the job...
That's sort of the whole point of the article. The job isn't "write code", which is what AI does. The job is "understand the customer, understand technology, figure out how to match customer expectations, business constraints, and technologies together in a way that can be maintained at-cost".
> While human programmers are unwilling to provide this software for even a high price.
Sorry, but this is just you whining that people didn't want to work for you. Software engineers obviously are willing to provide software in exchange for money, hence... idk, everyone's jobs.
> And AI code will serve that field.
That may be true, just as no-code solutions have done in the past.
> Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.
Yes, I agree.
> Otherwise more and more people will do like me and have AI make their own bespoke solutions.
I'm a bit skeptical of this broadly in the long term but it's certainly the case that some portion of the market will be served by this approach.
The end user of a bridge doesn’t care about most things the engineer who designed it does. They care that the bridge spans the gap and is safe to use. The company building the bridge and the group financing its construction care about a few more things like how much it will cost to provide those things to the end user, how long it will last. The engineer cares about a few more things: will the tolerances in the materials used in the struts account for the shearing forces of the most extreme weather in the region?
So it is with software.
You might not need a blueprint if you’re building a shed in your back yard. This is the kind of software that an end user might write or script themselves. If it’s a bit off, nobody is going to get hurt.
In many cities in North America you can’t construct a dwelling with plumbing connections to a sewer and a permanent foundation without an engineer. And you need an engineer and a blueprint to get the license to go ahead with your construction.
Because if you get it wrong you can make people in the dwelling sick and damage the surrounding environment and infrastructure.
Software-wise this is where you’re handling other people’s sensitive data. You probably have more than one component that needs to interact with others. If you get it wrong people could lose money, assets could get damaged, etc.
This is where I think the software industry needs to figure out liability and maybe professionalize a bit. Right now the liability is with the company and the deterrents are basically no worse than a speeding ticket in most cases. It’s more profitable to keep speeding and pay off the ticket than to prevent harm from throwing out sloppy code and seeing what sticks.
Then if you are building a skyscraper… well yeah, that’s the large scale stuff you build at big tech.
There are different degrees of software with different requirements. While not engineers by professional accreditation, in practice I would say most software developers are doing engineering… or trying.
What I agree with in the article is that AI tools make bad engineering easier. That is, for people building houses and skyscrapers who should be thinking about blueprints: they are working under the assumption that the AI is “smart” and will build skyscrapers for them. They’re not thinking about the things they ought to be, least of all the things that will pass on to the customers: the cost, and a product that isn’t fit for use.
A bridge that falls down if you drive too fast over it isn’t a useful bridge even though it looks like a bridge.
> Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively.
As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
But not if the only reward for this would be to be laid off.
So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
The goal has nothing to do with you being employed. Your job security is a consequence of the ultimate goal to build AGI. And software development salaries and employment will be affected before we get there. In my opinion, we're already past the SWE peak as far as yearly salary goes. Yes, there are super devs working on AI making a lot of dough, but I consider that a particular specialty. On average, the salary of a new grad SWE in the US is past its peak if you consider how many new grads can’t get a job.
> if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
I don't really believe this is possible. Or, it's the sort of thing that gets solved at a "product" level. Reality is complicated. People are complicated. The idea that software can exist without complexity feels sort of absurd to me.
That said, to your larger point, I think the goal is basically to eliminate the middle class.
You can work with something else if there's no longer any demand for your current skills. If you refuse you should starve.
> So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
Capitalism is reliant on the underclass (the homeless, those paid below minimum wage) to add pressure to the broader class of workers in a way that makes them take jobs that they ordinarily wouldn't (because they may be e.g. physically/emotionally unsafe, unethical, demeaning), for less money than they deserve and for more hours than they should. This is done in order to ensure that the price of work for companies is low, and that they can always draw upon a needy, desperate workforce if required. You either comply with company requirements, or you get fired and hope you have enough runway not to starve. This was written about over a hundred years ago and it's especially true today in its modern form. Programmers as a field have just been materially insulated from the modern realities of "your job is timing your bathroom breaks, tracking how many hours you spend looking at the internet, your boss verbally abuses you for being slow, and you aren't making enough money to eat properly".
This is also why many places do de-facto 'cleansings' of homeless people by exterminating their shelter or removing their ability to survive off donations, and why the support that is given for people without the means to survive is not only tedious but almost impossible to get. The majority of workers are supposed to look at that and go "well fuck, glad that's not me!" with a little part of their brain going "if i lost my job and things went badly, that could become me."
This is also why immigration enforcement is a thing — so many modern jobs that nobody else in the western world wants to do are taken by immigrants. The employer won't look too closely at the visa, and in return the person gets work. With the benefit being towards the employer — if the person refuses to do something dangerous to themselves or others, or refuses to produce enough output to sustain the exponential growth at great personal cost, well, then the company can just cut the immigrant loose with no recourse, or outright call the authorities on them so they get deported. Significantly less risky to get people to work in intolerable conditions for illegal wages if there is no hope of them suing you for this.
Back in the 1900s there were international conventions to remove passports. Now? Well, they're a convenient underclass for political manoeuvring. Why would you want people to have freedom of movement if your own citizens could just leave when things get bad, and when the benefits are a free workforce that you don't have to obey workers rights laws about?
"Coding was never the hard part. Typing syntax into a machine has always been the least interesting part of building a system."
And I think these people are benefitting from it the most: people with expertise, who know their way around, who knew what to build and how, but did not want to do the grunt work.
Slight adjustment, but I'd say "maintaining code" is the same as before; what matters more is the people and their experience and knowledge of how to manage it. But agreed that the literal typing was never the difficult part: knowing what code should and shouldn't be written was a hard part, and still remains a hard part.
Right now, "what code shouldn't be written" seems to have become an even more important part, as it's so easy to spit out huge amounts of code. But that's the lazy and easy way, not the one that lets you slowly add features across a decade rather than getting stuck with a ball of spaghetti after a weekend of "agent go brrr".
This is where years of experience working with freshers and junior devs helps. AI is smart enough to do exactly what you tell it, if you clearly tell it what to do and how.
Unless you understand every inch of the system and foresee what issues can be created by what kind of change, things will break when using AI.
I think that captures a lot of the LLM debate.
There are people who just want an object produced that allows for some outcome to be achieved closer to the present.
And there are other people who want to ensure that object that is produced, will be maintainable, not break other parts of the system etc.
Neither party is wrong in what they want. I think there should naturally be a split of roles - the former can prototype stuff so other individuals in the organisation can critique whether it is a thing of value / worth investing in for production.
My team and I have wasted so many hours (days) working on product features which were "definitely going viral", only to be forgotten after a few weeks.
I believe if we had something like this we could go to market early and understand user behaviour, then build a more scalable and robust system once we were sure it was even worth it.
Yeah, that's a good example.
The reality is humans are really bad at knowing what is worth investing into.. until the object is there for all to see and critique.
Every idea sounds great until you spend resources getting into the subtleties and nuances.
Hm. People who know what they are doing prefer to do it themselves. This is not a new thing, not because of LLMs; it is how it has always been. People who know more have stronger opinions. Given the option of accepting new code and new functionality, people who know what they are doing will often reject code that functions fine but fails their expectations.
I think the ones benefiting the most are people who said that syntax was the least interesting part but could not program for shit.
Typing into a machine is not the least interesting part. It is the only interesting part. Everything else is a fairy tale
Juniors that are relying too heavily on AI now will pay the price down the line, when they don't even know the fundamentals, because they just copy and pasted it from a prompt.
It only means job security for people with actual experience.
When I see this: "One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was." I don't think I can take this seriously.
Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And we all do not have infinite time on our hands.
So AI not only enables things you just could not afford to do in the past, it also lets you spend more time on 'engineering', or even try multiple approaches, which would have been impossible before.
It's hard to reconcile "I don't think I can take this seriously" followed by an immediate admission that you agree but that there's some nuance.
I think the author's post is far more nuanced than this one sentence that you apparently agree with fundamentally.
Agree. Writing code has always been the most time-consuming part that distracts me from actual design. AI just emphasizes the fact that anyone can do the keyboard mashing while reading code is the actual skill that matters.
Give a woodcutter a chainsaw instead of an axe and he'll fell ten times more trees. He'll also likely cause more than ten times the collateral damage.
It also made good engineering easier.
AI is an amplifier of existing behavior.
I think it's easier for good engineers to be good, perhaps. For example, I think property testing is "good" engineering. But I suspect that if you took a survey of developers the vast majority would not actually know what property testing is. So is AI really of any use then for those devs?
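For anyone unfamiliar, here's a minimal hand-rolled sketch of the idea (no framework assumed, function names made up): instead of checking a few hand-picked cases, you assert an invariant over many randomly generated inputs.

```typescript
// Property under test: reversing an array twice yields the original array.
function reverse<T>(xs: T[]): T[] {
  return [...xs].reverse();
}

// Bare-bones property check: generate random integer arrays and
// verify the round-trip invariant holds for every one of them.
function reverseRoundTripHolds(trials: number): boolean {
  for (let i = 0; i < trials; i++) {
    const len = Math.floor(Math.random() * 20);
    const xs = Array.from({ length: len }, () => Math.floor(Math.random() * 100));
    if (JSON.stringify(reverse(reverse(xs))) !== JSON.stringify(xs)) return false;
  }
  return true;
}
```

Libraries like fast-check (JS/TS) or Hypothesis (Python) handle generation and shrinking for you, but the core idea fits in a dozen lines.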
The Internet enabled instant access to all human knowledge as well as instant chit-chat across the globe. Guess what humanity chose to drown itself in?
So combine both facts here in context, with human nature, and you'll see where this will go.
I think it depends on your process. Problems that require creative solutions are often solved through the act of doing. For me, it's the act of writing code itself that ignites the pathways and neural connections that I've built up in my brain over the years. Using AI circumvents that process, and those brain circuits go unused.
Yep, and recent reports from the likes of DORA and DX validate this with data: https://cloud.google.com/blog/products/ai-machine-learning/a...
“ AI is an amplifier of existing behavior”
Apropos. I’m stealing that line.
I love that I don’t have to look thru SO looking for a problem that’s kind of like the one I’m having. I have a problem solved based on my exact code taking my entire code base into account. And if I want a biting sarcastic review of my many faults as a software developer, I can ask for that too.
I’m an AI skeptic and in no sense is it taking my peers job. But it does save me time. I can do research much better than Google, explore a code base, spit out helper functions, and review for obvious mistakes.
So what do you mean by being a sceptic? I thought you meant you weren't convinced of its usefulness, and therefore don't use it. But you do seem to use it, so what is it you're sceptical about?
Yep. And the more time I spend with the agents the more I’m convinced that your way is the endgame.
AI Didn't Simplify Blogging: It Just Made Bad Blogging Easier
I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.
Robots making fun of us complaining about them.
I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — 10x ability to try new ideas, if not more. When I find something that has promise, then I go into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in this process is a lot more limited, though it can still be useful.
That's because an LLM can access breadth at any given moment that you cannot. That's the advantage it has.
E.g. quite often a sound (e.g. music) brings back memories of a time when it was being listened to etc.
Our brains need something to 'prompt' them (ironic, I know) for stuff to come to the front. But the human is the final judge (or should be) of whether the result is wrong, low quality, or high quality. A taste element is necessary here too.
How many model releases are we away from people like this throwing in the towel? 2? 3?
Naw, I just yesterday caught something in test that would've made it to prod without AI. It happens all the time.
You can't satisfy every single paranoia, eventually you have to deem a risk acceptable and ship it. Which experiments you do run depends on what can be done in what limited time you have. Now that I can bootstrap a for-this-feature test harness in a day instead of a week, I'm catching much subtler bugs.
It's still on you to be a good engineer, and if you're careful, AI really helps with that.
Change 'good' for 'disciplined'.
Problem is.. discipline is hard for humans. Especially when exposed to a thing that at face-value seems like it is really good and correct.
We can cheat on discipline if we design our workflows with more careful thought about incentives.
I wound up in a role where I throw away 100% of the code that I write within a few months. My job is about discovering cases where people are operating under false assumptions (typically about how some code will or won't be surprising in context with some dataset), and inform them of the discrepancy. "proofs" would be too strong of a word, but I generate a lot of code that then generates evidence which I then use in an argument.
I do try to be disciplined about the code I rely on, but since I have no incentive to sneak through volumes of unreliable code before moving on to the next feature, it's easy to do. When I'm not diligent, the pain comes quickly, and I once again learn to be diligent. At the end of the day I end up looking at a dashboard I had an agent throw together and I decide if the argument I intend to make based on that dashboard is convincing.
Also, agent sycophancy isn't really a problem because the agents are only asked to collect and represent the data. They don't know what I'm hoping to see, so it's very uncommon that they end up generating something deceptive. Their incentives are also aligned.
I think we can structure much of our work this way (I just lucked into it) where there's no conflict of interest and therefore the need to be disciplined is not in opposition to anything else.
Put a bad driver in an F1 car and you won't make them a racer. You will just help them crash faster. Put a great driver in that same car, and they become unstoppable.
Technology was never an equaliser. It just divides more, and yes, ultimately some developers will get paid a lot more because their skills will be in more demand, while other developers will be forced to seek other opportunities.
In terms of the Tech Debt it is obviously allow to make it a lot. But this is controllable if analysing in depth what AI is doing.
I feel I become more of a Product Engineer than a Software Engineer when constantly reviewing AI code to check that it satisfies my needs.
And the benefits provided by AI are too good to ignore. It lets you prototype nearly anything in a short time, which is superb. Like any tool, in the right hands it can be a game-changer.
Ultimately, I believe the most important thing is how we effectively utilize AI. We can't and shouldn't entrust everything to AI in any field, not even after AGI is perfected. Sometimes it's important to mass-produce low-quality code, and other times it's important to create beautifully crafted code.
Exactly, a large portion of software development was rejecting code from intellectually functional human beings. AGI is not sufficient to achieve minimum-quality code, because intelligence was never sufficient.
AI just lowered the cost of replication. Now you can replicate good or bad stuff but that doesn't automatically make AI the enabler of either.
Simplify? It’s like saying a factory made chair building… what?
It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.
Anecdotally I have not seen consistency in quality at all.
My chairs resemble each other. Have you tried ikea?
If you are talking about code which isn’t what I said, then we aren’t there yet.
It’s a bad analogy because the benefits of industrial machines were predictable processes done efficiently.
That came later, not at the beginning. Workhouses came before the loom. You can see this in the progression of quality of things like dinner plates over time.
Making clay pottery can be simple. But to make “fine china” with increasingly sophisticated ornamentation and strength became more complex over time. Now you can go to ikea and buy plates that would be considered expensive luxuries hundreds of years ago.
Yeah... nah. As others have said, your analogy does not hold up to scrutiny.
You’re not addressing any points I made.
Compilers made programming faster, cheaper, and more consistent in quality. They are the proper analogy of machine tools and automation in physical industries. Reusable code libraries also made programming faster, cheaper, and more consistent in quality. They are the proper analogy of prefabricated, modular components in physical industries.
Consistent in quality… what?
So somewhere here there is a 2x2 or something based on these factors:
1. Programmers viewing programming through a career and job security lens
2. Programmers who love the experience of writing code themselves
3. People who love making stuff
4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles
It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out.
At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward.
20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)
Or it simply made one step over the draft stage faster. It all depends how one uses it.
A -> (expletive) -> B
I think we're all in denial about how bad software engineering has gotten. When I look at what's required to publish a web page today vs in 1996, I'm appalled. When someone asks me how to get started, all I can do is look at them and say "I'm so sorry":
https://xkcd.com/1168/
So "coding was always the hard part". All AI does is obfuscate how the sausage gets made. I don't see it fixing the underlying fallacies that turned academic computer science into for-profit software engineering.
Although I still (barely) hold onto hope that some of us may win the internet lottery someday and start fixing the fundamentals. Maybe get back to what we used to have with apps like HyperCard, FileMaker and Microsoft Access but for a modern world where we need more than rolodexes. Back to paradigms where computers work for users instead of the other way around.
Until then, at least we have AI to put lipstick on a pig.
Your "don't fucking touch that file" experience is the exact pattern I kept hitting. After 400+ sessions of full-time pair programming with Claude, I stopped trying to fix it with prompt instructions and started treating it as a permissions problem.
The model drifts because nothing structurally prevents it from drifting. Telling it "don't touch X" is negotiating behavior with a probabilistic system — it works until it doesn't. What actually worked: separating the workflow into phases where certain actions literally aren't available. Design phase? Read and propose only. Implementation phase? Edit, but only files in scope.
Your security example is even more telling — the model folding under minimal pushback isn't a knowledge gap, it's a sycophancy gradient. No amount of system prompting fixes that. You need the workflow to not ask the model for a judgment call it can't be trusted to hold.
There are some interesting points here, but I think this essay is a little too choppy - e.g. the Aircraft Mechanic comparison is a long bow to draw.
The Visual Basic comparison is more salient. I've seen multiple rounds of "the end of programmers", including RAD tools, offshoring, various bubble-bursts, and now AI. Just because we've heard it before though, doesn't mean it's not true now. AI really is quite a transformative technology. But I do agree these tools have resulted in us having more software, and thus more software problems to manage.
The Alignment/Drift points are also interesting, but I think they appeal to SWEs' belief that taste/discernment was what stopped this from happening in pre-AI times.
I buy into the meta-point which is that the engineering role has shifted. Opening the floodgates on code will just reveal bottlenecks elsewhere (especially as AI's ability in coding is three steps ahead and accelerating). Rebuilding that delivery pipeline is the engineering challenge.
> and ensuring that the system remains understandable as it grows in complexity.
Feel like only people like this guy, with 4 decades of experience, understand the importance of this.
Understandable is, as always, a proxy for predictable.
Is that why there are so many outages across the many companies adopting AI, including GitHub, Amazon, Cloudflare, and even Anthropic itself?
Maybe if they had "prompted the agent correctly", their infrastructure would be at five nines or better.
If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, but their agents will introduce seemingly correct code and cause outages like the ones we have already seen, such as this one [0].
AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code looks like, and who waste time, money, and tokens reinventing a worse wheel.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
A lot of times bad engineering is all you need.
This is the under-acknowledged "secret" of this reconfiguration.
It's like the Bill Joy point about mediocre technology taken to the next level.
> Code Was Never the Hard Part
I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
I also think that it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.
Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, for example, and the constraints aren't even that novel - I was writing some lower level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.
This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.
None of this should be surprising! AI is trained on massive amounts of data, it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine can not state things it has not seen. It would be surprising if that were not the case.
I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't seem to know how much time engineers spend on problems where the code is irrelevant. AI is an incredible accelerator for a number of things, but it is hardly "doing my job".
AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. edit: Actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.
>I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
For you and others here, sure, but you only need to look at the number of people who can't code FizzBuzz to realise there are many who struggle with it.
It’s easy to take your own knowledge for granted. I’ve met a lot of people who know their business inside out but couldn’t translate that knowledge into code.
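For reference, the FizzBuzz in question really is a tiny screening exercise; a minimal Python version:

```python
def fizzbuzz(n: int) -> str:
    # Multiples of both 3 and 5 -> "FizzBuzz", of 3 -> "Fizz",
    # of 5 -> "Buzz", everything else -> the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
```

That a dozen lines like this filters out real candidates is the point: the bar for "can code at all" is lower than practitioners assume.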
I mean, of course not everyone can code. I'm not saying that programming is trivial, or anyone can just naturally write code. What I'm saying is that if you're a full time engineer then some part of your day is spent programming but the difficult work is not encompassed by "how do I write the code to do this?" - sometimes it is, but mostly you think about lots of other things surrounding the code.
"AI" (and calling it that is a stretch) is nothing more than a nail gun.
If you gave an experienced house framer a hammer, hand saw and box of nails and a random person off the street a nail gun and powered saw who is going to produce the better house?
A confident AI and an unskilled human are just a Dunning-Kruger multiplier.
Nicely put.
There's this mistake engineers make when using LLMs and loudly proclaiming it's coming for their jobs / is magic: you have a lot of implicit knowledge, experience, and skill that allows you to get the LLM to produce what you want.
Without it... you produce crappy stuff that is inevitably going to get mangled and crushed. As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.
> As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.
And I keep seeing products and projects banning AI: "My new house fell down because of the nail gun used, therefore I'm banning nail guns going forward." I understand and sympathize with maintainers and owners and the pressure they are under, but the limitation is going to look ridiculous as we see more progress with the tools.
There are whole categories of problems that we're creating that we have no solutions for at present: it isn't a crisis, it's an opportunity.
I personally think it's better to be cautious and wait for the tooling to improve. It's not always necessary to be the one taking the risk when there are plenty of others willing to do so, whose outcomes can then be assessed.
Not easier but faster. It’s really hard to catch shit now.