Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
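(For the record, a PoC that actually exercised the cookie parser would have to feed its input through libcurl itself, not a hand-rolled buffer that never touches the library. A minimal sketch of the idea, via pycurl for brevity rather than the C reproducer the reply promises, and with a made-up oversized value rather than any real overflow:)

```python
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.COOKIEFILE, "")  # empty string just enables the cookie engine
# This line is handed straight to libcurl's Set-Cookie parsing in lib/cookie.c:
c.setopt(pycurl.COOKIELIST,
         "Set-Cookie: session=" + "A" * 10000 + "; domain=.example.com")
print(c.getinfo(pycurl.INFO_COOKIELIST))  # dump whatever the parser accepted
c.close()
```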
Sharlin3 days ago
Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.
f4stjack3 days ago
To be honest, I do not understand this new norm. A few months ago I applied to an internal position. I was an NGO IT worker, deployed twice to emergency response operations; I knew the policies and operations and had good relations with users and coworkers.
The interview went well. I was honest. When asked what my weakness was regarding this position, I said that I am a good analyst, but that writing new exploits is beyond my expertise. The role doesn't have this as a requirement, so I thought it was a good answer.
I was not selected. Instead they selected a guy and then booted him off after 2 months due to his excessive (and incorrect, like in the link) use of LLMs, and did not open the position again.
So in addition to wasting the hirers' time, those nice people block other people's progress as well. But as long as hirers expect wunderkinds crawling out of the woodwork, applicants will try to fake it and win in the short term.
This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment, and these fakers are muddying the waters. It feels like no one cares about your attitude, like how genuinely you want to work. I am an old techie, and the world I came from valued that over technical aptitude, for you can teach and learn technical information, but character is another thing. That gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.
plorg3 days ago
This is definitely not unique to software engineering. Just out of grad school, 15 years ago, I applied for an open position with a local electrical engineering company. I was passed over, and the person I'd gotten a recommendation from later let me know, out of band, that they had hired a candidate fresh out of undergrad with an (unrelated) internship instead of research experience (I would have been second of the 3 candidates), but that they had fired him within 6 months. They opened the position again, and after interviewing me again they told me they had decided not to hire anyone. Again out of band, my contact told me he and his supervisor thought I should go work at one of their subcontractors to get experience, but they didn't send any recommendation and the subcontractors didn't respond to inquiry. I wasn't desperate enough to keep playing that game, and it really soured my view of a local company with an external reputation for engineering excellence, meritorious hiring, mentorship, and career building.
chanon3 days ago
I posted a job for freelance dev work and all the replies were obviously AI generated. Some even included websites clearly made by other people as their 'prior work'. So I pulled the posting and probably won't post again.
Who knew. AI is costing jobs, not because it can do the jobs, but because it has made hiring actually competent humans harder.
frogperson3 days ago
Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".
Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.
nobodyandproud2 days ago
This is not a new norm (LLM aside).
Old man time, providing unsolicited and unwelcome input…
My own way of viewing interviews: Treat interviews as one would view dating leading to marriage. Interviewing is a different skillset and experience than being on the job.
The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.
- Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”
- Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”
My question to you: which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?
Note: I hope this perspective shift helps you.
jackdawed3 days ago
I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.
Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.
These are CEOs who have raised $1M+ pre-seed.
delusional3 days ago
Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.
These Silicon Valley CEOs are hacks.
alexpotato3 days ago
I watched someone do this during an interview.
They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (Myself and another co-worker.)
I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.
Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.
figers3 days ago
I had someone do this in my C# / .NET Core / SQL coding-test interview as well. I didn't end it right there, as I wanted to see if they could solve the coding test in the time frame allowed.
They did not. I now state that you can search anything online, but you can't copy and paste from an LLM, so as not to waste my time.
userbinator2 days ago
You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"
pbronez3 days ago
How far did they get? Did they solve the problem?
goalieca3 days ago
Just try challenging and mentoring people away from using it, because it's incapable of the job and wastes all our time, when the mandate from on high is to use more of it.
xpe3 days ago
Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.
esalman3 days ago
My sister had a fight over this and resigned from her tenure-track position at a liberal arts college in Arkansas.
pravj3 days ago
This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.
Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I've noted:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs (and it's not just to fix the grammar).
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
rvnx3 days ago
You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms.
Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?
AJ0073 days ago
This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.
The exponential growth of compute and data continues...
As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
theoreticalmal3 days ago
I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal
stn81883 days ago
I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written Teams message, only to see this on the front page! I can't escape haha
AlexandrB3 days ago
> - The average length of documents and emails has increased.
Brevity is the soul of wit. Unfortunately, many people think more is better.
trod12343 days ago
There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related costs of review (a resource attack).
I've seen colleagues who were quite good at programming when we first met become much worse over time, with the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful; as such, the undisclosed use of AI toward any third party without their consent is gross negligence, if not directly malevolent.
> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).
I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.
BHSPitMonkey3 days ago
For all we know, there's no human in the loop here. It could just be an agent configured with tools to spin up and operate HackerOne accounts in a continuous loop.
tptacek3 days ago
This has been a norm on HackerOne for over a decade.
mort963 days ago
No, it hasn't. Even when people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either honestly, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the HackerOne discussion section is new.
silverliver2 days ago
Ha! We've become the robots!
balamatom3 days ago
We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)
If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.
If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.
sebastiennight3 days ago
> If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?
I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point, the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly rather than playing a game of telephone.
dragontamer3 days ago
This might be some kind of asshole tech guy trying to make the "this AI creates pull requests that are accepted into well-regarded OSS projects" pitch.
I.e., they're farming the work out to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.
rapidaneurism3 days ago
I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.
zaphodias3 days ago
I don't think there are humans involved. I've now seen countless PRs to some repos I maintain that claim to fix non-existent bugs, or just fix typos. One that I got recently didn't even correctly balance the parentheses in the code, ugh.
I call this technique: "sprAI and prAI".
cornholio3 days ago
We will quickly evolve a social contract that AIs are not allowed to directly contact humans and waste their time with input that was not reviewed by other humans, and any transgression should be swiftly penalized.
It's essentially spam, automatically generated content that is profitable in large volume because it offsets the real cost to the victims, by wasting their limited attention span.
If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.
ChipopLeMoral3 days ago
You're absolutely right! There are no humans involved and I apologize for that! Let me try that again and involve some humans this time, as well as correctly balancing the parentheses. I understand your frustration and apologize for it, I am still learning as a model!
I think there are humans who watch "how to get rich with ChatGPT and HackerOne" videos (replace ChatGPT and HackerOne with whatever the affiliate YouTuber uses).
It's MLM in tech.
pjc503 days ago
The future of everything with a text entry box is AIs shoveling plausible looking nonsense into it. This will result in a rise of paranoia, pre-verification hoops, Cloudflare like agent-blocking, and communities "going dark" or closed to new entrants who have not been verified in person somewhere.
Even with closed communities, real user accounts will get sold for use by AI.
stronglikedan3 days ago
Don't need a human until someone is ready to pay a bounty!
l5870uoo9y3 days ago
This reads as an AI-generated response as well, with the "thanks", "you're right", flawless grammar, and plenty of technical references.
gryfft3 days ago
I think you might be onto something-- perhaps something from the first sentence of the post to which you are replying.
brap3 days ago
You’re absolutely right, that’s a sharp observation that really gets to the heart of the issue.
SoKamil3 days ago
Faking grammar mistakes is the new meta of proving that you wrote something yourself.
Or passing off generated content as the real thing.
crabmusket3 days ago
Providing valuable and accurate information was, is, and will continue to be the "meta".
rpcope13 days ago
They also don't really use profanity
ToucanLoucan3 days ago
Is it that crazy? He's doing exactly what the AI boosters have told him to do.
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
elzbardico3 days ago
I have a long-running interest in NLP, and LLMs basically solved, or almost solved, a lot of NLP problems.
The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate calls to them in programs to do useful things with natural language that would be hard to do otherwise.
But a lot of the time, people try to make LLMs do things that they can only simulate doing, or do by analogy. And this is where things start getting hairy: when people start believing LLMs can do things they can't really do.
Ask an LLM to extract features from a bunch of natural-language inputs, and it will probably do a pretty good job in most domains, as long as you're not doing anything exotic or novel enough to be insufficiently represented in the training data. It will output a nice JSON with nice values for those features, and it will be mostly correct. That's great for aggregate use, but a bit riskier if you depend on the LLM's evaluation of individual instances.
But then people ignore this and start asking in their prompts for the LLM to add confidence scores to its output. Well, LLMs CAN'T TRULY EVALUATE the fitness of their output against any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if they sometimes seem able to. If you need to trust it, you'd better have some external mechanism to validate it.
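To make "external mechanism" concrete, here's a minimal sketch of the idea (the prompt, the keys, and the `call_llm` helper are all hypothetical stand-ins, not any particular API): validate the structure of the output and cross-check each extracted value against the input, instead of asking the model to grade itself.

```python
import json

def extract_features(text: str, call_llm) -> dict | None:
    """Extract fields with an LLM, then validate them externally."""
    raw = call_llm(
        "Extract vendor, amount, and currency from the text below "
        "as JSON with keys vendor, amount, currency.\n\n" + text
    )
    try:
        features = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output: reject rather than guess

    # External check: every extracted literal must actually occur in the
    # input. Crude, but it catches hallucinated values without relying on
    # a self-reported confidence score.
    for key in ("vendor", "amount", "currency"):
        value = str(features.get(key, ""))
        if not value or value not in text:
            return None
    return features
```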
tantivy3 days ago
I once tasked an LLM with correcting a badly-OCR'd text, and it went beast mode on that. Like setting an animal finally free in its habitat. But that kind of work won't propel a stock valuation :(
rpcope13 days ago
So basically a hundred billion dollar industry for just spam and fraud. Truly amazing technological progress.
pizlonator3 days ago
Wait so are we now saying that these AIs are failing the Turing test?
(I mean I guess it has to mean that if we are able to spot them so easily)
blharr3 days ago
You don't spot the ones you don't spot
shadowgovt3 days ago
Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."
(It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).
Havoc3 days ago
Makes me wonder whether the submitter even speaks English
t0lo3 days ago
AI's other acronym...
akk03 days ago
You do realize English is one of India's two official languages, I hope?
mda3 days ago
Probably yes, but not as smoothly and eloquently as the AI they use.
unmole3 days ago
The username sounds Turkish. Make what you will of it.
dansmith19193 days ago
So... nothing? Because I'm also not from an English speaking country and I speak English.
dansmith19193 days ago
At some point they told ChatGPT to put emojis everywhere, which is also a dead giveaway on the original report that it's AI. They're the new em dash.
rasz3 days ago
You don't even have to instruct it to use emojis; it does it on its own. printf with emoji is an instant red flag
jcul2 days ago
It loves to put emojis in print statements; it's usually a red flag for me that something was written by AI.
listic3 days ago
What was it with the em dash?
Ralfp3 days ago
People usually don't type an em dash; they just use the regular dash (minus sign) they already have on the keyboard. ChatGPT uses the em dash instead.
badgersnake3 days ago
Some people actually do that on Github too. Absolute psychopaths.
jsheard3 days ago
I think the JS/Node scene was the pioneer in spamming emojis absolutely everywhere, well before AI. Maybe that's where the models picked it up from.
It was long before ChatGPT. I remember once on a Show HN post I commented something along the lines of "The number of emoji in the README makes it very hard for me to take this repo seriously", and my comment got (probably righteously) downvoted to dead.
lumost3 days ago
Was this all actually an agent? I could see someone making the claim that a security research LLM should always report issues immediately from an ethics standpoint (and in turn acquire more human generated labels of accuracy).
To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.
BoredPositron3 days ago
It's an n8n bot without user input. If you Google the username you'll find a GitHub full of agent stuff.
listic3 days ago
Who was likely to start it and for what purpose?
BoredPositron3 days ago
Clout? The dude behind the username?
Lerc3 days ago
I felt like it was more likely to be a complete absence of a human in the loop.
belter3 days ago
Crazy how the current $400 billion AI bubble is based on this being feasible...
koolba3 days ago
The rationale is that the AI companies are selling the shovels to both generate this pile as well as the ones we'll need to clean it up.
whstl3 days ago
I vividly remember the image of one guy digging a hole and another filling it with dirt as a representation of government bureaucracy and similar. Looks like office workers are gonna have the same privilege.
pjc503 days ago
And on externalizing costs - the actual humans who have to respond to bad vulnerability report spam.
jonplackett3 days ago
Do you think it's a person doing it? When I saw that reply I thought maybe it's a bot doing the whole thing!
dolmen3 days ago
I think we are now beyond just copy-pasting. I guess we are in the era where this shit is fully automated.
ainiriand3 days ago
Is this for internet points?
filcuk3 days ago
If it's an individual, it could be as simple as portfolio cred ('look, I found and helped fix a security flaw in this program that's on millions of devices').
zzzeek3 days ago
why assume someone is copy-pasting and didn't just build a bot to "report bugs everywhere" ?
chinathrow3 days ago
The '—' gave it away. No one types this character on purpose.
jaymzcampbell3 days ago
I really loved how easy macOS made these (option+hyphen for en, with shift for em), so I used to use them all the time. I'm a bit miffed that good typography is now an AI smell.
shagie3 days ago
On macOS (I have this disabled, since I'm not infrequently typing code, and getting an — where I specced a - can be no fun to debug)...
Right click in the text box, and select "Substitutions". Smart dashes will replace -- with — when typed that way. It can also do smart quotes to make them curly... which is even worse for code.
(turning those on...)
It is disappointing that proper typography is a sign of AI influence… (wait, that’s option semicolon? Things you learn) though I think part of it is that humans haven’t cared about proper typography in the past.
sevg3 days ago
Just because you don’t, doesn’t mean other people don’t. Plenty of real humans use emdash. You probably don’t realise that on some platforms it’s easy to type an emdash.
mwigdahl3 days ago
In Office apps on Windows just type two hyphens and then a word afterwards and it will autoconvert to an em-dash.
kstrauser3 days ago
And where did you suppose AIs learned this, if not from us?
Turns out lots of us use dashes — and semicolons! And the word “the”! — and we're not going to stop just because others don't like punctuation.
exe343 days ago
I'm starting to wonder if there's a real difference between the populations who use em dashes and those who think it's a sign of AI. The former are the ones who write useful stuff online, which the AIs were trained on, and the latter are the consumers who probably never paid attention to typography and only started commenting on dashes after they became a meme on LinkedIn.
pessimizer3 days ago
I find it disturbing that many people don't seem to realize that chatbot output is forced into a strict format that it fills in recursively, because the patterns that LLMs recognize are no longer than a few paragraphs. Chatbots are choosing response templates based on the type of response that is being given. Many of those templates include unordered lists, and the unordered list marker that they chose was the em-dash.
If a chatbot had to write freely, it would be word salad by the end of the length of the average chatbot response. Even its "free" templates are templates (I'm sure stolen from the standard essay writing guides), and the last paragraph is always a call to further engagement.
Chatbots are tightly designed dopamine dispensers.
edit: even weirder is people who think they use em-dashes at the rate of chatbots (they don't) even thinking that what they read on the web uses em-dashes at the rate of chatbots (it doesn't.) Oh, maybe in print? No, chatbots use them more than even Spanish writing, and they use em-dashes for quotation marks. It's just the format. I'm sure they regret it, but what are they going to replace them with? Asterisks or en-dashes? Maybe emoticons.
birjokduf3 days ago
Books use it more liberally; internet writing, not so much. Also, some languages are much more prone to using it, while some practically never use it.
ceejayoz3 days ago
The AI is trained on human input. It uses the dash because humans did.
arthens3 days ago
I'm skeptical this is the reason:
- ChatGPT uses em dashes in basically every answer, while on average humans don't (the average user might not even be aware it exists)
- If the preference for em dashes came from the training set, other AIs would show the same bias (Gemini and Le Chat don't seem to use them at all)
pessimizer3 days ago
Is that why it uses colorful emoticons, too? Was it trained on Onlyfans updates?
chinathrow3 days ago
Yeah, but a dash, at least on my keyboard, is a '-', not the one quoted above.
ulimn3 days ago
Or at least not anymore, since this became the number one sign that a text was written with AI. Which is a bit sad, IMO.
yreg3 days ago
I do all the time, but might have to stop. Same with `…`.
henrebotha3 days ago
Don't let them win. Stand proud with your "–" and your "—" and your "…" and your "×".
python-b53 days ago
I dislike the ellipsis character on its own merits, honestly. Too scrunched-up, I think - ellipses in print are usually much wider, which looks better to me, and three periods approximates that more closely than the Unicode ellipsis.
acheron3 days ago
In the words of Michael Bolton, "Why should I change? He's the one who sucks."
vagrantJin3 days ago
That got a giggle out of me. Not entirely relevant, but AI tends to be overzealous in its use of emojis and punctuation, in a way people almost never are (too cumbersome on desktop, where the majority of typing work is done).
Academia certainly does, although, humorously, we also have professors making the same proclamation you do while using en or em dashes in their syllabi.
_fizz_buzz_3 days ago
I started using hyphens a few years ago. But now I had to stop, because AI ruined it :(
johnisgood3 days ago
Keep in mind that now that people know what to pay attention to (em dashes, emojis, etc.), they will instruct the LLM not to use them, so yeah.
easton3 days ago
Two dashes on the Mac or iOS do it unless you explicitly disable it, I think.
Balinares3 days ago
I absolutely bloody do -- though more commonly as a double dash when not at the keyboard -- and I'm so mad it was cargo-culted into the slop machines as a superficial signifier of literacy.
jrimbault3 days ago
I used to.
rpigab3 days ago
"I heard you were extremely quick at math"
Me: "yes, as a matter of fact I am"
Interviewer: "Whats 14x27"
Me: "49"
Interviewer: "that's not even close"
me: "yeah, but it was fast"
jtwaleson3 days ago
There should be a language that uses "Almost-In-Time" compilation. If it runs out of time, it just gives a random answer.
layer83 days ago
"Progressive compilation" would be more fun: The compiler has a candidate output ready at all times, starting from a random program that progressively gets refined into what the source code says. Like progressive JPEG.
phinnaeus3 days ago
Best I can do is a system that gives you a random answer no matter how much time you give it.
zelphirkalt3 days ago
Great! 80-20, Pareto principle, we're gonna use that! We are as good as done with the task. Everyone take phinnaeus as an example. This is how you get things done. We move quickly and break things. Remember our motto.
Yes, the way I described it is actually a sensible approach to some problems.
"Almost-in-time compilation" is mostly an extremely funny name I came up with, and I've been trying to figure out the funniest "explanation" for it for years. So far "it prints a random answer" is the catchiest one, but I have the feeling there are better ones out there.
philipwhiuk3 days ago
When you get the wrong answer you can just say 'ah yes, the halting problem'
mhuffman3 days ago
You should send a pull request to DreamBerd/Gulf of Mexico[0], it's surely the only language that can handle it properly!
I wonder where the balance of “Actual time saved for me” vs “Everyone else's time wasted” lies in this technological “revolution”.
simsla3 days ago
Agreed.
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.
If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
palmotea3 days ago
> If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
Then add to that the pressure to majorly increase velocity and productivity with LLMs, and that becomes less practical. Humans get squeezed and reduced to being fall guys for when the LLM screws up.
Also, humans are just not suited to be the monitoring/sanity-check layer for automation. It doesn't work for self-driving cars (because no one has that level of vigilance for passive monitoring), and it doesn't work well for many other kinds of output, like code (because it's often much harder to reverse-engineer understanding from a review than to do the work yourself).
M2Ys4U3 days ago
>but there needs to be a human in the loop.
More than that - there needs to be a competent human in the loop.
joquarky2 days ago
We've gone from being writers to editors: a particular human must still ultimately be responsible for signing off on the work, regardless of how it was put together.
This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on, quality; otherwise responsibility will be dissolved among pointing fingers.
stahorn3 days ago
You're doing it wrong: You should just feed other peoples AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time wasted, and the only effect is wasted energy to run the AI tools. It's the perfect business model to turn energy into money.
jsheard3 days ago
You joke, but some companies are pushing this idea unironically by putting "use AI to expand a short message into a bloated mess" and "use AI to turn a bloated mess into a brief summary" into both sides of the same product. Good job everyone, we've invented the opposite of data compression.
The next HTTP standard should include `Transfer-Encoding: polite` for AI-enabled servers and user agents.
palmotea3 days ago
Sadly, it might not be ironic. I've encountered many people (particularly software engineers and other tech bros) who assume most written language is mostly BS/padding, and assume the only real information there is what you can get from a concise summary or a list of bullet points.
It's the kind of incuriosity that comes from the arrogance from believing you're very smart but actually being quite ignorant.
So it sounds like one of those guys took their misunderstanding and built and sold tools founded on it.
Groxx3 days ago
of course they are. that way they can sell both the shovels and the shit.
q3k3 days ago
Two economists are walking in a forest when they come across a pile of shit.
The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.
They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats a pile of shit.
Walking a little more, the first economist looks at the second and says, "You know, I gave you $100 to eat shit, then you gave me back the same $100 to eat shit. I can't help but feel like we both just ate shit for nothing."
"That's not true", responded the second economist. "We increased the GDP by $200!"
Wasting time for others is a net positive, meaning jobs won't be lost, since some human still needs to make sense of the AI-generated rubbish.
VladVladikoff3 days ago
Isn't curl open source? I was under the impression that they're all volunteers. This isn't a net positive; it will burn out the good-willed programmers and be a net negative for OSS.
sanex3 days ago
This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, or some contract review process that makes it easier on legal or infosec review of a SaaS tool that everyone and their uncle already uses. It's always natural to push all the work off to someone else because it feels like you saved time.
iLoveOncall3 days ago
Yeah, when reviewing code nowadays, once I'm 5-10 comments in and it becomes obvious it was AI generated, I tell them to go fix it and that I'll review it after. The time waste is insane.
zaik3 days ago
How much time did they save if they didn't find any vulnerability? They just wasted someone's time and nothing else.
duxup3 days ago
Arguably that's been a part of coding for a long time ...
I spend a lot of time doing cleanup for a predecessor who took shortcuts.
Granted I'm agreeing, just saying the methods / volume maybe changed.
> I appreciate your engagement and would like to clarify the situation.
WE APPRECIATE YOUR HUMAN ENGAGEMENT IN THIS TEST.
ekjhgkejhgk3 days ago
This is so disrespectful.
xtracto3 days ago
Someone has to make a base.org kind of site but with AI quotes...
bluefirebrand2 days ago
Do you mean bash.org?
I've never heard of base.org so if I'm thinking of the wrong thing, please let me know
cactusplant73743 days ago
I wonder if this could be startups that are testing on open source projects but will eventually release a product for companies and their proprietary codebases.
gigatree3 days ago
Wow that’s infuriating. Fascinating watching the maintainer respond in good faith.
philipwhiuk3 days ago
bagder is both extremely grumpy about the state of it and fascinatingly patient.
He's like 80% wise old barn owl.
hersko3 days ago
He's a pillar of the community. When I was starting out I made a basic PR to cURL to fix some typos, and he was kind enough to engage and walk me through some other related changes I could add to the PR.
I've read all of them. It's interesting how over the last 2 years badger moved from being polite to zero fucks given.
volkk3 days ago
wow this is infuriating--from 2023 so i guess the proliferation of chatgpt's vernacular wasn't yet carved into the curl dev
jordigh3 days ago
That's interesting. Was AI slop harder to spot in 2023? I can't remember anymore when everything really started getting flooded with it.
ttyyzz3 days ago
Over time, I've gotten a feel for what kind of content is AI-generated (e.g., images, text, and especially code...), and this text screams "AI" from top to bottom. I think badger responded very professionally; I'd be interested to see Linus Torvalds' reaction in such a situation :D
ambicapter3 days ago
This one was pretty obvious, I shudder at the thought that they're going to get more subtle over time.
hopelite3 days ago
It's interesting that you say that, because besides the other perspectives on this type of matter, something I have come across is accusations of AI text that at the very least were not at all clearly AI, where the accusation seemed like a coping mechanism to deflect or evade having to accept new information/reality that ran counter to one's mental model or framework.
I think of that recent situation where video showed two black bags supposedly being thrown out of a White House window. I don’t really care enough to find out whether or not that video was real, but I did find it interesting that Trump immediately dismissed it as AI after immediately glancing at it. Regardless of whether it was real or not, it seems to me that his immediate “that’s AI” response was just a rather new form of lie, a type of blame shifting to AI.
I would argue that, as stupid and meaningless as that kind of example is, a better response would have been something like "we will look into it" and then moving on. But it also feels like blaming AI for innocuous things preconditions the public to be denied and gaslit on other, more important things. For example, Israel raining down bombs on civilians in Gaza and mass-murdering probably hundreds of thousands of innocent people, in what looks like the start of the Terminator wars, can be dismissed as merely a figment of your imagination, because you will be told that AI was used, and that information will be scrubbed so you are never told about it. It's memory-holed in the TelescreenAI.
These types of developments don’t exactly fill me with optimism. Remember how in 1984 the war never ended, always changed, while at the same time both always existed and also did not actually exist? It feels like we are heading in that direction, the gaslighting form here on out, especially in all the forms of overt and clandestine war will be so off the charts that it will likely cause unpredictable mass “hysterias” and various undulations in societies.
Most people have no idea just how much media is used to train humans like an AI would be trained or controlled, now throw in ever more believable AI generated audio, visual, and not even to mention the text slop.
strgcmc3 days ago
I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.
You said a lot of words that I basically boil down to a thesis: the value of "truth" is being diluted in real time across our society (with flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from such a dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations also benefit.
Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist: whereas the search+ads business could spam you with low-quality results (in proto-form, starting as the popup ads of yesteryear), it didn't necessarily try to directly attack your view of "truth". In the future, you may search for a product you want to buy, and instead of being served ads related to that product, you may be served disinformation to sway your view of what is "true".
And sure, negative advertising always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources, and once upon a time we also had truth-in-advertising laws and libel laws, but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. In a world where "truth" erodes, instead of a market incentive for someone to profit off being more truthful than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth like OPEC controlling the flow of oil).
hopelite1 day ago
Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.
joz1-k3 days ago
We will see more problems related to the attitude: "I know AI, and therefore I'm smarter than trilobites who coded this before the AI boom."
I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.
jennyholzer3 days ago
I know React, and therefore I'm smarter than trilobites who coded this before the Web App boom
MarsIronPI3 days ago
HN is so outdated! Let's rewrite this old legacy code in React to make it modern!
hermannj3143 days ago
Start charging users to submit a vulnerability report.
It doesn't matter if it's made by AI or a human; spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works, because sometimes they do deliver value by virtue of large numbers. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.
sealeck3 days ago
Even a deposit works well (and doesn't have to be large). Someone who has actually found a serious bug in cURL will probably pay a $2-5 deposit to report it (especially given the high probability of a payout).
SAI_Peregrinus3 days ago
One issue is who pays the processing fees for the deposit & refund transactions. HackerOne could work around that issue by copying the practices of video game "microtransaction" payments: sell "report points packs", say 2500 points for $25 minimum in a pack. User needs to deposit 100 points to report, for each report they open. If the report is accepted they get their 100 points back, if not they lose their 100 points. If they want to open more than 25 reports at once they need more points packs. The $25 pack is non-refundable, so there's no added transaction fee for the refund.
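As a sketch of how simple the bookkeeping for that scheme would be (pack size, price, and deposit are the hypothetical numbers from the comment above, not anything HackerOne actually offers):

```python
class ReportDeposits:
    """Toy ledger for the points-pack deposit scheme described above."""
    PACK_POINTS = 2500   # points per $25 pack; the purchase is non-refundable
    DEPOSIT = 100        # points held per open report

    def __init__(self):
        self.balance = 0
        self.held = {}  # report_id -> points held in escrow

    def buy_pack(self):
        self.balance += self.PACK_POINTS

    def open_report(self, report_id):
        if self.balance < self.DEPOSIT:
            raise ValueError("buy another pack first")
        self.balance -= self.DEPOSIT
        self.held[report_id] = self.DEPOSIT

    def close_report(self, report_id, accepted):
        points = self.held.pop(report_id)
        if accepted:
            self.balance += points  # deposit returned on a valid report
        # rejected reports simply forfeit the points; no refund transaction
```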
kg3 days ago
I can afford it but I would never spend money to submit a vulnerability report. I'd need to be reporting dozens of vulnerabilities on a single site like hackerone to work up the motivation to plug in payment details and risk having them leaked/stolen in order to do someone else's work for them.
I'd sooner click sponsor for the cURL project on github (something I already do for some OSS I use) than spend money to report a bug.
Analemma_3 days ago
That's my attitude towards this sort of thing as well, but unfortunately it seems that this attitude is unsustainable now that the cost of generating plausible-looking bullshit has been driven to 0. "Pay to prove humanity" seems like one of the only ways to keep something like this running if we don't build a hugely invasive system of attestation.
zupa-hu3 days ago
Exactly my thoughts.
I’d love to have this for phone calls and sms as well. If you didn’t spam me, I’ll refund.
pixl973 days ago
That or the dark vuln market will find a way to vet bugs and pay out faster and easier than the actual project.
sealeck3 days ago
I think people who find real bugs have lots of incentives to not sell them to criminals (in and of itself a crime!!)
GalaxyNova3 days ago
This is a horrible idea. If you want to discourage people from submitting reports, then this is how you do it.
hermannj3142 days ago
Reducing waste, fraud, and abuse is always only one side of the story. I agree it would have a false-negative impact (someone doesn't submit a good report that otherwise would have been made), but I don't think that instantly makes it a horrible idea. The net effect would have to be studied, but I highly doubt all true-positive reports would become false negatives. The goal is reducing false positives, so it's going to be a tradeoff, and you'd need specific numbers to conclude anything.
Do you really think it is a horrible idea? That is just so harsh of a label.
scosman3 days ago
Spent 15 minutes the other day testing a patch I received that claimed to fix a bug (Linux UI bug, not my forte).
The “fix” was setting completely fictitious properties. Someone had plugged the GitHub issue into ChatGPT, which spat out an untested answer.
What’s even the point…
thenickdude3 days ago
It's all in aid of some streetsweeper being able to add "contributor to X, Y, Z projects!" to their GitHub résumé. Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
craftkiller3 days ago
Are spelling correction PRs not welcome? I'd never put it on a résumé but if I'm following a README and I see a typo, I'll generally open a quick PR to fix that. (no automated tools, not scanning for typos, just a human reading a README).
palmotea3 days ago
> Are spelling correction PRs not welcome?
I think a true spelling correction would be welcome. But I think the kind of BS attitude the GP is describing often leads to useless reformatting/language tweaks, because the goal isn't to make the repo better; it's to make a change for a change's sake with as little effort as possible.
naet3 days ago
A real improvement to the documentation or readme is welcome, even if it is only a minor improvement. I have put in small grammar PRs on some documentation myself.
On the flip side, I used to get a lot of spam PRs that made an arbitrary or net neutral change to our readme, presumably just to get "contributor" credit. That is not welcome or helpful to anyone.
renewiltord3 days ago
> Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
I always find it a pity when someone has been clever and it's missed. "Spelling incorrection", get it? It's not a correction. It's the opposite.
yifanl3 days ago
Depends on the project.
vultour3 days ago
This is why I refuse to interact with people who use AI. You have to invest orders of magnitude more time to review their hallucinated garbage than they used to generate it. I’m not going to waste my time talking to a computer.
Ultimately it's always about someone somewhere getting a bigger boat.
alexisread3 days ago
> The reporter was banned and now it looks like he has removed his account.
I'm wondering (sadly) if this is a kind of defense-prodding phishing, similar to the XZ Utils hack; curl is a pretty fundamental utility.
Similar to 419 scams, it tests the gullibility, response time/workload of the team, etc.
We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr-based, so PRs can be validated in a WoT?
duxup3 days ago
I see it on forums now too. On Reddit, midsized subs that get a mild amount of traffic get these brand new accounts that post what reads like an amalgam of past posts. Often in help forums where people ask questions.
They have that uncanny thing where, yes, it's on topic, but it's also not quite how a human would ask, AND they always let slip just a hint of human drama that really draws in other users...
They almost never respond to comments, when they do it's pretty clear they're AI (much like the response in this story).
I've unsubscribed from a good half dozen subs in the past few months because of it.
andrewflnr3 days ago
> they always let slip in just a hint of human drama
I haven't seen this, so it's hard to visualize, but that seems potentially kind of tricky to do via AI. Is it actually tricky, is it done in a way where AI could conceivably do it on its own, or are those hints easy to drop in without disturbing the bulk of the slop?
dmiracle3 days ago
The ones I have seen are like "my wife thinks I just need to blah but I think ..." or something
duxup3 days ago
I think the AI might pick it up from the most popular / engaged with posts anyway.
jmuguy3 days ago
This is essentially what teachers are dealing with every day, across the majority of their students, for every subject where its even remotely possible to use AI.
jiggawatts2 days ago
Education as a profession will have to change. Homework is pointless. Verbal presentations will have to become the new norm, or all written answers must be in the confines of the classroom... with pen and paper. Etc...
mock-possum3 days ago
Why not deal with it the same way teachers have always dealt with students breaking the rules?
jmuguy3 days ago
My wife is a high school history teacher; she would have to flunk 75% of her students. That's after proving they used AI, which would be extremely time-consuming. It's very demoralizing for her; she has to spend a lot of time reading essays generated by AI.
I think given time educators will adapt. Unless they get burnt out first. She could also just not give a shit and let them go on to be some college professor's problem, who could also not give a shit, and then they become our problem when they enter the workforce.
Noughmad3 days ago
You can go back to requiring home assignments be written by hand. It won't completely fix the AI issue, because you can still ask ChatGPT and then rewrite it, but it helps because it's very tedious and time consuming, so the benefit is much lower.
If that is not enough, we may have to stop grading take-home papers. Which is a good idea anyway.
simulator5g18 hours ago
Maybe this is the answer to the Fermi paradox. Intelligent life eventually invents the LLM, education collapses, and dumb people empowered by technology destroy the environment.
criddell3 days ago
The educators will adapt. They might use AIs to grade papers written by AIs.
Or, do what my kids' school did for some classes. Instead of teaching in class and then assigning homework, the homework will be reading a text book and classroom time will be spent writing essays by hand, doing exercises, answering questions, etc...
EasyMark3 days ago
Then they should be flunked. It sucks, but parents need to enforce real learning; schools can't be the sole "responsible" entity here. This is not the instructors' fault, and school admin needs to push back. We as a society need to push back, otherwise it all falls apart. Not everyone can be a blue-collar worker, and most of the BC workers I know tend to be decent at math, at least in the areas that are part of their work, which they certainly couldn't have picked up without knowing at least some basic arithmetic.
watwut2 days ago
> Its very demoralizing for her, she has to spend a lot of time reading written essays generated by AI.
I think the obvious solution is for them to write those essays in school.
EasyMark3 days ago
This is why in-person tests are given, and bad grades result, as part of the student feedback and performance-improvement loop. Maybe with AI as a new interloper we need to shorten "report card" periods to 3 weeks (it was 9 weeks in my day), so students have a tighter loop with parental review to help straighten out issues before they become real problems.
esalman3 days ago
I mentioned it already: my sister resigned from her tenure-track position over a fight about this. She was strict, students reported her, the faculty wouldn't assign her choice of courses, and she resigned after a year and a half.
delfinom3 days ago
Because the US has been ass-backwards about education since NCLB, which basically forces schools to make up metrics to keep everyone from losing their jobs through closure.
augment_me2 days ago
Because a teacher's job is to make sure N% of the class passes as much as it is to teach. If you fail half the class, you have failed as a teacher, because the administration will get parents coming in. If you force your class to do assignments by hand, especially in the younger grades, more will fail, and you will be blamed and fired.
phyzome3 days ago
Because 1) you often can't prove it, and 2) there often isn't support from administration.
rsynnott3 days ago
This must be _absolutely exhausting_.
zelphirkalt3 days ago
Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
palmotea3 days ago
> Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
I think shaming the use of LLMs to do stuff like this is a valuable public service.
ares6233 days ago
Imagine the headline if a slop security report ends up real but the maintainer ignored it.
It’s a lose-lose situation for the maintainers
xnickb3 days ago
Thankfully in this case it's a curl vulnerability that doesn't use curl in the reproducer. That's a fairly safe call.
joz1-k3 days ago
The problem is that AI can generate answers and code that look relevant and as if they were written by someone very competent. Since AI can generate a huge amount of code in a short time, it's difficult for the human brain to analyze it all and determine whether it's useful or just BS.
And the worst case is when AI generates great code with a tiny, hard-to-discover catch that takes hours to spot and understand.
zelphirkalt3 days ago
True, that is in some cases a problem. Though in this case it was pretty clear cut. At least the obvious time wasters would get the treatment.
MBCook3 days ago
He’s been complaining about it a lot lately. I don’t blame him, it’s wasting an inordinate amount of time.
And it must be so demoralizing. And because they’re security issues they still have to be investigated.
rikschennink3 days ago
Recently a customer pasted a complete ChatGPT chat into the support system and then wrote “it doesn’t work” as the subject. I kindly declined.
I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy?” situation where I started to doubt whether I’d added them and had to double-check.
On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.
Overall it’s been a huge additional time drain.
I think it may be time to update the “what’s included in support” section of my software's license agreement.
rdtsc3 days ago
I wonder what's going on in the minds of these people.
I would just be terribly embarrassed and not be able to look at myself in the mirror if I did shit like this.
> batuhanilgarr posted a comment (6 days ago) Thanks for the quick review. You’re right ...
On one hand, it's sort of surprising that they double down: copying the reviewer's response into the LLM prompt, pasting back the LLM's reply, and hoping for the best. But of course it shouldn't be surprising. This is not just a mistake; it's deliberate lying and manipulation.
> submitter: After thinking it through, I’m really sad to say that I’m not comfortable with disclosing the report . I’d prefer to keep it private . I hope this doesn’t cause any issues, and I appreciate your understanding."
> bagder: I am willing to give you some time to think about your life choices, but I am going to disclose this report later. For human kind, for research, for everyone to learn. Including you.
> submitter: After thinking it over, I’ve decided I’m okay with disclosing the report. Honestly, the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help .
rdtsc3 days ago
A good one! I like how Daniel pretended like not disclosing it was an option just to show their reaction.
> "the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help"
I guess it worked; that's the only HackerOne report made from that account.
Well, in reality they probably abandoned it, created another account, and carried on with the script.
They likely live somewhere where a $50 beg bounty would be half a year’s work.
How do you feel about pixels in a video game? That’s all the maintainer is to them.
keyle3 days ago
Resume hit piece, <failed/>.
What an absolute shamble of an industry we have ended up with.
spacecow3 days ago
Lord, did anyone else click through and read the actual attached "POC"? It's (for now) hilariously obviously doing nothing interesting at all, but my blood runs cold at AI potentially being able to generate more plausible-looking POC code in the future to waste even more dev time...
panstromek3 days ago
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug.
I don't even... You just have to laugh at this I guess.
dimaor3 days ago
Maybe submitters should pay a dollar to submit bugs, refunded when the bug is confirmed?
Even if it's not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.
andrewflnr3 days ago
It might only need to be the first N reports from a given account. It's hard to imagine a spammer coming up with 5 legit security issues just to enable their GPT spamming operation. As long as they're not real-but-trivial typo types of issues...
moi23883 days ago
That actually sounds like a good idea.
spicyusername3 days ago
The amount of text alone in the original post was a giveaway.
LLMs produce so much text, including code, and most of it is not needed.
cgearhart3 days ago
I keep talking to people who say stuff like “Claude wrote it all for me in a day”, but when I look at the code (or try it myself) it’s just so much useless code.
I recently asked for Python code to parse some data into a Pandas dataframe and got 1k lines plus tests. Whatever—I’m just importing it, so let’s YOLO and see what happens. Worked like a charm in my local environment. But I wanted to share this in a Jupyter notebook and for semi-complicated reasons I couldn’t import any project-local modules in the target environment. So I asked a much more targeted question like “give me a pandas one-liner to…” and it spit out 3 lines of code that produced the same end result.
The rest of that 1k lines was decomposing the problem into a bunch of auxiliary/utility functions to handle every imaginable edge case, and adding comments to almost every line. It seems the current default setting for these tools is approximately the "enterprise-grade fizzbuzz" repo.
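For what it's worth, the targeted version really can be that small. A hypothetical sketch, assuming the data is a plain CSV; the filename and column names are made up:

    import pandas as pd

    # Hypothetical three-line equivalent of the 1k-line module:
    # load the raw file, coerce the numeric column, drop unparseable rows.
    df = pd.read_csv("measurements.csv", parse_dates=["timestamp"])
    df["value"] = pd.to_numeric(df["value"], errors="coerce")
    df = df.dropna(subset=["value"]).set_index("timestamp")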
Sure, I’ll get better at prompting and whatever else to reduce this problem over time, but this is not viable when the costs are being pushed onto other people in the process today.
yonatan80703 days ago
I'm using ChatGPT to generate some code for me quite often, and my instructions prompt for all chats is slowly gaining more and more ways to say "Answer shortly". And I need to prompt defensively to repeatedly tell it to only do what I tell it.
raffraffraff3 days ago
The "Verification Status: CONFIRMED" bullet points.
Pity HN doesn't support all of those green checkboxes and bold bullet points. Every time I see these in supposedly humans generated documents and pull requests I laugh.
This LLM-emboldened, mass Dunning-Kruger schizophrenia has gone from hilarious to sad to simply invoking disgust. This isn't even an earnest altruistic effort but some insecure fever dream of finally being acknowledged as a "genius" of some sort. The worst I've seen of this is some random redditor claiming to have _the_ authoritative version of a theory of everything and spamming it in every theoretical-physics-adjacent subreddit; they claim to have a PhD but stay anonymous, represent no research group or institution, and the spam has no citations.
scns3 days ago
I only found a short but good article about such a case [0]; I'm sure someone has bookmarked the original. There are support groups for people like this now!
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.
coldpie3 days ago
The good news is this AI stuff is not profitable. Big companies and VCs are subsidizing all this AI slop. If it had cost this moron $5 to generate the slop to file this bug they probably would not have bothered. Hopefully the bubble bursts soon, very hard, and forces the money people to figure out how to charge for these services.
ale3 days ago
It's kind of depressing to read Daniel's article[1] on this issue given the rising "popularity" of these lazy attempts at cash grabbing. I hope they manage to combat the AI slop in a way that does not involve fighting fire with fire though.
Where the reporter says, "Sorry didnt mean to waste anyones time Badger, I thought you would be happy about this.".
People using LLMs think they are helping but in reality, they are not.
yifanl3 days ago
There's this very weird idea that makes some people think that the maintainer must have a godawful workflow and if I just showed him the output of _my_ workflow, I can ~~save the day~~ fix a bug for them.
brap3 days ago
why don’t they just limit the report to 100 chars or something? “Here’s the input, here’s the output, here’s why it sucks”. Easy to make a maybe/no decision at a glance.
clearleaf3 days ago
There's a phenomenon of fraudulent "security researchers" which has sprung out of the AI world. I became aware of it when someone on discord posted a video covering an "ACE exploit" against users of a particular AI coding assistant. The exploit was this:
1. You accidentally grab a malicious config file for the assistant
2. For some reason, you would pipe this entire file into curl and then into bash
3. This results in downloading and running a script that sets up malware.
It didn't make sense at any point, but I was gripped by a need to know the intention behind such a worthless video.
It made sense when the host started shilling his online course about how to be a "security researcher" like him. Not only that, paying members get premium first access to the latest "disclosures" that professional engineers are afraid to admit exist.
It's likely that the creator of this bug report is building up their own repertoire of exploits that have been ignored. Or perhaps they're trying to put their course knowledge to use.
dncornholio3 days ago
These are the people that I imagine who go on forums and threads to announce how great AI is and are unable to provide any critique. They are blinded by ignorance.
a2353 days ago
The maintainer of curl recently gave a talk on AI slop in security reports, showing this and other examples:
https://youtu.be/6n2eDcRjSsk?si=p5ay52dOhJcgQtxo
-- AI slop attacks on the curl project - Daniel Stenberg.
Keynote at the FrOSCon 2025 conference, August 16, in Bonn Germany by Daniel Stenberg.
This is the AI that half of HackerNews insists is "the future" that you'll be "left behind" from if you don't embrace it.
zipy1243 days ago
It is quite clear from this that a major implication of LLMs in today's society is making spam much, much harder to discern from actual content. I empathize with any website or project popular enough to draw this kind of attention; it must be exhausting to deal with. I wonder if burnout rates in open source will climb even higher.
hintymad3 days ago
This reminded me of an interview I listened to by a startup founder talking about how his company integrates AI into all of its workflows. During the Q&A, he said that they could tackle any challenge simply by iteratively constructing better contexts for the AI. At first this sounded optimistic, but then it struck me that it was actually the ultimate pessimistic view of what current AI can do. His assumption seemed to be that software engineers have already implemented all the primitives humans will ever need. If that’s true, then the only task left is to phrase our instructions in the right way so the model can stitch those primitives together into a production system.
wnevets3 days ago
Say what you want about AI but it has undeniably made aspects of life worse. Unfortunately I foresee effective bug bounty programs that are open to the public going away because of the sheer amount of spam like this.
littlecranky673 days ago
What is the motivation behind posting such things? I would understand if there were a bug bounty program; does cURL have one?
sailorganymede3 days ago
So you can put this on your resume:
Open Source Contributor:
- Diagnosed and fixed a key bug on Curl
netsharc3 days ago
Hah, the opposite of "AI" meaning "Actually Indian"... "Here's my CV, but actually all my work will be done by AI".
With apologies for stereotyping.
palmotea3 days ago
> "Here's my CV, but actually all my work will be done by AI".
What AI did you use? Because we want to hire that, not you.
If AI exceeds human capabilities, it won't because it achieved "superintelligence," it will because it caused human abilities to degrade until the AI looks good in comparison.
vdupras3 days ago
What if it was some kind of "meta DDoS"? I mean, you can DDoS a server with simple requests, but here the effect is meta: it "DoS"es real humans. What if someone had something to gain from doing this? The tools to do this seem to all be there.
progbits3 days ago
Yes they do. But I also wonder why curl seems to get so many of these. They don't have the highest payouts, they've been around for a long time (so presumably most of the low-hanging fruit an AI has even a remote chance of finding has been fixed), and they are well known to be on the lookout and strict about AI reports.
jmuguy3 days ago
Might be easier for AI to generate this specific bullshit because of curl's long history.
More than half of the ads I get on Youtube these days are shovel-sellers with messages like
"We have reached a point where anyone can build an app without knowing how to code".
So obviously this kind of thing is going to happen. People are being encouraged by misleading marketing.
zahlman3 days ago
When I view this page without JavaScript (on my current small monitor), there is a micro-scroll vertically down to a banner which reads
> It looks like your JavaScript is disabled. To use HackerOne, enable JavaScript in your browser and refresh this page.
on a rgba(206, 0, 0, 0.3) background (this apparently interpolates onto pure white, so it's actually something like (240, 178, 178)), and otherwise nothing but blank white.
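For the record, the pale pink checks out under ordinary "source over" alpha compositing against white; a quick sketch:

    # rgba(206, 0, 0, 0.3) composited onto a white page, per channel:
    # result = alpha * foreground + (1 - alpha) * background
    fg, bg, alpha = (206, 0, 0), (255, 255, 255), 0.3
    print(tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg)))
    # -> (240, 178, 178)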
I know I've complained about lack of "graceful degradation" before, but this seems like a new level.
2OEH8eoCRo03 days ago
Doing this should be a stain on your career. Since anons can't be named and shamed, and don't have careers, when do we start ignoring anons?
Also, if AI were so great we could trust it to review and test these CVE reports autonomously.
smusamashah3 days ago
Once you are sure, these users should be shadow-banned and an AI clone should keep them engaged. There isn't a way around it; no one deserves to waste their time on this spam.
DetroitThrow3 days ago
There must be other corporate bounty programs they could DDOS with fake reports - doing it to curl surely won't yield much profit.
ares6233 days ago
This is headline driven development. Sooner or later one of these reports will make it and there will be much rejoicing.
baq3 days ago
s/much rejoicing/pandora's box/ I guess.
The thing is, these people aren't necessarily wrong; they're just 1) clueless and 2) early. The folks with proper know-how, and perhaps tuned models, are probably selling zero-days found this way as we speak.
jdefr893 days ago
Professional security researcher here. I haven't really seen any models reliably find and exploit a 0day. Folks are at least TRYING to develop such models internally at the MIT lab where I work, but I'm not sure how far along they are yet. If a model is developed that can find a 0day or two (like Big Sleep, which I think may have found some) I won't be surprised, but keep in mind that fuzzers find thousands of real 0days with far less compute. These capabilities are of course worth looking into, but too many people are promising 0day oracles already, and that simply isn't where we are right now (or ever?). Sorry for the bad grammar, typing quickly from my phone.
nenenejej3 days ago
Maybe using curl for RLHF training/tuning before running it on the money sites.
tootyskooty2 days ago
I've been getting a lot of vulnerability "spam mail" recently that's clearly AI-generated.
It's a surprise every public bounty program isn't completely buried in automatic reports by now, but it likely won't take long.
preommr3 days ago
Is there something about cURL that attracts these AI bots, or is it just better documented for them? I was going to say that this is old, but then I checked the date and realized that this is a new problem. Going down the rabbit-hole, @badger has made multiple posts [0][1] about AI slop.
My theory is that the cURL maintainer is independent and can respond forcefully to the "AI" nonsense.
Many other projects always have some corporate maintainers who are directed to push "AI" and will try to cover it up.
saulpw3 days ago
Curl is one of the most widely used/deployed libraries on the planet.
malux853 days ago
Wow, even the follow-up response apologising for the noise was full of noise.
It finishes "I can follow up ... blah blah blah ... should I find an issue"
Tone deaf and utterly infuriating.
teapot73 days ago
For me the followup was the most obviously AI bit of writing - it's exactly the tone you get when the AI admits it's been utterly wasting your time.
nenenejej3 days ago
Gaslighting at scale
EasyMark3 days ago
GaaS, I like it!
snoozeZzz3 days ago
Time pressures during sprints have started to change, forcing many people to use AI for everything.
So when they interview for their next role they are rusty at some tasks.
antiquark3 days ago
Nice ending:
> The reporter was banned and now it looks like he has removed his account.
byb3 days ago
We are witnessing a new eternal summer, and the only way to stem the tide is to increase the amount of personal identifying information required to register, and then publicly shame these people as a warning to others. Maybe it is a good thing that I don't run any massively popular open source projects.
ekjhgkejhgk3 days ago
> the only way to stem the tide is
I see no evidence that's the only way. It's the only way that crossed your mind as you were writing that message.
byb3 days ago
The two alternative solutions posed so far are 1) drop the bug bounty program, and 2) charge for submissions.
Present an alternative.
kevincox3 days ago
It's not really a great ending. They or people like them just opened 3 new accounts. They just closed this one because it was tainted.
TheSilva3 days ago
50/50 title here: it can be the app devs or it can be the reporter.
nialse3 days ago
Imagine if these "benevolent" erroneous AI bug reports were part of a coordinated effort to map how vulnerable the projects and maintainers are, not the code. A slow response, or no response, marks a likely target for takeover or exploits, and accepting code without review is an indication of how easy it would be to inject a vulnerability.
panstromek3 days ago
It's an interesting idea; I just wouldn't consider a slow or absent response a marker of a likely target. I think that's actually a good defense strategy against spam like this.
nialse3 days ago
The line of thought is that a slow response lengthens the exploit window of any eventually found vulnerability, thus increasing its value.
bogwog3 days ago
It's funny to think that the criminal underworld that trades in zerodays also has to deal with AI spam like this.
tdeck3 days ago
This kind of thing isn't new. When I maintained a Google owned project on GitHub in the pre-LLM era someone submitted a slop PR "fixing" some tests, seemingly generated with some kind of static analysis tool. The description was clearly copy-pasted as well.
the_biot3 days ago
Still better than the old-style reports from tools like that. Those tools are typically commercial, and evidently came with some kind of licensing restriction forbidding you from sharing their output.
So open source projects would get bug reports like "my commercial static analysis tool says there's a problem in this function, but I can't tell you what the problem is."
barnabee3 days ago
Yep. We also saw people run any fuzzing, scanning, etc. tool they could get their hands on and pretty much just paste the results in a bug report email, well before AI was a thing.
Completely useless 99% of the time but that didn’t stop a good number of them following up asking for money, sometimes quite aggressively.
vjk8003 days ago
What is the motivation for people doing this? Is it just for the lols or are they making money out of this somehow?
heldrida3 days ago
Possible bug bounty program.
bxsioshc3 days ago
I believe it's so they can put on their CV that they're contributors to XYZ famous projects.
elzbardico3 days ago
I see this kind of thing with new hires at my company. It's becoming depressing: stupid, overly detailed but content-free issue comments; stupid code that doesn't do what it's supposed to do, but is a fucking lot of code for you to review.
flumpcakes3 days ago
Has anyone seen a good use of AI in the wild? Every example I see is honestly depressing, such as this.
Retr0id3 days ago
If someone is using AI effectively, there's often no way to tell that they're using AI at all. Toupée fallacy etc.
EasyMark3 days ago
Code? Not much, other than small functions/classes/prototype libraries to get started. But I've often used it to figure out where the code I care about lives in huge project codebases, and to map the edges of interfaces, without digging for a few hours. Copilot can give a decent summary of where to look in a couple of seconds, instead of a half hour of marking what I think are the important sections and jumping around/grepping.
jdefr893 days ago
It is best used for yak shaving, in my opinion. Anything other than that and I feel like I cannot trust its output.
belter3 days ago
Its Code Generators all the way down...
VladVladikoff3 days ago
If I was running hackerone I would add a grey list filter for any submission with emojis in it.
nurettin3 days ago
Just filter messages with emojis.
benbojangles3 days ago
I wonder how many university degrees have been passed using AI?
rasz3 days ago
>printf("<unicode icon that HN seems to remove>
hello LLM
nojs3 days ago
What is the motivation for people submitting these?
nicklevin3 days ago
It’s really quite disappointing to see how fast just copy/pasting AI responses has proliferated, even into things that don’t benefit the copy/pasters. I’m doing an online course currently that has absolutely no benefit outside of learning the content (i.e. the certificate or whatever you get for completing means nothing) - yet classmates are very clearly just copying/pasting in responses for the exercises. How does that benefit them? More than any slop I’ve experienced thus far, this instance has made me the most worried/sad/pessimistic to see. If even people who are supposedly motivated to learn (why else would you pay for this course?) just revert to the easiest AI slop path, what hope do we have for avoiding it in stuff that more resembles “work”?
I’ve never read something that made my blood boil and blood pressure go through the roof before lol. Fuck!! Off!!!
What a professional interaction by badger. Kudos to him.
ilaksh3 days ago
I wonder if there could be some kind of platform where you have to pay a $5 deposit or something to be able to post bugs. If you waste people's time with total nonsense then you lose the $5 and can no longer report. If it's less egregious than this, like they at least made a human effort, then maybe you keep some of the deposit. Although maybe $10 or $50 would be better.
ramon1563 days ago
On another note, I actually received a clearly GPT generated GitHub PR but eventually merged it. The changes were just doc changes but they seemed okay enough to add.
I feel like the goal is to get your name on a project, but I don't really lose anything from contributions like this
moktonar2 days ago
Given the stubbornness with which the slop continued in the replies, I'm starting to wonder whether this is actually part of an ongoing experiment with AI in vulnerability R&D.
userbinator2 days ago
IMHO the first reply looks very automated and may even encourage them to do stuff like this, as this should've been a "fuck off" after a quick glance at the "Verified POC Code".
eithed3 days ago
Why not verify these reports using LLMs first?
elzbardico3 days ago
Once you're at the 12th month of trying to shoehorn LLMs in several use cases at your job, you'll find the answer to this question:
BECAUSE YOU CAN'T FUCKING TRUST THOSE LYING HALLUCINATING PIECES OF SHIT.
efreak2 days ago
Clearly you just set an LLM to respond to messages that appear to be written by LLMs, then disregard that thread from that point on.
varjag3 days ago
It's the same problem, false positives.
elzbardico3 days ago
And false negatives too.
progforlyfe3 days ago
and this fucking slop is going to further pollute search engine results and future LLM models as it gets scraped up. Bleak future!
rvz3 days ago
The emoji usage was another dead giveaway that this was done by an AI.
rob_c3 days ago
Same as watching someone in school try to translate between French and English with a dictionary, one word at a time, ignoring context...
But frankly security theatre was always going to descend into this with a thousand wannabe l33ts targeting big projects with LLMs to be "that guy" who found some "bug" and "saved the world".
Shellshock showed how bad a large part of the industry is. It was not a bug. "Fixing" it caused a lot of old tried and tested solutions to break, but hey, we as an industry need to protect against the lowest common denominator who refuse to learn better...
PicassoCTs3 days ago
[dead]
ath3nd3 days ago
[dead]
kijjure3 days ago
[flagged]
AmazingTurtle3 days ago
Idk what foolish use of AI has to do with immigrants
also: reminder that someone wasted his precious time creating an account and writing this ragebait comment just for a little bit of internet visibility
rsynnott3 days ago
… Eh? This isn’t a person, it’s a magic robot.
Or are you suggesting that use of LLMs is confined to one country? I regret to inform you that it is not.
tonypapousek3 days ago
> account created 49 minutes ago
> exclusively spreading hate
So, the last one got banned, then?
krapp3 days ago
Given the nature of every other social media platform I wonder how many of these racebaiting green accounts are themselves just AI bots.
I know people copy and paste comments from AI all the time now, but someone has to be full on botting HN at this point.
kijjure3 days ago
[flagged]
tryauuum3 days ago
I assure you, people don't import millions of wannabe bugbounty hunters :)
kijjure3 days ago
[flagged]
ioteg3 days ago
[flagged]
throwawayExSUSE3 days ago
[flagged]
AtNightWeCode3 days ago
Fun fact. I posted that post into Claude and asked if it was AI. Claude totally trashed the post.
rchaud3 days ago
Perhaps for your next post you could ask Claude for the definition of a "fun fact".
AtNightWeCode3 days ago
Typical Claude answer to not see the irony and whine about it.
weddpros3 days ago
You know what was an actual issue, one that any AI would have correctly identified as an issue, but HackerOne dismissed? The 1.1.1.1 rogue certificate that later made the news...
Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".
Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.
This is not a new norm (LLM aside).
Old man time, providing unsolicited and unwelcome input…
My own way of viewing interviews: Treat interviews as one would view dating leading to marriage. Interviewing is a different skillset and experience than being on the job.
The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.
- Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”
- Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”
My question to you: Which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?
Note: I hope this perspective shift helps you.
I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.
Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.
These are CEOs who have raised $1M+ pre-seed.
Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.
These Silicon Valley CEOs are hacks.
I watched someone do this during an interview.
They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (Myself and another co-worker.)
https://news.ycombinator.com/item?id=44985254
I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.
Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.
I had someone do this in my C# / .NET Core / SQL coding test interview as well. I didn't end it right there, as I wanted to see if they could solve the coding test in the time frame allowed.
They did not. I now state that you can search anything online but can't copy and paste from an LLM, so as not to waste my time.
You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"
How far did they get? Did they solve the problem?
Just try challenging and mentoring people on not using it because it's incapable of the job and is wasting all our time, when the mandate from on high is to use more of it.
Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.
My sister had a fight over this and resigned from her tenure-track position at a liberal arts college in Arkansas.
This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.
Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I noted
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?
This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLMs.
The exponential growth of compute and data continues..
As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal
I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written teams message, only to see this on the front page! I can't escape haha
> - The average length of documents and emails has increased.
Brevity is the soul of wit. Unfortunately, many people think more is better.
There's a growing body of evidence that AI is damaging people, aside from the obvious slop related costs to review (as a resource attack).
I've seen colleagues who were quite good at programming when we first met become much worse over time, with the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful; as such, the undisclosed use of AI with any third party without their consent is gross negligence, if not directly malevolent.
https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...
[dead]
I've seen more than one post on Reddit answered by a screenshot of the ChatGPT mobile app, including the OP's question and the LLM's answer.
Imagine the amount of energy and compute power used...
I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg
> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).
I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.
For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate Hacker One accounts in a continuous loop.
This has been a norm on Hacker One for over a decade.
No, it hasn't. Even where people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either in an honest way, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the Hacker One discussion section is new.
Ha! We've become the robots!
We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)
If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.
If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.
> If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?
I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point, the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly rather than playing a game of telephone.
This might be some kind of asshole tech guy trying to be able to claim "this AI creates pull requests that are accepted into well-regarded OSS projects".
I.e., they're now farming the work out to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.
I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.
I don't think there are humans involved. I've now seen countless PRs to some repos I maintain that claim to be fixing non-existent bugs, or just fixing typos. One that I got recently didn't even correctly balance the parentheses in the code, ugh.
I call this technique: "sprAI and prAI".
We will quickly evolve a social contract that AIs are not allowed to directly contact humans and waste their time with input that was not reviewed by other humans, and any transgression should be swiftly penalized.
It's essentially spam, automatically generated content that is profitable in large volume because it offsets the real cost to the victims, by wasting their limited attention span.
If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.
You're absolutely right! There are no humans involved and I apologize for that! Let me try that again and involve some humans this time, as well as correctly balancing the parentheses. I understand your frustration and apologize for it, I am still learning as a model!
Hey don't hate on us humans who genuinely do open random PRs to random projects to fix typos. https://github.com/pulls?q=is%3Apr+author%3Ahenrebotha+archi...
I think there are humans that watch "how to get rich with chatgpt and hackerone" videos (replace chatgpt and hackerone with whatever affiliate youtuber uses).
It's MLM in tech.
The future of everything with a text entry box is AIs shoveling plausible looking nonsense into it. This will result in a rise of paranoia, pre-verification hoops, Cloudflare like agent-blocking, and communities "going dark" or closed to new entrants who have not been verified in person somewhere.
(The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )
Even with closed communities, real user accounts will get sold for use by AI.
Don't need a human until someone is ready to pay a bounty!
This reads as an AI-generated response as well, with the "thanks", the "you're right", the flawless grammar, and plenty of technical references.
I think you might be onto something-- perhaps something from the first sentence of the post to which you are replying.
You’re absolutely right, that’s a sharp observation that really gets to the heart of the issue.
Faking grammar mistakes is the new meta of proving that you wrote something yourself.
Or faking generated content into real one.
Providing valuable and accurate information was, is, and will continue to be the "meta".
They also don't really use profanity
Is it that crazy? He's doing exactly what the AI boosters have told him to do.
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
I have a long-running interest in NLP, LLMs basically solved or almost solved a lot of NLP problems.
The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate a call for them in programs to do useful stuff that would be hard to do otherwise when dealing with natural language.
But, a lot of times, people try to make LLMs do things that they can only simulate doing, or doing by analogy. And this is where things start getting hairy. When people start believing LLMs can do things they can't do really.
Ask an LLM to extract features from a bunch of natural language inputs, and probably it will do a pretty good job in most domains, as long as you're not doing anything exotic and novel enough to not being sufficiently represented in the training data. It will be able to output a nice JSON with nice values for those features, and it will be mostly correct. It will be great for aggregate use, but a bit riskier for you to depend on the LLM evaluation for individual instances.
But then, people ignore this, and start asking on their prompts for the LLM to add to their output confidence scores. Well. LLMs CAN'T TRULY EVALUATE the fitness of their output for any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if sometimes they seem to be able to. If you need to trust it, you'd better have some external mechanism to validate it.
I once tasked an LLM with correcting a badly-OCR'd text, and it went beast mode on that. Like setting an animal finally free in its habitat. But that kind of work won't propel a stock valuation :(
So basically a hundred billion dollar industry for just spam and fraud. Truly amazing technological progress.
Wait so are we now saying that these AIs are failing the Turing test?
(I mean I guess it has to mean that if we are able to spot them so easily)
You don't spot the ones you don't spot
Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."
(It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).
Makes me wonder whether the submitter even speaks english
AI's other acronym...
You do realize English is one of India's two official languages, I hope?
Probably yes, but not as smooth and eloquent as the AI they use.
The username sounds Turkish. Make what you will of it.
So... nothing? Because I'm also not from an English speaking country and I speak English.
At some point they told ChatGPT to put emojis everywhere, which is also a dead giveaway on the original report that it's AI. They're the new em dash.
You don't even have to instruct it to use emojis; it does it on its own. A printf with emoji is an instant red flag.
It loves to put emojis in print statements, it's usually a red flag for me that something is written by AI.
What was it with the em dash?
People usually don't type an em dash; they just use the regular dash (hyphen-minus) they already have on the keyboard. ChatGPT uses the em dash instead.
Some people actually do that on Github too. Absolute psychopaths.
I think the JS/Node scene was the pioneer in spamming emojis absolutely everywhere, well before AI. Maybe that's where the models picked it up from.
Here is a damn example: https://gist.github.com/BlueNexus/599962d03a1b52a8d5f595dabd...
That was well before ChatGPT. I remember once on a Show HN post I commented something along the lines of "The number of emoji in the README makes it very hard for me to take this repo seriously", and my comment got (probably rightfully) downvoted to dead.
Was this all actually an agent? I could see someone making the claim that a security research LLM should always report issues immediately from an ethics standpoint (and in turn acquire more human generated labels of accuracy).
To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.
It's an n8n bot without user input. If you Google the username you'll find a GitHub full of agent stuff.
Who was likely to start it and for what purpose?
Clout? The dude behind the username?
I felt like it was more likely to be a complete absence of a human in the loop.
Crazy how the current $400 billion AI bubble is based on this being feasible...
The rationale is that the AI companies are selling the shovels to both generate this pile as well as the ones we'll need to clean it up.
I vividly remember the image of one guy digging a hole and another filling it with dirt as a representation of government bureaucracy and similar. Looks like office workers are gonna have the same privilege.
And on externalizing costs - the actual humans who have to respond to bad vulnerability report spam.
Do you think it’s a person doing it? When I saw that reply I though maybe it’s a bot doing the whole thing!
I think we are now beyond just copy-pasting. I guess we are in the era where this shit is fully automated.
Is this for internet points?
If it's an individual, it could be as simple as portfolio cred ('look, I found and helped fix a security flaw in this program that's on millions of devices ')
why assume someone is copy-pasting and didn't just build a bot to "report bugs everywhere" ?
The '—' gave it away. No one types this character on purpose.
I really loved how easy MacOS made these (option+hypen for en, with shift for em), so I used to use them all the time. I'm a bit miffed by good typography now being an AI smell.
On MacOS (and I have this disabled since I'm not infrequently typing code and getting an — where I specced a - can be not fun to debug)...
Right click in the text box, and select "Substitutions". Smart dashes will replace -- with — when typed that way. It can also do smart quotes to make them curly... which is even worse for code.
(turning those on...)
It is disappointing that proper typography is a sign of AI influence… (wait, that’s option semicolon? Things you learn) though I think part of it is that humans haven’t cared about proper typography in the past.
Just because you don’t, doesn’t mean other people don’t. Plenty of real humans use emdash. You probably don’t realise that on some platforms it’s easy to type an emdash.
In Office apps on Windows just type two hyphens and then a word afterwards and it will autoconvert to an em-dash.
And where did you suppose AIs learned this, if not from us?
Turns out lots of us use dashes — and semicolons! And the word “the”! — and we're not going to stop just because others don't like punctuation.
I'm starting to wonder if there's a real difference between the populations who use em dashes and those who think it's a sign of AI. The former are the ones who write useful stuff online, which the AIs were trained on, and the latter are the consumers who probably never paid attention to typography and only started commenting on dashes after they became a meme on LinkedIn.
I find it disturbing that many people don't seem to realize that chatbot output is forced into a strict format that it fills in recursively, because the patterns that LLMs recognize are no longer than a few paragraphs. Chatbots are choosing response templates based on the type of response that is being given. Many of those templates include unordered lists, and the unordered list marker that they chose was the em-dash.
If a chatbot had to write freely, it would be word salad by the end of the length of the average chatbot response. Even its "free" templates are templates (I'm sure stolen from the standard essay writing guides), and the last paragraph is always a call to further engagement.
Chatbots are tightly designed dopamine dispensers.
edit: even weirder is people who think they use em-dashes at the rate of chatbots (they don't) even thinking that what they read on the web uses em-dashes at the rate of chatbots (it doesn't.) Oh, maybe in print? No, chatbots use them more than even Spanish writing, and they use em-dashes for quotation marks. It's just the format. I'm sure they regret it, but what are they going to replace them with? Asterisks or en-dashes? Maybe emoticons.
Books use it more liberally; internet writing, not so much. Also, some languages are much more prone to using it, while others practically never use it.
The AI is trained on human input. It uses the dash because humans did.
I'm skeptical this is the reason:
- ChatGPT uses em dashes in basically every answer, while on average humans don't (the average user might not even be aware the character exists)
- If the preference for em dashes came from the training set, other AIs would show the same bias (Gemini and Le Chat don't seem to use them at all)
Is that why it uses colorful emoticons, too? Was it trained on Onlyfans updates?
Yeah, but a dash, at least on my keyboard, is a '-', not the one quoted above.
Or at least not anymore, since this became the number-one sign that a text was written with AI. Which is a bit sad, IMO.
I do all the time, but might have to stop. Same with `…`.
Don't let them win. Stand proud with your "–" and your "—" and your "…" and your "×".
I dislike the ellipsis character on its own merits, honestly. Too scrunched-up, I think - ellipses in print are usually much wider, which looks better to me, and three periods approximates that more closely than the Unicode ellipsis.
In the words of Michael Bolton, "Why should I change? He's the one who sucks."
That got a giggle out of me. Not entirely relevant, but AI tends to be overzealous in its use of emojis and punctuation, in a way people almost never are (too cumbersome on desktop, where the majority of typing gets done).
https://www.scottsmitelli.com/articles/em-dash-tool/
Academia certainly does, although, humorously, we also have professors making the same proclamation you do while using en or em dashes in their syllabi.
I started using hyphens a few years ago. But now I had to stop, because AI ruined it :(
Keep in mind that now that people know what to pay attention to (em dashes, emojis, etc.), they will instruct the LLM not to use them, so yeah.
Two dashes on the Mac or iOS do it unless you explicitly disable it, I think.
I absolutely bloody do -- though more commonly as a double dash when not at the keyboard -- and I'm so mad it was cargo-culted into the slop machines as a superficial signifier of literacy.
I used to.
"I heard you were extremely quick at math"
Me: "yes, as a matter of fact I am"
Interviewer: "Whats 14x27"
Me: "49"
Interviewer: "that's not even close"
me: "yeah, but it was fast"
There should be a language that uses "Almost-In-Time" compilation. If it runs out of time, it just gives a random answer.
"Progressive compilation" would be more fun: The compiler has a candidate output ready at all times, starting from a random program that progressively gets refined into what the source code says. Like progressive JPEG.
Best I can do is a system that gives you a random answer no matter how much time you give it.
Great! 80-20, Pareto principle, we're gonna use that! We are as good as done with the task. Everyone take phinnaeus as an example. This is how you get things done. We move quickly and break things. Remember our motto.
This might be a similar but possibly more sensible approach? -> https://en.wikipedia.org/wiki/Anytime_algorithm
Yes, the way I described it is actually a sensible approach to some problems.
"Almost-in-time compilation" is mostly an extremely funny name I came up with, and I've trying to figure out the funniest "explanation" for it for years. So far the "it prints a random answer" is the most catchy one, but I have the feeling there are better ones out there.
When you get the wrong answer you can just say 'ah yes, the halting problem'
You should send a pull request to DreamBerd/Gulf of Mexico[0], it's surely the only language that can handle it properly!
[0]https://github.com/TodePond/GulfOfMexico
Hilarious. I will actually do that :)
Soft real time systems often work like that. "Can't complete in time, best I can do is X".
AIighT
Good teacher. He really seems to care.
About what, I have no idea.
Prove to me that it's not perfectly random.
It is perfectly random for some distributions
It’s better with the comment
https://xkcd.com/221/
The lowest latency responses in my load tests is when something went wrong!
https://www.youtube.com/watch?v=4SI3GiPihQ4
“Is this your card?”
“No, but damn close, you’re the man I seek”
This is one of my favourite images from a long-defunct proto-meme blog: https://entropicthoughts.com/image/doesntworkbutfast.jpg
I wonder where the balance of “Actual time saved for me” vs “Everyone else's time wasted” lies in this technological “revolution”.
Agreed.
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.
If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
> If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
And then add to that the pressure to majorly increase velocity and productivity with LLMs, and being that human becomes less practical. Humans get squeezed and reduced to being fall guys for when the LLM screws up.
Also, Humans are just not suited to be the monitoring/sanity check layer for automation. It doesn't work for self-driving cars (because no one has that level of vigilance for passive monitoring), and it doesn't work well for many other kinds of output like code (because often it's a lot harder to reverse-engineer understanding from a review than to do it yourself).
>but there needs to be a human in the loop.
More than that - there needs to be a competent human in the loop.
We've going from being writers to editors: a particular human must still ultimately be responsible for signing off on their work, regardless of how it was put together.
This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on quality; otherwise responsibility will be dissolved among pointing fingers.
You're doing it wrong: You should just feed other peoples AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time wasted, and the only effect is wasted energy to run the AI tools. It's the perfect business model to turn energy into money.
You joke, but some companies are pushing this idea unironically by putting "use AI to expand a short message into a bloated mess" and "use AI to turn a bloated mess into a brief summary" into both sides of the same product. Good job everyone, we've invented the opposite of data compression.
Reminded me of this - a URL lengthener: https://looooooooooooooooooooooooooooooooooooooooooooooooooo...
Great cartoon with comment about this problem:
https://marketoonist.com/2023/03/ai-written-ai-read.html
We could call it “bsencode”.
The next HTTP standard should include `Transfer-Encoding: polite` for AI-enabled servers and user agents.
Sadly, it might not be ironic. I've encountered many people (particularly software engineers and other tech bros) who assume most written language is mostly BS/padding, and that the only real information there is what you get from a concise summary or list of bullet points.
It's the kind of incuriosity that comes from the arrogance of believing you're very smart while actually being quite ignorant.
So it sounds like some of those guys took their misunderstanding and built, and now sell, tools founded on it.
of course they are. that way they can sell both the shovels and the shit.
Two economists are walking in a forest when they come across a pile of shit. The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.
They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats a pile of shit.
Walking a little more, the first economist looks at the second and says, "You know, I gave you $100 to eat shit, then you gave me back the same $100 to eat shit. I can't help but feel like we both just ate shit for nothing."
"That's not true", responded the second economist. "We increased the GDP by $200!"
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
I invented a new technique that cuts down on the AI bill. I call it "just send me the prompt": https://blog.gpkb.org/posts/just-send-me-the-prompt/
that's still a huge waste of time and resources. Rather, Daniel has focused on promoting good use of AI that has yielded good results for curl: https://mastodon.social/@bagder/115241241075258997 https://joshua.hu/llm-engineer-review-sast-security-ai-tools...
And then alien civilization will wonder how humans went extinct.
Don't Date Robots!
https://www.youtube.com/watch?v=3O3-ngj7I98
Wasting time for others is a net positive, meaning jobs won't be lost, since some human individual still needs to make sense out of AI generated rubbish.
Isn’t curl open source? I was under the impression that they are all working as volunteers. This isn’t a net positive. It will burn out the good-willed programmers and be a net negative on OSS.
This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, or some contract review process that makes it easier on legal or infosec review of a SaaS tool that everyone and their uncle already uses. It's always natural to push all the work off to someone else because it feels like you saved time.
Yeah when reviewing code nowadays once I'm 5-10 comments in and it becomes obvious it was AI generated, I say to go fix it and that I'll review it after. The time waste is insane.
How much time did they save if they didn't find any vulnerability? They just wasted someone's time and nothing else.
Arguably that's been a part of coding for a long time ...
I spend a lot of time doing cleanup for a predecessor who took shortcuts.
Granted I'm agreeing, just saying the methods / volume maybe changed.
This example is much worse: https://hackerone.com/reports/2298307
> I appreciate your engagement and would like to clarify the situation.
WE APPRECIATE YOUR HUMAN ENGAGEMENT IN THIS TEST.
This is so disrespectful.
Someone has to make a base.org kind of site but with AI quotes...
Do you mean bash.org?
I've never heard of base.org so if I'm thinking of the wrong thing, please let me know
I wonder if this could be startups that are testing on open source projects but eventually will release a product for companies and their proprietary code cases.
Wow that’s infuriating. Fascinating watching the maintainer respond in good faith.
bagder is both extremely grumpy about the state of it and fascinatingly patient.
He's like 80% wise old barn owl.
He's a pillar of the community. When I was starting out I made a basic PR to cURL to fix some typos and he was kind enough to engage and walk me through some other related changes I could add to the PR.
I think he's a genuinely nice person.
Here's a list of AI slop reports: https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
I've read all of them. It's interesting how over the last 2 years bagder moved from being polite to zero fucks given.
wow this is infuriating--from 2023, so i guess the proliferation of chatgpt's vernacular wasn't yet carved into the curl devs' minds
That's interesting. Was AI slop harder to spot in 2023? I can't remember anymore when everything really started getting flooded with it.
Over time, I've gotten a feel for what kind of content is AI-generated (e.g., images, text, and especially code...), and this text screams "AI" from top to bottom. I think bagder responded very professionally; I'd be interested to see Linus Torvalds' reaction in such a situation :D
This one was pretty obvious, I shudder at the thought that they're going to get more subtle over time.
It’s interesting that you say that because, besides the other perspectives on this type of matter, something I have come across is accusations of AI text that at the very least were not at all clearly AI, but where the accusation seemed like a coping mechanism to deflect/evade having to accept or face new information/reality that ran counter to one’s mental model or framework.
I think of that recent situation where video showed two black bags supposedly being thrown out of a White House window. I don’t really care enough to find out whether or not that video was real, but I did find it interesting that Trump immediately dismissed it as AI after merely glancing at it. Regardless of whether it was real or not, it seems to me that his immediate “that’s AI” response was just a rather new form of lie, a type of blame shifting to AI.
I would argue that as stupid and meaningless as that kind of example is, a better response would have been something like “we will look into it” and then moving on. But it also feels like blaming AI for innocuous things has preconditioned people to deny and gaslight the public on other, more important things: for example, claiming that Israel raining down bombs on civilians in Gaza and mass murdering probably hundreds of thousands of innocent people, in what looks like the start of the Terminator wars, is merely a figment of your imagination, because you will be told that AI was used and the information will be scrubbed so you are never told about it. It’s memory-holed in the TelescreenAI.
These types of developments don’t exactly fill me with optimism. Remember how in 1984 the war never ended, always changed, while at the same time both always existed and also did not actually exist? It feels like we are heading in that direction, the gaslighting form here on out, especially in all the forms of overt and clandestine war will be so off the charts that it will likely cause unpredictable mass “hysterias” and various undulations in societies.
Most people have no idea just how much media is used to train humans like an AI would be trained or controlled, now throw in ever more believable AI generated audio, visual, and not even to mention the text slop.
I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.
You said a lot of words that I basically boil down to a thesis of: the value of "truth" is being diluted in real time across our society (with flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from such a dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations also benefit.
Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist: whereas before the search+ads business could spam you with low-quality results (in proto-form, starting as the popup ads of yesteryear), it didn't necessarily try to directly attack your view of "truth". In the future, you may search for a product you want to buy, and instead of being served ads related to that product, you may be served disinformation to sway your view of what is "true".
And sure, negative advertising always existed (one company bad-mouthing another competitor's products), but those things took time and effort/resources, and once upon a time we had such things as truth-in-advertising laws and libel laws, but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion, and in a world where "truth" erodes, instead of there being a market incentive for someone to profit off of being more truth-y than other market participants, I would expect that the oligopolistic world we live in would conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth like OPEC controlling their flow of oil).
Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.
We will see more problems related to the attitude: "I know AI, and therefore I'm smarter than trilobites who coded this before the AI boom."
I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.
I know React, and therefore I'm smarter than trilobites who coded this before the Web App boom
HN is so outdated! Let's rewrite this old legacy code in React to make it modern!
Start charging users to submit a vulnerability report.
It doesn't matter if it's made by AI or a human; spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works because sometimes they do deliver value by virtue of large numbers. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.
Even a deposit works well (and doesn't have to be large). Someone who has actually found a serious bug in cURL will probably pay $2-5 as a deposit to report it (especially given the high probability of a payout).
One issue is who pays the processing fees for the deposit & refund transactions. HackerOne could work around that by copying the practices of video game "microtransaction" payments: sell "report points packs", say 2500 points for a $25 minimum pack. Users deposit 100 points for each report they open. If the report is accepted they get their 100 points back; if not, they lose them. If they want more than 25 reports open at once they need more points packs. The $25 pack is non-refundable, so there's no added transaction fee for the refund.
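To make that accounting concrete, a toy sketch in Python (all names and numbers are hypothetical, not anything HackerOne actually offers):

    class ReporterAccount:
        PACK_POINTS = 2500  # one $25 pack, non-refundable
        DEPOSIT = 100       # points locked per open report

        def __init__(self):
            self.points = 0
            self.held = 0

        def buy_pack(self):
            self.points += self.PACK_POINTS

        def open_report(self):
            if self.points < self.DEPOSIT:
                raise ValueError("buy another points pack first")
            self.points -= self.DEPOSIT
            self.held += self.DEPOSIT

        def close_report(self, accepted: bool):
            self.held -= self.DEPOSIT
            if accepted:
                self.points += self.DEPOSIT  # deposit returned
            # if rejected, the deposit is forfeited: spam costs money

A spammer filing 25 junk reports burns the whole $25 pack; a genuine researcher gets every deposit back.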
I can afford it but I would never spend money to submit a vulnerability report. I'd need to be reporting dozens of vulnerabilities on a single site like hackerone to work up the motivation to plug in payment details and risk having them leaked/stolen in order to do someone else's work for them.
I'd sooner click sponsor for the cURL project on github (something I already do for some OSS I use) than spend money to report a bug.
That's my attitude towards this sort of thing as well, but unfortunately it seems that this attitude is unsustainable now that the cost of generating plausible-looking bullshit has been driven to 0. "Pay to prove humanity" seems like one of the only ways to keep something like this running if we don't build a hugely invasive system of attestation.
Exactly my thoughts.
I’d love to have this for phone calls and sms as well. If you didn’t spam me, I’ll refund.
That or the dark vuln market will find a way to vet bugs and pay out faster and easier than the actual project.
I think people who find real bugs have lots of incentives to not sell them to criminals (in and of itself a crime!!)
This is a horrible idea. If you want to discourage people from submitting reports, then this is how you do it.
Reducing waste, fraud, and abuse is always only one side of the story. I agree it would have a false-negative impact (someone does not submit a good report that otherwise would have), but I don't think that instantly makes it a horrible idea. I think the net effect would have to be studied, but I highly doubt all true-positive reports would become false negatives. The goal is reducing false positives, so it is going to be a tradeoff and you'd need specific numbers to conclude anything.
Do you really think it is a horrible idea? That is just so harsh of a label.
Spent 15 minutes the other day testing a patch I received that claimed to fix a bug (Linux UI bug, not my forte).
The “fix” was setting completely fictitious properties. Someone had plugged the GitHub issue into ChatGPT and it spat out an untested answer.
What’s even the point…
It's all in aid of some streetsweeper being able to add "contributor to X, Y, Z projects!" to their GitHub résumé. Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
Are spelling correction PRs not welcome? I'd never put it on a résumé but if I'm following a README and I see a typo, I'll generally open a quick PR to fix that. (no automated tools, not scanning for typos, just a human reading a README).
> Are spelling correction PRs not welcome?
I think a true spelling correction would be welcome. But I think the kind of BS attitude the GP is describing often leads to useless reformatting/language tweaks, because the goal isn't to make the repo better, it's to make a change for making a change's sake with as little effort as possible.
A real improvement to the documentation or readme is welcome, even if it is only a minor improvement. I have put in small grammar PRs on some documentation myself.
On the flip side, I used to get a lot of spam PRs that made an arbitrary or net neutral change to our readme, presumably just to get "contributor" credit. That is not welcome or helpful to anyone.
> Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
I always find it a pity when someone has been clever and it's missed. "Spelling incorrection", get it? It's not a correction. It's the opposite.
Depends on the project.
This is why I refuse to interact with people who use AI. You have to invest orders of magnitude more time to review their hallucinated garbage than they used to generate it. I’m not going to waste my time talking to a computer.
https://knowyourmeme.com/photos/2054961-welcome-to-my-meme-p...
Ultimately it's always about someone somewhere getting a bigger boat.
> The reporter was banned and now it looks like he has removed his account.
I'm wondering (sadly) if this is a kind of defense-prodding phishing similar to the XZ utils hack, curl is a pretty fundamental utility.
Similar to 419 scams, it tests the gullibility, response time/workload of the team, etc.
We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr-based, so PRs can be validated in a web of trust?
I see it on forums now too. On Reddit, midsized subs that get a mild amount of traffic get these brand new accounts that post what reads like an amalgam of past posts. Often in help forums where people ask questions.
They have that uncanny thing where yes it's on topic, but also not how a human would likely ask exactly AND they always let slip in just a hint of human drama that really draws in other users...
They almost never respond to comments, when they do it's pretty clear they're AI (much like the response in this story).
I've unsubscribed from a good half dozen subs in the past few months because of it.
> they always let slip in just a hint of human drama
I haven't seen this so it's hard to visualize, but that seems potentially kind of tricky to do via AI. Is it actually tricky, are they done in a way where AI could conceivably do it on its own, or are those hints easy to drop in without disturbing the bulk of the slop?
The ones I have seen are like "my wife thinks I just need to blah but I think ..." or something
I think the AI might pick it up from the most popular / engaged with posts anyway.
This is essentially what teachers are dealing with every day, across the majority of their students, for every subject where its even remotely possible to use AI.
Education as a profession will have to change. Homework is pointless. Verbal presentations will have to become the new norm, or all written answers must be in the confines of the classroom... with pen and paper. Etc...
Why not deal with it the same way teachers have always dealt with students breaking the rules?
Wife is a high school history teacher - she would have to flunk 75% of her students. That is after proving they used AI, which would be extremely time consuming. It's very demoralizing for her; she has to spend a lot of time reading written essays generated by AI.
I think given time educators will adapt. Unless they get burnt out first. She could also just not give a shit and let them go on to be some college professor's problem, who could also not give a shit, and then they become our problem when they enter the workforce.
You can go back to requiring home assignments be written by hand. It won't completely fix the AI issue, because you can still ask ChatGPT and then rewrite it, but it helps because it's very tedious and time consuming, so the benefit is much lower.
If that is not enough, we may have to stop grading take-home papers. Which is a good idea anyway.
Maybe this is the answer to the fermi paradox. Intelligent life eventually invents the LLM, education collapses, dumb people empowered by technology destroy the environment.
The educators will adapt. They might use AIs to grade papers written by AIs.
Or, do what my kids' school did for some classes. Instead of teaching in class and then assigning homework, the homework will be reading a text book and classroom time will be spent writing essays by hand, doing exercises, answering questions, etc...
Then they should be flunked. It sucks, but parents need to enforce real learning. Schools can't be the sole "responsible" entity here. This is not the instructor's fault, and school admin needs to push back. We as a society need to push back, otherwise it all falls apart. Not everyone can be a blue-collar worker, and most of the BC workers I know tend to be decent at math, at least in those items that are part of their work, which they certainly couldn't have picked up if they didn't at least know some basic arithmetic.
> It's very demoralizing for her; she has to spend a lot of time reading written essays generated by AI.
I think that obvious solution is for them to write those essays in school.
This is why in-person tests are given, and bad grades result, as part of the student feedback/performance improvement loop. Maybe with AI as a new interloper we need to decrease "report card" times to 3 weeks (it was 9 weeks in my day) so that students have a shortened loop with parental review to help straighten out issues before they become real problems.
I mentioned it already: my sister resigned from her tenure-track position due to a fight over this. She was strict, students reported her, faculty wouldn't assign her her course of choice, and she resigned after a year and a half.
Because the US is assbackwards when it comes to education since the NCLB basically forces schools to make up metrics to prevent everyone losing their job through closure.
Because a teacher's job is to make sure N% of the class passes as much as it is to teach. If you fail half the class, you have failed as a teacher, because the administration will get parents coming in. If you force your class to do assignments by hand, especially in younger grades, more will fail, and you will be blamed and fired.
Because 1) you often can't prove it, and 2) there often isn't support from administration.
This must be _absolutely exhausting_.
Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
> Yeah, I guess if I was him, I would just close issues silently and ban the person who created them, if possible. I don't think I could be as nice as he is.
I think the shaming the use of LLMs to do stuff like this is a valuable public service.
Imagine the headline if a slop security report ends up real but the maintainer ignored it.
It’s a lose-lose situation for the maintainers
Thankfully in this case it's a curl vulnerability that doesn't use curl in the reproducer. That's a fairly safe call.
The problem is that AI can generate answers and code that look relevant and as if they were written by someone very competent. Since AI can generate a huge amount of code in a short time, it's difficult for the human brain to analyze it all and determine whether it's useful or just BS.
And the worst case is when AI generates great code with a tiny, hard-to-discover catch that takes hours to spot and understand.
True, that is in some cases a problem. Though in this case it was pretty clear cut. At least the obvious time wasters would get the treatment.
He’s been complaining about it a lot lately. I don’t blame him, it’s wasting an inordinate amount of time.
And it must be so demoralizing. And because they’re security issues they still have to be investigated.
Recently a customer pasted a complete ChatGPT chat in the support system and then wrote “it doesn’t work” as subject. I kindly declined.
I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy” situation where I started to doubt I added it and had to double check.
On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.
Overall it’s been a huge additional time drain.
I think it may be time to update the “what’s included in support” section of my software's license agreement.
I wonder what's going on in the minds of these people.
I would just be terribly embarrassed and not be able to look at myself in the mirror if I did shit like this.
> batuhanilgarr posted a comment (6 days ago) Thanks for the quick review. You’re right ...
On one hand, it's sort of surprising that they double down, copy and paste the response to the llm prompt, paste back that response and hope for the best. But, of course it shouldn't be surprising. This is not just a mistake, it's deliberate lying and manipulating.
This one is fun: https://hackerone.com/reports/2981245
> submitter: After thinking it through, I’m really sad to say that I’m not comfortable with disclosing the report . I’d prefer to keep it private . I hope this doesn’t cause any issues, and I appreciate your understanding."
> bagder: I am willing to give you some time to think about your life choices, but I am going to disclose this report later. For human kind, for research, for everyone to learn. Including you.
> submitter: After thinking it over, I’ve decided I’m okay with disclosing the report. Honestly, the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help .
A good one! I like how Daniel pretended like not disclosing it was an option just to show their reaction.
> "the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help"
I guess it worked, that's their only hackerone report they made from that account.
Well, in reality they probably abandoned it, created another account, and continued on with the script.
You're assuming the process isn't automated. This could just be an app that gets them more green dots on their Github page. This developer might not even be a real person. https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-t...
It’s a game to them, they don’t care.
They likely live somewhere where a $50 beg bounty would be half a year’s wages.
How do you feel about pixels in a video game? That’s all the maintainer is to them.
Resume hit piece, <failed/>.
What an absolute shambles of an industry we have ended up with.
Lord, did anyone else click through and read the actual attached "POC"? It's (for now) hilariously obviously doing nothing interesting at all, but my blood runs cold at AI potentially being able to generate more plausible-looking POC code in the future to waste even more dev time...
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug.
I don't even... You just have to laugh at this I guess.
Maybe submitters should pay a dollar to submit bugs, which they get refunded when the bug is confirmed?
Even if not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.
It might only need to be the first N reports from a given account. It's hard to imagine a spammer coming up with 5 legit security issues just to enable their GPT spamming operation. As long as they're not real-but-trivial typo types of issues...
That actually sounds like a good idea.
The amount of text alone in the original post was a giveaway.
LLMs produce so much text, including code, and most of it is not needed.
I keep talking to people who say stuff like “Claude wrote it all for me in a day”, but when I look at the code (or try it myself) it’s just so much useless code.
I recently asked for Python code to parse some data into a Pandas dataframe and got 1k lines plus tests. Whatever—I’m just importing it, so let’s YOLO and see what happens. Worked like a charm in my local environment. But I wanted to share this in a Jupyter notebook and for semi-complicated reasons I couldn’t import any project-local modules in the target environment. So I asked a much more targeted question like “give me a pandas one-liner to…” and it spit out 3 lines of code that produced the same end result.
The rest of that 1k lines was decomposing the problem into a bunch of auxiliary/utility functions to handle every imaginable edge case and adding comments to almost every line. It seems the current default setting for these tools is approximately the “enterprise-grade fizzbuzz” repo.
Sure, I’ll get better at prompting and whatever else to reduce this problem over time, but this is not viable when the costs are being pushed onto other people in the process today.
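For contrast, the targeted version was essentially this shape (a hypothetical stand-in, since the actual data doesn't matter):

    import io
    import pandas as pd

    # Stand-in for the real input: whitespace-separated records.
    raw = "name value\nfoo 1\nbar 2\n"
    df = pd.read_csv(io.StringIO(raw), sep=r"\s+")
    print(df)  # same end result as the 1k-line module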
I'm using ChatGPT to generate some code for me quite often, and my instructions prompt for all chats is slowly gaining more and more ways to say "Answer shortly". And I need to prompt defensively to repeatedly tell it to only do what I tell it.
Verification Status: CONFIRMED bullet points
Pity HN doesn't support all of those green checkboxes and bold bullet points. Every time I see these in supposedly human-generated documents and pull requests I laugh.
Jump to the point: https://hackerone.com/reports/3340109#:~:text=you%20did%20th...
this LLM-emboldened, mass Dunning-Kruger schizophrenia has gone from hilarious to sad to simply invoking disgust. this isn't even an earnest altruistic effort but some insecure fever dream of finally being acknowledged as a "genius" of some sort. the worst i've seen of this is some random redditor claiming to have _the_ authoritative version of a theory of everything and spamming it in every theoretical physics adjacent subreddit, claims to have a phd but anonymous and doesn't represent any research group/institution nor does the spam have any citations.
I only found a short but good article about such a case [0]; I'm sure someone has bookmarked the original. There are support groups for people like this now!
[0] https://www.bgnes.com/technology/chatgpt-convinced-canadian-...
This aspect is fascinating
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.
The good news is this AI stuff is not profitable. Big companies and VCs are subsidizing all this AI slop. If it had cost this moron $5 to generate the slop to file this bug they probably would not have bothered. Hopefully the bubble bursts soon, very hard, and forces the money people to figure out how to charge for these services.
It's kind of depressing to read Daniel's article[1] on this issue given the rising "popularity" of these lazy attempts at cash grabbing. I hope they manage to combat the AI slop in a way that does not involve fighting fire with fire though.
[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
I went through some of these and the one that stood out to me was this one
https://hackerone.com/reports/2823554
Where the reporter says, "Sorry didnt mean to waste anyones time Badger, I thought you would be happy about this.".
People using LLMs think they are helping but in reality, they are not.
There's this very weird idea that makes some people think that the maintainer must have a godawful workflow and if I just showed him the output of _my_ workflow, I can ~~save the day~~ fix a bug for them.
why don’t they just limit the report to 100 chars or something? “Here’s the input, here’s the output, here’s why it sucks”. Easy to make a maybe/no decision at a glance.
There's a phenomenon of fraudulent "security researchers" which has sprung out of the AI world. I became aware of it when someone on Discord posted a video covering an "ACE exploit" against users of a particular AI coding assistant. The exploit was this:
1. You accidentally grab a malicious config file for the assistant.
2. For some reason, you pipe this entire file into curl and then into bash.
3. This results in downloading and running a script that sets up malware.
It didn't make sense at any point, but I was gripped by a need to know the intention behind such a worthless video. It made sense when the host started shilling his online course about how to be a "security researcher" like him. Not only that, paying members get premium first access to the latest "disclosures" that professional engineers are afraid to admit exist. It's likely that the creator of this bug report is building up their own repertoire of exploits that have been ignored. Or perhaps they're trying to put their course knowledge to use.
These are the people that I imagine who go on forums and threads to announce how great AI is and are unable to provide any critique. They are blinded by ignorance.
The maintainer of curl recently gave a talk on AI slop in security reports, showing this and other examples:
https://youtu.be/6n2eDcRjSsk?si=p5ay52dOhJcgQtxo -- AI slop attacks on the curl project - Daniel Stenberg. Keynote at the FrOSCon 2025 conference, August 16, in Bonn Germany by Daniel Stenberg.
Plus, linked above, his blogpost on the same subject https://daniel.haxx.se/blog/2025/08/18/ai-slop-attacks-on-th...
This is the AI that is now writing the next version of your operating system.
This is the AI adding more "growth" to the US economy than all consumer spending combined.
"The AI bubble is so big it's propping up the US economy" - https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
This is the AI that half of HackerNews insists is "the future" that you'll be "left behind" from if you don't embrace it.
It is quite clear from this that a major implication of LLMs in today's society is making spam much, much more difficult to discern from actual content. I empathize with any website or project popular enough to draw this kind of attention, as it must be exhausting to deal with. I wonder if burnout rates in open source will climb even higher.
This reminded me of an interview I listened to by a startup founder talking about how his company integrates AI into all of its workflows. During the Q&A, he said that they could tackle any challenge simply by iteratively constructing better contexts for the AI. At first this sounded optimistic, but then it struck me that it was actually the ultimate pessimistic view of what current AI can do. His assumption seemed to be that software engineers have already implemented all the primitives humans will ever need. If that’s true, then the only task left is to phrase our instructions in the right way so the model can stitch those primitives together into a production system.
Say what you want about AI but it has undeniably made aspects of life worse. Unfortunately I foresee effective bug bounty programs that are open to the public going away because of the sheer amount of spam like this.
What is the motivation behind posting such things? I understand if there is a bug bounty program, does cURL have one?
So you can put this on your resume:
Open Source Contributor: - Diagnosed and fixed a key bug on Curl
Hah, the opposite of "AI" meaning "Actually Indian"... "Here's my CV, but actually all my work will be done by AI".
With apologies for stereotyping.
> "Here's my CV, but actually all my work will be done by AI".
What AI did you use? Because we want to hire that, not you.
If AI exceeds human capabilities, it won't be because it achieved "superintelligence"; it will be because it caused human abilities to degrade until the AI looks good in comparison.
What if it was some kind of "meta DDoS"? I mean, you can DDoS a server with simple requests, but here the effect is meta: it "DoS"es real humans. What if someone had something to gain from doing this? The tools to do this seem to all be there.
Yes they do. But I also wonder why curl seems to get so many of these. They don't have the highest payouts, they have been around for a long time so presumably most low-hanging fruit the AI has even a remote chance of finding was fixed, and they are well known to be on the lookout and strict about AI reports.
Might be easier for AI to generate this specific bullshit because of curl's long history.
https://curl.se/docs/bugbounty.html
More than half of the ads I get on Youtube these days are shovel-sellers with messages like
"We have reached a point where anyone can build an app without knowing how to code".
So obviously this kind of thing is going to happen. People are being encouraged by misleading marketing.
When I view this page without JavaScript (on my current small monitor), there is a micro-scroll vertically down to a banner which reads
> It looks like your JavaScript is disabled. To use HackerOne, enable JavaScript in your browser and refresh this page.
on a rgba(206, 0, 0, 0.3) background (this apparently interpolates onto pure white, so it's actually something like (240, 178, 178)), and otherwise nothing but blank white.
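That eyeballed value checks out: compositing onto a white page is just a per-channel lerp. A quick check in Python (assuming plain source-over alpha blending):

    # out = alpha * fg + (1 - alpha) * bg, per channel, over white
    fg, alpha, bg = (206, 0, 0), 0.3, 255
    print(tuple(round(alpha * c + (1 - alpha) * bg) for c in fg))
    # -> (240, 178, 178)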
I know I've complained about lack of "graceful degradation" before, but this seems like a new level.
Doing this should be a stain on your career. Since anons can't be named and shamed, and don't have careers, when do we start ignoring anons?
Also, if AI were so great we could trust it to review and test these CVE reports autonomously.
Once you are sure, these users should be shadow banned and an AI clone should keep them engaged. There isn't a way around it, no one deserves wasting their time on this spam.
There must be other corporate bounty programs they could DDOS with fake reports - doing it to curl surely won't yield much profit.
This is headline driven development. Sooner or later one of these reports will make it and there will be much rejoicing.
s/much rejoicing/pandora's box/ I guess.
the thing is, these people aren't necessarily wrong - they're just 1) clueless 2) early. the folks with proper know-how and perhaps tuned models are probably selling zero days found this way as we speak.
Professional security researcher here. I haven't really seen any models reliably find and exploit a 0day. Folks are at least TRYING to develop such models internally at the MIT lab where I work, but I'm not sure how far along they're coming yet. If a model is developed that can find a 0day or two (like Big Sleep, which I think maybe found some) I won't be surprised, but keep in mind fuzzers find thousands of real 0days with far less compute. These capabilities are of course worth looking into, but too many people are promising 0day oracles already, and that simply isn't where we are right now (or ever?). Sorry for bad grammar, typing quickly from my phone.
Maybe using curl for RLHF training/tuning before running it on the money sites.
I've been getting a lot of vulnerability "spam mail" recently that's clearly AI-generated.
It's a surprise every public bounty program isn't completely buried in automatic reports by now, but it likely won't take long.
Is there something about cURL that attracts these AI bots, or is it just better documented by them? I was going to say that this is old, but then I checked the date and realized that this is a new problem. Going down the rabbit hole, @bagder has made multiple posts [0][1] about AI slop.
[0] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s... [1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
My theory is that the cURL maintainer is independent and can respond forcefully to the "AI" nonsense.
Many other projects always have some corporate maintainers who are directed to push "AI" and will try to cover it up.
Curl is one of the most widely used/deployed libraries on the planet.
Wow even the followup response apologising for noise was full of noise.
It finishes "I can follow up ... blah blah blah ... should I find an issue"
Tone deaf and utterly infuriating.
For me the followup was the most obviously AI bit of writing - it's exactly the tone you get when the AI admits it's been utterly wasting your time.
Gaslighting at scale
GaaS, I like it!
Time pressures during sprints have started to change, forcing many people to use AI for everything. So when they interview for their next role they are rusty at some tasks.
Nice ending:
> The reporter was banned and now it looks like he has removed his account.
We are witnessing a new eternal summer, and the only way to stem the tide is to increase the amount of personal identifying information required to register, and then publicly shame these people as a warning to others. Maybe it is a good thing that I don't run any massively popular open source projects.
> the only way to stem the tide is
I see no evidence that's the only way. It's the only way that has crossed your mind as you were writing that message.
The two alternative solutions posed so far are 1) drop the bug bounty program, 2) charge for submissions.
Present an alternative.
It's not really a great ending. They or people like them just opened 3 new accounts. They just closed this one because it was tainted.
50/50 title here: it can be the app devs or it can be the reporter.
Imagine if these “benevolent” erroneous AI bug reports were part of a coordinated effort to map how vulnerable the projects and maintainers are, not the code. Slow response, no response is a likely target for take over or exploits, and accepting code without review is an indication of ease of injecting a vulnerability.
It's an interesting idea; I just wouldn't consider slow or no response a marker of a likely target. I think that's actually a good defense strategy against spam like this.
The line of thought is that a slow response makes the window of an eventually found vulnerability exploit longer, thus increasing its value.
It's funny to think that the criminal underworld that trades in zerodays also has to deal with AI spam like this.
This kind of thing isn't new. When I maintained a Google owned project on GitHub in the pre-LLM era someone submitted a slop PR "fixing" some tests, seemingly generated with some kind of static analysis tool. The description was clearly copy-pasted as well.
Still better than the old style reports from tools like that. They're typically commercial, and evidently came with some kind of licensing restriction that you couldn't give out their output.
So open source projects would get bug reports like "my commercial static analysis tool says there's a problem in this function, but I can't tell you what the problem is."
Yep. We also saw people run any fuzzing, scanning, etc. tool they could get their hands on and pretty much just paste the results in a bug report email, well before AI was a thing.
Completely useless 99% of the time but that didn’t stop a good number of them following up asking for money, sometimes quite aggressively.
What is the motivation for people doing this? Is it just for the lols or are they making money out of this somehow?
Possible bug bounty program.
I believe it's so they can put on their CV that they're contributors to XYZ famous projects.
I see this kind of thing with new hires at my company. It is becoming depressing: stupid, overly detailed but content-free issue comments; stupid code that does not do what it is supposed to do, but is a fucking lot of code for you to review.
Has anyone seen a good use of AI in the wild? Every example I see is honestly depressing, such as this.
If someone is using AI effectively, there's often no way to tell that they're using AI at all. Toupée fallacy etc.
Code? Not much, other than small functions/classes/prototype libraries to get started. But I've often used it to figure out where the code I was concerned with lived in huge project code bases, and to analyze where some of the edges of interfaces are, without digging for a few hours. Copilot can give a decent summary of where to look in a couple of seconds instead of a half hour of marking what I think are important sections and jumping around/grepping.
It is best used for yak shaving, in my opinion. Anything other than that and I feel like I cannot trust its output.
It's Code Generators all the way down...
If I were running HackerOne I would add a greylist filter for any submission with emojis in it.
Just filter messages with emojis.
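A greylist like that is only a few lines; a rough Python sketch (the character ranges are deliberately incomplete -- a production filter would want the full Unicode emoji data):

    import re

    # Covers the main emoji blocks plus the dingbat checkmarks that
    # slop reports love; misses plenty, fine for a greylist heuristic.
    EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF\u2B00-\u2BFF]")

    def greylist(report: str) -> bool:
        return bool(EMOJI.search(report))

    print(greylist("Verification Status: \u2705 CONFIRMED"))  # True
    print(greylist("plain prose, no checkboxes"))             # False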
I wonder how many university degrees have been earned using AI?
>printf("<unicode icon that HN seems to remove>
hello LLM
What is the motivation for people submitting these?
It’s really quite disappointing to see how fast just copy/pasting AI responses has proliferated, even into things that don’t benefit the copy/pasters. I’m doing an online course currently that has absolutely no benefit outside of learning the content (i.e. the certificate or whatever you get for completing means nothing) - yet classmates are very clearly just copying/pasting in responses for the exercises. How does that benefit them? More than any slop I’ve experienced thus far, this instance has made me the most worried/sad/pessimistic to see. If even people who are supposedly motivated to learn (why else would you pay for this course?) just revert to the easiest AI slop path, what hope do we have for avoiding it in stuff that more resembles “work”?
Last month curl developer Daniel Stenberg gave a talk, "AI slop attacks on the curl project": https://www.youtube.com/watch?v=6n2eDcRjSsk
This should be a t-shirt.
I’ve never read something that made my blood boil and blood pressure go through the roof before lol. Fuck!! Off!!!
What a professional interaction by badger. Kudos to him.
I wonder if there could be some kind of platform where you have to pay a $5 deposit or something to be able to post bugs. If you waste people's time with total nonsense then you lose the $5 and can no longer report. If it's less egregious than this, like they at least made a human effort, then maybe you keep some of the deposit. Although maybe $10 or $50 would be better.
On another note, I actually received a clearly GPT generated GitHub PR but eventually merged it. The changes were just doc changes but they seemed okay enough to add.
I feel like the goal is to get your name on a project, but I don't really lose anything from contributions like this
Given the stubbornness with which the slop continued in the replies, I’m starting to suspect that this is actually part of an ongoing experiment with AI in vulnerability R&D.
IMHO the first reply looks very automated and may even encourage them to do stuff like this, as this should've been a "fuck off" after a quick glance at the "Verified POC Code".
Why not verify these reports using LLMs first?
Once you're at the 12th month of trying to shoehorn LLMs in several use cases at your job, you'll find the answer to this question:
BECAUSE YOU CAN'T FUCKING TRUST THOSE LYING HALLUCINATING PIECES OF SHIT.
Clearly you just set an LLM to respond to messages that appear to be written by LLMs, then disregard that thread from that point on.
It's the same problem, false positives.
And false negatives too.
and this fucking slop is going to further pollute search engine results and future LLM models as it gets scraped up. Bleak future!
The emoji usage was another dead giveaway that this was done by an AI.
Same as watching someone in school try to translate between French and English with a dictionary, one word at a time, ignoring context...
But frankly security theatre was always going to descend into this with a thousand wannabe l33ts targeting big projects with LLMs to be "that guy" who found some "bug" and "saved the world".
Shellshock showed how bad a large part of the industry is. It was not a bug. "Fixing" it caused a lot of old tried and tested solutions to break, but hey, we as an industry need to protect against the lowest common denominator who refuse to learn better...
[dead]
[dead]
[flagged]
Idk what foolish use of AI has to do with immigrants
also: reminder that someone wasted his precious time creating an account and writing this ragebait comment just for a little bit of internet visibility
… Eh? This isn’t a person, it’s a magic robot.
Or are you suggesting that use of LLMs is confined to one country? I regret to inform you that it is not.
> account created 49 minutes ago
> exclusively spreading hate
So, the last one got banned, then?
Given the nature of every other social media platform I wonder how many of these racebaiting green accounts are themselves just AI bots.
I know people copy and paste comments from AI all the time now, but someone has to be full on botting HN at this point.
[flagged]
I assure you, people don't import millions of wannabe bugbounty hunters :)
[flagged]
[flagged]
[flagged]
Fun fact: I pasted that post into Claude and asked if it was AI. Claude totally trashed the post.
Perhaps for your next post you could ask Claude for the definition of a "fun fact".
Typical Claude answer to not see the irony and whine about it.
You know what was an actual issue, one that any AI would have correctly identified as an issue, but HackerOne dismissed? The 1.1.1.1 rogue certificate that later made the news...