Quillx is an open standard for disclosing AI involvement in software projects (github.com)

pointlessone 1 hour ago

How can you expect adoption of this scale if the AI side is so obviously negative? You might as well label the full AI option “I drown kittens” and go write a long post about how AI users won’t engage with your AI usage disclosure initiative in good faith.

To have any chance of adoption you have to be at least a little strategic. You may think AI is pure evil, but you have to make some concessions to AI users to incentivise participation. Try making it sound neutral throughout the spectrum, and use a neutral colour scheme. Yes, you're no longer telegraphing your position on AI so obviously, but you might get some useful information out of others.

Kiro 3 hours ago

> Crafted like poetry

I find this idea that humans all of a sudden write beautiful code very funny. Most code produced by hand is dirty and filled with ugly hacks. The argument might work against AI art but falls flat for programming.

crimsonnoodle58 6 hours ago

I would think the terms 'vibe coded', 'vibed', '100% vibes', etc. would be far more appropriate and better known than 'lorem ipsum' when it comes to generating code without reviewing the output.

If I saw that badge on someone's GitHub I would think it had something to do with lorem ipsum text generation, rather than anything to do with AI.

jannniii 5 hours ago

Nice idea, but the labels are a bit too opinionated for me.

Literally all my code has been “ghostwritten” for the past 18 months. That does not sound like something enterprise customers would like to hear and then have to figure out what it means.

atoav 4 hours ago

Do you know all the code you're shipping? If not, that is something I, as a customer, would certainly like to be informed of.

I think we really need hard liability for software engineering.

wewewedxfgdf 5 hours ago

We should assume projects have AI/LLM development assistance unless stated otherwise.

You may have noticed the absolutely vast array of AI development tools and assistants and IDEs and integrations - this is a reasonable indicator that developers are actually doing AI/LLM development.

big-chungus4 4 hours ago

The labels are not transparent: if you see this badge on a GitHub README, you won't be able to tell that it is about AI usage. I also don't find the labels particularly useful. When you are proposing an actual standard, you have to sit down and design it carefully and thoroughly, which I don't believe happened here. So it looks cool, but I don't think it's super useful.

varun_ch 7 hours ago

A little ironic that the README, SPEC.md and the poster's comment here all smell of LLM writing!

jofzar 5 hours ago

They put a 3/5 for themselves; at least they are honest.

rzmmm 5 hours ago

In academia there is a widespread practice of simply including a sentence in articles about how AI was used. It's simple and it works well.

nunobrito 3 hours ago

The color code is the other way around.

Red should mean manual human review without automated tools or AI.

Green for proper AI review, with tests verifying the expected inputs/outputs.

hedora 8 hours ago

(1) Why?

(2) The code I write with AI doesn’t fit on the scale.

atoav 4 hours ago

Ad 1: If my software vendor doesn't even know their own code, I'd like to know. This may be an issue of liability.

charcircuit 6 hours ago

Considering that the more AI you use, the redder it gets, along with the demeaning language on the scale, I assume it is mainly for anti-AI people to virtue-signal about not using it.

retsibsi 5 hours ago

It looks a bit like that, but they gave their own repo a 3/5 rating and it's full of obvious LLMisms, so I think they're not totally anti-AI and are trying to be evenhanded.

To me, the metaphor doesn't really work, especially at level 5. Lorem Ipsum is literal placeholder text which is basically the same everywhere it's used; I don't see what that has to do with vibe code. (Also the verse/prose thing seems pretty wanky to me, but I admit that's just a matter of taste.)

peteforde 8 hours ago

Given the reality that there are a lot of people who [fairly or unfairly] judge anything that uses "AI" in a decisively negative way, what possible advantage is there in giving people a reason to dismiss your project without evaluating it on its own merits?

jimbooonooo 7 hours ago

Is honesty an important quality to you? Does lying by omission concern you for the people and projects you choose to interact with?

retsibsi 5 hours ago

I'm with you on honesty, and I've certainly seen people tacitly trying to pass off AI output as human-written. But I think we've reached a point where, in lots of contexts, we can't reasonably assume human authorship by default any more. (We can reasonably want it and push for it! I just mean we can't literally expect it.) So even when we would prefer openness, I think 'lying by omission' is too harsh a characterisation for people who choose not to declare AI authorship but don't actively try to cover it up.

the_biot 5 hours ago

Honesty is the whole problem with ideas like this. If you're the kind of deluded idiot that considers LLM-generated crap "your code", stating exactly how little you had to do with it is not in your advantage. Far easier to maintain the lie.

9864247888754 6 hours ago

Nobody owes you any transparency about the way they develop their software.

jimbooonooo 6 hours ago

They sure don't, but often insight into/alignment with the story and development process makes all the difference for which projects people choose to contribute to.

eschaton 3 hours ago

They do if they want me to use it.

qainsights 10 hours ago

AIx is an open standard for disclosing AI involvement in software projects - expressed through the language of authorship. Not a judgment. Just transparency.
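
For anyone who hasn't clicked through: a disclosure like this typically lands in a project's README as a badge plus a one-line statement. The badge URL, level wording, and file layout below are hypothetical illustrations, not taken from the actual spec:

```markdown
<!-- Hypothetical sketch of an authorship-disclosure badge in a README -->
[![AI involvement: 3/5](https://example.com/badge/ai-3of5.svg)](https://example.com/spec)

> **AI involvement: 3/5** — drafted with LLM assistance; reviewed and edited by the maintainers.
```

The point is that the disclosure is machine-findable (a badge in a predictable place) and human-readable (one sentence), rather than buried in commit history.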

rvz 9 hours ago

Just one tiny issue:

"AIX®" is also a registered trademark of International Business Machines Corporation (IBM) and used for the AIX® operating system that is still in use today.

I would be careful to use that name.

qainsights 9 hours ago

yjftsjthsd-h 4 hours ago

> Every line deliberate. Crafted like poetry. Human-authored entirely.

This is perhaps overly generous to pure-human authorship. These days, when I write code I like to think I know what it does. I still wouldn't call most of it "crafted like poetry". When I was just learning though, I wrote plenty of code 100% without AI (in fairness, it didn't exist) that I had little understanding of, and it was only "deliberate" in that I deliberately cajoled it into passing the tests.

Or put differently: don't conflate human authorship with quality; people can write garbage without needing AI help.

zimpenfish 3 hours ago

> don't conflate human authorship with quality

If you're thinking "crafted like poetry" implies any kind of existential "quality", I'd like to introduce you to William McGonagall[0] and you will swiftly and powerfully be disabused of any "poetry -> quality" confusions.

[0] https://en.wikipedia.org/wiki/William_McGonagall

anematode 3 hours ago

Indeed. I like to call this form of code organic slop, and it predates the LLM era.

easygenes 6 hours ago

This is very similar to a project I created, https://github.com/Entrpi/autonomy-golf, which I have been using as a gamified development process on active projects.

The key insight was to not just handwave or guess at how much is automated, but to make evaluation and review part of the continuous development loop. I first implemented it in https://github.com/Entrpi/autoresearch-everywhere, where I used it to deliberately automate more, in the spirit of Karpathy's upstream (and to very good effect: I have some of the best autoresearch results anywhere, and the platform is far more robust than when it started).

lsh0 4 hours ago

I like these efforts to neatly categorise the extent of AI usage in a project. I do think they need some kind of neutrally worded classification, but this and the original post are fine attempts at this emerging niche. It's important to some of us, and I look forward to whatever ends up being adopted.

pbronez 8 hours ago

Neat idea. I like the five-point scale.