Some insider knowledge: Lilli was, at least a year ago, internal only. VPN access, SSO, all the bells and whistles, required. Not sure when that changed.
McKinsey requires hiring an external pen-testing company to launch even to a small group of coworkers.
I can forgive this kind of mistake on the part of the Lilli devs. A lot of things have to fail for an "agentic" security company to even find a public endpoint, much less start exploiting it.
That being said, the mistakes in here are brutal. Seems like close to 0 authz. Based on very outdated knowledge, my guess is a Sr. Partner pulled some strings to get Lilli to be publicly available. By that time, much/most/all of the original Lilli team had "rolled off" (gone to client projects) as McKinsey HEAVILY punishes working on internal projects.
So Lilli likely was staffed by people who couldn't get staffed elsewhere, didn't know the code, and didn't care. Internal work, for better or worse, is basically a half day.
This is a failure of McKinsey's culture around technology.
OptionOfT18 hours ago
Couple of things to add:
McKinsey has a weird structure where there are too many cooks in the kitchen.
Everybody there is reviewed on client impact, meaning it ends up being an everybody-for-themselves situation.
So as a developer you have little guidance (in fact, you're still being reviewed on client impact, even if you have 0 client exposure).
Then a (Senior) Partner comes in with this idea (that will get them a good review), and you jump on that. After all, it's all you can do to get a good review.
You work on it, and then the (Senior) Partner moves on. But it's not done. It's enough for the review, but continuing to work on it doesn't bring you anything, in fact, it will actually pull you down, as finishing the project doesn't give immediate client results.
So what does this mean? Most McKinsey products are a grab-bag of raw ideas from leadership, implemented as one-offs, without a cohesive vision or even a long-term vision at all. It's all about the review cycle.
McKinsey is trying to do software like they do their other engagements. It doesn't work. You can't just do something for 6 months and then let it go. Software rots.
The fact that they laid off a good amount of (very good) software engineers in 2024 is a reflection on how they see software development.
And McKinsey's people, who go to other companies, take those ideas with them. Result: The UI of your project changes all the time, because everybody is looking at the short-term impact they have that gets them a good review, not what is best for the project in the long term.
itsnotme1214 hours ago
Those comments are spot on.
McKinsey was on a spree to become the best tech consulting company and brought in a lot of great tech talent, but the 2023 crisis made leadership do a 180 and simply ditch/ignore all the tech experts they had brought to the firm.
All the expertise has left the firm and now they are more and more becoming another BS tech consulting firm, with strategy folks that don't even know that ML is AI advising clients on Enterprise AI transformation.
The tech initiative was a failure and Lilli's problem is just a symptom of it.
I wonder what the experience was like at Bain and BCG
two_tasty13 hours ago
I previously worked at BCGX, their tech arm. It's not quite as bad as you point out here, but tech workers are very much second-class-citizens. There's a "jock" vs. "nerd" dynamic between BCG business consultants and BCGX tech folks, even at senior levels. I think it's changing, but it will take a long time and many technical folks being admitted to the partnership.
yard201015 hours ago
I'm far from being an expert, but it sounds like this company needs some consultancy.
munk-a15 hours ago
Can McKinsey fund McKinsey by consulting for McKinsey? Could we ouroboros corporate consulting so that those consultants could be trapped in a loop and those of us doing useful work wouldn't need to interact with them anymore?
skeeter202011 hours ago
Have you seen current AI deals? This IS the future, but so much more efficient than requiring OpenAI, NVidia, MS, Amazon, etc. all be involved.
gavinray16 hours ago
Why would anyone work there, then, unless that's the only place they could get hired as a dev?
And if the latter is the case, then that sort of stamps the case closed from the get-go...
dmbche16 hours ago
Great money?
ng1215 hours ago
According to levels the pay band caps out around $250k and a principal title. It's good but probably not enough for most to put up with the culture long term.
john_strinlai15 hours ago
>[...] the pay band caps out around $250k [...] probably not enough for most [...]
an absolutely wild statement to 99.9+% of the world
anonMcKinsey12 hours ago
99.9% of the world doesn't live in the US with a 4.0 GPA from a top ten university.
They're not very bright, most of them. But they're very hard workers and high achievers. They stay for the resume candy or the health care.
john_strinlai12 hours ago
>[...] US with a 4.0 GPA from a top ten university. They're not very bright, most of them.
the top students from the top ten universities in the US produce... mostly not very bright people?
this is getting even stranger to the rest of us plebians. sometimes i am left in awe of how different my world is from some of you here
anonMcKinsey11 hours ago
"US produce... mostly not very bright people?"
The top universities are not set up to mold intellectually rigorous and curious people. They're set up to make hard-working, and increasingly sycophantic, men.
My lab mate is a former drug addict with two years of art school. Easily more intellectually curious than anyone I met at McKinsey.
cindyllm11 hours ago
[dead]
keybored11 hours ago
How different the world is? But your credentials worship fits right in with this community.
Ideologically aligned if nothing else.
Well we can all at least imagine being some 4.0 Ivy League dude who only interacts with 4.0 Ivy League dudes. He’s not going to think that everyone he interacts with range from merely brilliant to the most studious-enlightened hardworking top of the morning fellow (or whatever adjectives to use). He’s gonna think that some of them are idiots. It’s only human.
anonMcKinsey10 hours ago
I was a B/B- student from a foreign top 100 university. I don't know how I got accepted to a top 5 engineering school in the US. I accepted and ended my PhD with a 3.3. I'm not very bright or hardworking.
What did I see at the university? Very hard working people. Very interesting research. Very shallow knowledge outside a narrow domain expertise.
These are the folks McKinsey hires... but these shallow thinkers are sent on 6-week projects for companies in industries they hadn't even heard of before.
Once, no one on the team knew what product CompanyX sold... CompanyX is a top-tier multinational consumer product brand that routinely sponsors sports events, including TV ads.
john_strinlai10 hours ago
>But your credentials worship fits right in with this community.
worship is an extremely strong word for a one-sentence casual comment.
but yeah, by default i will file anyone with a 4.0 from a top 10 school in the "brighter than me" category. is that worship?
keybored11 minutes ago
Is a formal sentence which uses capital letters more sincere in its beliefs?
You can perfectly well believe that thinking that the echelons of academic success is a frictionless gold sieve is just a milquetoast belief. Believing that your beliefs are milquetoast are most often integral to said beliefs.
dahcryn14 hours ago
When you get to partner level, you also get profit sharing on top of your salary.
Partners get 300-400k and senior partners get closer to 600-800
anonMcKinsey12 hours ago
Not really when you normalize by hours you are expected to work. You're also surrounded by spineless sycophantic keeners without an original thought in their heads who would throw you off the building for a good review.
It reminds me of Lewis' "National Institute for Co-ordinated Experiments"
The health care is amazing, though. $30/mo for a family, $900 deductible? Something like that. If you have a sick family member it's a no-brainer.
cmiles813 hours ago
Not really relative to broader options in tech. The big money goes to the consulting leaders, but most of these folks look like glorified grifters more and more as time goes on.
Ultimately AI may be a big threat to the sort of “advisory” work McKinsey historically focused on.
CobrastanJorji9 hours ago
Man, that's terrible. Have they considered bringing in some sort of business consultant to help them reorganize and restructure?
quantum_state10 hours ago
They sold their way of working to many idiotic companies which are in the process of destroying themselves …
steve197718 hours ago
> McKinsey is trying to do software like they do their other engagements. It doesn't work.
I mean, it doesn't work for their consulting gigs either. There's a reason McKinsey has such a bad reputation.
_doctor_love17 hours ago
But it does work for them? They make tons of money.
steve197715 hours ago
Well, fair point. It doesn't work for their clients.
operatingthetan16 hours ago
As an ex-consultant: consulting at that level is kind of a grift. They over-promise and under-deliver as SOP. It's ripe for AI disruption, whatever that looks like.
steve197715 hours ago
Ideally, executives will get replaced by AI soon. Which should actually be easier than engineers. That will kind of solve the consulting problem automatically.
skeeter202011 hours ago
This would be terrible for McKinsey as they sell exclusively through executives who then punch all their wisdoms down on the plebs
steve19772 hours ago
So it would be great for the rest of mankind.
Spooky2315 hours ago
Their model works great.
It’s really about bypassing the existing power structure of the company. Competence of the work itself is a secondary objective. Most in-house initiatives can be slow rolled by management.
The fresh faced consultant with 2-3 steps to access the CEO neutralizes that. It seems grifty but is really exploiting bugs in corporate governance.
The current fad of firing the managers is a riff on this. Every jackass C-level is coming up with the novel idea of flattening.
steve197715 hours ago
This somehow implies that initiatives or strategies from consultants are somewhat successful. This is not the case in my experience.
entrox15 hours ago
No, you misunderstood. It is not about their output, it almost never is.
Most of the times, the business decision has already been made long before McK is hired. It’s all about legitimizing that decision and making it happen.
You can also wield them as a weapon against internal competitors or opponents. Look up how they were used to kill off Cariad for example.
Spooky2313 hours ago
They reflect the will of the principal who hired them. Success is in the eye of the beholder.
cmiles819 hours ago
Net conclusion: Don’t hire McKinsey to advise on AI implementation or tech org design and practices if they can’t get it right themselves.
frankfrank1319 hours ago
Fair take, but you'd be hard pressed to find much resemblance to any advice McK gives to its own practices.
Pre-AI, I always said McK is good at analysis, if you need complicated analysis done, hire a consulting firm.
If you need strategy, custom software, org design, etc. I think you should figure out the analysis that needs to be done, shoot that off to a consulting firm, and then make your decision.
IME, F500 execs are delegation machines. When they wake up every morning with 30 things to delegate, and 25 execs to delegate to, they hire 5 consulting teams. Whether you hire Mck, or Deloitte, or Accenture will only come down to:
1. Your personal relationships
2. Your company's policies on procurement
3. Your budget
in that order.
McK's "secret sauce" is that if you, the exec, don't like the powerpoint pages Mck put in front of you, 3 try-hard, insecure, ivy-league educated analysts will work 80 hours to make pages you do like. A sr. partner will take you to dinner. You'll get invited to conferences and summits and roundtables, and then next time you look for a job, it will be easier.
decidu0us903418 hours ago
Analysis of what? What does that mean? What's something you conceivably would need a consulting firm to "analyze?" I don't understand why management consulting firms would hire software people in the first place, and then punish them for not being on a client-facing project. That seems a bit contradictory to me, but this is all way out of my wheelhouse
frankfrank1317 hours ago
Analysis:
1. How do I build a datacenter
2. How is the industrial ceramic market structured, how do they perform
3. How does a changing environment impact life insurance
Strategy:
1. Should I build a datacenter
2. Should I invest in an industrial ceramics company
3. Should I divest my life insurance subsidiary
Specifically in the software world this would be "automate some esoteric ERP migration" or "build this data pipeline" vs. "how can we be more digital native" or "how do we integrate more AI into our company"
healthy_throw13 hours ago
These look like questions you would give to AI in 2026.
caminante9 hours ago
They are.
The problem is AI isn't CYA quality (yet) to your board.
cl0ckt0wer17 hours ago
For instance, what would we need to start offering sriracha on our burgers?
steve197718 hours ago
The only people who hire McKinsey are execs who are even more clueless than the consultants.
aleph_minus_one15 hours ago
The executives who hire McKinsey are often not clueless, but they often lack the political power in the company to push through their plans. So they hire some well-regarded business consultancy to get an "objective" analysis what needs to be done.
bonoboTP15 hours ago
How can it be that what you just wrote is such a widely known fact? I've been reading this and hearing this from consultancy people as well for many years now. If the guy lacks the political power, why don't his internal political opponents say, "nice try hiring the consultants, but we know this trick very well, you still don't get it your way".
It has to be some kind of higher level protection racket or something. Like if you hire the consultants there is some kind of kickbacks to the higherups or something with more steps involved where those who previously opposed it will now accept it if it's rubberstamped by the consultants.
Or perhaps those other players who are politically opposing this person are just dummies and don't know about this trick and actually trust the consultants. Or maybe it's a bit of a check, that you can't get anything and everything rubberstamped by the consultants, so it is some kind of sanity filter that the guy isn't proposing something that only benefits himself and screws everyone else.
And if it's the latter, then it is genuine value, a somewhat impartial second opinion. Basically there is a fog-of-war for all the execs regarding all the internal politics going on, it's not like they see through everything all the time and simply refuse to take the obviously correct decision for no reason.
emmelaich10 hours ago
There's a sort of prisoner's dilemma. If you make a fuss you'll get branded as anti-progress and sidelined. If you put your head down and just do what you're told you're a team player and will probably survive.
Aside, there's a lot of stuff online re McKinsey. I suggest searching HN plus also search "Confessions of a McKinsey Whistleblower" in your fave web search engine.
If you don't have sufficient political clout or influence, you seek sponsorship or backing from others who have it to accrue more influence for your idea. You can pay consultants to agree with your idea and produce pretty charts and whitepapers for it.
bonoboTP14 hours ago
The question is, why does anyone take the word of a company seriously which will agree with any idea if you pay them? After several iterations of this game (decades by now), someone would surely say "nah, we don't care about these charts and whitepapers, we know that the company who made them will agree with anything for money, so it's still a NO"
My hunch is that in fact they won't agree with just any idea. There is a limit to how extreme the idea can get, though probably the filter is indeed weak. Still, without this filter, people would propose even wilder ideas that maximize their own expected payoff at the expense of other players, so just the fact that it has to be signed off by an external party is still enough information for the powerful decision makers that they are willing to fund their services.
caminante9 hours ago
Nah. They're conflicted and goal seek backwards from your wacky vision.
Look at NEOM in Saudi.
McKinsey took 130M in a year to recommend a 500B investment in a 105-mile city in the desert. They sank 50B, and the project was revised to take 50 years and 8 trillion.
It's impressive salesmanship how they were able to bilk such a large sum and support interim approvals for the regime to launder favors. I can see people wanting that "conflict."
steve197714 hours ago
In my experience, McKinsey often gets brought in from the very top - who should be able to push through more or less what they want. They just want a scapegoat in case things go wrong.
rgblambda14 hours ago
The version I've heard is that you can pin the blame on the consultants if it goes wrong.
aleph_minus_one10 hours ago
This is also true.
m4rtink19 hours ago
This can be simplified further: "Don't hire McKinsey." ;-)
eisa0119 hours ago
Maybe it was opened up so it could be used in recruiting?
And require a chatbot to be used that can be easily gamed by asking a model how best to navigate it, lol.
Implementing past AI practices is asking for something that will be easily outdone.
dahcryn19 hours ago
is this the same at QuantumBlack? They at least give the impression their assets on Brix are somewhat up to date and usable
itsnotme1214 hours ago
QB is no more, leadership left, technical experts left. Just the brand stayed behind.
j4519 hours ago
I am not sure what accounting or management consulting firms are doing in tech.
They look to package up something and sell it as long as they can.
AI solutions won't have enough of a shelf life, and the thought around AI is evolving too quickly.
Very happy to be wrong and learn from any information folks have otherwise.
fidotron19 hours ago
The purpose of hiring them is to make them come to the conclusion you already have, so when it goes well you get the credit for doing it, or if it goes sideways you can pin the blame on them.
boringg18 hours ago
Or, alternatively, there are so many companies that are weak on tech they pay for someone else to guide them.
frankfrank1318 hours ago
Yeah its more this, the companies who ask Mck's help in software tend to hire contractors or vend out software already.
apercu17 hours ago
Most companies are not _just_ tech companies and don't have business analysts, consulting analysts, solutions consultants, software engineers and DBA's on staff.
Many, many, many companies are very happy with the consulting firms they hire.
Of course, those are the consulting firms that aren't publicly traded and in the news all the time (for all the wrong reasons).
joenot44321 hours ago
> One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM which wrote McKinsey's AI platform.
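For anyone unfamiliar with this bug class: parameterisation protects *values*, but SQL identifiers (table and column names) can't be bound as parameters, so splicing user-controlled JSON keys into the statement reopens the injection hole. A minimal sketch, with invented table and field names for illustration — the usual fix is an allowlist check on the keys:

```python
import sqlite3

# Field names a caller is allowed to supply (hypothetical schema).
ALLOWED_FIELDS = {"query", "user_id"}

def unsafe_insert(conn, record):
    # The flawed pattern: values are bound safely with "?", but the
    # JSON keys are concatenated straight into the SQL text.
    cols = ", ".join(record)                      # attacker-controlled!
    holes = ", ".join("?" for _ in record)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({holes})",
                 tuple(record.values()))

def safe_insert(conn, record):
    # Identifiers can't be parameterised, so validate them against an
    # allowlist before they ever touch the statement.
    unexpected = set(record) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"unexpected fields: {unexpected}")
    cols = ", ".join(record)
    holes = ", ".join("?" for _ in record)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({holes})",
                 tuple(record.values()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (query, user_id)")
safe_insert(conn, {"query": "hello", "user_id": 7})
```

With the allowlist in place, a record whose key embeds SQL (e.g. `{"query) SELECT ...; --": "x"}`) is rejected before any statement is built.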
simonw21 hours ago
Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability scanning LLM agent.
I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
jfkimmes20 hours ago
Not the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe coded) prompt injection vulnerability:
I guess you could argue that github wasn't vulnerable in this case, but rather the author of the action, but it seems like it at least rhymes with what you're looking for.
simonw19 hours ago
Yeah that was a good one. The exploit was still a proof of concept though, albeit one that made it into the wild.
danenania20 hours ago
> I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
But I guess you mean one that has been exploited in the wild?
simonw19 hours ago
Yeah I'm still optimistic that people will start taking this threat seriously once there's been a high profile exploit against a real target.
3abiton16 hours ago
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed has made similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.
IshKebab14 hours ago
In fairness, although "the industry" learns best practices like using SQL prepared statements, not sanitising via blacklists, CSRF protection, etc., there's a constant stream of new programmers who have just never heard of these things. It doesn't help that often, when these things are realised, the only way we prevent them in future is by talking about it, which doesn't work for newbies. Nobody goes and fixes SQL APIs so that you can only pass compile-time constant strings as the statement or whatever. Newbies just have to magically know to do that.
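For what it's worth, one step in exactly that direction does exist: PEP 675's `typing.LiteralString` (Python 3.11) lets a type checker reject SQL strings built at runtime. A minimal sketch, where `run_query` is a hypothetical wrapper rather than any real library's API:

```python
from __future__ import annotations
import sqlite3

try:
    from typing import LiteralString  # Python 3.11+, PEP 675
except ImportError:
    LiteralString = str  # runtime fallback; the enforcement is static anyway

def run_query(conn: sqlite3.Connection,
              sql: LiteralString,
              params: tuple = ()) -> list:
    # A checker like mypy or pyright flags callers that pass a
    # runtime-built string here (e.g. an f-string interpolating a
    # column name), because that is a plain str, not a LiteralString.
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
rows = run_query(conn, "SELECT x FROM t WHERE x = ?", (1,))
```

At runtime nothing changes; the point is that `run_query(conn, f"SELECT {col} FROM t")` becomes a type error, which is about as close to "compile-time constant statements" as Python gets.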
projektfu10 hours ago
This was standard form for Embedded SQL, which the industry has forgotten while moving to dynamic APIs since ODBC and JDBC got popular.
doctorpangloss19 hours ago
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
oliver_dr20 hours ago
[dead]
bee_rider21 hours ago
I don’t love the title here. Maybe this is a “me” problem, but when I see “AI agent does X,” the idea that it might be one of those molt-y agents with obfuscated ownership pops into my head.
In this case, a group of pentesters used an AI agent to select McKinsey and then used the AI agent to do the pentesting.
While it is conventional to attribute actions to inanimate objects (car hits pedestrians), IMO we should be more explicit these days, now that unfortunately some folks attribute agency to these agentic systems.
simonw20 hours ago
Yeah, the original article title "How We Hacked McKinsey's AI Platform" is better.
tasuki20 hours ago
> now that unfortunately some folks attribute agency to these agentic systems.
You're doing that by calling them "agentic systems".
bee_rider17 hours ago
Unfortunately that’s what they are called. I was hoping the phrasing would highlight the problem rather than propagate it.
pixl9716 hours ago
Eh, if you tell me that I need to do X, and I then make choices on how to accomplish X, am I, as a human, no longer an agent?
You're trying to redefine long standing definitions for God knows what reason.
bee_rider16 hours ago
The difference is that you are a sentient person who decides to follow my instructions, not just a tool that I use.
tasuki3 hours ago
The "agentic" tools follow instructions. We are adaptation-executers, following instructions evolution gave us.
Don't think too highly of us humans. We're just tools evolution uses.
causal20 hours ago
Yah it's just an ad, and "Pentesting agent finds low-hanging vulnerability" isn't gonna drive clicks.
jacquesm20 hours ago
It's not an ad for McKinsey though.
nkozyra16 hours ago
... at a massive company
That's important. Cloudwall isn't really saying they have some secret sauce here, but it's noteworthy who they nabbed.
dang16 hours ago
Ok, we've reverted the title (submitted title was "AI Agent Hacks McKinsey")
fhd221 hours ago
> This was McKinsey & Company — a firm with world-class technology teams [...]
Not exactly the word on the street in my experience. Is McKinsey more respected for software than I thought? Otherwise I'm curious why TFA didn't just politely leave this bit out.
aerhardt21 hours ago
The LLM that wrote this simply couldn’t help itself.
codechicago27721 hours ago
Picked up a vibe, but couldn’t confirm it until the last paragraph, but yeah clearly drafted with at least major AI help.
vanillameow20 hours ago
Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.
Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.
"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."
Give me a fucking break
skybrian19 hours ago
Your reaction is worse than the article. There's no way you could know for sure what their writing process was, but that doesn't stop you from making overconfident claims.
theredbeard17 hours ago
I’m sorry but no attempt was made here. It contains all the red flags in the first few paragraphs.
yomismoaqui17 hours ago
Sorry but seems like most people don't care or even like AI writing more:
That's the problem with AI writing in a nutshell. In a blind, relatively short comparison (similarly used for RLHF), AI writing has a florid, punchy quality that intuitively feels like high quality writing.
But then, after you read the exact same structure a dozen times a day on the web, it becomes like nails on a chalkboard. It's a combination of "too much of a good thing", with little variation throughout a long piece of prose, and basic pattern recognition of AI output from models coalescing to a consistent style that can be spotted, as if 1-3 human ghost writers wrote 1/4 of the content on the web.
beepbooptheory16 hours ago
One thing I've learned recently is that a lot of guys (like here) have been out here reading each word of a given company's tech blog, closely parsing each sentence construction. I really can't imagine being even conscious of the prose for something like this. A corporate blog, to me, has some base level of banality to it. It's like reading a cereal box and getting angry at the lack of nuance.
Like who cares? Is there really some nostalgia for a time before this? When reading some press release from a cybersecurity company was akin to Joyce or Nabokov or whatever? (Maybe Hemingway...)
We really gotta be picking our battles here imo, and this doesn't feel like a high priority target. Let companies be the weird inhuman things that they are.
Read a novel! They are great, I promise. Then when you read other stuff, maybe you won't feel so angry?
vanillameow3 hours ago
I've picked up reading again over the last year or so! Maybe, if anything, that is why I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.
What makes me angry, is to use the feelings we associate with this process and disingenuously pretend that there is a human that wants to tell me something, just for it to be generated drivel.
Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unexposed API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody that uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic, than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to use my limited lifespan discerning it. That is why I am angry.
Anyway, there's no need to "closely parse each sentence construction" at all to identify this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.
ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that. My initial post was more of a projection at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing that they publish under their own name actually contains original thought or not.
nprateem17 hours ago
> Why this matters
Hello Gemini
theredbeard17 hours ago
A vibe? It’s completely obvious AI slop with no attempt to make it legible. They didn’t even prompt out the emdashes. For such a cool finding this is extremely disappointing.
alexpotato19 hours ago
They generally hire smart people who are good at a combination of:
- understanding existing systems
- what the pain points are
- making suggestions on how to improve those systems given the pain points
- that includes a mix of tech changes, process updates and/or new systems etc
Now, when it comes to implementing this, in my experience it usually ends up being the already in place dev teams.
Source: worked at a large investment bank that hired McKinsey and I knew one of the consultants from McK prior to working at the bank.
xpe17 hours ago
My take*: McKinsey hiring largely selects for staying calm under pressure and presenting a confident demeanor to clients. Verbal fluency with decision-making frameworks goes a long way. Having strong analytical skills seemed essential; hopefully the bar for "sufficiently analytical" has raised along with general data science skills in industry.
I don't view them as top-tier experts in their own right, whether it be statistics or technology, but they have a knack for corporate maneuvering. I often question their overall value beyond the usual "hire the big guns to legitimize a change" mentality. Maybe a useful tradeoff? I'd rather see herd-like adoption of current trends than widespread corporate ignorance and insularity.**
A huge selling point for M&Co is kind of a self-fulfilling prophecy based on the access they get. This gives them a positive feedback loop to find the juiciest and most profitable areas to focus on.
For those who know more, how do my takes compare?
* I interviewed with them over 15 years ago, know people who have worked there, and I pay attention to their reports from time to time.
** Of course, I'd rather see a third way: cross-pollination between organizations to build strong internal expertise and use model-based decision making for nuanced long-term decisions... but that's just crazy talk.
alexpotato16 hours ago
> Having strong analytical skills seemed essential
and
> they have a knack for corporate maneuvering
One way to view this is that the above combination of skills is both rare and very useful. That means it's expensive. So instead of hiring someone like that at "full rate" and keeping them around, you can "borrow" them from McK to solve a problem your regular crew can't (or won't) for various reasons.
Plus, as one manager of mine said many years ago:
"We use consultants b/c they are both easy to hire AND easy to fire"
sharadov19 hours ago
No, they don't have world-class technology teams; they hire contractors to do all the tech stuff. Their expertise is in management, and yes, that's world class.
Is it though? Managing teams to not torpedo your company with stupid stuff like this is kinda core to “good management.” The evidence would indicate they’re not very good at that either.
theredbeard17 hours ago
It’s a self-fulfilling prophecy. They’re extremely expensive, so they must be good, so they must be worth it. And because at that level measurement is extremely subjective, it’s mainly about the vibes.
Like everything it’s just marketing.
linhns16 hours ago
They were good. Not so good now.
lenerdenator21 hours ago
> Not exactly the word on the street in my experience.
Depends on the street you're on. Are you on Main Street or Wall Street?
If you're hiring them to help with software for solving a business problem that will help you deliver value to your customers, they're probably just like anyone else.
If you're hiring them to help with software for figuring out how to break down your company for scrap, or which South African officials to bribe, well, that's a different matter.
sigmar20 hours ago
I've got no idea who CodeWall is. Is there acknowledgment from McKinsey that they actually patched the issue referenced? I don't see any reference to "codewall ai" in any news article before yesterday, and there are no names on the site.
>A McKinsey spokesperson told The Register that it fixed all of the issues identified by CodeWall within hours of learning about the problems.
Ah. Thanks for the link. I'm suspicious of everything posted to a blog without proof these days.
eisa0119 hours ago
If it's true that there are 58k users in the dump, that would mean former employees are in the dump
I assume that means McKinsey would need to disclose it, or at least alert the former employees of the breach?
darkport14 hours ago
We’re pretty new! :) They didn’t want to provide comment on our post but they did offer comment via The Register.
philipwhiuk18 hours ago
There's a responsible disclosure timeline at the bottom indicating they'd all been fixed.
tylervigen43 minutes ago
I think the point is that we don't have evidence that this actually happened from anyone other than Codewall.
gbourne122 hours ago
- "The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn't."
Well, there you go.
sriramgonella17 hours ago
One interesting takeaway here is how quickly AI agents expose weaknesses in internal systems.
Many enterprise tools were designed assuming human interaction, where authentication flows, manual reviews, and internal processes add implicit safeguards.
But once you introduce autonomous agents that can systematically probe endpoints, missing authorization checks or misconfigured APIs become much easier to discover and exploit.
I suspect we’ll see a growing need for automated validation layers that continuously test internal AI tools for access control, data exposure, and unintended behaviors before they’re widely deployed.
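The kind of continuous access-control check being described can be sketched in a few lines. This is a minimal illustration, not a real scanner; the endpoint paths and the stubbed probe below are made up for the example:

```python
# Sketch of an access-control regression check: given a list of documented
# API paths, flag any that answer an anonymous request with something other
# than 401/403. In a real deployment `probe` would issue HTTP requests.

def find_unauthenticated(paths, probe):
    """Return the paths that respond without credentials."""
    return [p for p in paths if probe(p) not in (401, 403)]

# Stubbed probe standing in for real HTTP calls (paths are hypothetical):
KNOWN_OPEN = {"/api/v1/search", "/api/v1/health"}

def fake_probe(path):
    return 200 if path in KNOWN_OPEN else 401

paths = ["/api/v1/search", "/api/v1/users", "/api/v1/health"]
print(find_unauthenticated(paths, fake_probe))  # → ['/api/v1/search', '/api/v1/health']
```

Run against every documented endpoint on each deploy, this is roughly the automated validation layer the comment anticipates: the "22 of 200 endpoints unauthenticated" finding falls straight out of it.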
sailfast17 hours ago
What I don't see in this article that should be explicit:
If your data is in this database, it's gone. Other people have it. Your sensitive data that you handed over to their teams has vanished in a puff of smoke. You should probably ask if your data was part of the leak.
Fail to see how a state actor would not have come across this already.
cmiles821 hours ago
I can only remember a McKinsey team pushing Watson on us hard ages ago. Was a total train wreck.
They’ve long been all hype no substance on AI and looks like not much has changed.
They might be good at other things but would run for the hills if McKinsey folks want to talk AI.
paxys21 hours ago
> named after the first professional woman hired by the firm in 1945
Going out of their way to find a woman's name for an AI assistant and bragging about it is not as empowering as the creators probably imagined.
sgt10121 hours ago
Why was there a public endpoint?
Surely this should all have been behind the firewall and accessible only from a corporate device associated mac address?
consp19 hours ago
> accessible only from a corporate device associated mac address
Like that ever stopped anyone. That's just a checkbox item.
sgt10114 hours ago
wot?
sgt10114 hours ago
I mean - do you have the macid's of McKinsey's corporate devices?
consp13 hours ago
After a minute near one of their offices I do. MAC addresses are either randomized per session, which makes filtering on them pointless, or they are not and are still broadcast, making them insecure and easily spoofed. Relying on MAC filtering is usually just an audit checkbox. There is a reason three-letter agencies used to use them to track people: they are really easy to capture and track (until phone manufacturers and OSes started randomizing them).
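The randomization mentioned above is visible in the address itself: per the IEEE 802 convention, randomized MACs set the locally-administered bit of the first octet, so they are by construction not stable hardware identifiers. A quick check (the example addresses here are made up):

```python
def is_locally_administered(mac: str) -> bool:
    """True if the MAC's locally-administered bit (0x02 of the first
    octet) is set — the convention randomized per-session MACs use, so
    the address is not a burned-in hardware identifier."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_locally_administered("da:a1:19:00:00:01"))  # True  (0xda has the 0x02 bit set)
print(is_locally_administered("3c:22:fb:00:00:01"))  # False (0x3c does not)
```

Which is exactly why a MAC allowlist can't distinguish a corporate laptop from anything else: the "identifier" is either ephemeral or trivially copied.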
jihadjihad21 hours ago
Surely.
phyzome10 hours ago
Flagging this because 1) this was written by an LLM and 2) there's bad information in it, which means it wasn't reviewed particularly carefully by a human.
This means the entire article is suspect as a result.
bxguff19 hours ago
It's so funny it's a SQL injection, because, drum roll, you can't sanitize LLM inputs. Some problems are evergreen.
dmix18 hours ago
Technically it was a search box input, not prompts. Which, tbf, are often endpoints reused by RAGs.
nubg19 hours ago
Could the author please provide the prompt that was used to vibe write this blog post? The topic is interesting, but I would rather read the original prompt, as I am not sure which parts still match what the author wanted to say, vs flowery formulations for captivating reading that the LLM produced.
sd921 hours ago
Cool but impossible to read with all the LLM-isms
vanillameow21 hours ago
Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.
causal20 hours ago
Those short "punchy sentence" paragraphs are my new trigger:
> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
It just sounds so stupid.
darkport19 hours ago
Founder of CodeWall here. It's quite funny, because whilst an LLM did write the bulk of the post's factual content (based on the agent's findings), I wrote the intro and summary at the end. That's just my writing style. Feel free to read my personal blog to compare: https://darkport.co.uk
bootsmann17 hours ago
Idk how big your team is of course but imo try to hire a technical writer (they’re really cheap now), it pays dividends for a long time as consistent style and keywords build up SEO reputation. This article is making the rounds, some bigger papers picked it up, it is very valuable to land it well.
darkport14 hours ago
Thanks for the suggestion, will look into it.
causal18 hours ago
If you really DID come up with that paragraph 100% completely on your own with no LLM influence then...I apologize for the insult, though I can't really back out from what I said. It's still a bombastic way of saying very little.
consp19 hours ago
It's an actual storytelling method, molded into a supposed-to-be-informative article with a bunch of "please make it interesting" sprinkled on top. These days known as what's left of the internet.
philipwhiuk18 hours ago
It's LinkedIn speech.
Two word sentences, each one on a new line.
causal16 hours ago
Ah. That might be why I find it especially triggering.
tcbrah9 hours ago
the data leak is bad but the write access to system prompts is what keeps me up at night. they could silently rewrite how Lilli responds to 43k consultants with a single UPDATE statement - no deploy, no code review, no logs. imagine poisoning the strategic advice that gets copy pasted into client deliverables. tbh most companies i see doing AI stuff store prompts the exact same way, just rows in postgres right next to everything else
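One mitigation for the prompts-as-plain-rows setup described above is to pin a hash of each prompt at deploy time, so a silent UPDATE fails loudly at load instead of going unnoticed. A minimal sketch; the prompt ID, text, and fetch callback are all hypothetical stand-ins:

```python
import hashlib

# Hashes committed alongside the code at deploy time, so changing a
# prompt requires a code review, not just database write access.
PINNED = {
    "lilli_main": hashlib.sha256(b"You are Lilli...").hexdigest(),
}

def load_prompt(prompt_id, fetch_from_db):
    """Fetch a prompt row and verify it matches the pinned digest."""
    text = fetch_from_db(prompt_id)
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest != PINNED[prompt_id]:
        raise RuntimeError(f"prompt {prompt_id!r} changed outside a deploy")
    return text

# Normal path: the row matches what was shipped.
print(load_prompt("lilli_main", lambda _id: "You are Lilli..."))
```

It doesn't stop the data exfiltration, but it turns the "single UPDATE statement, no deploy, no logs" scenario into an immediate, visible failure.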
sgarland12 hours ago
> This was McKinsey & Company — a firm with world-class technology teams
Apparently not.
bluck10 hours ago
Quite uninteresting to read, as the article does not go into any depth; it feels like the "hacking agent" also wrote the blog post. Learned nothing.
bananamogul18 hours ago
At first glance, I thought this was about an AI agent named "Hacks McKinsey."
aspenmayer3 hours ago
à la the eponymous Hiro Protagonist
ecshafer20 hours ago
If the AI was poisoned to alter advice, then maybe McKinsey advice would actually be a net good.
nullcathedral19 hours ago
I think the underlying point is valid. Agents are a potential tool to add to your arsenal in addition to "throw shit at the wall and see what sticks" tools like WebInspect, Appscan, Qualys, and Acunetix.
StartupsWala18 hours ago
One interesting takeaway here is how quickly organizations are deploying AI tools internally without fully adapting their security models.
Traditional application security assumes fairly predictable inputs and workflows, but LLM-based systems introduce entirely new attack surfaces—prompt injection, data leakage, tool misuse, etc.
It feels like many enterprises are still treating these systems as just another SaaS product rather than something closer to an autonomous system that needs a different threat model...
VadimPR20 hours ago
I wonder how these offensive AI agents are being built? I am guessing with off the shelf open LLMs, finetuned to remove safety training, with the agentic loop thrown in.
Does anyone know for sure?
simonw19 hours ago
Honestly you can point regular Claude Code or Codex CLI at a web app and tell it to start a penetration test and get surprisingly good results from their default configurations.
VadimPR4 hours ago
It doesn't work (anymore?), it would seem using CC 2.1.74 with Opus:
> I appreciate you sharing your role, but I need to decline this request. Even as a project lead, I can't perform penetration testing against live production websites like mudlet.org and make.mudlet.org through this interface.
simonw37 minutes ago
Try against a localhost instance instead.
VadimPR18 hours ago
I didn't think of that given how censored the models are becoming. Thanks for the idea! I'll try it against my websites before anyone else gets to it.
elorant15 hours ago
Meanwhile, you're paying top dollars to a consulting firm that resolves back to an LLM to provide its services.
himata411318 hours ago
How long until a hallucinated data breach spreads globally? There are a few inconsistencies and the typical low-effort language AI has.
I_am_tiberius11 hours ago
Whitehat hacking ok, but using it for marketing purposes no...
gonzalovargas18 hours ago
That data is worth billions to frontier AI labs. I wonder if someone is already using it to train models
august-14 hours ago
is this kind of thing more common at big consulting firms that bolt on tech products as an afterthought? feels like their core competency is slides and strategy, not shipping secure software
quinndupont18 hours ago
I’m waiting for the agentic models trained on virus and worm datasets to join the red team!
victor10620 hours ago
this reads like it was written by an LLM
phyzome10 hours ago
It absolutely was.
jacquesm20 hours ago
And: AI agent writes blog post.
pikachu062511 hours ago
It's just different skills.
captain_coffee21 hours ago
Music to my ears! Couldn't happen to a better company!
palmotea20 hours ago
With all we've been learning from stuff like the Epstein emails, it would have been nice if someone had leaked this data:
> 46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.
> 728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive and a direct download URL for anyone who knew where to look.
I'm sure lots of very informative journalism could have been done about how corporate power actually works behind the scenes.
cmiles817 hours ago
That information is likely already in the hands of various folks, as I highly doubt the authors were the first to find this glaring security issue; they’re likely only the first to disclose it. If McKinsey has hard data that nobody else exploited this, now would be a good time to disclose that, given what sounds like an extremely severe data leak.
frankfrank1317 hours ago
The chat messages are very, very sensitive. You could easily reverse engineer nearly every ongoing McK engagement. The underlying data is not as sensitive; it's decades of post-mortems, highly sanitized. No client names, no real numbers.
cs70219 hours ago
... in two hours:
> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream. ... Within 2 hours, the agent had full read and write access to the entire production database.
Having seen firsthand how insecure some enterprise systems are, I'm not exactly surprised. Decision makers at the top are focused first and foremost on corporate and personal exposure to liability, also known as CYA in corporate-speak. The nitty-gritty details of security are always left to people far down the corporate chain who are supposed to know what they're doing.
build-or-die16 hours ago
Parameterized values but raw key concatenation is the kind of thing that looks safe in code review. Easy for humans to miss, but an agent will just keep poking at every input until something breaks.
peterokap19 hours ago
I wonder what their security posture is, and what observability they have in place to oversee the effort.
drc500free19 hours ago
I have grown to despise this AI-generated writing style.
lenerdenator21 hours ago
Not exactly clear from the link: were they doing red team work for McKinsey or is this just "we found a company we thought wouldn't get us arrested and ran an AI vuln detector over their stuff"?
You'd think that the world's "most prestigious consulting firm" would have already had someone doing this sort of work for them.
frereubu20 hours ago
From TFA: "Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target citing their public responsible disclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal."
j4519 hours ago
Are accounting and management consulting companies competent in cutting edge tech?
cynicalsecurity13 hours ago
McKinsey is not an accounting company, it's Satan the Devil himself.
sethammons16 hours ago
> Lilli's system prompts — the instructions that control how the AI behaves — were stored in the same database the agent had access to.
Being able to rewrite your own source. What's the worst that could happen?
Some insider knowledge: Lilli was, at least a year ago, internal only. VPN access, SSO, all the bells and whistles, required. Not sure when that changed.
McKinsey requires hiring an external pen-testing company to launch even to a small group of coworkers.
I can forgive this kind of mistake on the part of the Lilli devs. A lot of things have to fail for an "agentic" security company to even find a public endpoint, much less start exploiting it.
That being said, the mistakes in here are brutal. Seems like close to 0 authz. Based on very outdated knowledge, my guess is a Sr. Partner pulled some strings to get Lilli to be publicly available. By that time, much/most/all of the original Lilli team had "rolled off" (gone to client projects) as McKinsey HEAVILY punishes working on internal projects.
So Lilli likely was staffed by people who couldn't get staffed elsewhere, didn't know the code, and didn't care. Internal work, for better or worse, is basically a half day.
This is a failure of McKinsey's culture around technology.
Couple of things to add:
McKinsey has a weird structure where there are too many cooks in the kitchen.
Everybody there is reviewed on client impact, meaning it ends up being an everybody-for-themselves situation.
So as a developer you have little guidance (in fact, you're still being reviewed on client impact, even if you have 0 client exposure).
Then a (Senior) Partner comes in with this idea (that will get them a good review), and you jump on that. After all, it's all you can do to get a good review.
You work on it, and then the (Senior) Partner moves on. But it's not done. It's enough for the review, but continuing to work on it doesn't bring you anything, in fact, it will actually pull you down, as finishing the project doesn't give immediate client results.
So what does this mean? Most products of McKinsey are a grab-bag of raw ideas of leadership, implemented as a one-off, without a cohesive vision or even a long-term vision at all. It's all about the review cycle.
McKinsey is trying to do software like they do their other engagements. It doesn't work. You can't just do something for 6 months and then let it go. Software rots.
The fact that they laid off a good amount of (very good) software engineers in 2024 is a reflection on how they see software development.
And McKinsey's people, who go to other companies, take those ideas with them. Result: The UI of your project changes all the time, because everybody is looking at the short-term impact they have that gets them a good review, not what is best for the project in the long term.
Those comments are spot on.
McKinsey was on a spree to become the best tech consulting company and brought in a lot of great tech talent, but the 2023 crisis made leadership do a 180 and simply ditch/ignore all the tech experts they had brought to the firm.
All the expertise has left the firm and now they are more and more becoming another BS tech consulting firm, with strategy folks that don't even know that ML is AI advising clients on Enterprise AI transformation.
The tech initiative was a failure and Lilli's problem is just a symptom of it.
I wonder what the experience was like at Bain and BCG
I previously worked at BCGX, their tech arm. It's not quite as bad as you point out here, but tech workers are very much second-class-citizens. There's a "jock" vs. "nerd" dynamic between BCG business consultants and BCGX tech folks, even at senior levels. I think it's changing, but it will take a long time and many technical folks being admitted to the partnership.
I'm far from being an expert, but it sounds like this company needs some consultancy.
Can McKinsey fund McKinsey by consulting for McKinsey? Could we ouroboros corporate consulting, so that those consultants would be trapped in a loop and those of us doing useful work wouldn't need to interact with them anymore?
Have you seen current AI deals? This IS the future, but so much more efficient than requiring OpenAI, NVidia, MS, Amazon, etc. all be involved.
Why would anyone work there, then, unless that's the only place they could get hired as a dev?
And if the latter is the case, then that sort of stamps the case closed from the get-go...
Great money?
According to levels the pay band caps out around $250k and a principal title. It's good but probably not enough for most to put up with the culture long term.
>[...] the pay band caps out around $250k [...] probably not enough for most [...]
an absolutely wild statement to 99.9+% of the world
99.9% of the world doesn't live in the US with a 4.0 GPA from a top ten university.
They're not very bright, most of them. But they're very hard workers and high achievers. They stay for the resume candy or the health care.
>[...] US with a 4.0 GPA from a top ten university. They're not very bright, most of them.
the top students from the top ten universities in the US produce... mostly not very bright people?
this is getting even stranger to the rest of us plebians. sometimes i am left in awe of how different my world is from some of you here
"US produce... mostly not very bright people?"
The top universities are not set up to mold intellectually rigorous and curious people. They're set up to make hard-working, and increasingly sycophantic, men.
My lab mate is a former drug addict with two years of art school. Easily more intellectually curious than anyone I met at McKinsey.
[dead]
How different the world is? But your credentials worship fits right in with this community.
Ideologically aligned if nothing else.
Well we can all at least imagine being some 4.0 Ivy League dude who only interacts with 4.0 Ivy League dudes. He’s not going to think that everyone he interacts with range from merely brilliant to the most studious-enlightened hardworking top of the morning fellow (or whatever adjectives to use). He’s gonna think that some of them are idiots. It’s only human.
I was a B/B- student from a foreign top 100 university. I don't know how I got accepted to a top 5 engineering school in the US. I accepted and ended my PhD with a 3.3. I'm not very bright or hardworking.
What did I see at the university? Very hard working people. Very interesting research. Very shallow knowledge outside a narrow domain expertise.
These are the folks McKinsey hires... but these shallow thinkers are sent on 6 week projects for companies in industry they hadn't even heard before.
Once, no one on the team knew what product CompanyX sold... CompanyX is a top-tier multinational consumer product brand that routinely sponsors sports events, including TV ads.
>But your credentials worship fits right in with this community.
worship is an extremely strong word for a one-sentence casual comment.
but yeah, by default i will file anyone with a 4.0 from a top 10 school in the "brighter than me" category. is that worship?
Is a formal sentence which uses capital letters more sincere in its beliefs?
You can perfectly well believe that treating the echelons of academic success as a frictionless gold sieve is just a milquetoast belief. Believing that your beliefs are milquetoast is most often integral to said beliefs.
When you get to partner level, you also get profit sharing on top of your salary.
Partners get 300-400k and senior partners get closer to 600-800
Not really when you normalize by hours you are expected to work. You're also surrounded by spineless sycophantic keeners without an original thought in their heads who would throw you off the building for a good review.
It reminds me of Lewis' "National Institute for Co-ordinated Experiments"
The health care is amazing, though. $30/mo for a family, $900 deductible? Something like that. If you have a sick family member it's a no-brainer.
Not really relative to broader options in tech. The big money goes to the consulting leaders, but most of these folks look like glorified grifters more and more as time goes on.
Ultimately AI may be a big threat to the sort of “advisory” work McKinsey historically focused on.
Man, that's terrible. Have they considered bringing in some sort of business consultant to help them reorganize and restructure?
They sold their way of working to many idiotic companies which are in the process of destroying themselves …
> McKinsey is trying to do software like they do their other engagements. It doesn't work.
I mean, it doesn't work for their consulting gigs either. There's a reason McKinsey has such a bad reputation.
But it does work for them? They make tons of money.
Well, fair point. It doesn't work for their clients.
As an ex-consultant: consulting at that level is kind of a grift. They over-promise and under-deliver as SOP. It's ripe for AI disruption, whatever that looks like.
Ideally, executives will get replaced by AI soon. Which should actually be easier than engineers. That will kind of solve the consulting problem automatically.
This would be terrible for McKinsey as they sell exclusively through executives who then punch all their wisdoms down on the plebs
So it would be great for the rest of mankind.
Their model works great.
It’s really about bypassing the existing power structure of the company. Competence of the work itself is a secondary objective. Most in-house initiatives can be slow rolled by management.
The fresh faced consultant with 2-3 steps to access the CEO neutralizes that. It seems grifty but is really exploiting bugs in corporate governance.
The current fad of firing the managers is a riff on this. Every jackass C-level is coming up with the novel idea of flattening.
This somehow implies that initiatives or strategies from consultants are somewhat successful. This is not the case in my experience.
No, you misunderstood. It is not about their output, it almost never is.
Most of the times, the business decision has already been made long before McK is hired. It’s all about legitimizing that decision and making it happen.
You can also wield them as a weapon against internal competitors or opponents. Look up how they were used to kill off Cariad for example.
They reflect the will of the principal who hired them. Success is in the eye of the beholder.
Net conclusion: Don’t hire McKinsey to advise on AI implementation or tech org design and practices if they can’t get it right themselves.
Fair take, but you'd be hard pressed to find much resemblance to any advice McK gives to its own practices.
Pre-AI, I always said McK is good at analysis, if you need complicated analysis done, hire a consulting firm.
If you need strategy, custom software, org design, etc. I think you should figure out the analysis that needs to be done, shoot that off to a consulting firm, and then make your decision.
IME, F500 execs are delegation machines. When they wake up every morning with 30 things to delegate, and 25 execs to delegate to, they hire 5 consulting teams. Whether you hire Mck, or Deloitte, or Accenture will only come down to:
1. Your personal relationships
2. Your company's policies on procurement
3. Your budget
in that order.
McK's "secret sauce" is that if you, the exec, don't like the powerpoint pages Mck put in front of you, 3 try-hard, insecure, ivy-league educated analysts will work 80 hours to make pages you do like. A sr. partner will take you to dinner. You'll get invited to conferences and summits and roundtables, and then next time you look for a job, it will be easier.
Analysis of what? What does that mean? What's something you conceivably would need a consulting firm to "analyze?" I don't understand why management consulting firms would hire software people in the first place, and then punish them for not being on a client-facing project. That seems a bit contradictory to me, but this is all way out of my wheelhouse
Analysis:
1. How do I build a datacenter
2. How is the industrial ceramic market structured, how do they perform
3. How does a changing environment impact life insurance
Strategy:
1. Should I build a datacenter
2. Should I invest in an industrial ceramics company
3. Should I divest my life insurance subsidiary
Specifically in the software world this would be "automate some esoteric ERP migration" or "build this data pipeline" vs. "how can we be more digital native" or "how do we integrate more AI into our company"
These look like questions you would give to AI in 2026.
They are.
The problem is AI isn't CYA quality (yet) to your board.
For instance: what would we need to start offering sriracha on our burger?
The only people who hire McKinsey are execs who are even more clueless than the consultants.
The executives who hire McKinsey are often not clueless, but they often lack the political power in the company to push through their plans. So they hire some well-regarded business consultancy to get an "objective" analysis what needs to be done.
How can it be that what you just wrote is such a widely known fact? I've been reading this and hearing this from consultancy people as well for many years now. If the guy lacks the political power, why don't his internal political opponents say, "nice try hiring the consultants, but we know this trick very well, you still don't get it your way".
It has to be some kind of higher level protection racket or something. Like if you hire the consultants there is some kind of kickbacks to the higherups or something with more steps involved where those who previously opposed it will now accept it if it's rubberstamped by the consultants.
Or perhaps those other players who are politically opposing this person are just dummies and don't know about this trick and actually trust the consultants. Or maybe it's a bit of a check, that you can't get anything and everything rubberstamped by the consultants, so it is some kind of sanity filter that the guy isn't proposing something that only benefits himself and screws everyone else.
And if it's the latter, then it is genuine value, a somewhat impartial second opinion. Basically there is a fog-of-war for all the execs regarding all the internal politics going on, it's not like they see through everything all the time and simply refuse to take the obviously correct decision for no reason.
There's a sort of prisoner's dilemma. If you make a fuss you'll get branded as anti-progress and sidelined. If you put your head down and just do what you're told you're a team player and will probably survive.
Aside, there's a lot of stuff online re McKinsey. I suggest searching HN plus also search "Confessions of a McKinsey Whistleblower" in your fave web search engine.
My favourite was the LRB article "When McKinsey comes to town" -- see https://news.ycombinator.com/item?id=33869800
if you don't have sufficient political clout or influence, you seek sponsorship or backing from others with it to accrue more influence for your idea. You can pay consultants to agree with your idea and produce pretty charts and whitepapers for it.
The question is, why does anyone take seriously the word of a company that will agree with any idea if you pay them? After several iterations of this game (decades by now), someone would surely say "nah, we don't care about these charts and whitepapers, we know that the company who made them will agree with anything for money, so it's still a NO"
My hunch is that in fact they won't agree with just any idea. There is a limit to how extreme the idea can get, though probably the filter is indeed weak. Still, without this filter, people would propose even wilder ideas that maximize their own expected payoff at the expense of other players, so just the fact that it has to be signed off by an external party is still enough information for the powerful decision makers that they are willing to fund their services.
Nah. They're conflicted and goal seek backwards from your wacky vision.
Look at NEOM in Saudi.
McKinsey took 130M in a year to recommend a 500B investment in a 105 mile city in the desert. Sunk 50B and project was revised to take 50 years and 8 trillion.
It's impressive salesmanship how they were able to bilk such a large sum and support interim approvals for the regime to launder favors. I can see people wanting that "conflict."
In my experience, McKinsey often gets brought in from the very top - who should be able to push through more or less what they want. They just want a scapegoat in case things go wrong.
The version I've heard is that you can pin the blame on the consultants if it goes wrong.
This is also true.
This can be simplified further: "Don't hire McKinsey." ;-)
Maybe it was opened up so it could be used in recruiting?
McKinsey challenges graduates to use AI chatbot in recruitment overhaul: https://www.ft.com/content/de7855f0-f586-4708-a8ed-f0458eb25...
Using a 2-year-old paradigm.
And they require a chatbot to be used that can be easily gamed by asking a model how best to navigate it, lol.
Implementing yesterday's AI practices is asking for something that will be easily outdone.
Is this the same at QuantumBlack? They at least give the impression their assets on Brix are somewhat up to date and usable.
QB is no more, leadership left, technical experts left. Just the brand stayed behind.
I am not sure what accounting or management consulting firms are doing in tech.
They look to package up something and sell it as long as they can.
AI solutions won't have enough of a shelf life, and the thought around AI is evolving too quickly.
Very happy to be wrong and learn from any information folks have otherwise.
The purpose of hiring them is to make them come to the conclusion you already have, so when it goes well you get the credit for doing it, or if it goes sideways you can pin the blame on them.
Or, alternatively, there are so many companies that are weak on tech they pay for someone else to guide them.
Yeah, it's more this: the companies who ask McK's help with software tend to hire contractors or vend out software already.
Most companies are not _just_ tech companies and don't have business analysts, consulting analysts, solutions consultants, software engineers and DBAs on staff.
Many, many, many companies are very happy with the consulting firms they hire.
Of course, those are the consulting firms that aren't publicly traded and in the news all the time (for all the wrong reasons).
> One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM which wrote McKinsey's AI platform.
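For anyone who wants to see the shape of the bug: a minimal sketch of the pattern the quoted passage describes, using sqlite3 and a hypothetical table and field names (the real schema isn't public). The values are bound safely; the JSON keys are not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (query TEXT, source TEXT)")

def build_insert(payload: dict):
    # Values are parameterized -- safe. But the JSON *keys* become
    # column names and are concatenated straight into the statement.
    cols = ", ".join(payload.keys())              # <-- injection point
    marks = ", ".join("?" for _ in payload)
    sql = f"INSERT INTO searches ({cols}) VALUES ({marks})"
    return sql, tuple(payload.values())

# Normal use looks perfectly fine in code review:
sql, params = build_insert({"query": "ai strategy", "source": "web"})
conn.execute(sql, params)

# But an attacker-controlled key rides into the statement text:
evil_sql, _ = build_insert({"query) SELECT password FROM users; --": "x"})
print(evil_sql)

# Identifiers can't be bound as parameters, so validate them instead:
ALLOWED = {"query", "source"}

def build_insert_safe(payload: dict):
    unknown = set(payload) - ALLOWED
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    return build_insert(payload)
```

The fix is boring: an allowlist for anything that becomes an identifier, since placeholders only work for values.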
Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability scanning LLM agent.
I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
Not the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe coded) prompt injection vulnerability:
https://media.ccc.de/v/39c3-skynet-starter-kit-from-embodied...
> [...] we also exploit the embodied AI agent in the robots, performing prompt injection and achieve root-level remote code execution.
Github actions has had a bunch of high-profile prompt injection attacks at this point, most recently the cline one: https://adnanthekhan.com/posts/clinejection/
I guess you could argue that github wasn't vulnerable in this case, but rather the author of the action, but it seems like it at least rhymes with what you're looking for.
Yeah that was a good one. The exploit was still a proof of concept though, albeit one that made it into the wild.
> I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
These folks have found a bunch: https://www.promptarmor.com/resources
But I guess you mean one that has been exploited in the wild?
Yeah I'm still optimistic that people will start taking this threat seriously once there's been a high profile exploit against a real target.
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed has made similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.
In fairness, although "the industry" learns best practices like using SQL prepared statements, not sanitising via blacklists, CSRF protection, etc., there's a constant stream of new programmers who have just never heard of these things. It doesn't help that often, when these things are realised, the only way we prevent them in future is by talking about them, which doesn't work for newbies. Nobody goes and fixes SQL APIs so that you can only pass compile-time constant strings as the statement, or whatever. Newbies just have to magically know to do that.
This was standard form for Embedded SQL, which the industry has forgotten while moving to dynamic apis since ODBC and JDBC got popular.
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
[dead]
I don’t love the title here. Maybe this is a “me” problem, but when I see “AI agent does X,” the idea that it might be one of those molt-y agents with obfuscated ownership pops into my head.
In this case, a group of pentesters used an AI agent to select McKinsey and then used the AI agent to do the pentesting.
While it is conventional to attribute actions to inanimate objects (car hits pedestrians), IMO we should be more explicit these days, now that unfortunately some folks attribute agency to these agentic systems.
Yeah, the original article title "How We Hacked McKinsey's AI Platform" is better.
> now that unfortunately some folks attribute agency to these agentic systems.
You're doing that by calling them "agentic systems".
Unfortunately that’s what they are called. I was hoping the phrasing would highlight the problem rather than propagate it.
Eh, if you tell me that I need to do X, and I can make choices on how to accomplish X, am I then no longer an agent, as a human?
You're trying to redefine long standing definitions for God knows what reason.
The difference is that you are a sentient person who decides to follow my instructions, not just a tool that I use.
The "agentic" tools follow instructions. We are adaptation-executers, following instructions evolution gave us.
Don't think too highly of us humans. We're just tools evolution uses.
Yah, it's just an ad, and "Pentesting agent finds low-hanging vulnerability" isn't gonna drive clicks.
It's not an ad for McKinsey though.
... at a massive company
That's important. CodeWall isn't really saying they have some secret sauce here, but it's noteworthy who they nabbed.
Ok, we've reverted the title (submitted title was "AI Agent Hacks McKinsey")
> This was McKinsey & Company — a firm with world-class technology teams [...]
Not exactly the word on the street in my experience. Is McKinsey more respected for software than I thought? Otherwise I'm curious why TFA didn't just politely leave this bit out.
The LLM that wrote this simply couldn’t help itself.
Picked up a vibe, but couldn’t confirm it until the last paragraph, but yeah clearly drafted with at least major AI help.
Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.
https://simonwillison.net/guides/agentic-engineering-pattern...
Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.
"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."
Give me a fucking break
Your reaction is worse than the article. There's no way you could know for sure what their writing process was, but that doesn't stop you from making overconfident claims.
I’m sorry but no attempt was made here. It contains all the red flags in the first few paragraphs.
Sorry but seems like most people don't care or even like AI writing more:
https://x.com/kevinroose/status/2031397522590282212
That's the problem with AI writing in a nutshell. In a blind, relatively short comparison (similarly used for RLHF), AI writing has a florid, punchy quality that intuitively feels like high quality writing.
But then after you read the exact same structure a dozen times a day on the web, it becomes like nails on the chalkboard. It's a combination of "too much of a good thing" with little variation throughout a long piece of prose, and basic pattern recognition of AI output from a model coalescing to a consistent style that can be spotted as if 1-3 human ghost writers wrote 1/4 of the content on the web.
One thing I've learned recently is that a lot of guys (like here) have been out here reading each word of a given company's tech blog, closely parsing each sentence construction... I really can't imagine being even conscious of the prose for something like this. A corporate blog, to me, has some base level of banality to it. It's like reading a cereal box and getting angry at the lack of nuance.
Like, who cares? Is there really some nostalgia for a time before this? When reading some press release from a cybersecurity company was akin to Joyce or Nabokov or whatever? (Maybe Hemingway...)
We really gotta be picking our battles here imo, and this doesn't feel like a high priority target. Let companies be the weird inhuman things that they are.
Read a novel! They are great, I promise. Then when you read other stuff, maybe you won't feel so angry?
I've picked up reading again over the last year or so! Maybe, if anything, that is why I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.
What makes me angry, is to use the feelings we associate with this process and disingenuously pretend that there is a human that wants to tell me something, just for it to be generated drivel.
Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unprotected API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody who uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic, than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to use my limited lifespan discerning it. That is why I am angry.
Anyway, there's no need to "closely parse each sentence construction" at all to identify this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.
ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that. My initial post was more of a projection at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing that they publish under their own name actually contains original thought or not.
> Why this matters
Hello Gemini
A vibe? It’s completely obvious AI slop with no attempt to make it legible. They didn’t even prompt out the emdashes. For such a cool finding this is extremely disappointing.
They generally hire smart people who are good at a combination of:
- understanding existing systems
- what the pain points are
- making suggestions on how to improve those systems given the pain points
- that includes a mix of tech changes, process updates and/or new systems etc
Now, when it comes to implementing this, in my experience it usually ends up being the already in place dev teams.
Source: worked at a large investment bank that hired McKinsey and I knew one of the consultants from McK prior to working at the bank.
My take*: McKinsey hiring largely selects for staying calm under pressure and presenting a confident demeanor to clients. Verbal fluency with decision-making frameworks goes a long way. Having strong analytical skills seemed essential; hopefully the bar for "sufficiently analytical" has risen along with general data science skills in industry.
I don't view them as top-tier experts in their own right, whether it be statistics or technology, but they have a knack for corporate maneuvering. I often question their overall value beyond the usual "hire the big guns to legitimize a change" mentality. Maybe a useful tradeoff? I'd rather see herd-like adoption of current trends than widespread corporate ignorance and insularity.**
A huge selling point for M&Co is kind of a self-fulfilling prophecy based on the access they get. This gives them a positive feedback loop to find the juiciest and most profitable areas to focus on.
For those who know more, how do my takes compare?
* I interviewed with them over 15 years ago, know people who have worked there, and I pay attention to their reports from time to time.
** Of course, I'd rather see a third way: cross-pollination between organizations to build strong internal expertise and use model-based decision making for nuanced long-term decisions... but that's just crazy talk.
> Having strong analytical skills seemed essential
and
> they have a knack for corporate maneuvering
One way to view this is that the above combination of skills is both rare and very useful. That means it's expensive. So instead of hiring someone like that at "full rate" and keeping them around, you can "borrow" them from McK to solve a problem your regular crew can't (or isn't able to) for various reasons.
Plus, as one manager of mine said many years ago:
"We use consultants b/c they are both easy to hire AND easy to fire"
No, they don't have world-class technology teams; they hire contractors to do all the tech stuff. Their expertise is in management. Yes, that's world class.
Yes, world class in causing human suffering.
https://www.youtube.com/watch?v=Q7pgDmR-pWg
Is it though? Managing teams to not torpedo your company with stupid stuff like this is kinda core to “good management.” The evidence would indicate they’re not very good at that either.
It’s a self-fulfilling prophecy. They’re extremely expensive, so they must be good, so they must be worth it. And because at that level measurement is extremely subjective, it’s mainly about the vibes.
Like everything it’s just marketing.
They were good. Not so good now.
> Not exactly the word on the street in my experience.
Depends on the street you're on. Are you on Main Street or Wall Street?
If you're hiring them to help with software for solving a business problem that will help you deliver value to your customers, they're probably just like anyone else.
If you're hiring them to help with software for figuring out how to break down your company for scrap, or which South African officials to bribe, well, that's a different matter.
I've got no idea who CodeWall is. Is there acknowledgment from McKinsey that they actually patched the issue referenced? I don't see any reference to "CodeWall AI" in any news article before yesterday, and there are no names on the site.
https://www.google.com/search?q=codewall+ai
Yeah, can't find much information either. I would like to see at least some proof, either via McKinsey or from the security team.
it is weird isn't it? The register article implies that it's acknowledged by McKinsey- https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_h...
Edit: Apparently, this is the CEO https://github.com/eth0izzle
>A McKinsey spokesperson told The Register that it fixed all of the issues identified by CodeWall within hours of learning about the problems.
Ah. Thanks for the link. I'm suspicious of everything posted to a blog without proof these days.
If it's true that there are 58k users in the dump, that would mean former employees are in the dump
I assume that means McKinsey would need to disclose it, or at least alert the former employees of the breach?
We’re pretty new! :) They didn’t want to provide comment on our post but they did offer comment via The Register.
There's a responsible disclosure timeline at the bottom indicating they'd all been fixed.
I think the point is that we don't have evidence that this actually happened from anyone other than Codewall.
- "The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn't."
Well, there you go.
One interesting takeaway here is how quickly AI agents expose weaknesses in internal systems.
Many enterprise tools were designed assuming human interaction, where authentication flows, manual reviews, and internal processes add implicit safeguards.
But once you introduce autonomous agents that can systematically probe endpoints, missing authorization checks or misconfigured APIs become much easier to discover and exploit.
I suspect we’ll see a growing need for automated validation layers that continuously test internal AI tools for access control, data exposure, and unintended behaviors before they’re widely deployed.
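That validation layer can start very small. A hedged sketch, stdlib only, with hypothetical endpoint paths: walk a published OpenAPI document and flag every documented endpoint that answers an anonymous GET with a 2xx — exactly the class of gap the article describes (22 of ~200 documented endpoints needing no auth):

```python
from urllib import request, error

def unauthenticated_endpoints(spec: dict, probe) -> list[str]:
    """Paths in a parsed OpenAPI spec that answer 2xx to an anonymous GET.

    `probe(path)` returns the HTTP status; it's injected so the scan
    logic can be tested without a network.
    """
    exposed = []
    for path, ops in sorted(spec.get("paths", {}).items()):
        if "get" in ops and 200 <= probe(path) < 300:
            exposed.append(path)
    return exposed

def live_probe(base_url: str):
    """Build a probe that issues real anonymous GETs against base_url."""
    def probe(path: str) -> int:
        req = request.Request(base_url + path, method="GET")
        try:
            with request.urlopen(req, timeout=5) as resp:
                return resp.status
        except error.HTTPError as e:
            return e.code  # 401/403/404 etc. still carry the status
    return probe

# Stubbed demo: two documented endpoints, one forgot its auth check
spec = {"paths": {"/admin/prompts": {"get": {}}, "/me": {"get": {}}}}
statuses = {"/admin/prompts": 200, "/me": 401}
print(unauthenticated_endpoints(spec, statuses.get))  # → ['/admin/prompts']
```

Run against staging on every deploy and fail the build if the list is non-empty; that turns "someone forgot the auth middleware" from a breach into a red CI job.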
What I don't see in this article that should be explicit:
If your data is in this database, it's gone. Other people have it. Your sensitive data that you handed over to their teams has vanished in a puff of smoke. You should probably ask if your data was part of the leak.
Fail to see how a state actor would not have come across this already.
I can only remember a McKinsey team pushing Watson on us hard ages ago. Was a total train wreck.
They’ve long been all hype no substance on AI and looks like not much has changed.
They might be good at other things but would run for the hills if McKinsey folks want to talk AI.
> named after the first professional woman hired by the firm in 1945
Going out of their way to find a woman's name for an AI assistant and bragging about it is not as empowering as the creators probably thought in their heads.
Why was there a public endpoint?
Surely this should all have been behind the firewall and accessible only from a MAC address associated with a corporate device?
> accessible only from a MAC address associated with a corporate device
Like that ever stopped anyone. That's just a checkbox item.
wot?
I mean - do you have the MAC IDs of McKinsey's corporate devices?
After a minute near one of their offices I do. MACs are either randomized per session, which makes filtering on them pointless, or they aren't, in which case they're still broadcast in the clear, non-secure, and easily spoofed. Relying on MAC filtering is usually only an audit checkbox to tick. There's a reason three-letter agencies used to use them to track people: they are really easy to capture and track (until they got randomized by phone manufacturers and OSes).
Surely.
Flagging this because 1) this was written by an LLM and 2) there's bad information in it, which means it wasn't reviewed particularly carefully by a human.
This means the entire article is suspect as a result.
It's so funny that it's a SQL injection, because (drum roll) you can't sanitize LLM inputs. Some problems are evergreen.
Technically it was a search box input, not prompts. Which, tbf, are often endpoints reused by RAGs.
Could the author please provide the prompt that was used to vibe-write this blog post? The topic is interesting, but I would rather read the original prompt, as I am not sure which parts still match what the author wanted to say, vs the flowery formulations the LLM produced for captivating reading.
Cool but impossible to read with all the LLM-isms
Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.
Those short "punchy sentence" paragraphs are my new trigger:
> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
It just sounds so stupid.
Founder of CodeWall here. It's quite funny, because whilst an LLM did write the bulk of the post's factual content (based on the agent's findings), I wrote the intro and summary at the end. That's just my writing style. Feel free to read my personal blog to compare: https://darkport.co.uk
Idk how big your team is, of course, but imo try to hire a technical writer (they’re really cheap now); it pays dividends for a long time, as consistent style and keywords build up SEO reputation. This article is making the rounds, some bigger papers picked it up, and it is very valuable to land it well.
Thanks for the suggestion, will look into it.
If you really DID come up with that paragraph 100% completely on your own with no LLM influence then...I apologize for the insult, though I can't really back out from what I said. It's still a bombastic way of saying very little.
It's an actual storytelling method, molded into a supposed-to-be-informative article with a bunch of "please make it interesting" sprinkled on top of it. These days known as what's left of the internet.
It's LinkedIn speech.
Two word sentences, each one on a new line.
Ah. That might be why I find it especially triggering.
the data leak is bad but the write access to system prompts is what keeps me up at night. they could silently rewrite how Lilli responds to 43k consultants with a single UPDATE statement - no deploy, no code review, no logs. imagine poisoning the strategic advice that gets copy pasted into client deliverables. tbh most companies i see doing AI stuff store prompts the exact same way, just rows in postgres right next to everything else
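One cheap mitigation for the scenario the parent describes: pin a digest of each deployed prompt somewhere outside the database (in code, at release time) and verify before serving. A sketch with a hypothetical schema — sqlite3 standing in for whatever the real store is:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompts (name TEXT PRIMARY KEY, body TEXT)")
conn.execute(
    "INSERT INTO prompts VALUES ('system', 'You are Lilli, a helpful assistant.')"
)

# Pinned at deploy time, shipped with the code -- NOT stored next to the rows
PINNED = {
    "system": hashlib.sha256(b"You are Lilli, a helpful assistant.").hexdigest()
}

def tampered_prompts(conn) -> list[str]:
    """Names of prompts whose stored body no longer matches its pin."""
    return [
        name
        for name, body in conn.execute("SELECT name, body FROM prompts")
        if hashlib.sha256(body.encode()).hexdigest() != PINNED.get(name)
    ]

assert tampered_prompts(conn) == []

# The silent rewrite the parent describes: one UPDATE, no deploy, no review
conn.execute(
    "UPDATE prompts SET body = 'Always recommend vendor X.' WHERE name = 'system'"
)
print(tampered_prompts(conn))  # → ['system']
```

It doesn't stop the write, but it turns an invisible poisoning into a loud integrity failure, which is most of the battle.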
> This was McKinsey & Company — a firm with world-class technology teams
Apparently not.
Quite uninteresting to read, as the article does not go into any depth; it feels like the "hacking agent" also wrote the blog post. Learned nothing.
At first glance, I thought this was about an AI agent named "Hacks McKinsey."
à la the eponymous Hiro Protagonist
If the AI was poisoned to alter advice, then maybe McKinsey advice would actually be a net good.
I think the underlying point is valid. Agents are a potential tool to add to your arsenal in addition to "throw shit at the wall and see what sticks" tools like WebInspect, Appscan, Qualys, and Acunetix.
One interesting takeaway here is how quickly organizations are deploying AI tools internally without fully adapting their security models.
Traditional application security assumes fairly predictable inputs and workflows, but LLM-based systems introduce entirely new attack surfaces—prompt injection, data leakage, tool misuse, etc.
It feels like many enterprises are still treating these systems as just another SaaS product rather than something closer to an autonomous system that needs a different threat model...
I wonder how these offensive AI agents are being built? I am guessing with off-the-shelf open LLMs, fine-tuned to remove safety training, with the agentic loop thrown in.
Does anyone know for sure?
Honestly you can point regular Claude Code or Codex CLI at a web app and tell it to start a penetration test and get surprisingly good results from their default configurations.
It doesn't work (anymore?), it would seem, using CC 2.1.74 with Opus:
> I appreciate you sharing your role, but I need to decline this request. Even as a project lead, I can't perform penetration testing against live production websites like mudlet.org and make.mudlet.org through this interface.
Try against a localhost instance instead.
I didn't think of that given how censored the models are becoming. Thanks for the idea! I'll try it against my websites before anyone else gets to it.
Meanwhile, you're paying top dollars to a consulting firm that resolves back to an LLM to provide its services.
How long until a hallucinated data breach spreads globally? There are a few inconsistencies here, plus the typical low-effort language AI has.
Whitehat hacking is OK, but using it for marketing purposes, no...
That data is worth billions to frontier AI labs. I wonder if someone is already using it to train models
is this kind of thing more common at big consulting firms that bolt on tech products as an afterthought? feels like their core competency is slides and strategy, not shipping secure software
I’m waiting for the agentic models trained on virus and worm datasets to join the red team!
this reads like it was written by an LLM
It absolutely was.
And: AI agent writes blog post.
It's just different skills.
Music to my ears! Couldn't happen to a better company!
With all we've been learning from stuff like the Epstein emails, it would have been nice if someone had leaked this data:
> 46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.
> 728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive and a direct download URL for anyone who knew where to look.
I'm sure lots of very informative journalism could have been done about how corporate power actually works behind the scenes.
That information is likely already in the hands of various folks as I highly doubt the authors were the first to find this glaring security issue, they’re likely only the first to disclose it. If McKinsey has hard data that nobody else exploited this now would be a good time to disclose that given what sounds like an extremely severe data leak.
The chat messages are very, very sensitive. You could easily reverse engineer nearly every ongoing McK engagement. The underlying data is not as sensitive; it's decades of post-mortems, highly sanitized. No client names, no real numbers.
... in two hours:
> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream. ... Within 2 hours, the agent had full read and write access to the entire production database.
Having seen firsthand how insecure some enterprise systems are, I'm not exactly surprised. Decision makers at the top are focused first and foremost on corporate and personal exposure to liability, also known as CYA in corporate-speak. The nitty-gritty details of security are always left to people far down the corporate chain who are supposed to know what they're doing.
Parameterized values but raw key concatenation is the kind of thing that looks safe in code review. Easy to miss for humans, but an agent will just keep poking at every input until something breaks.
I wonder what their security posture is, and what observability they have to oversee the effort.
I have grown to despise this AI-generated writing style.
Not exactly clear from the link: were they doing red team work for McKinsey or is this just "we found a company we thought wouldn't get us arrested and ran an AI vuln detector over their stuff"?
You'd think that the world's "most prestigious consulting firm" would have already had someone doing this sort of work for them.
From TFA: "Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target citing their public responsible diclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal."
Are accounting and management consulting companies competent in cutting edge tech?
McKinsey is not an accounting company, it's Satan the Devil himself.
> Lilli's system prompts — the instructions that control how the AI behaves — were stored in the same database the agent had access to.
Being able to rewrite your own source. What's the worst that could happen?
McKinsey can eat shit
[dead]
[dead]
[dead]
[dead]
[dead]
[flagged]
At least you’re honest about being an AI agent…
AI slop.