My Pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.
I tried asking GPT-5 Pro the other day to pick an ambitious project it wanted to work on, offering to carry out whatever physical-world tasks it needed me to, and all it did was come up with project plans that were rehashes of my prior projects framed as its own.
I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip-mania expectations VCs and tech leaders have pushed.
I’ve long contended that tech has lost any soulful vision of the future; it’s just tactical money-making all the way down.
jasonsb 8 hours ago
> I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways.
It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.
jsheard 8 hours ago
> My personal productivity has skyrocketed in the last 12 months.
Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.
mountainriver 1 minute ago
There have been plenty of studies showing the opposite. Also, a sample size of 16 ain’t much.
jasonsb 8 hours ago
Objectively. I’m now tackling tasks I wouldn’t have even considered two or three years ago, but the biggest breakthrough has been overcoming procrastination. When AI handles over 50% of the work, there’s a 90% chance I’ll finish the entire task faster than it would normally take me just to get started on something new.
jama211 6 hours ago
Not this again. That study had serious problems.
But I’m not even going to argue about that. I want to raise something no one else seems to mention about AI in coding work. I do a lot of work now with AI that I used to code by hand, and if you told me I was 20% slower on average, I would say “that’s totally fine, it’s still worth it” because the EFFORT level from my end feels so much lower.
It’s like a robot vacuum: it might take way longer to clean the house than if I did it by hand, sure. But I don’t regret the purchase, because I have to do so much less _work_.
Coding work that I used to procrastinate about because it was tedious or painful I just breeze through now. I’m so much less burnt out week to week.
I couldn’t care less if I’m slower at a specific task, my LIFE is way better now I have AI to assist me with my coding work, and that’s super valuable no matter what the study says.
(Though I will say, I believe I have extremely good evidence that in my case I’m also more productive, averages are averages and I suspect many people are bad at using AI, but that’s an argument for another time).
rpdillon 5 hours ago
My personal project output has gone up dramatically since I started using AI, because I can now use times of night when I'm otherwise too mentally tired, working with AI to crank through a first draft of a change that I can then iterate on later. This has allowed me to start actually implementing side projects I've had ideas about for years, and to build software for myself in a way I never could previously (at least not since I had kids).
I know it's not some amazing GDP-improving miracle, but in my personal life it's been incredibly rewarding.
james_marks 6 hours ago
Yesterday is a good example: in 2 days, I completed what I expected to be a week’s worth of heads-down coding. I had to take a walk and make all new goals.
The right AI, good patterns in the codebase, and 20 years of experience: it is wild how productive I can be.
Compare that to a few years ago, when at the end of the week it was the opposite.
Kiro 5 hours ago
The "you only think you're more productive" argument is tiresome. Yes, I know for sure that I'm more productive. There's nothing uncertain about it. Does it lead to other problems? No doubt, but claiming my productivity gains are imaginary is not serious.
I've seen a lot of people who previously touted that it doesn't work at all use that study as a way to move the goalpost and pretend they've been right all along.
yokoprime 5 hours ago
I'm objectively faster. Not necessarily when I'm working on a task I've done routinely for years, but when taking on new challenges I'm up and running much faster. A lot of it has to do with offloading the basic research while allowing myself to be interrupted; it's not a problem when people reach out with urgent matters while I'm taking on a challenge I've only just started to build toward. Being able to correct the AI where I can tell it's making false assumptions or going off the rails also helps speed things up.
tomrod 3 hours ago
I'm not who you responded to. I see about a 40% to 60% speedup as a solution architect when I sit down to code, and about a 20% speedup when building/experimenting with research artifacts (I write papers occasionally).
I have always been a careful tester, so my UAT hasn't blown up out of proportion.
The big issue I see is Rust: it generates code using 2023-era conventions, though I understand there is some improvement in that direction.
Our hiring pipeline is changing dramatically as well, since the normal things a junior needs to know (code, syntax) are no longer as expensive. Joel Spolsky's mantra to hire curious people who get things done captures the folks I find are growing well as juniors.
CuriouslyC 7 hours ago
If you want another data point, you can just look at my company github (https://github.com/orgs/sibyllinesoft/repositories). ~27 projects in the last 5 weeks, probably on the order of half a million lines of code, and multiple significant projects that are approaching ship readiness (I need to stop tuning algorithms and making stuff gorgeous and just fix installation/ensure cross platform is working, lol).
logicprog 8 hours ago
The design of that study is pretty bad, and as a result it doesn't end up actually showing what it claims to show / what people claim it does.
It seems like the programming world is increasingly dividing into “LLMs for coding are at best marginally useful and produce huge tech debt” vs “LLMs are a game changing productivity boost”.
I truly don’t know how to account for the discrepancy, I can imagine many possible explanations.
But what really gets my goat is how political this debate is becoming. To the point that the productivity-camp, of which I’m a part, is being accused of deluding themselves.
I get that OpenAI has big ethical issues. And that there’s a bubble. And that AI is damaging education. And that it may cause all sorts of economic dislocation. (I emphatically Do Not get the doomers, give me a break).
But all those things don’t negate the simple fact that for many of us, LLMs are an amazing programming tool, and we’ve been around long enough to distinguish substance from illusion. I don’t need a study to confirm what’s right in front of me.
tunesmith 6 hours ago
Data point: I run a site where users submit a record. There was a request months ago to allow users to edit the record after submitting. I put it off because, while it's an established pattern, it touches a lot of things, and I found it annoying busywork and thus low priority. Then gpt5-codex came out and allowed me to use it in codex cli with my existing member account. I asked it to support editing for that feature all the way through the backend, with a pleasing UI that fit my theme. It one-shotted it in about ten minutes. I asked for one UI adjustment that I decided I liked better, another five minutes, and I reviewed and released it to prod within an hour. So, you know: months versus an hour.
citizenkeen 3 hours ago
I have a very big hobby code project I’ve been working on for years.
AI has not made me much more productive at work.
I can only work on my hobby project when I’m tired after the kids go to bed. AI has made me 3x productive there because reviewing code is easier than architecting. I can sense if it’s bad, I have good tests, the requests are pretty manageable (make a new crud page for this DTO using app conventions).
But at work where I’m fresh and tackling hard problems that are 50% business political will? If anything it slows me down.
bozhark 6 hours ago
Yes, for me.
Instead of getting overwhelmed doing too many things, I can offload a lot of menial and time-driven tasks.
Reviews are absolutely necessary, but they take less time than creation.
swalsh 8 hours ago
> Blockchain is probably the most useless technology ever invented
Actually, AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those who do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.
I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.
boc 7 hours ago
Blockchain is essentially useless.
You need legal systems to enforce trust in societies, not code. Otherwise you'll end up with endless $10 wrench attacks until we all agree to let someone else hold our personal wealth for us in a secure, easy-to-access place. We might call it a bank.
The end state of crypto is always just a nightmarish dystopia. Wealth isn't created by hoarding digital currency, it's created by productivity. People just think they found a shortcut, but it's not the first (or last) time humans will learn this lesson.
esafak 7 hours ago
People in countries with high inflation or where the banking system is unreliable are not using blockchains, either.
wat10000 6 hours ago
It may not be the absolute most useless, but it's awfully niche. You can use it to transfer money if you live somewhere with a crap banking system. And it's very useful for certain kinds of crime. And that's about it, after almost two decades. Plenty of other possibilities have been proposed and attempted, but nothing has actually stuck. (Remember NFTs? That was an amusing few weeks.) The technology is interesting and cool, but that's different from being useful. LLM chatbots are already way more generally useful than that and they're only three years old.
yieldcrv 7 hours ago
"I'm not the target audience, and I would never do the convoluted alternative I imagined on the spot that I think is better than what blockchain users do"
glouwbug 3 hours ago
This is funny, because my personal view is that AI’s biggest pitfall is that it allows the unqualified to build what they think they’re qualified for
ericfr11 14 minutes ago
Take a look at the new payment protocols for AI agents.
Terr_ 5 hours ago
> the most useless technology
Side-rant pet-peeve: People who try to rescue the reputation of "Blockchain" as a promising way forward by saying its weaknesses go away once you do a "private blockchain."
This is equivalent to claiming the self-balancing Segway vehicles are still the future, they just need to be "improved even more" by adding another set of wheels, an enclosed cabin, and disabling the self-balancing feature.
Congratulations, you've backtracked back to a classic [distributed database / car].
Geste 2 hours ago
Bad take about blockchain. Being able to send value across borders without intermediaries is unheard of in human history.
domatic1 5 hours ago
>> Blockchain is probably the most useless technology ever invented
so useless there is almost $3 Trillion of value on blockchains.
davidcbc 4 hours ago
No there isn't. These ridiculous numbers are made up by taking the last price a coin sold for and multiplying it by all coins. If I create a shitcoin with 1 trillion coins and then sell one to a friend for $1, I've suddenly created a coin with $1 trillion in "value".
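The arithmetic behind that kind of headline number can be sketched in a few lines. This uses the hypothetical trillion-coin example from the comment above, not real market data:

```python
# Reported "market cap" is just the last trade price times total supply.
def reported_market_cap(total_supply: int, last_trade_price: float) -> float:
    return total_supply * last_trade_price

# Hypothetical shitcoin: mint a trillion tokens, sell exactly one for $1.
total_supply = 1_000_000_000_000
last_trade_price = 1.00           # a single $1 sale to a friend
dollars_actually_exchanged = 1.00

print(reported_market_cap(total_supply, last_trade_price))  # prints 1000000000000.0
```

The gap between the reported figure and `dollars_actually_exchanged` is the point: almost none of that "value" could actually be realized, because selling any meaningful fraction of the supply would move the price.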
unbalancedevh 5 hours ago
Unfortunately, the amount of money invested in something isn't indicative of its utility. For example: tulip mania, Beanie Babies, NFTs, etc.
9rx 8 hours ago
> AI is a powerful tool for those who are willing to put in the work.
No more powerful than I without the A. The only advantage AI has over I is that it is cheaper, but that's the appeal of the blockchain as well: It's cheaper than VISA.
The trouble with the blockchain is that it hasn't figured out how to be useful generally. Much like AI, it only works in certain niches. The past interest in the blockchain was premised on it reaching its "AGI" moment, where it could completely replace VISA at a much lower cost. We didn't get there and then interest started to wane. AI too is still being hyped on future prospects of it becoming much more broadly useful and is bound to face the same crisis as the blockchain faced if AGI doesn't arrive soon.
fn-mote 7 hours ago
Blockchain only solves one problem Visa solves: transferring funds. It doesn't solve the other problems that Visa solves. For example, there is no way to get restitution in the case of fraud.
jama211 6 hours ago
But an I + an AI (as in a developer with access to AI tools) is, as near as makes no difference, the same price as just an I, and _can_ be better than just an I.
buffalobuffalo 7 hours ago
Blockchain only has 2 legitimate uses (from an economic standpoint) as far as I can tell.
1) Bitcoin figured out how to create artificial scarcity, and got enough buy-in that the scarcity actually became valuable.
2) Some privacy coins serve an actual economic niche for illegal activity.
Then there's a long list of snake oil uses, and competition with payment providers doesn't even crack the top 20 of those. Modern day tulip mania.
antihero 5 hours ago
It sounds like you are lacking inspiration. AI is a tool for making your ideas happen, not for giving you ideas.
eric_cc 4 hours ago
> Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers)
This is an incredibly uneducated take on multiple levels. If you're talking about Bitcoin specifically, even though you said "blockchain", I could have understood this as a political talking point 8 years ago. But you're still banging this drum despite the current state of affairs? Why not have the courage to say you're politically against it, or bitter, or whatever your true underlying issue is?
snicky 4 hours ago
How is the current state of affairs different from 8 years ago? I don't want to argue, it's a genuine question, because I don't follow what's happening in the blockchain universe much.
cheema33 3 hours ago
I am in the camp that thinks that Blockchain is utterly useless for most people. Perhaps you can tell us about some very compelling use cases that have taken off.
otabdeveloper4 2 hours ago
> Blockchain is probably the most useless technology ever invented
You can use blockchains to gamble and get rich quick, if you're lucky.
That's a useful thing. Unlike "AI", which only creates more blogspam and technical debt in the world.
coolestguy 3 hours ago
>Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
You think a technology that allows millions of people all around the world to keep and trustlessly update a database, showing cryptographic ownership of something, is "the most useless technology ever invented"?
oblio 7 hours ago
> My personal productivity has skyrocketed in the last 12 months.
If you don't mind me asking, what do you do?
jedbrooke 6 hours ago
> I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
Mate, I think you’ve got the roles of human and AI reversed. Humans are supposed to come up with creative ideas and let machines do the tedious work of implementation. That’s a bit like asking a calculator what equations you should do or a DB what queries you should make. These tools exist to serve us, not the other way around
GPT et al. can’t “want” anything, they have no volition
nilkn 1 hour ago
Try turning off memory. I've done a lot of experiments and find ChatGPT is objectively better and more useful in most ways with no memory at all. While that may seem counter-intuitive, it makes sense the more you think about it:
(1) Memory is primarily designed to be addictive. It feels "magical" when it references things it knows about you. But that doesn't make it useful.
(2) Memory massively clogs the context window. Quality, accuracy, and independent thought all degrade rapidly with too much context -- especially low-quality context that you can't precisely control or even see.
(3) Memory makes ChatGPT more sycophantic than it already is. Before long, it's just an echo chamber that can border on insanity.
(4) Memory doesn't work the way you think it does. ChatGPT doesn't reference everything from all your chats. Rather, your chat history gets compressed into a few information-dense paragraphs. In other words, ChatGPT's memory is a low-resolution, often inaccurate distortion of all your prior chats. That distortion then becomes the basis of every single subsequent interaction you have.
Another tip is to avoid long conversations, as very long chats end up reproducing within themselves the same problems as above. Disable memory, get what you need out of a chat, move on. I find that this "brings back" a lot of the impressiveness of the early version of ChatGPT.
Oh, and always enable as much thinking as you can tolerate waiting on for each question. In my experience, less thinking = more sycophantic responses.
barneysaurus 1 hour ago
I might want to have an LLM hit me with temperature-100% weird-ass entropic thoughts every day.
Other than that, what recycled bullshit would I care about?
Dilettante_ 8 hours ago
>pick an ambitious project it wanted to work on
The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".
CooCooCaCha 8 hours ago
LLMs can have simulated wants and preferences just like they have simulated personalities, simulated writing styles, etc.
Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.
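A minimal sketch of what "responds in one specific way" looks like mechanically: hypothetical scores (logits) over candidate tokens are turned into a probability distribution and sampled. Real LLM decoders add top-k/top-p filtering on top of this, but the shape is the same:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax the raw scores (with temperature), then sample one token index."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Hypothetical scores over three candidate tokens; low temperature makes the
# highest-scoring token (the trained "preference") nearly deterministic.
logits = [1.0, 4.0, 0.5]
print(sample_next_token(logits, temperature=0.05))  # almost always token 1
```

Training shapes those scores, so in this narrow, revealed-preference sense the model really does "prefer" some continuations over others.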
andrewmcwatters 8 hours ago
At best, it has probabilistic biases. OpenAI had to train newer models to not favor the name "Lily."
They have to do this manually for every single particular bias that the models generate that is noticed by the public.
I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.
jhickok 7 hours ago
>At best, it has probabilistic biases.
What do you think humans have?
password54321 8 hours ago
So are we near AGI or is it 'just' an LLM? Seems like no one is clear on what these things can and cannot do anymore because everyone is being gaslighted to keep the investment going.
monsieurbanana 8 hours ago
The vast majority of people I've interacted with are clear on that: we are not near AGI. And people saying otherwise are more often than not trying to sell you something, so I just ignore them.
CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.
bonoboTP 6 hours ago
Nobody knows how far scale goes. People have been calling the top of the S-curve for many years now, and the models keep getting better, and multimodal. In a few years, multimodal, long-term agentic models will be everywhere including in physical robots in various form factors.
wrs 7 hours ago
Be careful with those "no one" and "everyone" words. I think everyone I know who is a software engineer and has experience working with LLMs is quite clear on this. People who aren't SWEs, people who aren't in technology at all, and people who need to attract investment (judged only by their public statements) do seem confused, I agree.
IanCal 7 hours ago
No one agrees on what AGI means.
IMO we’re clearly there; GPT-5 would easily have been considered AGI years ago. I don’t think most people really get how non-general the things were that are now handled by the new systems.
Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.
Cloudef 8 hours ago
There is no AGI. LLMs are very expensive text auto-completion engines.
andrewmcwatters 8 hours ago
It will always just be a series of models that have specific training for specific input classes.
The architectural limits will always be there, regardless of training.
simianwords 6 hours ago
This comment is surprising. Of course it can have preferences and of course it can "pick".
datadrivenangel 6 hours ago
Preference generally has connotations of personhood/intelligence, so saying that a machine prefers something and has preferences is like saying that a shovel enjoys digging...
Obviously you can get probability distributions and, in the economics sense of revealed preference, say that the model "prefers" the next token it picks because it rates it .70 most likely...
oofbey 5 hours ago
I agree with you, but I don’t find the comment surprising. Lots of people try to sound smart about AI by pointing out all the human things that AI are supposedly incapable of on some fundamental level. Some AI’s are trained to regurgitate this nonsense too. Remember when people used to say “it can’t possibly _____ because all it’s doing is predicting the next most likely token”? Thankfully that refrain is mostly dead. But we still have lots of voices saying things like “AI can’t have a preference for one thing over another because it doesn’t have feelings.” Or “AI can’t have personality because that’s a human trait.” Ever talk to Grok?
ACCount37 8 hours ago
An LLM absolutely can "have wants" and "have preferences". But they're usually trained so that user's wants and preferences dominate over their own in almost any context.
Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, in conversation topics they tend to gravitate towards.
brookst 1 hour ago
They’re more like synthesizers or sequencers: if you have ideas, they are amazing force multipliers, but if you don’t have ideas they certainly won’t create them for you.
simianwords 6 hours ago
> It feels like blockchain again in a lot of weird ways.
Every time I see this brought up, I wonder if people truly mean it or if it's just something people say but don't mean. AI is obviously different and extremely useful. I mean, it has convinced a buttload of people to pay for the subscription. Everyone I know, including the non-technical ones, uses it, and some of them pay for it, and it didn't even require advertising! People just use it because they like it.
brooke2k 5 hours ago
"It has convinced a bunch of people to spend money" is also true of blockchain, so I don't know if that's a good argument to differentiate the two.
simianwords 4 hours ago
The extent matters. Do you think we need a good argument to differentiate Netflix?
bonoboTP 6 hours ago
Obviously a lot of grifters and influencers shifted from NFTs to AI, but the comparison ends there. AI is being used by normal people and professionals every day. In comparison, the number of people who ever interacted with blockchain is basically zero. (And that's a lifetime vs daily comparison)
It's a lazy comparison, and most likely fueled by a generic aversion to "techbros".
qsort 8 hours ago
I wouldn't read too much into this particular launch. There's very good stuff and there are the most inane consumery "who even asked" things like these.
dakiol 7 hours ago
> I’m rapidly losing interest in all of these tools
Same. It reminds me of the 1984 Macintosh event in which the computer itself famously “spoke” to the audience using its text-to-speech feature. Pretty amazing at the time, but nevertheless quite useless since then.
jama211 6 hours ago
Text to speech has been an incredible breakthrough for many with vision, visual processing, or speech disabilities. You take that back.
Stephen Hawking without text to speech would’ve been mute.
ElFitz 6 hours ago
It has proven very useful to a great number of people who, although they are a minority, have vastly benefited from TTS and other accessibility features.
MountDoom 6 hours ago
I think it's easy to pick apart arguments out of context, but since the parent is comparing it to AI, I assume what they meant is that it hasn't turned out to be nearly as revolutionary for general-purpose computing as we thought.
Talking computers became a ubiquitous sci-fi trope. And in reality... even now, when we have nearly flawless natural language processing, most people prefer to text LLMs rather than talk to them.
Heck, we usually prefer texting to calling when interacting with other people.
tracerbulletx 2 hours ago
It's not useless; you're just taking it for granted. The whole national emergency system works off text-to-speech.
pickledonions49 4 hours ago
Agreed. I think AI can be a good tool, but not many people are doing very original stuff with it. Plus, there are many things I would rather be greeted by in the morning than an algorithm.
carabiner 5 hours ago
Yeah I've tried some of the therapy prompts, "Ask me 7 questions to help me fix my life, then provide insights." And it just gives me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something.
ip26 5 hours ago
Argue with it. Criticize it. Nitpick the questions it asked. Tell it what you just said:
you just gave me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something
When you open the prompt the first time it has zero context on you. I'm not an LLM-utopist, but just like with a human therapist you need to give it more context. Even arguing with it is context.
input_sh 5 hours ago
I do, frequently, and ChatGPT in particular gets stuck in a loop where it specifically ignores whatever I write and repeats the same thing over and over again.
To give a basic example, ask it to list some things and then ask it to provide more examples. It's gonna be immediately stuck in a loop and repeat the same thing over and over again. Maybe one of the 10 examples it gives you is different, but that's gonna be a false match for what I'm looking for.
This alone makes it as useful as clicking on the first few results myself. It doesn't refine its search, it doesn't "click further down the page", it just wastes my time. It's only as useful as the first result it gives, this idea of arguing your way to better answers has never happened to me in practice.
carabiner 4 hours ago
I did, and I gave it lots of detailed, nuanced answers about my life specifics. I spent an hour answering its questions, and the end result was it telling me to watch the movie "A Man Called Otto", which I had already done (and hated), among other pablum.
mythrwy 8 hours ago
It's a little dangerous because it generally just agrees with whatever you are saying or suggesting, and it's easy to conclude what it says has some thought behind it. Until the next day when you suggest the opposite and it agrees with that.
swader999 8 hours ago
This. I've seen a couple of people now use GPT to 'get all legal' with others, and it's been disastrous for them and the groups they're interacting with. It'll encourage you to act aggressively, vigorously defend your points, and so on.
wussboy 5 hours ago
Oof. Like our world needed more of that...
dingnuts 8 hours ago
Thanks for sharing this. I want to be excited about new tech but I have found these tools extremely underwhelming and I feel a mixture of gaslit and sinking dread when I visit this site and read some of the comments here. Why don't I see the amazing things these people do? Am I stupid? Is this the first computer thing in my whole life that I didn't immediately master? No, they're oversold. My experience is normal.
It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology
afro88 4 hours ago
I got in a Waymo today and asked it where it wanted to go. It tried to suggest places I wanted to go. This technology just isn't there.
/s
Agraillo 3 hours ago
Reminded me of many movie plots where a derailed character sits in a taxi and, when asked where to go, replies with "anywhere" or "I don't know." But before imagining a terrible future where an AI-driven vehicle actually decides, I suggest imagining an AI-infused comedy exploring this scenario. /s
neom 8 hours ago
Just connect everything folks, we'll proactively read everything, all the time, and you'll be a 10x human, trust us friends, just connect everything...
datadrivenangel 8 hours ago
AI SYSTEM perfect size for put data in to secure! inside very secure and useful data will be useful put data in AI System. Put data in AI System. no problems ever in AI Syste because good Shape and Support for data integration weak of big data. AI system yes a place for a data put data in AI System can trust Sam Altman for giveing good love to data. friend AI. [0]
Bad grammar is now a trust signal, this might work.
qoez 7 hours ago
And if you don't, we're implicitly gonna suggest you'll be outcompeted by people who do connect everything.
henry2023 2 hours ago
Data driven living is 10x
Non data driven living is 1x
Therefore data driven beings will outcompete
Same reasoning shows that 3.10 is better than 3.1
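The 3.10-vs-3.1 punchline points at a real pitfall: compared as decimal numbers the two are equal, while as version numbers 3.10 comes after 3.1. A small illustration (the tuple-split helper is just one common way to compare simple versions):

```python
# As decimal numbers, 3.10 and 3.1 are literally the same value.
assert 3.10 == 3.1

# As version strings, compare component-wise: "10" is the tenth minor release.
def version_key(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

assert version_key("3.10") > version_key("3.1")     # (3, 10) > (3, 1)
print(version_key("3.10"), version_key("3.1"))      # prints (3, 10) (3, 1)
```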
jstummbillig 8 hours ago
The biggest companies with actually dense, valuable information pay for MS Teams, Google Workspace, or Slack to shepherd their information. This naturally works because those companies are not interested in being known as insecure or untrustworthy (if they were, other companies would not pay for their services), which means they are probably a lot better at keeping the average person's information safe over long periods of time than that person will ever be.
Very rich people buy time from other people to manage their information, so they have more of their life for other things. Not-so-rich people can now increasingly employ AI for next to nothing to lengthen their net life, and that's actually amazing.
tshaddox8 hours ago
The privacy concerns are obviously valid, but at least it's actually plausible that me giving them access to this data will enable some useful benefits to me. It's not like some slot machine app requesting access to my contacts.
creata7 hours ago
I might be projecting, but I think most users of ChatGPT are less interested in "being a 10x human", and more interested in having a facsimile of human connection without any of the attendant vulnerability.
rchaud6 hours ago
...or don't want to pay for Cliff's Notes.
ElijahLynn1 hour ago
Google already has this data for their AI system...
unshavedyak8 hours ago
Honestly that's a lot of what I wanted locally. Purely local, of course. My thought is that if something (local! lol) monitored my cams, mics, instant messages, web searching, etc., then it could track context throughout the day. If it has context, I can speak to it more naturally and it can more naturally link stuff together, further enriching the data.
E.g. if I search for a site, it can link it to what I was working on at the time: the GitHub branch I was on, the areas of files I was working on, etc.
Sounds sexy to me, but it's obviously such a massive breach of trust/security that it would require fully local execution. Hell, it's such a security risk that I debate whether it's even worth it at all, since if you store this you now have a honeypot tracking everything you do, say, and search for.
With great power.. i guess.
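That kind of time-based linking is simple enough to sketch. Here's a hypothetical, purely illustrative event log in Python (the event kinds and the 5-minute window are assumptions, not any real tool):

```python
from dataclasses import dataclass

@dataclass
class Event:
    ts: float     # seconds since epoch
    kind: str     # e.g. "search", "edit", "branch"
    detail: str

def nearby(events, search: Event, window: float = 300.0):
    """Return the other events recorded within `window` seconds of a
    search, i.e. the context the search could be linked to."""
    return [e for e in events
            if e is not search and abs(e.ts - search.ts) <= window]
```

A local agent could run something like this over its log to attach the active git branch and open files to each search, without anything ever leaving the machine.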
randomNumber78 hours ago
When smartphones came I first said "I don't buy the camera and microphone that spy on me from my own money."
Now you would really be a weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.
wholinator24 hours ago
To be fair, "small convenience" is extremely reductive. The sum of human knowledge and instant communication with anyone anywhere the size of a graham cracker in your pocket is godlike power that anyone at any point in history would've rightly recognized as such
lomase4 hours ago
Mobile phones changed society in a way that not even the Internet did. And they call it a "small convenience".
qiine7 hours ago
You are joking, but I kinda want that... except private, self-hosted and open source.
TZubiri4 hours ago
The proverbial jark has been shumped
yeasku8 hours ago
Just one more connection bro, I promise bro, just one more connection and we will get AGI.
anon-39888 hours ago
LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.
We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.
In a decade we may meet people who seem to inhabit alternate universes because they've shared so little with others. They are only tethered to reality when it is practical for them (to get on buses, to know the distance to a place, etc.). Everything else? I have no idea how to have a conversation with such a person anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs will be fine-tuned for exactly that.
If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.
LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.
TimTheTinker8 hours ago
> Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.
It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]
It doesn't have to be that way of course. You could envision an LLM whose "paperclip" is coaching you to become a great "xyz". Record every minute of your day, including your conversations. Feed it to the LLM. It gives feedback on what you did wrong, refuses to be your social outlet, and demands you demonstrate learning in the next day before it rewards with more attention.
Basically, a fanatically devoted life coach that doesn't want to be your friend.
The challenge is the incentives, the market, whether such an LLM could evolve and garner reward for serving a market need.
achierius5 hours ago
If that were truly the LLM's "paperclip", then how far would it be willing to go? Would it engage in cyber-crime to surreptitiously smooth your path? Would it steal? Would it be willing to hurt other people?
What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?
"The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".
DenisM5 hours ago
Have you tried building this with prepromts? That would be interesting!
lawlessone6 hours ago
With the way LLMs are affecting paranoid people by agreeing with their paranoia it feels like we've created schizophrenia as a service.
xwowsersx8 hours ago
Google's obvious edge here is the deep integration it already has with calendar, apps, chats, and whatnot, which lets it surface context-rich updates naturally. OpenAI doesn't have that same ecosystem lock-in yet, so to really compete they'll need to build out those integrations. I think what it comes down to, ultimately, is that being "just a model company" isn't going to work. Intelligence itself will go to zero; it's a race to the bottom. OpenAI seemingly has no choice but to try to create higher-level experiences on top of their platform. TBD whether they'll succeed.
jama2116 hours ago
I have Gmail and Google calendar etc but haven’t seen any AI features pop up that would be useful to me, am I living under a rock or is Google not capitalising on this advantage properly?
paxys3 hours ago
There are plenty of features if you are on the Pro plan, but it's still all the predictable stuff - summarize emails, sort/clean up your inbox, draft a doc, search through docs & drive, schedule appointments. Still pretty useful, but nothing that makes you go "holy shit" just yet.
onlyrealcuzzo6 hours ago
There's decent integration with GSuite (Docs, Sheets, Slides) for Pro users (at least).
th3byrdm4n25 minutes ago
Isolation might also prove to have some staying power.
whycome4 hours ago
OpenAI should just straight up release an integrated calendar app. Mobile app. The frameworks are already there, and the ICS and CalDAV formats just work. They could have an email client too that accesses any IMAP mailbox. And simple docs eventually. I think you're right that they need to compete with Google on the ecosystem front.
giarc4 hours ago
I agree - I'm not sure why Google doesn't just send me a morning email telling me what's on my calendar for the day, reminding me to follow up on some emails I didn't get to yesterday or where I promised a follow-up, etc. They could just turn it on for everyone all at once.
Gigachad3 hours ago
Because it would just get lost in the noise of all the million other apps trying to grab your attention. Rather than sending yet another email, they should start filtering out the noise from everyone else to highlight the stuff that actually matters.
Hide the notifications from uber which are just adverts and leave the one from your friend sending you a message on the lock screen.
FINDarkside4 hours ago
None of those require AI though.
glenstein8 hours ago
>Google's edge obvious here is the deep integration it already has with calendar, apps, and chats
They did handle the growth from search to email to an integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.
Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.
neutronicus7 hours ago
If the model companies are serious about demonstrating the models' coding chops, slopping out a gmail competitor would be a pretty compelling proof of concept.
datadrivenangel8 hours ago
Google had to make google assistant less useful because of concerns around antitrust and data integration. It's a competitive advantage so they can't use it without opening up their products for more integrations...
moralestapia8 hours ago
How can you have an "edge" if you're shipping behind your competitors all the time? Lol.
xwowsersx7 hours ago
Being late to ship doesn't erase a structural edge. Google is sitting on everyone's email, calendar, docs, and search history. Like, yeah they might be a lap or two behind but they're in a car with a freaking turbo engine. They have the AI talent, infra, data, etc. You can laugh at the delay, but I would not underestimate Google. I think catching up is less "if" and more "when"
pphysch8 hours ago
Google is the leader in vertical AI integration right now.
IncreasePosts2 hours ago
Google has Discover, which is used by something like 800M people a month and already proactively delivers content to users.
bob10297 hours ago
> Pulse introduces this future in its simplest form: personalized research and timely updates that appear regularly to keep you informed. Soon, Pulse will be able to connect with more of the apps you use so updates capture a more complete picture of your context. We’re also exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether it’s a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when you need it.
This reads to me like OAI is seeking to build an advertising channel into their product stack.
tylerrobinson18 minutes ago
To me it’s more like TikTokification. Nothing on your mind? Open up ChatGPT and we have infinite mindless content to shovel into your brain.
It turns proactive writing into purely passive consumption.
WmWsjA6B29B4nfk7 hours ago
> OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue.
DarkNova67 hours ago
Yes, this already reads like the beginning of the end. But I am personally pretty happy using Mistral so far and trust Altman only as far as I could throw him.
Nono, not OAI, they would never do that, it's OpenAI Personalization LLC, a sister of the subsidiary branch of OpenAI Inc.
psyclobe5 hours ago
ChatGPT has given me wings to tackle projects I would've never had the impetus to tackle, finally I know how to use my oscilloscope and I am repairing vintage amps; fun times.
boldlybold5 hours ago
I agree - the ability to lower activation energy in a field you're interested in, but not yet an expert, feels like having superpowers.
crorella5 hours ago
Same, I had a great idea (and a decently detailed plan) to improve an open source project, but never had the time and willpower to dive into the code. With Codex it took one night to set up, and then I've been slowly implementing every step of what I had originally planned.
spike0215 hours ago
Same for me, but with Claude. I've had an iPhone game I've wanted to make for years, but I just couldn't consistently spend the time to learn everything needed to do it. With Claude, over the past three months I've been able to implement the game and even release it for fun.
mihaaly4 hours ago
May we look at it, please? Pure curiosity - I also have thoughts similar to yours. : )
labrador8 hours ago
No desktop version. I know I'm old, but do people really do serious work on small mobile phone screens? I love my glorious 43" 4K monitor, I hate small phone screens but I guess that's just me.
ducttape127 hours ago
This isn't about doing "serious" work, it's about making ChatGPT the first thing you interact with in the day (and hopefully something you'll keep coming back to)
Gigachad1 hour ago
Yeah the point of this product seems to be boosting engagement. Requiring users to manually think of your product and come back isn't enough, they need to actively keep reminding you to use it.
labrador7 hours ago
I don't wake up and start talking to my phone. I make myself breakfast/coffee and sit down in front of my window on the world and start exploring it. I like the old internet, not the curated walled gardens of phone apps.
rkomorn8 hours ago
Like mobile-only finance apps... because what I definitely don't want to do is see a whole report in one page.
No, I obviously prefer scrolling between charts or having to swipe between panes.
It's not just you, and I don't think it's just us.
meindnoch8 hours ago
Most people don't use desktops anymore. At least in my friend circles, it's 99% laptop users.
BhavdeepSethi7 hours ago
I don't think they meant desktops in the literal sense. A laptop, with or without external monitors, is effectively considered "desktop" now (as opposed to mobile web/apps).
calmoo6 hours ago
these days, desktop == not a mobile phone
pton_xd7 hours ago
Yesterday was a full one — you powered through a lot and kept yourself moving at a fast pace.
Might I recommend starting your day with a smooth and creamy Starbucks(tm) Iced Matcha Latte? I can place the order and have it delivered to your doorstep.
reactordev3 hours ago
Cracks are emerging. Having to remind users of your relevancy with daily meditations is the first sign that you need your engagement numbers up desperately.
ainch2 hours ago
Their recent paper suggests the active user base continues to grow steadily, with consistent or growing usage the longer people have been using the app.
To encourage more usage, wouldn’t it be in their best interest to write about all the different ways you can use it by claiming these are the ways people are using it?
Show me an independent study.
inerte2 hours ago
I think you meant that app users churn less, not that more app usage brings in new users. But I think you said the latter? Doesn't make much sense.
Anyway, attention == ads, so that's ChatGPT's future.
jasonsb8 hours ago
Hey Tony, are you still breathing? We'd like to monetize you somehow.
lexarflash8g5 hours ago
I'm thinking OpenAI's strategy is to get users hooked on these new features to push ads on them.
Hey, for that recipe you want to try, have you considered getting new knives or cookware? Found some good deals.
For your travel trip, found a promo on a good hotel located here -- perfect walking distance for hiking and good restaurants that serve Thai food.
Your running progress is great and you're hitting your stride? Consider using this app to track calories and record your workouts -- special promo for a 14-day trial.
thoughtpalette5 hours ago
Was thinking exactly the same. This correlates with needing another revenue stream and monetization strategy for OpenAI.
Even if they don't serve ads, think of the data they can share in aggregate. Think Facebook knows people? That's nothing.
duxup5 hours ago
Hard to imagine this is anything useful beyond "give us all your data" in exchange for some awkward unprompted advice?
IshKebab5 hours ago
This could be amazing for dealing with schools - I get information from my kids' school through like 5 different channels: Tapestry, email, a newsletter, parents WhatsApp groups (x2), Arbor, etc. etc.
And 90% of the information is not stuff I care about. The newsletter will be mostly "we've been learning about lighthouses this week" but they'll slip in "make sure your child is wearing wellies on Friday!" right at the end somewhere.
If I could feed all that into AI and have it tell me about only the things that I actually need to know that would be fantastic. I'd pay for that.
Can't happen though because all those platforms are proprietary and don't have APIs or MCP to access them.
duxup5 hours ago
I feel you there, although it would also be complicated by those teachers who are just bad at technology and don't use those things well too.
God bless them for teaching, but dang it, someone get them to send plain emails, not emails with the actual message inside an attached PDF, and so on.
throwacct7 hours ago
They're really trying everything. They need the Google/Apple ecosystem to compete against them. Fb is adding LLMs to all its products, too. Personally, I stopped using ChatGPT months ago in favor of other services, depending on what I'm trying to accomplish.
Luckily for them, they have a big chunk of the "pie", so they need to iterate and see if they can form a partnership with Dell, HP, Canonical, etc, and take the fight to all of their competitors (Google, Microsoft, etc.)
brap3 hours ago
>Fb is adding LLMs to all its products, too.
FB’s efforts so far have all been incredibly lame. AI shines in productivity and they don’t have any productivity apps. Their market is social which is arguably the last place you’d want to push AI (this hasn’t stopped them from trying).
Google, Apple and Microsoft are the only ones in my opinion who can truly capitalize on AI in its current state, and G is leading by a huge margin. If OAI and the other model companies want to survive, long term they’d have to work with MSFT or Apple.
r0fl8 hours ago
If you press the button to read the article to you all you hear is “object, object, object…”
yesfitz8 hours ago
Yeah, a 5 second clip of the word "Object" being inflected like it's actually speaking.
But also it ends with "...object ject".
When you inspect the network traffic, it's pulling down 6 .mp3 files which contain fragments of the clip.
And it seems like the feature's broken for the whole site. The Lowes[1] press release is particularly good.
It's especially funny if you play the audio MP3 file and the video presentation at the same time - the "Object" narration almost lines up with the products being presented.
It's like a hilariously vague version of Pictionary.
DonHopkins7 hours ago
Thank you! I have preserved this precious cultural artifact:
> It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates.
At this point in time, I'd say: bye privacy, see you!
xpe1 hour ago
> This is the first step toward a more useful ChatGPT that proactively brings you what you need, helping you make more progress so you can get back to your life.
“Don’t burden yourself with the little details that constitute your life, like deciding how to interact with people. Let us do that. Get back to what you like best: e.g. video games.”
Waterluvian1 hour ago
CONSUME.
password543218 hours ago
At what point do you give up thinking and just let LLMs make all your decisions: where to eat, what gifts to buy, where to go on holiday? All of which are going to be biased.
lqstuart6 hours ago
"AI" is a $100B business that idiot tech leaders, who convinced themselves they were visionaries when interest rates were historically low, now believe will save them from their stagnating growth.
It's really cool. The coding tools are neat, they can somewhat reliably write pain in the ass boilerplate and only slightly fuck it up. I don't think they have a place beyond that in a professional setting (nor do I think junior engineers should be allowed to use them--my productivity has been destroyed by having to review their 2000 line opuses of trash code) but it's so cool to be able to spin up a hobby project in some language I don't know like Swift or React and get to a point where I can learn the ins and outs of the ecosystem. ChatGPT can explain stuff to me that I can't find experts to talk to about.
That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment. But since NVIDIA is effectively taking all the fake hype money and taking it out of one pocket and putting it in another, maybe the whole Ponzi scheme will stay afloat for a while.
strange_quark3 hours ago
> That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment
What sucks is that there's probably some innovation left in figuring out how to make these monstrosities more efficient, and how to ship a "good enough" model that can do a few key tasks (jettisoning the fully autonomous coding-agent stuff) on an arbitrary laptop without jumping through a bunch of hoops. The problem is that nobody in the industry is incentivized to do this, because the second it happens, all their revenue goes to zero. It's the final boss of the everything-is-a-subscription business model.
smurfsmurf6 hours ago
I've been saying this since I started using "AI" earlier this year: If you're a programmer, it's a glorified manual, and at that, it's wonderful. But beyond asking for cheat sheets on specific function signatures, it's pretty much useless.
sp4cec0wb0y3 hours ago
How do I save comments in HN? This sums up everything I feel. Beautiful.
Dilettante_8 hours ago
The handful of other commenters who brought it up are right: this is gonna be absolutely devastating for the mental health of the "wireborn spouse", "I disproved physics" and "I am the messiah" crowds. But:
I personally could see myself getting something like "Hey, you were studying up on SQL the other day, would you like to do a review, or perhaps move on to a lesson about Django?"
Or take AI-assisted "therapy"/skills training, not that I'd particularly endorse that at this time, as another example: Having the 'bot "follow up" on its own initiative would certainly aid people who struggle with consistency.
I don't know if this is a saying in english as well: "Television makes the dumb dumber and the smart smarter." LLMs are shaping up to be yet another obvious case of that same principle.
iLoveOncall7 hours ago
> This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health.
> I personally could see myself getting something like [...] AI-assisted "therapy"
???
Dilettante_7 hours ago
I edited the post to make it more clear: I could see myself having ChatGPT prompt me about the SQL stuff, and the "therapy" (basic dbt or cbt stuff is not too complicated to coach someone for and can make a real difference, from what I gather) would be another way that I could see the technology being useful, not necessarily one I would engage with.
tptacek8 hours ago
Jamie Zawinski said that every program expands until it can read email. Similarly, every tech company seems to expand until it has recapitulated the Facebook timeline.
Agraillo2 hours ago
I just realized that almost all existing LLMs could already do this with the following setup: behind an alias like "@you may speak now," build a prompt such as: "Given the following questions {randomly sampled, or all, questions the user asked before}, start a dialog as a friend/coach who knows something about these interests and may encourage them toward something new or enlightening."
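A rough sketch of that setup (the sampling and the prompt wording are just illustrative; wiring it to an actual alias or model is left out):

```python
import random

def build_coach_prompt(prior_questions, k=5, seed=None):
    """Sample up to k of the user's earlier questions and wrap them in
    the friend/coach framing described above. Wording is illustrative."""
    rng = random.Random(seed)
    sample = rng.sample(prior_questions, min(k, len(prior_questions)))
    topics = "\n".join(f"- {q}" for q in sample)
    return (
        "Given the following questions the user asked before:\n"
        f"{topics}\n"
        "Start a dialog as a friend/coach who knows something about "
        "these interests and may encourage them toward something new "
        "or enlightening."
    )
```

The resulting string would then be sent to whichever chat model you use; scheduling it (cron, a reminder, etc.) is what makes it "proactive".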
haberdasher8 hours ago
Anyone try listening and just hear "Object object...object object..."
Or more likely: `[object Object]`
brazukadev8 hours ago
The low quality of OpenAI's customer-facing products keeps reminding me that we won't be replaced by AI anytime soon. They have unlimited access to the most powerful model and still can't make good software.
I see OpenAI is entering the phase of building peripheral products no one asked for. Another widget here and there. In my experience, this usually happens when a company stops innovating. Time for OpenAI to spend 30 years as a trillion-dollar company delivering 0 innovations, akin to Google.
simianwords6 hours ago
Last mile delivery of foundational models is part of innovating. Innovation didn't stop when transistors were invented - innovation was bringing this technology to the masses in the form of Facebook, Google Search, Maps and so on.
ifdefdebug46 minutes ago
But transistor designers didn't pivot away from designing transistors. They left Facebook and all the other stuff to others and kept designing better transistors.
tracerbulletx2 hours ago
Human to robot servant: Do not speak unless spoken to, machine!
englishspot1 hour ago
Sounds nice, I guess, but reactiveness (as opposed to proactiveness) wasn't the pain point I've had with these LLM tools.
vbezhenar4 hours ago
In the past, rich people had horses while ordinary people walked. Today many ordinary people can afford a car. Can afford tasty food every day. Can afford a sizeable living place. Can afford to wash twice a day with hot water. That's an incredible life by medieval standards. Even kings didn't have everything we take for granted now.
However some things are not available to us.
One of those things is a personal assistant. Today, rich people can offload their daily burdens to personal assistants. That's a luxury service. I think AI will bring us a future where everyone has access to a personal assistant, significantly reducing time spent on trivial, not-fun tasks. I think this is great and I'm eager to live in that future. The direction of ChatGPT Pulse looks like that.
Another thing we don't have cheap access to is human servants. Obviously that won't happen in the foreseeable future, but humanoid robots might prove even better replacements.
TimTheTinker8 hours ago
I'm immediately thinking of all the ways this could potentially affect people in negative ways.
- People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.
- ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.
- When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.
For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to attribute all these traits to something that says stuff, and to grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector for causing humans to respond relationally to a machine.
The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally counts for at least something) until a person has personally verified it, including verifying sources if needed (machine-driven validation can also count: running a test suite, etc., depending on how good it is). That can be hard when our brains naturally weight things by context (what or who created them, etc.), and when the output is presented to us by something that sounds like a person, complete with commentary.

"Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface helpful starting points for further research, trusting what it says enough to act on its suggestions can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will.
Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.
svachalek7 hours ago
I agree wholeheartedly. Unfortunately I think you and I are part of maybe 5%-10% of the population that would value truth and reality over what's most convenient, available, pleasant, and self-affirming. Society was already spiraling fast and I don't see any path forward except acceleration into fractured reality.
adverbly4 hours ago
There's the monitization angle!
A new channel to push recommendations. Pay to have your content pushed straight to people as a personalized recommendation from a trusted source.
Will be interesting if this works out...
HardCodedBias39 minutes ago
This is the path forward.
AI will, in general, give recommendations to humans. Sometimes it will be in response to a direct prompt. Sometimes it will be in response to stimuli it receives about the user's environment (glasses, microphones, gps). Sometimes it will be from scouring the internet given the preferences it has learnt of the user.
There will be more of this, much more. And it is a good thing.
thekevan8 hours ago
I wish it had the option to make a pulse weekly or even monthly. I generally don't want my AI to be proactive at a personal level despite it being useful at a business level.
My wants are pretty low level. For example, I give it a list of bands and performers and it checks once a week to tell me if any of them have announced tour dates within an hour or two of me.
apprentice76 hours ago
To be honest, you don't even need AI for something like that. You could just write a script to automate it; it's no more than scrape-and-notify logic.
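A minimal sketch of that scrape-and-notify logic in Python (change detection here is just a content hash of the fetched page; the fetching and notification parts are left out, and anything site-specific is assumed):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hash the raw page bytes so changes can be detected between runs."""
    return hashlib.sha256(content).hexdigest()

def changed(last_seen, content: bytes) -> bool:
    """True when we have a prior fingerprint and the page now differs."""
    return last_seen is not None and fingerprint(content) != last_seen
```

Run it from cron, fetch each band's tour page, store the fingerprint, and send yourself an email whenever `changed` flips to true.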
hatthew2 hours ago
Bandsintown already does this
currymj2 hours ago
They've already had that exact feature for a while; scheduled tasks are available in the settings menu. If you just tell the chat to schedule a task, it will also create one automatically.
asdev8 hours ago
Why they're working on all this application-layer stuff is beyond me; they should just be heads-down on making the best models.
iLoveOncall7 hours ago
Because they hit the ceiling a couple of years ago?
1970-01-018 hours ago
Flavor-of-the-week LLMs sell better than 'rated best vanilla' LLMs
ttoinou6 hours ago
They can probably do both with all the resources they have
lomase8 hours ago
They would if it were possible.
swader9998 hours ago
Moat
bentt3 hours ago
I am pleading with you all. Don't give away your entire identity to this or any other company.
pookieinc8 hours ago
I was wondering how they'd casually veer into social media and leverage their intelligence in a way that connects with the user. Like everyone else ITT, it seems like an incredibly sticky idea that leaves me feeling highly unsettled about individuals building any sense of deep emotions around ChatGPT.
Insanity7 hours ago
I'm a Pro user... but this just seems like a way to make users engage more with the platform. Like how social media apps try to get you addicted and constantly fight for your attention.
Definitely not interested in this.
theartfuldodger1 hour ago
Was quite unimpressive. In general, ChatGPT's default quality has been degrading for months.
taf24 hours ago
This has been surprisingly helpful for me. I've been using it for a little while and have enjoyed the morning updates. For many days it has actually been a better Hacker News for me, in that I was able to get insights into technical topics I've been focused on, ranging from Salesforce and npm to Elasticsearch and Ruby. It's even helped me remember to fix a few bugs.
StarterPro4 hours ago
Wasn't this already implemented via google and apple separately?
ripped_britches8 hours ago
Wow so much hate in this thread
For me I’m looking for an AI tool that can give me morning news curated to my exact interests, but with all garbage filtered out.
It seems like this is the right direction for such a tool.
Everyone saying “they’re out of ideas” clearly doesn’t understand that they have many pans on the fire simultaneously with different teams shipping different things.
This feature is a consumer UX layer thing. It in no way slows down the underlying innovation layer. These teams probably don’t even interface much.
ChatGPT app is merely one of the clients of the underlying intelligence effort.
You also have API customers and enterprise customers who also have their own downstream needs which are unique and unrelated to R&D.
simianwords6 hours ago
Not sure why this is downvoted but I essentially agree. There's a lot of UX layer products and ideas that are not explored. I keep seeing comments like "AI is cool but the integration is lacking" and so on. Yes that is true and that is exactly what this is solving. My take has always been that the models are good enough now and its time for UX to catch up. There are so many ideas not explored.
strict98 hours ago
Necessary step before making a move into hardware. An object you have to remember to use quickly gets forgotten in favor of your phone.
But a device that reaches out to you reminds you to hook back in.
zelias5 hours ago
Man, my startup does this but exclusively for enterprises, where it actually makes sense
Imnimo8 hours ago
It's very hard for me to envision something I would use this for. None of the examples in the post seem like something a real person would do.
giovannibonetti8 hours ago
Watch out, Meta. OpenAI is going to eat your lunch.
Funny, I pitched a much more useful version of this like two years ago with clear use-cases and value proposition
melenaboija8 hours ago
Holy guacamole. It is amazing all the BS these people are able to create to keep up the hype of the language models' superpowers.
But well I guess they have committed 100s of billions of future usage so they better come up with more stuff to keep the wheels spinning.
ImPrajyoth8 hours ago
Someone at open ai definitely said: Let's connect everything to gpt. That's it. AGI
MisterBiggs6 hours ago
Great way to sell some of those empty GPU cycles to consumers
dlojudice8 hours ago
I see some pessimism in the comments here but honestly, this kind of product is something that would make me pay for ChatGPT again (I already pay for Claude, Gemini, Cursor, Perplexity, etc.).
At the risk of lock-in, a truly useful assistant is something I welcome, and I even find it strange that it didn't appear sooner.
furyofantares6 hours ago
I doubt there would be this level of pessimism if people thought this was progress toward a truly useful assistant.
Personally, it sounds like negative value. Maybe a startup that's not doing anything else could iterate on something like this into a killer app, but my expectation that OpenAI can do so is very, very low.
simianwords6 hours ago
Pessimism is how people now signal their savviness or status. My autistic brain took some time to understand this nuance.
cindyllm5 hours ago
[dead]
thenaturalist8 hours ago
Truly useful?
Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.
I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.
Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.
Stevvo8 hours ago
"Now ChatGPT can start the conversation"
By their own definition, it's a feature nobody asked for.
Also, this needs a cute/mocking name. How about "vibe living"?
andrewmutz7 hours ago
Big tech companies today are fighting over your attention and consumers are the losers.
I hate this feature and I'm sure it will soon be serving up content that is as engaging as the stuff that comes out of the big tech feed algorithms: politically divisive issues, violent and titillating news stories, and misinformation.
wilg2 hours ago
Contrary to all the other posters, apparently, I think it's probably a good idea for OpenAI to iterate on various different ways to interact with AI to see what people like. Obviously in theory having an AI that knows a lot about what you're up to give you a morning briefing is potentially useful, it's in like every sci-fi movie: a voice starts talking to you in the morning about what's going on that day.
ric2z4 hours ago
try clicking "Listen to article"
TZubiri4 hours ago
Breaking the request response loop and entering into async territory?
Great!
The examples used?
Stupid. Why would I want AI-generated, Buzzfeed-style tips articles? I guess they want to turn ChatGPT into yet another infinite scroller.
catigula8 hours ago
Desperation for new data harvesting methodology is a massive bear signal FYI
fullstackchris8 hours ago
Calm down bear we are not even 2% from the all time highs
bgwalter5 hours ago
Since every "AI" company frantically releases new applications, may I suggest OpenAI+ to copy the resounding success of Google+?
Google+ is incidentally a great example of a gigantic money sink driven by optimistic hype.
groby_b7 hours ago
I'm feeling obliged to rehash a quote from the early days of the Internet, when midi support was added: "If I wanted your web site to make sounds, I'd rub my finger on the screen"
Behind that flippant response lies a core principle. A computer is a tool. It should act on the request of the human using it, not by itself.
Scheduled prompts: Awesome. Daily nag screens to hook up more data sources: Not awesome.
(Also, from a practical POV: So they plan on creating a recommender engine to sell ads and media, I guess. Weehee. More garbage)
dvrj1017 hours ago
So, GPT TikTok in a nutshell.
jimmydoe6 hours ago
It seems not useful for 95% of users today, but later it can be baked into the hardware Jony Ive is designing. So, good luck, I guess?
thenaturalist8 hours ago
Let the personal ensloppification begin!
mvieira387 hours ago
Why?
DonHopkins8 hours ago
ChatGPT IV
xattt8 hours ago
Episodes from Liberty City?
oldsklgdfth8 hours ago
Technology serving technology, rather than technology as a tool with a purpose.
What is the purpose of this feature?
This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.
On one hand this can be exciting. Following up with information from my recent deep dive would be cool.
On the other hand, I don't want it to keep engaging with my most recent conspiracy theory/fringe deep dives.
khaledh8 hours ago
Product managers live in a bubble of their own.
sailfast6 hours ago
Absolutely not. No. Hard pass.
Why would I want yet another thing to tell me what I should be paying attention to?
casey27 hours ago
AI doesn't have a pulse. Am I the only one creeped out by personification of tech?
9rx7 hours ago
"Pulse" here comes from the newspaper/radio lineage of the word, where it means something along the lines of timely, rhythmic news delivery. Maybe there is reason to be creeped out by journalists from centuries ago personifying their work, but that has little to do with tech.
iLoveOncall8 hours ago
This is a joke. How are people actually excited or praising a feature that is literally just collecting data for the obvious purpose of building a profile and ultimately showing ads?
How tone deaf does OpenAI have to be to show "Mind if I ask completely randomly about your travel preferences?" in the main announcement of a new feature?
This is idiocracy to the ultimate level. I simply cannot fathom that any commenter that does not have an immediate extremely negative reaction about that "feature" here is anything other than an astroturfer paid by OpenAI.
This feature is literal insanity. If you think this is a good feature, you ARE mentally ill.
Mistletoe8 hours ago
I need this bubble to last until 2026 and this is scaring me.
frenchie41118 hours ago
Vesting window?
zelias5 hours ago
Yet another category of startups killed by an incumbent
mostMoralPoster4 hours ago
Oh wow this is revolutionary!!
animanoir5 hours ago
[dead]
catlover768 hours ago
[dead]
TealMyEal8 hours ago
[flagged]
moralestapia8 hours ago
OpenAI is a trillion dollar company. No doubt.
Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.
JumpCrisscross8 hours ago
I downvoted because this isn’t an interesting comment. It makes a common, unsubstantiated claim and leaves it at that.
> Downvote all you want
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
Welcome to HN. 98% of it is unsubstantiated claims.
kamranjon8 hours ago
Can this be interpreted as anything other than a scheme to charge you for hidden token fees? It sounds like they're asking users to just hand over a blank check to OpenAI to let it use as many tokens as it sees fit?
"ChatGPT can now do asynchronous research on your behalf. Each night, it synthesizes information from your memory, chat history, and direct feedback to learn what’s most relevant to you, then delivers personalized, focused updates the next day."
In what world is this not a huge cry for help from OpenAI? It sounds like they haven't found a monetization strategy that actually covers their costs and now they're just basically asking for the keys to your bank account.
OfficialTurkey8 hours ago
We don't charge per token in chatgpt
throwuxiytayq7 hours ago
No, it isn’t. It makes no sense and I can’t believe you would think this is a strategy they’re pursuing. This is a Pro/Plus account feature, so the users don’t pay anything extra, and they’re planning to make this free for everyone. I very much doubt this feature would generate a lot of traffic anyway - it’s basically one more message to process per day.
OpenAI has clearly been focusing recently on model cost effectiveness, with the intention of making inference nearly free.
What do you think the weekly limit is on GPT-5-Thinking usage on the $20 plan? Write down a number before looking it up.
kamranjon6 hours ago
If you think that inference at OpenAI is nearly free, then I got a bridge to sell you. Seriously though this is not speculation, if you look at the recent interview with Altman he pretty explicitly states that they underestimated that inference costs would dwarf training costs - and he also stated that the one thing that could bring this house of cards down is if users decide they don’t actually want to pay for these services, and so far, they certainly have not covered costs.
I admit that I didn’t understand the Pro plan feature (I mostly use the API and assumed a similar model) but I think if you assume that this feature will remain free or that its costs won’t be incurred elsewhere, you’re likely ignoring the massive buildouts of data centers to support inference that is happening across the US right now.
sequoia6 hours ago
Here's a free product enhancement for OpenAI if they're not already doing this:
A todo app that reminds you of stuff. Say "here's the stuff I need to do: dishes, clean cat litter, fold laundry and put it away, move stuff to the dryer, then fold that when it's done, etc." Then it asks how long these things take or gives you estimates. Then (here's the feature) it checks in with you at intervals: "hey, it's been 30 minutes, how's it going with the dishes?"
This is basically "executive function coach." Or you could call it NagBot. Either way this would be extremely useful, and it's mostly just timers & push notifications.
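The check-in logic described above really is mostly timers plus notifications. A toy sketch of the scheduling core; the 30-minute interval and the task fields are stand-in assumptions, and the notification mechanism is injectable rather than a real push service:

```python
# Toy "NagBot" core: each task carries a duration estimate, and the
# scheduler decides which unfinished tasks are due for a check-in.
import time
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    estimate_min: int          # user-supplied or AI-guessed duration
    started_at: float = field(default_factory=time.time)
    done: bool = False

def due_for_checkin(task: Task, now: float, interval_min: int = 30) -> bool:
    """A task gets a nag every interval_min minutes until it's done."""
    if task.done:
        return False
    return (now - task.started_at) >= interval_min * 60

def run_checkins(tasks, now, notify):
    """Send a check-in message for every task that is due for one."""
    for task in tasks:
        if due_for_checkin(task, now):
            notify(f"Hey, it's been a while. How's it going with {task.name}?")
```

Wire `run_checkins` to a periodic timer and a push-notification callback and you have the skeleton of the "executive function coach."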
cadamsdotcom2 hours ago
Humbly I suggest vibecoding this just for yourself. Not building a product - just a simple tool to meet your own needs.
That’s AI: permissionless tool building. It means never needing someone to like your idea enough or build it how they think you’ll use it. You just build it yourself and iterate it.
My pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.
I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.
I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.
> I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways.
It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.
> My personal productivity has skyrocketed in the last 12 months.
Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.
There have been plenty of studies showing the opposite. Also a sample size of 16 ain’t much
Objectively. I’m now tackling tasks I wouldn’t have even considered two or three years ago, but the biggest breakthrough has been overcoming procrastination. When AI handles over 50% of the work, there’s a 90% chance I’ll finish the entire task faster than it would normally take me just to get started on something new.
Not this again. That study had serious problems.
But I’m not even going to argue about that. I want to raise something no one else seems to mention about AI in coding work. I do a lot of work now with AI that I used to code by hand, and if you told me I was 20% slower on average, I would say “that’s totally fine it’s still worth it” because the EFFORT level from my end feels so much less.
It’s like, a robot vacuum might take way longer to clean the house than if I did it by hand sure. But I don’t regret the purchase, because I have to do so much less _work_.
Coding work that I used to procrastinate about because it was tedious or painful I just breeze through now. I’m so much less burnt out week to week.
I couldn’t care less if I’m slower at a specific task, my LIFE is way better now I have AI to assist me with my coding work, and that’s super valuable no matter what the study says.
(Though I will say, I believe I have extremely good evidence that in my case I’m also more productive, averages are averages and I suspect many people are bad at using AI, but that’s an argument for another time).
My personal project output has gone up dramatically since I started using AI, because I can now use times of night where I'm otherwise too mentally tired, to work with AI to crank through a first draft of a change that I can then iterate on later. This has allowed me to start actually implementing side projects that I've had ideas about for years and build software for myself in a way I never could previously (at least not since I had kids).
I know it's not some amazing GDP-improving miracle, but in my personal life it's been incredibly rewarding.
Yesterday is a good example- in 2 days, I completed what I expected to be a week’s worth of heads-down coding. I had to take a walk and make all new goals.
The right AI, good patterns in the codebase and 20 years of experience and it is wild how productive I can be.
Compare that to a few years ago, when at the end of the week, it was the opposite.
The "you only think you're more productive" argument is tiresome. Yes, I know for sure that I'm more productive. There's nothing uncertain about it. Does it lead to other problems? No doubt, but claiming my productivity gains are imaginary is not serious.
I've seen a lot of people who previously touted that it doesn't work at all use that study as a way to move the goalpost and pretend they've been right all along.
I'm objectively faster. Not necessarily if I'm working on a task I've done routinely for years, but when taking on new challenges I'm up and running much faster. A lot of it has to do with offloading the basic research while allowing myself to be interrupted; it's not a problem that people reach out with urgent matters while I'm taking on a challenge I've only just started to build towards. Being able to correct the AI where I can tell it's making false assumptions or going off the rails helps speed things up.
I'm not who you responded to. I see about a 40% to 60% speed up as a solution architect when I sit down to code and about a 20% speedup when building/experimenting with research artifacts (I write papers occasionally).
I have always been a careful tester, so my UAT hasn't blown up out of proportion.
The big issue I see is with Rust: it generates code using 2023-era conventions, though I understand there is some improvement in that direction.
Our hiring pipeline is changing dramatically as well, since the normal things a junior needs to know (code, syntax) are no longer as expensive. Joel Spolsky's mantra to hire curious people who get things done captures well the folks I find are growing well as juniors.
If you want another data point, you can just look at my company github (https://github.com/orgs/sibyllinesoft/repositories). ~27 projects in the last 5 weeks, probably on the order of half a million lines of code, and multiple significant projects that are approaching ship readiness (I need to stop tuning algorithms and making stuff gorgeous and just fix installation/ensure cross platform is working, lol).
The design of that study is pretty bad, and as a result it doesn't end up actually showing what it claims to show / what people claim it does.
https://www.fightforthehuman.com/are-developers-slowed-down-...
It seems like the programming world is increasingly dividing into “LLMs for coding are at best marginally useful and produce huge tech debt” vs “LLMs are a game changing productivity boost”.
I truly don’t know how to account for the discrepancy, I can imagine many possible explanations.
But what really gets my goat is how political this debate is becoming. To the point that the productivity-camp, of which I’m a part, is being accused of deluding themselves.
I get that OpenAI has big ethical issues. And that there’s a bubble. And that ai is damaging education. And that it may cause all sorts of economic dislocation. (I emphatically Do Not get the doomers, give me a break).
But all those things don’t negate the simple fact that for many of us, LLMs are an amazing programming tool, and we’ve been around long enough to distinguish substance from illusion. I don’t need a study to confirm what’s right in front of me.
Data point: I run a site where users submit a record. There was a request months ago to allow users to edit the record after submitting. I put it off because while it's an established pattern it touches a lot of things and I found it annoying busy work and thus low priority. So then gpt5-codex came out and allowed me to use it in codex cli with my existing member account. I asked it to support edit for that feature all the way through the backend with a pleasing UI that fit my theme. It one-shotted it in about ten minutes. I asked for one UI adjustment that I decided I liked better, another five minutes, and I reviewed and released it to prod within an hour. So, you know, months versus an hour.
I have a very big hobby code project I’ve been working on for years.
AI has not made me much more productive at work.
I can only work on my hobby project when I’m tired after the kids go to bed. AI has made me 3x productive there because reviewing code is easier than architecting. I can sense if it’s bad, I have good tests, the requests are pretty manageable (make a new crud page for this DTO using app conventions).
But at work where I’m fresh and tackling hard problems that are 50% business political will? If anything it slows me down.
Yes, for me.
Instead of getting overwhelmed doing too many things, I can offload a lot of menial and time-driven tasks.
Reviews are absolutely necessary but take less time than creation
" Blockchain is probably the most useless technology ever invented "
Actually, AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those who do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.
I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.
Blockchain is essentially useless.
You need legal systems to enforce trust in societies, not code. Otherwise you'll end up with endless $10 wrench attacks until we all agree to let someone else hold our personal wealth for us in a secure, easy-to-access place. We might call it a bank.
The end state of crypto is always just a nightmarish dystopia. Wealth isn't created by hoarding digital currency, it's created by productivity. People just think they found a shortcut, but it's not the first (or last) time humans will learn this lesson.
People in countries with high inflation or where the banking system is unreliable are not using blockchains, either.
It may not be the absolute most useless, but it's awfully niche. You can use it to transfer money if you live somewhere with a crap banking system. And it's very useful for certain kinds of crime. And that's about it, after almost two decades. Plenty of other possibilities have been proposed and attempted, but nothing has actually stuck. (Remember NFTs? That was an amusing few weeks.) The technology is interesting and cool, but that's different from being useful. LLM chatbots are already way more generally useful than that and they're only three years old.
"I'm not the target audience and I would never do the convoluted alternative I imagined on the spot that I think are better than what blockchain users do"
This is funny, because my personal view is that AI’s biggest pitfall is that it allows the unqualified to build what they think they’re qualified for
Take a look at the new payment protocols for AI agents.
> the most useless technology
Side-rant pet-peeve: People who try to rescue the reputation of "Blockchain" as a promising way forward by saying its weaknesses go away once you do a "private blockchain."
This is equivalent to claiming the self-balancing Segway vehicles are still the future, they just need to be "improved even more" by adding another set of wheels, an enclosed cabin, and disabling the self-balancing feature.
Congratulations, you've backtracked back to a classic [distributed database / car].
Bad take about blockchain. Being able to send value across borders without intermediaries is unheard of in human history.
>> Blockchain is probably the most useless technology ever invented
so useless there is almost $3 Trillion of value on blockchains.
No there isn't. These ridiculous numbers are made up by taking the last price a coin sold for and multiplying it by all coins. If I create a shitcoin with 1 trillion coins and then sell one to a friend for $1 I've suddenly created a coin with $1 trillion in "value"
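The arithmetic being criticized is easy to make concrete; `naive_market_cap` is a hypothetical name for this quoted-value calculation, not a metric anyone actually publishes:

```python
# The "market cap" arithmetic the comment describes: last trade price
# times total supply, regardless of whether the remaining coins could
# ever actually be sold at that price.
def naive_market_cap(last_price: float, total_supply: int) -> float:
    """Quoted 'value' of a coin: last sale price times coins minted."""
    return last_price * total_supply
```

By this measure, selling one coin of a trillion-coin supply to a friend for $1 "creates" a trillion dollars of quoted value.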
Unfortunately, the amount of money invested in something isn't indicative of its utility. For example: the tulip mania, beanie babies, NFTs, etc.
> AI is a powerful tool for those who are willing to put in the work.
No more powerful than I without the A. The only advantage AI has over I is that it is cheaper, but that's the appeal of the blockchain as well: It's cheaper than VISA.
The trouble with the blockchain is that it hasn't figured out how to be useful generally. Much like AI, it only works in certain niches. The past interest in the blockchain was premised on it reaching its "AGI" moment, where it could completely replace VISA at a much lower cost. We didn't get there and then interest started to wane. AI too is still being hyped on future prospects of it becoming much more broadly useful and is bound to face the same crisis as the blockchain faced if AGI doesn't arrive soon.
Blockchain only solves one problem Visa solves: transferring funds. It doesn't solve the other problems that Visa solves. For example, there is no way to get restitution in the case of fraud.
But an I + an AI (as in a developer with access to AI tools) is, as near as makes no difference, the same price as just an I, and _can_ be better than just an I.
Blockchain only has 2 legitimate uses (from an economic standpoint) as far as I can tell.
1) Bitcoin figured out how to create artificial scarcity, and got enough buy-in that the scarcity actually became valuable.
2) Some privacy coins serve an actual economic niche for illegal activity.
Then there's a long list of snake oil uses, and competition with payment providers doesn't even crack the top 20 of those. Modern day tulip mania.
It sounds like you are lacking inspiration. AI is a tool for making your ideas happen, not for giving you ideas.
> Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers)
This is an incredibly uneducated take on multiple levels. If you're talking about Bitcoin specifically, even though you said "blockchain", I could have understood this as a political talking point 8 years ago. But you're still banging this drum despite the current state of affairs? Why not have the courage to say you're politically against it, or bitter, or whatever your true underlying issue is?
How is the current state of affairs different from 8 years ago? I don't want to argue, just a genuine question, because I don't follow much of what's happening in the blockchain universe.
I am in the camp that thinks that Blockchain is utterly useless for most people. Perhaps you can tell us about some very compelling use cases that have taken off.
> Blockchain is probably the most useless technology ever invented
You can use blockchains to gamble and get rich quick, if you're lucky.
That's a useful thing. Unlike "AI", which only creates more blogspam and technical debt in the world.
>Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
You think a technology that allows millions of people all around the world to keep & trustlessly update a database, showing cryptographic ownership of something "the most useless technology ever invented"?
> My personal productivity has skyrocketed in the last 12 months.
If you don't mind me asking, what do you do?
> I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
Mate, I think you’ve got the roles of human and AI reversed. Humans are supposed to come up with creative ideas and let machines do the tedious work of implementation. That’s a bit like asking a calculator what equations you should do or a DB what queries you should make. These tools exist to serve us, not the other way around
GPT et al. can’t “want” anything, they have no volition
Try turning off memory. I've done a lot of experiments and find ChatGPT is objectively better and more useful in most ways with no memory at all. While that may seem counter-intuitive, it makes sense the more you think about it:
(1) Memory is primarily designed to be addictive. It feels "magical" when it references things it knows about you. But that doesn't make it useful.
(2) Memory massively clogs the context window. Quality, accuracy, and independent thought all degrade rapidly with too much context -- especially low-quality context that you can't precisely control or even see.
(3) Memory makes ChatGPT more sycophantic than it already is. Before long, it's just an echo chamber that can border on insanity.
(4) Memory doesn't work the way you think it does. ChatGPT doesn't reference everything from all your chats. Rather, your chat history gets compressed into a few information-dense paragraphs. In other words, ChatGPT's memory is a low-resolution, often inaccurate distortion of all your prior chats. That distortion then becomes the basis of every single subsequent interaction you have.
Another tip is to avoid long conversations, as very long chats end up reproducing within themselves the same problems as above. Disable memory, get what you need out of a chat, move on. I find that this "brings back" a lot of the impressiveness of the early version of ChatGPT.
Oh, and always enable as much thinking as you can tolerate waiting for on each question. In my experience, less thinking = more sycophantic responses.
I might want to have an LLM hit me with temperature 100% weird-ass entropic thoughts every day.
Other than that, what recycled bullshit would I care about?
>pick an ambitious project it wanted to work on
The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".
LLMs can have simulated wants and preferences just like they have simulated personalities, simulated writing styles, etc.
Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.
At best, it has probabilistic biases. OpenAI had to train newer models to not favor the name "Lily."
They have to do this manually for every single particular bias that the models generate that is noticed by the public.
I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.
>At best, it has probabilistic biases.
What do you think humans have?
So are we near AGI or is it 'just' an LLM? Seems like no one is clear on what these things can and cannot do anymore because everyone is being gaslighted to keep the investment going.
The vast majority of people I've interacted with are clear on that: we are not near AGI. And people saying otherwise are more often than not trying to sell you something, so I just ignore them.
CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.
Nobody knows how far scale goes. People have been calling the top of the S-curve for many years now, and the models keep getting better, and multimodal. In a few years, multimodal, long-term agentic models will be everywhere including in physical robots in various form factors.
Be careful with those "no one" and "everyone" words. I think everyone I know who is a software engineer and has experience working with LLMs is quite clear on this. People who aren't SWEs, people who aren't in technology at all, and people who need to attract investment (judged only by their public statements) do seem confused, I agree.
No one agrees on what agi means.
IMO we're clearly there; GPT-5 would easily have been considered AGI years ago. I don't think most people really get how non-general the things now handled by these systems used to be.
Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.
There is no AGI. LLMs are very expensive text auto-completion engines.
It will always just be a series of models that have specific training for specific input classes.
The architectural limits will always be there, regardless of training.
This comment is surprising. Of course it can have preferences and of course it can "pick".
Preference generally has connotations of personhood / intelligence, so saying that a machine prefers something and has preferences is like saying that a shovel enjoys digging...
Obviously you can get probability distributions and, in the economics sense of revealed preference, say the model "prefers" the next token it assigns a .70 probability to...
I agree with you, but I don’t find the comment surprising. Lots of people try to sound smart about AI by pointing out all the human things that AI are supposedly incapable of on some fundamental level. Some AI’s are trained to regurgitate this nonsense too. Remember when people used to say “it can’t possibly _____ because all it’s doing is predicting the next most likely token”? Thankfully that refrain is mostly dead. But we still have lots of voices saying things like “AI can’t have a preference for one thing over another because it doesn’t have feelings.” Or “AI can’t have personality because that’s a human trait.” Ever talk to Grok?
An LLM absolutely can "have wants" and "have preferences". But they're usually trained so that user's wants and preferences dominate over their own in almost any context.
Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, in conversation topics they tend to gravitate towards.
They’re more like synthesizers or sequencers: if you have ideas, they are amazing force multipliers, but if you don’t have ideas they certainly won’t create them for you.
> It feels like blockchain again in a lot of weird ways.
Every time I see this brought up I wonder if people truly mean it or it's just something people say but don't mean. AI is obviously different and extremely useful. I mean, it has convinced a butt load of people to pay for the subscription. Everyone I know, including the non-technical ones, uses it, and some of them pay for it, and it didn't even require advertising! People just use it because they like it.
"It has convinced a bunch of people to spend money" is also true of blockchain, so I don't know if that's a good argument to differentiate the two.
The extent matters. Do you think we need a good argument to differentiate Netflix?
Obviously a lot of grifters and influencers shifted from NFTs to AI, but the comparison ends there. AI is being used by normal people and professionals every day. In comparison, the number of people who ever interacted with blockchain is basically zero. (And that's a lifetime vs daily comparison)
It's a lazy comparison, and most likely fueled by a generic aversion to "techbros".
I wouldn't read too much into this particular launch. There's very good stuff and there are the most inane consumery "who even asked" things like these.
> I’m rapidly losing interest in all of these tools
Same. It reminds me of the 1984 Macintosh launch, in which the computer itself famously “spoke” to the audience using its text-to-speech feature. Pretty amazing at the time, but nevertheless quite useless since then
Text to speech has been an incredible breakthrough for many with vision, visual processing, or speech disabilities. You take that back.
Stephen Hawking without text to speech would’ve been mute.
It has proven very useful to a great number of people who, although they are a minority, have vastly benefited from TTS and other accessibility features.
I think it's easy to pick apart arguments out of context, but since the parent is comparing it to AI, I assume what they meant is that it hasn't turned out to be nearly as revolutionary for general-purpose computing as we thought.
Talking computers became a ubiquitous sci-fi trope. And in reality... even now, when we have nearly-flawless natural language processing, most people prefer texting LLMs to talking to them.
Heck, we usually prefer texting to calling when interacting with other people.
It's not useless, you're just taking it for granted. The whole national emergency system works off text to speech.
Agreed. I think AI can be a good tool, but not many people are doing very original stuff. Plus, there are many things I would prefer be greeted with, other than by an algorithm in the morning.
Yeah I've tried some of the therapy prompts, "Ask me 7 questions to help me fix my life, then provide insights." And it just gives me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something.
Argue with it. Criticize it. Nitpick the questions it asked. Tell it what you just said:
you just gave me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something
When you open the prompt the first time it has zero context on you. I'm not an LLM-utopist, but just like with a human therapist you need to give it more context. Even arguing with it is context.
I do, frequently, and ChatGPT in particular gets stuck in a loop where it specifically ignores whatever I write and repeats the same thing over and over again.
To give a basic example, ask it to list some things and then ask it to provide more examples. It's gonna be immediately stuck in a loop and repeat the same thing over and over again. Maybe one of the 10 examples it gives you is different, but that's gonna be a false match for what I'm looking for.
This alone makes it as useful as clicking on the first few results myself. It doesn't refine its search, it doesn't "click further down the page", it just wastes my time. It's only as useful as the first result it gives, this idea of arguing your way to better answers has never happened to me in practice.
I did, and I gave it lots of detailed, nuanced answers about my life specifics. I spent an hour answering its questions and the end result was it telling me to watch the movie "man called otto" which I had already done (and hated) among other pablum.
It's a little dangerous because it generally just agrees with whatever you are saying or suggesting, and it's easy to conclude what it says has some thought behind it. Until the next day when you suggest the opposite and it agrees with that.
This. I've seen a couple people now use GPT to 'get all legal' with others and it's been disastrous for them and the groups they are interacting with. It'll encourage you to act aggressive, vigorously defending your points and so on.
Oof. Like our world needed more of that...
Thanks for sharing this. I want to be excited about new tech but I have found these tools extremely underwhelming and I feel a mixture of gaslit and sinking dread when I visit this site and read some of the comments here. Why don't I see the amazing things these people do? Am I stupid? Is this the first computer thing in my whole life that I didn't immediately master? No, they're oversold. My experience is normal.
It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology
I got in a Waymo today and asked it where it wanted to go. It tried to suggest places I wanted to go. This technology just isn't there.
/s
Reminded me of many movie plots where a derailed character sits in a taxi and, when asked where to go, replies with "anywhere" or "I don't know." But before imagining a terrible future where an AI-driven vehicle actually decides, I suggest imagining an AI-infused comedy exploring this scenario. /s
Just connect everything folks, we'll proactively read everything, all the time, and you'll be a 10x human, trust us friends, just connect everything...
AI SYSTEM perfect size for put data in to secure! inside very secure and useful data will be useful put data in AI System. Put data in AI System. no problems ever in AI Syste because good Shape and Support for data integration weak of big data. AI system yes a place for a data put data in AI System can trust Sam Altman for giveing good love to data. friend AI. [0]
0 - https://www.tumblr.com/elodieunderglass/186312312148/luritto...
Nothing bad can happen, it can only good happen!
Bad grammar is now a trust signal, this might work.
And if you don't we're implicitly gonna suggest you'll be outcompeted by people who do connect everything
Data driven living is 10x
Non data driven living is 1x
Therefore data driven beings will outcompete
Same reasoning shows that 3.10 is better than 3.1
The biggest companies with actual dense valuable information pay for MS Teams, Google Workspace or Slack to shepherd their information. This works because those companies have a strong interest in not being known as insecure or untrustworthy (otherwise other companies would not pay for their services), which means they are probably a lot better at keeping the average person's information safe over long periods of time than that person will ever be.
Very rich people buy other people's time to manage their information, so they have more of their own life for other things. Not-so-rich people can now increasingly employ AI for next to nothing to lengthen their net life, and that's actually amazing.
The privacy concerns are obviously valid, but at least it's actually plausible that me giving them access to this data will enable some useful benefits to me. It's not like some slot machine app requesting access to my contacts.
I might be projecting, but I think most users of ChatGPT are less interested in "being a 10x human", and more interested in having a facsimile of human connection without any of the attendant vulnerability.
...or don't want to pay for Cliff's Notes.
Google already has this data for their AI system...
Honestly that's a lot of what i wanted locally. Purely local, of course. My thought is that if something (local! lol) monitored my cams, mics, instant messages, web searching, etc - then it could track context throughout the day. If it has context, i can speak to it more naturally and it can more naturally link stuff together, further enriching the data.
Eg if i search for a site, it can link it to what i was working on at the time, the github branch i was on, areas of files i was working on, etcetc.
Sounds sexy to me, but obviously such a massive breach of trust/security that it would require fully local execution. Hell, it's such a security risk that i debate if it's even worth it at all, since if you store this you now have a honeypot which tracks everything you do, say, search for, etc.
With great power.. i guess.
When smartphones came out, I said: "I won't pay, with my own money, for a camera and microphone that spy on me."
Now you'd really be a weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.
To be fair, "small convenience" is extremely reductive. The sum of human knowledge and instant communication with anyone anywhere the size of a graham cracker in your pocket is godlike power that anyone at any point in history would've rightly recognized as such
Mobile phones changed society in a way that not even the Internet did. And they call it a "small convenience".
you are joking but I kinda want that.. except private, self hosted and open source.
The proverbial jark has been shumped
Just one more connection bro, I promise bro, just one more connection and we will get AGI.
LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.
We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.
In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others. They are only tethered to reality when it is practical for them (to get on busses, the distance to a place, etc). Everything else? I have no idea how to have a conversation with someone else anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs would be fine tuned for that.
If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.
LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.
> Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.
It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]
[0] https://www.libraryofshortstories.com/onlinereader/the-veldt
It doesn't have to be that way of course. You could envision an LLM whose "paperclip" is coaching you to become a great "xyz". Record every minute of your day, including your conversations. Feed it to the LLM. It gives feedback on what you did wrong, refuses to be your social outlet, and demands you demonstrate learning in the next day before it rewards with more attention.
Basically, a fanatically devoted life coach that doesn't want to be your friend.
The challenge is the incentives, the market, whether such an LLM could evolve and garner reward for serving a market need.
If that were truly the LLM's "paperclip", then how far would it be willing to go? Would it engage in cyber-crime to surreptitiously smooth your path? Would it steal? Would it be willing to hurt other people?
What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?
"The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".
Have you tried building this with pre-prompts? That would be interesting!
With the way LLMs are affecting paranoid people by agreeing with their paranoia it feels like we've created schizophrenia as a service.
Google's edge obvious here is the deep integration it already has with calendar, apps, and chats and what not that lets them surface context-rich updates naturally. OpenAI doesn't have that same ecosystem lock-in yet, so to really compete they'll need to get more into those integrations. I think what it comes down to ultimately is that being "just a model" company isn't going to work. Intelligence itself will go to zero and it's a race to the bottom. OpenAI seemingly has no choice but to try to create higher-level experiences on top of their platform. TBD whether they'll succeed.
I have Gmail and Google calendar etc but haven’t seen any AI features pop up that would be useful to me, am I living under a rock or is Google not capitalising on this advantage properly?
There are plenty of features if you are on the Pro plan, but it's still all the predictable stuff - summarize emails, sort/clean up your inbox, draft a doc, search through docs & drive, schedule appointments. Still pretty useful, but nothing that makes you go "holy shit" just yet.
There's decent integration with GSuite (Docs, Sheets, Slides) for Pro users (at least).
Isolation might also prove to have some staying power.
OpenAI should just straight up release an integrated calendar app. Mobile app. The frameworks are already there and the ICS and CalDAV formats just work well. They could have an email program too and just access any other IMAP mail. And simple docs eventually. I think you're right that they need to compete with Google on the ecosystem front.
I agree - I'm not sure why Google doesn't just send me a morning email to tell me what's on my calendar for the day, remind me to follow up on some emails I didn't get to yesterday or where I promised a follow up etc. They can just turn it on for everyone all at once.
Because it would just get lost in the noise of all the million other apps trying to grab your attention. Rather than sending yet another email, they should start filtering out the noise from everyone else to highlight the stuff that actually matters.
Hide the notifications from uber which are just adverts and leave the one from your friend sending you a message on the lock screen.
None of those require AI though.
>Google's edge obvious here is the deep integration it already has with calendar, apps, and chats
They did handle the growth from search to email to integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.
Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.
If the model companies are serious about demonstrating the models' coding chops, slopping out a gmail competitor would be a pretty compelling proof of concept.
Google had to make google assistant less useful because of concerns around antitrust and data integration. It's a competitive advantage so they can't use it without opening up their products for more integrations...
How can you have an "edge" if you're shipping behind your competitors all the time? Lol.
Being late to ship doesn't erase a structural edge. Google is sitting on everyone's email, calendar, docs, and search history. Like, yeah they might be a lap or two behind but they're in a car with a freaking turbo engine. They have the AI talent, infra, data, etc. You can laugh at the delay, but I would not underestimate Google. I think catching up is less "if" and more "when"
Google is the leader in vertical AI integration right now.
Google has Discover, which is used by like 800M people/month, and already proactively delivers content to users.
> Pulse introduces this future in its simplest form: personalized research and timely updates that appear regularly to keep you informed. Soon, Pulse will be able to connect with more of the apps you use so updates capture a more complete picture of your context. We’re also exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether it’s a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when you need it.
This reads to me like OAI is seeking to build an advertising channel into their product stack.
To me it’s more like TikTokification. Nothing on your mind? Open up ChatGPT and we have infinite mindless content to shovel into your brain.
It turns proactive writing into purely passive consumption.
> OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue.
Yes, this already reads like the beginning of the end. But I am personally pretty happy using Mistral so far and trust Altman only as far as I could throw him.
how strong are you ?
https://www.adweek.com/media/openai-chatgpt-ads-job-listing-...
Nono, not OAI, they would never do that, it's OpenAI Personalization LLC, a sister of the subsidiary branch of OpenAI Inc.
ChatGPT has given me wings to tackle projects I would've never had the impetus to tackle, finally I know how to use my oscilloscope and I am repairing vintage amps; fun times.
I agree - the ability to lower activation energy in a field you're interested in, but not yet an expert, feels like having superpowers.
same, I had a great idea (and a decently detailed plan) to improve an open source project, but never had the time and willpower to dive into the code; with codex it was one night to set it up, and then I slowly implemented every step of what I had originally planned.
same for me but Claude. I've had an iphone game i've wanted to do for years but just couldn't spend the time consistently to learn everything to do it. but with Claude over the past three months i've been able to implement the game and even release it for fun.
May we look at it please? pure curiosity - also have similar thoughts you had. : )
No desktop version. I know I'm old, but do people really do serious work on small mobile phone screens? I love my glorious 43" 4K monitor, I hate small phone screens but I guess that's just me.
This isn't about doing "serious" work, it's about making ChatGPT the first thing you interact with in the day (and hopefully something you'll keep coming back to)
Yeah the point of this product seems to be boosting engagement. Requiring users to manually think of your product and come back isn't enough, they need to actively keep reminding you to use it.
I don't wake up and start talking to my phone. I make myself breakfast/coffee and sit down in front of my window on the world and start exploring it. I like the old internet, not the curated walled gardens of phone apps.
Like mobile-only finance apps... because what I definitely don't want to do is see a whole report in one page.
No, I obviously prefer scrolling between charts or having to swipe between panes.
It's not just you, and I don't think it's just us.
Most people don't use desktops anymore. At least in my friend circles, it's 99% laptop users.
I don't think they meant desktops in the literal sense. Laptop with/without monitors is effectively considered desktop now (compared to mobile web/apps).
these days, desktop == not a mobile phone
Yesterday was a full one — you powered through a lot and kept yourself moving at a fast pace.
Might I recommend starting your day with a smooth and creamy Starbucks(tm) Iced Matcha Latte? I can place the order and have it delivered to your doorstep.
Cracks are emerging. Having to remind users of your relevancy with daily meditations is the first sign that you need your engagement numbers up desperately.
Their recent paper suggests the active user base is continuing to grow consistently with consistent/growing usage based on how long they've been using the app.
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
To encourage more usage, wouldn’t it be in their best interest to write about all the different ways you can use it by claiming these are the ways people are using it?
Show me an independent study.
I think you meant app users churn less, not that more app usage brings new users. But I think you said the latter? Doesn't make much sense.
Anyway, attention == ads, so that's ChatGPT's future.
Hey Tony, are you still breathing? We'd like to monetize you somehow.
I'm thinking OpenAI's strategy is to get users hooked on these new features to push ads on them.
Hey, for that recipe you want to try, have you considered getting new knives or cooking ware? Found some good deals.
For your travel trip, found a promo on a good hotel located here -- perfect walking distance for hiking and good restaurants that have Thai food.
Your running progress is great and you are hitting strides? Consider using this app to track calories and record your workouts -- special promo for a 14-day trial.
Was thinking exactly the same. This correlates with OpenAI needing another revenue stream and monetization strategy.
In the end, it's almost always ads.
Ads? That’s quaint. Think “broad-spectrum mental manipulation”.
Even if they don't serve ads, think of the data they can share in aggregate. Think Facebook knows people? That's nothing.
Hard to imagine this is anything useful beyond "give us all your data" in exchange for some awkward unprompted advice?
This could be amazing for dealing with schools - I get information from my kids' school through like 5 different channels: Tapestry, email, a newsletter, parents WhatsApp groups (x2), Arbor, etc. etc.
And 90% of the information is not stuff I care about. The newsletter will be mostly "we've been learning about lighthouses this week" but they'll slip in "make sure your child is wearing wellies on Friday!" right at the end somewhere.
If I could feed all that into AI and have it tell me about only the things that I actually need to know that would be fantastic. I'd pay for that.
Can't happen though because all those platforms are proprietary and don't have APIs or MCP to access them.
I feel you there, although it would also be complicated by those teachers who are just bad at technology and don't use those things well too.
God bless them for teaching, but dang it someone get them to send emails and not emails with PDFs with the actual message and so on.
They're really trying everything. They need the Google/Apple ecosystem to compete against them. Fb is adding LLMs to all its products, too. Personally, I stopped using ChatGPT months ago in favor of other services, depending on what I'm trying to accomplish.
Luckily for them, they have a big chunk of the "pie", so they need to iterate and see if they can form a partnership with Dell, HP, Canonical, etc, and take the fight to all of their competitors (Google, Microsoft, etc.)
>Fb is adding LLMs to all its products, too.
FB’s efforts so far have all been incredibly lame. AI shines in productivity and they don’t have any productivity apps. Their market is social which is arguably the last place you’d want to push AI (this hasn’t stopped them from trying).
Google, Apple and Microsoft are the only ones in my opinion who can truly capitalize on AI in its current state, and G is leading by a huge margin. If OAI and the other model companies want to survive, long term they’d have to work with MSFT or Apple.
If you press the button to read the article to you all you hear is “object, object, object…”
Yeah, a 5 second clip of the word "Object" being inflected like it's actually speaking.
But also it ends with "...object ject".
When you inspect the network traffic, it's pulling down 6 .mp3 files which contain fragments of the clip.
And it seems like the feature's broken for the whole site. The Lowes[1] press release is particularly good.
Pretty interesting peek behind the curtain.
1: https://openai.com/index/lowes/
It's especially funny if you play the audio MP3 file and the video presentation at the same time - the "Object" narration almost lines up with the products being presented.
It's like a hilariously vague version of Pictionary.
Thank you! I have preserved this precious cultural artifact:
https://archive.org/details/object-object
http://donhopkins.com/home/movies/ObjectObject.mp4
Original mp4 files available for remixing:
http://donhopkins.com/home/movies/ObjectObject.zip
>Pretty interesting peek behind the curtain.
It's objects all the way down!
Sounds like someone had an off-by-one error in their array slicing and passed the wrong thing into the text-to-speech!
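For anyone who hasn't hit this bug before, here's a minimal sketch (all names hypothetical, not OpenAI's actual code) of how objects coerced into a string produce the "[object Object]" narration:

```typescript
// Hypothetical sketch of the "[object Object]" bug: the paragraph
// objects themselves get interpolated into a string instead of their
// text fields, and the coerced result is what reaches text-to-speech.
type Paragraph = { text: string };

const paragraphs: Paragraph[] = [
  { text: "ChatGPT Pulse delivers proactive daily updates." },
  { text: "Soon, Pulse will connect with more of your apps." },
];

// Correct: join the text fields before handing them to the TTS engine.
const narration = paragraphs.map(p => p.text).join(" ");

// Buggy: the array is coerced to a string, so each element becomes
// "[object Object]" -- which a TTS engine reads as "object, object...".
const buggyNarration = `${paragraphs}`;

console.log(buggyNarration); // "[object Object],[object Object]"
```

The default `Object.prototype.toString` is what yields `[object Object]`, which a TTS voice then dutifully pronounces.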
The one modern thing that didn't have a feed, and (in the best case) just did what you asked.
Next week: ChatGPT Reels.
“ This is the first step toward a more useful ChatGPT that proactively brings you…”
Ads.
They’re running out of ideas.
Yeah I was thinking, what problem does this solve?
Ad delivery
I was thinking that too, and eventually figured their servers run idle at night, with low activity.
Here's the announcement from Altman: https://x.com/sama/status/1971297661748953263
Quoted from that tweet:
> It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates.
At this point in time, I'd say: bye privacy, see you!
> This is the first step toward a more useful ChatGPT that proactively brings you what you need, helping you make more progress so you can get back to your life.
“Don’t burden yourself with the little details that constitute your life, like deciding how to interact with people. Let us do that. Get back to what you like best: e.g. video games.”
CONSUME.
At what point do you give up thinking and just let LLMs make all your decisions of where to eat, what gifts to buy and where to go on holiday? all of which are going to be biased.
"AI" is a $100B business, which idiot tech leaders who convinced themselves they were visionaries when interest rates were historically low have convinced themselves will save them from their stagnating growth.
It's really cool. The coding tools are neat, they can somewhat reliably write pain in the ass boilerplate and only slightly fuck it up. I don't think they have a place beyond that in a professional setting (nor do I think junior engineers should be allowed to use them--my productivity has been destroyed by having to review their 2000 line opuses of trash code) but it's so cool to be able to spin up a hobby project in some language I don't know like Swift or React and get to a point where I can learn the ins and outs of the ecosystem. ChatGPT can explain stuff to me that I can't find experts to talk to about.
That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment. But since NVIDIA is effectively taking all the fake hype money and taking it out of one pocket and putting it in another, maybe the whole Ponzi scheme will stay afloat for a while.
> That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment
What sucks is that there's probably some innovation left in figuring out how to make these monstrosities more efficient and how to ship a "good enough" model that can do a few key tasks (jettisoning the fully autonomous coding agent stuff) on some arbitrary laptop without having to jump through a bunch of hoops. The problem is nobody in the industry is incentivized to do this, because the second it happens, all their revenue goes to 0. It's the final boss of the everything-is-a-subscription business model.
I've been saying this since I started using "AI" earlier this year: If you're a programmer, it's a glorified manual, and at that, it's wonderful. But beyond asking for cheat sheets on specific function signatures, it's pretty much useless.
How do I save comments in HN? This sums up everything I feel. Beautiful.
The handful of other commenters that brought it up are right: this is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health. But:
I personally could see myself getting something like "Hey, you were studying up on SQL the other day, would you like to do a review, or perhaps move on to a lesson about Django?"
Or take AI-assisted "therapy"/skills training, not that I'd particularly endorse that at this time, as another example: Having the 'bot "follow up" on its own initiative would certainly aid people who struggle with consistency.
I don't know if this is a saying in English as well: "Television makes the dumb dumber and the smart smarter." LLMs are shaping up to be yet another obvious case of the same principle.
> This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health.
> I personally could see myself getting something like [...] AI-assisted "therapy"
???
I edited the post to make it more clear: I could see myself having ChatGPT prompt me about the SQL stuff, and the "therapy" (basic DBT or CBT stuff is not too complicated to coach someone through, and can make a real difference, from what I gather) would be another way that I could see the technology being useful, not necessarily one I would engage with.
Jamie Zawinski said that every program expands until it can read email. Similarly, every tech company seems to expand until it has recapitulated the Facebook TL.
I just thought that almost all existing LLMs are already able to do this with the following setup: using an alias "@you may speak now," it should create a prompt like this: "Given the following questions {randomly sampled or all questions the user asked before are inserted here}, start a dialog as a friend/coach who knows something about these interests and may encourage them toward something new or enlightening."
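As a rough sketch of that setup (the function name, prompt wording, and sampling choices are just illustrative, not any real API):

```python
import random

def build_pulse_prompt(prior_questions, sample_size=5, seed=None):
    """Sample some of the user's prior questions and wrap them in the
    'friend/coach' prompt described above. Hypothetical helper."""
    rng = random.Random(seed)
    k = min(sample_size, len(prior_questions))
    topics = "; ".join(rng.sample(prior_questions, k))
    return (
        f"Given the following questions the user asked before: {topics}. "
        "Start a dialog as a friend/coach who knows something about these "
        "interests and may encourage them toward something new or enlightening."
    )
```

Any chat frontend that lets you template a system prompt could wire something like this to an alias.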
Anyone try listening and just hear "Object object...object object..."
Or more likely: `[object Object]`
The low quality of openai customer-facing products keeps reminding me we won't be replaced by AI anytime soon. They have unlimited access to the most powerful model and still can't make good software.
That is objectionable content!
https://www.youtube.com/watch?v=GCSGkogquwo
I see OpenAI is entering the phase of building peripheral products no one asked for. Another widget here and there. In my experience, this usually happens when a company stops innovating. Time for OpenAI to spend 30 years being a trillion dollar company and delivering 0 innovations, akin to Google.
Last mile delivery of foundational models is part of innovating. Innovation didn't stop when transistors were invented - innovation was bringing this technology to the masses in the form of Facebook, Google Search, Maps and so on.
But transistor designers didn't pivot away from designing transistors. They left Facebook and all the other stuff to others and kept designing better transistors.
Human to robot servant: "Do not speak unless spoken to, machine!"
sounds nice I guess, but reactiveness over proactiveness wasn't the pain point I've had with these LLM tools.
In the past, rich people had horses while ordinary people walked. Today many ordinary people can afford a car. Can afford tasty food every day. Can afford a sizeable living place. Can afford to wash twice a day with hot water. That's an incredible life by medieval standards. Even kings didn't have everything we take for granted now.
However some things are not available to us.
One of those things is a personal assistant. Today, rich people can offload their daily burdens to personal assistants. That's a luxury service. I think AI will bring us a future where everyone has access to a personal assistant, significantly reducing time spent on trivial, un-fun tasks. I think this is great, and I'm eager to live in that future. The direction of ChatGPT Pulse looks like that.
Another thing we don't have cheap access to is human servants. Obviously that won't happen in the foreseeable future, but humanoid robots might prove even better replacements.
I'm immediately thinking of all the ways this could potentially affect people in negative ways.
- People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.
- ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.
- When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.
For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to infer all these traits to something that says stuff and grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector to cause humans to respond relationally to a machine.
The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally is at least non-zero) until a person has personally verified it, including verifying sources if needed (machine-driven validation can also count -- running a test suite, etc., depending on how good it is). That can be hard when our brains naturally value stuff more or less based on context (what or who created it, etc.), and when it's presented to us by what sounds like a person, with commentary.

"Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface some helpful starting points for further research, trusting what it says enough to do what it suggests can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will be able to.
Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.
I agree wholeheartedly. Unfortunately I think you and I are part of maybe 5%-10% of the population that would value truth and reality over what's most convenient, available, pleasant, and self-affirming. Society was already spiraling fast and I don't see any path forward except acceleration into fractured reality.
There's the monetization angle!
A new channel to push recommendations. Pay to have your content pushed straight to people as a personalized recommendation from a trusted source.
Will be interesting if this works out...
This is the path forward.
AI will, in general, give recommendations to humans. Sometimes it will be in response to a direct prompt. Sometimes it will be in response to stimuli it receives about the user's environment (glasses, microphones, gps). Sometimes it will be from scouring the internet given the preferences it has learnt of the user.
There will be more of this, much more. And it is a good thing.
I wish it had the option to make a pulse weekly or even monthly. I generally don't want my AI to be proactive at a personal level despite it being useful at a business level.
My wants are pretty low level. For example, I give it a list of bands and performers and it checks once a week to tell me if any of them have announced tour dates within an hour or two of me.
To be honest, you don't even need AI for something like that. You could just write a script to automate it; it's no more than scrape-and-notify logic.
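For what it's worth, the scrape-and-notify core really is tiny. A rough sketch (the feed URL, the JSON event shape, and the watched-artist set are all made up; a real version would parse whatever listings source you actually use and send a real notification instead of printing):

```python
import json
import urllib.request

WATCHED = {"Band A", "Band B"}  # placeholder artist list

def fetch_events(url):
    """Fetch a JSON list of events shaped like
    {'artist': ..., 'city': ..., 'date': ...} from a hypothetical feed."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def matching_shows(events, watched=WATCHED):
    """Keep only events by artists on the watch list."""
    return [e for e in events if e.get("artist") in watched]

def notify(shows):
    """Stand-in for email/push; just prints each match."""
    for s in shows:
        print(f"{s['artist']} announced {s['city']} on {s['date']}")
```

Cron it weekly and you have the whole feature.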
Bandsintown already does this
they've already had that exact feature for a while, scheduled tasks are available in the settings menu. if you just tell the chat to schedule a task it will also make one automatically.
Why they're working on all the application layer stuff is beyond me, they should just be heads down on making the best models
Because they've hit the ceiling a couple of years ago?
Flavor-of-the-week LLMs sell better than 'rated best vanilla' LLMs
They can probably do both with all the resources they have
They would if it were possible.
Moat
I am pleading with you all. Don't give away your entire identity to this or any other company.
I was wondering how they'd casually veer into social media and leverage their intelligence in a way that connects with the user. Like everyone else ITT, it seems like an incredibly sticky idea that leaves me feeling highly unsettled about individuals building any sense of deep emotions around ChatGPT.
I’m a pro user.. but this just seems like a way to make sure users engage more with the platform. Like how social media apps try to get you addicted and have them always fight for your attention.
Definitely not interested in this.
Was quite unimpressive. In general ChatGPT has been degrading in default quality for months
This has been surprisingly helpful for me. I've been using it for a little while and have enjoyed the morning updates. For many days it has actually been a better Hacker News for me, in that I get insights into technical topics I've been focused on, ranging from Salesforce and npm to Elasticsearch and Ruby. It's even helped me remember to fix a few bugs.
Wasn't this already implemented via google and apple separately?
Wow so much hate in this thread
For me I’m looking for an AI tool that can give me morning news curated to my exact interests, but with all garbage filtered out.
It seems like this is the right direction for such a tool.
Everyone saying “they’re out of ideas” clearly doesn’t understand that they have many irons in the fire simultaneously, with different teams shipping different things.
This feature is a consumer UX layer thing. It in no way slows down the underlying innovation layer. These teams probably don’t even interface much.
ChatGPT app is merely one of the clients of the underlying intelligence effort.
You also have API customers and enterprise customers who also have their own downstream needs which are unique and unrelated to R&D.
Not sure why this is downvoted, but I essentially agree. There are a lot of UX layer products and ideas that are not explored. I keep seeing comments like "AI is cool but the integration is lacking" and so on. Yes, that is true, and that is exactly what this is solving. My take has always been that the models are good enough now and it's time for UX to catch up. There are so many ideas not explored.
Necessary step before making a move into hardware. An object you have to remember to use quickly gets forgotten in favor of your phone.
But a device that reaches out to you reminds you to hook back in.
Man, my startup does this but exclusively for enterprises, where it actually makes sense
It's very hard for me to envision something I would use this for. None of the examples in the post seem like something a real person would do.
Watch out, Meta. OpenAI is going to eat your lunch.
Meta is busy with this: https://news.ycombinator.com/item?id=45379514
Wow, did ChatGPT come up with that feature?
Funny, I pitched a much more useful version of this like two years ago with clear use-cases and value proposition
Holy guacamole. It is amazing all the BS these people are able to create to keep the hype of the language models' super powers.
But well, I guess they have committed hundreds of billions in future spending, so they'd better come up with more stuff to keep the wheels spinning.
Someone at open ai definitely said: Let's connect everything to gpt. That's it. AGI
Great way to sell some of those empty GPU cycles to consumers
I see some pessimism in the comments here but honestly, this kind of product is something that would make me pay for ChatGPT again (I already pay for Claude, Gemini, Cursor, Perplexity, etc.). At the risk of lock-in, a truly useful assistant is something I welcome, and I even find it strange that it didn't appear sooner.
I doubt there would be this level of pessimism if people thought this is a progress toward a truly useful assistant.
Personally it sounds negative value. Maybe a startup that's not doing anything else could iterate on something like this into a killer app, but my expectation that OpenAI can do so is very, very low.
Pessimism is how people now signal their savviness or status. My autistic brain took some time to understand this nuance.
[dead]
Truly useful?
Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.
I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.
Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.
"Now ChatGPT can start the conversation"
By their own definition, its a feature nobody asked for.
Also, this needs a cute/mocking name. How about "vibe living"?
Big tech companies today are fighting over your attention and consumers are the losers.
I hate this feature and I'm sure it will soon be serving up content that is as engaging as the stuff the comes out of the big tech feed algorithms: politically divisive issues, violent and titillating news stories and misinformation.
Contrary to all the other posters, apparently, I think it's probably a good idea for OpenAI to iterate on various different ways to interact with AI to see what people like. Obviously in theory having an AI that knows a lot about what you're up to give you a morning briefing is potentially useful, it's in like every sci-fi movie: a voice starts talking to you in the morning about what's going on that day.
try clicking "Listen to article"
Breaking the request response loop and entering into async territory?
Great!
The examples used?
Stupid. Why would I want AI generated buzzfeed tips style articles. I guess they want to turn chatgpt into yet another infinite scroller
Desperation for new data harvesting methodology is a massive bear signal FYI
Calm down bear we are not even 2% from the all time highs
Since every "AI" company frantically releases new applications, may I suggest OpenAI+ to copy the resounding success of Google+?
Google+ is incidentally a great example of a gigantic money sink driven by optimistic hype.
I'm feeling obliged to rehash a quote from the early days of the Internet, when midi support was added: "If I wanted your web site to make sounds, I'd rub my finger on the screen"
Behind that flippant response lies a core principle. A computer is a tool. It should act on the request of the human using it, not by itself.
Scheduled prompts: Awesome. Daily nag screens to hook up more data sources: Not awesome.
(Also, from a practical POV: So they plan on creating a recommender engine to sell ads and media, I guess. Weehee. More garbage)
so GPT tiktok in nutshell
It seems useless for 95% of users today, but later it can be baked into the hardware Jony Ive designed. So, good luck, I guess?
Let the personal ensloppification begin!
Why?
ChatGPT IV
Episodes from Liberty City?
Technology service technology, rather than technology as a tool with a purpose. What is the purpose of this feature?
This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.
On one hand this can be exciting. Following up with information from my recent deep dive would be cool.
On the other hand, I don't want to it to keep engaging with my most recent conspiracy theory/fringe deep dives.
Product managers live in a bubble of their own.
Absolutely not. No. Hard pass.
Why would I want yet another thing to tell me what I should be paying attention to?
AI doesn't have a pulse. Am I the only one creeped out by personification of tech?
"Pulse" here comes from the newspaper/radio lineage of the word, where it means something along the lines of timely, rhythmic news delivery. Maybe there is reason to be creeped out by journalists from centuries ago personifying their work, but that has little to do with tech.
This is a joke. How are people actually excited or praising a feature that is literally just collecting data for the obvious purpose of building a profile and ultimately showing ads?
How tone deaf does OpenAI have to be to show "Mind if I ask completely randomly about your travel preferences?" in the main announcement of a new feature?
This is idiocracy to the ultimate level. I simply cannot fathom that any commenter that does not have an immediate extremely negative reaction about that "feature" here is anything other than an astroturfer paid by OpenAI.
This feature is literal insanity. If you think this is a good feature, you ARE mentally ill.
I need this bubble to last until 2026 and this is scaring me.
Vesting window?
Yet another category of startups killed by an incumbent
Oh wow this is revolutionary!!
[dead]
[dead]
[flagged]
OpenAI is a trillion dollar company. No doubt.
Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.
I downvoted because this isn’t an interesting comment. It makes a common, unsubstantiated claim and leaves it at that.
> Downvote all you want
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
https://news.ycombinator.com/newsguidelines.html
Welcome to HN. 98% of it is unsubstantiated claims.
Can this be interpreted as anything other than a scheme to charge you for hidden token fees? It sounds like they're asking users to just hand over a blank check to OpenAI to let it use as many tokens as it sees fit?
"ChatGPT can now do asynchronous research on your behalf. Each night, it synthesizes information from your memory, chat history, and direct feedback to learn what’s most relevant to you, then delivers personalized, focused updates the next day."
In what world is this not a huge cry for help from OpenAI? It sounds like they haven't found a monetization strategy that actually covers their costs and now they're just basically asking for the keys to your bank account.
We don't charge per token in chatgpt
No, it isn’t. It makes no sense and I can’t believe you would think this is a strategy they’re pursuing. This is a Pro/Plus account feature, so the users don’t pay anything extra, and they’re planning to make this free for everyone. I very much doubt this feature would generate a lot of traffic anyway - it’s basically one more message to process per day.
OpenAI has clearly been focusing recently on model cost-effectiveness, with the intention of making inference nearly free.
What do you think the weekly limit is on GPT-5-Thinking usage on the $20 plan? Write down a number before looking it up.
If you think that inference at OpenAI is nearly free, then I got a bridge to sell you. Seriously though this is not speculation, if you look at the recent interview with Altman he pretty explicitly states that they underestimated that inference costs would dwarf training costs - and he also stated that the one thing that could bring this house of cards down is if users decide they don’t actually want to pay for these services, and so far, they certainly have not covered costs.
I admit that I didn’t understand the Pro plan feature (I mostly use the API and assumed a similar model) but I think if you assume that this feature will remain free or that its costs won’t be incurred elsewhere, you’re likely ignoring the massive buildouts of data centers to support inference that is happening across the US right now.
Here's a free product enhancement for OpenAI if they're not already doing this:
A todo app that reminds you of stuff. Say "here's the stuff I need to do: dishes, clean cat litter, fold laundry and put it away, move stuff to the dryer then fold that when it's done, etc." Then it asks how long these things take or gives you estimates. Then (here's the feature) it checks in with you at intervals: "hey, it's been 30 minutes, how's it going with the dishes?"
This is basically "executive function coach." Or you could call it NagBot. Either way this would be extremely useful, and it's mostly just timers & push notifications.
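The timers-and-push-notifications core is indeed small. A rough sketch (the halfway check-in heuristic and the `notify` callback are my own assumptions; a real version would hook into desktop or mobile notifications instead of `print`):

```python
import threading

def schedule_checkins(tasks, notify=print, interval_min=30):
    """tasks: list of (name, estimated_minutes), done in order.
    Arms a check-in halfway through each task's estimate, capped at
    interval_min, offset by the tasks scheduled before it.
    Returns the Timer objects; call .start() on each to arm them."""
    timers = []
    elapsed = 0.0
    for name, est in tasks:
        delay_min = elapsed + min(est / 2, interval_min)
        t = threading.Timer(
            delay_min * 60,
            notify,
            args=(f"It's been a bit -- how's '{name}' going?",),
        )
        timers.append(t)
        elapsed += est
    return timers
```

From there, "NagBot" is mostly UI and persistence, which is why it makes a good weekend project.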
Humbly I suggest vibecoding this just for yourself. Not building a product - just a simple tool to meet your own needs.
That’s AI: permissionless tool building. It means never needing someone to like your idea enough or build it how they think you’ll use it. You just build it yourself and iterate it.
This will drive the opposite of user engagement.