About a year ago I was looking at Crash Bandicoot timer systems and I found that Crash 3 has a constantly incrementing int32. It only resets if you die.
Left for 2.26 years, it will overflow.
When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it: https://youtu.be/f7ZzoyVLu58
jsheard9 days ago
There's a weapon in Final Fantasy 9 which can only be obtained by reaching a lategame area in less than 12 hours of play time, or 10 hours on the PAL version due to an oversight. Alternatively you can just leave the game running for two years until the timer wraps around. Slow and steady wins the race.
"The perfect racing car crosses the finish line first and subsequently falls into its component parts."
Games fit this philosophy, compared to many other pieces of software that are expected to be long-lived, receive a lot of maintenance and changes, and evolve.
WJW8 days ago
The Porsche quote reflects a wider design philosophy that says "Ideally, all components of a system last as long as the design life of the entire system and there should be no component that lives significantly longer. If there is such a component, it has been overengineered and thus the system will be more expensive to the end consumer than it needs to be.". It kinda skips over maintenance, but overall most people find it unobjectionable when stated like this.
But plenty of people will find complaints when they try to drive their car beyond its design specs and more or less everything starts failing at once.
creaturemachine8 days ago
Porsche was talking about racing, where the primary focus is reaching the finish line faster than anyone else, and over-engineering can easily get in the way of that goal. Back in the real world, no race team would agree that their cars should disintegrate after one race.
AzN1337c0d3r8 days ago
> Back in the real world, no race team would agree that their cars should disintegrate after one race.
Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?
jperras8 days ago
If you go back further than that, teams used to destroy entire engines for a single qualifying.
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
creaturemachine8 days ago
Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year that specified engines must last the entire race weekend and introduced penalties for swaps.
kllrnohj8 days ago
Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.
You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
pfdietz8 days ago
Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.
ortusdux8 days ago
Anyone can build a bridge, but it takes an engineer to barely build a bridge.
mikepurvis8 days ago
Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that's already been standing for 100+ years, like the Brooklyn Bridge, will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.
ortusdux8 days ago
There was an aluminum extrusion company that falsified test records for years. They got away with it because what's a few % when your customer's safety factor is 2. Once they got into weight-sensitive aerospace applications, where sometimes the factor is 1.2, rockets started blowing up on the launch pad.
This is a great quote for the topic, but the quote is normally about a bridge that barely stands.
I'm chuckling at the thought of barely building something. (All in good fun, thank you.)
woliveirajr7 days ago
In my county, a company asked the mayor if it was possible to improve some bridge, because they needed to carry 40t and the bridge had a sign saying it would only allow up to 32t. Their proposal was to do the construction and get tax rebates.
After two weeks, the Infrastructure department changed the sign allowing up to 45t.
signalToNose8 days ago
Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to just sell stuff that immediately breaks down as soon as it's sold. It has then fulfilled its purpose from their point of view.
delichon8 days ago
I run sous vide cookers 24/7, and they uniformly break within 90 days or less. But the makers don't like to admit the limited duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.
rlander8 days ago
That’s not a small cycle count for a normal household.
90 × 24 = 2,160 total hours.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week.
That works out to roughly 3.5 years of usable machine time for the average person.
Not bad at all.
cestith8 days ago
A friend of mine gets new headphones/headsets every six to eighteen months, and hasn’t bought a pair entirely out of pocket in years. For him it’s all down to buying the Microcenter protection plan every time they’re replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn’t even care about the manufacturer’s warranty anymore.
Personally, most of my headphones I look for metal mechanical connections instead of plastic and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.
hnuser1234568 days ago
Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.
account428 days ago
Well from an evil business perspective their options are either
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
What do you sous vide 24/7? It sounds like it would be a breeding ground for bacteria. Also curious whether the bags and other components break as well.
doubled1128 days ago
When the design spec seems to be a 3 year long lease I can see why people get bothered.
aleks2248 days ago
There's a quote in the bible that says something similar:
"Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”
(John 12:24)
lelandfe9 days ago
So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked.
(I also never managed to get it)
jonhohle9 days ago
I’m going to wager that the cutscenes are all XA audio/video DMA’d from the disc. Opening the disc kills the DMA and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption on reading doesn’t hurt unless you need to time it to avoid an error reading the file for the next section of gameplay.
ad1338 days ago
This is significantly better handling than the previous game (Final Fantasy VIII). My disc 1 (it had four discs) got scratched over time (I was a child, after all), and the failure mode was just to crash, thus the game was unplayable. The game had a lot of cutscenes.
Insanity8 days ago
That’s a solid guess. And if that’s the case, that’s actually pretty good error handling!
Jare8 days ago
I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.
p1necone9 days ago
> Never knew why that worked.
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
jbreckmckye9 days ago
Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated
reactordev9 days ago
Longer vsync pauses but larger frame time deltas so it’s basically the same speed of play. The only thing that was even noticeable was the UI lag.
fredoralive8 days ago
Erm, no. Like lots of games of the era, quite a lot of stuff is tied to the frame rate, so the 50 Hz region game just runs slower than the 60 Hz one, as next to nobody bothered to adjust for it. The clock for the hidden weapon does run at the same rate for both, unfortunately, hence it being harder to get in 50 Hz regions.
reactordev8 days ago
Incorrect. I’m looking at the source code. It’s not perfect but it’s not just “slowed down to 50hz” like people claim.
jbreckmckye8 days ago
When you say looking at the source code, what do you mean here?
AFAIK the source for FF9 PSX (and all the PSX FF games) has been lost, as Square just used short-term archives
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)
In terms of timers, the BIOS does grant you access to root timers, but these are largely modulated by a hardware oscillator
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)
mungoman28 days ago
Wouldn't a slower tick make it easier, as you get more wall time to do the same challenge?
fredoralive8 days ago
No? Wall time (that the challenge runs on) is unchanged, game time (Vsync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.
BolexNOLA8 days ago
Lord have mercy fandom really has become unbearable with the ads and pop ups.
coldpie8 days ago
Install an ad blocker.
BolexNOLA8 days ago
I opened this on an iPhone which has fewer adblock options. Desktop is better locked down.
Regardless I can still complain about how intrusive the ads are.
coldpie8 days ago
There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.
JustExAWS8 days ago
I just opened this on my iPhone with 1Blocker installed. I saw no ads. It's been around since iOS 8.
BolexNOLA8 days ago
Never heard of it, appreciate the recc!
Edit: ah only works on safari
account428 days ago
Don't accept devices that limit your ad blocker options.
BolexNOLA8 days ago
Does this discussion strike you as one where I’m deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on fandom?
These types of comments are always very unhelpful.
elcritch8 days ago
We should rally together to force game companies to use 32 bit timers rather than 64bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)
debo_9 days ago
So that's why it's called Excalibur 2!
stevage9 days ago
You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)
xhrpost8 days ago
Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow but at least you'd get twice the time, no?
jbreckmckye8 days ago
Some C programmers take the view that unsigneds have too many disadvantages: undefined behaviour for overflows, and weird type promotion rules. So, they try and avoid uints.
tekne8 days ago
Umm, signed integers are UB on overflow; unsigned is always fine.
jbreckmckye7 days ago
Sorry, you are correct. (Though I believe unsigned wrap-around was already defined in ANSI C, well before C99.)
Anyway, in answer to the question, I would guess the reason was because of signed / unsigned type promotion.
aidenn08 days ago
If you get to right before where you need to be (taking as long as you want), then wait until overflow, you still have 12h to do the last tiny part if it's unsigned.
jonhohle9 days ago
I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what's the point of even testing it?
Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?
Gamemaster13798 days ago
Can be more than timers too. There's a funny one in Paper Mario where a block technically can be hit so many times it'll reset and award items again. Hit enough times it'll eventually crash. Of course it'd take around 30 years for the first rollover and 400 or so for the crash.
https://n64squid.com/paper-mario-reward-block-glitch/
rybosome8 days ago
It’s a totally reasonable choice in that context.
I wonder if any sense that this is criticism (or any actual criticism) comes from SaaS implementers who have it so deeply ingrained that "haha, what if the users of this software did this really extreme thing" reads more like "oh shit, what if the users of this software did this really extreme thing".
When I worked on Google Cloud Storage, I once shipped a feature that briefly broke single-shot uploads of more than 2GB. I didn't consider this use case because it was so absurd: anything larger than 2MB is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson stayed with me: whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don't want to support it indefinitely (which we tried to do; it was hard).
Anyway, in this case that level of paranoia would make no sense. The programmers of that age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.
technion8 days ago
Let's say you're pedantic with code. I've been trying to be lately: clippy has an overflow lint for Rust that I try to use.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
account428 days ago
There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) so that you can represent the longest times / largest sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.
jraph8 days ago
Isn't this common in the computer game scene? Shouldn't you assume your game will be disassembled, deconstructed, reverse engineered?
Although for old games released before the internet was widespread in the general population, it might not have been this obvious.
sim7c008 days ago
As long as it doesn't lead to online cheats, having such code is fine. If someone wants to reverse the game, find an obscure, almost untriggerable bug, and then trigger it or play with it, so be it. A 2.6-year game session is crazy if it's not a server, and if it is a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up without restarts or anything (updates?).
Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
lstodd8 days ago
> if its a server, thats still really crazy even for some open-world open-ended game... its a long time to keep a server up w/o restarts or anything (updates?).
Pretty much doable even without resorting to VM migrations or ksplice. My last one had uptime in the 1700s (days). Basically I leased it, put Debian on it, and that was that until I didn't need it anymore.
lentil_soup8 days ago
They're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around, but I doubt anyone will still be around to see it happen :|
account428 days ago
For some games the timer is stored in save files, so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.
ThrowawayTestr8 days ago
Great video, just subscribed
teeray8 days ago
The true Time Twister unlocked
Insanity8 days ago
Literally unplayable, someone should fix that.
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
bitwize8 days ago
Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.
kodarna8 days ago
They own the past of PC gaming, as well as Call of Duty but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.
account428 days ago
So in other words they own the part of PC gaming that's actually good.
jama2117 days ago
You’re saying the Witcher 3 and games like it are bad?
Novosell8 days ago
They own Minecraft as well.
nurettin8 days ago
> Microsoft pretty much owns most of PC gaming.
So valve next?
Lightkey8 days ago
They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company, and Gabe Newell, as a former manager at Microsoft, has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft.
simoncion8 days ago
> Valve is betting everything on Linux right now...
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.
I'm really gladdened by the effort put in to making this work.
[0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.
Insanity8 days ago
I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been.
And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.
Spoom8 days ago
Stating my bias up front, I've been using Linux since Windows Vista, and I'm a fan. That said, I have experienced the same things you did whenever I needed to run Wine for... well, anything. It was clunky as hell.
You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version).
Speed also is greatly improved from previous solutions. Games played through Proton are often very close in terms of performance to playing them natively.
jerf8 days ago
It has come lightyears.
ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile
And I find if anything it tends to the conservative. I've encountered a few things where it was overoptimistic, but it's outweighed by the stuff that was supported even better than ProtonDB said.
In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.
lukan8 days ago
"Valve is betting everything on Linux right now"
Not everything, but they do invest in it.
account428 days ago
All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them.
And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.
tomwojcik8 days ago
As long as Gabe is alive, no way.
HeckFeck8 days ago
We must find a way to extend his life indefinitely.
account428 days ago
*in control of Valve
Old age can make him give that up before death.
jjbinx0078 days ago
This caters for people who prefer the classic Doom style of gameplay in FPS games:
Ahh yes, I'm quite happy that a few years ago this has become a trend!
jama2118 days ago
Same. Something about the metroidvania design with the home hub of the later ones didn’t give the same feeling. It should be run, kill, find secrets, end, next level.
jeffwask8 days ago
I just finished Robocop: Rogue City, and it was exactly this: a linear, level-by-level shooter that felt like a pure Robocop power fantasy movie. I played new game plus, it was so much fun, and I never do that.
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula and they are playable in a dozen or so hours. Tight, compact, linear, fun story and game play... No MTX or always online BS and they don't waste my time with busy work.
jama2117 days ago
Ooh, I’ll have to give this a try, thank you!
Insanity8 days ago
This is exactly how I want my FPS games to be. Just linear, run & gun.
TBH, I can even do without weapon upgrades or any "RPG" style elements.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
lapetitejort8 days ago
> find secrets
I'll be honest, I don't like this part. I'm a rabid collector. If the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault of course
bombela8 days ago
The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.
Insanity8 days ago
I was quite excited for it, despite not enjoying Eternal as much. But after about ~2 hours of playing it, I lost interest. I'm happy you're enjoying it, sadly it didn't click for me.
Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.
jama2117 days ago
Huh, thanks! I’ll give it a try
xmonkee8 days ago
Same. And love those brutality mods.
shpongled8 days ago
2016 remains one of the greatest single player FPS games I've played (Titanfall 2 is the other)
pizza2348 days ago
I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3.
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
billyp-rva8 days ago
The enemy cap all but forces the arena style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.
spjt8 days ago
Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
shultays8 days ago
Does that hardware trap overflows or something?
I had read an article about how DOOM's engine works and noticed how a variable tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value.
Doesn't sound like something that would crash; I wonder what the actual crash was
Sharlin8 days ago
Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic between platforms and compilers, so probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to eg. stack corruption. C after all happily goes to UB land if, for example, some execution path doesn’t return a value in a function that’s supposed to return a value.
account428 days ago
Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.
Sharlin8 days ago
That's what I said? It's easy to come up with scenarios where signed overflow breaks a program in a crashy way if the optimizer, for example, optimizes out a check for said overflow because it's allowed to assume that `++i < 0` can never happen if i is initialized to >= 0. That's something that very real optimizers take advantage of in the very real world, not just on paper. For example, GCC needs -fwrapv to give you guaranteed wrapping behavior (there's actually -ftrapv, which raises a SIGFPE on overflow – that's likely the easiest way to cause this crash!)
But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.
phkahler8 days ago
That doesn't make sense. If new < old can't happen, there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK
jraph8 days ago
Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under that assumption, but might be doing something very bad that leads to a crash if new < old.
An actual analysis would be needed to understand the actual cause of the crash.
Sharlin8 days ago
Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven’t seen the code so this is just speculation, but there are plenty of ways how an unexpected, "can never happen" comparison result causes immediate UB because there’s no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well…) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
int foo[5] = { … }
foo[i % 5] = bar;
Everything is fine as long as i isn’t negative. But if it is… (note that negative % positive == negative in C)
account428 days ago
Dividing by a difference that is suddenly zero is another possibility.
ogurechny8 days ago
The error states that the window can't be created. It might be the problem with parameters to the window creation function (that should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because cleanup time overflows?
Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
There are various 32 bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.
Would be hilarious if it really is such an easy mistake.
jraph8 days ago
Notably, DOOM crashed before Windows CE.
chatmasta8 days ago
Seriously… I’m most impressed that this PDA kept an application running for 2.5 years. I’d be shocked if any modern hardware could do this, even while disconnected from the Internet.
jraph8 days ago
I'd be more impressed by current software not crashing for 2.5 years than hardware, but that might be because I'm a software developer, not a hardware developer :-)
wingi8 days ago
Yes, great achievement!
JoshGlazebrook9 days ago
2038 is going to be a fun year.
kevin_thibedeau9 days ago
Everybody is sleeping on 2036 for NTP. That's when the fun begins.
wiredpancake8 days ago
Assuming a correct implementation of the NTP spec and its era handling, NTP should be resistant to this failure in 2036.
The problem is that many microcontrollers and non-serviceable or cheaply designed computers/devices/machines might not follow the standard and are therefore susceptible, although your iPhone, laptop and fridge should all be fine.
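For context, the 2036 date falls out of the NTP timestamp format: 32 bits of seconds counted from 1900-01-01, with RFC 5905 defining era numbers so compliant implementations roll into era 1 cleanly. An illustrative check:

```c
#include <assert.h>

/* An NTP era spans 2^32 seconds. Era 0 started on 1900-01-01, so it
 * ends about 136.1 years later, in early February 2036. */
static double ntp_era_years(void)
{
    return 4294967296.0 / (365.25 * 86400.0);  /* seconds per era / per Julian year */
}
```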
jonhohle9 days ago
That seems much closer than it did in y2k.
aaronbrethorst8 days ago
[ 25 ] Now [ 13 ]
yep
cestith8 days ago
You have 13 years to upgrade to 64-bit ints or switch to a long long for time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
account428 days ago
Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.
cestith8 days ago
Most 32-bit games written for some form of Unix will use the system time_t if they care about time. The ones written properly, anyway. Modern Unix systems have a 64-bit time_t, even on 32-bit hardware and OS. If it’s on some other OS and uses the Unix epoch on a signed 32-bit integer that’s another design flaw.
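A minimal sketch of the boundary itself (assuming a platform where time_t is 64-bit, as on modern Unix):

```c
#include <assert.h>
#include <stdint.h>
#include <time.h>

/* A signed 32-bit time_t runs out INT32_MAX seconds after the epoch:
 * 2038-01-19 03:14:07 UTC. With a 64-bit time_t the same instant is
 * perfectly representable and gmtime handles it fine. */
static int y2038_boundary_year(void)
{
    time_t t = (time_t)INT32_MAX;     /* 2147483647 seconds */
    struct tm *utc = gmtime(&t);
    return utc ? utc->tm_year + 1900 : -1;
}
```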
chatmasta8 days ago
You’ve got 13 years to update unless any of your code includes dates in the future. Just stay away from anything related to mortgages, insurance policies, eight year PhD programs, retirement accounts…
cestith7 days ago
If you’re managing mortgages or retirement accounts on systems that weren’t ready for 2038 by 2008 you were already missing the biggest bucket of the market.
pjc508 days ago
Fixing that is my retirement plan.
Zobat8 days ago
This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed the five or so times yesterday we had to sit and wait to check the error handling after a 30 second timeout in the system I work on.
jeffrallen8 days ago
This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack.
Maybe I need my morning coffee. :)
minki_the_avali8 days ago
I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3
ranger_danger9 days ago
Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.
unixhero9 days ago
Yes. I think it seems like it was the OS that overflowed, and not Doom in this case.
nomel9 days ago
It's also running on very old hardware, potentially with some electrolytic capacitors that have dried up. And, there's always the possibility that it's a gamma ray [1]!
To me, that error message was caused by some panic, after which the OS began gracefully shutting down the application (in this case Doom), which would not have been done by the program itself. Therefore I conclude it was the OS.
I am not an OS developer, so I take my own conclusion with a grain of salt.
jama2117 days ago
Did you read the article? They specifically said it was a variable in the game engine code that causes the overflow. A program crashing causes the OS to show the error, but the bug that caused the crash was clearly in the game code itself.
Any OS this game engine ran on would experience this crash.
0cf8612b2e1e9 days ago
I am going to need to see this replicated before I can believe.
piker8 days ago
Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.
ustad8 days ago
Was this specific to the PDA port or the core doom code?
@ID_AA_Carmack Are you going to write a patch to fix this?
bombcar7 days ago
Looks like top security high damage CVE to me!
cestith8 days ago
Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and was for uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 42 days of uptime, because apparently nobody at Microsoft tested their own server OS to run for that long.
This was definitely NT. It was the IIS server at an ISP. It might have been the same timer, and it might've been 49 days instead of 42. It was in the forties, and 42 sticks in my mind pretty easily. It may have been basically the same bug.
That, or the Reddit poster and I have the same wrong memory of the bug. I do know my boss at the time made us make the scheduled task to reboot because he understood it at the time to happen on NT 4.
minki_the_avali6 days ago
> about 42 days
About 42 sounds a bit too low. If this really was an overflow of a 32-bit millisecond timer, it would have to be around 49 days.
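The 49-day figure matches a 32-bit millisecond counter (GetTickCount on Windows is the classic example); a 16-bit timer would wrap within minutes at any plausible rate. The arithmetic:

```c
#include <assert.h>

/* 2^32 milliseconds is about 49.7 days -- the classic Windows NT
 * uptime-wrap period for a 32-bit millisecond tick counter. */
static double ms32_wrap_days(void)
{
    return 4294967296.0 / 1000.0 / 86400.0;  /* ms -> seconds -> days */
}
```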
glitchc8 days ago
I love the post, but your blurry text is hurting my eyes. It looks like it's intentionally blurry, but I can't figure out why. This can't be a holdover from older systems; they had razor-sharp text rendering on CRTs.
minki_the_avali6 days ago
It's not supposed to be blurry; you may have a configuration error in your browser's font renderer's pixel order (like using BGR on an RGB screen). I'd recommend testing with font smoothing turned off and seeing if it persists.
jraph8 days ago
Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)
qiine8 days ago
In games I worked on, I use time to pan textures for animated FX.
After a few hours, precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.
Yet this keeps bothering me...
serf9 days ago
The easy way to e-Nostradamus predictions:
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
prmoustache8 days ago
I had an iPaq for a while and I don't remember seeing OS/hardware crashes.
otikik8 days ago
Quick! John Carmack needs to be brought into this immediately.
patchtopic8 days ago
I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?
DeathArrow8 days ago
It's good it didn't take a billion years to overflow. That would have been quite a long wait.
johnjames878 days ago
Literally unplayable
casey28 days ago
Has this ever come up in a TAS of custom levels?
EbNar8 days ago
Love the look of that board :-)
kwertyoowiyop8 days ago
CNR. Please attach video.
moomin8 days ago
Literally unplayable.
ZsoltT8 days ago
glitchless?
sunrunner9 days ago
Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.
antsar9 days ago
It takes serious hardware investment [0] to pull that off.
After the recent hacker news "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future as 32 MB of RAM seem to be enough to run it
commenting from dillo running on a disposable vape which boots desktop linux using a ram expansion
stevage9 days ago
It's not loading for me at all.
9dev8 days ago
We recently moved to Linear and couldn’t be happier, can recommend!
hughes9 days ago
Is this a joke because the site isn't loading at all?
sunrunner9 days ago
At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)
SpicyUme9 days ago
Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?
minki_the_avali8 days ago
You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high amount of simultaneous connections. It has recovered now though
Insanity8 days ago
I’m guessing HN hug of death. Probably smarter than just auto scaling to handle any surge traffic and then get swamped by crawlers & higher bills.
If you go back further than that, teams used to destroy entire engines for a single qualifying.
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year that specified engines must last the entire race weekend and introduced penalties for swaps.
Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.
You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.
Anyone can build a bridge, but it takes an engineer to barely build a bridge.
Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that has already been standing for 100+ years, like the Brooklyn Bridge, will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.
There was an aluminum extrusion company that falsified test records for years. They got away with it because what's a few % when your customer's safety factor is 2. Once they got into weight-sensitive aerospace applications, where sometimes the factor is 1.2, rockets started blowing up on the launch pad.
https://www.justice.gov/archives/opa/pr/aluminum-extrusion-m...
This is a great quote for the topic, but the quote is normally about a bridge that barely stands.
I'm chuckling at the thought of barely building something. (All in good fun, thank you.)
In my county, a company asked the mayor if it was possible to improve some bridge, because they needed to carry 40t and the bridge had a sign limiting it to 32t. Their proposal was to do the construction and get tax rebates.
After two weeks, the Infrastructure department changed the sign allowing up to 45t.
Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to just sell stuff that breaks down as soon as it's sold. It has then fulfilled its purpose from their point of view.
I run sous vide cookers 24*7, and they uniformly break within 90 days or less. But the makers don't like to admit their limited duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.
That’s not a small cycle count for a normal household. 90 × 24 = 2,160 total hours.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly three and a half years of usable machine time for the average person (2,160 / 12 = 180 weeks).
Not bad at all.
A friend of mine gets new headphones/headsets every six to eighteen months, and hasn’t bought a pair entirely out of pocket in years. For him it’s all down to buying the Microcenter protection plan every time they’re replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn’t even care about the manufacturer’s warranty anymore.
Personally, most of my headphones I look for metal mechanical connections instead of plastic and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.
Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.
Well from an evil business perspective their options are either
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
Try a Breville PolyScience... https://www.breville.com/en-us/product/csv750
Or if you want something even beefier: https://sammic.com/en/smartvide-xl
What do you sous vide 24*7? It sounds like it would be party grounds for bacteria. Also curious if the bags and other components break as well.
When the design spec seems to be a 3 year long lease I can see why people get bothered.
There's a quote in the bible that says something similar:
"Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”
(John 12:24)
So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked.
(I also never managed to get it)
I’m going to wager that the cutscenes are all XA audio/video DMA’d from the disc. Opening the disc kills the DMA and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption on reading doesn’t hurt unless you need to time it to avoid an error reading the file for the next section of gameplay.
This is significantly better handling than in the previous game (Final Fantasy VIII). My disc 1 (it had four discs) got scratched over time (I was a child after all), and the failure mode was just to crash, making the game unplayable. The game had a lot of cutscenes.
That’s a solid guess. And if that’s the case, that’s actually pretty good error handling!
I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.
> Never knew why that worked.
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated
Longer vsync pauses but larger frame time deltas so it’s basically the same speed of play. The only thing that was even noticeable was the UI lag.
Erm, no. Like lots of games of the era, quite a lot of stuff is tied to the frame rate, so the 50Hz-region game just runs slower than the 60Hz one, as next to nobody bothered to adjust for it. The clock for the hidden weapon does run at the same rate for both, unfortunately, hence it being harder to get in 50Hz regions.
Incorrect. I’m looking at the source code. It’s not perfect but it’s not just “slowed down to 50hz” like people claim.
When you say looking at the source code, what do you mean here?
AFAIK the source for FF9 PSX (and all the PSX ff games) has been lost as Square just used short term archives
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)
In terms of timers, the bios does grant you access to root timers, but these are largely modulated by a hardware oscillator
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)
Wouldn't a slower tick make it easier as you get more wall time to do the same challenge.
No? Wall time (that the challenge runs on) is unchanged, game time (Vsync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.
Lord have mercy fandom really has become unbearable with the ads and pop ups.
Install an ad blocker.
I opened this on an iPhone which has fewer adblock options. Desktop is better locked down.
Regardless I can still complain about how intrusive the ads are.
There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.
I just opened this my iPhone with 1Blocker installed. I saw no ads. It’s been around since iOS 8
Never heard of it, appreciate the recc!
Edit: ah only works on safari
Don't accept devices that limit your ad blocker options.
Does this discussion strike you as one where I’m deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on fandom?
These types of comments are always very unhelpful.
We should rally together to force game companies to use 32 bit timers rather than 64bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)
So that's why it's called Excalibur 2!
You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)
Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow but at least you'd get twice the time, no?
Some C programmers take the view that unsigneds have too many disadvantages: undefined behaviour for overflows, and weird type promotion rules. So, they try and avoid uints.
Umm, signed integers are UB on overflow; unsigned is always fine.
Sorry, you are correct. Unsigned overflow behaviour has in fact been defined (as wrapping modulo 2^N) since ANSI C, though.
Anyway, in answer to the question, I would guess the reason was because of signed / unsigned type promotion.
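For the record, here is the defined behaviour (a sketch): unsigned arithmetic wraps modulo 2^N, so an unsigned tick counter rolls over to zero without UB, though as noted the promotion rules make mixing it with signed values easy to get wrong.

```c
#include <assert.h>
#include <stdint.h>

/* Unsigned overflow is defined: the result wraps modulo 2^N, so
 * incrementing past UINT32_MAX simply yields 0. Signed overflow, by
 * contrast, is undefined behaviour. */
static uint32_t next_tick(uint32_t t)
{
    return t + 1u;
}
```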
If you get to right before you need to be (taking as long as you want), then wait until overflow, then you still have 12h to do the last tiny part if it's unsigned.
I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what's the point of even testing it?
Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?
Can be more than timers too. There's a funny one in Paper Mario where a block technically can be hit so many times it'll reset and award items again. Hit enough times it'll eventually crash. Of course it'd take around 30 years for the first rollover and 400 or so for the crash. https://n64squid.com/paper-mario-reward-block-glitch/
It’s a totally reasonable choice in that context.
I wonder if any sense that this is criticism (or any actual criticism) is based on implementers of SaaS who have it so deeply ingrained that "haha, what if the users of this software did this really extreme thing" is more like "oh shit, what if the users of this software did this really extreme thing".
When I worked on Google cloud storage, I once shipped a feature that briefly broke single-shot uploads of more than 2 GB. I didn't consider this use case because it was so absurd: anything larger than 2 MB is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson remained to me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don't want to support it indefinitely (which we tried to do; it was hard).
Anyway, in this case that level of paranoia would make no sense. The programmers of this age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.
Let's say you're pedantic with code. I've been trying to be lately; clippy has an overflow lint for Rust that I try to use.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) so that you can represent long enough times / large enough sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.
Isn't this common in the computer game scene? Shouldn't you asume your game will be disassembled, deconstructed, reverse engineered?
Although for old games released before the internet was widespread among the general population, it might not have been this obvious.
As long as it doesn't lead to online cheats, having such code is fine. If someone wants to reverse the game, find an obscure, almost untriggerable bug, and then trigger it or play with it, good for them. A 2.6-year game session is crazy if it's not a server, and if it is a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up without restarts or anything (updates?).
Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
> if its a server, thats still really crazy even for some open-world open-ended game... its a long time to keep a server up w/o restarts or anything (updates?).
Pretty much doable even without resorting to VM migrations or ksplice. My last one had an uptime in the 1700s (days). Basically I leased it, put Debian on it, and that was that until I didn't need it anymore.
They're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around, but I doubt anyone will still be around to see it happen :|
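For a sense of scale (a rough sketch with assumed constants): at 60 fps, a signed 64-bit frame counter outlives the Sun.

```c
#include <assert.h>

/* Years until a signed 64-bit frame counter wraps at a given frame
 * rate. At 60 fps this is roughly 4.87 billion years. */
static double int64_wrap_years(double fps)
{
    return 9223372036854775808.0 / fps / 86400.0 / 365.25;  /* 2^63 frames */
}
```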
For some games the timer is stored in save files, so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.
Great video, just subscribed
The true Time Twister unlocked
Literally unplayable, someone should fix that.
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.
They own the past of PC gaming, as well as Call of Duty but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.
So in other words they own the part of PC gaming that's actually good.
You’re saying the Witcher 3 and games like it are bad?
They own Minecraft as well.
> Microsoft pretty much owns most of PC gaming.
So valve next?
They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company and Gabe Newell as former manager at Microsoft has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft.
> Valve is betting everything on Linux right now...
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.
I'm really gladdened by the effort put in to making this work.
[0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.
I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been.
And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.
Stating my bias up front, I've been using Linux since Windows Vista, and I'm a fan. That said, I have experienced the same things you did whenever I needed to run Wine for... well, anything. It was clunky as hell.
You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version).
Speed also is greatly improved from previous solutions. Games played through Proton are often very close in terms of performance to playing them natively.
It has come lightyears.
ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile
And I find, if anything, it tends to the conservative. I've encountered a few things where it was overoptimistic, but that's outweighed by the stuff that was supported even better than ProtonDB said.
In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.
"Valve is betting everything on Linux right now"
Not everything, but they do invest in it.
All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them.
And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.
As long as Gabe is alive, no way.
We must find a way to extend his life indefinitely.
*in control of Valve
Old age can make him give that up before death.
This caters for people who prefer the classic Doom style of gameplay in FPS games:
https://www.reddit.com/r/boomershooters/
Ahh yes, I'm quite happy that a few years ago this has become a trend!
Same. Something about the metroidvania design with the home hub of the later ones didn’t give the same feeling. It should be run, kill, find secrets, end, next level.
I just finished RoboCop: Rogue City and it was exactly this: a linear, level-by-level shooter that felt like a pure RoboCop power-fantasy movie. I played new game plus, it was so much fun, and I never do that.
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula, and they are playable in a dozen or so hours. Tight, compact, linear, fun story and gameplay... no MTX or always-online BS, and they don't waste my time with busy work.
Ooh, I’ll have to give this a try, thank you!
This is exactly how I want my FPS games to be. Just linear, run & gun. TBH, I can even do without weapon upgrades or any "RPG" style elements.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
> find secrets
I'll be honest, I don't like this part. I'm a rabid collector: if the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault, of course.
The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.
I was quite excited for it, despite not enjoying Eternal as much. But after about 2 hours of playing it, I lost interest. I'm happy you're enjoying it; sadly it didn't click for me.
Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.
Huh, thanks! I’ll give it a try
Same. And love those brutality mods.
2016 remains one of the greatest single-player FPS games I've played (Titanfall 2 is the other)
I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3.
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
The enemy cap all but forces the arena style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.
Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"
Since we've hugged the site to death, have an archive.org link: https://web.archive.org/web/20250916234009/https://lenowo.or...
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
Does that hardware trap overflows or something?
Doesn't sound like something that would crash, I wonder what the actual crash was.

Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic across platforms and compilers, so it's probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to e.g. stack corruption. C, after all, happily goes to UB land if, for example, some execution path doesn't return a value in a function that's supposed to return one.
Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.
That’s what I said? It’s easy to come up with scenarios where signed overflow breaks a program in a crashy way if the optimizer, for example, optimizes out a check for said overflow because it’s allowed to assume that `++i < 0` can never happen if i is initialized to >= 0. That’s something that very real optimizers take advantage of in the very real world, not just on paper. For example, GCC needs -fwrapv to give you guaranteed wrapping behavior (there’s actually -ftrapv, which raises a SIGFPE on overflow – that’s likely the easiest way to cause this crash!)
But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.
That doesn't make sense. If new < old can't happen, there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK
Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under that assumption, but might be doing something very bad that leads to a crash if new < old.
An actual analysis would be needed to understand the actual cause of the crash.
Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven’t seen the code so this is just speculation, but there are plenty of ways an unexpected, "can never happen" comparison result causes immediate UB because there’s no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well…) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
Everything is fine as long as i isn’t negative. But if it is… (note that negative % positive is negative in C). Dividing by a difference that is suddenly zero is another possibility.
The error states that the window can't be created. It might be the problem with parameters to the window creation function (that should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because cleanup time overflows?
Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
There are various 32-bit integer counters in the Doom code. I find it quite strange that the author neither names the specific one, says what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.
Would be hilarious if it really is such an easy mistake.
Notably, DOOM crashed before Windows CE.
Seriously… I’m most impressed that this PDA kept an application running for 2.5 years. I’d be shocked if any modern hardware could do this, even while disconnected from the Internet.
I'd be more impressed by current software not crashing for 2.5 years than hardware, but that might be I'm a software developer, not a hardware developer :-)
Yes, great achievement!
2038 is going to be a fun year.
Everybody is sleeping on 2036 for NTP. That's when the fun begins.
Assuming correct implementation of the NTP spec and adherence to the "eras" functions, NTP should be resistant to this failure in 2036.
The problem is that so many microcontrollers and non-interfaceable or cheaply designed computers/devices/machines might not follow the standard and therefore be susceptible, although your iPhone, laptop, and fridge should all be fine.
That seems much closer than it did in y2k.
You have 13 years to switch to a 64-bit time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.
Most 32-bit games written for some form of Unix will use the system time_t if they care about time. The ones written properly, anyway. Modern Unix systems have a 64-bit time_t, even on 32-bit hardware and OS. If it’s on some other OS and uses the Unix epoch on a signed 32-bit integer that’s another design flaw.
You’ve got 13 years to update unless any of your code includes dates in the future. Just stay away from anything related to mortgages, insurance policies, eight year PhD programs, retirement accounts…
If you’re managing mortgages or retirement accounts on systems that weren’t ready for 2038 by 2008 you were already missing the biggest bucket of the market.
Fixing that is my retirement plan.
This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed the five or so times yesterday we had to sit and wait to check the error handling after a 30 second timeout in the system I work on.
This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack.
Maybe I need my morning coffee. :)
I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3
Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.
Yes. It seems like it was the OS that overflowed, and not Doom, in this case.
It's also running on very old hardware, potentially with some electrolytic capacitors that have dried up. And, there's always the possibility that it's a gamma ray [1]!
[1] https://www.bbc.com/future/article/20221011-how-space-weathe...
They explained it was in the game code though?
To me, that error message was caused by some panic, and then the OS began gracefully shutting down the application (in this case Doom), which would not have been done by the program itself. Therefore I conclude it was the OS.
I am not an OS developer, so I take my own conclusion with a grain of salt.
Did you read the article? They specifically said it was a variable in the game engine code that causes the overflow. A program crashing causes the OS to show the error, but the bug that caused the crash was clearly in the game code itself.
Any OS this game engine ran on would experience this crash.
I am going to need to see this replicated before I can believe.
Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.
Was this specific to the PDA port or the core doom code?
@ID_AA_Carmack Are you going to write a patch to fix this?
Looks like top security high damage CVE to me!
Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and was for uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 42 days of uptime, because apparently nobody at Microsoft tested their own server OS to run for that long.
Are you thinking of https://web.archive.org/web/19990508050925/http://support.mi... ? Or was there a different bug in NT 4?
This was definitely NT. It was the IIS server at an ISP. It might have been the same timer, and it might’ve been 49 days instead of 42. It was in the forties, and 42 sticks in my mind pretty easily. It may have been basically the same bug.
UPDATE: Apparently it was 49.7 days in NT, same timer bug as 9x. Only remember this was a server OS. https://www.reddit.com/r/sysadmin/comments/86jxva/anyone_rem...
That, or the Reddit poster and I have the same wrong memory of the bug. I do know my boss at the time made us make the scheduled task to reboot because he understood it at the time to happen on NT 4.
> about 42 days

About 42 sounds a bit too low; if this really was an overflow of the 32-bit millisecond uptime timer, it would have to be around 49 days.
I love the post, but your blurry text is hurting my eyes. Looks like it's intentionally blurry but I can't figure out why. This can't be a holdover from older systems, they had razor-sharp text rendering on CRTs.
It's not supposed to be blurry; you may have a configuration error in your browser's font renderer's pixel order (like using BGR on an RGB screen). I'd recommend testing with font smoothing turned off and seeing if it persists.
Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)
In games I worked on I use time to pan textures for animated FX.
After a few hours, precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.
Yet this keeps bothering me..
The easy way to e-Nostradamus predictions:
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
I had an iPaq for a while and I don't remember seeing OS/hardware crashes.
Quick! John Carmack needs to be brought into this immediately.
I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?
It's good it didn't take a billion years to overflow. That would have been quite a long wait.
Literally unplayable
Has this ever come up in a TAS of custom levels?
Love the look of that board :-)
CNR. Please attach video.
Literally unplayable.
glitchless?
Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.
It takes serious hardware investment [0] to pull that off.
[0] https://lenowo.org/viewtopic.php?t=28
Meta-Meta-Meta:
Update:
After the recent Hacker News "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future, as 32 MB of RAM seems to be enough to run it.
Source: https://lenowo.org/viewtopic.php?t=28
> Host it on the Fritzbox 7950 instead?
It's a router.. oh my god that made me laugh
Perhaps it's hosted on a disposable vape?
Pretty sure the dead sibling to this comment shouldn't be dead.
Source: https://lenowo.org/viewtopic.php?t=28
badass
Commenting on my Epic from an LG Fridge.
commenting from dillo running on a disposable vape which boots desktop linux using a ram expansion
It's not loading for me at all.
We recently moved to Linear and couldn’t be happier, can recommend!
Is this a joke because the site isn't loading at all?
At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)
Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?
You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high amount of simultaneous connections. It has recovered now though
I’m guessing HN hug of death. Probably smarter than just auto scaling to handle any surge traffic and then get swamped by crawlers & higher bills.
It just supports 1536 concurrent users [0].
Which is fine unless you get to HN frontpage.
[0] https://lenowo.org/viewtopic.php?t=28
"I hope someone got fired for that blunder." /s