The first Go proverb Rob Pike listed in his talk "Go Proverbs" was, "Don't communicate by sharing memory, share memory by communicating."
Go was designed from the beginning to use Tony Hoare's idea of communicating sequential processes for designing concurrent programs.
However, like any professional tool, Go allows you to do the dangerous thing when you absolutely need to, but it's disappointing when people insist on using the dangerous way and then blame it on the language.
This is all very nice as an idea or a mythical background story ("Go was designed entirely around CSP"), but Go is not a language that encourages "sharing by communicating". Yes, Go has channels, but many other languages also have channels, and theirs are less error-prone than Go's[1]. For many concurrent use cases (e.g. caching), sharing memory is far simpler and less error-prone than using channels.
If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
> people insist on using the dangerous way and then blame it on the language
Can you blame them when the dangerous way requires zero syntax while the safe way requires extra syntax? I think it's fine to criticize unsafe defaults, though of course it would not be fair to treat the dangerous way as if it were the only option.
pbohun17 hours ago
They're not using the dangerous way because of syntax, they're using it because they think they're "optimizing" their code. They should write correct code first, measure, and then optimize if necessary.
izend21 hours ago
Meaning similar to Erlang style message passing?
pbohun16 hours ago
Not quite. Erlang uses the Actor model, which delivers messages asynchronously to named processes. In Go, messages are passed between goroutines via channels, which provide a synchronization mechanism (when unbuffered). The ability to synchronize allows one to set up a "rhythm" to computation that the Actor model is explicitly not designed to do. Also, note that a process must know its consumer in the Actor model, but goroutines do not need to know their consumer in the CSP model. Channels can even be passed around to other goroutines!
There's also a nice talk Rob Pike gave that illustrated some very useful concurrency patterns that can be built using the CSP model:
https://www.youtube.com/watch?v=f6kdp27TYZs
asa40015 hours ago
It's true that message sends with Erlang processes do not perform rendezvous synchronization (i.e., sends are nonblocking), but they can be used in a similar way by having process A send a message to process B and then blocking on a reply from process B. This is not the same as unbuffered channel blocking in Go or Clojure, but it's somewhat similar.
For example, in Erlang, `receive` _is_ a blocking operation that you have to attach a timeout to if you want to unblock it.
You're correct about identity/names: the "queue" part of processes (the part that is most analogous to a channel) is their mailbox, which cannot be interacted with except via message sends to a known pid. However, you can again mimic some of the channel-like functionality by sending around pids, as they are first class values, and can be sent, stored, etc.
I agree with all of your points, just adding a little additional color.
ascendantlogic19 hours ago
> "Go is often touted for its ease to write highly concurrent programs. However, it is also mind-boggling how many ways Go happily gives us developers to shoot ourselves in the foot."
In my career I've found that if languages don't allow developers to shoot themselves (and everyone else) in the foot they're labelled toy languages or at the very least "too restrictive". But the moment you're given real power someone pulls the metaphorical trigger, blows their metaphorical foot off and then starts writing blog posts about how dangerous it is.
xnoreq18 hours ago
A good language would point out that what the junior (or in some cases even senior) dev is holding in their hand is in fact a gun, and not a gun disguised and marketed as a nice, easy-to-use toy. This is especially true for Go.
One must keep in mind that devs manage to botch even logic that is directly reflected in the code. I'd rather not give them a non-thread-safe language that provides a two-letter keyword to start a concurrent thread in the same address space. Insane language design.
gethly1 day ago
Every language has an arsenal of footguns. Go is no different. I would say that overall it is not too bad, comparatively.
Of all the listed cases, only the first one is easy to get caught by, even as an experienced developer. There, the IDE and syntax highlighting are of tremendous help for general prevention. The rest is just understanding the language and having some practice.
p2detar1 day ago
I'm still relatively new to Go, but so far I haven't seen closures used that often in production code. Is it really a common practice?
ad_hockey1 day ago
That first example is an unintended closure, since the err at the top level actually has nothing to do with the errs in the goroutines. I have seen that sometimes, although the use of = rather than := normally makes it obvious that something dodgy is going on.
As to whether it's a common pattern, I see closures on WaitGroups or ErrGroups quite often:
    workerCount := 5
    var wg sync.WaitGroup
    wg.Add(workerCount)
    for range workerCount {
        go func() {
            // Do work
            wg.Done()
        }()
    }
    wg.Wait()
You can avoid the closure by making the worker func take a *sync.WaitGroup and passing in &wg, but it doesn't really have any benefit over just using the closure for convenience.
gethly1 day ago
Yes, kind of.
lenkite1 day ago
The first one should be caught by the Go race detector AFAIK. It will warn about the conflicting write accesses to err when both goroutines run.
minus71 day ago
All code is inherently not concurrency-safe unless it says so. The http.Client docs mention concurrent usage is safe, but not modification.
The closure compiler flag trick looks interesting though, will give this a spin on some projects.
reader_100023 hours ago
I agree, any direct field modification should be assumed to be not thread-safe. OTOH, I think Go made a mistake by exporting http.DefaultClient, because it is a pointer and using it causes several problems, including thread safety, and there are libraries that use it. It would have been better if it were http.NewDefaultClient(), which creates a new one every time it is called.
unscaled10 hours ago
I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor does it encourage the use of builder pattern in its standard library (like modern Java does).
If, let's say, http.Client were functionally immutable (with all fields being private), and you had to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
Mawr1 day ago
> The http.Client docs mention concurrent usage is safe, but not modification.
Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.
saturn_vk1 day ago
On the other hand, it should be very obvious for anyone that has experience with concurrency, that changing a field on an object like the author showed can never be safe in a concurrency setting. In any language.
gf0001 day ago
This is not true in the general case. E.g. setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.
It depends on the platform though (e.g. in Java it is guaranteed that there is no tearing [1]).
[1] In OpenJDK. The JVM spec itself only guarantees it for 32-bit primitives and references, but given that 64-bit CPUs can cheaply/freely write a 64-bit value atomically, that's how it's implemented.
kiitos19 hours ago
> setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.
this only works when the language defines a memory model where bools are guaranteed to have atomic reads and writes
so you can't make a claim like "setting a field to true from ... multiple threads ... can be a meaningful operation e.g. if you only care about if ANY of the threads have finished execution"
as that claim only holds when the memory model allows it
which is not true in general, and definitely not true in go
assumptions everywhere!!
zbentley7 hours ago
GP didn’t say “setting a ‘bool’ value to true”, it referred to setting a “field”. Interpreted charitably, this would be done in Go via a type that does support atomic updates, which is totally possible.
gf00018 hours ago
> can never be safe in a concurrency setting. In any language.
Then I give an example of a language where it's safe
I don't get your point. The negation of all is a single example where it doesn't apply.
rowanseymour22 hours ago
I saw that bit about concurrent use of http.Client and immediately panicked about all our code in production hammering away concurrently on a couple of client instances... and then saw the example and thought... why would you think you can do that concurrently??
kiitos20 hours ago
the distinction between "concurrent use" and "concurrent modification" in go is in no way subtle
there is this whole demographic of folks, including the OP author, who seem to believe that they can start writing go programs without reading and understanding the language spec, the memory model, or any core docs, and that if the program compiles and runs that any error is the fault of the language rather than the programmer. this just ain't how it works. you have to understand the thing before you can use the thing. all of the bugs in the code in this blog post are immediately obvious to anyone who has even a basic understanding of the rules of the language. this stuff just isn't interesting.
lenkite1 day ago
> Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.
Which PL do you use then ? Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.
ViewTrick10021 day ago
> Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.
Please explain
zbentley6 hours ago
Not GP but off the top of my head: async cancellation, mutex poisoning, drop+clone+thread interactions, and the entire realm of unsafe (which specific language properties no longer hold in an unsafe block? Is undefined behavior present if there’s a defect in unsafe code, or just incorrect behavior? Both answers are indeed subtle and depend on the specifics of the unsafe block). And auto deref coercion, knowing whether a given piece of code allocates, and “into”/turbofish overload lookup, but those subtleties aren’t really concurrency related.
I like Rust fine, but it’s got plenty of subtle distinctions.
lenkite22 hours ago
Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.
Anyways, the article author lacks basic reading skills, since he forgot to mention that the Go http doc states that only the http client transport is safe for concurrent modification. There is no "subtlety" about it. It directly says so. Concurrent "use" is not Concurrent "modification" in Go. The Go stdlib doc uses this consistently everywhere.
aystatic21 hours ago
> Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.
Where are the “subtle linguistic distinctions”? These types do two completely different things. And neither are even capable of being used in a multithreaded context due to `!Sync` (and `!Send` for Rc and refguards)
lenkite19 hours ago
I did say "runtime borrow checking" ie using them together. Example: `Rc::new(RefCell::new(value));`. This will panic at runtime. Maybe I should have used the phrase "dynamic borrowing" ?
You don't need different threads. I said concurrency not multi-threading. Interleaving tasks within the same thread (in an event loop for example) can cause panics.
aystatic19 hours ago
I understand what you meant (but note that allocating an Rc isn’t necessary; &RefCell would work just fine). I just didn’t see the “subtle linguistic distinctions” - and still don’t… maybe you could point them out for me?
Yeah, it is a crappy example. Ignore me. I just re-read and the rustdoc has no “subtle linguistic distinctions”.
unscaled10 hours ago
Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.
If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.
bilbo-b-baggins1 day ago
4 ways to demonstrate that the author either knows nothing about closures, structs, mutexes, and atomicity OR they just come from a Rust background and made some super convoluted examples to crap on Go.
“A million ways to segfault in C”, and it's just the author assigning NULL to a pointer and reading it, then proclaiming C would be better if it didn't have a NULL value, like Rust.
I’m mad I read that. I want a refund on my time.
landr0id1 day ago
First sentence:
>I have been writing production applications in Go for a few years now. I like some aspects of Go. One aspect I do not like is how easy it is to create data races in Go.
To me it looks like simple, clear examples of potential issues. It's unfortunate to frame that as "crapping on Go"; how are new Go programmers going to learn about the pitfalls if all discussion of them is seen as hostility?
Like, rightly or wrongly, Go chose pervasive mutability and shared memory, and that inevitably comes with drawbacks. Pretending they don't exist doesn't make them go away.
bloppe1 day ago
Go famously summed up their preferred approach to shared state:
> Don't communicate by sharing memory; share memory by communicating.
dontlaugh1 day ago
Which they then failed to follow, especially since goroutines share memory with each other.
yvdriess1 day ago
Go is a bit more of a low level language compared to actor languages where the language enforces that programming model. I think the point of the slogan is that you want to make the shared memory access an implementation detail of your larger system.
bayindirh1 day ago
Threads share the same memory by definition, though. When you isolate these threads from a memory PoV, they become processes.
Moreover, threads are arguably useless without shared memory anyway. A thread is invoked to work on the same data structure with multiple "affectors". Coordination of these affectors is up to you. Atomics, locks, queues... The tools are many.
In fact, processes are just threads which are isolated from each other, and this isolation is enforced by the processor.
dontlaugh1 day ago
Goroutines aren't POSIX threads. They could've lacked shared memory by default, which could be enforced by a combination of the compiler and runtime, like with Erlang.
bloppe1 day ago
Who is "they"? This isn't Rust. It's still up to the developer to follow the advice.
Anyway, I would stop short of saying "Go chose shared memory". They've always been clear that that's plan B.
dontlaugh1 day ago
Go's creators said "Don't communicate by sharing memory", but then designed goroutines to do exactly that. It's quite hard to not share memory by accident, actually.
It's not like it's a disaster, but it's certainly inconsistent.
bloppe1 day ago
I don't think allowing developers to use their discretion to share state is "certainly inconsistent". Not sure what your threshold is for "quite hard" but it seems pretty low to me.
dontlaugh1 day ago
Goroutines could've lacked shared memory by default, requiring you to explicitly pass in pointers to shared things. That would've significantly encouraged sharing memory by communicating.
The opposite default encourages the opposite behaviour.
marhee1 day ago
Concurrent programming is hard and has many pitfalls; people are warned about this from the very, very start. If you then go about it without studying proper usage and common pitfalls, and do not use (very) defensive coding practices (violated by all the examples), then the main issue is just naivety. No programming language can really defend against that.
gf0001 day ago
You are completely dismissing language design.
Also, these are minimal reproducers, the exact same mistakes can trivially happen in larger codebases across multiple files, where you wouldn't notice them immediately.
LtWorf1 day ago
The whole point of not using C is that such pitfalls shouldn't compile in other languages.
bayindirh1 day ago
> Pretending they don't exist doesn't make them go away.
It's generally assumed that people who defend their favorite programming language are oblivious to the problems the language has or choose to ignore these problems to cope with the language.
There's another possibility: Knowing the footguns and how to avoid them well. This is generally prevalent in (Go/C/C++) vs. Rust discussions. I for one know the footguns, I know how bad it can be, and I know how to avoid them.
Liking a programming language as is, operating within its safe-envelope and pushing this envelope with intent and care is not a bad thing. It's akin to saying that using a katana is bad because you can cut yourself.
We know, we accept, we like the operating envelope of the languages we use. These are tools, and no tool is perfect. Using a tool knowing its modus operandi is not "pretending the problems don't exist".
kryptiskt1 day ago
> Using a tool knowing its modus operandi is not "pretending the problems don't exist".
I said that in response to the hostility ("crap on Go") towards the article. If such articles aren't written, how will newbies learn about the pitfalls in the first place?
bloppe1 day ago
While I agree with you in principle, there is a small but important caveat about large codebases with hundreds of contributors or more. It only takes 1 bad apple to ruin the bunch.
I'll always love a greenfield C project, though!
littlestymaar1 day ago
During the short time I was working on a Go project I spent a significant amount of time debugging an issue like the one described in his first example in a library we depended on, so it's definitely not a problem of “super convoluted example”.
speedgoose1 day ago
I assume you are aware of "the billion dollar mistake" from Tony Hoare?
euroderf1 day ago
OT: This page uses the term "Learnings" a lot. As a Murrcan in tech comms in Europe, I always corrected this to something else. But, well, is it some sort of Britishism ? Or is it some weird internet usage that is creeping into general usage ?
Likewise for "Trainings". Looks weird to Murrcan eyes but maybe it's a Britishism.
pclmulqdq20 hours ago
"Learnings" is a piece of corpspeak derived from Indian English. I believe "trainings" also has the same origin.
euroderf17 hours ago
Please elucidate! Maybe a URL?
bimmbash1 day ago
The author is obviously an overcompensating French speaker naively going for the more English-sounding word, i.e. "learnings" instead of "lessons". In this instance it's an overly literal translation of the French "enseignements", as in "tirer des enseignements", meaning "learn a lesson". But since you can also say "tirer des leçons" in French with the same meaning and root, it's just a case of choosing the wrong side haphazardly on the Anglo-Saxon/Latin-Norman-French divide of the English vocabulary: sheep/mutton, ox/beef pairs and the like.
euroderf1 day ago
Interesting theory! And "Trainings"?
thfuran22 hours ago
It's pure corporatese.
shaftoe4441 day ago
I'm English and "learnings" is one piece of corporate speak that really annoys me. It just means "lessons", for people apparently unaware that that noun already exists. British corpo drones seem to need their verb/noun pairs to be identical.
I gotta say, of all the corpo speak things, the whole verb/noun normalizing thing is maybe the least distasteful to me.
Not that I particularly like it, but compared to all the other stuff it at least seems tolerable. The penchant for deflecting questions and not answering directly, the weasel wording done to cover your ass, the use of words to mean something totally other than the word (e.g. "I take full responsibility" meaning "I will have no personal or professional repercussions"), etc. Some of it seems like it comes out of executive coaching, some of it definitely comes out of fear of lawsuits.
exasperaited21 hours ago
"In the end I did what I believed was right" meaning "I concede I did not do the right thing but accept no blame".
Mind you there are so many expressions like this and we British are masters of them, like "with the greatest of respect,", which conveys meaning slightly more severe than "you are a total fucking idiot and".
euroderf1 day ago
lettuce not forget strategerise/strategery
gspr1 day ago
From where I sit (in Norway), it seems to have become standard corporate-speak in any company where English is widely used. They've even started using the directly translated noun "læring" in Norwegian, too. It's equally silly. Both variants are usually spoken by the type of manager who sets out all future directions based on whatever their LinkedIn circle is talking about. It's thus a very valuable word, because the rash it elicits lets me know what people to avoid working with.
I'm not sure if the people who use this word think it's proper English. They rarely seem to care what words mean anyway.
exasperaited21 hours ago
The big question is why is it not proper english, when "teachings" is?
gspr6 hours ago
That's a good question. But it's not like English is at all logical in this way ;)
badc0ffee20 hours ago
I like to think it dates from 2000 when we had The Teaches of Peaches.
exasperaited21 hours ago
It's not a Britishism particularly. My sense was it is coming in part from Indian Standard English but it may well be European english mistranslation. I rather like it, actually. Not least because it is the reciprocal of "teachings", which is long established usage.
"What are the asks" and "what's the offer" are turning up much more than I'd like, and they annoy me. But not as much as other Americanisms: "concerning" meaning "a cause for concern", "addicting" when the word they are looking for is "addictive", and the rather whiny-sounding "cheater" when the word "cheat" works fine. These things can meet the proverbial fiery end, along with "performant" and "revert back" (the latter of which is an Americanism sourced from Indian English that is perhaps the only intrusion from Indian English I dislike; generally I think ISE is warm and fun and joyful.)
The BBC still put "concerning" in quotes, because the UK has not yet given up the fight, and because people like me used to write in to ask "concerning what?" I had a very fun reply from a BBC person about this, once. So I assume they are still there, forcing journalists to encase this abuse in quotation marks.
Ultimately all our bugbears are personal, though, because English is the ultimate living language, and I don't think Americans have any particular standing to complain about any of them! :-)
ETA: Lest anyone think I am complaining more about Americanisms than other isms, I would just like to say that one of my favourite proofs of the extraordinary flexibility of English is the line from Mean Girls: "She doesn't even go here!"
exasperaited21 hours ago
The other day the varying meaning of "lolly" came up in a discussion. In the UK, when it's not a slang term for money, a "lolly" is either a sticky sweet (candy) on a stick, or a frozen treat on a stick. From "lollipop" and then a shortening of "ice lolly".
In Australia, a "lolly" is more or less any non-chocolate-based sweet (candy).
British people find this confusing in Australia, but this is a great example of a word whose meaning was refined in the UK long after we started transporting people to Australia. Before that, a "lollipop" was simply a boiled treacle sweet that might or might not have been on a stick; some time after transportation started, as the industrialised confectionary industry really kicked off, the British English meaning of the word slowly congealed around the stick, and the Australian meaning did not.
questioner82161 day ago
I dislike some of this article, my impression is similar to some of the complaints of others here.
However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place? If only immutable messages are shared between goroutines, it should be way easier to avoid many of these issues. That is of course not always viable, for instance due to performance concerns, but in theory can be done a lot of the time.
I have heard others call for making it easier to track mutability and immutability in Go, similar to what the author writes here.
As for closures having explicit capture lists like in C++, I have heard some Rust developers saying they would also have liked that in Rust. It is more verbose, but can be handy.
Someone1 day ago
> However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place?
C programmers aren’t supposed to access pointers after freeing them, either.
“Easy to do, even in clean-looking code, but you shouldn’t do it” more or less is the definition of a pitfall.
lenkite1 day ago
There is a LOT of demand for explicit capture clauses. This is one thing that C++ got right and Rust got wrong with all its implicit and magic behaviour.
Go is a weird one, because it's super easy to learn -if- you're familiar with say, C. If you're not, it still appears to be super easy to learn, but has enough pitfalls to make your day bad. I feel like much of the article falls into the latter camp.
I recently worked with a 'senior' Go engineer. I asked him why he never used pointer receivers, and after explaining what that meant, he said he didn't really understand when to use asterisks or not. But hey, immutability by default is something I guess.
badc0ffee20 hours ago
He must have been a senior in some other sense, not in Go experience.
neillyons20 hours ago
Does Elixir have any footguns like this? As it is immutable I don't think any of these are possible.
asa40016 hours ago
Sorry, this is going to be a slightly longer reply since this is a really interesting question to ask!
Elixir (and anything that runs on the BEAM) takes an entirely different perspective on concurrency than almost everything else out there. It still has concurrency gotchas, but at worst they result in logic bugs, not violations of the memory model.
Stuff like:
- forgetting to update a state return value in a genserver
- reusing an old conn value and/or not using the latest conn value in Plug/Phoenix
- in ETS, making the assumption nothing else writes to your key after doing a read (I wrote a library to do this safely with compare-and-swap: https://github.com/ckampfe/cas)
- same as the ETS example, but in a process: but doing a write after doing a read and assuming nothing else has altered the process state in the interim
- leaking processes (and things like sockets/ports), either by not supervising them, monitoring them, or forgetting to shut them down, etc. This can lead to things like OOMs, etc.
- deadlocking processes by getting them into a state where they each expect a reply from the other process (OTP timeouts fix this, try to always use OTP)
- logical race conditions in a genserver init callback, where the process performs some action in the init that cannot complete until the init has returned, but the init has not returned yet, so you end up with a race or an invalid state
- your classic resource exhaustion issues, where you have a ton of processes attempting to use some resource and that resource not being designed to be accessed by 1,000,000 things concurrently
- OOMing the VM by overfilling the mailbox of a process that can't process messages fast enough
Elixir doesn't really have locks in the same sense as a C-like language, so you don't really have lock lifetime issues, and Elixir datastructures cannot be modified at all (you can only return new, updated instances of them) so you can't modify them concurrently. Elixir has closures that can capture values from their environment, but since all values in Elixir are immutable, the closure can't modify values that it closes over.
Elixir really is designed for this stuff down to its core, and (in my opinion) it's evident how much better Elixir's design is for this problem space than Go's is if you spend an hour with each. The tradeoff Elixir makes is that Elixir isn't really what I'd call a general purpose language. It's not amazing for CLIs, not amazing for number crunching code, not amazing for throughput-bound problems. But it is a tremendous fit for the stuff most of us are doing: web services, job pipelines, etc. Basically anything where the primary interface is a network boundary.
Edited for formatting.
kiitos19 hours ago
> I have been writing production applications in Go for a few years now.
why would you create a new PricingService for every request? what makes you think a mutex in each of those (obviously unique) PricingService values would somehow protect the (inexplicably shared) PricingInfo value??
it's impossible to believe the author's claims about their experience in the language, this is just absolute beginner stuff.
evanelias17 hours ago
I had a similar reaction to that, glad it's not just me.
Meanwhile with the 4th item, this whole example is gross, repeatedly polling a buffer every 100ms is a massive red flag. And as for the data race in that item, the idiomatic fix is to just use io.Pipe, which solves the entire problem far more cleanly than inventing a SyncWriter.
The author's last comment regarding "It would also be nice if more types have a 'sync' version, e.g. SyncWriter, SyncReader, etc" probably indicates there's some fundamental confusion here about idiomatic Go.
scoodah12 hours ago
Yeah this whole section of the article threw me all the way off. What even is this code? There’s so many things wrong with it, it blows my mind.
About the only code example I saw in here and thought “yeah it sucks when that happens” is the accidental closure example. Accidentally shadowing something you’re trying to assign to in a branch because you need to handle an error or accidentally reassigning something can be subtle. But it’s pretty 101 go.
The rest is… questionable at best.
pipe0119 hours ago
That fix really confused me as well. Not only does the code behave differently from the problematic code, but why do they even need the mutex at that point?
On a phone, and the formatting of the snippets is unreadable with the 8-space tabs…
That said, I think just about all languages have their own quirks and footguns. People sometimes forget that tools are just that, tools. Go is remarkably easy to be productive in which is what the label on the tin can claims.
It isn't "fearless concurrency" but "get shit done before 5 pm because traffic's a bitch on Wednesdays".
broken_broken_1 day ago
Author here, thanks for the feedback on legibility, I have now just learned about the CSS `tab-size` property to control how much space tabs get rendered with. I have reduced it, should be better now.
rustystump1 day ago
Thanks, much nicer now
ViewTrick10021 day ago
> Go is remarkably easy to be productive in which is what the label on the tin can claims.
To feel productive in.
logicchains23 hours ago
It feels productive because you're not waiting ages for it to compile again after every change.
ViewTrick100217 hours ago
I would say it's all the boilerplate and extra typing, while the language still doesn't prevent you from shooting yourself in the foot.
rollulus1 day ago
TL;DR. Author with “years of experience of shipping to prod” mutates globals without a mutex and is surprised enough to write a blog.
allcentury22 hours ago
There’s an example of a mutex too…
scoodah12 hours ago
An example where they’re creating a new mutex every time they call a function and then surprised when multiple goroutines that called that function and got entirely different mutexes somehow couldn’t coordinate the locks together.
That isn’t a core misunderstanding of Go, that’s a core misunderstanding of programming.
__loam1 day ago
In the first one, he complains that one character is enough to cause an issue, but the user should really have a good understanding of variable scope and the difference between assignment and instantiation if they're writing concurrent Go code. Some IDEs warn the user when they do this, with a different color.
Races with mutexes can indicate the author either doesn't understand or refuses to engage with Go's message based concurrency model. You can use mutexes but I believe a lot of these races can be properly avoided using some of the techniques discussed in the go programming language book.
beeb1 day ago
I would argue it doesn't help that all errors are usually named `err` and sprinkled every third line of code in Go. It's an easy mistake to assign to an existing variable instead of creating a new one, especially if you frequently switch between languages (which might not have the `:=` operator).
konart1 day ago
>he complains that one character is enough
He complains that the language design offers no way of avoiding it (in this particular case) and relies only on the human or the IDE. Humans are not perfect and should not be a prerequisite for correct code.
xlii1 day ago
Whatever the case, Go's tooling (i.e. the IDE part) is one of the best in class, and I think it shouldn't be dismissed in the context of some footguns that Go has.
TheDong1 day ago
"best in class"?
I feel like Java's IDE support is best in class. I feel like go is firmly below average.
Like, Java has great tooling for attaching a debugger, including to running processes, and stepping through code, adding conditional breakpoints, poking through the stack at any given moment.
Most Go developers seem to still be stuck in println debugging land, akin to what you get in C.
The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project, and Go has various IDE features that work way slower (like "find implementations of this interface").
The JVM has all sorts of great knobs and features to help you understand memory usage and tune performance, while Go doesn't even have a "go build -debug" vs "go build -release" to turn on and off optimizations, so even in your fast iteration loop, go is making production builds (since that's the only option), and they also can't add any slow optimizations because that would slow down everyone's default build times. All the other sane compilers I know let you do a slower release build to get more performance.
The Go compiler doesn't emit warnings, insisting that you instead run a separate tool (go vet), but since it's a separate tool you now have to effectively compile the code twice just to get your compiler warnings, making it slower than if the compiler just emitted warnings.
Go's cgo tooling is also far from best in class, with even nodejs and ruby having better support for linking to C libraries in my opinion.
Like, it's incredibly impressive that Go managed to re-invent so many wheels so well, but they managed to reach the point where things are bearable, not "best in class".
I think the only two languages that achieved actually good IDE tooling are elisp and smalltalk, kinda a shame that they're both unserious languages.
Mawr1 day ago
> The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project
Okay, come on now :D Absolutely everything around Java consumes gigabytes of memory. The culture of wastefulness is real.
The Go vs Java plugins for VSCode are no comparison in terms of RAM usage.
I don't know how much the Go plugin uses, which is how it should be for all software: usage low enough that I never had to worry about it.
Meanwhile, my small Java projects get OOM-killed all the time during what I assume is the compilation the plugin does in the background? We're talking several gigabytes of RAM being used for... ??? I'm not exactly surprised; I've yet to see Java software that didn't demand gigabytes as a baseline. IntelliJ is no different btw, with asinine startup times during which RAM usage balloons.
gf0001 day ago
Java consumes memory because collecting garbage is extra work and under most circumstances it makes no sense to rush it. Meanwhile Go will rather take time away from your code to collect garbage, decreasing throughput. If there is ample memory available, why waste energy on that?
Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Also, IntelliJ is a whole IDE that caches all your code in AST form for fast lookup and the like; it has to use some extra memory by definition (though it's also configurable if you really want, but that's a classic space-vs-time tradeoff again).
LtWorf1 day ago
refcounting gc is very fast and works fine for most of the references. Java not using a combination of both methods is a flaw.
gf0001 day ago
Refcounting is significantly slower under most circumstances. You are literally putting a bunch of atomic increments/decrements into your code (if you can't prove that the given object is only used from a single thread) which are crazy expensive operations on modern CPUs, evicting caches.
LtWorf1 day ago
Under most circumstances function local variables aren't passed to other threads, or passed at all.
gf0001 day ago
And? That's a small, optional optimization done by e.g. Swift.
Also, I don't know how it's relevant to Go which uses a tracing GC.
LtWorf14 hours ago
It's not "small" if it accounts for most of the allocations :)
rustystump1 day ago
Java best in class? I love Java, it was my first love, but I'll take the Go ecosystem 1000% of the time.
TheDong1 day ago
Mind explaining your debugging setup, i.e. which IDE you use and what tooling you use to be able to step through and reason about code?
gf0001 day ago
It absolutely is. There are not many ecosystems where you can attach a debugger to a live prod system with minimal overhead, or that have something like flight recorder, visualvm, etc.
TheDong1 day ago
The mutex case is one where they're using a mutex to guard read/writes to a map.
Please show us how to write that cleanly with channels, since clearly you understand channels better than the author.
I think the golang stdlib authors could use some help too, since they prefer mutexes for basically everything (look at sync.Map, it doesn't spin off a goroutine to handle read/write requests on channels, it uses a mutex).
In fact, almost every major go project seems to end up tending towards mutexes because channels are both incredibly slow, and worse for modeling some types of problems.
... I'll also point out that channels don't save you from data-races necessarily. In rust, passing a value over a channel moves ownership, so the writer can no longer access it. In go, it's incredibly easy to write data-races still, like for example the following is likely to be a data-race:
handleItemChannel <- item
slog.Debug("wrote item", "item", item) // <-- probably races because 'item' ownership should have been passed along.
questioner82161 day ago
For that last example, if 'item' is immutable, there is no issue, correct?
TheDong1 day ago
Yeah, indeed.
Developers have a bad habit of adding mutable fields to plain old data objects in Go though, so even if it's immutable now, it's easy for a developer to create a race down the line. There's no way to declare that something must be immutable at compile time, so the compiler won't help you there.
questioner82161 day ago
Good points. I have also heard others say the same in the past regarding Go. I know very little about Go or its language development, however.
I wonder if Go could easily add some features regarding that. There are different ways to go about it. 'final' in Java is different from 'const' in C++, for example, and Rust has borrow checking and 'const'. I think the developers of the OCaml language have experimented with something inspired by Rust regarding concurrency.
tialaramex22 hours ago
Rust's `const` is an actual constant, like 4 + 1 is a constant, it's 5, it's never anything else, we don't need to store it anywhere - it's just 5. In C++ `const` is a type qualifier and that keyword stands for constant but really means immutable not constant.
This results in things like you can "cast away" C++ const and modify that variable anyway, whereas obviously we can't try to modify a constant because that's not what the word constant means.
In both languages 5 += 3 is nonsense, it can't mean anything to modify 5. But in Rust we can write `const FIVE: i32 = 5;` and now FIVE is also a constant and FIVE += 3 is also nonsense and won't compile. In contrast in C++ altering an immutable "const" variable you've named FIVE is merely forbidden, once we actually do this anyway it compiles and on many platforms now FIVE is eight...
questioner821622 hours ago
Right, I forgot that 'const' in Rust is 'constexpr'/'consteval' in C++, while absence of 'mut' is probably closer to C++ 'const', my apologies.
C++ 'constexpr' and Rust 'const' is more about compile-time execution than marking something immutable.
In Rust, it is probably also possible to do a cast like &T to *mut T. Though that might require unsafe and might cause UB if not used properly. I recall some people hoping for better ergonomics when doing casting in unsafe Rust, since it might be easy to end up with UB.
Last I heard, C++ is better regarding 'constexpr' than Rust regarding 'const', and Zig is better than both on that subject.
tialaramex16 hours ago
AFAICT, although C++ now has const, constexpr, consteval and constinit, none of those mean an actual constant. In particular, constexpr is largely just boilerplate left over from an earlier idea about true compile-time constants, and so it means almost nothing today.
Yes, the C++ compile time execution could certainly be considered more powerful than Rust's and Zig's even more powerful than that. It is expected that Rust will some day ship compile time constant trait evaluations, which will mean you don't have to write awkward code that avoids e.g. iterators -- so with that change it's probably in the same ballpark as C++ 17 (maybe a little more powerful). However C++ 20 does compile-time dynamic allocation†, and I don't think that's on the horizon for Rust.
† In C++ 20 you must free these allocations inside the same compile-time expression, but that's still a lot of power compared to not being allowed to allocate. It is definitely possible that a future C++ language will find a way to sort of "grandfather in" these allocations so that somehow they can survive to runtime rather than needing to free them.
Rust does give you the option to break out the big guns by writing "procedural" aka "proc" macros which are essentially Rust that is run inside your compiler. Obviously these are arbitrarily powerful, but far too dangerous - there's a (serious) proc macro to run Python from inside your Rust program and (joke, in that you shouldn't use it even though it would work) proc macro which will try out different syntax until it finds which of several options results in a valid program...
iambvk1 day ago
Only looked at the first two examples. No language can save you when one writes bad code like that.
unscaled1 day ago
You can argue about how likely code like that is, but both of these examples would result in a hard compiler error in Rust.
A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to ensure memory safety without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain of dealing with borrow-checker errors comes from its other job: preventing data races.
And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin, for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone WaitGroup example (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.
These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.
questioner82161 day ago
Rust concurrency also has issues, there are many complaints about async [0], and some Rust developers point to Go as having green threads. The original author of Rust originally wanted green threads as I understand it, but Rust evolved in a different direction.
As for Java, there are fibers/virtual threads now, but I know too little of them to comment on them. Go's green thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad, it has some good aspects to it.
Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data-race safety. You can have data-race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race footguns than Go, but it's still generally unsafe).
While I agree, in practice they can actually be parallel. Case in point - the Java Vert.x toolkit. It uses event-loop and futures, but they have also adopted virtual threads in the toolkit. So you still got your async concepts in the toolkit but the VTs are your concurrency carriers.
questioner82161 day ago
But Rust's async is one of the primary ways to handle concurrency in Rust, right? Like, async is a core part of how Tokio handles concurrency.
Smaug1231 day ago
Could you give an example to distinguish them? Async means not-synchronous, which I understand to mean that the next computation to start is not necessarily the next computation to finish. Concurrent means multiple different parts of the program may make progress before any one of them finishes. Are they not the same? (Of course, concurrency famously does not imply parallelism, one counterexample being a single-threaded async runtime.)
Sharlin1 day ago
Async, for better or worse, in 2025 is generally used to refer to the async/await programming model in particular, or more generally to non-blocking interfaces that notify you when they're finished (often leading to the so-called "callback hell" which motivated the async/await model).
hgomersall1 day ago
If you are waiting for a hardware interrupt to happen based on something external happening, then you might use async. The benefit is primarily to do with code structure - you write your code such that the next thing to happen only happens when the interrupt has triggered, without having to manually poll completion.
You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.
aallaall20 hours ago
So async enables concurrent outstanding requests.
gf0001 day ago
But even so, the JVM has well-defined data races that may cause logical problems, but can never cause memory issues.
That's not the case with Go, so data races there are significantly worse than in Rust or Java/C#, etc.
p2detar1 day ago
What is your definition of memory issues?
Of course you can have memory corruption in Java. The easiest way is to spawn 2 threads that write to the same ByteBuffer without write locks.
gf0001 day ago
And you would get garbled up bytes in application logic. But it has absolutely no way to mess up the runtime's state, so any future code can still execute correctly.
Meanwhile a memory issue in C/Rust and even Go will immediately drop every assumption out the window, the whole runtime is corrupted from that point on. If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
So there are objective distinctions to have here, e.g. Rust guarantees that the source of such a corruption can only be an incorrect `unsafe` block, and Java flat out has no platform-native unsafe operations, even under data races. Go can segfault with data races on fat pointers.
Of course every language capable of FFI calls can corrupt its runtime, Java is no exception.
p2detar23 hours ago
> Meanwhile a memory issue in C/Rust and even Go will immediately drop every assumption out the window, the whole runtime is corrupted from that point on.
In C, yes. In Rust, I have no real experience. In Go, as you pointed out, it should segfault, which is not great but still better than C, i.e., it fails early. So I don't understand what your next comment means. What is a "less lucky" example in Go?
> If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
gf00022 hours ago
Silent corruption of unrelated data structures in memory. A segfault only happens if you access memory outside the program's valid address space. But it can just as easily happen that you corrupt something in the runtime, and the GC will wreak havoc, or you get a million other kinds of very hard-to-debug errors.
jerf23 hours ago
Haskell, Erlang/Elixir, and Rust would save you from most of these problems.
Then, of course, there are the languages that are still so deeply single-threaded that you simply can't write concurrency bugs in them in the first place, or you have to go way out of your way to get to them; not because they're better than Go, but because they don't even play the game.
However, it is true the list is short. Many of the people taking the opportunity to complain about Go work in languages where everything they're so excited to complain about is still entirely possible (with varying affordances and details around the issues), or in languages that, as mentioned, simply aren't playing the game at all, which doesn't really count as being any better.
The first Go proverb Rob Pike listed in his talk "Go Proverbs" was, "Don't communicate by sharing memory, share memory by communicating."
Go was designed from the beginning to use Tony Hoare's idea of communicating sequential processes for designing concurrent programs.
However, like any professional tool, Go allows you to do the dangerous thing when you absolutely need to, but it's disappointing when people insist on using the dangerous way and then blame it on the language.
https://www.youtube.com/watch?v=PAAkCSZUG1c
This is all very nice as an idea or a mythical background story ("Go was designed entirely around CSP"), but Go is not a language that encourages "sharing by communicating". Yes, Go has channels, but many other languages also have channels, and theirs are less error-prone than Go's [1]. For many concurrent use cases (e.g. caching), sharing memory is far simpler and less error-prone than using channels.
If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
[1] https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-s...
> people insist on using the dangerous way and then blame it on the language
Can you blame them when the dangerous way requires zero extra syntax while the safe way does? I think it's fine to criticize unsafe defaults, though of course it would not be fair to treat them like they're the only option.
They're not using the dangerous way because of syntax, they're using it because they think they're "optimizing" their code. They should write correct code first, measure, and then optimize if necessary.
Meaning similar to Erlang style message passing?
Not quite. Erlang uses the Actor model, which delivers messages asynchronously to named processes. In Go, messages are passed between goroutines via channels, which provide a synchronization mechanism (when unbuffered). The ability to synchronize allows one to set up a "rhythm" to computation that the Actor model is explicitly not designed to do. Also, note that a process must know its consumer in the Actor model, but goroutines do not need to know their consumer in the CSP model. Channels can even be passed around to other goroutines!
Each has its own pros and cons. You can see some of the legends who invented different methods of concurrency here: https://www.youtube.com/watch?v=37wFVVVZlVU
There's also a nice talk Rob Pike gave that illustrated some very useful concurrency patterns that can be built using the CSP model: https://www.youtube.com/watch?v=f6kdp27TYZs
It's true that message sends with Erlang processes do not perform rendezvous synchronization (i.e., sends are nonblocking), but they can be used in a similar way by having process A send a message to process B and then blocking on a reply from process B. This is not the same as unbuffered channel blocking in Go or Clojure, but it's somewhat similar.
For example, in Erlang, `receive` _is_ a blocking operation that you have to attach a timeout to if you want to unblock it.
You're correct about identity/names: the "queue" part of processes (the part that is most analogous to a channel) is their mailbox, which cannot be interacted with except via message sends to a known pid. However, you can again mimic some of the channel-like functionality by sending around pids, as they are first class values, and can be sent, stored, etc.
I agree with all of your points, just adding a little additional color.
> "Go is often touted for its ease to write highly concurrent programs. However, it is also mind-boggling how many ways Go happily gives us developers to shoot ourselves in the foot."
In my career I've found that if languages don't allow developers to shoot themselves (and everyone else) in the foot they're labelled toy languages or at the very least "too restrictive". But the moment you're given real power someone pulls the metaphorical trigger, blows their metaphorical foot off and then starts writing blog posts about how dangerous it is.
Though a good language would make it clear that what the junior (or in some cases even senior) dev is holding is in fact a gun, rather than disguising and marketing the gun as a nice, easy-to-use toy; that's especially true of Go.
One must keep in mind that devs manage to botch even simple logic that is directly reflected in the code. I'd rather not hand them a non-thread-safe language that provides a two-letter keyword for starting a concurrent thread in the same address space. Insane language design.
Every language has an arsenal of footguns. Go is no different. I would say that overall it is not too bad, comparatively.
Of all the listed cases, only the first one is easy to get caught by, even as an experienced developer. There, the IDE and syntax highlighting are of tremendous help and good general prevention. The rest is just understanding the language and having some practice.
I'm still relatively new to Go, but I've never seen closures used that often thus far in production code. Is it really a common practice?
That first example is an unintended closure, since the err at the top level actually has nothing to do with the errs in the goroutines. I have seen that sometimes, although the use of = rather than := normally makes it obvious that something dodgy is going on.
As to whether it's a common pattern, I see closures on WaitGroups or ErrGroups quite often:
You can avoid the closure by making the worker func take a *sync.WaitGroup and passing in &wg, but it doesn't really have any benefit over just using the closure for convenience.
Yes, kind of.
The first one should be caught by the Go race detector AFAIK. It will warn about the conflicting write accesses to err when both goroutines run.
All code is inherently not concurrency-safe unless it says so. The http.Client docs mention concurrent usage is safe, but not modification.
The closure compiler flag trick looks interesting though, will give this a spin on some projects.
I agree, any direct field modification should be assumed to be non-thread-safe. OTOH, I think Go made a mistake by exporting http.DefaultClient: because it is a pointer, using it causes several problems, including thread-safety issues, and there are libraries that use it. It would have been better as an http.NewDefaultClient() that creates a new client every time it is called.
I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor does it encourage the use of builder pattern in its standard library (like modern Java does).
If, let's say, http.Client was functionally immutable (with all fields being private), and you'd need to have to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
> The http.Client docs mention concurrent usage is safe, but not modification.
Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.
On the other hand, it should be very obvious for anyone that has experience with concurrency, that changing a field on an object like the author showed can never be safe in a concurrency setting. In any language.
This is not true in the general case. E.g. setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.
It depends on the platform though (e.g. in Java it is guaranteed that there is no tearing [1]).
[1] In OpenJDK. The JVM spec itself only guarantees it for 32-bit primitives and references, but given that 64-bit CPUs can cheaply/freely write a 64-bit value atomically, that's how it's implemented.
> setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.
this only works when the language defines a memory model where bools are guaranteed to have atomic reads and writes
so you can't make a claim like "setting a field to true from ... multiple threads ... can be a meaningful operation e.g. if you only care about if ANY of the threads have finished execution"
as that claim only holds when the memory model allows it
which is not true in general, and definitely not true in go
assumptions everywhere!!
GP didn’t say “setting a ‘bool’ value to true”, it referred to setting a “field”. Interpreted charitably, this would be done in Go via a type that does support atomic updates, which is totally possible.
> can never be safe in a concurrency setting. In any language.
Then I give an example of a language where it's safe
I don't get your point. The negation of a universal claim ("in any language") is exactly one example where it doesn't apply.
I saw that bit about concurrent use of http.Client and immediately panicked about all our code in production hammering away concurrently on a couple of client instances... and then saw the example and thought... why would you think you can do that concurrently??
the distinction between "concurrent use" and "concurrent modification" in go is in no way subtle
there is this whole demographic of folks, including the OP author, who seem to believe that they can start writing go programs without reading and understanding the language spec, the memory model, or any core docs, and that if the program compiles and runs that any error is the fault of the language rather than the programmer. this just ain't how it works. you have to understand the thing before you can use the thing. all of the bugs in the code in this blog post are immediately obvious to anyone who has even a basic understanding of the rules of the language. this stuff just isn't interesting.
> Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.
Which PL do you use then ? Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.
> Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.
Please explain
Not GP but off the top of my head: async cancellation, mutex poisoning, drop+clone+thread interactions, and the entire realm of unsafe (which specific language properties no longer hold in an unsafe block? Is undefined behavior present if there’s a defect in unsafe code, or just incorrect behavior? Both answers are indeed subtle and depend on the specifics of the unsafe block). And auto deref coercion, knowing whether a given piece of code allocates, and “into”/turbofish overload lookup, but those subtleties aren’t really concurrency related.
I like Rust fine, but it’s got plenty of subtle distinctions.
Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.
Anyways, the article author lacks basic reading skills, since he forgot to mention that the Go http doc states that only the http client transport is safe for concurrent modification. There is no "subtlety" about it. It directly says so. Concurrent "use" is not Concurrent "modification" in Go. The Go stdlib doc uses this consistently everywhere.
> Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.
Where are the “subtle linguistic distinctions”? These types do two completely different things. And neither are even capable of being used in a multithreaded context due to `!Sync` (and `!Send` for Rc and refguards)
I did say "runtime borrow checking", i.e. using them together. Example: `Rc::new(RefCell::new(value));`. Borrowing that mutably while another borrow is live will panic at runtime. Maybe I should have used the phrase "dynamic borrowing"?
https://play.rust-lang.org/?version=stable&mode=debug&editio...
You don't need different threads. I said concurrency not multi-threading. Interleaving tasks within the same thread (in an event loop for example) can cause panics.
I understand what you meant (but note that allocating an Rc isn’t necessary; &RefCell would work just fine). I just didn’t see the “subtle linguistic distinctions” - and still don’t… maybe you could point them out for me?
https://doc.rust-lang.org/stable/std/cell/struct.RefCell.htm...
https://doc.rust-lang.org/stable/std/cell/struct.RefCell.htm...
Yeah, it is a crappy example. Ignore me. I just re-read and the rustdoc has no “subtle linguistic distinctions”.
Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.
If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.
4 ways to demonstrate that the author either knows nothing about closures, structs, mutexes, and atomicity OR they just come from a Rust background and made some super convoluted examples to crap on Go.
“A million ways to segfault in C” and its just the author assigning NULL to a pointer and reading it, then proclaiming C would be better if it didn’t have a NULL value like Rust.
I’m mad I read that. I want a refund on my time.
First sentence:
>I have been writing production applications in Go for a few years now. I like some aspects of Go. One aspect I do not like is how easy it is to create data races in Go.
Their examples don't seem terribly convoluted to me. In fact, Uber's blog post is quite similar: https://www.uber.com/blog/data-race-patterns-in-go/
To me it looks like simple, clear examples of potential issues. It's unfortunate to frame that as "crapping on Go", how are new Go programmers going to learn about the pitfalls if all discussion of them are seen as hostility?
Like, rightly or wrongly, Go chose pervasive mutability and shared memory, it inevitably comes with drawbacks. Pretending they don't exist doesn't make them go away.
Go famously summed up their preferred approach to shared state:
> Don't communicate by sharing memory; share memory by communicating.
Which they then failed to follow, especially since goroutines share memory with each other.
Go is a bit more of a low level language compared to actor languages where the language enforces that programming model. I think the point of the slogan is that you want to make the shared memory access an implementation detail of your larger system.
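A minimal sketch of that interpretation (all names invented): one goroutine owns the map outright, and everything else communicates with it over channels, so the memory access really is an implementation detail.

```go
package main

import "fmt"

// get and set are the only ways to touch the owner's map.
type get struct {
	key   string
	reply chan int
}

type set struct {
	key string
	val int
}

// owner is the sole goroutine with access to m: no mutex needed,
// because no other goroutine can reach the map at all.
func owner(gets chan get, sets chan set) {
	m := map[string]int{}
	for {
		select {
		case s := <-sets:
			m[s.key] = s.val
		case g := <-gets:
			g.reply <- m[g.key]
		}
	}
}

func main() {
	gets, sets := make(chan get), make(chan set)
	go owner(gets, sets)

	sets <- set{"price", 42} // "share memory by communicating"
	reply := make(chan int)
	gets <- get{"price", reply}
	fmt.Println(<-reply) // prints 42
}
```

Whether this beats a mutex-guarded map is a judgment call; for hot paths a mutex is usually faster, which is part of the tension being discussed here.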
Threads share the same memory by definition, though. When you isolate these threads from a memory PoV, they become processes.
Moreover, threads are arguably useless without shared memory anyway. A thread is invoked to work on the same data structure with multiple "affectors". Coordination of these affectors is up to you. Atomics, locks, queues... The tools are many.
In fact, processes are just threads which are isolated from each other, and this isolation is enforced by the processor.
Goroutines aren't posix threads. They could've lacked shared memory by default, which could be enforced by a combination of the compiler and runtime like with Erlang.
Who is "they"? This isn't Rust. It's still up to the developer to follow the advice.
Anyway, I would stop short of saying "Go chose shared memory". They've always been clear that that's plan B.
Go's creators said "Don't communicate by sharing memory", but then designed goroutines to do exactly that. It's quite hard to not share memory by accident, actually.
It's not like it's a disaster, but it's certainly inconsistent.
I don't think allowing developers to use their discretion to share state is "certainly inconsistent". Not sure what your threshold is for "quite hard" but it seems pretty low to me.
Goroutines could've lacked shared memory by default, requiring you to explicitly pass in pointers to shared things. That would've significantly encouraged sharing memory by communicating.
The opposite default encourages the opposite behaviour.
Concurrent programming is hard and has many pitfalls; people are warned about this from the very, very start. If you then go about it without studying proper usage and common pitfalls, and don't use (very) defensive coding practices (violated by all the examples), then the main issue is just naivety. No programming language can really defend against that.
You are completely dismissing language design.
Also, these are minimal reproducers, the exact same mistakes can trivially happen in larger codebases across multiple files, where you wouldn't notice them immediately.
The whole point of not using C is that such pitfalls shouldn't compile in other languages.
> Pretending they don't exist doesn't make them go away.
It's generally assumed that people who defend their favorite programming language are oblivious to the problems the language has or choose to ignore these problems to cope with the language.
There's another possibility: Knowing the footguns and how to avoid them well. This is generally prevalent in (Go/C/C++) vs. Rust discussions. I for one know the footguns, I know how bad it can be, and I know how to avoid them.
Liking a programming language as is, operating within its safe-envelope and pushing this envelope with intent and care is not a bad thing. It's akin to saying that using a katana is bad because you can cut yourself.
We know, we accept, we like the operating envelope of the languages we use. These are tools, and no tool is perfect. Using a tool knowing its modus operandi is not "pretending the problems don't exist".
> Using a tool knowing its modus operandi is not "pretending the problems don't exist".
I said that in response to the hostility ("crap on Go") towards the article. If such articles aren't written, how will newbies learn about the pitfalls in the first place?
While I agree with you in principle, there is a small but important caveat about large codebases with hundreds of contributors or more. It only takes 1 bad apple to ruin the bunch.
I'll always love a greenfield C project, though!
During the short time I was working on a Go project I spent a significant amount of time debugging an issue like the one described in his first example in a library we depended on, so it's definitely not a problem of “super convoluted example”.
I assume you are aware of "the billion dollar mistake" from Tony Hoare?
OT: This page uses the term "Learnings" a lot. As a Murrcan in tech comms in Europe, I always corrected this to something else. But, well, is it some sort of Britishism ? Or is it some weird internet usage that is creeping into general usage ?
Likewise for "Trainings". Looks weird to Murrcan eyes but maybe it's a Britishism.
"Learnings" is a piece of corpspeak derived from Indian English. I believe "trainings" also has the same origin.
Please elucidate! Maybe a URL?
The author is obviously an overcompensating French speaker naively going for the more English-sounding word, i.e. "learnings" instead of "lessons". In this instance it's an overly literal translation of the French "enseignements", as in "tirer des enseignements", meaning "learn a lesson". But since you can also say "tirer des leçons" in French with the same meaning and root, it's just a case of choosing the wrong side haphazardly on the Anglo-Saxon/Latin-Norman-French divide of the English vocabulary: sheep/mutton, ox/beef pairs and the like.
Interesting theory! And "Trainings"?
It's pure corporatese.
I'm English and "learnings" is one piece of corporate speak that really annoys me. It just means "lessons", for people apparently unaware that noun already exists. British corpo drones seem to need their verb/noun pairs to be identical like
action/action, learnings/learning, trainings/training, asks/ask, strategising/strategy
I gotta say, of all the corpo speak things, the whole verb/noun normalizing thing is maybe the least distasteful to me.
Not that I particularly like it, but compared to all the other stuff it at least seems tolerable. The penchant for deflecting questions and not answering directly, the weasel wording done to cover your ass, the use of words to mean something totally other than the word (e.g. "I take full responsibility" meaning "I will have no personal or professional repercussions"), etc. Some of it seems like it comes out of executive coaching, some of it definitely comes out of fear of lawsuits.
"In the end I did what I believed was right" meaning "I concede I did not do the right thing but accept no blame".
Mind you there are so many expressions like this and we British are masters of them, like "with the greatest of respect,", which conveys meaning slightly more severe than "you are a total fucking idiot and".
lettuce not forget strategerise/strategery
From where I sit (in Norway), it seems to have become standard corporate-speak in any company where English is widely used. They've even started using the directly translated noun "læring" in Norwegian, too. It's equally silly. Both variants are usually spoken by the type of manager who sets out all future directions based on whatever their LinkedIn circle is talking about. It's thus a very valuable word, because the rash it elicits lets me know what people to avoid working with.
I'm not sure if the people who use this word think it's proper English. They rarely seem to care what words mean anyway.
The big question is why is it not proper english, when "teachings" is?
That's a good question. But it's not like English is at all logical in this way ;)
I like to think it dates from 2000 when we had The Teaches of Peaches.
It's not a Britishism particularly. My sense was it is coming in part from Indian Standard English but it may well be European english mistranslation. I rather like it, actually. Not least because it is the reciprocal of "teachings", which is long established usage.
"What are the asks" and "what's the offer" are turning up much more than I'd like, and they annoy me. But not as much as other Americanisms: "concerning" meaning "a cause for concern", "addicting" when the word they are looking for is "addictive", and the rather whiny-sounding "cheater" when the word "cheat" works fine. These things can meet the proverbial fiery end, along with "performant" and "revert back" (the latter of which is an Americanism sourced from Indian English that is perhaps the only intrusion from Indian English I dislike; generally I think ISE is warm and fun and joyful).
The BBC still put "concerning" in quotes, because the UK has not yet given up the fight, and because people like me used to write in to ask "concerning what?" I had a very fun reply from a BBC person about this, once. So I assume they are still there, forcing journalists to encase this abuse in quotation marks.
Ultimately all our bugbears are personal, though, because English is the ultimate living language, and I don't think Americans have any particular standing to complain about any of them! :-)
ETA: Lest anyone think I am complaining more about Americanisms than other isms, I would just like to say that one of my favourite proofs of the extraordinary flexibility of English is the line from Mean Girls: "She doesn't even go here!"
The other day the varying meaning of "lolly" came up in a discussion. In the UK, when it's not a slang term for money, a "lolly" is either a sticky sweet (candy) on a stick, or a frozen treat on a stick. From "lollipop" and then a shortening of "ice lolly".
In Australia, a "lolly" is more or less any non-chocolate-based sweet (candy).
British people find this confusing in Australia, but this is a great example of a word whose meaning was refined in the UK long after we started transporting people to Australia. Before that, a "lollipop" was simply a boiled treacle sweet that might or might not have been on a stick; some time after transportation started, as the industrialised confectionery industry really kicked off, the British English meaning of the word slowly congealed around the stick, and the Australian meaning did not.
I dislike some of this article, my impression is similar to some of the complaints of others here.
However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place? If only immutable messages are shared between goroutines, it should be way easier to avoid many of these issues. That is of course not always viable, for instance due to performance concerns, but in theory can be done a lot of the time.
I have heard others call for making it easier to track mutability and immutability in Go, similar to what the author writes here.
As for closures having explicit capture lists like in C++, I have heard some Rust developers saying they would also have liked that in Rust. It is more verbose, but can be handy.
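A sketch of the "immutable messages" discipline in current Go: send values, not pointers, and each side works on its own copy.

```go
package main

import "fmt"

type Msg struct{ N int }

func main() {
	ch := make(chan Msg, 1)
	m := Msg{N: 1}
	ch <- m  // the struct is copied into the channel
	m.N = 99 // mutating the sender's copy afterwards...

	got := <-ch
	fmt.Println(got.N) // prints 1: the receiver's copy is unaffected
}
```

Nothing enforces the discipline, though: change `chan Msg` to `chan *Msg` and the same code silently shares.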
> However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place?
C programmers aren’t supposed to access pointers after freeing them, either.
“Easy to do, even in clean-looking code, but you shouldn’t do it” more or less is the definition of a pitfall.
There is a LOT of demand for explicit capture clauses. This is one thing that C++ got right and Rust got wrong with all its implicit and magic behaviour.
https://www.reddit.com/r/rust/comments/1odrf9s/explicit_capt...
Go is a weird one, because it's super easy to learn -if- you're familiar with say, C. If you're not, it still appears to be super easy to learn, but has enough pitfalls to make your day bad. I feel like much of the article falls into the latter camp.
I recently worked with a 'senior' Go engineer. I asked him why he never used pointer receivers, and after explaining what that meant, he said he didn't really understand when to use asterisks or not. But hey, immutability by default is something I guess.
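For reference, the asterisk question in a few lines; the difference is small but observable:

```go
package main

import "fmt"

type Counter struct{ n int }

// Value receiver: the method gets a copy, so the caller's Counter is untouched.
func (c Counter) IncByValue() { c.n++ }

// Pointer receiver: the method mutates the caller's Counter.
func (c *Counter) IncByPtr() { c.n++ }

func main() {
	c := Counter{}
	c.IncByValue()
	fmt.Println(c.n) // prints 0
	c.IncByPtr()
	fmt.Println(c.n) // prints 1
}
```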
He must have been a senior in some other sense, not in Go experience.
Does Elixir have any footguns like this? As it is immutable I don't think any of these are possible.
Sorry, this is going to be a slightly longer reply since this is a really interesting question to ask!
Elixir (and anything that runs on the BEAM) takes an entirely different perspective on concurrency than almost everything else out there. It still has concurrency gotchas, but at worst they result in logic bugs, not violations of the memory model.
Stuff like:
Elixir doesn't really have locks in the same sense as a C-like language, so you don't really have lock lifetime issues, and Elixir data structures cannot be modified at all (you can only return new, updated instances of them), so you can't modify them concurrently. Elixir has closures that can capture values from their environment, but since all values in Elixir are immutable, the closure can't modify values that it closes over.

Elixir really is designed for this stuff down to its core, and (in my opinion) it's evident how much better Elixir's design is for this problem space than Go's is if you spend an hour with each. The tradeoff Elixir makes is that Elixir isn't really what I'd call a general purpose language. It's not amazing for CLIs, not amazing for number crunching code, not amazing for throughput-bound problems. But it is a tremendous fit for the stuff most of us are doing: web services, job pipelines, etc. Basically anything where the primary interface is a network boundary.
Edited for formatting.
> I have been writing production applications in Go for a few years now.
sorry, what?
https://gaultier.github.io/blog/a_million_ways_to_data_race_...
this code is obviously wrong, fractally wrong
why would you create a new PricingService for every request? what makes you think a mutex in each of those (obviously unique) PricingService values would somehow protect the (inexplicably shared) PricingInfo value??
> the fix
https://gaultier.github.io/blog/a_million_ways_to_data_race_...
what? this is in no way a fix to the problem.
it's impossible to believe the author's claims about their experience in the language, this is just absolute beginner stuff..
I had a similar reaction to that, glad it's not just me.
Meanwhile with the 4th item, this whole example is gross, repeatedly polling a buffer every 100ms is a massive red flag. And as for the data race in that item, the idiomatic fix is to just use io.Pipe, which solves the entire problem far more cleanly than inventing a SyncWriter.
The author's last comment regarding "It would also be nice if more types have a 'sync' version, e.g. SyncWriter, SyncReader, etc" probably indicates there's some fundamental confusion here about idiomatic Go.
Yeah this whole section of the article threw me all the way off. What even is this code? There’s so many things wrong with it, it blows my mind.
About the only code example I saw in here and thought “yeah it sucks when that happens” is the accidental closure example. Accidentally shadowing something you’re trying to assign to in a branch because you need to handle an error or accidentally reassigning something can be subtle. But it’s pretty 101 go.
The rest is… questionable at best.
That fix really confused me as well. Not only does the code behave differently from the problematic code, but why do they even need the mutex at that point?
Someone write similar for Erlang..
https://www.scss.tcd.ie/jeremy.jones/CS4021/lockless.pdf
On a phone and the formatting of the snippets is unreadable with the 8 space tabs…
That said, i think about all languages have their own quirks and footguns. I think people sometimes forget that tools are just that, tools. Go is remarkably easy to be productive in which is what the label on the tin can claims.
It isn't “fearless concurrency”, but “get shit done before 5 pm because traffic's a bitch on Wednesdays”.
Author here, thanks for the feedback on legibility, I have now just learned about the CSS `tab-size` property to control how much space tabs get rendered with. I have reduced it, should be better now.
Thanks, much nicer now
> Go is remarkably easy to be productive in which is what the label on the tin can claims.
To feel productive in.
It feels productive because you're not waiting ages for it to compile again after every change.
I would say it's all the boilerplate and extra typing, while the language still doesn't prevent you from shooting yourself in the foot.
TL;DR. Author with “years of experience of shipping to prod” mutates globals without a mutex and is surprised enough to write a blog.
There’s an example of a mutex too…
An example where they’re creating a new mutex every time they call a function and then surprised when multiple goroutines that called that function and got entirely different mutexes somehow couldn’t coordinate the locks together.
That isn’t a core misunderstanding of Go, that’s a core misunderstanding of programming.
In the first one, he complains that one character is enough to cause an issue, but the user should really have a good understanding of variable scope and the difference between assignment and instantiation if they're writing concurrent Go code. Some IDEs warn the user when they do this with a different color.
Races with mutexes can indicate the author either doesn't understand or refuses to engage with Go's message based concurrency model. You can use mutexes but I believe a lot of these races can be properly avoided using some of the techniques discussed in the go programming language book.
I would argue it doesn't help that all errors are usually named `err` and sprinkled every third line of code in Go. It's an easy mistake to make to assign to an existing variable instead of create a new variable, especially if you frequently switch between languages (which might not have the `:=` operator).
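The classic shape of that mistake in a few lines (the helper name is invented):

```go
package main

import "fmt"

func lookup() (int, error) { return 42, nil }

func main() {
	var v int
	var err error
	if true {
		// ':=' declares NEW v and err scoped to this block,
		// shadowing the outer ones instead of assigning to them.
		v, err := lookup()
		_, _ = v, err
	}
	fmt.Println(v, err) // prints "0 <nil>": the outer variables never changed
}
```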
>he complains that one character is enough
He complains that language design offers no way of avoiding it (in this particular case) and relies only on human or ide. Humans are not perfect and should not be a requirement to write good code.
Whatever the case Go's tooling (i.e. IDE part) is one of the best in class and I think it shouldn't be be dismissed in the context of some footguns that Go has.
"best in class"?
I feel like Java's IDE support is best in class. I feel like go is firmly below average.
Like, Java has great tooling for attaching a debugger, including to running processes, and stepping through code, adding conditional breakpoints, poking through the stack at any given moment.
Most Go developers seem to still be stuck in println debugging land, akin to what you get in C.
The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project, and Go has various IDE features that work way slower (like "find implementations of this interface").
The JVM has all sorts of great knobs and features to help you understand memory usage and tune performance, while Go doesn't even have a "go build -debug" vs "go build -release" to turn on and off optimizations, so even in your fast iteration loop, go is making production builds (since that's the only option), and they also can't add any slow optimizations because that would slow down everyone's default build times. All the other sane compilers I know let you do a slower release build to get more performance.
The Go compiler doesn't emit warnings, insisting that you instead run a separate tool (go vet), but since it's a separate tool you now have to effectively compile the code twice just to get your compiler warnings, making it slower than if the compiler just emitted warnings.
Go's cgo tooling is also far from best in class, with even nodejs and ruby having better support for linking to C libraries in my opinion.
Like, it's incredibly impressive that Go managed to re-invent so many wheels so well, but they managed to reach the point where things are bearable, not "best in class".
I think the only two languages that achieved actually good IDE tooling are elisp and smalltalk, kinda a shame that they're both unserious languages.
> The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project
Okay, come on now :D Absolutely everything around Java consumes gigabytes of memory. The culture of wastefulness is real.
The Go vs Java plugins for VSCode are no comparison in terms of RAM usage.
I don't know how much the Go plugin uses, which is how it should be for all software — means usage is low enough I never had to worry about it.
Meanwhile, my small Java projects get OOM killed all the time during what I assume is the compilation the plugin does in the background? We're talking several gigabytes of RAM being used for... ??? I'm not exactly surprised, I've yet to see Java software that didn't demand gigabytes as a baseline. IntelliJ is no different btw, asinine startup times during which RAM usage balloons.
Java consumes memory because collecting garbage is extra work and under most circumstances it makes no sense to rush it. Meanwhile Go will rather take time away from your code to collect garbage, decreasing throughput. If there is ample memory available, why waste energy on that?
Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Also, intellij is a whole IDE that caches all your code in AST form for fast lookoup and stuff like that.. it has to use some extra memory by definition (though it's also configurable if you really want to, but it's a classic space vs time tradeoff again).
refcounting gc is very fast and works fine for most of the references. Java not using a combination of both methods is a flaw.
Refcounting is significantly slower under most circumstances. You are literally putting a bunch of atomic increments/decrements into your code (if you can't prove that the given object is only used from a single thread) which are crazy expensive operations on modern CPUs, evicting caches.
Under most circumstances function local variables aren't passed to other threads, or passed at all.
And? That's a small, optional optimization done by e.g. Swift.
Also, I don't know how it's relevant to Go which uses a tracing GC.
It's not "small" if it accounts for most of the allocations :)
Java best in class? I love java. It is my first love but ill take go ecosystem 1000% of the time.
Mind explaining your debugging setup, i.e. which IDE you use and what tooling you use to be able to step through and reason about code?
It absolutely is. There are not many ecosystems where you can attach a debugger to a live prod system with minimal overhead, or ones that have something like Flight Recorder, VisualVM, etc.
The mutex case is one where they're using a mutex to guard read/writes to a map.
Please show us how to write that cleanly with channels, since clearly you understand channels better than the author.
I think the golang stdlib authors could use some help too, since they prefer mutexes for basically everything (look at sync.Map, it doesn't spin off a goroutine to handle read/write requests on channels, it uses a mutex).
In fact, almost every major go project seems to end up tending towards mutexes because channels are both incredibly slow, and worse for modeling some types of problems.
... I'll also point out that channels don't save you from data-races necessarily. In rust, passing a value over a channel moves ownership, so the writer can no longer access it. In go, it's incredibly easy to write data-races still, like for example the following is likely to be a data-race:
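For instance, something in this shape (`item` is hypothetical): the send hands the receiver the pointer but takes nothing away from the sender, so both sides can keep writing to the same struct.

```go
package main

import "fmt"

type Item struct{ Price int }

func main() {
	ch := make(chan *Item, 1)
	item := &Item{Price: 10}
	ch <- item      // "sends" the pointer, but ownership does not move
	item.Price = 20 // the sender can still mutate after the send

	got := <-ch
	// prints 20: same object. If the receiver ran concurrently with the
	// write above, this read/write pair would be a data race.
	fmt.Println(got.Price)
}
```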
For that last example, if 'item' is immutable, there is no issue, correct?
Yeah, indeed.
Developers have a bad habit of adding mutable fields to plain old data objects in Go though, so even if it's immutable now, it's easy for a developer to create a race down the line. There's no way to indicate that something must be immutable at compile time, so the compiler won't help you there.
Good points. I have also heard others say the same in the past regarding Go. I know very little about Go or its language development, however.
I wonder if Go could easily add some features regarding that. There are different ways to go about it. 'final' in Java is different from 'const' in C++, for example, and Rust has borrow checking and 'const'. I think the language developers of the OCaml language has experimented with something inspired by Rust regarding concurrency.
Rust's `const` is an actual constant, like 4 + 1 is a constant, it's 5, it's never anything else, we don't need to store it anywhere - it's just 5. In C++ `const` is a type qualifier and that keyword stands for constant but really means immutable not constant.
This results in things like you can "cast away" C++ const and modify that variable anyway, whereas obviously we can't try to modify a constant because that's not what the word constant means.
In both languages 5 += 3 is nonsense, it can't mean anything to modify 5. But in Rust we can write `const FIVE: i32 = 5;` and now FIVE is also a constant and FIVE += 3 is also nonsense and won't compile. In contrast in C++ altering an immutable "const" variable you've named FIVE is merely forbidden, once we actually do this anyway it compiles and on many platforms now FIVE is eight...
Right, I forgot that 'const' in Rust is 'constexpr'/'consteval' in C++, while absence of 'mut' is probably closer to C++ 'const', my apologies.
C++ 'constexpr' and Rust 'const' is more about compile-time execution than marking something immutable.
In Rust, it is probably also possible to do a cast like &T to *mut T. Though that might require unsafe and might cause UB if not used properly. I recall some people hoping for better ergonomics when doing casting in unsafe Rust, since it might be easy to end up with UB.
Last I heard, C++ is better regarding 'constexpr' than Rust regarding 'const', and Zig is better than both on that subject.
AFAICT, although C++ now has const, constexpr, consteval and constinit, none of those mean an actual constant. In particular constexpr is largely just boilerplate left over from an earlier idea about true compile time constants, and so it means almost nothing today.
Yes, the C++ compile time execution could certainly be considered more powerful than Rust's and Zig's even more powerful than that. It is expected that Rust will some day ship compile time constant trait evaluations, which will mean you don't have to write awkward code that avoids e.g. iterators -- so with that change it's probably in the same ballpark as C++ 17 (maybe a little more powerful). However C++ 20 does compile-time dynamic allocation†, and I don't think that's on the horizon for Rust.
† In C++ 20 you must free these allocations inside the same compile-time expression, but that's still a lot of power compared to not being allowed to allocate. It is definitely possible that a future C++ language will find a way to sort of "grandfather in" these allocations so that somehow they can survive to runtime rather than needing to free them.
Rust does give you the option to break out the big guns by writing "procedural" aka "proc" macros which are essentially Rust that is run inside your compiler. Obviously these are arbitrarily powerful, but far too dangerous - there's a (serious) proc macro to run Python from inside your Rust program and (joke, in that you shouldn't use it even though it would work) proc macro which will try out different syntax until it finds which of several options results in a valid program...
Only looked at the first two examples. No language can save you when one writes bad code like that.
You can argue about how likely is code like that is, but both of these examples would result in a hard compiler error in Rust.
A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to prevent memory-safety bugs without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain dealing with borrow checker errors comes from its other job: preventing data races.
And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin, for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone WaitGroup example (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.
These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.
Rust concurrency also has issues, there are many complaints about async [0], and some Rust developers point to Go as having green threads. The original author of Rust originally wanted green threads as I understand it, but Rust evolved in a different direction.
As for Java, there are fibers/virtual threads now, but I know too little of them to comment on them. Go's green thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad, it has some good aspects to it.
[0]: An example is https://news.ycombinator.com/item?id=45898923 https://news.ycombinator.com/item?id=45903586 , both for the same article.
Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data-race safety. You can have data-race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race footguns than Go, but it's still generally unsafe).
[1] https://graydon2.dreamwidth.org/307291.html
Async and concurrency are orthogonal concepts.
While I agree, in practice they can actually run in parallel. Case in point: the Java Vert.x toolkit. It uses an event loop and futures, but it has also adopted virtual threads. So you still get your async concepts in the toolkit, but the VTs are your concurrency carriers.
But Rust's async is one of the primary ways to handle concurrency in Rust, right? Like, async is a core part of how Tokio handles concurrency.
Could you give an example to distinguish them? Async means not-synchronous, which I understand to mean that the next computation to start is not necessarily the next computation to finish. Concurrent means multiple different parts of the program may make progress before any one of them finishes. Are they not the same? (Of course, concurrency famously does not imply parallelism, one counterexample being a single-threaded async runtime.)
Async, for better or worse, in 2025 is generally used to refer to the async/await programming model in particular, or more generally to non-blocking interfaces that notify you when they're finished (often leading to the so-called "callback hell" which motivated the async/await model).
If you are waiting for a hardware interrupt to happen based on something external happening, then you might use async. The benefit is primarily to do with code structure - you write your code such that the next thing to happen only happens when the interrupt has triggered, without having to manually poll completion.
You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.
So async enables concurrent outstanding requests.
But even so, the JVM has well-defined data races that may cause logical problems, but can never cause memory issues.
That's not the case with Go, so data races there are significantly worse than in both Rust and Java/C#, etc.
What is your definition of memory issues?
Of course you can have memory corruption in Java. The easiest way is to spawn 2 threads that write to the same ByteBuffer without write locks.
And you would get garbled up bytes in application logic. But it has absolutely no way to mess up the runtime's state, so any future code can still execute correctly.
Meanwhile a memory issue in C/Rust and even Go will immediately drop every assumption out the window, the whole runtime is corrupted from that point on. If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
So there are objective distinctions to be had here, e.g. Rust guarantees that the source of such a corruption can only be an incorrect `unsafe` block, and Java flat out has no platform-native unsafe operations, even under data races. Go can segfault with data races on fat pointers.
Of course every language capable of FFI calls can corrupt its runtime, Java is no exception.
> Meanwhile a memory issue in C/Rust and even Go will immediately drop every assumption out the window, the whole runtime is corrupted from that point on.
In C, yes. In Rust, I have no real experience. In Go, as you pointed out, it should segfault, which is not great, but still better than in C, i.e., it fails early. So I don't understand what your next comment means. What is a "less lucky" example in Go?
> If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
Silent corruption of unrelated data structures in memory. Segfault only happens if you are accessing memory outside the program's valid address space. But it can just as easily happen that you corrupt something in the runtime, and the GC will run havoc, or cause a million other kind of very hard to debug errors.
Haskell, Erlang/Elixir, and Rust would save you from most of these problems.
Then, of course, there are the languages that are still so deeply single-threaded that you simply can't write concurrency bugs in them in the first place, or you have to go way out of your way to get to them, not because they're better than Go but because they don't even play the game.
However, it is true the list is short, and likely a lot of the people taking the opportunity to complain about Go are working in languages where everything they are so excited to complain about is still entirely possible (with varying affordances and details around the issues), or they are working in a language that, as mentioned, simply isn't playing the game at all, which doesn't really count as being any better.