There's one key difference between Go and all the other examples: an uncaught panic in any goroutine, not just the main goroutine, crashes the app.
Often your main goroutine will just be starting other goroutines, in a multiple-services-one-process architecture, and it doesn't care whether they succeed or not, and its code need not be aware of whether they're critical. If they are, they should panic and crash the program. Just like a main function that crashes.
Doing things with nurseries is restrictive, which has its benefits, but I don't agree that it's strictly always the preferred approach. I prefer an approach like Go's, where you can choose whether to use nurseries (errgroups), and the implementation for them is provided by the language authors. Sure, you are able to start and manage goroutines in this way, even if you don't care about their lifetime - as they are effectively individual programs - but you shouldn't HAVE to.
So when you care about goroutines running only while their starter goroutine is at a certain scope, you can do that. But you don't have to, and when you don't need it, you don't pay the price. Go appears to take an approach of providing primitives that are as unrestrictive as possible to keep code easy to read, but then also takes a batteries-included approach with its libraries.
Yes, the "nurseries" approach introduces restrictions compared to the "go" construct in the same way as a "for" loop with break/continue is more restrictive compared to "go to." The title of the article strongly hints at it too.
Neither model is more powerful than the other, they can both emulate the other (errgroups to bolt some structure onto unstructured concurrency, global nurseries to leak tasks from the structure).
The difference is which option they encourage, and which option requires an explicit choice.
It seems like the article is, at its most basic, just arguing in favour of using synchronization primitives. Which, yeah, of course. Most of the time, you want to be doing that; I don't think that's controversial. The author likes scoped primitives best, and sure, fair enough.
But it takes a couple strange turns when it suggests that (a) you should ALWAYS use sync primitives, and (b) waitgroups/nurseries are the only sync primitive worth using.
If I'm only spawning off one parallel thread, a simple join statement is all I need. If I'm doing an async task, a promise is plenty.
If there's a true fork in my execution, maybe I don't want to use a sync primitive at all. Go cleans up stray goroutines on process termination, so if my program's logic doesn't demand that my goroutines join back together, why should they? Let them terminate themselves, or let them live forever.
There are more concurrency models in heaven and earth than are dreamt of in this guy's philosophy. It's not that I think waitgroups are a bad primitive or anything, I just think it's a bit much to take a useful primitive and go "this should be the only tool that is ever used to manage concurrency."
Yes, golang.org/x/ is a "nursery" of sorts for things that are stdlib-ish but not fully baked enough that the maintainers want to support them under the 1.0 compatibility promise [0]
Sure you can install a handler to get another outcome, just like you can catch exceptions/panics/... in additional threads in languages that have these constructs. But this was about "uncaught panic", i.e. default behavior without handlers.
(Also, SEGVs are well behaved enough to implement userspace virtual memory on top of, if that's your fetish... not sure what the spec says, but it's not UB in actual practice. Trying to resume from an assert() is indeed UB though since that'd be returning from a noreturn function.)
It is POSIX UB, as the standard gives no guarantees what happens when the signal returns in such cases, so each OS is free to do whatever they feel like.
You don't have to return from the signal handler. You could also longjmp. longjmp happens to be async-signal-safe, and the standard is careful to permit longjmp out of a signal handler, so long as you're careful to avoid implicating other non-safe behaviors. SIGSEGV is a synchronous signal (that is, always delivered immediately to the same thread that triggered it), and presumably the point at which SIGSEGV is triggered was carefully orchestrated by the application. I've done this once--it permitted me to remove a boundary check on an array, improving runtime performance many fold in a machine generated, statically compiled NFA. On SIGSEGV I would longjmp back to a safe entry point, grow the array, and then restart the NFA.
It was developed on macOS and deployed on Linux/glibc. I've also longjmp'd out of signal handlers to emulate sigtimedwait on OpenBSD and NetBSD. I probably (can't remember) also tested the sigtimedwait implementation on FreeBSD, Linux, and Solaris (and maybe AIX), my typical porting targets.
If not caught in a recover(), a panic that reaches the top of the goroutine's stack always causes a crash. This hasn't changed; as far as I know it has been this way since at least 1.0.
It looks like maybe the goroutine in that example doesn't actually get a chance to run before the program stops. If the main function lasts longer, then you'll see the effect of the panic. See https://goplay.tools/snippet/8SFlFkZ2P0y
> Next, any deferred functions run by F's caller are run, and so on up to any deferred by the top-level function in the executing goroutine. At that point, the program is terminated and the error condition is reported, including the value of the argument to panic.
It looks like, assuming it makes it to the top of the current goroutine, then it should be killing the whole program.
Your use of the waitgroup is off: because the Add call is inside the goroutine, it may not run until after wg.Wait is called, which will then return immediately.
Still, that made it less flaky for a bit, but then it printed again! And I realized what else was going on: there's actually time between when the deferred functions are executed and when the program exits! Which is fascinating, but unlikely to matter in practice. It brings me to another question: do deferred statements still get called when there's a panic on a different goroutine? And the answer seems to be no; adding a deferred print to the main function does not print.
You're right about the Add(1) and defer. defer in itself is pretty interesting in how it's called. Basically there's the main body of your function and then a list of things to do before return, of which defers get added to. So, your theory is correct.
I think intuitively I probably knew that a panic in a goroutine will shut down the main thread; it just didn't occur to me logically when I read it, which is an interesting paradox. Maybe that's a me-thing though.