> a side effecting, async, infallible function that doesn't return anything (so it doesn't return potentially an Err) is IMHO quite a weird thing in this world. You're much better off spawning a task in such case. And spawned tasks are executed eagerly, they don't have to be awaited to run.
I don't know. "All async functions return a future by default" just doesn't seem like a useful enough feature to warrant littering the code with async/await keywords. It seems to me that Futures are themselves a sync concept, used to fuse the sync world with the async one. Future methods (get, await, whatever) are all blocking.
If you discard "true" sync functions (everything that blocks an OS thread), then there's no need to fuse the sync world with the async one, and everything async can behave like sync code, just like we're used to.
> So an occasional single await here and there is a syntactic burden
It's not the frequency of awaits that's the problem; it's the existence of await itself. The language pushes the burden of differentiating between sync and async methods onto the programmer. Of course, you can get more performance that way, but you can also get more performance by writing assembly.
> but having to create explicit channels to communicate the results back, instead of using function return values, is suddenly not a syntactic burden?
You create channels only when you want to communicate with another (concurrent) goroutine. That seems fair to me: wanna communicate? Create a channel. Otherwise, each goroutine keeps to itself.
> Not mentioning a performance impact of it.
Channels are not heavyweight queue data structures. They are very lightweight and fast; they just behave like queues.
> Which works only at runtime and detects only some races
> and also beware of the fact that your coroutines execute in parallel (not just concurrently) and the risk of data races is higher.
You can pass pointers around through channels, implicitly creating the "ownership" pattern that Rust is so famous for. Of course, there's no compiler support for it, so Rust wins this round. But in my experience with Go, as long as you're not trying to be too clever, data races are quite rare. Channels, thanks to the `select` keyword, are really good for many communication patterns, so most of the tricky patterns that lead to data races don't even have to appear.
> Actually OS threads are surprisingly fast and lightweight these days and this claim could be challenged.
I think this is completely false. Every newly discovered CPU vulnerability slows down context switching further. The default thread stack size on Windows, Linux, and macOS is 1MB [0], 8MB [1], and 8MB [2], respectively (in contrast, the default goroutine stack size is 2KB). Add the overhead of preemptive thread scheduling, and you can see why threads don't scale. Past a certain number of threads (depending on your hardware), your system will either run out of memory or grind to a halt, wasting all its time context switching. Meanwhile, on my laptop, Go has no problem running hundreds of thousands of goroutines concurrently.
Here's the code if you wanna try it out for yourself:

```go
package main

import "fmt"

// doNothing blocks on a receive forever, keeping both the goroutine
// and its channel alive.
func doNothing(ch chan int) { <-ch }

func main() {
	i := 0
	for {
		ch := make(chan int)
		go doNothing(ch)
		fmt.Println(i)
		i++
	}
}
```
It took ~4 million goroutine+channel pairs to fill up my 16GB of RAM. How many OS threads + queues would it take to fill up 16GB?
The optimal way to do concurrent programming is to create about as many OS threads as there are CPU cores and cooperatively multiplex coroutines onto them. That's exactly what the Go runtime does.
> But anyway, Rust has also green threads. Just use tokio::spawn + channels and those are your lightweight threads just as in Go.
Do those use the async/await keywords too, or are they somehow made sync-like?
I think you focus on only one aspect of thread performance: the case where you need millions of threads. Yes, I agree that if you need millions of concurrent things running, you quite likely don't want millions of OS threads. But many apps don't need such a high degree of concurrency, and OS threads perform very well up to a few thousand.
And in cases where you really want to run millions of concurrent things, and where performance and scalability are the top concern, Rust gives you a plethora of other tools.
> Do those use the async/await keywords too, or are they somehow made sync-like?
Sure they do, but now you seem to be dismissing a useful feature just because its syntax differs slightly from your favorite language's. It is essentially the same feature.
I'm sorry if it sounded like I'm dismissing it. Async/await is very useful for massive IO in many programming languages. I've personally worked with a lot of async Python, which can be painful to write. Rust's static typing does make things much better.
However, the reason I like Go's approach so much is that it abstracts away all the mundane scheduling details and lets us focus on what we want. If we want 1000 threads of execution running concurrently and doing something complex, we can just `go` 1000 functions and pass the corresponding channels to them; the optimal M:N coroutine-to-thread mapping has already been invented and implemented inside the language.
Rust will probably always win when it comes to extreme optimizations. Golang's strength is in simplicity and straightforwardness in writing everyday concurrent code.
[0] https://docs.microsoft.com/en-us/windows/win32/procthread/th...
[1] https://unix.stackexchange.com/questions/127602/default-stac...
[2] https://developer.apple.com/library/archive/documentation/Co...