But remember that this is in contrast to C threads, where 64 kB would be a tiny stack and 1-4 MB is more common.
Yeah not really. C threads normally have a fixed-size contiguous virtual memory space allocated to them, but the space they don't use is not actually consumed. You can even call madvise() to release memory on returning from a deep stack, so you only actually "use" 4 kB of memory afterwards. The only significant resource consumed is virtual address space, which is effectively unlimited on 64-bit systems (why design a new language around 32-bit systems?). For that matter, "C threads" could use segmented stacks as well if they wanted to.
But "goroutines" also have to be concerned with the total stack space they may use. For example, suppose those 100,000 goroutines were network servers and a hostile client could manipulate the connection to block the routine while it was consuming 400 KiB of stack. Now an attacker (or random confluence of events) can cause your program that normally averages 1% memory to use 100x as much as normal and run out of memory.
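The usual way to bound that worst case in Go (a sketch, not anything from this thread) is to cap how many handlers can be active at once with a buffered channel used as a counting semaphore, so total stack usage tops out at roughly limit × per-goroutine stack:

```go
package main

import (
	"fmt"
	"sync"
)

// runBounded launches n workers but allows at most limit of them to be
// active at once, bounding worst-case total stack usage to roughly
// limit * per-worker stack. It returns the observed peak concurrency.
func runBounded(n, limit int) int {
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	var mu sync.Mutex
	peak, active := 0, 0

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire; blocks while limit workers are busy
			mu.Lock()
			active++
			if active > peak {
				peak = active
			}
			mu.Unlock()

			// ... real handler work (and any deep stack growth) goes here ...

			mu.Lock()
			active--
			mu.Unlock()
			<-sem // release
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println("peak concurrent handlers:", runBounded(1000, 8))
}
```

A hostile client can still stall its own handler, but now it can only pin down `limit` stacks at a time instead of all 100,000.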
Space is almost never a reason to use coroutines over threads.
>For example, suppose those 100,000 goroutines were network servers and a hostile client could manipulate the connection to block the routine while it was consuming 400 KiB of stack.
hmm so you're saying a request causes 400 KB to be allocated, and an attacker hits your server with 100k requests designed to stall it? The same criticism could be leveled against... any system where a requestor can allocate memory and stall a handler that is independent of the incoming request loop. How would this be different in a different concurrency model?
>Space is almost never a reason to use coroutines over threads.
I wouldn't necessarily say so. I have a polling service that uses a few thousand standing goroutines. The memory overhead for each individual goroutine is so low that I can just spawn them casually. Each goroutine only actually does work for a few seconds every few minutes; the vast majority of the time, each goroutine is spent sleeping. For this type of problem, I find that organizing things with goroutines and channels makes my code very easy to reason about. That, to me, is the biggest win of Go's concurrency model; that it's very easy to reason about.
> I find that organizing things with goroutines and channels makes my code very easy to reason about. That, to me, is the biggest win of Go's concurrency model; that it's very easy to reason about.
Absolutely agree. I just wish the net.Dial calls would let me specify a timeout and wouldn't hang around for 3 minutes when something is down.
>How would this be different in a different concurrency model?
That was actually the point. You have the same space considerations either way. Being a coroutine instead of a native thread is just a small constant factor... if that factor is important, then you are almost certainly doing it wrong.