The use-case for workers is usually heavy processing.
You probably don't want to do heavy processing in an online transactional service like a frontend web server, so you move that work to a worker...
node stays healthy only if you can keep the event loop unblocked. If you do really heavy processing, the event loop blocks and you can start seeing side effects like I/O starvation and such.
Usually doing heavy processing in JavaScript is not very performant compared to other languages... for example: garbage collection, every number is a floating-point number, everything is an object, etc. These abstractions are not zero-cost, and on top of that, CPU/memory profiling tools for node/JavaScript are not as good as the ones you can find for other languages.
Having said that, while the tutorial is fine... it really depends what you want to do with such workers. If it's CPU or memory intensive... I don't recommend node. If you absolutely have to, make sure CPU intensive tasks are distributed across ticks (break down intensive logic using process.nextTick) so the event loop doesn't block, and do some CPU/memory profiling.
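As a minimal sketch of "distributing work across ticks": the idea is to process a chunk, then yield back to the event loop before the next chunk. One caveat the comment glosses over: `process.nextTick` callbacks run before pending I/O and can still starve it in a tight loop, so this sketch yields with `setImmediate` instead. The function name and chunk size are made up for illustration.

```javascript
// Sketch: break a CPU-heavy computation into chunks so the event
// loop can service I/O between chunks. setImmediate is used to yield;
// a tight process.nextTick loop would still run ahead of pending I/O.
function sumInChunks(items, chunkSize, done) {
  let total = 0;
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      total += items[i]; // stand-in for the real heavy work
    }
    if (i < items.length) {
      setImmediate(step); // yield back to the event loop
    } else {
      done(total);
    }
  }
  step();
}
```

With a large array and a chunk size in the thousands, each tick stays short enough that timers and sockets keep getting serviced while the computation runs.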
Then, if not node, then what? My choice of tech for a problem like this would be Akka. http://akka.io . Why Akka? Because if you want to break down a computation into subparts without the disadvantages of dealing with threads and message passing at a low level, the actor system takes care of that for you.
The actor model is not far from the AMQP pub/sub model. I have talked about this on Stackoverflow [1]. The real advantage for Akka would be not having to go over the wire when forking (aka recursing on a problem).
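To make the actor idea concrete, here is a toy sketch in JavaScript (not Akka's API, just an illustration of the model): each actor owns a private mailbox and processes one message at a time, so there is no shared mutable state and no locking.

```javascript
// Toy actor: a mailbox plus a handler that drains it one message
// at a time. State lives inside the handler's closure, so nothing
// is shared between actors. Purely illustrative, not Akka.
class Actor {
  constructor(handler) {
    this.handler = handler;
    this.mailbox = [];
    this.busy = false;
  }
  send(msg) {
    this.mailbox.push(msg);
    if (!this.busy) this.drain();
  }
  drain() {
    this.busy = true;
    while (this.mailbox.length > 0) {
      this.handler(this.mailbox.shift());
    }
    this.busy = false;
  }
}
```

The point about forking locally: when an actor wants to recurse on a subproblem, it just `send`s to a child actor in the same process, with no serialization and no network hop, which is exactly the over-the-wire cost you avoid versus an AMQP round trip.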
That being said, AMQP has many advantages, particularly because there are so many clients, and you could in theory have a thin facade over any AMQP library that does routing locally for certain queues (we actually do this for our Java message bus). Message reliability is another big one.
If you use AMQP you can have a very heterogeneous environment. In fact, I even wrote some consumers recently in Rust to replace some Java ones, but they are not production ready (still waiting for heartbeat support).
Yeah, you could use Akka as a consumer, but here is the shocker: it actually isn't that good if your messages need to be reliable, because most Akka RabbitMQ consumers auto-ACK. RabbitMQ is actually a pull model when using prefetch + manual ack; Akka is not, and will drop messages.
The reactive streams model [2] (rxjava2 and future akka stuff) is not as flexible or as powerful as AMQP, since there is no per-message ACK, only an ACK for a group of messages. Yes, you could set the request/demand rate to 1, but that would hurt performance. Again, AMQP shines for message reliability.
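The prefetch + manual ack behavior described above can be sketched with a toy in-memory queue (not a real AMQP client; all names here are made up): a delivered message stays "unacked" until the consumer explicitly acks it, and if the consumer dies first, the broker requeues it instead of losing it.

```javascript
// Toy sketch of per-message ack with redelivery, RabbitMQ-style.
// A real broker does this across processes; this just shows the
// bookkeeping. Illustrative only, not an AMQP library.
class AckQueue {
  constructor() {
    this.ready = [];            // messages waiting for delivery
    this.unacked = new Map();   // delivered but not yet acked
    this.nextTag = 1;
  }
  publish(body) { this.ready.push(body); }
  // Consumer pulls one message; it stays unacked until ack(tag).
  get() {
    if (this.ready.length === 0) return null;
    const tag = this.nextTag++;
    const body = this.ready.shift();
    this.unacked.set(tag, body);
    return { tag, body };
  }
  ack(tag) { this.unacked.delete(tag); }
  // Simulate a consumer crash: requeue everything it never acked.
  requeueUnacked() {
    for (const body of this.unacked.values()) this.ready.push(body);
    this.unacked.clear();
  }
}
```

An auto-ACK consumer effectively calls `ack` at delivery time, so a crash between delivery and processing silently drops the message; with manual ack the crash path ends in redelivery instead.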
> Well, in Akka you can use journaling and persistence to prevent important messages from being lost in case of error. That is not a problem. [1]
No, it is more complicated than guaranteeing just one step. Lots of things do that, including Apache Flume and just about any log aggregator. Because RabbitMQ is a broker with exchanges and transactions, it guarantees multiple steps as well as properly load balancing.
For example, let's say you have a consumer that gets a critical message and that machine (node, VM, Docker, whatever) gets completely blown away along with its storage; then the message is lost.
With RabbitMQ the broker is the single point of failure. Of course this comes at a cost (complexity and partitioning problems). I believe Akka has a server model, but from what I have heard it is not very battle tested.
The other thing is that Akka, just like ZeroMQ and any other messaging platform, can basically emulate (or imitate) almost any other messaging style/platform, as well as chain them together (your point about Akka + Camel). The question is how robust and battle tested the solution is for the particular problem domain. Do you want smart pipes and dumb consumers (RabbitMQ) or dumb pipes and smart consumers (Akka, ZeroMQ, REST)?