
> A RabbitMQ stream models an append-only log with non-destructive consuming semantics

> Streams in RabbitMQ are persisted and replicated. This translates to data safety and availability (in case of the loss of a node)

Let's talk about the elephant in the room: do streams replicate the same way queues do? If you have ever lost a node while holding a few GBs of data in queues, you will know that bringing the node back syncs all of that data over the network while completely blocking ANY operation on the queue until the process finishes. And please don't recommend quorum queues; they lack important features as well.



This was (is) the case for classic queue mirroring. Quorum queues use a Raft implementation and can synchronise just the delta. As of now, the two major features missing from quorum queues are message TTL and priorities; the former will come soon. It is true that QQs have different runtime characteristics, but they are much more stable in a clustered environment.
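For anyone who hasn't tried them: opting into a quorum queue is purely a declare-time decision, via the `x-queue-type` argument. A minimal sketch (the queue name is made up, and the pika call is commented out because it needs a live broker):

```python
# A quorum queue is selected when the queue is first declared, by
# passing "x-queue-type": "quorum". Quorum queues must also be durable.
# The queue name below is illustrative.

quorum_args = {"x-queue-type": "quorum"}

# With a client such as pika (requires a running broker):
# channel.queue_declare(queue="orders",
#                       durable=True,          # required for quorum queues
#                       arguments=quorum_args)

print(quorum_args)
```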


No, they do not replicate the way classic mirrored queues do. They are much closer to quorum queues in that they only (asynchronously) replicate the delta after a disconnection; after all, both streams and quorum queues use log replication. Both are also quorum systems in terms of availability.

W.r.t. the quorum queue feature set, we are working on message TTLs. Priorities we'll have to see: we want to provide something there, but it may not be a priority queue as provided by classic queues, since that isn't the best way to do priority-based messaging.
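For comparison, a stream is selected the same way, as just another `x-queue-type`, and because the log is non-destructive each consumer picks where to start reading via the `x-stream-offset` consumer argument. Sketch (names are made up; broker calls are commented out since they need a live node):

```python
# A stream, like a quorum queue, is chosen at declare time via
# "x-queue-type". Retention is configured on the queue (e.g. a byte
# cap), since consuming does not delete messages.

stream_args = {
    "x-queue-type": "stream",
    "x-max-length-bytes": 20_000_000_000,  # retention: cap the log at ~20 GB
}

# With a client such as pika (requires a running broker):
# channel.queue_declare(queue="events", durable=True, arguments=stream_args)
# channel.basic_qos(prefetch_count=100)   # streams require a prefetch limit
# channel.basic_consume(queue="events",
#                       on_message_callback=handle,
#                       arguments={"x-stream-offset": "first"})  # or "last",
#                                                                # "next", or a
#                                                                # numeric offset

print(stream_args["x-queue-type"])
```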


Isn't manual sync mode the solution to that blocking for classic queues (assuming mirroring)?
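It helps with the timing, not the blocking itself: with `ha-sync-mode: manual` a rejoining mirror does not sync automatically, and you trigger the sync yourself during a quiet window. A sketch with rabbitmqctl (policy and queue names are made up; the explicit sync still blocks the queue while it runs, you just choose when):

    # Policy: mirror matching queues, but don't auto-sync new mirrors
    rabbitmqctl set_policy ha-manual "^important\." \
      '{"ha-mode":"all","ha-sync-mode":"manual"}'

    # Later, during a quiet period, sync explicitly
    rabbitmqctl sync_queue important.orders

The trade-off is that until you sync, the new mirror holds only messages published after it joined, so it is not a safe promotion target if the master is lost in the meantime.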



