In a cluster, messages that share a group id can arrive at different nodes, yet they must all end up pinned to the same consumer. To solve this there is the notion of a grouping handler. Each node has its own grouping handler, and when a message is sent with a group id assigned, the handlers decide between them which route the message should take.

There are two types of handler: Local and Remote. Each cluster should choose one node to have a Local grouping handler and all the other nodes should have Remote handlers. It is the Local handler that actually makes the decision as to which route should be used; all the Remote handlers converse with it.

Here is a sample configuration for both types of handler. This should be configured in the hornetq-configuration.xml file:

   <grouping-handler name="my-grouping-handler">
      <type>LOCAL</type>
      <address>jms</address>
      <timeout>5000</timeout>
   </grouping-handler>

   <grouping-handler name="my-grouping-handler">
      <type>REMOTE</type>
      <address>jms</address>
      <timeout>5000</timeout>
   </grouping-handler>

The address element refers to a cluster connection and the address it uses; refer to the clustering section on how to configure clusters. The timeout element refers to how long to wait for a decision to be made. An exception will be thrown during the send if this timeout is reached, which ensures that strict ordering is kept.

The decision as to where a message should be routed is initially proposed by the node that receives the message. The node picks a suitable route as per the normal clustered routing conditions, i.e. round robin of available queues, use a local queue first, and choose a queue that has a consumer. If the proposal is accepted by the grouping handlers, the node will route messages to this queue from that point on; if it is rejected, an alternative route will be offered and the node will route to that queue indefinitely instead. All other nodes will also route to the queue chosen at proposal time. Once the message arrives at the queue, normal single-server message group semantics take over and the message is pinned to a consumer on that queue.

You may have noticed that there is a single point of failure with the single Local handler. If this node crashes then no decisions can be made: any messages sent will not be delivered and an exception will be thrown. To avoid this happening, Local handlers can be replicated on another backup node. Simply create your backup node and configure it with the same Local handler.
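On the client side, a message joins a group simply by carrying a group id, which with JMS is done by setting the JMSXGroupID string property before sending. The following is a minimal sketch of such a producer; the JNDI names ("/ConnectionFactory", "/queue/OrderQueue") and the group id value "Group-0" are placeholder assumptions for illustration, and only the JMSXGroupID property is significant here.

   // Minimal sketch of a JMS producer sending messages that belong to one group.
   // JNDI names and the group id are placeholders, not part of the product configuration.
   import javax.jms.Connection;
   import javax.jms.ConnectionFactory;
   import javax.jms.MessageProducer;
   import javax.jms.Queue;
   import javax.jms.Session;
   import javax.jms.TextMessage;
   import javax.naming.InitialContext;

   public class GroupedSender
   {
      public static void main(String[] args) throws Exception
      {
         InitialContext ic = new InitialContext();
         ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
         Queue queue = (Queue) ic.lookup("/queue/OrderQueue");

         Connection connection = cf.createConnection();
         try
         {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            for (int i = 0; i < 10; i++)
            {
               TextMessage message = session.createTextMessage("order " + i);
               // Messages carrying the same JMSXGroupID form one group, so the
               // grouping handlers will route them all to the queue chosen at
               // proposal time, and that queue pins them to a single consumer.
               message.setStringProperty("JMSXGroupID", "Group-0");
               producer.send(message);
            }
         }
         finally
         {
            connection.close();
         }
      }
   }

Because every message in the loop carries the same group id, it does not matter which cluster node each send lands on; the handlers agree on one route and ordering within the group is preserved.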