6.1 Competing Consumers & Load Balancer
Continuous parallel request processing

This pattern is based on two patterns that share the same structure and purpose.
Competing Consumers by Hohpe et al. [4, p. 446, 76], later described by Microsoft [77]. It is commonly considered part of the functionality of message brokers [3, p. 135, 20, p. 92].
Load Balancer by Hohpe [78] and the Service Instance pattern by Rotem-Gal-Oz [13, p. 62], which are commonly implemented by reverse proxies [12, p. 178].
See also Message Broker.
6.1.1 Context
The only precondition for this pattern is a set of independent tasks and identical processors. Several requirements can motivate its adoption.
The need for increased throughput or decreased latency, even under high load
The need for resiliency – if one consumer fails, the task can be picked up by another
The task is composed of several subtasks that are largely independent of each other
6.1.2 Solution
Either use a Messaging Queue to distribute tasks to multiple identical consumers (see fig. 9), which “compete” for the tasks (whichever is faster takes the task), or use a [Gateway Router](#sec:gateway-routing) to distribute the tasks with a load balancing mechanism.
Since the processors are independent, their number can easily be scaled up or down, depending on the need.
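The competing part of the pattern can be sketched in a few lines. The following is a minimal illustration, not a production implementation: it assumes Python threads as the identical consumers and an in-process `queue.Queue` standing in for the message broker, so whichever thread reaches `get_nowait()` first takes the task.

```python
import queue
import threading

def worker(tasks: "queue.Queue[int]", results: list, lock: threading.Lock) -> None:
    # Each consumer competes for tasks: whichever thread dequeues first wins.
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return  # no more work: the consumer simply exits
        with lock:
            results.append(task * 2)  # placeholder for real processing
        tasks.task_done()

tasks: "queue.Queue[int]" = queue.Queue()
for i in range(10):
    tasks.put(i)

results: list = []
lock = threading.Lock()
# Scaling up or down is just starting more or fewer consumers.
consumers = [threading.Thread(target=worker, args=(tasks, results, lock))
             for _ in range(3)]
for c in consumers:
    c.start()
for c in consumers:
    c.join()
```

Note that `results` ends up in completion order, which is generally not the submission order; this is exactly the ordering issue discussed below.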
6.1.3 Potential issues
Since the processors can have different speeds and latencies, messages might be processed out of order. In two specific circumstances, incorrect ordering can be mitigated.
If processing the task does not have any side effects, attach a Correlation Identifier [4, p. 154, 63] such as a timestamp or a sequence number and use Scatter–Gather to ensure the order of messages in the output.
If the messages only need to be ordered within a specific category, use Partitioning and a single consumer per partition
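The first mitigation can be illustrated with a small sketch (a simplified aggregator, assuming the producer attaches a monotonically increasing sequence number as the Correlation Identifier): results arriving out of order are buffered in a heap and released strictly by sequence number.

```python
import heapq

def gather_in_order(messages: list[tuple[int, str]]) -> list[str]:
    # Scatter-Gather aggregator: buffer (sequence, payload) pairs and
    # release payloads strictly in sequence order.
    heap = list(messages)
    heapq.heapify(heap)
    next_seq, ordered = 0, []
    while heap and heap[0][0] == next_seq:
        _, payload = heapq.heappop(heap)
        ordered.append(payload)
        next_seq += 1
    return ordered

# Consumers finished in the order c, a, d, b; the output is still a, b, c, d.
print(gather_in_order([(2, "c"), (0, "a"), (3, "d"), (1, "b")]))
```

In a streaming setting the same loop runs each time a message arrives, releasing whatever contiguous prefix of the sequence is available.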
Scaling is not unlimited: if the consumers are faster than the router, the router itself becomes the bottleneck. Conversely, if the consumers cannot keep up, the backlog grows and the router may be overwhelmed and forced to start dropping messages.
If resiliency measures are implemented, the system might need to be further guarded against poison messages or duplicated processing.
See also § 4.2.3.
6.1.4 Example
Eshop uses a saga orchestrator to handle the distributed transaction it needs to implement orders. However, the system has trouble keeping up with the backlog of orders during large events, even when using threading or asynchronous processing. Since they are already using message queues to increase the reliability of the saga, they decide to employ competing consumers to process the orders in parallel.
6.1.5 Related patterns
If the producers need a response, see Asynchronous Request–Reply (§ 4.3)
To improve performance, consider using a Claim Check to offload the message queue if the messages are heavy (see § 8.4).
To improve reliability and resilience, this pattern can be extended with