Distributed Application Architecture Patterns

3.1 Prerequisites

ACID transactions
A common acronym for the four properties of database transactions: Atomicity, Consistency, Isolation, and Durability [35, 36].
BASE
An alternative acronym to ACID for eventually consistent systems: Basically Available, Soft state, and Eventual consistency [2, p. 132, 37].
CAP theorem
See § 3.1.2.
CRUD
A common acronym for the four basic operations of persistent storage: Create, Read, Update, and Delete [38].
Event-driven architecture
An umbrella term for systems whose components communicate by emitting and reacting to events. Fowler divides it into Event Notification, Event-Carried State Transfer, Event Sourcing, and CQRS to make it easier to reason about [39].
Fallacies of distributed computing
See § 3.1.1.
Idempotency
A property of operations that can be applied multiple times without changing the result beyond the initial application [4, p. 469, 15, p. 197].
Message broker, message-oriented middleware
See § 4.2.7.
Poison message
A message whose delivery or processing has failed too many times [40]. Such a message can be moved to a dead-letter queue for further inspection [4, p. 123].
Strict/eventual consistency
Two common consistency models in distributed systems: a strictly consistent system is a CP system, an eventually consistent one an AP system (see § 3.1.2) [15, p. 127].

3.1.1 Fallacies of distributed computing

Building distributed systems presents several challenges that standard monolithic applications do not face. Because these challenges are often underestimated, they were summarised as the following fallacies of distributed computing, first formulated by Peter Deutsch and colleagues at Sun Microsystems in 1994 [2, p. 124, 41].

  1. The network is reliable

  2. Latency is zero

  3. Bandwidth is infinite

  4. The network is secure

  5. Topology doesn’t change

  6. There is one administrator

  7. Transport cost is zero

  8. The network is homogeneous

Richards et al. [2, p. 131] present additional considerations for building distributed systems.

3.1.2 CAP theorem

The CAP theorem was formulated by Brewer [42] in 1999 and later proven by Gilbert and Lynch [43] in 2002. It states that a distributed system cannot guarantee all three of the following properties simultaneously:

  1. Consistency – every node returns the same, most recent data for a given read

  2. Availability – every request receives a (non-error) response

  3. Partition tolerance – the system continues to operate despite network partitions between nodes

Thus, when building a distributed system, only two of these properties can be guaranteed at once. Since network partitions cannot be ruled out in practice, the choice is effectively between consistency and availability: AP systems (sacrificing consistency, i.e. eventually consistent systems) are easier to build and scale, while CP systems (sacrificing availability) are much harder. Fortunately, these trade-offs need not be made at the system level; they can be applied with much more nuance, for example at the component or operation level. [3, pp. 410–412]
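The operation-level trade-off can be illustrated with a quorum-style read, a common technique not taken from the cited sources. In this hypothetical sketch, replicas hold versioned values and `None` simulates a node cut off by a partition; a CP-leaning read demands a majority and may fail, while an AP-leaning read answers from any reachable replica, possibly with stale data:

```python
# Hypothetical sketch: choosing consistency vs. availability per operation.
from typing import Optional

Replica = Optional[tuple[int, str]]  # (version, value); None == partitioned away

def strong_read(replicas: list[Replica]) -> str:
    """CP-leaning read: require a majority of replicas, return the newest value.
    Sacrifices availability: raises when a partition hides too many nodes."""
    reachable = [r for r in replicas if r is not None]
    if len(reachable) <= len(replicas) // 2:
        raise RuntimeError("no quorum")
    return max(reachable)[1]  # highest version wins

def available_read(replicas: list[Replica]) -> str:
    """AP-leaning read: answer from any reachable replica.
    Sacrifices consistency: the value returned may be stale."""
    for r in replicas:
        if r is not None:
            return r[1]
    raise RuntimeError("all replicas unreachable")
```

A single system can expose both reads and let each caller decide, per operation, which side of the trade-off it needs.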