Published Jul 27, 2022
There are two challenges when we try to distribute data:
How do we know on which node a particular piece of data will be stored?
When we add or remove nodes, how do we minimize the amount of data that has to move between the existing nodes?
Consistent Hashing addresses both: it maps data to physical nodes and ensures that only a small set of keys moves when servers are added or removed.
Consistent Hashing stores the data managed by a distributed system in a ring. Each node in the ring is assigned a range of data.
With consistent hashing, the ring is divided into smaller, predefined ranges, and each node is assigned one of these ranges. The start of a range is called a token, which means that each node is assigned one token. The range assigned to a node starts at its token and ends just before the next token on the ring.
Whenever the system needs to read or write data, the first step it performs is to apply the MD5 hashing algorithm to the key. The output of this hash determines within which range the data lies and hence on which node the data will be stored.
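To make this concrete, here is a minimal sketch in Python of the ring described above. The class and method names (ConsistentHashRing, get_node, etc.) are illustrative, not taken from any particular system: each node's token marks the start of its range, and a key is assigned to the node whose range contains the key's MD5 hash.

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Minimal hash ring sketch: each node owns the range starting at its token."""

    def __init__(self, nodes=None):
        self._tokens = []          # sorted token positions on the ring
        self._token_to_node = {}   # token -> node name
        for node in nodes or []:
            self.add_node(node)

    @staticmethod
    def _hash(value: str) -> int:
        # MD5 digest interpreted as an integer position on the ring.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        token = self._hash(node)
        bisect.insort(self._tokens, token)
        self._token_to_node[token] = node

    def remove_node(self, node: str) -> None:
        token = self._hash(node)
        self._tokens.remove(token)
        del self._token_to_node[token]

    def get_node(self, key: str) -> str:
        # A node's range runs from its token up to (but not including) the next
        # token, so the owner is the largest token <= hash(key); index -1 wraps
        # around to the last token on the ring.
        key_hash = self._hash(key)
        index = bisect.bisect_right(self._tokens, key_hash) - 1
        return self._token_to_node[self._tokens[index]]


ring = ConsistentHashRing(["node-A", "node-B", "node-C"])
print(ring.get_node("user:42"))
ring.remove_node("node-B")   # only node-B's keys move, to its clockwise neighbor
print(ring.get_node("user:42"))
```

Note how removing a node only hands that node's range to its neighbor; every other key-to-node assignment stays the same, which is the property described in the next paragraph.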
The Consistent Hashing scheme described above works well when a node is added to or removed from the ring, because in these cases only the neighboring node is affected. For example, when a node is removed, the next node becomes responsible for all of the keys stored on the outgoing node. However, this scheme can result in non-uniform data and load distribution. This problem can be solved with the help of Virtual nodes (Vnodes).
Here are a few potential issues associated with a manual and fixed division of the ranges:
Adding or removing nodes: Adding or removing nodes forces the ranges to be recomputed, which can move a significant amount of data and creates administrative overhead for a large cluster.
Hotspots: Since each node is assigned one large range, if the data is not evenly distributed, some nodes can become hotspots.
Node rebuilding: Since each node’s data might be replicated (for fault tolerance) on a fixed number of other nodes, when we need to rebuild a node, only its replica nodes can provide the data. This puts a lot of pressure on the replica nodes and can lead to service degradation.
Instead of assigning a single token to a node, the hash range is divided into multiple smaller ranges, and each physical node is assigned several of these smaller ranges. Each of these subranges is considered a Vnode. With Vnodes, instead of a node being responsible for just one token, it is responsible for many tokens (i.e., many subranges).
Practically, Vnodes are randomly distributed across the cluster and are generally non-contiguous, so that no two neighboring Vnodes are assigned to the same physical node or rack. Additionally, nodes carry replicas of other nodes' data for fault tolerance. Also, since there can be heterogeneous machines in the cluster, some servers might hold more Vnodes than others. The figure below shows how physical nodes A, B, C, D, & E share the Vnodes of the Consistent Hash ring. Each physical node is assigned a set of Vnodes, and each Vnode is replicated once.
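One simple way to sketch Vnodes, building on the ring above, is to give each physical node several tokens by hashing derived names such as "node-A#0", "node-A#1", and so on. The class name VNodeRing and the vnodes_per_node parameter below are illustrative assumptions, not the scheme of any specific system.

```python
import bisect
import hashlib


class VNodeRing:
    """Hash ring sketch where each physical node owns several Vnodes."""

    def __init__(self, vnodes_per_node: int = 8):
        self.vnodes_per_node = vnodes_per_node
        self._tokens = []          # sorted token positions (one per Vnode)
        self._token_to_node = {}   # token -> physical node

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        # One token per Vnode; hashing "node-A#0", "node-A#1", ... scatters
        # the node's Vnodes pseudo-randomly around the ring.
        for i in range(self.vnodes_per_node):
            token = self._hash(f"{node}#{i}")
            bisect.insort(self._tokens, token)
            self._token_to_node[token] = node

    def get_node(self, key: str) -> str:
        key_hash = self._hash(key)
        index = bisect.bisect_right(self._tokens, key_hash) - 1
        return self._token_to_node[self._tokens[index]]
```

Because each node's tokens land at pseudo-random positions, load spreads more evenly across the cluster, and a more powerful server can simply be given a larger vnodes_per_node value.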
To ensure high availability and durability, Consistent Hashing replicates each data item on N nodes in the system, where the value N is the replication factor.
The replication factor is the number of nodes that will receive a copy of the same data. For example, a replication factor of two means there are two copies of each data item, and each copy is stored on a different node.
Each key is assigned to a coordinator node (generally the first node that falls in the hash range), which first stores the data locally and then replicates it to the N - 1 clockwise successor nodes on the ring. This results in each node owning the region of the ring between itself and its Nth predecessor. In an eventually consistent system, this replication is done asynchronously (in the background).
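Continuing the VNodeRing sketch above, the coordinator for a key is the node owning the key's range, and the replicas are the next N - 1 distinct physical nodes walking clockwise. The preference_list helper below is a hypothetical name; real systems also skip nodes in the same rack or data center, which is omitted here.

```python
import bisect


def preference_list(ring, key, n):
    """Return the coordinator plus N - 1 distinct clockwise successor nodes
    for a key (builds on the VNodeRing sketch above; rack awareness omitted)."""
    key_hash = ring._hash(key)
    start = bisect.bisect_right(ring._tokens, key_hash) - 1
    nodes = []
    for step in range(len(ring._tokens)):
        token = ring._tokens[(start + step) % len(ring._tokens)]
        node = ring._token_to_node[token]
        if node not in nodes:        # skip Vnodes owned by already-chosen nodes
            nodes.append(node)
        if len(nodes) == n:
            break
    return nodes


# Example: with a replication factor of 2, the first node in the list is the
# coordinator and the second is its clockwise replica.
# preference_list(ring, "user:42", n=2)
```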
In eventually consistent systems, copies of the data don’t always have to be identical, as long as they are designed to eventually become consistent. In distributed systems, eventual consistency is used to achieve high availability.
Amazon’s Dynamo and Apache Cassandra use Consistent Hashing to distribute and replicate data across nodes.