# Consistent Hashing

To understand consistent hashing, we first have to understand traditional hashing and its limitations in large-scale distributed systems.

Hashing, in plain terms, is nothing but a key-value store, where, given a key, the associated value can be found very efficiently. Example: let's say we want to find the name of a street in a city given its zip code. The idea is to store this information in a hash table as <Zip code, Street Name> pairs.
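As a minimal illustration, a Python dictionary is exactly such a store (the zip codes and street names below are made-up sample values):

```python
# A hash table lookup: zip code -> street name (sample data, not real mappings).
streets = {"94103": "Market Street", "10001": "8th Avenue"}

print(streets["94103"])  # -> Market Street
```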

The problem becomes more interesting when the data is too big to store on one node or machine, so multiple nodes or machines are required to store it. For example, consider a system that uses a number of web caches. The first question: how do we decide which key goes on which node? The simplest solution is to use the modulo function. Given a key, take the hash of the key, take it modulo the number of nodes in the system, and put the key on that node. Similarly, when fetching a key, hash it, take the modulo, and go to that node to fetch the value. The picture depicts conventional hashing in a multi-node system.
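A minimal sketch of this modulo scheme, which also previews the problem discussed next: when the node count changes from 4 to 3, most keys end up mapped to a different node (the key names and MD5 as the hash are just illustrative choices):

```python
import hashlib

def node_for_key(key, num_nodes):
    # Hash the key, then take the hash modulo the number of nodes.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_nodes

keys = [f"key-{i}" for i in range(1000)]

# Placement with 4 nodes, then with 3 after one node dies.
before = {k: node_for_key(k, 4) for k in keys}
after = {k: node_for_key(k, 3) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys changed nodes")
```

A key stays put only when its hash gives the same remainder modulo 4 and modulo 3, which happens for roughly a quarter of keys; the other three quarters move.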

Failures are common in commodity-hardware-driven distributed multi-node systems. Any node can die without prior notice, and the expectation is that the system keeps running, with only a slight cost in performance. What happens in the system described above when a node goes down? With traditional hashing, the total number of nodes has decreased, so the function that determines which node a key goes to or is fetched from changes. New keys will work fine. What happens to existing keys? According to the new function, they are all on the wrong nodes.

To fix this problem, we have to redistribute all existing keys across the remaining nodes, which can be a very costly operation and can have detrimental effects on a running system. And what happens when the node comes back? Well, we repeat what was done when it went down. This can result in a thrashing effect: if a node misbehaves regularly, the system does no useful work except redistributing keys.

How do we solve this challenge? This is where consistent hashing comes into the picture.

## Consistent hashing in distributed multi-node systems

Consistent hashing is built on a very simple idea: place nodes and keys in the same id space, unlike traditional hashing, where node ids and keys live in two different id spaces. A node's id can be a hash of its IP address, and the same hash function is then applied to keys to determine which node a key goes to or is fetched from.

A critical requirement for a consistent hashing implementation is a hash function that is consistent irrespective of each node's view of the system and that maps keys roughly uniformly across all machines.

Choose any base hash function that maps the key space to integers in the range [0..M]. Dividing by M gives us a unit circle. Each key, once hashed, then represents a point on this unit circle.

How does a key map to a node exactly? Well, the key is hashed, and then the key is placed on the first node found while moving clockwise around the circle. Simple enough, huh? To find a key, take its hash and go to the first node encountered while moving clockwise on the unit circle.
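The clockwise rule above can be sketched in a few lines. This is a simplification under assumed choices (MD5 as the base hash, a 32-bit id space, and made-up IP addresses as node ids):

```python
import hashlib

M = 2**32  # size of the id space

def h(s):
    # Base hash mapping any string into [0, M); nodes and keys share this space.
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % M

def lookup(nodes, key):
    # Place nodes on the circle by hashing their ids (e.g. IP addresses),
    # then walk clockwise: the first node at or past the key's position owns it.
    positions = sorted((h(n), n) for n in nodes)
    kh = h(key)
    for pos, name in positions:
        if pos >= kh:
            return name
    # Wrapped past the top of the circle: the first node owns the key.
    return positions[0][1]

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(lookup(nodes, "user:42"))
```

The linear scan here is just for clarity; a more efficient successor lookup is discussed below.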

How does this solve the problem of scale? Let's say my system is receiving 5x the load; what happens to the nodes, and how can I balance or reduce the load? Simply add more nodes, uniformly distributed on the unit circle, and the problem is solved. Consistent hashing is built for scale.

What happens when a node goes down? All the keys that were on that node are reallocated to the next successor node on the circle; all other keys remain unchanged. This is far better than the case where we have to redistribute all keys on the failure of one node.
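This locality property can be checked directly: removing one node moves only the keys that lived on it. A small sketch, reusing the same clockwise-lookup convention (node names and key names are made up):

```python
import hashlib

M = 2**32

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % M

def owner(nodes, key):
    # Clockwise rule: first node at or past the key's position on the circle.
    positions = sorted((h(n), n) for n in nodes)
    kh = h(key)
    for pos, name in positions:
        if pos >= kh:
            return name
    return positions[0][1]

nodes = ["n1", "n2", "n3", "n4"]
keys = [f"key-{i}" for i in range(1000)]

before = {k: owner(nodes, k) for k in keys}
dead = "n3"
after = {k: owner([n for n in nodes if n != dead], k) for k in keys}

moved = [k for k in keys if before[k] != after[k]]
# Only keys that lived on the dead node should have moved.
assert all(before[k] == dead for k in moved)
print(f"{len(moved)} keys moved, all from {dead}")
```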

As mentioned earlier, we assume that the hash function distributes keys across nodes uniformly, which is not realistic. To reduce this non-uniformity, virtual nodes are introduced: each node is hashed with K different hash functions (or K salted copies of one function), mapping it to K different points on the circle. Each node still gets roughly 1/N of the keys on average, but virtual nodes reduce the variance of key load significantly.
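One common way to implement virtual nodes is to salt a single hash function with a replica index, rather than using K separate hash functions. A sketch under that assumption (the node names and K=100 are illustrative):

```python
import hashlib

M = 2**32

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % M

def build_ring(nodes, k):
    # Each physical node appears k times on the circle, once per virtual node,
    # by salting its id with the replica index.
    ring = []
    for node in nodes:
        for i in range(k):
            ring.append((h(f"{node}#{i}"), node))
    ring.sort()
    return ring

def owner(ring, key):
    kh = h(key)
    for pos, node in ring:
        if pos >= kh:
            return node
    return ring[0][1]

# Compare the load spread with 1 vs. 100 virtual nodes per physical node.
nodes = ["a", "b", "c", "d"]
keys = [f"key-{i}" for i in range(10000)]
for k in (1, 100):
    ring = build_ring(nodes, k)
    counts = {n: 0 for n in nodes}
    for key in keys:
        counts[owner(ring, key)] += 1
    print(k, counts)
```

With k=1 the per-node counts are typically very uneven, since four random points rarely split the circle into equal arcs; with k=100 the counts are usually much closer to 2500 each.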

One challenge still remains: how do we efficiently find the successor node for a given key? We want the node s with the smallest h(s) such that h(s) > h(k) for key k. The intuitive choice is a hash table, but hash tables do not maintain any ordering information. A binary search tree maintains ordering, but the successor lookup is proportional to the depth of the tree, which is O(N) in the worst case. We can improve on that by using balanced binary search trees, such as red-black trees, which reduce the complexity to O(log N).
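In practice, since the set of nodes changes rarely compared to how often keys are looked up, a sorted array of node positions plus binary search achieves the same O(log N) successor lookup as a balanced tree. A sketch using Python's standard `bisect` module (node ids are made-up IPs):

```python
import bisect
import hashlib

M = 2**32

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % M

class ConsistentHashRing:
    def __init__(self, nodes):
        # Sorted array of (position, node); sortedness enables binary search.
        self.ring = sorted((h(n), n) for n in nodes)
        self.positions = [pos for pos, _ in self.ring]

    def successor(self, key):
        # bisect_right finds the first position strictly greater than h(key)
        # in O(log N); the modulo wraps around the top of the circle.
        i = bisect.bisect_right(self.positions, h(key))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(ring.successor("user:42"))
```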

### Where is consistent hashing used?

Consistent hashing is used in Memcached, Cassandra, Amazon Dynamo, and many other systems.

If you find this article useful, please share it. If something is missing or wrong in the article, please let us know and we will correct it.
