System Design Interview Patterns-III

SAKSHI CHHABRA
8 min read · May 6, 2022

In case you haven't checked out my previous blogs on part I and part II of System Design patterns, I would recommend checking them out here. Those blogs cover 6 and 4 different system design patterns, respectively, that are frequently used in system design interviews.

In the current blog, I am covering 6 more system design patterns that would be handy for any system design interview.

Write-ahead Log

Definition:

For a distributed system to stay resilient and keep its data intact, every data modification is first persisted to an append-only log on disk before it is applied. This is known as a write-ahead log (WAL). It guarantees durability: if the system crashes before finishing a task, it can read the log on restart and finish what it had started.

Context:

Distributed systems need a strong durability guarantee, even in the case of failures. Once a server agrees to execute a task, it should do so even if it loses all of its in-memory data due to a crash in the middle of execution. But how will the server know the last thing it was doing before the crash?

Solution:

The idea behind a write-ahead log is that every request is first appended to an append-only log, and only then is the modification applied. Each log entry contains enough information to undo or redo the modification. When the server restarts after a crash, it replays all the log entries to recover its previous state.

Each node in a distributed environment maintains its own log. A WAL is always appended sequentially, which simplifies log handling.
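To make this concrete, here is a minimal sketch in Python. The file name wal.log and the key-value state are illustrative assumptions (this is not how any particular database implements its WAL): every mutation is flushed to the append-only log before the in-memory state is touched, and the log is replayed on startup.

```python
import os
import json

class WriteAheadLog:
    """Minimal WAL sketch: persist the change to disk first,
    then apply it to the in-memory state."""

    def __init__(self, path="wal.log"):
        self.path = path
        self.state = {}      # in-memory key-value state
        self._replay()       # rebuild state from the log on startup

    def _replay(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                self.state[entry["key"]] = entry["value"]

    def put(self, key, value):
        entry = {"key": key, "value": value}
        # 1) persist the intent to the append-only log...
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()
            os.fsync(f.fileno())   # force the entry onto disk
        # 2) ...then apply the modification in memory
        self.state[key] = value

wal = WriteAheadLog()
wal.put("user:1", "alice")   # survives a crash: replayed on restart
```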

Example:

Cassandra: persists every write request to its commit log before writing the data to the database

Kafka: persists all the messages it receives

Segmented Log

Definition:

A segmented log breaks a single log down into smaller segments for easier management.

Context:

A single log file can become difficult to manage. As the file grows, it can become a performance bottleneck, especially when a node recovers from a failure. Older log entries need to be cleaned up regularly, or combined in some scenarios. Doing these operations on one large file is difficult to implement.

Solution:

A single log file is split into multiple parts, so that the log data is divided into roughly equal-sized segments. The system rolls over to a new segment based on a rolling policy; for example, the log can be split once a segment reaches a configurable size (e.g. 1 GB) or after a configurable period of time (e.g. 8 hours).
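A toy sketch of a size-based rolling policy in Python (the 1 MB limit, directory layout, and file naming scheme are illustrative assumptions): writes go to the current segment file, and a new segment is opened once the current one exceeds the threshold.

```python
import os

class SegmentedLog:
    """Toy segmented log that rolls to a new segment file once the
    current segment exceeds a configurable size."""

    def __init__(self, directory="log", max_segment_bytes=1024 * 1024):
        self.directory = directory
        self.max_segment_bytes = max_segment_bytes
        self.segment_index = 0
        os.makedirs(directory, exist_ok=True)
        self._open_segment()

    def _open_segment(self):
        name = os.path.join(self.directory, f"segment-{self.segment_index:05d}.log")
        self.segment = open(name, "a")

    def append(self, record: str):
        # roll over to a fresh segment when the current one is full
        if self.segment.tell() >= self.max_segment_bytes:
            self.segment.close()
            self.segment_index += 1
            self._open_segment()
        self.segment.write(record + "\n")
```

Old segments can then be deleted or archived independently, which is exactly the clean-up that is hard to do on one giant file.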

Example:

Cassandra: It uses the segmented log strategy to split its commit log based on a configurable size. As soon as it receives a write operation, it immediately writes it to the commit log. When the commit log reaches its size threshold, a new commit log segment is created. Commit log segments can be deleted, archived, or recycled once all of their data has been flushed to tables.

Kafka: Log segmentation is implemented for all of its partitions.

Bloom Filters

Definition:

A Bloom filter is a space-efficient, probabilistic data structure that helps to quickly identify whether an element might be present in a set.

Context:

Suppose we have a large set of all registered usernames stored in a data file. What is the most efficient way to check whether a given username is present in the file? We don't want to read the entire file, as that would be very slow.

The best approach we know is binary search (assuming the file is sorted). Can we do better?

Solution:

A Bloom filter tells us that an element may be present in the set, or that it is certainly not present. The only errors possible are false positives, i.e. the Bloom filter responds with "present" even though the element doesn't exist. The probability of a false positive increases with the number of elements in the set.

Initially, when no elements are present in the set, the Bloom filter is a bit array of length m with all bits set to zero. k different hash functions are used, each of which maps an element to one of the m positions in the bit array.

Add an element: To add an element to the set, hash the element with the k hash functions to get k positions, and set the bits at those positions to 1.

Check if an element is present: To check whether an element is present, hash it with the same k hash functions to get k positions. If the bit at any of these positions is 0, the element is definitely not in the set. If all the bits are 1, the element may be present in the set.
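Here is a small Python sketch of the add/check logic described above. The sizes m = 1024 and k = 3, and the use of salted SHA-256 as the k hash functions, are arbitrary illustrative choices.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m = m              # number of bits
        self.k = k              # number of hash functions
        self.bits = [0] * m

    def _positions(self, item: str):
        # derive k positions by hashing the item with k different salts
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item: str) -> bool:
        # any zero bit -> definitely absent; all ones -> maybe present
        return all(self.bits[pos] for pos in self._positions(item))

usernames = BloomFilter()
usernames.add("sakshi")
print(usernames.might_contain("sakshi"))   # True
print(usernames.might_contain("bob"))      # False (or, rarely, a false positive)
```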

Example:

Cassandra

Consistent Hashing

Definition:

Consistent hashing is used to distribute data across nodes and ensures that only a small set of data needs to be moved when servers are added or removed.

Context:

Data partitioning is the technique of distributing data across a set of nodes. But this creates two challenges:

  1. How do we know on which node a particular piece of partitioned data should be stored?
  2. When we add or remove nodes, how do we know what data should be moved from existing nodes to the new node? How can we minimize data movement when a node is added or removed?

A natural strategy would be to use an appropriate hash function that maps a data key to a server. This solves our first problem, but when servers are added or removed we would have to re-map all the keys and move data based on the new mapping. That's a lot of work. Any better approach?

Solution:


Consistent hashing is a technique where the nodes form a ring and each node is assigned a small range of the hash space. Each node is assigned a token value, which is the start of its range. Hence, a node's range is [token value of current node, token value of next node - 1].

When the system receives a read or write request, it applies a hash function to the data; the function accepts input of any length and returns a fixed-length output. This output determines which range the data falls in, and hence which node the data will be stored on. This solves our first problem.

For the second problem, consistent hashing works great when a node is added to or removed from the ring, as only the neighboring node is affected. For example, if a node is added, some of the burden of the next node is transferred to the new node. In the same way, when a node is removed, all of its burden is shifted to the next node.

But this can result in an uneven distribution of load, which can be solved efficiently by using virtual nodes.

Virtual nodes: Instead of assigning a single token to each node, the hash range is divided into many smaller ranges, and each node is assigned several of these smaller ranges. Each of these smaller ranges is considered a virtual node (v-node). Hence, instead of a node being responsible for just one token, it is responsible for many tokens. V-nodes are distributed across the physical nodes such that no two neighboring v-nodes are assigned to the same node.

This helps when a node is removed: instead of all of its load landing on one node, its v-nodes are spread evenly across many nodes in the system. Similarly, when a node is added, it receives v-nodes from many existing nodes, keeping the cluster balanced.

Another advantage of v-nodes is that each node can be given a number of v-nodes proportional to its capacity to handle load: powerful servers get more v-nodes than less powerful ones.
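A sketch of a consistent hash ring with virtual nodes in Python (the number of v-nodes per server, the use of MD5 as the hash function, and the node names are illustrative assumptions): each server contributes many tokens to the ring, and a key is owned by the first token clockwise from its hash.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hash ring with virtual nodes (v-nodes)."""

    def __init__(self, replicas=100):
        self.replicas = replicas     # v-nodes per physical node
        self.ring = []               # sorted list of tokens
        self.token_to_node = {}

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str):
        for i in range(self.replicas):
            token = self._hash(f"{node}#{i}")   # one token per v-node
            bisect.insort(self.ring, token)
            self.token_to_node[token] = node

    def remove_node(self, node: str):
        for i in range(self.replicas):
            token = self._hash(f"{node}#{i}")
            self.ring.remove(token)
            del self.token_to_node[token]

    def get_node(self, key: str) -> str:
        # walk clockwise: the first token >= hash(key) owns the key
        token = self._hash(key)
        idx = bisect.bisect(self.ring, token) % len(self.ring)
        return self.token_to_node[self.ring[idx]]

ring = ConsistentHashRing()
for n in ["node-a", "node-b", "node-c"]:
    ring.add_node(n)
print(ring.get_node("user:42"))   # each key maps to exactly one node
```

Adding or removing a node only touches the keys that fall in that node's v-node ranges; all other keys stay where they are.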

(Figures: token distribution on the ring without v-nodes vs. with v-nodes)

Example:

Dynamo, Cassandra

Lease

Definition:

Use a time-bound lease to grant clients access rights to a resource. A lease acts like a lock on the resource, but only for a limited time.

Context:

In a distributed system, many clients need specific access to certain resources. For example, a client might need exclusive rights to a file. One approach to achieve this is distributed locking: the first client gets an exclusive lock to update the file, and the second client gets the lock only after the first client releases it. The problem with distributed locking is that the lock is granted to a client until that client explicitly releases it. If the client fails to release the lock due to an unavoidable situation (e.g. a deadlock or a software crash), the file stays locked indefinitely. This leaves the resource unusable until the system is reset. Any alternative solution?

Solution:

The client asks for a lease that is valid only for a limited time, after which it expires. To keep its access, the client has to renew the lease before it expires.
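A toy lease manager sketch in Python (the 10-second TTL, resource names, and client names are illustrative assumptions): a lease that is not renewed simply times out, so a crashed client can never hold the resource forever.

```python
import time

class LeaseManager:
    """Grants time-bound leases; an expired lease frees the resource."""

    def __init__(self, ttl_seconds=10):
        self.ttl = ttl_seconds
        self.leases = {}   # resource -> (client, expiry timestamp)

    def acquire(self, resource: str, client: str) -> bool:
        holder = self.leases.get(resource)
        now = time.time()
        if holder is None or holder[1] < now:          # free or expired
            self.leases[resource] = (client, now + self.ttl)
            return True
        return holder[0] == client                     # already held by this client

    def renew(self, resource: str, client: str) -> bool:
        holder = self.leases.get(resource)
        if holder and holder[0] == client and holder[1] >= time.time():
            self.leases[resource] = (client, time.time() + self.ttl)
            return True
        return False   # lease already expired; client must acquire again

mgr = LeaseManager()
mgr.acquire("config-file", "client-1")   # client-1 must renew within 10 seconds
```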

Example:

Chubby: clients maintain a lease with the leader, and the leader guarantees not to end the lease unilaterally.

Request Pipeline

Definition:

Improve the latency of a distributed system by sending multiple requests over a connection without waiting for the response to the previous request.

Context:

Communication between servers over a single TCP connection can cause performance issues when each request has to wait for the response to the previous one. If only one request is in flight at a time, most of the server's capacity is wasted. Is there a better approach to achieve higher throughput and lower latency?

Solution:

Nodes send requests to other nodes without waiting for the response to the previous request. This can be done by creating two separate threads: one for sending requests and one for receiving responses over the network channel. The sender node sends requests over the socket channel without waiting for a response. Once a request has been processed, it is removed from the queue to make space for more requests.

In order not to overwhelm the receiving node with too many requests, an upper limit is kept on how many outstanding requests can be queued. Once the maximum number of in-flight requests is reached, no more requests are sent and the sender blocks until responses arrive.
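A minimal Python sketch of this sender/receiver split (the functions send_over_socket, recv_over_socket, and handle are hypothetical placeholders for the actual network I/O and response handling): a bounded queue tracks in-flight requests, so the sender keeps pipelining until the limit is hit and then blocks until responses free up slots.

```python
import queue

MAX_IN_FLIGHT = 8   # illustrative upper bound on outstanding requests

in_flight = queue.Queue(maxsize=MAX_IN_FLIGHT)

def send_requests(requests, send_over_socket):
    """Runs on the sender thread: pushes requests without waiting for
    responses; blocks only when MAX_IN_FLIGHT requests are outstanding."""
    for req in requests:
        in_flight.put(req)          # blocks if the pipeline is full
        send_over_socket(req)

def receive_responses(recv_over_socket, handle):
    """Runs on the receiver thread: as each response arrives, free a
    slot so the sender can pipeline another request."""
    while True:
        resp = recv_over_socket()
        if resp is None:            # connection closed
            break
        handle(resp)
        in_flight.get()             # one outstanding request completed
        in_flight.task_done()
```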

Example:

Kafka encourages clients to use request pipelining for improved throughput

Thank you for checking my blog!

In the upcoming blog, I will be covering 5 more system design patterns. So, follow along if you want to get notified about my upcoming blogs!


SAKSHI CHHABRA

Master's student in Computer Science at the University of Florida. I love to write and help others, so here I am.