This write-up is an in-depth guide on distributed caching. It covers all the frequently asked questions about it, such as: What is a distributed cache? What is the need for it? How is it architecturally different from traditional caching? What are the use cases of distributed caches? What are the popular distributed caches used in the industry?
So, without further ado, let's get on with it.
Before jumping right into distributed caching, it'll be a good idea to get a quick insight into caching. Why do we use caching in our web applications, after all?
1. What is Caching? & Why Do We Use It in Our Web Applications?
Caching means saving frequently accessed data in memory, that is, in RAM instead of on the hard drive. Accessing data from RAM is always faster than accessing it from the hard drive.
Caching serves the below-stated purposes in web applications.
First, it reduces application latency by notches. Simply due to the fact that all the frequently accessed data is stored in RAM, the application doesn't have to talk to the hard drive when the user requests the data. This makes application response times faster.
Second, it intercepts all the user data requests before they go to the database. This averts the database bottleneck issue. The database is hit with a comparatively smaller number of requests, eventually making the application as a whole performant.
Third, caching often comes in really handy in bringing application running costs down.
Caching can be leveraged at every layer of the web application architecture, be it the database, CDN, DNS, etc.
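To make the latency point concrete, here's a minimal sketch of an in-process cache sitting in front of a slow data source. The dictionary database & the timings are purely illustrative:

```python
import time

database = {"user:1": {"name": "Ada"}}   # stands in for a real database on disk
cache = {}                                # plain in-process RAM cache

def query_db(key):
    time.sleep(0.05)                      # simulate a ~50 ms disk/network round trip
    return database.get(key)

def get(key):
    if key in cache:                      # RAM lookup: no DB hit, far lower latency
        return cache[key]
    value = query_db(key)                 # slow path, only taken on a cache miss
    cache[key] = value
    return value

get("user:1")   # first call pays the database cost
get("user:1")   # subsequent calls are served straight from memory
```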
1.1 How Does Caching Help In Bringing the Application Compute Costs Down?
Serving data from memory is cheap when compared to hard drive persistence costs. Database read/write operations are expensive. You know what I am talking about if you've ever used a DBaaS (Database as a Service).
There are times when there are just too many writes in the application, for instance, in massively multiplayer online games.
Data that isn't of high priority can simply be written to the cache, & periodic batch operations can be run to persist the data to the database. This helps cut down the database write costs by a significant amount.
Well, this is what I did when I deployed the game I wrote on Google Cloud. I used the Google Cloud Datastore NoSQL service. The approach helped a lot with the costs.
Now that we are clear on what caching is, let's move on to distributed caching.
2. What Is Distributed Caching? What is the Need For It?
A distributed cache is a cache which has its data spread across several nodes in a cluster, & often across several clusters in several data centres located around the globe.
Being deployed on multiple nodes helps with horizontal scalability; instances can be added on the fly as per the demand.
Distributed caching is widely used in the industry today for its potential to scale on demand & stay highly available.
Scalability, high availability & fault tolerance are crucial to the large-scale services running online today. Businesses cannot afford to have their services go offline. Think about health services, stock markets, the military. They have no scope for going down. They are distributed across multiple nodes with a pretty solid amount of redundancy.
Distributed caches, & not just caches, distributed systems in general, are the preferred choice for cloud computing, solely due to their ability to scale & stay available.
Google Cloud uses Memcache for caching data on its public cloud platform. Redis is used by internet giants for caching, as a NoSQL datastore & for several other use cases.
3. How is Distributed Caching Different From the Traditional/Local Caching?
Distributed systems are designed to scale inherently; they are built in a way such that their power can be augmented on the fly, be it compute, storage or anything else. The same goes for the distributed cache.
The traditional cache is hosted on a few instances & has a hard limit to its capacity. It's hard to scale it on the fly, & it's not as available & fault-tolerant as a distributed cache design.
I've already brought up above how cloud computing uses distributed systems to scale & stay available. Well, the cloud is just a marketing term; technically, it's a massively distributed system of services such as compute, storage, etc. running under the covers.
Do read Why Use Cloud? How Is Cloud Computing Different from Traditional Computing? to get a good insight into the underlying architecture.
4. What Are the Use Cases Of Distributed Caches?
Distributed caches have several use cases stated below:
Database Caching
A cache layer in front of the database saves frequently accessed data in memory to cut down latency & unnecessary load on it. It averts the database bottleneck issue.
Storing User Sessions
User sessions are stored in the cache to avoid losing the user state in case any of the instances go down.
If any of the instances goes down, a new instance spins up, reads the user state from the cache & continues the session without having the user notice anything amiss.
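Here's a minimal sketch of session storage in a shared cache, assuming a Redis instance reachable at localhost:6379 via the redis-py client; the key naming & the 30-minute TTL are illustrative choices:

```python
import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 1800  # 30-minute expiry, an illustrative choice

def save_session(session_id, state):
    # Any application instance can write the user's state to the shared cache
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(state))

def load_session(session_id):
    # A freshly spun-up instance reads the state back & resumes the session
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42, "cart": ["book", "pen"]})
print(load_session("abc123"))  # works from any instance pointed at the same cache
```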
Cross-Module Communication & Shared Storage
An in-memory distributed cache is also used for message communication between the different microservices running in conjunction with each other.
It holds the shared data which is commonly accessed by all the services, acting as a backbone for microservice communication. In specific use cases, a distributed cache is also often used as a NoSQL datastore.
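As a sketch of the message-communication use case, here's what microservice messaging over Redis Pub/Sub could look like, assuming a local Redis instance; the channel name & payload are illustrative, & the publisher & subscriber would run in separate services:

```python
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# --- In the publishing service ---
def notify_order_created(order_id):
    # Fire-and-forget event onto a shared channel
    r.publish("order-events", f"order:{order_id}:created")

# --- In the subscribing service (runs as a separate process) ---
def listen_for_orders():
    p = r.pubsub()
    p.subscribe("order-events")
    for message in p.listen():            # blocks, yielding messages as they arrive
        if message["type"] == "message":  # skip subscription confirmations
            print("received:", message["data"])
```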
In-memory Data Stream Processing & Analytics
As opposed to the traditional way of storing data in batches & then running analytics on it, in-memory data stream processing involves processing the data & running analytics on it as it streams in, in real-time.
This is helpful in many situations, such as anomaly detection, fraud monitoring, real-time online gaming stats, real-time recommendations, payments processing, etc.
5. How Does A Distributed Cache Work? – Design & Architecture
Designing a distributed cache demands a separate thorough write-up. For now, I'll give you a brief overview of how a distributed cache works behind the scenes.
A distributed cache, under the covers, is a distributed hash table which has the responsibility of mapping keys to object values spread across multiple nodes.
The distributed hash table allows a distributed cache to scale on the fly; it manages the addition, deletion & failure of nodes continually, as long as the cache service is online. Distributed hash tables were originally used in peer-to-peer systems.
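Distributed hash tables commonly place keys on nodes via consistent hashing, so that adding or removing a node remaps only a small slice of the keys. Here's a minimal sketch; the node names & replica count are illustrative:

```python
import bisect
import hashlib

def _hash(key):
    # Stable hash so a key maps to the same point on the ring in every process
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps keys to cache nodes; node changes remap only a small slice of keys."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas   # virtual nodes per physical node, smooths the spread
        self._ring = []            # sorted hash points on the ring
        self._owners = {}          # hash point -> node name
        for node in nodes:
            self.add_node(node)

    def add_node(self, node):
        for i in range(self.replicas):
            point = _hash(f"{node}#{i}")
            bisect.insort(self._ring, point)
            self._owners[point] = node

    def node_for(self, key):
        # Walk clockwise to the first point at or after the key's hash, wrapping around
        idx = bisect.bisect(self._ring, _hash(key)) % len(self._ring)
        return self._owners[self._ring[idx]]

ring = ConsistentHashRing(["cache-1", "cache-2", "cache-3"])
print(ring.node_for("user:42"))  # the same key always lands on the same node
```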
Speaking of the design, caches evict data based on the LRU (Least Recently Used) policy. Ideally, a linked list is used to manage the data pointers. The pointer to frequently accessed data is continually moved to the top of the list/queue, a queue being implemented with a linked list.
The data which is not so frequently used is moved to the end of the list & is eventually evicted.
A linked list is an apt data structure when there are a lot of insertions, deletions & updates in the list. It provides better performance than an ArrayList for this access pattern.
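Here's a minimal sketch of LRU eviction. Python's OrderedDict is backed by a hash map plus a doubly linked list, which gives the O(1) move-to-front & evict-from-tail behaviour described above:

```python
from collections import OrderedDict

class LRUCache:
    """O(1) get/put with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # move to the 'top': most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict from the 'end': least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used entry
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
```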
Regarding concurrency, there are several concepts involved, such as eventual consistency, strong consistency, distributed locks & commit logs. That would be too much for this article; I'll cover everything in the write-up where I design a distributed cache from the bare bones.
Also, a distributed cache often works with distributed system coordinators such as ZooKeeper, which facilitates communication & helps maintain a consistent state amongst the several running cache nodes.
6. What Are the Different Types Of Distributed Caching Strategies?
There are different kinds of caching strategies which serve specific use cases. Those are cache aside, read-through cache, write-through cache & write-back cache.
Let's find out what they are & why we need different strategies when caching.
Cache Aside
This is the most common caching strategy. In this approach, the cache works along with the database, trying to reduce the hits on it as much as possible. The data is lazy-loaded into the cache.
When the user sends a request for particular data, the system first looks for it in the cache. If present, it's simply returned from it. If not, the data is fetched from the database, the cache is updated & the data is returned to the user.
This kind of strategy works best with read-heavy workloads: the kind of data which is not frequently updated, for instance, user profile data in a portal, such as name, account number, etc.
The data in this strategy is written directly to the database. This means things between the cache & the database could get inconsistent.
To avoid this, data in the cache has a TTL (Time to Live). After that stipulated period, the data is invalidated from the cache.
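A minimal sketch of cache aside with a TTL, assuming a local Redis instance; fetch_user_from_db is a hypothetical stand-in for the real database query:

```python
import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 300  # invalidate cached entries after 5 minutes, an illustrative choice

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for the real database query
    return {"id": user_id, "name": "Ada", "account_number": "0042"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                        # cache hit: no database round trip
        return json.loads(cached)
    user = fetch_user_from_db(user_id)            # cache miss: lazy-load from the DB
    r.setex(key, TTL_SECONDS, json.dumps(user))   # populate the cache with a TTL
    return user
```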
Read-Through
This strategy is pretty similar to the cache aside strategy, with a subtle difference: in the cache aside strategy, the application has to fetch the data from the database if it is not found in the cache, but in the read-through strategy, the cache always stays consistent with the database. The cache library takes the onus of maintaining consistency with the backend.
The information in this strategy too is lazy-loaded into the cache, only when the user requests it.
So, the first time information is requested, it results in a cache miss & the cache has to be updated from the database while the response is returned to the user.
Though, developers can pre-load the cache with the information which is expected to be requested most by the users.
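Here's an illustrative sketch of the read-through idea: the cache owns the loader, so the application never fetches from the database itself. The loader function here is a stand-in, not a real library API:

```python
class ReadThroughCache:
    """The cache owns the loader; callers never query the database directly."""

    def __init__(self, loader):
        self._loader = loader  # function the cache invokes on a miss
        self._store = {}

    def get(self, key):
        if key not in self._store:               # miss: the cache loads the data itself
            self._store[key] = self._loader(key)
        return self._store[key]

    def preload(self, keys):
        # Optional warm-up for the data expected to be requested the most
        for key in keys:
            self._store[key] = self._loader(key)

# The loader below stands in for the real database query
cache = ReadThroughCache(loader=lambda key: f"row-for-{key}")
print(cache.get("user:7"))  # first call loads through; later calls hit the cache
```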
Write-Through
In this strategy, every piece of information written to the database goes through the cache: before the data is written to the DB, the cache is updated with it.
This maintains high consistency between the cache & the database, though it adds a little latency during write operations, as the data has to be updated in the cache additionally. This works well for write-heavy workloads like massively multiplayer online games.
This strategy is used with other caching strategies to achieve optimized performance.
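A minimal write-through sketch; db_write is a hypothetical stand-in for the real persistence call:

```python
class WriteThroughCache:
    """Every write updates the cache first, then goes straight on to the database."""

    def __init__(self, db_writer):
        self._store = {}
        self._db_writer = db_writer   # function that persists to the database

    def put(self, key, value):
        self._store[key] = value      # cache & DB stay consistent, at the cost of
        self._db_writer(key, value)   # a little extra latency on every write

    def get(self, key):
        return self._store.get(key)

def db_write(key, value):
    # Hypothetical stand-in for the real persistence call
    print(f"persisted {key}={value}")

cache = WriteThroughCache(db_write)
cache.put("score:player1", 4200)  # updates the cache, then the database
```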
Write-Back
I’ve already talked about this approach in the introduction of this write-up. It helps optimize costs significantly.
In the write-back caching strategy, the data is written directly to the cache instead of the database, & the cache, after some delay as per the business logic, writes the data to the database.
If there is quite a heavy number of writes in the application, developers can reduce the frequency of database writes to cut down the load & the associated costs.
A risk in this approach is that if the cache fails before the DB is updated, the data might get lost. Again, this strategy is used with other caching strategies to make the most out of them.
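Here's a minimal write-back sketch with a background timer that flushes pending writes to the database in batches; the 30-second flush interval & the batch writer are illustrative assumptions:

```python
import threading

class WriteBackCache:
    """Writes land in the cache; a background timer flushes them to the DB in batches."""

    def __init__(self, db_batch_writer, flush_interval_seconds=30):
        self._store = {}
        self._dirty = {}                       # pending writes not yet persisted
        self._lock = threading.Lock()
        self._db_batch_writer = db_batch_writer
        self._interval = flush_interval_seconds
        self._schedule_flush()

    def put(self, key, value):
        with self._lock:
            self._store[key] = value
            self._dirty[key] = value           # at risk of loss until the next flush

    def _schedule_flush(self):
        timer = threading.Timer(self._interval, self._flush)
        timer.daemon = True                    # don't keep the process alive
        timer.start()

    def _flush(self):
        with self._lock:
            batch, self._dirty = self._dirty, {}
        if batch:
            self._db_batch_writer(batch)       # one batched DB write instead of many
        self._schedule_flush()
```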
7. What Are the Popular Distributed Caches Used in the Industry?
The popular distributed caches used in the industry are Ehcache, Memcached, Redis, Riak & Hazelcast.
Memcache is used by Google Cloud in its Platform as a Service. It is a high-performance distributed key-value store primarily used for alleviating database load.
It's like a large hash table distributed across several machines, enabling data access in O(1), i.e. constant time.
Redis is an open-source in-memory distributed system which, besides key-value pairs, supports other data structures too, such as distributed lists, queues, strings, sets & sorted sets. Besides caching, Redis is also often treated as a NoSQL data store.
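As a quick illustration of those data structures, here's what they look like through the redis-py client, assuming a local Redis instance; the key names & values are illustrative:

```python
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("greeting", "hello")                           # plain key-value, like Memcached
r.lpush("job-queue", "job-1", "job-2")               # a list used as a queue
r.sadd("online-users", "alice", "bob")               # a set
r.zadd("leaderboard", {"alice": 1200, "bob": 950})   # a sorted set

print(r.rpop("job-queue"))                                 # -> job-1 (FIFO via lpush/rpop)
print(r.zrevrange("leaderboard", 0, 1, withscores=True))   # top players by score
```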
Well, guys! This is pretty much it about the distributed cache. If you liked the write-up, share it with your folks.
I am Shivang, the author of this write-up. You can read more about me here.
8. More On the Blog
Facebook Real-time Chat Architecture Scaling With Over Multi-Billion Messages Daily
Twitter’s migration to Google Cloud – An Architectural Insight
How Does PayPal Process Billions of Messages Per Day with Reactive Streams?
Instagram Architecture – How Does It Store & Search Billions of Images