This is my second write-up on Facebook architecture. In the first, I covered the databases, the persistence technologies used at Facebook.

In this write-up, I will talk about the real-time chat architecture of Facebook, which scales to handle billions of messages sent every single day.

What technologies does it use on the backend? What is the technology stack? What is the system architecture? What are the primary challenges a platform like Facebook, with such a massive number of users, has to face when rolling out & scaling a feature like this?

Let’s find out.

For a full list of all the real-world software architecture posts on the blog, here you go.


1. Introduction

It all started at a hackathon where a few Facebook engineers wrote a chat prototype & presented it before their team. The prototype was pretty basic, with bare-minimum features: the chatbox floated over the web page & could be dragged around, and it persisted through page reloads and navigation to other pages.

Engineers at Facebook took the prototype & evolved it into a full-blown real-time chat feature, one of the most heavily used features in the Facebook services ecosystem.

It facilitates billions of messages sent every single day, all across the world. The engineering team of the social platform has scaled it pretty well, with response times of less than 100 ms.

The feature is continually improved with the sole aim of providing a top-notch communication service to the users.


2. Why Write & Maintain the Chat Feature from Scratch? Why not Integrate a third-party Chat Service?

Besides offering the generic chat features, the module is integrated with the Facebook social graph. Users can easily pull up the list of their friends & other relevant information, such as the games they are playing.

All the information which is available to a user on the platform, in general, is also accessible on the chat module.

It’s easier, cleaner & more secure, & it provides more control, to write things from scratch as opposed to making them work with third-party code.

If you found the content helpful, check out the Zero to Software Architect learning track, a series of three courses I have written intending to educate you, step by step, on the domain of software architecture and distributed system design. The learning track takes you right from having no knowledge in it to making you a pro in designing large-scale distributed systems like YouTube, Netflix, Hotstar, and more.


3. Real-Time Chat Architecture & Technology Stack

The entire system consists of several loosely coupled modules working in conjunction with each other, such as the web tier, the user interface, the chat logger, the user presence module & the channel cluster.

[Figure: Facebook Messenger chat architecture]

User Interface

The user interface is naturally written in JavaScript with some PHP used for server-side rendering.

Long-lived persistent connections are established between the client and the server with the help of AJAX (a Comet-style, long-polling approach).

Flash was dismissed for two reasons. First, it would require users to install a browser plugin, which makes for a poor user experience. Second, Flash is not a preferred choice from a security standpoint.

The message fetch flow is a mix of PULL- & PUSH-based HTTP models.

Initially, the client sends a PULL request to get the first snapshot of the messages, at the same time subscribing to delta updates, which is a PUSH-based approach.

Once the user subscribes to the updates, the Facebook backend starts pushing the updates to the client whenever new updates are available.
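
Here is a minimal Python sketch of that pull-then-push flow, emulating the push leg with long polling. The endpoints, parameters & response shape are all made up for illustration; they are not Facebook's actual API.

```python
import requests  # pip install requests

BASE = "https://chat.example.com"  # hypothetical endpoints, purely illustrative

def fetch_snapshot(user_id):
    # PULL: one request to grab the initial snapshot of messages.
    resp = requests.get(f"{BASE}/chat/snapshot", params={"user": user_id})
    resp.raise_for_status()
    return resp.json()  # e.g. {"messages": [...], "last_seq": 41}

def subscribe_to_deltas(user_id, last_seq):
    # PUSH (emulated via long polling): the server holds the request open
    # and responds only when updates newer than last_seq are available.
    while True:
        try:
            resp = requests.get(
                f"{BASE}/chat/updates",
                params={"user": user_id, "since": last_seq},
                timeout=60,  # the long-poll window
            )
        except requests.exceptions.Timeout:
            continue  # no updates this window; re-issue the poll
        delta = resp.json()
        for message in delta["messages"]:
            print("new message:", message)
        last_seq = delta["last_seq"]

snapshot = fetch_snapshot("alice")
subscribe_to_deltas("alice", snapshot["last_seq"])
```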

Backend Web Tier

The web tier is powered by PHP. It deals with the vanilla web requests & takes care of user authentication, friends’ privacy settings, chat history, updates made by friends & the business logic of other platform features.

User Presence Module

This module provides online availability information of the connections/friends of a user. It’s written in C++ & is the most heavily pinged module of the system.

The module aggregates the online info of users in memory & sends it to the client when requested.
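
To make the idea concrete, here is a toy in-memory presence aggregator in Python. It is only a sketch of the concept; the real module is written in C++ & is far more involved, and the TTL-based heartbeat scheme below is my assumption, not a documented detail.

```python
import time

class PresenceAggregator:
    """Toy in-memory presence store; a stand-in for Facebook's C++ module."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.last_seen = {}  # user_id -> timestamp of last heartbeat

    def heartbeat(self, user_id):
        # Called whenever a client pings / a channel server reports activity.
        self.last_seen[user_id] = time.time()

    def is_online(self, user_id):
        ts = self.last_seen.get(user_id)
        return ts is not None and (time.time() - ts) < self.ttl

    def online_friends(self, friend_ids):
        # One lookup per friend, all in memory -- cheap to serve at high volume.
        return [f for f in friend_ids if self.is_online(f)]

presence = PresenceAggregator()
presence.heartbeat("bob")
print(presence.online_friends(["bob", "carol"]))  # -> ['bob']
```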

Channel Servers

Channel servers take care of message queuing and delivery. The functionality is written in Erlang.

Erlang is a concurrent, functional programming language used for writing real-time, scalable & highly available systems like instant messaging, fintech apps, online telephony, etc.

The run-time system for Erlang has built-in support for concurrency, distribution & fault-tolerance.

The channel servers leverage the MochiWeb library, an Erlang library for building lightweight HTTP servers. The messages sent by users are queued on the channel servers, & each message is assigned a sequence number to keep communication between any two or more users in sync.
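
Here is a tiny Python sketch of a per-conversation queue that stamps each message with a sequence number. It is a conceptual stand-in, not the actual Erlang implementation:

```python
import itertools
from collections import deque

class ChannelQueue:
    """Toy per-conversation queue, sketching what a channel server keeps."""

    def __init__(self):
        self._seq = itertools.count(1)  # monotonically increasing sequence
        self._messages = deque()

    def enqueue(self, sender, body):
        # Every message gets the next sequence number on arrival.
        msg = {"seq": next(self._seq), "from": sender, "body": body}
        self._messages.append(msg)
        return msg

    def messages_since(self, last_seq):
        # A client that saw seq=last_seq asks for everything newer.
        return [m for m in self._messages if m["seq"] > last_seq]

q = ChannelQueue()
q.enqueue("alice", "hey!")
q.enqueue("bob", "hi!")
print(q.messages_since(1))  # -> [{'seq': 2, 'from': 'bob', 'body': 'hi!'}]
```

Clients that remember the last sequence number they saw can fetch everything newer & detect gaps, which is what keeps conversations ordered.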

Chat Logger

Logging of chat metadata & other information is done via the chat logging module. It’s written in C++ & logs information between UI page loads.

To educate yourself on software architecture from the right resources, to master the art of designing large-scale distributed systems that scale to millions of users, and to understand what tech companies are really looking for in a candidate during their system design interviews, read my blog post on mastering system design for your interviews or your web startup.


4. Service Scalability & Deployment

User presence & chat logging data is replicated across all of Facebook’s data centres, while the channel servers’ data is stored in just one dedicated data centre to ensure strong consistency of messages.

All the backend modules are loosely coupled, as you can see in the diagram above. They communicate with each other via Thrift.

Thrift is a communication protocol that facilitates interaction between services running on heterogeneous technologies.

It’s a serialization & RPC framework for service communication, developed in-house at Facebook. It helps systems running on C++, Erlang, PHP & JavaScript work together as a team.
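
For a flavor of what calling a Thrift service looks like, here is a hedged Python sketch using Apache Thrift’s client runtime. PresenceService & getOnlineFriends are hypothetical; in practice the stubs are generated from a .thrift IDL file, & Facebook’s actual service definitions aren’t public.

```python
# pip install thrift -- Apache Thrift's Python runtime.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
# from presence import PresenceService  # hypothetical generated stub module

# Open a buffered socket transport with the binary wire protocol.
transport = TTransport.TBufferedTransport(TSocket.TSocket("presence-host", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
# client = PresenceService.Client(protocol)

transport.open()
# online = client.getOnlineFriends("alice")  # the cross-language RPC call
transport.close()
```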

The Most Resource Intensive Operation

The most resource-intensive operation in the entire system is not sending billions of messages across, but keeping users informed about their connections’/friends’ online status.

This is important, as a person typically begins a conversation only when they see one of their connections online.

To achieve this, one option was to send a notification to users whenever one of their connections came online. But this process wasn’t scalable by any means, considering the number of users the platform has.

This operation has a worst-case complexity of O(average number of friends per user × number of users at peak traffic × frequency of users going offline & reconnecting) messages/second.

During peak hours, the number of concurrent users on the site runs into several million. Keeping all the user presence information up to date this way was technically just not feasible.
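
To get a feel for why, here is a back-of-the-envelope calculation in Python with made-up but plausible numbers:

```python
# Illustrative numbers only -- not Facebook's actual figures.
avg_friends        = 500          # average friend-list size
peak_users         = 7_000_000    # concurrent users at peak ("several million")
presence_flaps_sec = 0.01         # per-user rate of going offline/reconnecting

naive_updates_per_sec = avg_friends * peak_users * presence_flaps_sec
print(f"{naive_updates_per_sec:,.0f} presence messages/second")
# -> 35,000,000 presence messages/second -- clearly untenable
```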

Besides, users who weren’t even chatting put a lot of load on the servers just by asynchronously polling the backend for their connections’ online status.

Real-time systems are hard to scale. Scaling their backends needs some pretty solid engineering skills.

Also, read how LinkedIn identifies its users online, an insight into its real-time messaging architecture.

To scale the user presence backend, the cluster of channel servers keeps a record of the users available to chat, which it sends to the presence servers via regular batch updates.

The upside of this process is that with one single query, the entire list of a user’s connections who are available to chat can be fetched.

Considering the crazy amount of information exchanged between the modules, the channel servers compress all the information before streaming it to the presence servers.
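
A minimal Python sketch of that batch-&-compress step, assuming a JSON payload & zlib purely for illustration (the actual formats aren’t public):

```python
import json
import zlib

def build_presence_batch(online_user_ids):
    # Channel servers accumulate who is available to chat, then ship the
    # whole batch to the presence servers in one shot instead of sending
    # one notification per status change.
    payload = json.dumps({"online": sorted(online_user_ids)}).encode()
    return zlib.compress(payload)  # shrink the batch before streaming it

def read_presence_batch(blob):
    return json.loads(zlib.decompress(blob))

batch = build_presence_batch({"alice", "bob", "carol"})
print(len(batch), "compressed bytes")
print(read_presence_batch(batch))  # -> {'online': ['alice', 'bob', 'carol']}
```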

The number of load balancers was also increased to manage the sheer number of user connections; this had been one bottleneck that caused chat service outages on & off at peak times for a while. After the change, the infrastructure’s ability to handle concurrent user connections went up significantly.


5. Synchronization Of Messages & Storage

To keep communication in sync, as I stated earlier, every message has a sequence number.

Besides this, Facebook created the Messenger Sync Protocol, which cut non-media data usage by 40%. This reduced congestion on their network & got rid of the errors it caused.

The engineering team at Facebook wrote a service called Iris, which enables the message updates to be organized in an ordered queue.

The queue has pointers into it that help track which message updates have been read by the user & which are still pending.
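
Here is a toy Python version of an ordered queue with separate delivery & read pointers. The structure is inferred from the description above, not from Iris’s actual internals:

```python
class IrisQueue:
    """Toy version of an Iris-style ordered queue with tracking pointers."""

    def __init__(self):
        self.updates = []        # message updates, in arrival order
        self.delivered_ptr = 0   # everything before this index was delivered
        self.read_ptr = 0        # everything before this index was read

    def append(self, update):
        self.updates.append(update)

    def deliver_pending(self):
        # Hand out whatever hasn't been pushed to the client yet.
        pending = self.updates[self.delivered_ptr:]
        self.delivered_ptr = len(self.updates)
        return pending

    def mark_read(self, upto_index):
        # The client acknowledges having read everything up to this point.
        self.read_ptr = max(self.read_ptr, upto_index)

q = IrisQueue()
q.append("msg-1"); q.append("msg-2")
print(q.deliver_pending())  # -> ['msg-1', 'msg-2'] (delivered, still unread)
q.mark_read(1)              # the user has read msg-1
```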

[Figure: Facebook Messenger chat Iris architecture]

Recent messages are sent from Iris’s memory, while older conversations are fetched from traditional storage. Iris is built on top of MySQL & flash memory.
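
A minimal sketch of that tiered read path, assuming a simple two-tier layout with dict-based stand-ins for Iris’s memory & the MySQL-backed store:

```python
# Hypothetical in-memory tier and persistent tier, for illustration only.
iris_memory = {"conv-1": ["m8", "m9", "m10"]}                 # hot, recent messages
disk_store  = {"conv-1": ["m1", "m2", "m3", "m4", "m5", "m6", "m7"]}

def fetch_conversation(conv_id, limit):
    recent = iris_memory.get(conv_id, [])[-limit:]
    if len(recent) >= limit:
        return recent                                # served entirely from memory
    older_needed = limit - len(recent)
    older = disk_store.get(conv_id, [])[-older_needed:]  # MySQL-backed fallback
    return older + recent

print(fetch_conversation("conv-1", 5))  # -> ['m6', 'm7', 'm8', 'm9', 'm10']
```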

HBase was used initially as the Messenger storage, but the storage was later migrated to MyRocks, an open-source database project written by Facebook that uses RocksDB as a MySQL storage engine.

For more information on this, read: What databases does Facebook use?

Write-up information source

Recommended Read: Master System Design For Your Interviews Or Your Web Startup


Handpicked Resources to Learn Software Architecture and Large Scale Distributed Systems Design
I’ve put together a list of resources (online courses + books) that I believe are super helpful in building a solid foundation in software architecture and designing large-scale distributed systems like Facebook, YouTube, Gmail, Uber, and so on.  Check it out.


Subscribe to the newsletter to stay notified of the new posts.


