Google.com is the most visited website on the planet, followed by YouTube.com; both are owned by Google. Beyond these two, Google runs several other online services with over a billion users each, such as Gmail, Google Ads, Google Play, Google Maps, Google Drive and Google Chrome.
Day to day, Google deals with petabytes of data. YouTube alone needs more than a petabyte of new storage every single day, to say nothing of the storage requirements of all the services combined.
To manage such an enormous amount of data effectively, Google has, over time, built state-of-the-art data management and storage technologies such as Google File System, Bigtable, Dremel, Millwheel and many more.
This write-up is a deep dive into the databases Google uses to manage exabyte-scale data. In the latter part, we'll also look into the storage infrastructure at Google.
We'll begin with a few of the popular Google services that have over a billion users, and then delve into the databases, technologies and storage infrastructure powering them.
So, without further ado, let's get on with it.
For a full list of all the real-world software architecture posts on the blog, here you go.
Google Services Having Over A Billion Users Each
Google Search is the core service of the company. The search service indexes and caches trillions of web pages, PDFs, images and more, containing terabytes of data, to enable users to quickly find the information they search for. Google Search receives approx. 5.4 billion searches every single day. By 2010, Google had over 10 billion images indexed in its database.
Google Photos is a photo storage service that enables users to upload their photos to the cloud. It has grown immensely popular: over 1.2 billion photos are uploaded to the service every single day, amounting to approx. 14 petabytes of storage. The service has over a billion users.
Google Ads is Google's advertising service. It serves ads across various forms of media created by content creators and takes a share of the revenue. Advertising is the company's main source of revenue (86%).
YouTube is a social video-sharing platform and the second most visited website on the planet. With over 2 billion users, the platform generates billions of views, with over 1 billion hours of video watched every single day. Here is a detailed article on the database and backend infrastructure of YouTube.
Gmail and Google Drive have over 1.5 billion users each. Google Play has over 1 billion users, has seen over 100 billion app downloads and hosts approx. 3.5 million published apps. Google Maps has over 1 billion users. Google Analytics, the website analytics service, is the most widely used analytics service on the web. Google Assistant is installed on over 400 million devices. Google Chrome is the most used web browser in the world. Besides these, there are several other add-on services offered by Google, such as Google Docs, Sheets, Slides, Calendar, etc. For a complete list of products offered by Google, here you go.
Now you can imagine the data stored by all these services collectively. It’s huge. How does Google store so much data? Let’s find out.
Databases Used By Google
If you just need a quick answer: Google uses Bigtable, Spanner, Google Cloud SQL, MySQL, Dremel, Millwheel, Firestore, Memorystore, Firebase, Cloud Dataflow, BigQuery and many more. It has a polyglot persistence architecture.
If you want to stick around for the details, here we go, starting with the search service.
Recommended Read: Best handpicked resources to build a solid foundation in software architecture & system design
Google Search Service Data Processing Infrastructure – The Move From MapReduce To BigTable
The search engine indexes trillions of web pages and receives 5.4 billion searches every single day; that's terabytes of information. The service also creates cached versions of websites. In August 2009, Google released a new search architecture called Caffeine with a new indexing infrastructure. Google claimed that Caffeine delivered 50 percent fresher results, since the index was updated continually rather than being rebuilt periodically in batches.
The indexing infrastructure originally started with MapReduce but transitioned to BigTable during the Caffeine launch.
MapReduce
MapReduce is a framework for processing big data in a distributed environment, on a cluster of servers or a grid. It's proprietary tech developed by Google, though the search service later transitioned to a superior data processing solution, Bigtable.
Google used MapReduce to completely regenerate its index of the world wide web. It eventually moved to technologies like Percolator, FlumeJava and Millwheel that provided incremental and real-time streaming of data as opposed to batch processing. This enabled the search service to integrate live search results without rebuilding the entire index.
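To make the programming model concrete, here is a minimal, self-contained sketch of the MapReduce idea in Python. The word-count task, the function names and the in-memory shuffle are illustrative assumptions; the real framework distributes these phases across thousands of machines.

```python
from collections import defaultdict

# Map phase: emit (key, value) pairs from each input record.
def map_fn(document):
    for word in document.split():
        yield (word.lower(), 1)

# Shuffle phase: group values by key. In a real MapReduce cluster
# the framework does this across the network between machines.
def shuffle(mapped_pairs):
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

# Reduce phase: fold each key's values into a single result.
def reduce_fn(key, values):
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog"]
mapped = [pair for doc in documents for pair in map_fn(doc)]
counts = dict(reduce_fn(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, ...}
```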
For an insight into data streaming, data pipelines, and how to pick the right database and tech for your use case, check out my Web Application & Software Architecture 101 course here.
FlumeJava
FlumeJava enabled the search service to develop, test and run efficient parallel data pipelines, which was tricky with MapReduce; the system had faced scalability issues with MapReduce when dealing with petabyte-scale data. FlumeJava is actively used to write data pipelines at Google.
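FlumeJava's parallel-collection model is the direct ancestor of Apache Beam, which Google later open-sourced. As a rough illustration of the pipeline style it pioneered, here's a hedged sketch using Beam's Python SDK; the data and the pipeline shape are made up for the example.

```python
import apache_beam as beam

# A tiny pipeline in the FlumeJava style: declare a graph of
# parallel collections and transforms, then let the runner
# optimize and execute it (locally here, on a cluster in production).
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["the quick brown fox", "the lazy dog"])
        | "SplitWords" >> beam.FlatMap(str.split)
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```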
Does This Mean Google Entirely Ditched MapReduce?
Google still uses MapReduce for Google App Engine log analysis and some other use cases.
Besides this, Bigtable is used with MapReduce via a set of wrappers that allow Bigtable to serve both as an input source and as an output target for MapReduce jobs. You'll get to know the MapReduce use cases as you read through the article.
Google BigTable
Bigtable is a distributed storage system for managing structured data across thousands of commodity servers. More than 60 Google services, like Google Earth, Google Finance, Google Search, Google Analytics and Writely, store data in Bigtable.
Bigtable efficiently handles the data demands of these services, providing scalability, high availability and performance, whether for indexing URLs, processing real-time data or latency-sensitive data serving.
The tech uses Google File System to store log and data files. A Bigtable cluster runs on a shared pool of machines and relies on a cluster management system for scheduling jobs, managing resources on shared machines, dealing with machine failures and monitoring machine status. It uses a distributed lock service called Chubby for locking resources in a distributed environment.
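Bigtable is available externally today as Cloud Bigtable. To give a feel for its data model, rows addressed by a row key with cells grouped under column families, here's a hedged sketch using the Python client; the project, instance, table and column family names are placeholders, not anything from Google's internal setup.

```python
from google.cloud import bigtable

# Placeholder identifiers; assumes an existing Cloud Bigtable
# instance with a table that has a 'contents' column family.
client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

# Write one cell: a row key plus column family:qualifier -> value.
row = table.direct_row(b"com.example/index#page-1")
row.set_cell("contents", b"html", b"<html>...</html>")
row.commit()

# Read the row back and inspect the stored cell.
fetched = table.read_row(b"com.example/index#page-1")
print(fetched.cells["contents"][b"html"][0].value)
```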
Personalized Search
Google's personalized search service enables users to browse their search history to revisit old queries and clicks, and to ask for personalized search results based on their usage patterns. All the user data for personalized search is stored in Bigtable. The data is replicated across Bigtable clusters for high availability and reduced latency.
Google Analytics
The Google Analytics service helps webmasters analyse traffic patterns and other relevant statistics for their websites. All the analytics data is stored in Bigtable in two tables: a click table and a summary table. The click table contains data on user sessions, the website name, session time frames and other related data. The summary table is generated from the click table by periodically scheduled MapReduce jobs, each of which extracts session data from the click table.
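As a back-of-the-envelope illustration of that click-to-summary rollup, here's a tiny plain-Python sketch; the field names and the aggregation are assumptions for the example, the real click and summary tables are far richer.

```python
from collections import defaultdict

# Hypothetical click-table rows: (website, session_id, duration_secs).
clicks = [
    ("example.com", "s1", 120),
    ("example.com", "s2", 45),
    ("other.com",   "s3", 300),
]

# Reduce-style rollup into per-site summary rows, the kind of
# aggregation a periodically scheduled job could produce.
summary = defaultdict(lambda: {"sessions": 0, "total_secs": 0})
for site, _session_id, duration in clicks:
    summary[site]["sessions"] += 1
    summary[site]["total_secs"] += duration

for site, stats in summary.items():
    print(site, stats)  # example.com {'sessions': 2, 'total_secs': 165}
```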
To educate yourself on software architecture from the right resources, master the art of designing large-scale distributed systems that scale to millions of users, and understand what tech companies really look for in a candidate during their system design interviews, read my blog post on mastering system design for your interviews or web startup.
Google Spanner
Spanner is Google’s scalable, globally distributed, strongly consistent database service. It automatically shards and migrates data across machines to balance the load and to deal with failures.
Spanner is used by Google services that have complex, evolving schemas and need strong consistency globally, which can be tricky to achieve with Bigtable. Spanner also provides general-purpose transactions and a SQL-based query language.
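Spanner is offered externally as Cloud Spanner. Here's a hedged sketch of a strongly consistent read using the Python client; the instance, database, table and column names are placeholders.

```python
from google.cloud import spanner

# Placeholder identifiers; assumes an existing Cloud Spanner
# database with a Singers table.
client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("my-database")

# Snapshot reads in Spanner are strongly consistent by default.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        "SELECT SingerId, FirstName FROM Singers WHERE SingerId = @id",
        params={"id": 1},
        param_types={"id": spanner.param_types.INT64},
    )
    for row in results:
        print(row)
```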
Google Ads Database
The Google Ads service originally ran on top of MySQL. After the service went live, management decided to migrate to Oracle, but the system became slower, so it was reverted to MySQL.
To deal with the custom requirements of its Ads service, Google wrote its own distributed relational database management system, called F1.
Google F1 Database
The F1 database was developed with the primary aim of combining high availability and the scalability of NoSQL with the consistency and features of traditional relational databases.
It's a fault-tolerant, globally distributed OLTP and OLAP database designed to replace the MySQL implementation behind Google AdWords. The F1 setup holds over 100 TB of data, serves up to hundreds of thousands of requests per second and processes SQL queries that scan tens of trillions of data rows every day.
F1 is built on top of Spanner; the two databases were developed at Google at around the same time. The Ads system deals with financial data and required strong consistency, so it needed a database that supported ACID transactions.
Powered by F1 and Spanner, the AdWords system could update billions of rows per day with parallel operations. The databases provided high throughput for parallel writes and removed the bottlenecks the system faced with the MySQL implementation. They also eliminated the need to reshard manually. And F1 provides automatic failover; with MySQL master-slave replication, failover was difficult and risked downtime and data loss.
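F1 itself is internal, but the kind of ACID read-modify-write it relies on can be sketched against Cloud Spanner's Python client, which exposes the same transactional substrate. The table, columns and budget logic below are invented for the example.

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("my-database")

# A read-modify-write that Spanner executes atomically; on conflict
# the client library retries the whole function.
def debit_campaign_budget(transaction):
    rows = transaction.execute_sql(
        "SELECT Budget FROM Campaigns WHERE CampaignId = @id",
        params={"id": 42},
        param_types={"id": spanner.param_types.INT64},
    )
    budget = list(rows)[0][0]
    transaction.execute_update(
        "UPDATE Campaigns SET Budget = @b WHERE CampaignId = @id",
        params={"b": budget - 5, "id": 42},
        param_types={
            "b": spanner.param_types.INT64,
            "id": spanner.param_types.INT64,
        },
    )

database.run_in_transaction(debit_campaign_budget)
```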
F1 has two replicas on the US west coast and three on the east coast. Consistent replication across the data centers is handled by the underlying Spanner layer, which uses Paxos.
Google Cloud DataStore
Google Cloud Datastore is a highly scalable, low-latency NoSQL database built on top of Bigtable and Google Megastore. It provides the scalability of a NoSQL database along with features of a relational database, offering both strong consistency guarantees and high availability.
Cloud Datastore backs over 100 applications in production at Google, facing both internal and external users. Applications like Gmail, Picasa, Google Calendar, Android Market and App Engine use Cloud Datastore and Megastore.
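Cloud Datastore's entity/key model is exposed through simple client libraries. Here's a hedged sketch with the Python client; the kind and property names are placeholders.

```python
from google.cloud import datastore

client = datastore.Client(project="my-project")

# Entities are schemaless records addressed by a key (kind + name);
# lookups by key are strongly consistent.
key = client.key("Task", "write-blog-post")
task = datastore.Entity(key=key)
task.update({"done": False, "priority": 4})
client.put(task)

# Fetch the entity back by its key.
print(client.get(key))
```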
Google Trends – Stream Processing With Millwheel
Millwheel is a scalable, fault-tolerant, low-latency data stream processing framework widely used at Google. Based on the idea of logical time, it enables developers to write time-based aggregations.
The Millwheel API provides idempotent record processing, making record delivery appear exactly-once from the user's perspective.
Google's Zeitgeist (now Google Trends) pipeline tracks trends in web queries. The pipeline ingests search queries and performs anomaly detection, flagging particular search queries that spike or dip. With the help of Millwheel, a historical trend is built for every search query, and all the records are persisted for both the short and the long term.
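Millwheel's notions of logical (event) time and windowed aggregation live on in Cloud Dataflow and Apache Beam. As a rough illustration of a time-windowed count over a query stream, here's a hedged Beam sketch with made-up queries and timestamps.

```python
import apache_beam as beam
from apache_beam import window

# (query, event-time in seconds). In a real stream these arrive
# continuously and a watermark tracks logical time.
queries = [("flu symptoms", 10), ("flu symptoms", 40), ("weather", 130)]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | beam.Create(queries)
        | "AttachTimestamps" >> beam.Map(
            lambda q: window.TimestampedValue((q[0], 1), q[1]))
        | "MinuteWindows" >> beam.WindowInto(window.FixedWindows(60))
        | "CountPerQuery" >> beam.CombinePerKey(sum)
        | beam.Map(print)  # per-minute counts per query
    )
```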

Besides Google Trends, Millwheel is used in other internal projects such as the Ads system and Google Street View.
Large Scale Analytical Data Processing With Dremel
Dremel is an interactive ad-hoc query system for the analysis of large-scale data. The framework can run aggregate queries over a trillion-row table in seconds, and it scales to thousands of servers and petabytes of data by executing queries in parallel.
Dremel is used in conjunction with MapReduce to analyze the output of MapReduce pipelines. It has been in production at Google since 2006, deployed across tens of thousands of nodes.
Here are some of the use cases of Dremel:
- Analysis of crawled web documents
- Tracking install data for applications on Android Market
- Crash reporting for Google products
- OCR results from Google Books
- Spam analysis
- Debugging of map tiles on Google Maps
- Tablet migrations in managed Bigtable instances
- Results of tests run on Google’s distributed build system
- Disk I/O statistics for hundreds of thousands of disks
- Resource monitoring for jobs run in Google’s data centers
- Symbols and dependencies in Google’s codebase
The framework provides a high-level, SQL-like language for running ad-hoc queries. Dremel scans quadrillions of records per month, and some queries achieve a scan throughput close to 100 billion records per second on a shared cluster.
Google's BigQuery, the REST-based service for interactive analysis of large-scale data, uses Dremel as its query engine. Dremel also inspired projects like Apache Drill, Apache Impala and Dremio.
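BigQuery puts a plain SQL interface in front of the Dremel engine. A short sketch with the Python client, run against one of BigQuery's public sample datasets; the project name is a placeholder.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# A Dremel-style aggregate scan expressed as standard SQL; BigQuery
# parallelizes it across its serving tree.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query):  # iteration waits for the job to finish
    print(row.name, row.total)
```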
Google Cloud Platform Databases
Google Cloud Platform offers multiple databases and data services, like Cloud Datastore, Firebase, Spanner, Cloud SQL, Cloud Memorystore, Cloud Dataflow and many more.
All of these are offered as a service on the platform after years of production deployment by Google's internal services; the cloud infrastructure Google offers is the same infrastructure Google uses internally. So we can count these databases too when listing the databases used by Google.
Storage
All the data is stored on hard drives in commodity servers in Google's data centers, managed by the Google File System. You can read more about the commodity servers in this YouTube database article.
Google File System
Google File System is a distributed file system built by Google to manage data across thousands of servers. The file system provides fault tolerance, delivering high performance to a large number of clients.
Inside A Google Data Center
GFS is widely deployed within Google as the storage platform for data processed by its services, as well as for R&D projects that require large data sets. GFS is built to handle I/O on large objects, multiple GBs in size. With GFS's atomic record append operation, multiple clients can append to a file concurrently without extra synchronization between them. The largest GFS clusters have over 1,000 storage nodes with over 300 TB of disk storage and are heavily accessed by hundreds of clients.
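GFS is internal and has no public API, but the record-append contract is easy to state: the client supplies the data, the system picks the offset and writes the record atomically. Here's a toy, purely illustrative in-process model of that contract; a lock stands in for the primary chunkserver that serializes appends.

```python
import threading

class ToyAppendLog:
    """In-process stand-in for GFS record append: the log, not the
    caller, chooses the offset, so concurrent appenders need no
    coordination among themselves."""

    def __init__(self):
        self.records = []
        # Plays the role of the primary chunkserver serializing appends.
        self._lock = threading.Lock()

    def record_append(self, data: bytes) -> int:
        with self._lock:
            offset = len(self.records)
            self.records.append(data)
            return offset  # the system tells the client where it landed

log = ToyAppendLog()
threads = [
    threading.Thread(target=lambda i=i: log.record_append(b"record-%d" % i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log.records))  # 4 records appended, no client-side locking
```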
Google File System can easily handle large scale data processing workloads. The system provides constant monitoring, replication and automatic recovery.
Subscribe to the newsletter to stay notified of the new posts.
Well, guys! This is pretty much it about the databases used at Google and its storage infrastructure. If you liked the write-up, share it with your folks. Consider following 8bitmen on Twitter, Facebook and LinkedIn to stay notified of new content.
I am Shivang, the author of this writeup. You can read more about me here.