Domum

(domum means home in Latin)

High-Availability at Massive Scale: Building Google’s Data Infrastructure for Ads is a fantastic paper. Its primary focus is how to build distributed systems that are both highly available and strongly consistent. This is achieved by building multi-homed systems. As the paper describes them —

Such systems run hot in multiple datacenters all the time, and adaptively move load between datacenters, with the ability to handle outages of any scale completely transparently. [1]

While the paper mostly addresses building multi-homed systems in the context of distributed stream processing, the concepts and ideas are general enough that, with some modifications, they can be applied to any large-scale distributed software system.

Before designing a distributed system that is resilient to failures, it is paramount to understand what a failure even means in the context of software systems. Section 3 of the paper talks about common failure scenarios and highlights an important fact — partial failures are common, and “are harder to detect, diagnose, and recover from” [1] (compared to total failures). An important takeaway from this section is that when designing a new system (or trying to improve an existing one), one should always think about what partial failures can occur and how the system can/would react to them.

The next section motivates the need for multi-homed systems by first talking about singly-homed and failover-based systems. While singly-homed and failover-based systems are common, one typically does not run into multi-homed systems unless one operates at (or close to) Google scale. Building multi-homed systems is hard, but they offer significant benefits over singly-homed and failover-based systems in the face of partial or total failure. Google leverages its existing infrastructure, in particular Spanner, to build multi-homed systems with high availability.

Section 5 is the most interesting portion of the paper and talks about the challenges inherent in building multi-homed systems. My main takeaway from this section is that it is virtually impossible to build a multi-homed distributed system without a system like Spanner (which is itself multi-homed) serving as the foundation — many of Spanner’s features, such as global synchronous replication and reads at a particular version, are used to solve the challenges mentioned in this section.
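
To make “reads at a particular version” a little more concrete, here is a tiny Python sketch of the idea. This is my own toy illustration, not Spanner’s implementation: a multi-versioned store that serves reads as of a given timestamp, which is the kind of primitive that lets a pipeline in one datacenter resume from a consistent snapshot produced by another.

# Toy multi-versioned key-value store: reads are served "as of" a timestamp.
# My own illustration of the concept, not Spanner's implementation.
class VersionedStore:
    def __init__(self):
        self._versions = {}  # key -> list of (commit_ts, value), in commit order

    def write(self, key, value, commit_ts):
        self._versions.setdefault(key, []).append((commit_ts, value))

    def read_at(self, key, snapshot_ts):
        """Latest value committed at or before snapshot_ts (None if none visible)."""
        visible = [(ts, v) for ts, v in self._versions.get(key, []) if ts <= snapshot_ts]
        return max(visible)[1] if visible else None

store = VersionedStore()
store.write("clicks:ad42", 10, commit_ts=100)  # key and values are made up
store.write("clicks:ad42", 17, commit_ts=200)
assert store.read_at("clicks:ad42", snapshot_ts=150) == 10
assert store.read_at("clicks:ad42", snapshot_ts=250) == 17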

The paper ends with the description of three multi-homed systems at Google: F1/Spanner, Photon, and Mesa. I highly recommend reading the papers for each of these systems as well, as they have a lot more details about how these complex systems were built.

References
[1] High-Availability at Massive Scale: Building Google’s Data Infrastructure for Ads. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44686.pdf

Papir

(This post is a summary of two papers I have recently read. Papir is the Norwegian word for paper)

Real-Time Twitter Recommendation: Online Motif Detection in Large Dynamic Graphs is a paper that was presented at VLDB 2016. It combines two of my favorite topics, distributed systems and graph theory, into a short (2 pages!) paper. It presents a simplified version of the algorithm that Twitter uses to detect motifs in a user’s social graph in real time, which are then used to generate recommendations for the user. One thing I liked about this paper is that it presents naive solutions to the problem at hand before diving into the elegant solution that Twitter uses, and it explains how that solution works at Twitter scale through graph partitioning, pruning, and offline data structure generation.
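
As a rough illustration of what online motif detection means here, the sketch below (my own toy version, not the paper’s algorithm) reports the directed triangles completed by each newly arriving edge. The real system layers graph partitioning, pruning, and precomputed offline structures on top of this kind of incremental check.

# Toy online motif detection: report directed triangles (a -> b -> c -> a)
# completed by each new edge. My simplification, not Twitter's algorithm.
from collections import defaultdict

class DynamicGraph:
    def __init__(self):
        self.out_edges = defaultdict(set)  # u -> {v : edge u -> v exists}
        self.in_edges = defaultdict(set)   # v -> {u : edge u -> v exists}

    def add_edge(self, u, v):
        """Insert edge u -> v and return the directed triangles it completes."""
        # u -> v closes a triangle for every w with v -> w and w -> u.
        # Intersecting the two neighbor sets keeps the per-edge work small,
        # a crude stand-in for the pruning the real system does.
        triangles = [(u, v, w) for w in self.out_edges[v] & self.in_edges[u]]
        self.out_edges[u].add(v)
        self.in_edges[v].add(u)
        return triangles

g = DynamicGraph()
g.add_edge("alice", "bob")
g.add_edge("bob", "carol")
print(g.add_edge("carol", "alice"))  # [('carol', 'alice', 'bob')]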

Design patterns for container-based distributed systems is a paper by Google that talks about software design patterns emerging from systems built around containers. Software like Docker and CoreOS has made working with containers easier, and more and more companies are moving towards a container-based ecosystem. Google was one of the first companies to use containers, and this paper captures design and architecture patterns they have observed in their container-based systems. The design patterns are grouped under three main categories, of which I enjoyed reading about “Multi-node application patterns” the most. This section deals with design patterns in distributed systems where each node hosts a group of related containers (called a “pod” in the paper). It was interesting to read about how distributed systems problems like leader election, scatter/gather, etc. can be handled with language-agnostic containers rather than with language-specific libraries. I loved this line from the end of the paper, which made me think of containers in an entirely new light:

In all cases, containers provide many of the same benefits as objects in object-oriented systems, such as making it easy to divide implementation among multiple teams and to reuse components in new contexts. In addition, they provide some benefits unique to distributed systems, such as enabling components to be upgraded independently, to be written in a mixture of languages, and for the system as a whole to degrade gracefully.
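
To make the multi-node patterns a little more concrete, here is a small Python sketch of the scatter/gather idea: a root process fans a query out to several leaf containers over HTTP and merges whatever partial results come back in time. The leaf URLs and the JSON response format are made up for illustration; the point is that each leaf can be any container that speaks HTTP, regardless of the language it is written in.

# Scatter/gather sketch: fan a query out to leaf containers, merge the results.
# The URLs and response shape are hypothetical; this illustrates the pattern,
# it is not code from the paper.
import concurrent.futures
import json
import urllib.request

LEAF_URLS = [
    "http://leaf-0.search.local/query",  # hypothetical leaf container endpoints
    "http://leaf-1.search.local/query",
    "http://leaf-2.search.local/query",
]

def query_leaf(url, term, timeout=0.5):
    """Scatter: send the query to one leaf and parse its JSON list of results."""
    with urllib.request.urlopen(f"{url}?q={term}", timeout=timeout) as resp:
        return json.load(resp)

def scatter_gather(term):
    """Gather: merge whatever partial results arrive before the deadline."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(len(LEAF_URLS)) as pool:
        futures = [pool.submit(query_leaf, url, term) for url in LEAF_URLS]
        for f in concurrent.futures.as_completed(futures):
            try:
                results.extend(f.result())
            except Exception:
                pass  # a slow or failed leaf degrades the answer instead of failing it
    return sorted(results, key=lambda r: r.get("score", 0), reverse=True)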


Intelligence

Inspired by a tutorial on TensorFlow that was on HN recently, I decided to go and read the TensorFlow paper. This paper had been sitting in my “To Read” folder for quite some time, but for various reasons I never got around to reading it. This is also the first AI/ML paper I’ve read in 2016, so I was excited to dive right in.

At 19 pages, this is one of the longest papers I’ve read. But it is extremely well written, with lots of diagrams, charts, and code samples interspersed throughout the text that make it fun to read.

The basic idea of TensorFlow, to have one system that works across heterogeneous computing platforms to solve AI/ML problems, is incredibly powerful. I fell in love with the directed graph API that TensorFlow uses to describe the computations it will run (this may or may not be related to the fact that I also love graph theory). The multi-device (and distributed) execution algorithm explained in the paper is quite intuitive and easy to understand. A major component of multi-device/distributed execution of a TensorFlow graph is deciding which device to place each node on. While the paper does explain the algorithm used in Section 3.2.1, I wish they had gone into more detail and talked about what graph placement algorithms didn’t work, the specifics of the greedy heuristic used, etc.
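
For my own understanding, here is a rough Python reconstruction of the flavor of greedy heuristic Section 3.2.1 describes; the data structures and cost model are my own simplification, not the actual placer. The idea is to simulate the graph in topological order and put each node on the device where it would finish earliest, given estimated compute and cross-device transfer costs.

# Sketch of a greedy device placement heuristic (my simplification, not
# TensorFlow's actual placer). Nodes are visited in topological order and
# each is placed on the device where it would finish earliest.
def place_nodes(nodes, devices, compute_cost, transfer_cost):
    """
    nodes:         list of (name, inputs), inputs = [(src_name, nbytes), ...],
                   given in topological order
    devices:       e.g. ["/cpu:0", "/gpu:0"]
    compute_cost:  f(name, device) -> estimated execution time
    transfer_cost: f(nbytes, src_device, dst_device) -> estimated copy time
    """
    placement, finish_time = {}, {}
    device_free_at = {d: 0.0 for d in devices}

    for name, inputs in nodes:
        best_device, best_finish = None, float("inf")
        for d in devices:
            # The node can start once the device is free and every input has
            # arrived (possibly copied over from another device).
            ready = device_free_at[d]
            for src, nbytes in inputs:
                ready = max(ready, finish_time[src] + transfer_cost(nbytes, placement[src], d))
            finish = ready + compute_cost(name, d)
            if finish < best_finish:
                best_device, best_finish = d, finish
        placement[name] = best_device
        finish_time[name] = best_finish
        device_free_at[best_device] = best_finish

    return placement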

Sections 5, 6, and 7 were my favorite portions of the paper. Section 5 dives into some of the performance optimizations used in TensorFlow. It would have been awesome if the authors had given more details about the scheduling algorithm used to minimize memory and network bandwidth consumption. I would have also liked to know what other scheduling optimizations were used in TensorFlow, as I find scheduling algorithms very interesting.

Section 6 talks about the experience of porting the Inception model over to TensorFlow. While the strategies mentioned in this section are specific to machine learning systems, I feel that some of them can be tweaked a little bit to be generally applicable to all software systems. For instance,

“Start small and scale up” (strategy #2)

is directly applicable to any software system. Similarly,

“Make a single machine implementation match before debugging a distributed implementation” (strategy #4)

can be rephrased as

“Make a single machine implementation work before debugging a distributed implementation”

and be generally applicable to building distributed systems.

Section 7 explains how TensorFlow can be used to speed up stochastic gradient descent (SGD). Again, while the idioms presented in this section are used to speed up SGD, I feel they are general-purpose enough to be applied to other algorithms and systems as well. The diagrams in this section are amazing and do a great job of illustrating the differences between the various parallelism and concurrency idioms.
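
Here is a bare-bones sketch of the synchronous vs. asynchronous data-parallel idioms as I understood them, with a plain shared array standing in for the parameter devices and a fake gradient standing in for a real model.

# Sketch of synchronous vs. asynchronous data-parallel SGD (my illustration,
# not TensorFlow code).
import numpy as np

def fake_gradient(params, batch):
    # Stand-in for a model's gradient: pulls params toward the batch mean.
    return 2 * (params - batch.mean(axis=0))

def synchronous_step(params, batches, lr=0.1):
    # Every replica computes a gradient on its own batch; the gradients are
    # averaged and applied as one global update per round.
    grads = [fake_gradient(params, b) for b in batches]
    return params - lr * np.mean(grads, axis=0)

def asynchronous_steps(params, batches, lr=0.1):
    # Each replica applies its update as soon as it is ready, so later
    # gradients are computed against parameters that may already be stale.
    for b in batches:
        params = params - lr * fake_gradient(params, b)
    return params

params = np.zeros(4)
batches = [np.random.randn(32, 4) for _ in range(8)]  # one batch per "replica"
print(synchronous_step(params, batches))
print(asynchronous_steps(params, batches))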

EEG, the internal performance tool mentioned in the paper, sounds very interesting. While it is probably outside the scope of a paper that focuses on TensorFlow, I’d love to learn more about EEG. It seems like a very powerful tool and could probably be extended to work with other systems as well.

The paper ends with a survey of related systems. This section proved to be a valuable source for finding new AI/ML and systems papers to read.

I loved this paper.



Travel

After 24+ hours of traveling, I’m back in San Francisco! The long journey gave me a lot of time to think. And read. And sleep (I think my superpower is falling asleep on airplanes).

Here are some of the things I read:

Research Paper: “f4: Facebook’s Warm BLOB Storage System”

(second paper in my quest to distract myself)

f4: Facebook’s Warm BLOB Storage System introduces the reader to f4, a storage system designed and used at Facebook to store “warm” binary large objects (BLOBs). The term “warm” denotes that these pieces of data are not as frequently accessed as “hot” BLOBs (which are stored in Haystack). The main motivation behind the design of f4 was the desire to lower the replication factor for warm BLOBs while still maintaining the same fault tolerance guarantees (tolerating node, rack, and datacenter failures) that hot BLOBs have.

The first half of the paper dives into warm BLOBs and their characteristics (Section 3) and also gives an overview of how Haystack works (Section 4).

Section 5 dives into the details of f4. It explains the overall architecture of the system, how it leverages Reed-Solomon coding to reduce storage overhead (compared to raw replication), how the effective replication factor for BLOBs is 2.1 (compared to 3.6 in Haystack), how fault tolerance works, etc. The architecture section is very well written and does a good job of explaining the different types of nodes that comprise an f4 cell. My favorite part of the paper is the section on fault tolerance (Section 5.5); the “Quadruple Failure Example” there is extremely interesting and does a good job of showing how the system deals with failures at various levels. Another part I really liked was the discussion of “Software/Hardware Co-Design” in Section 5.6.
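
As a quick sanity check on those numbers, using the coding scheme the paper describes as I understand it: Reed-Solomon(10, 4) stores 14 blocks for every 10 blocks of data within a cell, and the cross-datacenter XOR copy adds half of that again, which is where 2.1 comes from; Haystack’s 3.6 comes from three full replicas, each on RAID-6 storage.

# Back-of-the-envelope check of the effective replication factors above.
haystack = 3 * 1.2        # 3 replicas, each on RAID-6 (~1.2x overhead) -> 3.6
f4 = (10 + 4) / 10 * 1.5  # RS(10,4) locally, XOR pair across datacenters -> 2.1
print(haystack, f4)       # prints ~3.6 and ~2.1 (modulo floating point)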

Overall this paper was fun to read and very interesting. It had been on my “To Read” list for quite some time, and I’m glad I finally got to it.

Research Paper: “Hekaton: SQL Server’s Memory-Optimized OLTP Engine”

(I’ve noticed that when I’m sad I tend to throw myself at whatever activity catches my fancy at the moment. I do this to distract myself, and in general this seems to work pretty well. To deal with my sadness this time around I will be reading research papers. And blogging. Here’s the first paper I read.)

(Hekaton was one of the systems mentioned in the “Red Book” that piqued my interest)

Hekaton: SQL Server’s Memory-Optimized OLTP Engine gives the reader an overview of Hekaton, a database engine that is part of Microsoft SQL Server. It has been designed to work with data that fits entirely in main memory. The main motivation driving the design and implementation of Hekaton is the dropping cost of memory and the ever-growing popularity of multi-core CPUs. In order to achieve the best performance and take full advantage of the multiple cores, Hekaton embraces a lock/latch-free design: all the index structures (a hash table and a B-tree variant) are lock/latch-free (details on their design are in [1] and [2]), and transactions use MVCC.

While the details of the index data structures are in other papers, this paper does go into the details of the MVCC design and the garbage collection mechanism used to delete old records. Sections 6, 7, and 8 cover transactions, logging, and garbage collection. These sections are incredibly well written and do a great job of explaining these complex and core components of the system. The logging and checkpointing system is quite unique, and I found the fact that it does not use traditional write-ahead logging interesting. Section 8, which goes into the details of the garbage collection mechanism used in Hekaton, is definitely my favorite section in the paper. I think the GC algorithm is, simply put, beautiful.
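
To cement the ideas for myself, here is a toy Python sketch of timestamp-based MVCC and version garbage collection in the spirit of Sections 6 through 8. The begin/end timestamps on each version and the rule that a version can be reclaimed once no active transaction can see it follow the paper; everything else is my own simplification.

# Toy timestamp-based MVCC with version garbage collection (my simplification,
# not Hekaton's implementation).
INFINITY = float("inf")

class Version:
    def __init__(self, value, begin_ts):
        self.value = value
        self.begin_ts = begin_ts  # commit time of the writing transaction
        self.end_ts = INFINITY    # set when a later write supersedes this version

    def visible_to(self, read_ts):
        return self.begin_ts <= read_ts < self.end_ts

class Row:
    def __init__(self):
        self.versions = []

    def read(self, read_ts):
        for v in self.versions:
            if v.visible_to(read_ts):
                return v.value
        return None

    def write(self, value, commit_ts):
        if self.versions:
            self.versions[-1].end_ts = commit_ts  # retire the current version
        self.versions.append(Version(value, commit_ts))

    def garbage_collect(self, oldest_active_ts):
        # A version is garbage once its end timestamp precedes the read
        # timestamp of the oldest transaction still active in the system.
        self.versions = [v for v in self.versions if v.end_ts > oldest_active_ts]

row = Row()
row.write("v1", commit_ts=10)
row.write("v2", commit_ts=20)
assert row.read(read_ts=15) == "v1"
row.garbage_collect(oldest_active_ts=25)  # "v1" is no longer visible to anyone
assert row.read(read_ts=30) == "v2"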

Another unique aspect of the system: T-SQL queries and procedures are compiled down into native code to achieve high performance. Section 5 goes into the details of how this is done. What is interesting about this conversion process is that the generated code is one big function with labels and goto statements.

This was a great paper to begin 2016 with.

References

[1] Maged M. Michael. High Performance Dynamic Lock-Free Hash Tables and List-Based Sets. In Proceedings of the Fourteenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA ’02), pp. 73–82, 2002.

[2] J. J. Levandoski, D. B. Lomet, and S. Sengupta. The Bw-Tree: A B-tree for New Hardware Platforms. In Proceedings of the 29th IEEE International Conference on Data Engineering (ICDE 2013), pp. 302–313, April 2013.

Research Paper: “AsterixDB: A Scalable, Open Source BDMS”

(AsterixDB was one of the systems mentioned in the “Red Book” that piqued my interest)

AsterixDB: A Scalable, Open Source BDMS gives the reader an overview of the AsterixDB system. AsterixDB is an impressive “big data management system” (BDMS) with several interesting features, including a flexible data model, a powerful query language, data ingestion capabilities, and distributed query execution. Two features that stood out to me were the ability to declare custom index types (B+-tree, R-tree, etc.) on your data, and the ability to query data that “lives” outside the system.

A majority of the paper is devoted to the data definition and manipulation layer. The authors use the example of a social networking website to illustrate the power of AsterixDB’s data model and query language. Most of this section consists of code snippets (to define, load, and query the data), each followed by an explanation of what exactly the snippet does and what happens under the hood when it is run. These code snippets make this section of the paper very easy to read and understand.

The data storage, indexing, and query execution components are described in the System Architecture section of the paper. These subsystems have separate papers ([1] and [2]) devoted to them; here we are just given a brief overview of how they function and what their key features are. One piece of information that stood out to me in this section was the software layer that grants any index data structure LSM update semantics. I thought this was a very novel idea to speed up data ingestion and index building while still allowing diverse index data structures suited to the type of data being stored and indexed. The secondary index design is also interesting.
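
Here is my own rough sketch of the “LSM-ify any index” idea: wrap an arbitrary in-memory index behind a layer that absorbs writes in a mutable component, periodically seals it into an immutable component, and merges across components on reads. This is a simplification of the concept, not AsterixDB’s implementation.

# Sketch of giving an arbitrary index LSM update semantics (my simplification).
class LSMIndex:
    def __init__(self, index_factory, flush_threshold=4):
        self.index_factory = index_factory   # builds an empty index component
        self.flush_threshold = flush_threshold
        self.mutable = index_factory()       # in-memory component, accepts writes
        self.immutable = []                  # sealed components, newest first

    def insert(self, key, value):
        self.mutable[key] = value
        if len(self.mutable) >= self.flush_threshold:
            self._flush()

    def delete(self, key):
        self.insert(key, None)               # tombstone instead of in-place delete

    def _flush(self):
        # Seal the mutable component; on disk this would become an immutable file.
        self.immutable.insert(0, self.mutable)
        self.mutable = self.index_factory()

    def lookup(self, key):
        # Newest component wins; a None value is a tombstone (deleted key).
        for component in [self.mutable] + self.immutable:
            if key in component:
                return component[key]
        return None

idx = LSMIndex(dict)  # any mapping-like index structure could stand in here
idx.insert("user:1", {"name": "ada"})
idx.delete("user:1")
assert idx.lookup("user:1") is None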

I really enjoyed reading this paper. I’ve added [1] and [2] to my “research papers to read next” list, and hope to get to them very soon.

References

[1] S. Alsubaiee, A. Behm, V. Borkar, Z. Heilbron, Y.-S. Kim, M. Carey, M. Dressler, and C. Li. Storage Management in AsterixDB. Proc. VLDB Endow., 7(10), June 2014.

[2] V. Borkar, M. Carey, R. Grover, N. Onose, and R. Vernica. Hyracks: A Flexible and Extensible Foundation for Data-intensive Computing. ICDE, 0:1151–1162, 2011.