Original URL: https://www.theregister.com/2009/08/12/google_file_system_part_deux/

Google File System II: Dawn of the Multiplying Master Nodes

A sequel two years in the making

By Cade Metz

Posted in Channel, 12th August 2009 02:12 GMT

Updated As its custom-built file system strains under the weight of an online empire it was never designed to support, Google is brewing a replacement.

Apparently, this overhaul of the Google File System is already under test as part of the "Caffeine" infrastructure the company announced earlier this week.

In an interview with the Association for Computing Machinery (ACM), Google's Sean Quinlan says that nearly a decade after its arrival, the original Google File System (GFS) has done things he never thought it would do.

"Its staying power has been nothing short of remarkable given that Google's operations have scaled orders of magnitude beyond anything the system had been designed to handle, while the application mix Google currently supports is not one that anyone could have possibly imagined back in the late 90s," says Quinlan, who served as the GFS tech leader for two years and remains at Google as a principal engineer.

But GFS supports some applications better than others. Designed for batch-oriented applications such as web crawling and indexing, it's all wrong for applications like Gmail or YouTube, meant to serve data to the world's population in near real-time.

"High sustained bandwidth is more important than low latency," read the original GPS research paper. "Most of our target applications place a premium on processing data in bulk at a high rate, while few have stringent response-time requirements for an individual read and write." But this has changed over the past ten years - to say the least - and though Google has worked to build its public-facing apps so that they minimize the shortcomings of GFS, Quinlan and company are now building a new file system from scratch.

With GFS, a master node oversees data spread across a series of distributed chunkservers. Chunkservers, you see, store chunks of data - each chunk about 64 megabytes in size.
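
To make that layout concrete, here's a minimal sketch - illustrative only, not Google's code, with made-up class and server names - of a single master keeping all the metadata in memory: which chunks make up each file, and which chunkservers hold replicas of each chunk.

```python
# Minimal sketch of a GFS-style namespace: a single master keeps all
# metadata in memory, mapping each file to a list of chunk handles and
# each chunk handle to the chunkservers holding replicas. Names and
# placement policy here are illustrative, not Google's implementation.

CHUNK_SIZE = 64 * 1024 * 1024  # 64MB chunks, per the original GFS design


class ToyMaster:
    def __init__(self):
        self.file_to_chunks = {}   # path -> [chunk_handle, ...]
        self.chunk_locations = {}  # chunk_handle -> [chunkserver, ...]
        self.next_handle = 0

    def create_file(self, path, size_bytes, chunkservers, replicas=3):
        """Allocate enough 64MB chunks to cover the file and record
        which chunkservers hold each replica."""
        n_chunks = max(1, -(-size_bytes // CHUNK_SIZE))  # ceiling division
        handles = []
        for i in range(n_chunks):
            handle = self.next_handle
            self.next_handle += 1
            # Round-robin replica placement; real placement is smarter.
            self.chunk_locations[handle] = [
                chunkservers[(i + r) % len(chunkservers)] for r in range(replicas)
            ]
            handles.append(handle)
        self.file_to_chunks[path] = handles

    def lookup(self, path, offset):
        """A client asks the master which chunk (and which servers) hold
        a byte offset; the data itself never flows through the master."""
        handle = self.file_to_chunks[path][offset // CHUNK_SIZE]
        return handle, self.chunk_locations[handle]


master = ToyMaster()
master.create_file("/crawl/segment-0001", 200 * 1024 * 1024,
                   ["cs-a", "cs-b", "cs-c", "cs-d"])
print(master.lookup("/crawl/segment-0001", 150 * 1024 * 1024))
```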

The trouble - at least for applications that require low latency - is that there's only one master. "One GFS shortcoming that this immediately exposed had to do with the original single-master design," Quinlan says. "A single point of failure may not have been a disaster for batch-oriented applications, but it was certainly unacceptable for latency-sensitive applications, such as video serving."

In the beginning, GFS didn't even have automatic failover if the master went down. You had to restore the master by hand, and service vanished for up to an hour. Automatic failover was later added, but even then there was a noticeable outage. According to Quinlan, the lapse started out at several minutes and is now down to about 10 seconds.
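
The knock-on effect for clients is easy to picture. During a failover window, metadata lookups simply fail until the replacement master comes up, so a client has no choice but to retry and wait. A rough, hypothetical sketch of that client-side behavior (none of this is Google's actual client library):

```python
import random
import time

# Hypothetical client-side behavior during a master failover: metadata
# requests fail until the replacement master comes up, so the client
# retries with backoff. Even a ~10-second window is an eternity for a
# user-facing request.

def lookup_with_retry(master_rpc, path, offset, max_wait_s=60):
    """Retry a master lookup with exponential backoff plus jitter until
    it succeeds or the deadline passes."""
    deadline = time.monotonic() + max_wait_s
    delay = 0.1
    while True:
        try:
            return master_rpc(path, offset)
        except ConnectionError:
            if time.monotonic() + delay > deadline:
                raise
            time.sleep(delay + random.uniform(0, delay))
            delay = min(delay * 2, 5.0)  # cap backoff at 5 seconds

# Usage (with some master stub that may be mid-failover):
# lookup_with_retry(master_stub, "/gmail/user123/inbox", 0)
```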

Which is still too high.

"While these instances - where you have to provide for failover and error recovery - may have been acceptable in the batch situation, they're definitely not OK from a latency point of view for a user-facing application," Quinlan explains.

But even when the system is running well, there can be delays. "There are places in the design where we've tried to optimize for throughput by dumping thousands of operations into a queue and then just processing through them," he continues. "That leads to fine throughput, but it's not great for latency. You can easily get into situations where you might be stuck for seconds at a time in a queue just waiting to get to the head of the queue."
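
That trade-off is simple enough to sketch. Assuming, purely for illustration, a couple of milliseconds of work per queued operation, a request that lands behind a few thousand others waits whole seconds before its own work even starts:

```python
from collections import deque

# Illustrative sketch (not Google's code) of the throughput/latency
# tension Quinlan describes: operations are queued and processed in
# bulk, which keeps aggregate throughput high but lets one request
# sit behind thousands of others.

PER_OP_COST_S = 0.002  # assumed 2ms of work per queued operation

def queued_wait_time(backlog_len):
    """Worst-case wait for a request arriving behind `backlog_len`
    queued operations, before its own work even starts."""
    return backlog_len * PER_OP_COST_S

queue = deque(range(5000))  # 5,000 operations already in the queue
print(f"new request waits ~{queued_wait_time(len(queue)):.1f}s in the queue")
# -> ~10.0s: fine for a nightly batch job, painful for a user click
```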

GFS dovetails well with MapReduce, Google's distributed data-crunching platform. But it seems that Google has jumped through more than a few hoops to build BigTable, its (near) real-time distributed database. And nowadays, BigTable is taking more of the load.

"Our user base has definitely migrated from being a MapReduce-based world to more of an interactive world that relies on things such as BigTable. Gmail is an obvious example of that. Videos aren't quite as bad where GFS is concerned because you get to stream data, meaning you can buffer. Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point."

The trouble with file counts

The other issue is that Google's single master can handle only a limited number of files. The master node stores the metadata describing the files spread across the chunkservers, and that metadata can't be any larger than the master's memory. In other words, there's a finite number of files a master can accommodate.
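
Some rough arithmetic shows why. Assuming, for illustration only, on the order of 100 bytes of in-memory metadata per file and per chunk, and a master with 32GB of RAM to spend on metadata, the ceiling sits in the tens of millions of files:

```python
# Back-of-envelope sketch of the file-count ceiling. All the numbers
# are assumptions for illustration, not published GFS figures.

BYTES_PER_FILE = 100           # assumed namespace entry size
BYTES_PER_CHUNK = 100          # assumed per-chunk record size
MASTER_RAM = 32 * 1024**3      # assumed 32GB available for metadata
CHUNK_SIZE = 64 * 1024**2      # 64MB chunks
AVG_FILE_SIZE = 256 * 1024**2  # assume an average 256MB file

chunks_per_file = max(1, AVG_FILE_SIZE // CHUNK_SIZE)
bytes_per_file_total = BYTES_PER_FILE + chunks_per_file * BYTES_PER_CHUNK
max_files = MASTER_RAM // bytes_per_file_total
print(f"~{max_files:,} files fit before the master runs out of memory")
```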

With its new file system - GFS II? - Google is working to solve both problems. Quinlan and crew are moving to a system that uses not only distributed slaves but distributed masters. And the slaves will store much smaller files. The chunks will go from 64MB down to 1MB.

This takes care of that single point of failure. But it also handles the file-count issue - up to a point. With more masters you can not only provide redundancy, you can also store more metadata. "The distributed master certainly allows you to grow file counts, in line with the number of machines you're willing to throw at it," Quinlan says. "That certainly helps."
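
Google hasn't said how the new system partitions metadata, but one common way to spread a namespace across several masters is to shard it, for instance by hashing file paths. A minimal, hypothetical sketch of the idea:

```python
import hashlib

# Hypothetical sketch of one way to spread metadata across several
# masters: shard the namespace by hashing file paths. Google hasn't
# published how its new design actually partitions metadata; this only
# shows why adding masters grows both redundancy and file capacity.

MASTERS = ["master-0", "master-1", "master-2", "master-3"]

def master_for(path):
    """Deterministically pick the metadata shard responsible for a path."""
    digest = hashlib.md5(path.encode()).digest()
    return MASTERS[int.from_bytes(digest[:4], "big") % len(MASTERS)]

for p in ["/gmail/user123/inbox", "/youtube/v/abc123", "/crawl/segment-0001"]:
    print(p, "->", master_for(p))
```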

And with files shrunk to 1MB, Quinlan argues, you have more room to accommodate another ten years of change. "My gut feeling is that if you design for an average 1MB file size, then that should provide for a much larger class of things than does a design that assumes a 64MB average file size. Ideally, you would like to imagine a system that goes all the way down to much smaller file sizes, but 1MB seems a reasonable compromise in our environment."

Why didn't Google design the original GFS around distributed masters? This wasn't an oversight, according to Quinlan.

"The decision to go with a single master was actually one of the very first decisions, mostly just to simplify the overall design problem. That is, building a distributed master right from the outset was deemed too difficult and would take too much time," Quinlan says.

"Also, by going with the single-master approach, the engineers were able to simplify a lot of problems. Having a central place to control replication and garbage collection and many other activities was definitely simpler than handling it all on a distributed basis."

So Google was building for the short term. And now it's ten years later. Definitely time for an upgrade.

"There's no question that GFS faces many challenges now," Quinlin says. "Engineers at Google have been working for much of the past two years on a new distributed master system designed to take full advantage of BigTable to attack some of those problems that have proved particularly difficult for GFS."

In addition to running the Google empire, GFS, MapReduce, and BigTable have spawned an open-source project, Hadoop, that underpins everything from Yahoo! to Facebook to - believe it or not - Microsoft Bing.

And of course, Quinlan believes that the sequel will put the original to shame. "It now seems that beyond all the adjustments made to ensure the continued survival of GFS, the newest branch on the evolutionary tree will continue to grow in significance over the years to come." ®

Update: This story has been updated to show that Google's new file system is apparently part of the new "Caffeine" infrastructure that the company announced earlier this week.