Google has hit out at storage, memory and networking equipment makers with a grimace, a finger wag and a closing wallet.
Two of the ad broker's leading data center researchers have published a paper chastising all of the aforementioned groups of hardware makers for failing to cater to the real needs of customers. Unlike chip manufacturers, the other major infrastructure players have placed little emphasis on tuning their gear for energy efficient use in large data centers. As a result, customers are wasting millions of dollars on electricity, cooling and hardware.
In order to improve this situation, Google engineers Luiz André Barroso and Urs Hölzle have called for the development of "energy-proportional" gear that would better serve the specific demands of customers running hundreds or thousands of servers.
Barroso, whose paper appears in the December issue of the IEEE's Computer magazine, has been taking Google's internal power, system usage and component failure research and turning it against the industry. In June, he whacked the reliability claims of disk manufacturers and revealed a couple of techniques Google has discovered for making platters spin longer. Now, he and Hölzle have gone at most of the major data center players, saying they need to step up their game to help out customers.
Many Greenies like to focus on the processor as the real power problem child of data centers. The Google pair, however, think it now makes more sense to examine the role that other components play in overall power consumption, given the unique nature of data center loads.
As most of you know, the recent multi-core designs from Intel and AMD have brought about a dramatic improvement in performance per watt metrics. In addition, both vendors and customers have started to pay more attention to power supplies, buying more efficient units for a bit of extra cash. The upshot of both trends is that the "engine" of the data center, if you'll forgive us, is showing power efficiency improvements.
For Barroso and Hölzle, these more fundamental technology shifts will have a greater impact on data center efficiency than some of the moves to apply mobile chip-style power tweaking to server processors. And this - after more than 300 words and a head held in shame - is where we get to the real meat of the Googlers' paper.
Unlike, say, notebooks, servers spend very little time in a truly idle state. Instead, servers usually operate at between 10 per cent and 50 per cent utilization.
Throttling a chip down in these circumstances can prove more troublesome than helpful. For one, the performance cost of waking up from a low power state often outweighs the power savings for a server customer. In addition, even servers with low usage are rarely idle. They're performing background jobs, helping serve parts of a database or aiding with recovery tasks during failures.
But even with the penalties attached to low power modes, processors offer server customers gains in these states that no other server hardware provides.
"A processor running at a lower voltage-frequency mode can still execute instructions without requiring a performance-impacting mode transition," the Googlers write. "It is still active. There are no other components in the system with active low-power modes. Networking equipment rarely offers any low-power modes, and the only low-power modes currently available in mainstream DRAM and disks are fully inactive. That is, using the device requires paying a latency and energy penalty for an inactive-to-active mode transition. Such penalties can significantly degrade the performance of systems idle only at submillisecond time scales."
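The transition-penalty point in that quote can be made concrete with a rough break-even calculation. This is an illustrative sketch, not a model from the paper, and the wattage and energy figures below are hypothetical: an inactive low-power mode only saves energy when the idle gap is long enough to amortize the cost of waking back up.

```python
# Rough break-even model for a fully inactive low-power mode.
# All numbers are illustrative assumptions, not figures from the paper.
def break_even_idle_time(p_active_w, p_sleep_w, transition_energy_j):
    """Minimum idle interval (seconds) for which sleeping saves energy.

    Sleeping for t seconds saves (p_active - p_sleep) * t joules, but
    waking costs a fixed transition_energy_j; the mode only pays off
    once the savings exceed that fixed cost.
    """
    return transition_energy_j / (p_active_w - p_sleep_w)

# Hypothetical disk: 8 W active, 1 W spun down, 70 J to spin back up.
t = break_even_idle_time(8.0, 1.0, 70.0)
print(f"sleep pays off only for idle gaps longer than {t:.0f} s")
```

At sub-millisecond idle gaps - the regime the Googlers describe - a mode like this would never recoup its transition cost, which is why fully inactive DRAM and disk modes are of little use to busy servers.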
Turning from low power states, Barroso and Hölzle look at the average usage and peak usage of servers. The problem here is that servers show the most energy efficiency under peak usage but sadly spend very little time operating all out. In the average 10-50 per cent usage zone, servers demonstrate only between 20 and 70 per cent energy efficiency.
So, the Googlers have called on hardware designers to produce gear which can hit between 60 and 90 per cent energy efficiency when running in the average usage zone. Their paper sketches an idealized efficiency curve for such a scenario.
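Those efficiency figures drop out of a simple linear power model. The sketch below is our own illustration, assuming performance scales with utilization and that an idle server draws about half of its peak power - an assumed, typical figure, not a number lifted from the paper. Efficiency relative to peak is then utilization divided by normalized power.

```python
# Energy efficiency vs utilization under a linear power model:
#   P(u) = P_idle + (P_peak - P_idle) * u
# with performance proportional to utilization u. The 50% idle draw
# is an illustrative assumption about typical servers.
def relative_efficiency(utilization, idle_fraction=0.5):
    """Performance-per-watt at a given utilization, relative to peak."""
    power = idle_fraction + (1.0 - idle_fraction) * utilization
    return utilization / power

for u in (0.1, 0.3, 0.5, 1.0):
    print(f"{u:.0%} utilization -> "
          f"{relative_efficiency(u):.0%} of peak efficiency")
```

In the 10-50 per cent utilization zone this toy model yields roughly 20 to 70 per cent of peak efficiency, in line with the figures above. An energy-proportional machine - idle_fraction near zero - would hold close to 100 per cent across the whole range, which is the idealized curve the Googlers are after.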
Reaching this dream state will take work from all parts of the hardware industry, according to Barroso and Hölzle.
"Energy-proportional computers would enable large additional energy savings, potentially doubling the efficiency of a typical server," they write. "Some CPUs already exhibit reasonably energy-proportional profiles, but most other server components do not.
"We need significant improvements in memory and disk subsystems, as these components are responsible for an increasing fraction of the system energy usage. Developers should make better energy proportionality a primary design objective for future components and systems. To this end, we urge energy-efficiency benchmark developers to report measurements at nonpeak activity levels for a more complete characterization of a system's energy behavior."
Google takes its data center research very seriously, and hardware makers should pay attention to the results.
As you all know, Google makes its own servers, hoping to achieve cost and energy efficiency gains. In addition, the company crafts its own switches for similar reasons. (Our congratulations go out again to Nyquist Capital analyst Andrew Schmitt for his investigative work here. We understand a report is coming out soon noting that Steve Jobs has a large ego.)
While Google's rivals appear unwilling to do similar custom work, they will demand that hardware makers produce gear with comparable cost/energy efficiency characteristics in order to remain competitive with the ad broker. Beyond that, any failure to meet Google's standards means a component maker faces the prospect of missing out on huge volume sales to Google.
Overall, the work of Barroso and his peers at Google is a huge help to server customers of all shapes and sizes. Google may have unique needs from a scale perspective, but its heft is pushing hardware makers in an energy conscious direction that threatens to benefit data center operators as a whole. ®