Call it OpenStack. Call it Open Compute. Call it OpenAnything-you-want, but the reality is that the dominant cloud today is Amazon Web Services, with Microsoft Azure an increasingly potent runner-up.
Both decidedly closed.
Not that cloud-hungry companies care. While OpenStack parades a banner of “no lock-in!” and Open Compute lets enterprises roll their own data centres, what enterprises really want is convenience, and public clouds offer that in spades. That’s driving Amazon Web Services to a reported $50bn valuation, and calling into question private cloud efforts.
For those enterprises looking to go cloud – but not too cloudy – OpenStack feels like a safe bet. It has a vibrant and growing community, lots of media hype, and brand names like HP and Red Hat backing it with considerable engineering resources.
No wonder it’s regularly voted the top open-source cloud.
The problem, however, is that “open” isn’t necessarily what people want from a cloud.
While there are indications that OpenStack is catching on (see this Red Hat-sponsored report from IDG), there are far clearer signs that OpenStack remains a mass of conflicting community-sponsored sub-projects that make the community darling far too complex.
As one would-be OpenStack user, David Laube, head of infrastructure at Packet, describes:
Over the course of a month, what became obvious was that a huge amount of the documentation I was consuming was either outdated or fully inaccurate.
This forced me to sift through an ever greater library of documents, wiki articles, irc logs and commit messages to find the 'source of truth'.
After the basics, I needed significant python debug time just to prove various conflicting assertions of feature capability, for example 'should X work?'. It was slow going.
While Laube remains committed to OpenStack, he still laments that “the amount of resources it was taking to understand and keep pace with each project was daunting”.
The problem, as Randy Bias, one of OpenStack’s pioneers, suggests, is that there is no “vanilla OpenStack” that would-be adopters rally around. This lack of a common core has led vendors to build their own scaffolding around OpenStack, leading Bias to conclude that there is always a proprietary element to OpenStack.
The question is how much you can stomach: “Dial into the right level of 'lock-in' that you are comfortable with from a strategic point of view that meets the business requirements.”
So while OpenStack-sponsored surveys regularly tout “no lock in” as the primary driver for OpenStack adoption, and Mirantis CEO Adrian Ionel insists to me that “customers routinely tell us that they chose Mirantis because there was no proprietary agenda, which means they can avoid the lock-in of traditional IT” – the reality is different...and pretty proprietary.
Things are so bad, in fact, that even Rackspace, the originator of OpenStack, now has plans to build services on top of AWS.
Open Compute may not compute
Nor is life much better over in Open Compute Land. While the Facebook project (which aims to open source Facebook’s data centre designs) has the promise to create a world filled with hyper-efficient data centres, the reality is that most enterprises simply aren’t in a position to follow Facebook’s lead.
As Stanford professor (and data centre expert) Jon Koomey notes: “If the customer is on the ball and is really driving down cost per compute, they should be receptive to what Open Compute offers. But this only happens in places where there is one owner of the data centre and one budget, which is a minority of the enterprises.”
Back in 2012, Bechtel IT exec Christian Reilly lambasted Open Compute, declaring: “Look how many enterprises have jumped on Open Compute. Oh, yes, none. That would be correct.”
While that’s no longer true – companies such as Bank of America, Goldman Sachs, and Fidelity have climbed aboard the Open Compute bandwagon – it’s still the case that few companies are in a position to capitalise on Facebook’s open designs.
Indeed, IDC analyst Matt Eastwood indicates that reception to Open Compute remains “mixed,” with the only big customers being service providers and large financials, with “confusion in the enterprise” blocking greater adoption.
This may change, of course. Companies such as HP are piling into the Open Compute community to make it easier, with HP building a new server line based on Open Compute designs, as but one example.
And while some have had false starts in their Open Compute enthusiasm (Cisco is notable in this regard, conceding that “Our marketing folks [got] a bit ahead of our engineers” in terms of releasing open networking switches for Open Compute), momentum continues to gather.
Over time, the project may well pay off for an increasingly wide audience. Just not yet. The reality is that most private clouds fail, whatever the stripe of openness you prefer.
Indeed, by Gartner analyst Thomas Bittman’s estimation, a colossal 95 per cent of all private clouds fail. When he asked attendees at Gartner’s Datacentre Conference “What is going wrong with your private cloud?”, the responses essentially declared, “Everything!”
The new and the old
One of the biggest problems with the private cloud is the nature of the workloads enterprises are tempted to run within it.
As Bittman writes in separate research, while the number of VMs running in private clouds has roughly tripled in the past few years, the number of active VMs running in public clouds has expanded by a factor of 20.
This means that: “Public cloud IaaS now accounts for about 20 per cent of all VMs – and there are now roughly six times more active VMs in the public cloud than in on-premises private clouds.”
The reason, he concludes, is simple:
New stuff tends to go to the public cloud, while doing old stuff in new ways tends to go to private clouds.
And new stuff is simply growing faster.
That company-changing app that will make your career? It’s running on AWS. Ditto all the other projects that promise to transform your business and, perhaps, your industry.
Meanwhile, the private cloud is the receptacle of tried-and-true workloads that just need a new home. Helpful to the business, sure. But not transformative.
For those looking to OpenStack and Open Compute, these may prove to be excellent ways to modernize old infrastructure. But they aren’t likely going to be the places you choose to build the future.
That’s going to be on AWS, or Azure, or Google.
These are closed clouds, but customers don’t seem too put out by the proprietary nature of the cloud. As Redmonk analyst Stephen O’Grady says: “Convenience trumps just about everything” when it comes to technology adoption and, in particular, the cloud.
Customer surveys support this.
While a bit dated (2012), Forrester’s findings remain just as true today.
Asking IT to set up a hyper-efficient Facebook-like data centre isn’t the “fastest way to get [things] done”. Ditto cobbling together a homegrown OpenStack solution. In fact, private cloud is rarely going to be the right way to move fast.
Sure, there are other reasons, but the cloud that wins will be the cloud that is most convenient. Unless something drastic changes, that means public cloud will emerge triumphant. ®