
By Adrian Bridgwater, 25th August 2015 13:01

The container-cloud myth: We're not in Legoland anymore

Why interconnectivity in the cloud is tougher than just stacking bricks

Everything is being decoupled, disaggregated and deconstructed. Cloud computing is breaking apart our notions of desktop and server, mobile is decoupling the accepted concept of wired technology, and virtualisation is deconstructing our understanding of what a network was supposed to be.

Inside this maelstrom of disconnection, we find this thing we are supposed to call cloud migration: methods, tools and protocols that guarantee to take us into the new world of virtualisation, promising “seamless” migration and robust results.

It turns out that taking traditional on-premises application structures into hosted virtualised worlds is way more complex than was first imagined.

Questions of application memory and storage allocation are fundamentally different in cloud environments. Attention must be paid to application Input/Output (I/O) and transactional throughput. And the location of your compute engine matters a lot more when data can sit in an on-premises private cloud, a hybrid cloud or a public cloud – or, heaven forbid, in some combination of the three.

Essentially, the parameters that govern every level and layer of IT can take on a different shape. The primordial spirit of IT has changed: decoupling creates a new beast altogether.

But, as you may have heard, the new world of containers, software-defined infrastructure and microservices is supposed to have come to our rescue. If you believe all the hype, it’s like the power of Lego building blocks has arrived – a new way to interlock and assemble our decoupled component elements of computing.

In Legoland (the concept, not the theme park), objects can be built, disassembled and then rebuilt into other things or even combined into other objects. The concept of meshed interlocking connectivity in Lego is near perfect. Or at least it is in the basic bricks and blocks model until the accoutrements come along.

Clicking & sticking slickness

The danger comes when people start talking about interconnectivity in the cloud (and the Big Data that passes through it) and likening new “solutions” to the click-and-stick slickness we enjoy with Lego. It’s just not that simple.

For all the advantages of microservices, they bring with them a greater level of operational complexity.

Samir Ghosh, chief executive of Docker-friendly platform-as-a-service provider WaveMaker, reckons that compared with a traditional “monolithic” (that is, single-unit, tightly coupled) application, a microservices-based application may have dozens, hundreds or even thousands of services, all of which must be managed through to production – and each of those services requires its own API.
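
To put a rough shape on that, here is a minimal sketch – purely illustrative, and nothing to do with WaveMaker’s platform – of one such service exposing its own tiny API over HTTP. A microservices estate might run hundreds of processes like this, each one packaged, deployed, versioned and monitored in its own right.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class InventoryAPI(BaseHTTPRequestHandler):
        # One endpoint of this service's own, individually managed API
        def do_GET(self):
            if self.path == "/stock/42":
                body = json.dumps({"sku": 42, "in_stock": 17}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # A microservices application runs many such processes, one API each,
        # every one of which must be built, deployed and monitored to production
        HTTPServer(("0.0.0.0", 8080), InventoryAPI).serve_forever()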

Chris Stolte, chief development officer and co-founder of Tableau Software, said the upcoming 9.1 release of his firm’s data visualisation product is specifically engineered with new connection intelligence.

He claims “significant investments” in enterprise features and a web data connector to connect to a “limitless number” of sources, including Facebook, Twitter, Google Sheets, SAP, Google Cloud SQL, Amazon Aurora and Microsoft Azure.

Yikes, did we forget to connect the cloud?

But of course, Tableau is talking about connecting data sources for applications. The real challenge firms now face is in trying to evolve their existing apps, and to build new ones, to take advantage of cloud architectures.

Cloud, and the Big Data inside it, is networked via that thing we call the internet, so it should all connect together naturally – but that’s not what happens. While the connectivity engineering aspect of decoupled IT hasn’t exactly been overlooked, it now presents a significant pain point for anybody trying to make new cloud deployments look slick.

The whole point of Lego (and why it was successful in the first place) was its universal interface – the little “lugs”.

Looking at cloud and big data, the best universal portability standard we have at the moment is, of course, the Virtual Machine itself. And within this, the DMTF Open Virtualization Format (OVF) effort is a very good start – at least according to VMware, one of OVF’s primary backers, along with Microsoft, Dell and others.

But beyond that, splitting things up is a bit more complicated, as people are finding out.

The DMTF’s OVF standard provides a packaging format for software offerings based on virtual systems, thereby – supposedly – solving critical business needs for software vendors and cloud computing service providers.
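
As a rough illustration of what that packaging looks like from the outside – a sketch only, with the file name assumed and the structure based on the OVF 1.x envelope schema – the snippet below reads an .ovf descriptor and lists the disk files and virtual systems it bundles.

    import xml.etree.ElementTree as ET

    # Namespace used by OVF 1.x envelope documents
    OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"

    # "appliance.ovf" is a hypothetical descriptor file inside an OVF package
    envelope = ET.parse("appliance.ovf").getroot()

    # Files referenced by the package, typically virtual disk images (.vmdk, .vhd)
    for file_ref in envelope.findall(f"{OVF}References/{OVF}File"):
        print("packaged file:", file_ref.get(f"{OVF}href"))

    # Each VirtualSystem element describes one VM: its OS and virtual hardware
    for vm in envelope.iter(f"{OVF}VirtualSystem"):
        print("virtual system:", vm.get(f"{OVF}id"))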

Joe Baguley, VMware’s EMEA vice president and chief technology officer, said: “Looking at the stack very simplistically... you move from software-defined infrastructure in the shape of IaaS, on top of which you create VMs, and into which you then put a variety of things. Then you have containers, which are really just mini OS-dependent VMs built on top of an OS inside a VM. Then you have PaaS, which is what developers really want (e.g. CloudFoundry), and which should be built on top of a set of standard VMs or containers – on which people build SaaS, which is what users want, arguably.”

Is this the rise of the APIs?

But when moving from tightly coupled to loosely coupled architectures (which is what is happening here), the interactions between these components need to be well defined. It is the “rise of the APIs”, if you will.

VM sprawl happened because, in the early days of data centre virtualisation, people got greedy on power and freedom and drank too much of both.

“If you imagine the NIST stack (IaaS, PaaS, SaaS), then the job of IT going forward is to look at every service, deconstruct each service into its logical components, and then look at those components and ask the question: ‘What is it about this element that is no different from any other, and what is it that makes it different from the others?’”

“Different components will come to rest at different points in that stack, even within the same application or service. But without well-defined interfaces, splitting those components up or decoupling them is practically impossible,” said Baguley.
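
What a “well-defined interface” means in practice can be pictured with a small, hypothetical sketch (not VMware’s, or anyone else’s, product): the consumer codes against an explicit contract, so the component behind it can move from VM to container to PaaS without the caller noticing.

    from typing import Protocol

    class OrderStore(Protocol):
        # The agreed contract: this is all the consumer ever depends on
        def save(self, order_id: str, amount: float) -> None: ...
        def total(self) -> float: ...

    class InMemoryStore:
        # One implementation; it could just as easily sit behind an HTTP
        # endpoint in a container, or inside a PaaS-managed service
        def __init__(self):
            self._orders = {}

        def save(self, order_id: str, amount: float) -> None:
            self._orders[order_id] = amount

        def total(self) -> float:
            return sum(self._orders.values())

    def checkout(store: OrderStore) -> None:
        # The caller never learns where the component actually runs
        store.save("A-100", 19.99)
        print("running total:", store.total())

    if __name__ == "__main__":
        checkout(InMemoryStore())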

We’ll have to see whether OVF stands the test of time: whether it endures on the basis of its vendor-driven authority, or falls to a developer-driven alternative that achieves critical mass on the web, as has happened before on other fronts.

Whatever happens, the need to think about cloud Big Data interconnectivity, from APIs to containerisation, has come to the fore. ®
