Scale Computing: Not for enterprise, but that's all part of the plan

Four months of use have made the Scales fall from my eyes

By Trevor Pott, 12th May 2015

Review: Scale Computing makes hyper-converged appliances targeted at small and medium-sized businesses (SMBs). When you think of headline names for hyper-convergence – EVO:Rail, Nutanix, SimpliVity – you don't tend to think of "SMB". VMware-based name-brand hyper-convergence tends to be in the $150,000+ range, while Scale starts delivering at $25,500. But what, exactly, is Scale delivering?

The short version is that Scale is delivering a complete white-glove service. The hyper-converged appliances are just the bait. The hook is the "Datacenter Butler" approach they take to helping you out with every little problem: Scale straddles the line between appliance vendor and managed service provider.

Scale isn't here to go toe-to-toe with VMware's Enterprise Plus. They aren't going to come anywhere close to Hyper-V with System Center All The Crazy Brand Names Manager and the Azure Pack. It won't even do everything OpenStack does.

Scale offers a hyper-converged appliance with enough functionality for an SMB. Nothing more, nothing less.

As an SMB administrator, I'm used to SMB-focused products and services being decidedly half-arsed. They either fall apart when you breathe on them, have horrific security flaws or come with hidden costs that ultimately render the solution unaffordable.

The appliances themselves just work. The Datacenter Butler service has been top notch. On the whole, I have found Scale Computing to be disconcertingly capable. I keep waiting for the other shoe to drop and it stubbornly refuses to do so.

Full disclosure

In the interests of full disclosure for this review, I think it's best to get a few things out of the way. The first is that I am not by trade a KVM administrator, and Scale is based on the KVM hypervisor. I am aware of KVM, I have used KVM, but in production I have traditionally relied on Hyper-V and VMware.

The second thing to make clear is that Scale gifted me three HC1000 nodes and a switch from their stock of refurbished units. I need new production hardware and they wanted someone to write up some marketing content, beta test some early releases and serve as a talking head when they needed to trot someone out to speak about their tech. I got the nodes up front, but they intend to make me sing for my supper.

Lastly, these nodes arrived and were installed during a time of particular turmoil for me. On the one hand, having them around has saved my ASCII more than once. On the other, learning the ropes of an entirely new way of working is wrapped up in my mind with some pretty bad months, so separating the emotions out gets a little complex.

I can't say that the above items don't bias me; on some level, I am sure that they do. Despite that, I've been doing reviews of some pretty difficult hardware for some time now. I've let time pass since the unboxing and taken the time to run my SMB's real-world production workloads on these nodes.

Network quandary

The nodes arrived in mid-December 2014. We racked 'em, stacked 'em, benchmarked 'em and burned 'em in, only to discover that Scale's networking is a little less configurable than I am used to.

For reasons too boring to get into, WAN connectivity in the co-location facility I use has for ages been physically wired separately from LAN connectivity. It isn't delivered as different VLANs on the same wire; we've had a dedicated physical wire to the WAN subnet jacked into our VMware nodes.

I realise that's not the best way to do things, but change would require tearing up how everyone else in the colo does things too, so it just became "the way it was" and you dealt with it. This is flat out not an option on the current Scale config.

Scale nodes ship with four network ports, either 10GbE or 1GbE depending on the model. All nodes dedicate two ports to the LAN and two to the backplane and, regardless of the node type, those ports are all lashed together. If you want to work with multiple networks, you are going to have to do it through VLANs. This is where Scale sending me a switch that wasn't from the beforetime comes in.
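
Scale handles the VM side of VLAN assignment in its own UI, so nothing below is Scale's tooling. Purely for flavour, here is a minimal sketch of what tagging the equivalent networks looks like on a plain Linux host feeding the same trunk; the interface name and VLAN IDs are invented for illustration.

```python
import subprocess

def add_vlan(parent: str, vlan_id: int) -> None:
    """Create and bring up a tagged VLAN sub-interface on a stock Linux box."""
    name = f"{parent}.{vlan_id}"  # e.g. eth0 + VLAN 30 -> eth0.30
    subprocess.run(["ip", "link", "add", "link", parent, "name", name,
                    "type", "vlan", "id", str(vlan_id)], check=True)
    subprocess.run(["ip", "link", "set", name, "up"], check=True)

add_vlan("eth0", 30)  # LAN traffic
add_vlan("eth0", 99)  # the formerly physically separate WAN subnet, now a tag
```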

I am not entirely sure I can lay the blame for this at Scale's feet. They designed their appliances to current best practices and just didn't support the old-school approach I was using. It was frustrating and time-consuming to deal with, but once over the hump, things worked fine.

One feature request for the Scale UI: it has no way to label VLANs, or to pre-enter a list of VLANs to choose from when creating a VM. It would be nice to have a pull-down list of all VLANs currently in use, with labels attached, if for no other reason than that I honestly don't change my production infrastructure often enough to remember what the different VLANs are. Scale tells me this feature is forthcoming.

I am told there are networking updates on the way, including a full Software Defined Networking (SDN) stack planned for some nebulous time in the future. Considering that Scale's current customer base is SMBs and they are only just now starting to move upmarket, I think that taking the time to get SDN working correctly is a rational business decision.

The KVM problem

With benchmarking, burn-in and initial set-up of the nodes out of the way, it was time to move my workloads over. My workloads were on Hyper-V and VMware and consisted of CentOS 6, Fedora 10, Fedora 12, Server 2003, Server 2008 R2, Server 2012 R2 and Windows 7.

In order to use these workloads on Scale I needed to migrate them from VMware and Hyper-V to KVM. The UI doesn't explain things very well, but there's an import button. I mashed it, fed it a copy of my VMs and tried to import. It churned for a while, but no dice. Scale informs me it is designed only to import .qcow2-formatted VMs.

In order to import my VMs I needed to use V2V software. Now, Scale offers this as a service to its customers, but I'm a stubborn git and insisted on doing it myself.
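
For the disk-format half of the job, it turns out the stock qemu-img tool is enough; what the commercial V2V products add is fixing up drivers inside the guest. A minimal sketch, assuming you can get at the source disk files (the file names here are hypothetical):

```python
import subprocess
from pathlib import Path

def vmdk_to_qcow2(src: Path, dst: Path) -> None:
    # qemu-img reads the VMware-format disk and writes the .qcow2
    # that Scale's importer expects; -p prints conversion progress.
    subprocess.run(["qemu-img", "convert", "-p",
                    "-f", "vmdk", "-O", "qcow2",
                    str(src), str(dst)], check=True)

vmdk_to_qcow2(Path("fileserver.vmdk"), Path("fileserver.qcow2"))
```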

Scale recommends Acronis for Linux workloads. For CentOS 6 and Fedora 12, Acronis worked just fine. Better than fine, actually, as it allowed me to expand the virtual hard disks on import. There was the usual faffing about with udev networking after the conversion, but if I'm being honest I should have set up the VMs not to have that problem from the start. It's not hard; I'm just lazy.
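
For anyone hitting the same udev snag: on CentOS 6-era guests the fix is to clear out the persistent-net rule that pins eth0 to the old hypervisor's MAC address. A minimal sketch, assuming the stock Red Hat file layout (adjust the paths for your distro), run inside the migrated guest:

```python
from pathlib import Path

# The stale rule maps eth0 to the old MAC, so the new virtual NIC
# shows up as eth1 with no configuration attached to it.
rules = Path("/etc/udev/rules.d/70-persistent-net.rules")
if rules.exists():
    rules.unlink()  # regenerated with the new MAC on next boot

# Drop the hard-coded MAC from the interface configs as well.
for ifcfg in Path("/etc/sysconfig/network-scripts").glob("ifcfg-eth*"):
    kept = [line for line in ifcfg.read_text().splitlines()
            if not line.startswith("HWADDR")]
    ifcfg.write_text("\n".join(kept) + "\n")
```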

Server? What server? It's a kitty! The kitty is named Paradox

Acronis is pretty straightforward. You turn your VM off, boot off the Acronis .iso and save a copy of the VM somewhere on your network. You then boot up the Acronis .iso in a blank VM on the destination server and import the copy. Easy peasy: it even handles driver injection for KVM.

Fedora 10 flat out didn't work. I didn't try Acronis with any of the Windows workloads, as Scale informed me they tend to use Double-Take Move. I wanted to do things just like any other customer would, so that's what I chose.

Double-Take Move is completely different. To make this work you need to have a "built" VM on the destination, but the target does not have to be the same as the source. In my case I had Server 2003 on my source and I wanted to not only move from VMware to KVM, I wanted to move from Server 2003 to Server 2008 R2.

I built a Server 2008 R2 VM on the Scale cluster, loaded the software on both VMs and told A to sync to B. A few minutes later, success! I now had both VMs running in lock step. All the services that were set up on the Server 2003 VM (DHCP, DNS, etc) were installed and configured on the Server 2008 R2 VM. Cross hypervisor and cross OS. I flipped workloads over with zero downtime. Pretty slick.

Sadly, Double-Take doesn't work on client OSes. It took a few frustrating hours to realise that, as Double-Take will install on Windows 7, but it will absolutely refuse to migrate it. I don't know what Scale's solution to this is. Mine was just to rebuild my Win 7 VMs from scratch, as I was tired of learning new V2V software.

Daily use

With workloads moved I was now committed. All my production workloads are now running on this utterly foreign KVM cluster with a custom UI. I've bet my business on Scale's absurdly positive reputation amongst my SMB peers. It's time to learn how to use the smeggling thing.

Learning to use every single feature Scale offers took all of about 5 minutes.

In part this is because there aren't a lot of features. You can create new VMs, take snapshots, make clones, migrate VMs and do imports and exports. You can load ISOs into the cluster's library, change e-mail settings for alerting and send VMs back and forth to remote clusters – think cross-cluster WAN/stretch vMotion.

You can add virtual optical drives, hard disks and NICs to VMs, though if we're being honest I haven't figured out how to remove them. You can rename VMs (but you can't use spaces) and you can send your snapshots to a remote cluster – think automated back-up to a public/hybrid cloud provider.
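
Scale's UI and replication are its own code, so none of the above maps onto a public API I can show you. But since it is all KVM underneath, here is what the snapshot piece looks like on a vanilla libvirt host, purely as a point of comparison; the VM name is hypothetical:

```python
import libvirt  # libvirt-python bindings, talking to a plain KVM host

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")

snapshot_xml = """
<domainsnapshot>
  <name>pre-update</name>
  <description>Taken before the quarterly patch run</description>
</domainsnapshot>
"""
dom.snapshotCreateXML(snapshot_xml, 0)  # internal snapshot; needs qcow2 disks
conn.close()
```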

Everything your average SMB user wants out of their hyper-converged appliance is there. The interface takes a few minutes to adjust to, and it has some quirks, but it's intuitive enough that I think most administrators will adjust very easily.

There doesn't appear to be anything like VMware's DRS: Scale will not move VMs around if you unbalance your cluster. There's also no RAM deduplication, so size your VMs wisely. There is thin provisioning, so don't worry about hard drive space.
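
Since nothing will dedupe or rebalance RAM for you, capacity planning is strictly back-of-envelope. Here's the arithmetic for my three HC1000s; the per-node overhead figure and the "survive one node" rule are my own planning assumptions, not Scale guidance:

```python
nodes = 3
ram_per_node_gb = 32        # HC1000 spec
overhead_gb = 2             # assumed hypervisor reservation per node

usable_per_node = ram_per_node_gb - overhead_gb
total_usable = nodes * usable_per_node             # 90 GB
survives_one_node = (nodes - 1) * usable_per_node  # 60 GB

print(f"Total usable RAM:           {total_usable} GB")
print(f"Budget if one node can die: {survives_one_node} GB")
```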

While the interface is reasonably intuitive, I'm not a fan of pictographs for everything. Perhaps it's my vSphere bias showing, but I prefer the whole "text menu" approach. It's fine as it is, but I am hoping that Scale will start to expand the features on offer. As it does so, I question whether the current pictographic approach will continue to be viable.

The only other complaint I have to level about the user experience is that I would far prefer to have my VMs listed similarly to the "details" view type in Windows Explorer, not the tiles that Scale currently uses. From a UI standpoint, I think OpenNebula is closer to my preference.

Hardware and performance

The three HC1000 nodes delivered are previous-generation quad-core E3 Xeons with 32GB of RAM each and four 10,000 RPM SAS drives. Each node has four 1GbE NICs. Scale tends to sell more of the other models than it does the HC1000s, but I find it great to test with the lowest available option, as it gives me an understanding of what the entry level is like. That matters to SMBs.

Server box ships with bezel, mounting brackets and a screwdriver

Gauging and discussing performance is hard. From a hardware standpoint these are some pretty wimpy nodes. The Neon server in my lab could eat a dozen of those and still have performance to spare.

I got the lowest-tier units out of the refurb pile. It's not fair to compare these to some completely ridiculous monstrosity. The only fair test is a like-for-like against similarly specced units running other hypervisors.

It just so happens that I have a customer with three very similar nodes. The HC1000s are Supermicro boxes under the hood (Scale uses Dell for their larger nodes) and I am pretty sure that any differences between the Supermicro nodes I selected for comparison and the Scale nodes are minor motherboard artefacts.

I'm not going to bore you with numbers, as benchmarking is pretty meaningless in this scenario. What I did note was the following:

Scale's hyper-converged approach costs about 4% of a 2.6GHz 4-core Xeon v2's CPU power when compared to KVM nodes using iSCSI.

Scale was actually faster than VMware ESXi + VSAN in more runs than it was slower. The differences were marginal enough to call it a tie.

Scale crushed Hyper-V like a bug. I don't have a three-node hyper-converged solution for Hyper-V, so I used a three-node Hyper-V iSCSI set-up (with 12 × 10,000 RPM drives in RAID 6 in the iSCSI array) for testing. Scale migrated workloads faster, delivered about 15% more IOPS and was able to deliver 20% more CPU to the VMs.

I am not sure why the Hyper-V test turned out the way it did. Hyper-V shouldn't perform this badly against KVM. I was using CentOS 6 VMs for testing, and had the relevant virtual tools loaded for all platforms.

Test after test, I came to the conclusion that KVM is not an inferior hypervisor to VMware or Hyper-V. It performs as well or better in almost every test I could come up with. Scale's hyper-converged extensions to KVM – which are just a couple of extra modules, in order to keep ongoing maintenance simple – don't seem to add a lot of overhead.

Put simply: there are no performance reasons not to choose a KVM-based hyper-converged solution over any of the competing hypervisors or storage types. I'll be honest when I say that result surprised me. I always thought of KVM as the poor country cousin of hypervisors, but I was clearly wrong.

Datacenter Butler

I've been through two code updates with Scale and one dead hard drive replacement. How Scale deals with these is entirely different from how other SMB vendors I've worked with approach things, and more in line with how enterprise vendors used to behave – and how some still do, if you pay enough money.

Not long after the nodes went up I got an e-mail from a Scale support tech informing me that a hard drive had died. I was intrigued, and a little concerned. I hadn't set up any monitoring on the nodes yet, so I missed the dead-drive event, but it bothered me that my shiny new hyper-converged appliance was calling home without my knowledge.

After some discussion with Scale's CTO I got an idea of what kinds of information the nodes send back to the mothership. They basically send alerts and some usage statistics – nothing that makes me anxious, and I'm the poster child for not wanting any of my data sent to the USA.

Software updates were handled in a similar fashion. An e-mail arrived in my inbox informing me that there was a new version available and would I please enter the code from that e-mail into the support section of the Scale UI. This causes the Scale cluster to open a VPN tunnel back to Scale HQ, and the Scale support staff perform a completely interruption-free update for you. You get an e-mail when it's all done.

If you prefer to do it all yourself, there is of course the option in the interface to do so. Updates are straightforward and workloads are easily migrated from one node to another in order to do updates in sequence.

Again, consider for a moment Scale's target market. SMBs are lucky if they have an IT guy at all, let alone one who is going to keep on top of everything. Scale are shipping a turnkey hyper-converged appliance with the full white-glove treatment for so far below the major enterprise vendors' prices that I'm honestly shocked one of the majors hasn't bought them out just to kill them off.

Parting thoughts

I'm now four months into using Scale, three of them with my company's production VMs running on these nodes. There is effort involved in moving from other hypervisors over to KVM, but not nearly so much as the hardcore VMware and Hyper-V fanboys would have you believe.

Rail kit contains link to how to install rail kit. Bloody brilliant

Porting VDI installations over to KVM would be a pain. Heck, Scale doesn't even have much in the way of VDI features. But VDI deployments are almost always separate from server clusters anyway, so it's entirely possible this won't matter, even to the few non-enterprises running VDI.

Scale isn't ready to take on EVO:Rail, Nutanix or SimpliVity directly. The featureset isn't there. But from what I hear, speaking to my network of SMB admins around the world, Scale are doing serious damage to the typical HP/Dell "servers + array" racket in the SMB space.

For example, Scale is doing very well here in Canada. We're a nation of SMBs, so that's to be expected. They're also making more headway in Europe than I would have initially thought, and they have been elevated to cult status within the Spiceworks community.

Scale have made a great Toyota. That's fine and good, but can they move upmarket and start selling against the Ferraris or Lamborghinis? And how well will their Datacenter Butler service scale? Is such a high level of service really viable for customers with thousands of nodes, or once Scale have tens of thousands of customers?

Scale aren't fancy, flashy, screaming fast or feature packed. Scale are "good enough" made manifest and affordable to a sizable chunk of the world's billion+ SMBs. Maybe, for now, that's all they really need to be. ®
