You know how these things happen. A representative from a customer is wheeled in by some major vendor to demonstrate the efficacy of their wondrous technology, while the assembled audience smile politely and pray for the coffee.
That’s what happened recently in Raleigh, North Carolina, where IBM was demonstrating the efficacy of its Blade servers, and Gregg Ferguson, Kilo Client manager at Network Appliance, had the job of showing it all in dynamic action.
Somnolence could then easily have come along in its allotted place, had it not been for the fact that the penny dropped: Kilo Client is actually rather clever, and could be of direct interest to bigger applications development shops. There was also the point that Ferguson had himself only recently rumbled this potential.
Kilo Client has been developed over the last year by Network Appliance to fill a specific testing gap. The company makes, amongst other things, large Storage Area Networks (SANs), and to run effective QA tests on such systems it needs access to a lot of servers that can be reconfigured into target infrastructures as quickly as possible. The blade server solution was an obvious option to consider and, so long as there was a suitable management environment that could reconfigure servers quickly, an interesting twist on the test rig idea became possible.
In essence, what the QA team ended up with was a 1,000-blade system on which large SAN systems could be simulated for testing purposes – hence the Kilo Client name tag. Put simply, the team tells the sysadmin what they want – a stated number of servers running defined operating systems and applications, for two days starting next Tuesday – and admin configures the management environment to provide it. As the configured system is virtual, no new hardware needs to be bought, and once the job is completed it all slides back into the resource pool for subsequent re-use.
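Network Appliance has not published an interface for these requests, so the following is purely an illustrative sketch – every field name is invented – of the sort of structure such a "servers, OS, applications, start date, duration" request boils down to:

```python
# Hypothetical sketch of a Kilo Client resource request. NetApp has not
# published its actual interface; all names here are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResourceRequest:
    servers: int                      # number of blades wanted from the pool
    os_image: str                     # operating system image to boot off the SAN
    applications: list = field(default_factory=list)  # software to lay down
    start: date = date.today()        # e.g. "starting next Tuesday"
    days: int = 2                     # duration before it all returns to the pool

# The QA team's request, as it might look:
request = ResourceRequest(
    servers=40,
    os_image="linux",
    applications=["oracle", "apache"],
    start=date(2007, 5, 15),
    days=2,
)
print(f"{request.servers} servers from {request.start} for {request.days} days")
```

Once the duration expires, the management environment would simply release the blades back into the shared pool for the next request.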
There are other factors here, such as the ability to save commonly used configurations for rapid re-use and the ability to boot the servers straight off the SAN rather than from local disk drives. Groups of 252 servers – known, strangely, as 'Poduals' – share services, and each server is connected via 1 Gbit Ethernet to a single switch with eight 10 Gbit uplink connections.
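A quick back-of-the-envelope check on those figures shows what the design implies: 252 servers at 1 Gbit each against eight 10 Gbit uplinks gives a worst-case oversubscription of roughly 3:1 – comfortable for QA traffic, where all servers rarely saturate their links at once:

```python
# Back-of-the-envelope check on the Podual network figures quoted above.
servers_per_podual = 252
server_link_gbit = 1          # 1 Gbit Ethernet per server
uplinks = 8
uplink_gbit = 10              # eight 10 Gbit connections on the switch

ingress = servers_per_podual * server_link_gbit   # 252 Gbit/s worst case in
egress = uplinks * uplink_gbit                    # 80 Gbit/s out of the switch

oversubscription = ingress / egress
print(f"Oversubscription ratio: {oversubscription:.2f}:1")  # 3.15:1
```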
So you get the picture – very high bandwidth communications and very flexible infrastructure management, all designed for QA testing. The penny that dropped simply asked: “can this be used for applications development teams as well?” The answer – “hmmmm, well, yes. Nobody has asked yet, but yes it can,” spoke volumes.
Here is an infrastructural approach to the needs of applications developers that should be of great interest. Imagine: you realise the project you have been asked to complete (on time and under budget, naturally) needs several servers built on processor A, running operating system B, and with a list of hardware resources as long as your arm. These days that probably means sticking your neck out by taking a chit along to your boss – with a probable long wait as the star prize. Or you could email sysadmin with a resource provision request.
Gregg Ferguson would love to hear from development shops that could use a Podual or two as the bedrock of their development environments. It wouldn’t be cheap to start with, but it could give some shops a real edge in delivering the goods. ®