by Asif Khan
What About Converged Infrastructure?
Last year, converged infrastructure (e.g., EMC’s Vblock, NetApp’s FlexPod, IBM’s CloudBurst, HP’s CloudSystem) made a big splash. The premise is to deliver a self-contained rack with a single SKU and a single phone number to call for support, whether the issue is compute, storage, network or hypervisor. Though the concept is powerful for smaller IT shops, or for specific applications (like VDI or ERP) within large IT shops, it is not likely to catch on with large enterprises in the near future for two BIG reasons: process and people.
Large and sophisticated enterprise IT shops typically have well-documented processes for managing, monitoring and supporting their environments. They have good relationships with their vendors and they are used to getting responsive service. Introducing converged infrastructure means those processes have to be re-designed and re-documented, and the cost and complexity of re-engineering them outweighs the simplicity of having a single throat to choke.
A bigger issue with large enterprises is organizational structure. Empires have been built around singular functions (network, compute, storage, etc.). Implementing converged infrastructure across the board means rethinking the org chart. Who will own this *cloud appliance*? Should new cross-discipline teams of engineers be created, or should the existing silos remain separate? Should the organization embrace a DevOps-like approach to realize the efficiencies of a Google, Facebook or Amazon? What does that do to the management hierarchy? Will middle managers lose their jobs?
I recently took a tour of HP’s Build-To-Order factory in Houston. Years ago, HP eliminated the assembly line in favor of something called cell manufacturing. Instead of separate lines where workers specialize in installing only motherboards, hard disks or power supplies all day long, teams assemble the entire unit together at a build station. That way, the team can communicate, collaborate and approve each unit before sending it to a test station. HP explained that this made it easier to detect failures early and fix them before the test phase. The process turned out to be far more efficient and reliable than assembly-line manufacturing, where each component was installed separately from the others.
Should IT organizations be considering such a change in how they deliver IT services? Absolutely. But these things take time.
What About the Datacenter Itself?
I saved the best for last. I recently attended a two-day technical briefing at HP’s Customer Experience Center in Houston. On the second day, they took us on a factory tour. Although it wasn’t part of the tour, someone asked about a sign that said “POD Station.” We went over to see these “Performance Optimized Datacenters” up close. That did it. We were all totally hooked!
We cancelled the rest of our agenda and brought in a guy they call “The PODfather” for a deeper dive.
What Is a POD And Why Is It Cool?
A POD is either a 20ft or 40ft datacenter-in-a-box. It is designed to ship on an 18-wheeler trailer bed, cargo ship or large plane to anywhere in the world where a data center is needed in a hurry. PODs were originally introduced in 1997 for the military to set up IT command stations in war zones with no power, intense heat and lots of sand (all of which wreak havoc on IT equipment). PODs are NEMA-3R certified, so they can be placed outdoors, and they are wind-, rain- and snow-proof.
PODs are designed for high-density compute farms. Depending on the model, you get either 22 or 44 racks in a POD. If you believe any of the predictions above, you can immediately see that the POD is the datacenter of the future. Efficient? A Tier 3 data center costs $24/watt to operate; the POD costs less than $15/watt. PUE (power usage effectiveness) is usually 2.0 or greater for a Tier 3 data center; it is 1.05 to 1.25 for a typical POD installation. (PUE = total power drawn from the street divided by the power actually delivered to the IT equipment. Lower is better; 1.0 is ideal.)
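To make the PUE numbers concrete, here is a minimal back-of-the-envelope sketch in Python. The 400 kW IT load is an illustrative assumption of mine, not an HP figure; the PUE values are the ones quoted above.

```python
# Back-of-the-envelope PUE comparison.
# PUE = total facility power / power delivered to IT equipment
# (lower is better; 1.0 is ideal).
# The 400 kW IT load is an illustrative assumption, not an HP figure.

IT_LOAD_KW = 400.0

def facility_power(pue: float, it_load_kw: float = IT_LOAD_KW) -> float:
    """Total power drawn from the street for a given PUE and IT load."""
    return pue * it_load_kw

for label, pue in [
    ("Tier 3 data center", 2.0),
    ("POD (best case)", 1.05),
    ("POD (worst case)", 1.25),
]:
    total = facility_power(pue)
    overhead = total - IT_LOAD_KW
    print(f"{label}: PUE {pue} -> {total:.0f} kW from the street, "
          f"{overhead:.0f} kW lost to cooling and power overhead")
```

At a PUE of 2.0, every watt of compute drags another watt of overhead along with it; at 1.05, the overhead is roughly a nickel on the dollar.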
Cost-effective? The 20C model is less than US$1M. The 40C is about twice that. There is also a 240A model, around US$4M, which is a self-contained dual-40C configuration with an integrated environmental control system. The POD delivers datacenter deployments up to 88% faster, reduces capital expenditures by up to 75% and reduces energy waste by up to 95% compared to traditional brick-and-mortar datacenters.
From the time an order is placed, a POD ships within six weeks, pre-loaded, pre-configured and pre-tested. A typical data center takes 18-24 months to go live. The value proposition is so compelling that HP has stated that its future datacenter growth will be almost entirely POD-based. In other words, they are placing lots of POD-based compute farms in empty warehouses that are fiber-linked to their existing datacenters, at a small fraction of the cost of building new datacenters.
Oh, one more thing. The IRS considers the POD to be IT equipment, so you can depreciate it over 3 years vs. 30 years for a traditional data center! I could go on. I don’t mean to sound like an infomercial for HP but this is really cool! Finally, the datacenter itself has become nimble, agile, flexible and all those other superlatives we use to describe the *cloud*. Several other vendors make modular data centers, so if you’re not an HP fan, shop around.
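To see why the accountants care, here is a rough sketch of what that schedule difference means year to year, assuming simple straight-line depreciation and an illustrative $1M purchase price (the actual method and numbers will vary; ask your own tax folks):

```python
# Rough straight-line depreciation comparison (illustrative only, not tax advice).
# Assumes a $1M purchase price and zero salvage value.

PURCHASE_PRICE = 1_000_000

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line annual depreciation expense, assuming zero salvage value."""
    return cost / useful_life_years

pod_per_year = annual_depreciation(PURCHASE_PRICE, 3)        # treated as IT equipment
building_per_year = annual_depreciation(PURCHASE_PRICE, 30)  # treated as a building

print(f"POD (3-year schedule):       ${pod_per_year:,.0f} per year")
print(f"Building (30-year schedule): ${building_per_year:,.0f} per year")
```

Writing off the same dollar roughly ten times faster is a big deal to the CFO.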
Do you agree with these predictions? I tried to confine the discussion to datacenter infrastructure only. Did I miss anything obvious? Let me know in the comments. Thanks as always for reading the blog.