What Will the Data Center of 2020 Look Like? Pg2

by Asif Khan

The Coming Decade

Diane Greene, co-founder and former CEO of VMware, once said, “VMware is the most non-disruptive disruptive technology.” In other words, over the past decade many customers virtualized their physical servers and then continued to run things pretty much the same way they ran them before virtualizing (aka “paving cow paths”). Now the emerging popularity of IT-as-a-Service (OK, cloud computing) will force IT departments to rethink business-as-usual and start automating manual tasks if they don’t want to lose their customers (and, by extension, their jobs) to more nimble providers of these services. The center of gravity has definitely shifted to the end user, and that trend will only accelerate this coming decade.

One of my interviewees worked at a large law firm. Her CIO didn’t believe in cloud computing but was abruptly forced to rethink his position last year. The accounting department was investigating several employees for fraudulent use of their corporate credit cards. It was assumed they were making personal purchases from Amazon, but as it turned out, they were actually self-provisioning VMs from Amazon Web Services and charging them to their credit cards. This was a lot faster, the accused employees explained, than waiting for their IT department to respond. The CIO quickly realized that he had to adapt or become irrelevant. I bet we’ll be hearing more stories like this.
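To appreciate just how low that barrier is, here is a minimal sketch of self-provisioning a VM with AWS’s Python SDK (boto3); the region, AMI ID, and key pair name below are placeholders, and it assumes credentials are already configured:

```python
# Minimal sketch: self-provisioning a VM from AWS in a few lines of Python.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID,
# instance type, and key pair below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",     # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print("Provisioned:", instances[0].id)
```

A few minutes and a credit card, versus a ticket queue: that is the gap those employees were routing around.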

Another area where cloud will disrupt is the network itself. Just as smart IT managers maintain a permanent staff just big enough to manage routine workloads and then hire outside consultants to handle the extra work needed for peak workloads and special projects, hybrid cloud infrastructures will allow IT organizations to do the same with their application workloads.
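As a sketch of that staffing analogy in code, here is a hypothetical burst-scheduling check an orchestrator might run; the 80% threshold and both helper functions are illustrative stubs, not any particular vendor’s API:

```python
# Hypothetical hybrid-cloud "bursting" logic: keep a fixed on-premises pool
# sized for routine load, and rent public-cloud capacity only for the peaks.
# local_utilization() and provision_cloud_nodes() are illustrative stubs.

BURST_THRESHOLD = 0.80  # burst once the local pool is 80% utilized

def local_utilization():
    """Stub: fraction of on-premises capacity currently in use."""
    return 0.85

def provision_cloud_nodes(count):
    """Stub: in practice this would call a public-cloud API such as EC2."""
    print(f"Bursting: provisioning {count} cloud node(s)")

def rebalance():
    utilization = local_utilization()
    if utilization > BURST_THRESHOLD:
        # Size the burst to absorb the overflow above the threshold.
        overflow_pct = (utilization - BURST_THRESHOLD) * 100
        provision_cloud_nodes(max(1, round(overflow_pct)))

rebalance()
```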

And network virtualization will enable real-time application migrations between data centers. Scheduled downtime, as we know it, will eventually become a fond childhood memory (well, infrastructure downtime anyway; application resiliency and uptime is a whole other issue). Users demanded it, and now the technology is almost ready to deliver on that promise. (NOTE: I realize that various technologies already exist to stretch Layer 2 across geographies, i.e., to enable long-distance live migration, but they are not quite mature enough to recommend to my largest enterprise clients just yet.)
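Within a single data center, live migration is already routine. A minimal sketch using the libvirt Python bindings shows the mechanics; the destination host and VM name are placeholders, and a move between data centers would additionally assume the stretched Layer 2 and storage reachability I just mentioned:

```python
# Sketch of a live VM migration using the libvirt Python bindings.
# The destination host and VM name are placeholders; cross-data-center
# migration would also require stretched Layer 2 and reachable storage.
import libvirt

src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://remote-host/system")  # placeholder destination

dom = src.lookupByName("app-vm-01")                  # placeholder VM name
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("Migrated live; the guest kept running throughout.")
```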

Even the last vestige of legacy network technology, the Public Switched Telephone Network (PSTN), will finally be converged. AT&T announced in 2009 that it would replace more than 5,000 PSTN central offices with a “handful of data centers” nationwide by 2018 and save $3.5B per month in operating costs by converting to an all-IP network for voice, data, and wireless.

Servers became commoditized over the last decade and will likely continue evolving toward disposable-node, high-density, scale-out compute farms. The emergence of MapReduce-based scale-out architectures will force these kinds of design changes beyond just compute nodes.
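For readers who haven’t looked under the hood of MapReduce, a toy word count in plain Python shows the shape of the paradigm and why it scales out so naturally: the map and reduce phases are independent per key, so a real cluster fans them out across many commodity nodes.

```python
# Toy word count in the MapReduce shape: map emits (word, 1) pairs, a
# shuffle groups them by word, and reduce sums each group. On a real
# cluster, each phase runs in parallel across many commodity nodes.
from collections import defaultdict

def map_phase(document):
    return [(word, 1) for word in document.split()]

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the network is the computer",
        "the network is the storage array"]

# Shuffle: group the intermediate pairs by key.
grouped = defaultdict(list)
for doc in docs:
    for word, count in map_phase(doc):
        grouped[word].append(count)

print(reduce_phase(grouped))  # {'the': 4, 'network': 2, 'is': 2, ...}
```

Take storage, for example.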

Storage arrays are the modern equivalent of monolithic mainframes, in that the data and the management of that data are centralized and presented to the requestor over a relatively high-latency network. But scale-out file systems typically require the compute and storage to co-reside. Does that mean:

- increased reliance on DAS (direct-attached storage) to support the increasing popularity of scale-out architectures,
- expanded use of PCIe cache engines (Fusion-io, EMC VFCache) to minimize network latency,
- the storage array itself running the hypervisor and hosting VM workloads (if the storage can’t be brought closer to the compute, bring the compute closer to the storage),
- or all of the above?
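One reason those file systems want compute and storage co-resident is data locality: the scheduler prefers to run a task on whichever node already holds its input, rather than dragging the data across the network. A hedged sketch of that placement decision, with an invented metadata table and node names:

```python
# Illustrative sketch of data-locality scheduling (the idea behind HDFS
# and MapReduce placement): prefer running a task on a node that already
# stores its input block. block_locations stands in for real metadata.

block_locations = {
    "block-001": {"node-a", "node-c"},  # nodes holding replicas
    "block-002": {"node-b"},
}

def pick_node(block_id, idle_nodes):
    """Prefer an idle node that already has the block (no network hop);
    otherwise fall back to any idle node and pay the transfer cost."""
    local = block_locations[block_id] & idle_nodes
    return (local or idle_nodes).pop()

print(pick_node("block-001", {"node-b", "node-c"}))  # -> node-c (local)
```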

One trend seems to be emerging: centralized MANAGEMENT of data is good but decentralized PLACEMENT of that data is better.

This trend of migrating the data (and sometimes the management of that data) away from centralized arrays is not just for emerging scale-out applications like Hadoop. Look at Exchange 2010 DAGs (Database Availability Groups), which replicate mailbox databases across servers on direct-attached storage, as a precursor of things to come. The array will probably downsize over time, and the stateful bits may reside wherever the data is being computed while the control plane remains centralized.

Scott McNealy, the co-founder of Sun Microsystems, was prescient when he said two decades ago that “the network is the computer.” That may finally be true today, but within the next decade the network may also be the storage array. I’m writing a more detailed post on this topic. Stay tuned…

NEXT: What About Converged Infrastructure?

===

Pg1: The Interview

Pg2: The Coming Decade

Pg3: What About Converged Infrastructure?

