Believe it or not, virtualization has been around for longer than one might think.
In the early 1970s, IBM offered the first commercial mainframe to support virtualization. In 1997, virtualization software arrived on the Mac, letting users run a copy of Windows to get around software incompatibilities.
But mainstream adoption of server virtualization in the data center didn’t really take off until the late 1990s, when VMware began offering products, followed quickly by several other vendors.
While there will always be applications that require a dedicated physical server, virtualization has seen exponential growth over the past two decades. Gartner estimates that the U.S. market is about 68% virtualized and forecasts that this share will continue to climb. Accordingly, virtualization is continuing to make its mark.
For all the talk these days about the cost of building a data center, the adoption of virtualization is actually reducing or delaying that spending. What used to be thousands of servers running at 20% or less of capacity across dozens of cabinets is now being drastically consolidated, in some cases tenfold, to hundreds of servers operating at 75 to 90% capacity or more.
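The consolidation math behind those figures can be sketched with a quick back-of-the-envelope calculation. All of the numbers below are illustrative assumptions, not data from any specific deployment:

```python
# Back-of-the-envelope server consolidation estimate.
# All figures are illustrative assumptions, not measured data.

physical_servers = 2000        # assumed legacy fleet size
legacy_utilization = 0.20      # ~20% average utilization before virtualization
target_utilization = 0.80      # within the 75-90% range after virtualization

# The total useful work stays the same; only the packing changes.
useful_capacity = physical_servers * legacy_utilization   # "server-equivalents" of real work
virtualized_hosts = useful_capacity / target_utilization  # hosts needed at 80% utilization

print(round(virtualized_hosts))               # hosts after consolidation
print(physical_servers / virtualized_hosts)   # consolidation ratio
```

Raising utilization from 20% to 80% alone yields roughly a 4:1 consolidation; the higher (tenfold) ratios often cited also come from retiring servers that were sitting nearly idle or overprovisioned for peaks that virtualized pools now absorb.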
With additional cabinets and space no longer needed, data center build-outs can be delayed. Network connectivity requirements also shrink, because consolidated servers drive their network links at much higher utilization.
The proof is in the press. In February of this year, Barclays announced that it had shut down six data centers, cutting operating expenditures by eliminating 6,000 servers. The U.S. General Services Administration (GSA) cites consolidation as the major initiative behind its plan to close 24 of its data centers this year.
Due to virtualization, spending on physical server hardware has decreased or remained relatively flat, as shown in purple in the chart.
As indicated by the red and blue lines, the installed base of logical (virtualized) servers has increased while the installed base of physical servers has flattened out considerably since 2007.
As virtualization continues to increase and the number of servers is reduced, overall energy savings grow. However, the power density at the cabinet level is increasing.
What used to be considered an average 5 to 6 kW per cabinet is now reaching upwards of 12 to 15 kW. Consequently, while virtualization can delay build-outs, data centers need enough power and cooling (or the ability to get more) to continue along the virtualization path.
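To see why density, not total load, becomes the constraint, here is a short illustrative comparison using the kW ranges above (cabinet counts are assumed for the sake of the example):

```python
# Illustrative cabinet power-density comparison; all counts are assumptions.

legacy_cabinets = 40
legacy_kw_per_cabinet = 5.5        # midpoint of the 5-6 kW range

virtualized_cabinets = 12          # assumed consolidated footprint
virtualized_kw_per_cabinet = 13.5  # midpoint of the 12-15 kW range

legacy_total_kw = legacy_cabinets * legacy_kw_per_cabinet
virtualized_total_kw = virtualized_cabinets * virtualized_kw_per_cabinet

# Total facility load can actually drop even as per-cabinet density
# more than doubles - the heat is simply concentrated in fewer cabinets,
# which is what drives containment and spot-cooling strategies.
density_increase = virtualized_kw_per_cabinet / legacy_kw_per_cabinet
print(f"{density_increase:.1f}x per-cabinet density")
```

In this sketch the facility draws less power overall, yet each remaining cabinet must be fed and cooled at roughly two and a half times the old density, which is exactly the power and cooling headroom question raised above.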
One result of this increasing power density is the use of new cooling strategies like containment, free cooling and even elevating the overall data center operating temperature. So while physical server spending might be on the decline, the increase in virtualization will continue to cause power and cooling costs to trend upward.
As virtualization continues to increase, it will have an ongoing impact on industry trends. Technologies like extremely low energy (ELE) servers and trends like cloud computing are directly impacted by virtualization. ELE servers enable extremely high-density server environments at a fraction of the power, and virtualized applications can be more easily moved into a cloud architecture to take advantage of hyperscale data center environments.
The hyperscale environments themselves with millions of virtualized servers are looking to cut costs by shifting to customized purpose-built servers via original design manufacturers (ODMs) that eliminate unnecessary features and components for more efficient operation and energy use. This trend isn’t such great news for big server manufacturers, and already the likes of HP and Dell are reporting lower sales and eroding margins. Early this year, IBM announced that it was pulling out of the high-volume server business and shifting its investment to cloud technologies and services to shore up its fortunes. The next 5 to 10 years will likely reveal even more of virtualization’s impact in this area.
The good news is that whether you’re just starting to virtualize or really ramping it up, Belden has a wide range of connectivity, cabinets and cable management solutions to help you support higher densities at the cabinet level.
Our innovative FiberExpress Ultra High Density fiber connectivity system is ideal for end-of-row high port counts in data center equipment areas, while our Adaptive Enclosure Heat Containment system effectively supports higher heat loads and can be spot applied as you consolidate your data center via virtualization.
Have more questions about consolidating your data center? Schedule a call with one of our experts or post a comment below to get the answers you need.
Mike Salvador is a 28-year industry veteran, living the challenge of operating efficient data centers, optimizing the performance of network devices and delivering highly available, highly agile, low-risk data centers. Mike served as Belden’s technical solutions manager from 2012 to 2015.