Since 25G Ethernet deployment began in 2015, the industry’s major players have been innovating and collaborating to meet your demands for more bandwidth with higher-performance transmission technology.

Driven by the growing cloud ecosystem, data centers have become the fastest-growing Ethernet market. They’re the center of many technology innovations – especially hyperscale data centers. According to the Cisco Global Cloud Index forecast, global data center traffic is growing at a compound annual growth rate (CAGR) of 25% (33% CAGR in cloud data centers and 5% CAGR in traditional data centers).

Global Data Center Space (Source: 451 Research)

Server Shipment Split (Source: Dell’Oro, 2015)

Data Center Types

Hyperscale data centers are server farms built with huge footprints (think at least the size of a few football fields), offering public cloud services like computing, data processing, storage and social networking. Hyperscale data centers are typically owned by “web 2.0” companies such as Amazon, Google, Facebook, Microsoft and Apple due to the global cloud-based services and applications they offer to enterprise and individual customers.

Multi-tenant data centers (also known as MTDCs or colocation data centers) rent space, power, cooling and an Internet connection to enterprise and cloud service providers that bring in their own networking equipment. Large MTDC service providers (think Equinix and Digital Realty) can also sell highly efficient computing and network capacity as a service. Some MTDC providers are beginning to offer more elaborate services, such as direct cross-connect to cloud service providers, or specialized services such as fast trading platforms.

The traditional enterprise data center segment is much larger than hyperscale data centers and MTDCs in terms of total data center space and server shipments. After investing in building and managing their own data centers, many enterprises are now migrating from the in-house private data center model to the leasing model seen in multi-tenant data centers. They’re also investigating pay-as-you-grow public cloud services like Amazon Web Services, Microsoft Azure and Google Cloud. Although public cloud services offer many advantages – fast provision time and lower cost of ownership, to name a few – many enterprise types, such as media, research, healthcare, financial institutions and government agencies, still prefer to keep mission-critical and confidential applications in their own private data center facilities.

Enterprise IT Infrastructure survey (Belden/DatacenterDynamics, August 2016)

According to a recent survey we conducted as part of a webinar with DatacenterDynamics, many enterprise data center owners still use their in-house IT infrastructure or private cloud while others have already migrated to an MTDC or the public cloud.

Ethernet Applications and Deployment in Enterprise Data Centers

In enterprise data centers, best practices for Ethernet network infrastructure deployment follow IEEE standards and ANSI/TIA and ISO/IEC structured cabling guidance. By installing standards-compliant active gear and structured cabling systems, you get to reap the benefits of mature technology, low costs and product availability from many interoperable vendors.

In enterprise access networks (servers, storage, access switches, etc.), current mainstream speeds are migrating from 1G to 10G. Depending on the active gear interface (RJ45, SFP+, QSFP+, etc.) and the data center topology, 10G Ethernet interconnect solutions can include 10GBASE-T for a reach of up to 100m, SFP+ direct-attach copper (DAC) for a reach of up to 7m, and multimode transceivers with multimode fiber for a reach of up to 400m over OM4 cable.
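
To make those reach trade-offs concrete, here’s a minimal Python sketch that filters the three interconnect options above by link length. The reach figures simply restate the numbers in this post; the dictionary and function names are ours for illustration.

```python
# A minimal sketch: typical maximum reaches for common 10G server-to-switch
# interconnects, restating the figures cited in this post (not vendor data).

MAX_REACH_10G_M = {
    "10GBASE-T (Cat 6A twisted pair)": 100,
    "SFP+ direct-attach copper (DAC)": 7,
    "10GBASE-SR over OM4 multimode fiber": 400,
}

def viable_10g_options(link_length_m: float) -> list[str]:
    """Return every 10G interconnect option that can span the given link."""
    return [name for name, reach in MAX_REACH_10G_M.items()
            if link_length_m <= reach]

for length in (5, 80, 250):
    print(f"{length} m: {viable_10g_options(length)}")
```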

Data Center Topology and Technologies

There are many possible data center topologies and technologies, and they determine which Ethernet solutions fit best:

  • In a centralized data center topology, 10GBASE-T twisted-pair and multimode fiber solutions are the most suitable
  • In MoR (middle-of-row) or EoR (end-of-row) topologies, 10GBASE-T and multimode fiber solutions, including active optical cable (AOC), are the most suitable
  • In ToR (top-of-rack) topology, 10GBASE-T direct connect or SFP+ DAC are the most cost-effective solutions (summed up in the sketch after this list)
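
For quick reference, the same guidance can be captured as a small lookup table – a sketch that simply mirrors the bullets above and makes no claims beyond them.

```python
# The bullets above restated as a lookup table; topology names and option
# lists come straight from this post and add nothing new.

RECOMMENDED_10G_MEDIA = {
    "centralized": ["10GBASE-T twisted pair", "multimode fiber"],
    "MoR/EoR": ["10GBASE-T", "multimode fiber", "active optical cable (AOC)"],
    "ToR": ["10GBASE-T direct connect", "SFP+ DAC"],
}

def recommend(topology: str) -> list[str]:
    """Look up the cost-effective 10G media options for a topology."""
    return RECOMMENDED_10G_MEDIA.get(topology, [])

print(recommend("ToR"))  # ['10GBASE-T direct connect', 'SFP+ DAC']
```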

Some switches equipped with 40G QSFP+ ports can be configured as 4× 10G ports; in this case, high-speed assemblies such as DAC and AOC breakouts can offer cost-effective, easy-to-install server-to-switch interconnect solutions.
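
Here’s a minimal sketch of what that breakout looks like logically: one 40G parent port fanning out into four 10G sub-ports. The port-naming convention used here is hypothetical – every switch operating system has its own breakout syntax and naming scheme.

```python
# A sketch of the 4x 10G breakout idea: one QSFP+ parent port fans out into
# four logical 10G sub-ports. The "Ethernet1/1" naming is hypothetical.

def breakout_ports(parent_port: str, lanes: int = 4) -> list[str]:
    """Name the logical sub-ports created by breaking out a parent port."""
    return [f"{parent_port}/{lane}" for lane in range(1, lanes + 1)]

print(breakout_ports("Ethernet1/1"))
# ['Ethernet1/1/1', 'Ethernet1/1/2', 'Ethernet1/1/3', 'Ethernet1/1/4']
```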

Data Center Topologies (Source: Anixter)

Data Center Interconnect Solutions vs. Topology

In the enterprise aggregation network, the mainstream speed is migrating from 10G to 40G Ethernet. 40G QSFP+ transceivers and multimode fiber solutions are the most cost-effective way to support a reach of up to 150m over OM4.

In some scenarios, the upgrade from 10G to 40G can be achieved smoothly with BiDi transceivers, without replacing the installed LC-duplex multimode fiber pairs with MPO trunks. Care must be taken, however: installed legacy fiber cable may not be able to support the bandwidth and reach required at the new 40G Ethernet speed.
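
One way to formalize that caution is a simple feasibility check against published reach limits. The sketch below assumes the IEEE 40GBASE-SR4 figures (100m over OM3, 150m over OM4) and deliberately leaves out BiDi, whose reach varies by vendor.

```python
# A rough feasibility check for reusing installed multimode fiber at 40G,
# assuming IEEE 40GBASE-SR4 reach limits: 100 m over OM3, 150 m over OM4.
# BiDi transceiver reaches vary by vendor, so they are not modeled here.

SR4_MAX_REACH_M = {"OM3": 100, "OM4": 150}

def supports_40g_sr4(fiber_grade: str, link_length_m: float) -> bool:
    """Return True if the installed fiber grade can carry 40GBASE-SR4."""
    reach = SR4_MAX_REACH_M.get(fiber_grade)
    return reach is not None and link_length_m <= reach

# A 120 m OM3 link that was fine at 10G fails the 40G check:
print(supports_40g_sr4("OM3", 120))  # False
print(supports_40g_sr4("OM4", 120))  # True
```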

Data Center Interconnect

Ethernet Deployment in MTDCs

MTDCs host not only enterprise customers, but also web portals and cloud service providers like Amazon, Facebook, AT&T and LinkedIn, helping them extend their geographic coverage and service quality. Typically driven by customer requirements, 10G and 25G Ethernet are deployed in the access network, while 40G and 100G Ethernet are deployed in the aggregation network.

MTDCs also offer direct cross-connects with dedicated lines to Internet service providers and cloud service providers, providing a secure, highly available service with consistently high performance. The design of these data centers and their components is still predominantly standards-based, but it can be adapted to customers’ special requirements or follow Open Compute Project (OCP) designs.

Ethernet in Hyperscale Data Centers

Hyperscale data centers are an exciting hotbed for new technologies, new data center topologies and new networking and connectivity approaches. Their bandwidth requirements are the most demanding and challenging of all, and these facilities have adopted leaf-spine architectures with different application-optimized designs.

To tackle port-density challenges, the server and top-of-rack (leaf) switch I/O speeds have migrated from 10G and 40G to 25G/50G and 100G. High-speed assemblies like DAC and AOC are widely installed in these gigantic server farms for the best power and cost efficiency possible.

As 25G/100G Ethernet deployment picked up in 2015, hyperscale data centers began replacing multimode optics with singlemode optics for long-term fiber infrastructure sustainability, for two reasons:

  1. The size of these data centers keeps growing; links of up to 500m (or, in some cases, 2km) are needed, while multimode optics can only support about 100m at 25G/100G.
  2. Multimode fiber cable is limited by its modal bandwidth, chromatic dispersion and relatively higher attenuation.

Many new 100G singlemode optical transceiver modules, such as CWDM4 (2km reach) and PSM4 (500m reach), are designed under multi-source agreements (MSAs), with performance and cost tailored to specific data center applications.
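
As a rough illustration, optic selection by link distance can be sketched as picking the shortest-reach (and typically least expensive) module that still spans the link. The reach figures restate the MSA numbers above plus about 100m for 100GBASE-SR4 over OM4; treating shorter-reach optics as cheaper is an assumption, not a rule.

```python
# A hedged sketch of 100G optic selection by link distance, using the MSA
# reaches cited above (PSM4: 500 m, CWDM4: 2 km) plus 100GBASE-SR4 over
# OM4 (about 100 m) for short links. Cost ordering here is an assumption.

OPTICS_100G = [  # (name, max reach in meters), shortest reach first
    ("100GBASE-SR4 over OM4 multimode", 100),
    ("PSM4 parallel singlemode", 500),
    ("CWDM4 duplex singlemode", 2000),
]

def pick_100g_optic(link_length_m: float) -> str | None:
    """Pick the shortest-reach optic that still spans the link."""
    for name, reach in OPTICS_100G:
        if link_length_m <= reach:
            return name
    return None  # beyond 2 km, longer-reach optics such as LR4 are needed

print(pick_100g_optic(350))   # PSM4 parallel singlemode
print(pick_100g_optic(1500))  # CWDM4 duplex singlemode
```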

Data Center Migration Path

This blog provides a good summary of where we are today – and just how far we’ve come. In an upcoming post, we’ll cover next-generation Ethernet in data centers. Don’t miss it – make sure you’re subscribed to our blog.