As data center speeds increase, fiber optic cabling systems need to be upgraded to support larger data center footprints and growing data traffic bandwidth.

Since 2015, 25 Gbps and 100 Gbps Ethernet have been deployed in cloud data centers to support fast-growing data traffic across server-to-switch and switch-to-switch interfaces. IEEE 802.3 task forces are developing 50 Gbps- and 100 Gbps-per-lane technologies for next-generation Ethernet speeds from 50G to 400G. Moving from 10 Gbps to 25 Gbps per lane, and then to 50 Gbps and 100 Gbps per lane, creates new challenges in semiconductor integrated circuit design and manufacturing processes, as well as in high-speed data transmission.

Ethernet Speeds Roadmap (Source: Ethernet Alliance)

Before new fiber infrastructure is deployed in data centers, there are four essential checkpoints to keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate link budget based on link distance and number of connection points (a minimal worked example follows this list)
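
As a quick preview of checkpoint No. 4, the sketch below shows the arithmetic behind a link-budget check: total channel loss is fiber attenuation times distance plus connector insertion loss times the number of connection points, compared against the channel budget. The loss and budget values, and the channel_loss helper, are illustrative assumptions rather than figures from a specific standard.

```python
# Minimal link-budget sanity check (checkpoint No. 4).
# All loss and budget values below are illustrative assumptions; use the
# attenuation and insertion-loss figures from your cabling vendor and the
# channel budget from the relevant IEEE 802.3 PMD specification.

FIBER_LOSS_DB_PER_KM = 3.0   # assumed multimode attenuation at 850 nm
CONNECTOR_LOSS_DB = 0.5      # assumed loss per mated connector pair
CHANNEL_BUDGET_DB = 1.9      # assumed total channel insertion-loss budget

def channel_loss(length_m: float, connection_points: int) -> float:
    """Estimate total channel insertion loss in dB."""
    fiber_loss = (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM
    return fiber_loss + connection_points * CONNECTOR_LOSS_DB

loss = channel_loss(length_m=70, connection_points=2)
verdict = "within" if loss <= CHANNEL_BUDGET_DB else "exceeds"
print(f"Estimated channel loss: {loss:.2f} dB ({verdict} the assumed budget)")
```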

In a series of blogs, we will cover each of these checkpoints in detail, describing current technology trends and the latest industry standards for data center applications. This blog covers checkpoint No. 1: determining the active equipment I/O interface based on application types.

High-Speed Protocols

In modern data centers, high-speed protocols, such as Ethernet, Fibre Channel and InfiniBand, are deployed to support networking, storage and high-performance computing (HPC) applications.

Protocols used in modern data center deployments (* = convergence protocol)

Other convergence protocols, such as Fibre Channel over Ethernet (FCoE) and Ethernet over InfiniBand (EoIB), are also used in different data center areas to support emerging applications.

Ethernet is the most widely deployed protocol that supports a variety of networking applications, while Fibre Channel protocol serves mostly storage area networks (SANs). InfiniBand protocol offers the lowest latency and the highest I/O interface throughput performance for HPC.

Data center switch revenue split in 2015 (Source: Crehan Research 2016)

High-Speed I/O Interface

The 10G SFP+ multi-source agreement (MSA) specification was first released in 2006 by the Small Form Factor Special Interest Group (SFF-SIG). Because 10G optical transceivers were relatively expensive, 10G SFP+ ports were often populated with direct-attach copper (DAC) cables, a cost-effective, short-reach solution for distances up to 7 m.

Active optical cables (AOCs) are a good alternative for inter-rack interconnects, supporting reaches of more than 100 m. Today, 10G Ethernet has been on the market for more than 10 years; faster server connection speeds are required to optimize network efficiency, rack-space density, power consumption and overall per-gigabit transmission costs.
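
A minimal sketch of how reach drives the 10G media choice, using the figures above as rough thresholds (DAC up to about 7 m, AOC up to about 100 m, transceivers over structured fiber cabling beyond that). The thresholds and function name are illustrative assumptions, not limits from any particular product or standard.

```python
# Illustrative 10G media selection by reach. Thresholds are assumptions
# based on the typical figures cited above, not hard product limits.

def pick_10g_medium(reach_m: float) -> str:
    if reach_m <= 7:
        return "SFP+ direct-attach copper (DAC)"
    if reach_m <= 100:
        return "SFP+ active optical cable (AOC)"
    return "SFP+ optical transceiver over structured fiber cabling"

for reach in (3, 25, 300):
    print(f"{reach:>3} m -> {pick_10g_medium(reach)}")
```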

Pluggable module dimensions

The edge-pluggable quad small form-factor pluggable (QSFP) is the most popular form factor for switch ports. QSFP is a four-lane, duplex, hot-pluggable module designed mainly for datacom applications. Although other form factors have existed, such as the 100G CXP with ten 10G lanes, QSFP offers the best combination of faceplate density, interoperability and thermal management, as well as reduced cabling costs.

QSFP transceivers, paired with low-cost parallel-fiber connectivity using single-row MPO-12 or MPO-8 connectors, support flexible breakout or trunk cabling connections. Today, more than 95% of installed 40G transceivers are QSFP+ modules.
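
To make the breakout versus trunk idea concrete, the sketch below models a single four-lane QSFP+ port cabled either as one 40G trunk or broken out into four 10G links over a parallel-fiber MPO assembly. The port naming (e.g. "swp1/1") is hypothetical and not tied to any particular switch operating system.

```python
# Illustrative model of QSFP+ trunk vs. breakout cabling over parallel fiber.
# A QSFP+ port carries four 10G lanes: run them together as one 40G trunk,
# or break them out into four 10G links. Port names are hypothetical.

QSFP_LANES = 4
LANE_RATE_GBPS = 10

def breakout(port: str) -> list[str]:
    """Return the four logical 10G interfaces of one QSFP+ port."""
    return [f"{port}/{lane}" for lane in range(1, QSFP_LANES + 1)]

print(f"Trunk   : swp1 @ {QSFP_LANES * LANE_RATE_GBPS}G")
print("Breakout:", ", ".join(f"{p} @ {LANE_RATE_GBPS}G" for p in breakout("swp1")))
```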

1-RU switch comparison (48 RJ45 ports, 48 SFP+ ports and 32 QSFP+ ports)

Driven initially by the 25G/50G Ethernet Consortium, IEEE 802.3 working groups have developed 25G and 100G Ethernet standards to upgrade 10G and 40G in single-lane and four-lane configurations. Moving from 10G to 25G per-lane technology, switch silicon can support 2.5 times the total data flow on the same ASIC, and server uplinks gain 2.5 times the bandwidth. Furthermore, 25G lanes enable significant cost savings in active equipment, cabling, power consumption and space.
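
The 2.5x figure is simple arithmetic. Assuming the 1-RU port counts from the comparison above (48 SFP-style ports or 32 four-lane QSFP-style ports), the sketch below contrasts total faceplate bandwidth at 10G versus 25G per lane.

```python
# Quick check of the 2.5x claim: same port counts and the same four-lane
# form factor, only the per-lane rate changes from 10G to 25G.
# Port counts follow the 1-RU comparison above and are assumptions here.

for label, lane_rate in (("10G/40G generation ", 10), ("25G/100G generation", 25)):
    sfp_total = 48 * lane_rate        # one lane per SFP-style port
    qsfp_total = 32 * lane_rate * 4   # four lanes per QSFP-style port
    print(f"{label}: 48 SFP ports = {sfp_total}G, 32 QSFP ports = {qsfp_total}G")
```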

Today, 50G server uplinks are also emerging in data center applications, using two 25G lanes, mainly to serve Ethernet adapters with PCI Express Gen 3.0 ×8 host connections. Enterprise professionals are also showing growing interest in 25G server ports, rather than 40G ports, as the default option, since the cost increase from a 10G port to a 25G port is incremental.
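
A back-of-the-envelope calculation shows why two 25G lanes pair well with a PCI Express Gen 3.0 ×8 adapter: the host connection offers roughly 63 Gbps per direction once 128b/130b line coding is accounted for, so a 50G uplink fills it without being starved. Only line coding is considered here; real throughput is further reduced by protocol overheads.

```python
# Back-of-the-envelope check: PCIe Gen 3.0 x8 host bandwidth vs. a 2x25G uplink.
# Only 128b/130b line coding is accounted for; other overheads are ignored.

PCIE_GEN3_GT_PER_S = 8.0      # GT/s per PCIe Gen 3.0 lane
PCIE_LANES = 8                # x8 adapter slot
LINE_CODE_EFFICIENCY = 128 / 130

host_bw_gbps = PCIE_GEN3_GT_PER_S * PCIE_LANES * LINE_CODE_EFFICIENCY
uplink_gbps = 2 * 25          # two 25G Ethernet lanes

print(f"PCIe Gen 3.0 x8 host bandwidth: ~{host_bw_gbps:.1f} Gbps per direction")
print(f"2 x 25G Ethernet uplink       : {uplink_gbps} Gbps")
```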

Next-Generation High-Speed Roadmap

Moving beyond 25G per lane, switch ASICs in BGA (ball grid array) packages will eventually require higher electrical lane speeds to support higher bandwidth, because switch ASIC connectivity is limited mainly by the serializer-deserializer (SERDES) I/O interface.

The industry recognizes the value of leveraging common technology developments across multiple applications by implementing multiple lane configurations, such as 50G in a single lane, 100G in two lanes and 200G in four lanes.

Merchant switch silicon evolution and port counts (6.4 Tbps, 9.6 Tbps and 12.8 Tbps are estimates). OBO = on-board optics; µQSFP = micro QSFP; QSFP-DD = QSFP double density.

To prepare for the next wave of system upgrades, IEEE has already initiated 802.3cd, the 50G and 200G task force. Development of new Ethernet speeds tracks the merchant switch silicon roadmap while keeping the same mechanical form factors: SFP for one lane and QSFP for four lanes.
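
The port counts in the silicon-evolution figure follow from the same arithmetic at each generation: per-port speed is lane rate times lanes per port, and faceplate ports are roughly ASIC capacity divided by per-port speed. The capacities and form-factor pairings below are illustrative assumptions, not product specifications.

```python
# Sketch of how faceplate port counts fall out of switch ASIC capacity:
# port speed = lane rate x lanes per port; ports ~= ASIC capacity / port speed.
# Capacities and form-factor pairings are illustrative assumptions.

configs = [
    # (ASIC capacity Gbps, lane rate Gbps, lanes per port, form factor)
    (3_200,  25, 4, "QSFP28"),
    (6_400,  50, 4, "QSFP56"),
    (12_800, 50, 8, "QSFP-DD / OSFP"),
]

for asic_gbps, lane_rate, lanes, form_factor in configs:
    port_speed = lane_rate * lanes
    ports = asic_gbps // port_speed
    print(f"{asic_gbps / 1000:>5.1f} Tbps ASIC -> {ports} x {port_speed}G {form_factor} ports")
```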

The tables below show the speed roadmaps for the Ethernet, Fibre Channel and InfiniBand protocols. While their speeds differ, these protocols share much of the same physical-layer technology.

Ethernet, Fibre Channel and InfiniBand Speed Roadmaps

In addition to the efforts of standards bodies, a few industry alliances and MSA groups are working independently on disruptive technologies.

For example, the QSFP double-density (QSFP-DD) form factor has been developed to support 200G and 400G Ethernet applications with eight parallel 25G or 50G lanes; OSFP (Octal SFP) is another MSA form factor developed for 400G Ethernet applications with eight parallel 50G lanes; and µQSFP is yet another MSA form factor, supporting four times the lane density of SFP in a similar footprint.

On-board-optics (OBO) specifications are under development by the Consortium for On-Board Optics (COBO); they will allow high-density MPO connectors to be attached directly at the faceplate.

Make sure you’re signed up to receive our weekly blog updates. In future posts, we’ll be covering the remaining three checkpoints to consider when deploying new fiber infrastructure:

  1. Choose optical link media based on reach and speed
  2. Verify optical fiber standards developed by standards bodies
  3. Validate link budget based on link distance and number of connection points

If you’re considering a new fiber infrastructure deployment, Belden can help you make the right choices by taking all of these checkpoints into consideration, establishing a data center solution that provides the speed and longevity you need.