Open Compute has turned into much more than just a few large companies like Facebook sharing their views on how to build data centers. The concept is continuing to grow among enterprises, with financial organizations taking the lead.
Before Open Compute, if you bought a switching system from a manufacturer, the hardware came with its own proprietary software. If you liked what another manufacturer came up with a few years later, it would be difficult to make the switch because of the large investment you already made in the first manufacturer’s gear (not to mention the training to go along with it).
With Open Compute, instead of being proprietary, hardware and software information is shared and completely open. For example, anyone who wants to write a patch or write new features into a switch is welcome to do so. In fact, you can already find full “recipes” at OpenCompute.org for building servers and associated software.
This allows for full compatibility among network components – servers, storage and switches. No matter the brand, the gear can be built to communicate easily, and you can rearrange your entire network through software.
The Open Rack – an initiative of the Open Compute Project – is a good example of what Open Compute stands for and how its hardware and software offerings are different.
Although a rack is just one small component of a data center, an entire group of people is dedicated to discussing and developing Open Compute racks. When it first got together, the group set several goals for the Open Rack.
Challenging the notion of the rack’s traditional design, the Open Rack group started by taking a look at rack widths and heights.
The group wanted to fit three half-width motherboards side by side in one chassis, and the traditional 19-inch-wide rack wasn’t going to be big enough. To remedy this, Open Racks are 21.1 inches (537 millimeters) wide. That width also allows for a large fan, which is more efficient at moving air and cooling bigger, more powerful computing chips.
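To see why the extra two inches matter, here is a rough sketch of the side-by-side fit. The board width and clearance values are hypothetical placeholders, not Open Rack specification figures; only the 19-inch and 537 mm bay widths come from the text above.

```python
# Illustrative sketch: why three half-width boards fit a 537 mm Open Rack bay
# but not a traditional 19-inch rack. Board width and gap are assumptions.

MM_PER_INCH = 25.4

def boards_that_fit(bay_width_mm, board_width_mm, gap_mm=3):
    """Count how many boards of board_width_mm fit side by side in a bay,
    leaving gap_mm of clearance before and between boards."""
    count = 0
    used = gap_mm  # leading clearance
    while used + board_width_mm + gap_mm <= bay_width_mm:
        used += board_width_mm + gap_mm
        count += 1
    return count

eia_bay = 19 * MM_PER_INCH   # ~482.6 mm, traditional rack width
open_rack_bay = 537          # ~21.1 in, Open Rack equipment bay
board = 165                  # hypothetical half-width motherboard, in mm

print(boards_that_fit(eia_bay, board))       # prints 2
print(boards_that_fit(open_rack_bay, board)) # prints 3
```

With these assumed dimensions, the 19-inch bay tops out at two boards per row, while the 537 mm bay takes three – the packing gain the Open Rack group was after.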
The Open Rack also has bus bar DC power distribution in the center of the rack, allowing the gear to “mate” with the bus system. When you push the equipment in from the front of the rack, it slides in and matches up to the bus bar to provide power to the equipment. With all networking cables and gear connecting in the front, technicians no longer need to access the rear of the rack where hot air is exhausted.
As a result, the supply air temperature can be raised; instead of operating at 60 degrees Fahrenheit, for example, supply air can be set at 75 or 80 degrees. Outside air can then be pumped in for free cooling instead of paying to run mechanical cooling that holds the space near 60 degrees.
The Open Rack also allows for:
If you’re investigating Open Compute as an option, you can convert your racks whenever you’re ready since the Open Rack bridges standard EIA and Open Compute equipment. It can be converted in the field from EIA to Open Compute or vice versa.
Remember: The transition to Open Compute will occur in phases over time. Even if it’s not on your radar now, Open Compute will likely impact your data center at some point in the future. Buying rack components that allow standard security with front and rear doors, and that can be converted to Open Compute when needed, helps the transition occur quickly, seamlessly and cost-effectively.
Hardware, such as the Open Rack discussed in this blog, is just one component of Open Compute; the initiative is about open software, too. Stay tuned for future blogs on that topic. In the meantime, learn more about how Belden can help you prepare your data center for the future.
Were you aware of Open Racks before reading this post?
Let us know what you have heard in the comments section below!
Denis is a product line manager for Belden’s R&E portfolio. He holds a BSc in Mechanical Engineering (1989). Denis focuses on helping data center managers find solutions to density challenges (cable management, heat, power), and he has been involved in the deployment of over 3 million square feet of white space. In his spare time, Denis enjoys golfing and mountain biking.