Hyperscale data centers
Part 1: Hyperscale data center means different hardware needs, roles for IT
Remember when data centers had separate racks, staffs and management tools for servers, storage, routers and other networking infrastructure? Those days seem like a fond memory in today's hyperscale data center. That setup worked well when applications were relatively self-contained: they made use of local server resources such as RAM and disk and had few reasons to connect to the Internet.
But with the intersection of intensive virtualization and the growth of Internet applications that touch Web, database and other cloud-based services, things have gotten more complex. A wider range of computing services is now available to satisfy these applications, including compute-intensive ones that involve big data and demand higher-volume, higher-density server configurations. So today there are many more options for data center server and network architecture -- options that trickle down to affect overall data center design.
There is no longer a single right way to build out your data center. With so many more choices available, data centers have more flexibility.
The traditional data center
These data centers employ purpose-built, name-brand servers from tier-one vendors, running Windows or Linux and connected to storage area networks. This is the gear in the vast majority of data centers today, with roots that trace back to the early days of the client/server era. The key advantage of such an architecture is that it's well known and IT staffers have extensive experience with it. In terms of networking, traditional servers are connected via commodity network switches running 10 Gigabit Ethernet (10 GbE) or InfiniBand. Servers and switches use proprietary management software but can be easily upgraded or swapped out for other vendors' gear. Each equipment rack has its own network switch, and these rack switches connect to an overall backbone core switch.
Hyperscale servers
The hyperscale server label refers to a new kind of server that is customized for particular data center needs -- even the racks are wider than the traditional 19-inch mounts that were the standard, to accommodate more components across the motherboard. These servers are also assembled from common components that can be easily swapped out when failures occur. Think of a single server with a dozen hard drives on one motherboard and multiple power supplies for redundant operation.
Hyperscale needs a new kind of network
While the Open Compute Project has lots of information on server and "open rack" designs for an all-DC power, white-box collection of computers, it is noticeably silent on how these computers are going to be networked together -- a big concern for a hyperscale data center.
In a blog post in May, project organizers stated that for the most part, "we're still connecting [these new computer designs] to the outside world using black-box switches that haven't been designed for deployment at scale and don't allow consumers to modify or replace the software that runs on them."
To that end, the Open Compute Project is "developing a specification and a reference box for an open, OS-agnostic top-of-rack switch. Najam Ahmad, who runs the network engineering team at Facebook, has volunteered to lead the project." They mention plans for a wide variety of organizations to join in, including Big Switch Networks, Broadcom, Cumulus Networks, Intel and VMware. That is quite a lineup.
But it is also quite a challenge. The new network switches will have to run on DC power, like the rest of the gear sitting below them in a hyperscale stack. They will need to support software-defined networking, which is itself an evolving collection of standards. And they will have to fit a wide variety of workloads and use cases, all while remaining vendor-neutral.
The hyperscale server is the architecture of choice in 100% cloud-based businesses such as Facebook, Amazon and Google, and also for building a new breed of supercomputers. Typically, these servers run some form of Linux and are available both from traditional server vendors such as Hewlett-Packard, with its ProLiant DL2000 Multi Node Server, and as white-box components from several suppliers.
Hyperscale data center networks can use some traditional top-of-rack network switches, but Facebook's networking standards, part of its Open Compute Project, call for New Photonic Connectors (NPC) and embedded optical modules. Both Facebook and Google are building new data centers in Iowa using similar designs. The advantage of these new server and network designs is that you can reduce power losses with direct current (DC)-powered drives and save time troubleshooting failed components, since every server is uniform. You can also scale up capacity in smaller increments.
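To see why cutting power-conversion stages matters, consider that every AC/DC or DC/DC step in the power path wastes a few percent of whatever passes through it. The sketch below uses purely illustrative efficiency figures -- assumptions for the arithmetic, not measurements of any vendor's gear -- to show how those losses compound.

```python
# Illustrative arithmetic only: the per-stage efficiencies below are assumptions,
# not measurements from any particular vendor's equipment.
traditional_chain = {                # a typical legacy AC distribution path
    "double-conversion UPS": 0.92,
    "PDU transformer": 0.97,
    "server power supply (AC-DC)": 0.90,
}

dc_fed_chain = {                     # a simplified DC-fed hyperscale path
    "facility rectifier (AC-DC)": 0.95,
    "DC bus to server": 0.99,
}

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end delivery efficiency."""
    efficiency = 1.0
    for stage_eff in stages.values():
        efficiency *= stage_eff
    return efficiency

for label, chain in (("Traditional AC chain", traditional_chain),
                     ("DC-fed chain", dc_fed_chain)):
    eff = chain_efficiency(chain)
    print(f"{label}: {eff:.1%} of utility power reaches the servers "
          f"({1 - eff:.1%} lost in conversion)")
```

Multiplied across thousands of uniform servers, even a few percentage points of conversion efficiency show up directly on the utility bill, which is part of the appeal of the DC-centric designs described above.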
Virtual or converged clusters
These systems use proprietary server hardware that isn't interchangeable and can be upgraded only in large increments of storage or compute power. The servers carry very large internal memories of hundreds of gigabytes of RAM. Cisco's Unified Computing System, Dell's Active Infrastructure and IBM's PureSystems are examples of converged infrastructures.
Virtual or converged clusters use integrated network switches, compute and storage blade servers, and specially made chassis to tie all the components together. The model typically runs over multiple 10 GbE connections, and the advantage is that you can eliminate many of the steps needed to provision new capacity, bring it online and connect it to your infrastructure. The disadvantage is mostly cost.
In the future, the chances are that your data center will combine two or three of these approaches. "There are brand-new workloads that we have never seen before," said Andrew Butler, an analyst at Gartner Inc. "Amazon and Google's requirements aren't like banks or traditional IT customers. They want to run these workloads with large in-memory databases or over large-scale clusters, and these don't fit the traditional data center architectures."
This new breed of cloud apps is even more challenging, because it changes the focus from raw computing power to a more efficient use of electric power. "Power is everything these days," said the CTO of a major cloud services hosting provider. "We design our data centers for the lowest [power usage effectiveness, or PUE] ratings possible." Hyperscale servers use entirely new power distribution methods for the maximum power efficiency possible. For example, hyperscale servers use 480 volt (V) DC power supplies and 48 V DC battery backups, along with direct evaporative cooling, meaning that no chillers or compressors are needed to cool these data centers. Organizations that have deployed these kinds of hyperscale architectures include Rackspace and several Wall Street firms, relying on 1U to 3U dual-socket AMD-based servers with up to 96 GB of RAM and direct-attached hard drives.
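For readers unfamiliar with the metric, PUE is simply the ratio of total facility power to the power that actually reaches the IT equipment, so a rating of 1.0 would mean every watt drawn from the utility does useful computing. A minimal sketch, with made-up meter readings, follows.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical readings, for illustration only.
print(pue(total_facility_kw=1200, it_equipment_kw=1000))  # 1.2 -- a lean, hyperscale-style facility
print(pue(total_facility_kw=2000, it_equipment_kw=1000))  # 2.0 -- closer to a legacy data center
```

The closer the ratio gets to 1.0, the less power is being spent on cooling, conversion and everything else that isn't computing.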
Part of the challenge of the hyperscale data center is real estate. "If you can build facilities where real estate costs are low, and if you are running monolithic business applications or cloud services, these low-density data centers make sense," said Jack Pouchet, the VP of business development for Emerson. "However, when it comes to HPC [high-performance computing], big data, Hadoop and other massive number-crunching analytics, we likely will see the continued push towards higher-density architectures."
Let us know what you think. Write to us at editor@moderninfrastructure.com
About the Author: David Strom is an expert on network and Internet technologies and has written and spoken on topics such as VoIP, network management, wireless and Web services for more than 25 years. He has held several editorial management positions for both print and online properties in the enthusiast, gaming, IT, network, channel and electronics industries.