Saturday, January 31, 2015

Hyperscale data centers need different hardware




Hyperscale data centers

Part 1: Hyperscale data center means different hardware needs, roles for IT
Remember when data centers had separate racks, staffs and management tools for servers, storage, routers and other networking infrastructure? Those days seem like a fond memory in today's hyperscale data center. That setup worked well when applications were relatively separate or they made use of local server resources such as RAM and disk and had few reasons to connect to the Internet.
But, with the intersection of intensive virtualization and the growth of Internet applications that touch on Web, database, and other cloud-based services, things have gotten more complex. A wider range of computing services is available to satisfy these various applications, including more compute-intensive applications that involve big data and higher-volume and higher-density server configurations. So, today there are many more options for data center server and network architecture -- which will trickle down to impact overall data center design.
There is no longer a single right way to build out your data center. With so many more choices available, data centers have more flexibility.
HP's ProLiant DL2000 Multi Node Server

The traditional data center

These data centers employ purpose-built, name-brand servers from tier-one vendors with storage area networks, running Windows or Linux. This is the gear in the vast majority of data centers today, with roots that trace back to the early days of the client/server era. The key advantage of such an architecture is that it's well known and IT staffers have extensive experience with it. In terms of networking, traditional servers are connected via commodity network switches running 10 Gigabit Ethernet (GbE) or InfiniBand. Servers and switches use proprietary management software but can be easily upgraded or swapped out for other vendors' gear. Each equipment rack has its own network switch, and these switches are connected to an overall backbone core switch.

The hyperscale servers

The hyperscale server label refers to new kinds of servers that are customized for particular data center needs -- even the racks are wider than the traditional 19-inch mounts that have long been the standard, which makes room for more components across the motherboards. These servers are also assembled from common components that can be easily swapped out when failures occur. Think of a single server with a dozen hard drives on a motherboard and multiple power supplies for redundant operation.

Hyperscale needs a new kind of network

While the Open Compute Project has lots of information on server and "open rack" designs for an all-DC power, white-box collection of computers, it is noticeably silent on how these computers are going to be networked together -- a big concern for a hyperscale data center.
In a blog post in May, project organizers stated that for the most part, "we're still connecting [these new computer designs] to the outside world using black-box switches that haven't been designed for deployment at scale and don't allow consumers to modify or replace the software that runs on them."
To that end, the Open Compute Project is "developing a specification and a reference box for an open, OS-agnostic top-of-rack switch. Najam Ahmad, who runs the network engineering team at Facebook, has volunteered to lead the project." They mention plans for a wide variety of organizations to join in, including Big Switch Networks, Broadcom, Cumulus Networks, Intel and VMware. That is quite a lineup.
But it is also quite a challenge. The new network switches will have to run on DC power, like the rest of the gear sitting below them in a hyperscale stack. They will need to support software-defined networking, which is itself an evolving collection of standards. And they will have to fit a wide variety of workloads and use cases, all while remaining vendor-neutral.
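To make the software-defined networking requirement concrete, here is a minimal, vendor-neutral sketch in Python of the match-and-act model such a switch would expose to a controller; the class and field names are illustrative only and are not drawn from any Open Compute or SDN specification.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g., {"dst_mac": "aa:bb:cc:dd:ee:01"}
    action: str          # e.g., "forward:port3" or "drop"
    priority: int = 0

@dataclass
class OpenSwitch:
    flow_table: list = field(default_factory=list)

    def install_rule(self, rule: FlowRule):
        # Control-plane call: push a rule onto the switch's flow table.
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def handle_packet(self, packet: dict) -> str:
        # Data-plane lookup: the first (highest-priority) matching rule wins.
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"   # table miss: punt to the control plane

switch = OpenSwitch()
switch.install_rule(FlowRule({"dst_mac": "aa:bb:cc:dd:ee:01"}, "forward:port3", 10))
print(switch.handle_packet({"dst_mac": "aa:bb:cc:dd:ee:01"}))  # forward:port3
print(switch.handle_packet({"dst_mac": "ff:ff:ff:ff:ff:ff"}))  # send_to_controller

The point of the exercise: the switch hardware only has to match packets against rules, while the decision-making lives in software that consumers can modify or replace.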
This is the architecture of choice for 100% cloud-based businesses such as Facebook, Amazon and Google, and also for building a new breed of supercomputers. Typically, these servers run some form of Linux and are sold both by traditional server vendors such as Hewlett-Packard, with its ProLiant DL2000 Multi Node Server, and as components available from several suppliers.
Hyperscale data center networks can use some traditional top-of-rack network switches, but Facebook's networking standards, part of its Open Compute Project, call for New Photonic Connectors (NPC) and embedded optical modules. Both Facebook and Google are building new data centers in Iowa using similar designs. The advantage of these new server and network designs is that you can reduce power losses with direct current (DC)-powered drives and save time troubleshooting failed components, since every server is uniform. You can also scale up capacity in smaller increments.

Virtual or converged clusters

These systems use proprietary server hardware that isn't interchangeable and is only upgradable in large increments of storage or compute power. The servers have very large internal memory capacities of hundreds of gigabytes of RAM. Cisco's Unified Computing System, Dell's Active Infrastructure and IBM's PureSystems are examples of converged infrastructures.
Virtual or converged clusters use integrated network switches, computing and storage blade servers, and specially made chassis to house all the components. The model typically runs over multiple 10 GbE connections, and the advantage is that you can eliminate many of the steps needed to provision new capacity, bring it online and connect it to your infrastructure. The disadvantage is mostly cost.
In the future, the chances are that your data center will combine two or three of these approaches. "There are brand-new workloads that we have never seen before," said Andrew Butler, an analyst at Gartner Inc. "Amazon's and Google's requirements aren't like those of banks or traditional IT customers. They want to run these workloads with large in-memory databases or over large-scale clusters, and these don't fit the traditional data center architectures."
This new breed of cloud apps is even more challenging, because it changes the focus from raw computing power to a more efficient use of electric power. "Power is everything these days," said the CTO of a major cloud services hosting provider. "We design our data centers for the lowest [power usage effectiveness, or PUE] ratings possible." Hyperscale servers use entirely new power distribution methods for the maximum power efficiency possible. For example, hyperscale servers use 480 volt (V) DC power supplies and 48 V DC battery backups, along with direct evaporative cooling, meaning that no chillers or compressors are needed to cool these data centers. Organizations that have deployed these kinds of hyperscale architectures include Rackspace and several Wall Street firms, relying on 1U to 3U dual-socket AMD-based servers with up to 96 GB of RAM and direct-attached hard drives.
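PUE itself is a simple ratio: total facility power divided by the power that actually reaches the IT equipment, with 1.0 as the theoretical ideal. The quick Python sketch below, using made-up numbers rather than measurements from any particular facility, shows why eliminating chillers and power-conversion losses drives the rating down.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    # Power usage effectiveness: total facility power / IT equipment power.
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only.
traditional = pue(total_facility_kw=2000, it_equipment_kw=1000)  # chillers, UPS conversion losses
hyperscale = pue(total_facility_kw=1150, it_equipment_kw=1000)   # evaporative cooling, DC distribution

print(f"traditional PUE: {traditional:.2f}")  # 2.00
print(f"hyperscale PUE:  {hyperscale:.2f}")   # 1.15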
Part of the challenge of the hyperscale data center is real estate. "If you can build facilities where real estate costs are low, and if you are running monolithic business applications or cloud services, these low-density data centers make sense," said Jack Pouchet, the VP of business development for Emerson. "However, when it comes to HPC [high-performance computing], big data, Hadoop and other massive number-crunching analytics, we likely will see the continued push towards higher-density architectures."
Let us know what you think. Write to us at editor@moderninfrastructure.com
About the Author: David Strom is an expert on network and Internet technologies and has written and spoken on topics such as VoIP, network management, wireless and Web services for more than 25 years. He has held several editorial management positions for both print and online properties in the enthusiast, gaming, IT, network, channel and electronics industries.

IT spending shifts to hyperscale data centers




The rising value of the U.S. dollar along with modest declines in demand for IT and telecommunications services will slow the global pace of IT spending in 2015, a market watcher is forecasting.
Meanwhile, Gartner Inc. is forecasting a “tipping point” in the next three years as digital businesses shift from on-premise IT to hyperscale datacenters.
Gartner predicted this week that worldwide IT spending would grow to $3.8 trillion this year, a 2.5 percent increase over 2014. An earlier projection pegged the annual IT growth rate for this year at 3.9 percent. That forecast was revised downward as the dollar rises against the euro and other currencies. (The market watcher said plummeting oil prices would likely have little impact on its quarterly projections.)
“The rising U.S. dollar is chiefly responsible for the change -- in constant currency terms the downward revision is only 0.1 percent,” explained John-David Lovelock, Gartner’s research vice president. “Stripping out the impact of exchange rate movements, the corresponding constant-currency growth figure is 3.7 percent, which compares with 3.8 percent in the previous quarter’s forecast.”
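The constant-currency figure works by translating foreign-currency revenue at the prior period's exchange rates, so that only underlying demand, and not the stronger dollar, shows up as growth. The short Python sketch below uses invented numbers purely to illustrate the mechanics; Gartner does not publish its model.

# Hypothetical example: revenue earned in euros, reported in U.S. dollars.
revenue_2014_eur = 1000.0
revenue_2015_eur = 1037.0   # 3.7% real growth in local currency
rate_2014 = 1.33            # USD per EUR at the prior-year rate (illustrative)
rate_2015 = 1.16            # USD per EUR after the dollar strengthens (illustrative)

nominal_growth = (revenue_2015_eur * rate_2015) / (revenue_2014_eur * rate_2014) - 1
constant_currency_growth = revenue_2015_eur / revenue_2014_eur - 1

print(f"reported (nominal) growth: {nominal_growth:.1%}")            # about -9.6%
print(f"constant-currency growth:  {constant_currency_growth:.1%}")  # 3.7%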
While currency fluctuations resulted in a 1.4 percent downward revision in Gartner’s quarterly IT spending forecast, the market analyst noted that enterprise software continues to grow at a 5.5 percent clip. A key shift is away from software licensing to software-as-a-service, an option that brings with it monthly rather than annual payments. Lovelock noted that the “SaaS first” model provides greater agility as the Internet of Things emerges.
Gartner forecasts that SaaS will account for more than half of annual software deals this year.
Telecom services represent the lion’s share of annual IT spending, totaling more than $1.62 trillion last year. That total is expected to increase by only 0.7 percent in 2015, Gartner forecast.
IT services represent the second-largest spending category, reaching $956 billion in 2014. Gartner is forecasting that global spending on IT services will increase 2.5 percent this year to $981 billion even as datacenter infrastructure spending remains essentially flat. The market analyst cited lower growth rates in some regions for enterprise software while economic uncertainty in the Russian and Brazilian markets squeezes short-term growth rates.
The Asia-Pacific region is the fastest growing market at 8 percent annual growth.
Overall, enterprise software spending shows the biggest annual increase, with Gartner predicting a 5.5 percent jump in spending this year to $335 billion. Gartner said it expects greater price erosion and vendor consolidation in 2015 as competition heats up between cloud and on-premises software providers.
Spending on datacenter systems is projected to reach $143 billion in 2015, a 1.8 percent increase from 2014, Gartner projected. Spending on datacenter infrastructure—servers, storage, networks—is projected to remain flat. But datacenter services and application support continues to soar as enterprises steadily shift from on-premise facilities to hyperscale datacenter providers.
Indeed, Gartner forecasts a tipping point in 2018, when traditional datacenters will no longer be able to meet the demands of a “digital business” era. Hence, it sees a continued shift away from on-premise processing as more enterprises embrace hyperscale datacenters.
While projections for enterprise communications applications and network equipment spending were increased from the previous quarter’s forecast, the market analyst said growth for the servers and storage segments was lowered. It cited “extensions in replacement life cycles and a higher than previously anticipated switch to cloud-based services.”
Gartner also foresees a continuing price war among cloud vendors as Amazon Web Services, Google and Microsoft Azure heavily discount their cloud offerings to maintain their customer bases. Key areas of price competition include database management systems, application infrastructure and middleware, the market watcher predicted.

Reflections on 2014 data center trends



IT experts predicted what 2014 would bring to the data center, but they weren't always on the money.
We looked back at 2014's data center trends and forecasts, and compared them to end-of-year reports. Some were in the ballpark, while others were way off.
All data center systems spending was expected to grow 0.4% in 2014, according to the Gartner Worldwide IT Spending Forecast. Actual growth for the year was a little higher at 0.8%, with total data center spending up $1 billion from 2013 (reaching $141 billion). So where did the money go?
Experts expected the x86 server market to experience the most growth in 2014. The most recent data from Stamford, Conn.-based Gartner Inc. shows an increase in x86 server unit purchases in the first three quarters, averaging 1.4% growth quarterly over the same span in 2013. In contrast, unit sales of RISC/Itanium Unix servers declined an average of 11.6% in the same period, as enterprises migrate from high-cost platforms toward lower-cost alternatives. Numbers from Q4 have not been released.

Rise of the third platform

The convergence of cloud, big data, social business and mobile -- the third platform -- got a slow start, said Matt Eastwood, group vice president and general manager of enterprise platforms at analyst firm IDC. Traditional enterprises just weren't quick to make this data center transformation.
"There are three types of businesses: the businesses that think of technology as their business [like Google]; more traditional businesses that think technology is a strategic differentiation; and businesses that think technology is an enabler," Eastwood said. "[This last group] is three to five years behind [the first]."
Consumer behavior and mobile drove the onset of the platform conversion, as expected.

The year of colocation

Clive Longbottom, co-founder and service director at analyst firm Quocirca, based in the U.K., predicted that 2014 would see owned or colocation data centers working alongside the infrastructure-, platform- and software-as-a-service markets -- and that came to fruition.
Colocation became a $5 billion industry at the start of 2013 and is projected to hit $30 billion by 2017, according to Synergy Research Group.
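Those two figures imply a steep compound growth rate. A quick back-of-the-envelope check in Python, assuming the span runs from the start of 2013 through 2017:

start_value = 5e9   # colocation market at the start of 2013, per Synergy Research Group
end_value = 30e9    # projection for 2017
years = 4           # assumed span

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied compound annual growth rate: {cagr:.0%}")  # roughly 57%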
Moving to a colocation facility makes more sense than staying on-premises or pushing capacity wholly into cloud, Longbottom said, leading to this big colocation growth. The hybrid model combines all three deployment types, managing resources in-house, in a colocation facility and in the cloud.
"A vast majority of vendors in 2013 were fighting the hybrid model," Longbottom said. "But we saw a lot more of them embracing this throughout 2014."

Hyperscale IT

Companies like Google, Microsoft, Amazon and Facebook spent billions of dollars on new facilities and server infrastructure to power their workloads in 2013 and 2014.
"Hyperscale represents 30% of computing in the world," Eastwood said. He expects investment in Webscale and hyperscale data centers to grow by 20% in the next four to five years.
Pressure from hyperscale IT companies was expected to drive server types, variety and complexity in 2014, and this change is occurring.
Companies began to build their own servers and forced manufacturers to reevaluate what they needed to refresh, said David Cappuccio, managing VP and chief of research on infrastructure at Gartner.

The movement of SDx

The software-defined everything (SDx) movement made most IT trend watchlists in 2014.
"We've seen people starting to realize some evolution of tools on the server side and applying the same tenants to hardware," said Pete Sclafani, CIO of 6connect Inc., a data center consulting firm in Palo Alto, Calif.
Vendors got on the software-defined product train in 2014.
"Software-defined everything is coming from every single vendor, even vendors who don't sell these kinds of products," Longbottom said.
For the most part, however, SDx remained a lab concept in 2014, not a real mainstream data center technology.

Converging upon CI

"In 2014, converged infrastructure [CI] became the go-to alternative to standalone systems," said Christian Perry, senior analyst and content manager at Technology Business Research (TBR) Inc., in Hampton, N.H.
While experts predicted that CI would go mainstream in 2014, Perry said that the method of integrating compute, storage and networking for data centers was mainstream in the year prior, and that meaningful adoption began well before 2013.
TBR's research found that from 2013 to 2014, the opportunity for CI adoption was $3.8 billion in the U.S. That opportunity will jump to between $7 billion and $8 billion in the U.S. from 2014 to 2015, and total $17.8 billion in the global market, TBR reports.
"Customers are excited about their environments and doing IT in a different way," Perry said.
But IT shops that adopted CI didn't necessarily replace legacy infrastructure. TBR reports that 35% of U.S. companies still run the bulk of enterprise workloads on legacy data center infrastructures, complementing it with converged systems for new projects and workloads. In the U.S., about 27% of converged infrastructure users replaced existing systems with the new technology; 38% do a bit of both.
Limited adaptability was one of the concerns about CI in 2014, a danger that depends on the use case.
"Systems aren't deployed to handle all workloads," Perry said, with the exception of VCE's VBlock. If converged systems are purchased for specific workloads, then it's not limited, he said.

All about the data

Data generation shows no signs of slowing, according to an IDC study, spurring more big data storage and analytics in 2014. The abundance of data led the data center into a storage spin.
The price of solid state drives (SSDs) decreased in 2014, Sclafani said, and the price drop provides more quick storage expansion options for data centers.
But in the next two years, expect to see a broader strategy around solid state storage, Sclafani said.
In 2014, IT pros saw hybrid local-storage options -- some hard disk and some solid state -- as a budget-friendly way to tackle data storage and retrieval speed. But hybrid storage would have been even more popular if SSDs had been more expensive, Sclafani said. Tiered/hybrid is still in the running, but now you can put more data on SSDs without busting the budget, making them more desirable for the data center.
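A hybrid setup typically keeps frequently accessed data on the SSD tier and leaves everything else on spinning disk. The simplified Python sketch below shows that placement logic; the threshold and names are invented for illustration and are not taken from any particular product.

def choose_tier(accesses_per_day: float, size_gb: float,
                ssd_free_gb: float, hot_threshold: float = 50.0) -> str:
    # Place hot data on the SSD tier while capacity allows; everything else goes to HDD.
    if accesses_per_day >= hot_threshold and size_gb <= ssd_free_gb:
        return "ssd"
    return "hdd"

# Example: a busy database file versus a rarely touched archive.
print(choose_tier(accesses_per_day=500, size_gb=200, ssd_free_gb=800))  # ssd
print(choose_tier(accesses_per_day=2, size_gb=1000, ssd_free_gb=800))   # hdd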
Enterprise SSD use was strong in 2014, and flash prices stayed weak after dramatic drops in 2013, according to DRAMeXchange, a research division of TrendForce Corp., a company that tracks memory technologies.
"In the coming year, the price will drop more against consumer technology," Sclafani said. "People are negotiating harder for SSDs because they know the benefits and the price comparisons."

Saturday, January 24, 2015

"Congratulations to the winners of the 2014 Data Center Excellence Award

Norwalk, CT, December 18, 2014 - TMC, a global, integrated media company helping clients build communities in print, in person and online, announced today the winners of the 2014 Data Center Excellence Award, presented by infoTECH Spotlight.

"Congratulations to the winners of the 2014 Data Center Excellence Award," said Rich Tehrani, CEO, TMC (News - Alert). "Data centers are critical to the success of any businesses today.  Small or large, every business relies on data centers to host their applications and data, either on their own premises or in the cloud. The award recipients are innovators within their space and we look forward to seeing their continued excellence."The 2014 infoTECH Spotlight Data Center Excellence Award recognizes the most innovative and enterprising data center vendors who offer infrastructure or software, servers or cooling systems, cabling or management applications.

2014 Data Center Excellence Award Winners
Company: Verne Global
Product: Green Data Center Campus
For more than 20 years, TMC has been honoring technology companies with awards in various categories. These awards are regarded as some of the most prestigious and respected honors in the communications and technology sector worldwide. Winners represent prominent players in the market who consistently demonstrate the advancement of technologies. Each recipient is a verifiable leader in the marketplace.
About InfoTech Spotlight
InfoTech Spotlight brings extensive daily content focused on information technology. Visitors will find free industry news, communities, channels, blogs, feature articles, videos, whitepapers and other resources. The site keeps readers informed about developments across topics including software, hardware, security and networking. InfoTech Spotlight is powered by TMCnet, the leading communications and technology site in the world, attracting two million unique visitors monthly according to Webtrends. Please visit infoTECH Spotlight for more information.
About TMC
TMC is a global, integrated media company that supports clients' goals by building communities in print, online, and face to face. TMC publishes multiple magazines, including Cloud Computing, M2M Evolution, Customer, and Internet Telephony. TMCnet is the leading source of news and articles for the communications and technology industries, and is read by as many as 1.5 million unique visitors monthly. TMC produces a variety of trade events, including ITEXPO, the world's leading business technology event, as well as industry events: Asterisk World; AstriCon; ChannelVision (CVx) Expo; Cloud4SMB Expo; Customer Experience (CX) Hot Trends Symposium; DevCon5 - HTML5 & Mobile App Developer Conference; LatinComm Conference and Expo; M2M Evolution Conference & Expo; Mobile Payment Conference; Software Telco Congress; StartupCamp; Super Wi-Fi & Shared Spectrum Summit; SIP Trunking-Unified Communications Seminars; Wearable Tech Conference & Expo; WebRTC Conference & Expo III; and more. Visit TMC Events for additional information.
TMC Contact                                                                                                      
Rebecca Conyngham
Marketing Manager
203-852-6800, ext. 287
rconyngham@tmcnet.com