Monday, February 9, 2015

Commodity Systems in the Coming Data Center

The Future of Commodity Systems in the Data Center

We’re at a very interesting point in the cloud infrastructure era. The modern data center continues to evolve from physical to virtual, with numerous management stacks being abstracted into the logical, or virtual, layer. Is another data center evolution around the corner? Will new kinds of compute platforms allow for a more open data center model? We’ve already begun to see a shift in the way data centers provide services. A new kind of commodity architecture is making its way into the consumer, cloud provider, and even service provider data center.
Customers are being given much greater options around what they want deployed and how they want it controlled. With all of this in mind, it’s important to see how commodity server platforms are already making an impact in your cloud architecture.


Although the conversation has certainly picked up recently, white-box and commodity offerings from a few data center providers are already a reality. In a recent article on DCK we outlined Rackspace’s dedicated servers that behave like cloud VMs. The offering, called OnMetal, provides cloud servers that are single-tenant, bare-metal systems. You can provision services in minutes via OpenStack, mix and match with other virtual cloud servers, and customize performance delivery. Basically, you can design your servers around specific workload or application needs, including optimizations for memory, IO, and compute. As for base images, Rackspace has CentOS, CoreOS, Debian, Fedora, and Ubuntu available.

Bare Metal Cloud and Commodity Servers

It’s important to note that Rackspace isn’t alone in this space. Internap has been offering powerful bare-metal servers, and so has SoftLayer, now an IBM company. These servers provide the raw horsepower you demand for processor-intensive and disk IO-intensive workloads. From there, you can configure a server to your exact specifications via a portal or API and deploy it in real time to any SoftLayer data center. With all of that in mind, the amount of bare-metal customization you can do within the SoftLayer cloud is impressive: storage, memory, network uplinks, power supplies, GPUs, and even mass storage arrays can be customized. You can even get an entire customized physical rack.

Cloud-Ready Platforms as Commodity

Big server vendors have certainly heard the message. Cloud, data center, and service providers are all looking at better ways to control performance, price, and the compute platform. So why not get in the game and help out? Recently, HP and Foxconn formed a joint venture to create a new line of cloud-optimized servers specifically targeting service providers. According to the press release, the new product line will specifically address compute requirements of the world’s largest service providers by delivering low total cost of ownership (TCO), scale, and services and support. The line will complement HP’s existing ProLiant server portfolio, including Moonshot. The idea is to cut out the software as well as the bells and whistles while still keeping HP support involved. From there, these servers aim at large service providers to help them address the data center challenges of mobile, cloud, and Big Data.
The cool part about HP’s Cloudline servers is that these are full, rack-scale systems, optimized for the largest cloud data centers and built on open industry standards. Vendors within the enterprise community are offering options as well. Storage solutions from X-IO Technologies focus on pure performance at 100 percent capacity. They build in high availability and redundancy, but don’t offer snapshotting, dedup, replication, thin provisioning, and a few other software-level storage features. The appliances do, however, carry a five-year warranty.
Of course, there will still be places where this doesn’t work. However, for a large number of organizations moving toward a more logically controlled storage platform this is very exciting. In some cases the hypervisor or software-defined storage layer can deliver enterprise storage features like encryption, dedup and more directly from the virtual control layer.

Future Cloud Ecosystem Will Be More Diverse

The growth of cloud computing has also allowed for greater diversity within the data center platform. We now have more hosting options, greater delivery capacities, and more support from powerful systems located all over the world. Adoption of bare-metal and commodity systems will certainly continue to grow. Fueled by new concepts around the Internet of Things and mobility, data centers will simply have to support more users carrying far more data.
Consider this from a recent Cisco Service Provider forecast: globally, 54 percent of mobile devices will be smart devices by 2018, up from 21 percent in 2013. The vast majority of mobile data traffic (96 percent) will originate from these smart devices by 2018. As with everything in technology, we will continue to see systems evolve to meet modern demands. Vendors like Cisco, HP, Dell, and others – who serve the more traditional server market – will need to evolve alongside organizations seeking a more “commoditized” approach to data center architecture.
As modern organizations take on new challenges around cloud and content delivery, more options will make the design and architecture process a bit easier. In many cases, you simply need raw power, without any software-based bells and whistles. This is becoming more and more the case as software-defined solutions and virtualization help abstract the logical layer from the physical platform. We can now control resources, route traffic, and manage users from the hypervisor and the cloud. This allows the underlying hardware to focus solely on resource delivery, leaving the management layer elsewhere.

Sunday, February 8, 2015

Leaf-Spine Architecture for the Data Center

After years of using the tree architecture, networks are now shifting to the leaf-spine architecture, particularly in the data center.

Distributed Core/Leaf-Spine Network Architecture: An Intro

By Rajesh K
Distributed Core/Leaf-Spine network architecture is catching on in large data center and cloud networks due to its scalability, reliability, and better performance (versus 3-tier Core-Aggregation-Edge tree networks). Maybe it’s time for enterprises and smaller networks to consider implementing Distributed Core/Leaf-Spine networks, as the architecture enables companies to start small and scale up massively. Here’s a short introduction.

[Diagram: Distributed Core/Leaf-Spine network architecture]

A basic architecture diagram for Distributed Core/Leaf-Spine networks is shown above. The top layer has spine switches and the layer below has leaf switches. Servers/storage equipment, or Top-of-Rack (ToR) switches, connect to the leaf switches, as shown at the bottom of the diagram.
Every leaf switch connects to every spine switch, but leaf switches are not connected to one another, and neither are spine switches.
It is possible to have a simple Distributed Core network with 4 leaf switches and 2 spine switches (as shown above). If each leaf switch has 48 access ports and 2 uplinks, the total number of servers you can connect in this configuration is 48 x 4 = 192. You can expand the network quickly by adding leaf and spine switches – more than 6,000 servers can be connected to multiple leaf/spine switches with massive backplane capacity.
The capacity and expandability of the network depend on the number of ports on the spine switches and the number of uplinks on the leaf switches. With Leaf-Spine/Distributed Core networks, you can design either a non-blocking architecture or an over-subscribed architecture, depending on your requirements and budget.
The number of links between leaf and spine switches = number of leaf switches x number of spine switches. As the network expands, the number of links grows quickly. In Distributed Core/Leaf-Spine architecture, all links carry data traffic, unlike Core-Distribution-Access networks where redundant links are disabled by STP. This network can be implemented at L2 using TRILL or SPB (Shortest Path Bridging); more commonly, it is implemented at L3 using ECMP with BGP or OSPF.
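The capacity arithmetic above can be sketched in a few lines. The port counts and link speeds below are illustrative assumptions, not figures from any specific vendor:

```python
def fabric_stats(leaves, spines, access_ports, downlink_gbps, uplink_gbps):
    """Capacity figures for a leaf-spine fabric where every leaf
    connects to every spine with one uplink."""
    server_ports = leaves * access_ports     # servers the fabric can attach
    fabric_links = leaves * spines           # total leaf-to-spine links
    # Oversubscription: access bandwidth entering a leaf versus uplink
    # bandwidth leaving it toward the spines.
    oversub = (access_ports * downlink_gbps) / (spines * uplink_gbps)
    return server_ports, fabric_links, oversub

# The example from the text: 4 leaves with 48 access ports each, 2 spines;
# 10G server links and 40G uplinks are assumed for the oversubscription math.
servers, links, ratio = fabric_stats(leaves=4, spines=2, access_ports=48,
                                     downlink_gbps=10, uplink_gbps=40)
print(servers, links, ratio)  # 192 servers, 8 fabric links, 6.0:1 oversubscribed
```

A ratio of 1.0 would be the non-blocking design mentioned above; anything higher is an over-subscribed (cheaper) design.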
Advantages:
  1. Low-cost 1U or 2U spine switches can be used instead of expensive chassis-based core switches.
  2. You can start small and expand the spine/leaf network by adding more switches when required, without discarding the existing setup.
  3. Several networking vendors make specialized leaf/spine switches.
  4. The Distributed Core network can be configured for maximum redundancy and resiliency: even if a spine switch fails, the result is performance degradation rather than a service outage.
  5. Distributed Core networks can achieve higher throughput/bandwidth and connect more servers than Core-Aggregation-Edge networks.
  6. Leaf/spine networks handle both East-West traffic (server to server: cloud computing, Hadoop, etc.) and North-South traffic (web content, email, etc.) efficiently. The traditional networking model suits mainly the latter, and its expansion is limited.
  7. Standards-based protocols can be used to implement leaf-spine networks, even in a multi-vendor setup, though some vendors have developed proprietary protocols/fabrics as well.
  8. Distributed Core networks enable containerized (and expandable) data centers.
  9. Networks can scale up/down/out massively and quickly.

Monday, February 2, 2015

What Is the ROI of DCIM?

Where is the ROI for DCIM?

By Moiz Vaswadawala, 27-Jan-2015

Many CIOs are still hesitant to deploy a DCIM solution for their enterprise datacenters because they find it difficult to determine the ROI on it. However, those who have gone for a DCIM solution felt that it met or exceeded their ROI expectations.

CIOs owning and operating legacy datacenters face increasing pressure to reduce costs while increasing availability. Some opt to colocate with third-party datacenter service providers. However, a large majority are bucking the trend and opting to expand or build new. Here, then, is an opportunity to innovate by introducing Datacenter Infrastructure Management (DCIM) software and to derive benefits similar to those many service providers have gained by adopting DCIM.
Briefly, DCIM software mitigates risks of failures while at the same time helps to avoid over-provisioning. Fortunately, DCIM investment pays back in 12-18 months. Besides ensuring almost zero failures, organizations deploying DCIM have derived tremendous financial benefits—by way of reduced capital costs and lower power consumption. As a bonus, organizations have also reported higher asset utilization and longer life of their equipment and datacenter.
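The 12-18 month payback claim is easy to sanity-check with a simple model. The license cost and monthly savings below are made-up inputs for illustration, not figures from the article:

```python
def payback_months(upfront_cost, monthly_savings):
    """Months until cumulative savings cover the upfront investment."""
    if monthly_savings <= 0:
        raise ValueError("savings must be positive for payback to occur")
    return upfront_cost / monthly_savings

# e.g. a hypothetical $150k DCIM rollout saving $10k/month
# in power and operations costs:
print(round(payback_months(150_000, 10_000)))  # 15 months, inside the 12-18 range
```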
By considering only three of the many challenges faced by datacenter managers, decision makers can arrive at a business value of the DCIM solution. 
Putting a Number on the Savings
The challenge of putting a dollar figure on improved productivity, efficiency, and business agility is that these gains materialize only when inefficiencies are eliminated. While these are worthy goals, many business managers treat them as soft savings rather than real money that can be credited to the bank, even though customers have gone on record about experiencing an ROI – such as a US telecom company that saw returns in less than 11 months in the first phase.
Those efficiency-related savings are real, however, so it is important that the DCIM evaluation team reach an agreement with key decision makers and business managers on how to account for this crucial element and take soft savings into account.
Where are the Savings in DCIM?
When a DCIM solution is implemented, it addresses three business challenges, which makes the ROI calculation simpler: reducing operating expenditure, sweating existing investments, and increasing the availability of business IT services.
Reducing Operating Expenditure:  The quickest payback can come by reducing power consumption. Datacenters are one of the highest consumers of power in any organization. Many organizations lack a structured power management practice for their datacenters. 
DCIM solutions give detailed visibility into how each device is connected in the power chain, then monitor and report the efficiency (or inefficiency) of power consumption, in terms of Power Usage Effectiveness (PUE), at various stages in the power-chain map. This helps indicate where action to plug inefficiencies in design, equipment, or management of the setup would have the greatest impact on power savings.
The savings show up as a reduced PUE and as the ability to raise datacenter temperatures, which cuts cooling power costs.
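The PUE-driven savings can be made concrete with some arithmetic. PUE is the ratio of total facility power to IT power, so the facility bill scales linearly with it; the IT load, tariff, and PUE figures below are hypothetical:

```python
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw, pue, price_per_kwh):
    """Annual facility energy bill: total power = IT load x PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * price_per_kwh

# A hypothetical 500 kW IT load at $0.10/kWh, with DCIM-guided fixes
# improving PUE from 1.9 to 1.6:
before = annual_energy_cost(500, 1.9, 0.10)
after = annual_energy_cost(500, 1.6, 0.10)
print(round(before - after))  # 131400 dollars saved per year
```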
Sweating Out Existing Investments: With visibility into all IT and facilities infrastructure and its utilization, assets can be managed across their full life-cycle. This enables management to plan the growth of IT infrastructure for business growth in a more informed manner, avoiding over-provisioning and saving on capital investments, or deferring them to a later date.
DCIM solutions’ capabilities—asset management, capacity management, and reporting—convert into savings as they utilize existing infrastructure to its fullest and provide data to plan for future investments.
The Functionalities of DCIM:
  • Asset management for IT and critical facilities infrastructure.
  • Capacity management planning and threshold-based reporting.
  • “What-if” functionality enables operations and business teams to get visibility of a could-be situation for various scenarios.
Savings Example:
  • Deferred or avoided new datacenter construction or retrofit investment versus a previously considered new build or retrofit capital expenditure. 
  • Savings in cost of capital.
  • Benefits from power utility provider and company interaction – peak time shaving, better utility pricing agreements due to more accurate power use forecasting.
Higher Availability of Business IT Services: IT enablement is one of the biggest drivers of business growth. Round-the-clock availability of business IT services has become an absolute necessity in running a business. Managers have to make better use of what they already have to deliver better, faster, and more reliable IT services within the same budget. Datacenter IT and facilities uptime is therefore crucial to delivering this business-services uptime at all times.
DCIM solutions proactively alert the datacenter operations and management teams on parameters related to utilization, system errors, and violations of threshold parameters as defined by business and operational needs.
Timely maintenance of all systems and sub-systems of datacenter facilities results in higher uptime of those devices. DCIM solution provides pro-active alerts and regular reporting for maintenance schedules and adherence.
The Functionalities of DCIM:
  • Asset Management with Maintenance Management System
  • Monitoring and reporting of critical infrastructure health status
Savings Example: 
  • Lower reporting/analysis time. 
  • Reduced audit time (Third party costs).
  • Decreased reporting (capacity and regulatory) times (reports how often, on what, to whom).
  • System administrator time decreased (improved productivity): administration versus reporting time.
  • Increased availability – cost of downtime (dollars per minute).
  • Number of faults detected, faster diagnosis for shorter downtimes.
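The "cost of downtime (dollars per minute)" item above translates directly into a savings number. The downtime minutes and per-minute cost here are illustrative assumptions:

```python
def downtime_cost(minutes_per_year, cost_per_minute):
    """Annual cost of outages at a given per-minute business impact."""
    return minutes_per_year * cost_per_minute

# If proactive DCIM alerting and faster diagnosis cut annual downtime
# from 90 minutes to 30 minutes, at an assumed $5,000 per minute of outage:
saved = downtime_cost(90, 5_000) - downtime_cost(30, 5_000)
print(saved)  # 300000 dollars per year
```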
The key is to find a partner experienced in integrating DCIM into the datacenter and its business processes, and who will work to understand how your datacenter operates.
The right solution, deployed simply and purposefully, will yield faster results. This needs to be paired with a collaborative approach between the IT department and the facilities function to maximize the ROI of the initiative.

The author Moiz Vaswadawala is an Advisor to GreenField Software, an intelligent Infrastructure Management firm focused on DCIM. Moiz has managed and led Datacenter and Cloud Computing business for a global IT services company.

Why Choosing Your Data Center's Location Is So Important

Why Your Data Center's Total Cost of Ownership and Location Matter

Selecting a good location for your data center is a critical element of data center planning, as deciding where to build and maintain a facility directly impacts the total cost of ownership (TCO) over the lifetime of the data center.
In looking to either purchase an existing facility or build a new data center, there's an exhaustive list of factors to be weighed and analyzed before you select a site. To that end, CIO.com spoke to industry leaders to learn more about considerations that range from the probability of basing your data center in an area where a natural disaster could occur to the availability of utilities and the cost of energy.

Key Data Center Location Considerations: Expenses, Expenses and Expenses

According to data center solutions provider Lee Technologies, a subsidiary of Schneider Electric, one basic mistake organizations make is failing to take TCO into account. In its report, The Top 9 Mistakes in Data Center Planning: The Total Cost of Ownership Approach, Lee Technologies recommends that the best approach is to focus on three basic TCO parameters: capital expenses, operations and maintenance expenses, and energy costs.
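The three TCO parameters named above can be folded into a bare-bones comparison model. All of the dollar figures below are hypothetical planning inputs, not data from the report:

```python
def lifetime_tco(capex, annual_om, annual_energy, years):
    """Total cost of ownership: build cost plus recurring
    operations/maintenance and energy over the planned lifetime."""
    return capex + years * (annual_om + annual_energy)

# Comparing two imaginary candidate sites over a 15-year life:
# site B costs more to build but sits near cheaper power.
site_a = lifetime_tco(capex=40e6, annual_om=2.0e6, annual_energy=3.0e6, years=15)
site_b = lifetime_tco(capex=45e6, annual_om=2.0e6, annual_energy=1.8e6, years=15)
print(site_a, site_b)  # site B comes out cheaper over the lifetime
```

This is why the article stresses energy rates and utility deals: over a long lifetime, recurring costs can dwarf the difference in build cost.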
Keith Lambert, senior vice president of design, build and construction for Lee Technologies, says the company looks at potential sites for clients looking to build new or retrofit an existing structure. Depending on the organization's needs, there's a number of different ways to approach site selection.
"We're mainly interested in site selection if there are tax incentives in the area—and only if they are true incentives," Lambert says. Many communities, especially those in rural areas, offer incentives aimed at attracting investment and creating jobs in construction as well as information technology.
Utility costs matter as well, Lambert adds. "For example, we want to know the cost of water per gallon, the electricity rate and also the cost of [sewer] discharge."
There's also real estate, infrastructure, materials and labor. At the moment, says David Eichorn, data center practice head for Akibia, which offers services to improve the availability, reliability and performance of data centers, Oregon is a popular data center location. That's due to a combination of a highly skilled labor force, favorable climate and lower cost of living than, say, neighboring California. (For many of the same reasons, Canada is also attracting the attention of firms looking to build a data center.)
Finally, during the site selection process, it's critical to examine network connectivity in the area and find out how close to the facility it runs. Depending on the complexity of the site and redundancy levels, the availability of multiple power sources may be a key factor for some companies.

Don't Let Energy Costs Overwhelm Your Data Center

Power doesn't come cheap. Rob Woolley, senior vice president of critical environment services for Lee Technologies, says energy costs—and the types of deals you can get from various providers in the area—have become increasingly important over the past 10 years.
"The cost of energy and availability of utilities…is at the top of everyone's list of selection criteria," Woolley says, adding that green initiatives such as free cooling can have "a major impact on savings."
Make no mistake, utility costs can be the deal breaker when choosing a site. For Lee Technologies, energy cost is also a leading factor in getting a client to build in a specific area—especially if there's hydroelectric power or another energy source nearby that drives operational costs down. "Hydro is great power. Not only is it relatively inexpensive compared to other sources, but it's also very clean. There's very little carbon associated with hydroelectric power," Woolley says.
Akibia's Eichorn, for his part, says that green IT is one of the biggest changes in the industry. "In the past couple years, it's a positive trend that has taken hold in the data center industry."
Eichorn agrees that power has become increasingly a cost driver in data centers. At the same time, green initiatives have people talking about how to better manage power consumption. As a result, he says, there are many new techniques available to companies today.
"Companies use green, and they use it for different reasons," Eichorn says. "There is an emphasis on being environmentally conscious, but there's also the [monetary] value…that being green brings to the table."

Backup Data Center Shouldn't Be Too Close—Or Too Far

Most companies don't plan for just one data center at a time, Eichorn notes. Usually, it's two: a primary facility and a business continuity and disaster recovery redundant facility.
One of the biggest concerns, Eichorn says, is the proximity of the two data centers. Putting one facility in an area that's prone to natural disasters is risky enough, but if your data centers are too close, a hurricane, tornado or other big storm could take out both facilities, he says.
At the same time, if the data centers are too far apart, turn-around time suffers. In addition, putting facilities in another state, province or territory will increase the overall cost (and complexity) of capital and labor, which is an important consideration for small and medium-sized companies. (Larger firms with multiple offices, of course, have more options for where to base a data center.)
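The "too far apart" trade-off can be made concrete with a rough latency estimate: light in fiber travels at roughly 200,000 km/s (about two-thirds the speed of light in vacuum), and real fiber paths are longer than the straight-line distance, so this is a lower bound:

```python
def fiber_rtt_ms(distance_km):
    """Approximate round-trip time over a straight fiber path, in ms."""
    speed_km_per_s = 200_000  # light in fiber, roughly 2/3 of c
    return 2 * distance_km / speed_km_per_s * 1000

# 100 km apart vs. 2,000 km apart:
print(round(fiber_rtt_ms(100), 2))   # 1.0 ms: workable for synchronous replication
print(round(fiber_rtt_ms(2000), 2))  # 20.0 ms: usually asynchronous territory
```

This is one reason paired data centers are often sited far enough apart to avoid a shared disaster zone, but close enough to keep replication latency tolerable.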
In the end, Lambert says, most businesses establish a perimeter or short-list of ideal regions and go from there. If you narrow your site selection down to a few regions, then you can analyze the benefits of each location and plan your facility.
The good news for companies looking to expand facilities or invest in new data center builds is that the industry is more open today than in the past. Far from being proprietary and secretive, companies are now more interested in, and more willing to, share their data center experiences and knowledge with one another.
"When people bring knowledge and technology advancements to the table, it brings forth a more open environment and accessibility to green initiatives for everyone," Eichorn says.
Based in Nova Scotia, Canada, Vangie Beal has been covering small business, electronic commerce and Internet technology for more than a decade. You can tweet with her online @AuroraGG.