Monday, June 26, 2017

How to benefit from the same network structure used by social media giants

If you’ve heard the buzz in the networking world lately, or followed the back-to-back launches from Cumulus Networks, then you’ve probably heard the term “web-scale networking.”
But what does that actually mean?
The term web-scale networking is inspired by data center giants like Facebook and Google. The industry looked at data centers like theirs and asked, “what are they doing that we can mimic at a smaller scale?” By analyzing these organizations and the benefits they receive from their tactics, the term “web-scale” was born. Essentially, web-scale refers to the hyperscale website companies that have built private, efficient and scalable cloud environments.
Web-scale networking: a definition
Web-scale networking is simply a modern architectural approach to infrastructure. The differentiating components are taken from the key requirements that large data center operators use to build smart networks. Businesses can design cost-effective, agile networks for the modern era by adhering to these three constructs:
  • Open and modular – Edgecore
  • Intelligence in software – Cumulus Linux
  • Scalable and efficient – Edgecore + Cumulus Linux
These three constructs essentially comprise web-scale networking.
While compute has advanced by leaps and bounds with the convergence to private, public and hybrid clouds, networking has notoriously lagged behind. An open networking philosophy brings traditional networking up to par with those advancements. Open, web-scale networking delivers automation, accuracy and cost savings to the data center.
Tactical benefits of open, web-scale networking with Edgecore
Automation is more accessible, and more powerful, than ever
Open networking allows organizations to choose the ideal hardware and software for their budget and needs. This means you can use existing automation software or integrate something new. With an open environment, it’s easy to standardize protocols, identify issues and create a unified stack that communicates efficiently.
No more vendor lock-in
Have you ever wanted to upgrade certain components of your network to another vendor’s product, but refused to go the “rip and replace” route because of both cost and risk? Most organizations face this problem of vendor lock-in because it is too costly to switch systems and vendors. Web-scale networking lets you choose your network switches, cables, optics, applications and more, based on your needs and your budget.
Operational efficiency
By unifying the stack, customizing hardware and automating software, organizations can completely streamline their processes. Engineers can identify and fix issues more quickly. Operators can deploy faster. And organizations can multiply the number of Edgecore switches managed per operator. All of these benefits result in better DevOps, greater efficiency and lower TCO.

Thursday, June 22, 2017

Happy Eid al-Fitr 1438H

Friday, June 9, 2017


Key DCIM Functionality Considerations

The ecosystem of the data center has many potential points to monitor.
This is the second entry in a Data Center Frontier series that explores the ins and outs of data infrastructure management, and how to tell whether your company should adopt a DCIM system. This series, compiled in a complete Guide, also covers implementation and training, and moving beyond the physical aspects of a facility.
The following are key DCIM functionality considerations to take into account when choosing a system for your business or customers.

Energy Efficiency Monitoring

The ecosystem of the data center has many potential points to monitor. While it would be ideal to monitor everything, cost and value become part of the decision process. What gets monitored is typically driven by which stakeholders or departments are sponsoring the project, as well as by the age of the data center and how much monitoring is already in place. From the facility side, basic PUE information can be derived by instrumenting only two points in the power chain: the utility input energy and the output energy of the UPS (the IT energy).
PUE was originally based on power (kW) draw, an instantaneous measurement. In 2011, PUE was updated to be calculated from annualized energy (kWh, measured or averaged over 12 months of operation). This gives a more accurate picture of yearly performance than spot power measurements, which vary widely depending on when they are taken. As the figure below shows, this requires energy metering at the utility input, as well as at the three possible points of IT energy measurement, beginning at the output of the UPS (PUE category 1).
PUE points of measurement
From an IT perspective, there are also many advantages to monitoring power distribution downstream from the UPS, such as at the floor-level PDUs (PUE category 2) or at the rack (PUE category 3), including identifying cascading failures and PDU overloads. However, this is also where the age of the data center infrastructure becomes a factor.
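To make the arithmetic concrete, here is a minimal sketch of the annualized PUE calculation described above: total facility energy divided by IT energy, each summed over 12 months of meter readings. The function and the sample numbers are illustrative only, not drawn from any particular DCIM product.

```python
def annualized_pue(facility_kwh_by_month, it_kwh_by_month):
    """Annualized PUE: total facility energy divided by IT energy,
    each summed over 12 monthly meter readings (kWh)."""
    if len(facility_kwh_by_month) != 12 or len(it_kwh_by_month) != 12:
        raise ValueError("annualized PUE needs 12 monthly readings")
    return sum(facility_kwh_by_month) / sum(it_kwh_by_month)

# Hypothetical meter data: a summer cooling bump on the facility side,
# steady IT load measured at the UPS output (PUE category 1).
facility = [160_000 if m in (5, 6, 7) else 150_000 for m in range(12)]
it_load = [100_000] * 12
print(annualized_pue(facility, it_load))  # prints 1.525
```

Because the calculation averages over a full year, the seasonal cooling peaks are reflected in the result rather than depending on when a single spot reading happened to be taken.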
In newer data centers, floor-level PDUs typically have branch circuit monitoring, which can be remotely polled by DCIM (or a BMS). Many older data centers do not have this functionality in the floor-level PDU. This leaves two options: retrofit branch circuit monitoring, or utilize so-called “intelligent” rack-level PDUs (power strips). The first option, retrofitting the PDUs, falls under the jurisdiction of the facilities department; it can be difficult and disruptive, and in some cases may require a power shutdown.
The second option has long been the more popular option, typically driven and deployed by the IT group. In many cases these rack PDUs have Ethernet connectivity and can be easily polled by DCIM systems. In other cases, there may only be lower cost, locally metered rack PDUs or simple “power strips”, neither of which have any remote connectivity. This leaves the option of replacing these with intelligent PDUs or installing “in-line” power monitoring with Ethernet connectivity, which can be polled by the DCIM platform.
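As a rough illustration of how a DCIM platform might ingest readings from an Ethernet-connected rack PDU, the snippet below parses a JSON payload into per-outlet current draws. The payload shape and field names here are assumptions made for the example; real intelligent PDUs expose their readings through vendor-specific SNMP MIBs or web APIs.

```python
import json

def parse_pdu_payload(payload):
    """Parse a (hypothetical) JSON payload from an intelligent rack PDU
    into a mapping of outlet name -> current draw in amps."""
    data = json.loads(payload)
    return {o["outlet"]: float(o["amps"]) for o in data["outlets"]}

# Example payload as such a PDU might return it (shape is an assumption):
sample = '{"outlets": [{"outlet": "A1", "amps": "2.4"}, {"outlet": "A2", "amps": "1.1"}]}'
readings = parse_pdu_payload(sample)
total_amps = sum(readings.values())  # total draw on this rack PDU
```

A DCIM poller would run this kind of parsing on a schedule, storing per-outlet values so that trends and thresholds can be evaluated over time rather than from one-off readings.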

Power Distribution Monitoring

Hidden Exposure of Cascade Failure

Many of the more basic power monitoring functions may already be done to one degree or another by some BMS systems. However, in many older data centers, there is no branch circuit monitoring installed, resulting in the need for periodic manual branch circuit surveys (by electricians with clamp-on ammeters and a clipboard). This is typically done to try to avoid circuit breaker overloads or perhaps as a rudimentary form of power capacity planning to see if or how much IT equipment could be added to a cabinet. Even in a relatively well-organized and managed data center, this information may not be readily available, communicated or cohesively correlated within and between the facilities, operations and IT departments. The lack of real-time power and energy monitoring at the rack can delay or disrupt a technical refresh or, worse yet, expose the rack to failure if the branch circuit protection trips when more or new IT equipment is installed.
This hidden exposure can be seen in the figure below, which depicts a potential scenario wherein the typical manual “clamp-on” ammeter is used to measure (A-B) redundant branch circuits to a rack at one point in time, while the plot lines show continuous current measurements over time for the A and B circuits, as well as the sum (A+B) of both.
In the figure below, at the time the manual readings were taken, it would seem as if the total current drawn across the A and B circuits was only 14 amps (7 A + 7 A). However, the continuous current plot shows that at multiple times during the day the sum of the A+B circuits actually exceeded the 16 amp (80%) threshold, which is the maximum current that should be safely drawn from a 20 amp branch circuit per the US National Electrical Code (NEC). Under normal circumstances, when both circuits are active, this poses no problem. However, should one of the branch circuits be lost, either accidentally or during a maintenance procedure, the remaining active circuit could trip during the peak current excursions, since it would then be carrying the entire load (slightly above 18 amps). This represents a lurking exposure to cascade failure.
Manual survey vs. continuous current monitoring
As the example above shows, these peaks would be very difficult to discover even with regular manual survey snapshots. This exposure to cascade failure of redundant power paths can only be revealed by continuously monitoring and recording the current on each branch circuit (A and B) and setting threshold alarms for when the sum exceeds the prescribed limit. DCIM can help monitor and manage these thresholds and alerts, minimizing the potential for cascading failures.
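The threshold logic described above can be sketched in a few lines. This is an illustrative check, not a feature of any particular DCIM product; the sample currents mirror the scenario in the figure, where 7 A spot readings on each feed hide peaks whose sum exceeds the 16 A derated limit.

```python
def redundant_pair_alarms(samples_a, samples_b, breaker_amps=20.0, derate=0.8):
    """Return the sample indices where the combined draw of redundant feeds
    A and B exceeds the derated breaker limit (80% of rating, per NEC),
    i.e. the moments when losing one feed would overload the survivor."""
    limit = breaker_amps * derate  # 16 A for a 20 A branch circuit
    return [i for i, (a, b) in enumerate(zip(samples_a, samples_b))
            if a + b > limit]

# A manual survey catching samples 0 or 2 would read a safe 7 A per feed,
# but the continuous record reveals two excursions above 16 A combined:
a = [7.0, 9.5, 7.0, 8.8]
b = [7.0, 8.7, 6.5, 8.0]
print(redundant_pair_alarms(a, b))  # prints [1, 3]
```

In a real deployment the same comparison would run against the continuously recorded branch circuit data, raising an alert the moment the A+B sum crosses the limit.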

High-Power Rack PDU Overloads

While the figure above illustrates the hidden exposure of manual branch current surveys, another concealed risk lurks in high-power rack PDUs, which contain multiple circuit breakers to prevent grouped outlet banks from overloading.
While many data centers may take regular weekly or monthly readings of rack power draw, intermittent short-term peak current draws and potential branch circuit overloads will go undetected. Without knowing how much current is being drawn in real time and trending it continuously, just adding a single server is like playing “Russian Roulette,” since it could result in a tripped circuit breaker.
The ability of DCIM to provide continuous real-time power monitoring, and to detect and display these peak power conditions, can help mitigate the risk of an outage.
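As a sketch of the kind of check continuous monitoring enables, the function below decides whether a new device fits on a branch circuit using the trended peak rather than a single spot reading. The names and numbers are illustrative assumptions, not part of any DCIM product.

```python
def can_add_load(peak_amps_trend, new_device_amps, breaker_amps=20.0, derate=0.8):
    """Check a new device against the derated breaker limit using the
    trended peak current, not a single spot reading."""
    return max(peak_amps_trend) + new_device_amps <= breaker_amps * derate

# A monthly spot reading of 10 A suggests plenty of headroom, but the
# continuously trended peak of 15.2 A says a 2 A server will not fit:
trend = [10.0, 12.5, 15.2, 11.0]
print(can_add_load(trend, 2.0))  # prints False: 15.2 + 2.0 exceeds the 16 A limit
```

This is exactly the "Russian Roulette" scenario above: judged by the spot reading alone, the add looks safe; judged by the recorded peaks, it risks tripping the breaker.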



5 Key Considerations for Airflow Containment Design

By implementing an effective airflow containment strategy, it is possible to see optimized cooling system efficiency, improved Power Usage Effectiveness (PUE) and additional equipment capacity, all without having to expand a facility’s footprint. (Photo: Simplex)
In this week’s Voices of the Industry, Ward Patton, Critical Environment Specialist at Simplex Isolation Systems, unpacks five key considerations for airflow containment design. 
Data center containment systems can provide great benefits­ over traditional open data center designs. By implementing an effective airflow containment strategy, it is possible to see optimized cooling system efficiency, improved Power Usage Effectiveness (PUE) and additional equipment capacity, all without having to expand a facility’s footprint.
Ward Patton, Critical Environment Specialist at Simplex
Like any successful system, airflow containment design must consider many different factors to ensure a solution that achieves the required cooling, while remaining flexible for expansion or changes and integrates well with the current infrastructure. Here are five key considerations to keep in mind when planning a data center containment project.
  1. Hot Aisle or Cold Aisle Containment
Hot air and cold air containment are the two high-level methods of an airflow containment strategy, and there is an ongoing discussion in the industry about whether it makes more sense to isolate the hot aisle or the cold aisle. Different data center experts advocate different theories, but realistically, the approach should be dictated by the existing infrastructure.
For example, what is the current air distribution type in the facility? This will play a critical role in deciding which approach is the best fit. Data centers with targeted return and flooded supply air distribution would benefit more from hot aisle containment, whereas data centers with targeted supply and flood return air distribution would see better results with cold aisle containment.
Additional factors that can dictate whether to isolate the hot aisle or the cold aisle include the depth of the raised floor plenum, the presence of overhead cabling, varied ceiling heights and support column locations. These are only some of the infrastructure constraints that need to be addressed and every case is site specific. An assessment of the existing conditions of the facility is essential to choosing the right containment solution for any data center.
  2. IT Equipment Arrangement
Once the infrastructure has been evaluated, the next step is to review the current IT equipment arrangement. While most data centers are depicted as having rows and rows of server racks of the same brand, shape and size, this is rarely the case. It is not uncommon to see offset rows consisting of multiple server rack brands in any number of shapes and sizes. This is especially true for legacy data centers that have undergone expansion, or that are the result of the consolidation of multiple facilities.
It is also recommended to go one step further and review any ergonomic challenges. Factors such as clearance and personnel traffic should be reviewed and planned for accordingly. End-of-aisle doors, for instance, should be configured to best fit the traffic patterns within the data center, making the space easier to work in when moving racks and other items in and out.
When evaluating an airflow containment system, it is critical to assess the amount of customization that will be needed for the containment system to perform optimally, especially if there is a lack of uniformity with the IT equipment.  While it may seem daunting to find a containment system that can accommodate all levels of variation, most innovative manufacturers can offer a customized and flexible solution to fit any new or retrofit application.
  3. Fire Detection and Suppression
 Fire suppression in the data center is complicated and involved. It is a good policy to involve the local Fire Marshal as soon as possible when planning an airflow containment system. The Fire Marshal enforces the local codes and standards in the jurisdiction and can play a critical role in providing the insight needed to achieve compliance from the very beginning.
Jurisdictions will require compliance with National Fire Protection Association (NFPA) standards 75 and 76, which ensure that fire suppression systems are in place and meet specific testing requirements. From a containment standpoint, these standards require data center facilities that use an airflow containment system to either have a fire suppression system that covers all areas of the contained aisles, or a containment system that integrates with the fire detection system. The former can be a costly and daunting undertaking, so many data center managers opt for a containment system that works with the current fire suppression infrastructure.
There are many reputable containment systems available today that are designed specifically for use under a fire suppression system, though the potential implications of the containment approach will still need to be considered. Vertical containment systems that incorporate softwall curtains, for example, would need to account for the required clearance space below the sprinkler level if the facility is equipped with a sprinkler-based fire suppression system; this would ensure full dispersal of water in the event of a fire. Such systems would also need to consider which softwall material is being implemented. It is not uncommon for the Fire Marshal to demand curtain materials that meet the stringent ASTM E-84 Class 1 rating for flame and smoke generation, so researching and selecting an appropriate softwall material is important.
Systems that feature ceiling partitions rather than softwall curtains will need to address how the facility’s fire suppression system will be accommodated. These structures generally include ceiling panels that are retractable, are equipped with a soft-drop system, or are designed to shrink and fall away when exposed to temperatures 15 to 20 degrees lower than the temperature at which the fire sprinklers would activate.
Whatever the containment approach, there are additional factors to consider when evaluating which system is the best fit.
  • A fail-safe system is key. Look for a containment system equipped for a facility power outage or Emergency Power Off (EPO) event. While many systems may tie into a dedicated emergency back-up supply, relying on a supplemental power system can’t be deemed fail-safe. Instead, consider a system that is inherently fail-safe, such as a gravity-reliant, electromagnetic droplink that would drop away when the power source is disconnected.  That way the fire suppression system would still operate successfully in the event of a power outage.
  • Consider equipment and personnel safety. Look for a system that won’t cause additional damage to equipment or prove harmful to personnel if deployed. Curtain systems for instance, should be equipped with a lanyard drop system and ceiling structures should retract or have a soft-drop feature to prevent damage when they are utilized.
  • Testability. Look for a system that can be tested and reset in the event of deployment. This is a requirement for NFPA 75 compliance.
When it comes to preventing fire within the data center, any cost of risk mitigation is justified when compared to the damage that would be incurred in an actual fire.
  4. Electrical Utility Incentives
There is a good chance that your electrical utility provider has a system of rebates and incentives for companies that take proactive measures to decrease power usage in their data centers. The utility might have certain stipulations in place such as approved contractors, approved equipment and other requirements. Take these considerations into account as you begin the design process, rather than after the airflow containment project is underway. There are often specific time windows to take advantage of these incentives. 
  5. Future Growth
 Understand that change is unavoidable and design the airflow containment based on that reality. Eventually changes in the data center will be required to accommodate growth or reconfigure the layout. When evaluating an airflow containment system, it makes sense to look for components and structures that are modular in design. Modular mounting hardware for curtains and modular end-of-aisle doors, for example, can be relocated or expanded upon if needed. Ultimately, containment systems that are modular in design will have a lower cost of ownership.
Overall, data centers are complicated and there are many factors that come into play when designing an effective airflow containment solution. When making plans for airflow containment, it is recommended to work with reputable manufacturers that can analyze the space and align airflow and containment objectives with the data center’s site-specific infrastructure and equipment requirements to secure the greatest efficiencies and enable modular, scalable and affordable growth.
Ward Patton is the Critical Environment Specialist at Simplex Isolation Systems. Simplex Isolation Systems designs and manufactures custom data center containment systems that are modular, expandable and high-performing. Simplex’s Containment Resource Guide can serve as an essential tool for planning a hot or cold aisle containment system.