Saturday, December 2, 2017

PRTG for monitoring Smart City server rooms

Why Are We Making Cities Smart?

Everything seems smart these days: smart phones, smart homes, smart watches and smart cities. It seems humans are devising more and more “Things” to think for us. Does that mean humans will become more stupid? Whatever your stance on this, the technological advancement behind it is recognizably impressive.

What Makes A City Smart?
According to Wikipedia, a city is smart if it integrates information and communication technology (ICT) and the Internet of Things (IoT) in a secure fashion to manage its assets.
The interconnectivity of these assets allows the city to monitor what’s going on in the city, how it’s evolving and how to enable a better quality of life.
Examples of things monitored in smart cities are:
  • Waste Management – monitoring the fullness of public waste bins around the city, so they are only emptied when full (saving costs and reducing congestion).
  • Parking Sensors – these show you availability of parking spots in a city. There are apps that tap into this data, making it easier for drivers looking to park. Not only saving us time, it saves on fuel, and reduces emissions and congestion.
  • Security – integrated sound sensors can detect gunshots and automatically report them to the authorities, reducing the necessary involvement of citizens while making the city feel safer.
The possibilities are endless and the technology advances minute-by-minute.

Who Builds Smart Cities And Why?

It’s city leaders who are recognizing the potential of technology to make their cities safer, more convenient and more comfortable for their residents. In some instances, it may also be for prestige and branding.
Whatever the agenda, it’s changing the lives of city-dwellers and putting additional pressure on IT infrastructures supporting the interconnectivity of ‘Things’. There’s increased traffic and data being transferred, which impacts load and bandwidth.

What Are We Doing In This Space?

Our partner Daya Cipta Mandiri is funding a Smart City Center show room in Mangga Dua Square Jakarta, Indonesia, supporting the concept of smart cities as a means to enhance the quality of life.
Their mission is to educate, train and assist the local government in developing smart cities, and to raise public awareness.
Most of the smart city projects, they say, “start with infrastructure projects like CCTV and datacenters, which all require monitoring.” They install PRTG to monitor such infrastructures and develop custom dashboards to manage the volume of data they get back.

Here is one of the dashboards developed using PHP/Java and PRTG’s API.
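As a rough illustration of how such a dashboard can pull its data, here is a minimal Python sketch that queries PRTG's HTTP API for sensor status. The host, username and passhash are placeholders, and the exact columns depend on what your dashboard needs; check the API documentation of your own PRTG installation.

```python
import requests

# Placeholder connection details for a PRTG core server (assumptions, not from the article)
PRTG_HOST = "https://prtg.example.local"
PARAMS = {
    "content": "sensors",
    "columns": "objid,device,sensor,status,lastvalue",
    "filter_status": "5",          # 5 = "Down" in PRTG's status codes (verify for your version)
    "username": "dashboard_user",  # hypothetical read-only account
    "passhash": "0000000000",      # hypothetical passhash
    "count": "50",
}

def fetch_down_sensors():
    """Return the list of down sensors as dictionaries from PRTG's table.json endpoint."""
    response = requests.get(f"{PRTG_HOST}/api/table.json", params=PARAMS, timeout=10)
    response.raise_for_status()
    return response.json().get("sensors", [])

if __name__ == "__main__":
    for sensor in fetch_down_sensors():
        print(f"{sensor['device']} / {sensor['sensor']}: {sensor['status']}")
```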
It’s clear that as cities get smarter, IT must too. Smart cities have sensors monitoring things like parking spaces, waste bin capacity and security cameras, but who is monitoring the monitors?
Smart cities need to be equipped to manage the data load and connectivity of IT assets on the network if they are to uphold the convenience and security they promise their residents. It takes a human to recognize this and take action; a smart one.
source: https://blog.paessler.com/why-are-we-making-cities-smart

Wednesday, November 29, 2017

Top 10 Data Center Best Practices

Load density, air distribution, floor tile positioning; data center design is more complicated than ever, but with some best practice considerations, creating an efficient, reliable data center design is within your grasp. 
Let's explore 10 Important Data Center Best Practices:
  1. When designing your data center, consider initial and future loads, in particular part-load and low-load conditions.
  2. Lower data center power consumption and increase cooling efficiency by grouping together equipment with similar heat load densities and temperature requirements. This allows cooling systems to be controlled to the least energy-intensive set points for each location.
  3. As a long-time data center professional, I encourage all my customers to reference the "2011 ASHRAE Thermal Guidelines for Data Processing Environments" to review the standardized operating envelope for the recommended IT operating temperature. 
    • Identify the class of your data center to determine the recommended and allowable environment envelopes:
      • "Recommended" combines energy-efficient operation with high reliability
      • "Allowable" outlines boundaries tested by IT equipment manufacturers for functionality.
      • Keep in mind that operating outside the recommended envelope may cause server fans to operate at higher speeds and therefore consume more power.
      • Higher return air temperatures extend the operating hours of air economizers
      • Higher return air temperature improve cooling infrastructure efficiency, saving both energy and money.
  4. Implement effective air management to minimize or eliminate mixing air between the cold and hot air sections. This includes configuration of equipment's air intake and heat exhaust paths, location of air supply and air return and the overall airflow patterns of the room.  Remember to create barriers and seal openings to eliminate air re-circulation.  Supply cold air exclusively to cold aisles and pull hot return air only from hot aisles. 
  5. Under-floor and over-head cable management is important to minimize obstructions within the cooling air pattern.
  6. Using fan speed control to supply only as much air as the IT equipment requires can reduce fan energy use by up to 66% (a worked example follows this list).
  7. Carefully consider the location of floor tiles to optimize air distribution and prevent short-circuiting.
  8. Managing a uniform static pressure in the raised floor, by careful placement of the A/C equipment, allows for even air distribution to the IT equipment.
  9. Create a low pressure drop design to minimize fan power consumption by keeping ducts as large and short as possible, in addition to a generous raised floor. 
  10. Incorporate some form of economizer whenever possible to save money, energy, and the environment. 
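To see where a figure like the 66% in point 6 comes from, recall the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed. A small illustrative calculation, using assumed numbers rather than measured data:

```python
# Fan affinity law sketch: fan power scales roughly with the cube of fan speed.
# The 10 kW rating and 70% speed are illustrative assumptions, not measured data.
rated_fan_power_kw = 10.0   # power drawn at 100% speed (assumed)
speed_fraction = 0.70       # fan slowed to 70% to match actual IT airflow demand

power_at_reduced_speed = rated_fan_power_kw * speed_fraction ** 3
savings_pct = (1 - speed_fraction ** 3) * 100

print(f"Power at {speed_fraction:.0%} speed: {power_at_reduced_speed:.2f} kW")
print(f"Energy saving vs. full speed: {savings_pct:.0f}%")  # roughly 66%
```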
By utilizing these data center best practices, I believe you are well on your way to a successful design! 
source: https://blog.stulz-usa.com/data-center-best-practices

Sunday, August 20, 2017

What's the difference between a data center and the cloud?

What is the difference between a data center and the cloud?
A data center can be defined as a facility that incorporates components such as servers, communication media and data storage. It also contains the supporting systems essential to running a data center, such as power supply, backup systems, redundant communication connections, HVAC systems and security devices. It is an on-premises hardware solution where all resources are locally available and are typically run and maintained in-house.
A cloud, on the other hand, is a virtual infrastructure delivered over a local network or accessed at a remote location through the internet. Cloud services can be consumed on demand on a pay-per-use basis or as dedicated resources; this model is known as Infrastructure as a Service (IaaS). Within this environment, users can access computing, networking and storage resources on demand without owning any physical infrastructure. It is an off-premises form of computing accessed over the internet, with maintenance and updates handled by a third party.


DIFFERENCE SIMPLIFIED


Sunday, July 23, 2017

3 important things in your server room

On several occasions I have visited companies that own and operate data centers in Jakarta and the surrounding area, and there are always three things that should be their main concern.

Voltage




In a server room or data center, whether small or large, voltage is obviously a critical factor. Stable voltage is a prerequisite for equipment to run normally.
Stable voltage generally comes down to having a UPS. Today's UPS units are very sophisticated and are equipped with stabilizers, harmonizers and various other technologies. But make sure your UPS has monitoring capability, especially via SNMP.

With SNMP monitoring in place, the voltage and other parameters of the UPS can be watched easily. Some UPS models force you to use the vendor's own application, but it is strongly recommended to choose a UPS with an open protocol, typically SNMP or Modbus.

Both SNMP and Modbus report the voltage, the current and the state of the batteries. All of this is easy to summarize and present in a data center monitoring dashboard. Besides the dashboard, we also need trend data for voltage and battery condition over time, because battery capacity generally degrades in the long run.

Besides the incoming voltage to the data center, some operators also pay very close attention to the voltage at each rack. Why does this matter? Per-rack measurements also show the current and voltage drawn by each device. In some cases this can be used to identify servers that need to be replaced or maintained soon because of their age. To get per-rack measurements, we use a PDU (Power Distribution Unit) with per-port monitoring, which typically also exposes SNMP or Modbus so it can be accessed by applications outside the PDU.
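As an illustration of what SNMP monitoring of a UPS looks like in practice, here is a minimal Python sketch using the pysnmp library to read the output voltage object defined in the standard UPS-MIB (RFC 1628). The IP address and community string are placeholders, and some vendors expose these values under their own enterprise OIDs instead.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Placeholder UPS address and community string (assumptions for illustration)
UPS_IP = "192.0.2.10"
COMMUNITY = "public"

# upsOutputVoltage for output line 1, from the standard UPS-MIB (RFC 1628)
UPS_OUTPUT_VOLTAGE_OID = "1.3.6.1.2.1.33.1.4.4.1.2.1"

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=1),                    # SNMP v2c
        UdpTransportTarget((UPS_IP, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(UPS_OUTPUT_VOLTAGE_OID)),
    )
)

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    oid, value = var_binds[0]
    print(f"UPS output voltage: {value} V")  # UPS-MIB reports RMS volts
```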

Temperature and Humidity



In a server room, temperature may be the key parameter, but in some cases humidity is also important to measure. Simple temperature and humidity gauges are available that can just be mounted on a wall and read by eye, but more and more sites now use an EMS (Environment Monitoring System) to measure temperature and humidity.


Servers need to stay within an operating temperature range to work optimally, and when a certain high temperature is reached, some devices will shut themselves down automatically to avoid severe damage. A device temperature above 30 degrees Celsius is already very worrying. In Indonesia, 20-21 degrees is the standard for large data center rooms, while certain server rooms allow a maximum of up to 26 degrees.




It is a very good idea to separate equipment that contributes more heat than the servers themselves, which is why some server rooms / data centers keep the UPS in a separate room.

On the efficiency side, we have also encountered and helped implement cold-aisle and hot-aisle containment: a kind of enclosure or dedicated space in which cold air is channeled as fully as possible into the server equipment in the racks.


This approach makes the most of the cooling delivered to the equipment. Even with split air conditioners, we often want optimal temperature performance, so this approach can still be applied.


Access Security

Access to the server room / data center has become an important concern, and so has access to the racks. Room access control is now widely available, but rack-level control is still rare.

The need for rack access control can be addressed by installing a dedicated device on each rack that acts as the rack door lock but can also be controlled centrally.


With monitoring, management and protection capabilities, such a product can protect your server room / data center and racks in an integrated way. It also supports integration with an EMS, making it easy to access temperature / humidity information down to each individual rack.

With an integrated EMS, all three aspects above can be monitored, thresholds can be defined, and alerts can be sent to us by email or SMS. SNMP support also makes it easy to integrate the EMS with an existing NMS.
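As a simple sketch of the threshold-and-alert idea, the snippet below checks a temperature reading against the limits discussed above and sends an email when a threshold is crossed. The addresses and SMTP relay are placeholders; in practice, the alerting built into the EMS or NMS would do this for you.

```python
import smtplib
from email.message import EmailMessage

# Thresholds based on the figures discussed above (adjust to your own policy)
TEMP_WARNING_C = 26.0    # upper limit for a small server room
TEMP_CRITICAL_C = 30.0   # already very worrying for equipment

def check_temperature(room: str, temp_c: float) -> None:
    """Send an alert email when a reading crosses the warning or critical threshold."""
    if temp_c < TEMP_WARNING_C:
        return
    level = "CRITICAL" if temp_c >= TEMP_CRITICAL_C else "WARNING"
    msg = EmailMessage()
    msg["Subject"] = f"[{level}] {room} temperature {temp_c:.1f} C"
    msg["From"] = "ems@example.local"    # placeholder sender
    msg["To"] = "noc@example.local"      # placeholder recipient
    msg.set_content(f"{room} reported {temp_c:.1f} C, warning threshold {TEMP_WARNING_C} C.")
    with smtplib.SMTP("mail.example.local") as smtp:   # placeholder SMTP relay
        smtp.send_message(msg)

# Example reading, e.g. polled from an EMS sensor via SNMP
check_temperature("Server Room A", 27.4)
```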

We have integrated AKCP EMS devices with a wide range of NMS platforms. The NMS most often used for data center monitoring are OpManager and PRTG.








Monday, June 26, 2017

How to benefit from the same network structure used by social media giants

If you’ve heard the buzz in the networking world lately, or if you’ve been paying attention to the back-to-back launches by Cumulus Networks as of late, then you’ve probably heard the term, “web-scale networking.”
But what does that actually mean?
The term web-scale networking is inspired by data center giants like Facebook and Google. The industry looked at data centers like theirs and asked, “what are they doing that we can mimic at a smaller scale?” By analyzing these organizations and the benefits they receive from their tactics, the term “web-scale” was born. Essentially, web-scale refers to the hyperscale website companies that have built private, efficient and scalable cloud environments.
Web-scale networking: a definition
Web-scale networking is simply a modern architectural approach to infrastructure. The differentiating components are taken from the key requirements that large data center operators use to build smart networks. Businesses can design cost-effective, agile networks for the modern era by adhering to these three constructs:
  • Open and modular – Edgecore
  • Intelligence in software – Cumulus Linux
  • Scalable and efficient – Edgecore + Cumulus Linux
These three constructs essentially comprise web-scale networking.
While compute has advanced by leaps and bounds with the convergence to private, public and hybrid clouds, networking has notoriously lagged behind. An open networking philosophy brings traditional networking up to par with the advancements of a web-based approach. Open, web-scale networking provides automation, accuracy and cost-savings to the data center.
Tactical benefits of open, web-scale networking with Edgecore
Automation is more accessible, and more powerful, than ever
Open networking allows organizations to choose the ideal hardware and software for their budget and needs. This means you can use existing automation software or integrate something new. With an open environment, it’s easy to standardize protocols, identify issues and create a unified stack that communicates efficiently.
No more vendor lock-in
Have you ever wanted to upgrade certain components of your network to another vendor’s product, but refused to go the “rip and replace” route because of both cost and risk? Most organizations are faced with the problem of “vendor lock-in” because it is too costly to switch systems and vendors. Web-scale networking allows you to choose the network switches, cables, optics, applications and more — based on your needs and your budget.
Operational efficiency
By unifying the stack, customizing hardware and automating software, organizations can completely streamline their processes. Engineers can identify and fix issues more quickly. Operators can deploy faster. And organizations can multiply the number of Edgecore switches managed per operator. All of these benefits result in better DevOps, greater efficiency and lower TCO.
source: http://www.miro.co.za/benefit-network-structure-used-social-media-giants/

Thursday, June 22, 2017

Happy Idul Fitri 1438H

Friday, June 9, 2017

Consider the Key Functions of DCIM

Key DCIM Functionality Considerations

This is the second entry in a Data Center Frontier series that explores the ins and outs of data center infrastructure management, and how to tell whether your company should adopt a DCIM system. This series, compiled in a complete Guide, also covers implementation and training, and moving beyond the physical aspects of a facility.
The following are key DCIM functionality considerations to take into account when choosing a system for your business or customers.

Energy Efficiency Monitoring

The ecosystem of the data center has many potential points to monitor. While it would be ideal to monitor everything, cost and value become part of the decision process. The focal point of what will be monitored is typically related to which stakeholders or departments are driving the project. It also depends on the age of the data center and how much or how little monitoring is already in place. From the facility side, the basic PUE information can be derived by instrumenting only two points in the power chain: the utility input energy and the output energy of the UPS (IT energy).
PUE was originally based on power (kW) draw, which is an instantaneous measurement. In 2011, PUE was updated to be calculated from annualized energy (kWh, measured or averaged over 12 months of operation). This reflects a more accurate picture of yearly performance than spot power measurements, which vary widely depending on when they are taken. As can be seen in the figure below, this requires energy metering at the utility input, as well as at the three possible points of IT energy measurement, beginning at the output of the UPS (PUE category 1).
[Figure: PUE points of measurement]
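As a quick illustration of the annualized calculation, PUE is simply total facility energy divided by IT energy, both in kWh over the same 12-month window. The monthly figures below are invented purely to show the arithmetic:

```python
# Annualized PUE = total facility energy / IT equipment energy (kWh over 12 months).
# The monthly figures below are invented purely to illustrate the arithmetic.
facility_kwh_per_month = [410_000, 395_000, 400_000, 380_000, 390_000, 405_000,
                          420_000, 425_000, 410_000, 400_000, 395_000, 405_000]
it_kwh_per_month =       [250_000, 248_000, 252_000, 247_000, 249_000, 251_000,
                          255_000, 256_000, 252_000, 250_000, 249_000, 251_000]

annual_facility_kwh = sum(facility_kwh_per_month)
annual_it_kwh = sum(it_kwh_per_month)      # measured at the UPS output for PUE category 1

pue = annual_facility_kwh / annual_it_kwh
print(f"Annualized PUE: {pue:.2f}")        # about 1.61 for these invented numbers
```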
From an IT perspective, there are also many advantages to monitoring power distribution downstream from the UPS, such as at the floor-level PDUs (PUE2) or at the rack (PUE3), including identifying cascading failures and PDU overloads. However, this is also where the age of the data center infrastructure becomes a factor.
In newer data centers, floor-level PDUs typically have branch circuit monitoring, which can be remotely polled by DCIM (or a BMS). Many older data centers do not have this functionality in the floor-level PDU. This leaves two options: retrofit branch circuit monitoring, or utilize so-called “intelligent” rack-level PDUs (power strips). The first option, retrofitting the PDUs, falls under the jurisdiction of the facilities department and can be difficult and disruptive; in some cases it may require a power shutdown.
The second option has long been the more popular one, typically driven and deployed by the IT group. In many cases these rack PDUs have Ethernet connectivity and can be easily polled by DCIM systems. In other cases, there may only be lower-cost, locally metered rack PDUs or simple “power strips”, neither of which has any remote connectivity. This leaves the option of replacing these with intelligent PDUs or installing “in-line” power monitoring with Ethernet connectivity, which can be polled by the DCIM platform.

Power Distribution Monitoring

Hidden Exposure of Cascade Failure

Many of the more basic power monitoring functions may already be done to one degree or another by some BMS systems. However, in many older data centers, there is no branch circuit monitoring installed, resulting in the need for periodic manual branch circuit surveys (by electricians with clamp-on ammeters and a clipboard). This is typically done to try to avoid circuit breaker overloads or perhaps as a rudimentary form of power capacity planning to see if or how much IT equipment could be added to a cabinet. Even in a relatively well-organized and managed data center, this information may not be readily available, communicated or cohesively correlated within and between the facilities, operations and IT departments. The lack of real-time power and energy monitoring at the rack can delay or disrupt a technical refresh or, worse yet, expose the rack to failure if the branch circuit protection trips when more or new IT equipment is installed.
This hidden exposure can be seen in the figure below, which depicts a potential scenario wherein the typical manual “clamp-on” ammeter is used to measure (A-B) redundant branch circuits to a rack at one point in time, while the plot lines show continuous current measurements over time for the A and B circuits, as well as the sum (A+B) of both.
In the figure below, at the time the manual readings were taken, it would seem as if the total current drawn across the A and B circuits was only 14 amps (7A+7A). However, the continuous current plot over time shows that at multiple times during the day, the sum of the A+B circuits actually exceeded the 16 amp (80%) threshold, which is the maximum current that should be safely drawn from a 20 amp branch circuit per the US National Electrical Code (“NEC”). Under normal circumstances, when both circuits are active, there is no problem in this example. However, should one of the branch circuits be lost (either accidentally or during a maintenance procedure), the remaining active circuit could trip during the peak current excursions, since it would then carry the entire load (slightly above 18 amps). This represents a lurking exposure to cascade failure.
Manual survey vs. continuous current monitoring
As can be seen by the example above, these peaks would be very difficult to discover, even with regular manual survey snapshot readings. This exposure to cascade failure of redundant power paths can only be revealed by continuous monitoring and recording of current on each branch circuit (A-B) and then setting threshold alarms when the sum exceeds the prescribed limits. DCIM can help monitor and manage these thresholds and alerts, minimizing these potential cascading failures.
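Once the per-circuit readings are logged continuously, the check described above reduces to simple arithmetic: sum the A and B feeds and alarm whenever the total exceeds 80% of the breaker rating (16 amps on a 20 amp branch circuit). A minimal sketch with invented sample readings:

```python
# Alarm when the combined A+B draw exceeds 80% of the branch breaker rating,
# i.e. the load the surviving circuit would carry alone if one feed were lost.
BREAKER_RATING_A = 20.0
SAFE_LIMIT_A = 0.8 * BREAKER_RATING_A   # 16 A continuous limit per NEC derating

# Invented per-minute readings (amps) from the A and B feeds of one rack
feed_a = [7.0, 7.2, 9.5, 10.1, 7.4, 6.9]
feed_b = [7.1, 7.0, 8.8,  8.3, 7.2, 7.0]

for minute, (a, b) in enumerate(zip(feed_a, feed_b)):
    total = a + b
    if total > SAFE_LIMIT_A:
        print(f"minute {minute}: A+B = {total:.1f} A exceeds {SAFE_LIMIT_A:.0f} A "
              f"- the surviving circuit would trip if one feed were lost")
```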

High-Power Rack PDU Overloads

While the figure above illustrates the hidden exposure of manual branch current surveys, there is also another concealed risk contained in high-power rack PDUs (which contain multiple circuit breakers to prevent grouped outlet banks from overloading).
While many data centers may take regular weekly or monthly readings of rack power draw, intermittent short term peak current draws and potential exposure to branch circuit overloads will not be detected. Without knowing how much current is being drawn in real-time and trended continuously, just adding a single server would be like playing “Russian Roulette,” since it could result in a tripped circuit breaker.
The ability of DCIM to provide continuous real-time power monitoring and to detect and display these peak power conditions can help mitigate the risk of an outage.

source: https://datacenterfrontier.com/key-dcim-funtionality-considerations/

5 Things to Consider in Airflow Containment Design

5 Key Considerations for Airflow Containment Design

By implementing an effective airflow containment strategy, it is possible to see optimized cooling system efficiency, improved Power Usage Effectiveness (PUE) and additional equipment capacity, all without having to expand a facility’s footprint. (Photo: Simplex)
In this week’s Voices of the Industry, Ward Patton, Critical Environment Specialist at Simplex Isolation Systems, unpacks five key considerations for airflow containment design. 
Data center containment systems can provide great benefits over traditional open data center designs. By implementing an effective airflow containment strategy, it is possible to see optimized cooling system efficiency, improved Power Usage Effectiveness (PUE) and additional equipment capacity, all without having to expand a facility’s footprint.
Ward Patton, Critical Environment Specialist at Simplex
Like any successful system, airflow containment design must consider many different factors to ensure a solution that achieves the required cooling while remaining flexible for expansion or changes and integrating well with the current infrastructure. Here are five key considerations to keep in mind when planning a data center containment project.
  1. Hot Aisle or Cold Aisle Containment
Hot air and cold air containment are the two high-level methods of an airflow containment strategy, and there is an ongoing discussion in the industry about whether it makes more sense to isolate the hot aisle or the cold aisle. Different data center experts advocate different theories, but realistically, the approach should be dictated by the existing infrastructure.
For example, what is the current air distribution type in the facility? This will play a critical role in deciding which approach is the best fit. Data centers with targeted return and flooded supply air distribution would benefit more from hot aisle containment, whereas data centers with targeted supply and flood return air distribution would see better results with cold aisle containment.
Additional factors that can dictate whether to isolate the hot aisle or the cold aisle include the depth of the raised floor plenum, the presence of overhead cabling, varied ceiling heights and support column locations. These are only some of the infrastructure constraints that need to be addressed and every case is site specific. An assessment of the existing conditions of the facility is essential to choosing the right containment solution for any data center.
  2. IT Equipment Arrangement
Once the infrastructure has been evaluated, the next step is to review the current IT equipment arrangement. While most data centers are depicted as having rows and rows of server racks of the same brand, shape and size, this is rarely the case. It is not uncommon to see offset rows consisting of multiple server rack brands in any number of shapes and sizes. This is especially true for legacy data centers that have undergone expansion, or are the result of the consolidation of multiple facilities.
It is also recommended to go one step further and review any ergonomic challenges. Factors such as clearance or personnel traffic should be reviewed and planned for accordingly. End-of-aisle doors, for instance, should be configured to best fit the traffic patterns within the data center, making the space easier to work in when moving racks and other items in and out.
When evaluating an airflow containment system, it is critical to assess the amount of customization that will be needed for the containment system to perform optimally, especially if there is a lack of uniformity with the IT equipment.  While it may seem daunting to find a containment system that can accommodate all levels of variation, most innovative manufacturers can offer a customized and flexible solution to fit any new or retrofit application.
  3. Fire Detection and Suppression
 Fire suppression in the data center is complicated and involved. It is a good policy to involve the local Fire Marshal as soon as possible when planning an airflow containment system. The Fire Marshal enforces the local codes and standards in the jurisdiction and can play a critical role in providing the insight needed to achieve compliance from the very beginning.
Fire suppression in the data center is complicated and involved. #coolingCLICK TO TWEET
Jurisdictions will require compliance with the National Fire Protection Association (NFPA) 75 and 76, which ensure that fire suppression systems are in place and meet specific testing requirements. From a containment standpoint, these standards require data center facilities that use an airflow containment system to either have a fire suppression system that covers all areas of contained aisles or a containment system that integrates with the fire detection system. The former option can be a costly and daunting task, so many data center managers will opt for a containment system that works with the current fire suppression infrastructure.
There are many reputable containment systems available today that are designed specifically for use under a fire suppression system, though potential implications associated with the containment approach will still need to be considered. Vertical containment systems that incorporate softwall curtains, for example, would need to account for the required clearance space below the sprinkler level if the facility is equipped with a sprinkler-based fire suppression system. This would ensure full dispersal of water in the event of a fire. Such systems would also need to consider what softwall material is being implemented. It’s not uncommon to receive demands from the Fire Marshal to source curtain materials that meet the stringent ASTM E-84 Class 1 rating for flame and smoke generation, so researching and selecting an appropriate softwall material is important.
Systems that feature ceiling partitions rather than softwall curtains will need to address how the facility’s fire suppression system will be accommodated. These structures generally include ceiling panels that are retractable, are equipped with a soft drop system, or are designed to shrink and fall away when exposed to temperatures that reach 15 to 20 degrees lower than the temperature at which the fire sprinklers would activate.
Whatever the containment approach, there are additional factors to consider when evaluating which system is the best fit.
  • A fail-safe system is key. Look for a containment system equipped for a facility power outage or Emergency Power Off (EPO) event. While many systems may tie into a dedicated emergency back-up supply, relying on a supplemental power system can’t be deemed fail-safe. Instead, consider a system that is inherently fail-safe, such as a gravity-reliant, electromagnetic droplink that would drop away when the power source is disconnected.  That way the fire suppression system would still operate successfully in the event of a power outage.
  • Consider equipment and personnel safety. Look for a system that won’t cause additional damage to equipment or prove harmful to personnel if deployed. Curtain systems, for instance, should be equipped with a lanyard drop system, and ceiling structures should retract or have a soft-drop feature to prevent damage when they are deployed.
  • Testability. Look for a system that can be tested and reset in the event of deployment. This is a requirement for NFPA 75 compliance.
When it comes to prevention against a fire within the data center, any costs related to risk mitigation will be justified when compared to the cost of damage that would be incurred in the event of an actual fire.
  4. Electrical Utility Incentives
There is a good chance that your electrical utility provider has a system of rebates and incentives for companies that take proactive measures to decrease power usage in their data centers. The utility might have certain stipulations in place such as approved contractors, approved equipment and other requirements. Take these considerations into account as you begin the design process, rather than after the airflow containment project is underway. There are often specific time windows to take advantage of these incentives. 
  5. Future Growth
 Understand that change is unavoidable and design the airflow containment based on that reality. Eventually changes in the data center will be required to accommodate growth or reconfigure the layout. When evaluating an airflow containment system, it makes sense to look for components and structures that are modular in design. Modular mounting hardware for curtains and modular end-of-aisle doors, for example, can be relocated or expanded upon if needed. Ultimately, containment systems that are modular in design will have a lower cost of ownership.
Overall, data centers are complicated and there are many factors that come into play when designing an effective airflow containment solution. When making plans for airflow containment, it is recommended to work with reputable manufacturers that can analyze the space and align airflow and containment objectives with the data center’s site-specific infrastructure and equipment requirements to secure the greatest efficiencies and enable modular, scalable and affordable growth.
Ward Patton is the Critical Environment Specialist at Simplex Isolation Systems. Simplex Isolation Systems designs and manufactures custom data center containment systems that are modular, expandable and high-performing. Simplex’s Containment Resource Guide can serve as an essential tool for planning a hot or cold aisle containment system.
source: https://datacenterfrontier.com/5-key-considerations-airflow-containment-design/