The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 published "Best Practices for Datacom Facility Energy Efficiency" in 2008, which covers all aspects of technology spaces, data centers, and electronic equipment. ASHRAE has since published several books discussing data center energy reduction.
Aside from the challenge of increasing energy efficiency, data centers must deal with several physical and environmental threats:
– Water leaks
– CRAC or air handler failure
ASHRAE also provides guidelines regarding data center practices that can improve facility efficiency and mitigate external threats.
ASHRAE Technical Committee 9.9 (2016). Data Center Power Equipment Thermal Guidelines and Best Practices. Retrieved 1 December 2020 from https://tc0909.ashraetcs.org/documents/ASHRAE_TC0909_Power_White_Paper_22_June_2016_REVISED.pdf
The first decade of the 21st century saw a sudden surge in computer usage and, with it, an unexpected demand for data centers. ASHRAE produced five editions of ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings, but these publications contained minimal data center-specific language, and emerging state-of-the-art data centers found it a challenge to meet energy compliance requirements. The standard had several limitations for data centers: it was prescriptive-based rather than performance-based, largely focused on what a company needs to do instead of the criteria it needs to meet, and applied provisions written for other building types to data centers as well. In some instances, it did not reflect the operations or design intended by the owner and design engineer, for example in the data center exceptions on outdoor air economizers and the defined HVAC system type. It also introduced design requirements that are not always practical for a data center, such as requiring an economizer in the overall facility design.
It also contained little information about data centers larger than a telecom closet, which was a significant gap given the continuous expansion of data centers. An industry collaboration was formed with end users who had implemented internal energy efficiency programs based on the established ASHRAE guidelines. They contributed essential operational data that played a major role in the development of the ASHRAE 90.4 standard.
ASHRAE 90.1 became widely popular as a basis for ensuring energy compliance in commercial buildings and was integrated into many jurisdictions' building codes. For the ASHRAE 90.1-2007 edition, ASHRAE solicited proposals from the public on how to further improve the 90.1 guidelines; the changes would then be reflected in the 2010 edition. It received many proposals, including a response from ASHRAE Technical Committee 9.9. The TC 9.9 proposal advised changes that would enhance the technical requirements and recommended clear, consistent data center efficiency language, addressing many modeling and design issues present in previous versions. The recommendations proposed by the TC 9.9 committee were published in the 2013 edition of ASHRAE 90.1.
Updates in ASHRAE 90.4
Due to limitations in the previous guidelines, the technical committee formulated a new standard that would be more relevant to the data center industry. One priority was to base calculations on the relative components of the design rather than on the Power Usage Effectiveness (PUE) metric. The intent was a standard that does not conflict with innovations in the data center industry while providing criteria that lead to further energy savings.
The ASHRAE 90.4 Energy Standard for Data Centers applies to data centers with an IT equipment power density greater than 20 W/ft² of floor area and IT equipment loads greater than 10 kW. It also specifies requirements for electrical and mechanical systems in new data centers and in alterations that require new systems.
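The two applicability thresholds above can be expressed as a simple check. This is an illustrative sketch only; the function name and structure are not taken from the standard:

```python
# Hypothetical sketch: checking whether a facility falls within the stated
# ASHRAE 90.4 applicability thresholds (power density > 20 W/ft2 and
# IT load > 10 kW, per the text above). Illustrative only.

def within_90_4_scope(power_density_w_per_ft2: float, it_load_kw: float) -> bool:
    """Return True if the facility meets both applicability criteria."""
    return power_density_w_per_ft2 > 20 and it_load_kw > 10

print(within_90_4_scope(35.0, 250.0))  # a typical enterprise data hall
print(within_90_4_scope(5.0, 4.0))     # a small telecom closet
```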
The chairs of the committees responsible for ASHRAE 90.1 and 90.4, together with the ASHRAE Standards Committee, convened to address any conflicts between the standards. The goal for ASHRAE 90.4 was to define data center energy-efficiency requirements while relying on ASHRAE 90.1 for the energy compliance of "non data center" components.
ASHRAE 90.4 refers to ASHRAE 90.1 for the following:
– Building envelope
– Service water heating
– Mechanical cooling equipment efficiencies
– Other equipment criteria
The 2016 ASHRAE guidelines became more performance-based by declaring minimum energy efficiency requirements for data centers. They enumerate the criteria data center designers need to meet and explain how to perform compliant energy-efficiency calculations. Operation, maintenance, and design recommendations are also included.
Under the new standard, the calculated values must be equal to or less than the values indicated for the specific climate zone. The assumption is that if the least-performing elements of each system meet the minimum efficiency or maximum loss indicated, the facility as a whole will be more energy-efficient.
PUE vs. ASHRAE 90.4
The primary metric used by the data center industry is PUE, which became an ISO standard in 2016. ASHRAE 90.1 is often incorporated into state and local building codes in the US, but because the 90.1 guidelines are overly prescriptive for data centers, the 90.4 Energy Standard for Data Centers was introduced to address this issue.
PUE is not a design metric; it was intended to baseline and optimize operating energy efficiency. Still, it has been referred to in building designs before construction, and it tends to be used as a reference in colocation contractual SLA performance or energy cost schedules. ASHRAE 90.4, by contrast, is mainly used as a design standard applied when submitting plans for approval before constructing a data center facility. It also covers facility capacity upgrades of 10% or greater.
In terms of data measurement and energy calculation, 90.4-2016 is more complicated than the PUE metric. One downside of the PUE metric is that it contains no geographic adjustment factor. Since cooling system energy often accounts for a significant percentage of a facility's energy usage, identically constructed data centers in different geographic locations would report different PUE values.
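The geographic sensitivity is easy to see from the PUE formula itself: PUE is total facility energy divided by IT equipment energy, so a hotter climate inflates the cooling term in the numerator while the IT denominator stays the same. A sketch with made-up annual energy figures:

```python
# Illustrative PUE comparison for two identically designed facilities.
# All energy figures (kWh/year) are invented for demonstration only.

def pue(it_energy_kwh: float, cooling_kwh: float, power_losses_kwh: float,
        other_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    total = it_energy_kwh + cooling_kwh + power_losses_kwh + other_kwh
    return total / it_energy_kwh

# Same design, different climates: only the cooling energy differs.
cool_climate = pue(10_000_000, 2_000_000, 800_000, 200_000)  # 1.30
hot_climate = pue(10_000_000, 4_500_000, 800_000, 200_000)   # 1.55
print(round(cool_climate, 2), round(hot_climate, 2))
```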
The 90.4 standard separates electrical power-chain losses from the cooling-system energy efficiency calculation. It indicates the limits and total maximum electrical losses across the power chain, from the utility handoff, through the distribution system, to the cabinet power strips that deliver energy to the IT equipment.
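The power-chain limit can be thought of as a budget on cumulative losses from the utility handoff down to the cabinet power strips. The sketch below tallies losses multiplicatively across segments; the segment names and loss fractions are illustrative assumptions, not values from the standard:

```python
# Hypothetical power-chain loss tally from utility handoff to cabinet
# power strips. Segment loss fractions are assumptions for illustration,
# not values taken from ASHRAE 90.4.

SEGMENT_LOSSES = {
    "transformer": 0.015,   # 1.5% loss
    "ups": 0.040,           # 4.0% loss
    "distribution": 0.010,  # 1.0% loss
    "rack_pdu": 0.005,      # 0.5% loss
}

def delivered_fraction(losses: dict) -> float:
    """Fraction of incoming power that actually reaches the IT equipment."""
    fraction = 1.0
    for loss in losses.values():
        fraction *= (1.0 - loss)
    return fraction

total_loss = 1.0 - delivered_fraction(SEGMENT_LOSSES)
print(f"Total electrical loss across the chain: {total_loss:.1%}")
```

Note that losses compound: four small per-segment losses add up to roughly 6.9% of the incoming power never reaching the IT equipment in this example.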
The mechanical load component (MLC), the section covering the cooling-system calculation, treats location as a factor in meeting cooling-system energy compliance. Each US climate zone, as defined in ASHRAE Standard 169, is assigned its own maximum annualized MLC compliance factor.
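Under this scheme, an annualized MLC (annual cooling-system energy per unit of annual IT energy) is compared against the factor assigned to the facility's climate zone. A minimal sketch; the climate-zone factors below are placeholders, not the actual tabulated values in ASHRAE 90.4:

```python
# Hypothetical MLC compliance check. The per-zone factors are illustrative
# placeholders, NOT the actual values tabulated in ASHRAE 90.4.

MAX_ANNUALIZED_MLC = {
    "1A": 0.45,  # hot/humid zone: more cooling energy allowed
    "4A": 0.35,
    "6B": 0.25,  # cold/dry zone: stricter factor
}

def annualized_mlc(cooling_kwh: float, it_kwh: float) -> float:
    """Annual mechanical (cooling-system) energy per unit of IT energy."""
    return cooling_kwh / it_kwh

def complies(zone: str, cooling_kwh: float, it_kwh: float) -> bool:
    """True if the facility's annualized MLC is within the zone's factor."""
    return annualized_mlc(cooling_kwh, it_kwh) <= MAX_ANNUALIZED_MLC[zone]

# The same design (MLC = 0.30) passes in one zone and fails in another.
print(complies("4A", cooling_kwh=3_000_000, it_kwh=10_000_000))  # 0.30 <= 0.35
print(complies("6B", cooling_kwh=3_000_000, it_kwh=10_000_000))  # 0.30 > 0.25
```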
Best practices for data center temperature and humidity monitoring
Data centers need to be operated within a safe temperature zone to remain functional. An overheated server can cause downtime leading to thousands of dollars in financial loss, which is why many managers opt to undercool their data centers. However, for every degree a data center's setpoint moves closer to the edge of the specified temperature safe zone, it saves around 4% in energy costs.
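Those per-degree savings compound over several degrees of undercooling. A rough estimate, taking the ~4% per degree figure from the text at face value and treating it as compounding (actual savings depend heavily on the facility and climate):

```python
# Rough savings estimate from raising the cooling setpoint, assuming
# ~4% savings per degree (figure from the text; real-world savings vary).

def estimated_savings(annual_cooling_cost: float, degrees_raised: int,
                      savings_per_degree: float = 0.04) -> float:
    """Dollars saved per year if the setpoint is raised by N degrees."""
    remaining = annual_cooling_cost * (1 - savings_per_degree) ** degrees_raised
    return annual_cooling_cost - remaining

# An undercooled facility spending $100,000/year on cooling that raises
# its setpoint by 3 degrees would save roughly $11,500/year.
print(round(estimated_savings(100_000, 3), 2))
```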
ASHRAE TC 9.9 should be consulted for servers' optimal operating temperatures instead of using PUE levels as the sole metric. ASHRAE advises installing three sensors per rack, placed at the top, middle, and bottom to more accurately monitor surrounding temperature levels. A sensor at the back of a cabinet can also provide significant data in a hot or cold aisle.
To further increase efficiency and prevent downtime, rack cabinet exhaust metrics, internal temperatures, and server temperatures should also be tracked. The recorded readings will help a response engineer address issues in real time before they lead to a significant, costly outage.
Humidity levels should also be monitored per ASHRAE guidelines. High humidity can increase condensation, producing electrical shorts and equipment failure; when humidity levels are too low, data centers may encounter electrostatic discharge (ESD). To handle these issues, managers should prevent uncontrolled temperature increases that push humidity out of the specified range. Data center and server room humidity should be kept between 40% and 50% relative humidity (rH). This range helps prevent ESD, reduces the risk of corrosion caused by excessive condensation, and prolongs the life expectancy of IT equipment.
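The monitoring guidance above can be combined into a simple alerting check. A minimal sketch, assuming an inlet temperature band of 18-27 °C (ASHRAE's recommended envelope for most IT equipment classes) and the 40-50% rH band given in the text; sensor positions follow the three-per-rack advice:

```python
# Minimal rack sensor check. Assumed thresholds: 18-27 degC inlet
# temperature (ASHRAE recommended envelope) and 40-50% rH (from the text).
# Sensor layout follows the three-per-rack advice: top, middle, bottom.

TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_RH = (40.0, 50.0)

def check_rack(temps_c: dict, humidity_rh: float) -> list:
    """Return a list of alert strings for any out-of-range readings."""
    alerts = []
    for position, temp in temps_c.items():
        if not TEMP_RANGE_C[0] <= temp <= TEMP_RANGE_C[1]:
            alerts.append(f"{position} inlet temp out of range: {temp} C")
    if not HUMIDITY_RANGE_RH[0] <= humidity_rh <= HUMIDITY_RANGE_RH[1]:
        alerts.append(f"humidity out of range: {humidity_rh}% rH")
    return alerts

# Top-of-rack sensor hot (recirculation?) and humidity low (ESD risk):
readings = {"top": 29.5, "middle": 24.0, "bottom": 22.0}
for alert in check_rack(readings, humidity_rh=35.0):
    print(alert)
```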
We ensure your Data Center and Server Room perform optimally by:
- Measuring the power resources in use (Data Center Infrastructure Management)
- Ensuring proper, documented cabling (Cabling Restructure & Documentation)
- Ensuring proper use of cooling
- Installing and maintaining precision air conditioning (AC PRESISI)
- Installing and maintaining FM 200 / FIREPRO fire extinguishers
- Using the AKCP Environment Monitoring System (EMS)
- Installing split AC timers
We assist with the installation of:
- Singlemode / multimode fiber optic cable for factory and office areas
- UTP / STP / FTP cable
- Sensor cable
- Electrical cable
- Electrical panels
- Access door installation
- CCTV installation
Please contact our team to get the best offer for your needs.
CONTACT US: email@example.com