
Posts

Showing posts from September, 2014

Data center facility management: time for a change

Jeff O'Brien, February 4, 2014. Asset management can have a significant impact on the operational performance and profitability of an asset-intensive organization, so it has become a hot topic in recent years. It is no longer about fixing assets when they break, but about employing cost-effective asset management strategies that maximize asset availability and reliability by minimizing the probability of system failures. Effectively executed asset management can extend the economic life of capital equipment, increase system reliability, and reduce maintenance-related costs. Reactive maintenance: keeping equipment working is paramount to any data center organization, so moving to an asset management philosophy can increase the life of capital assets such as HVAC systems, UPSs, generators, and buildings. This can be difficult to achieve if the IT way of doing maintenance has become entrenched in the culture of the data center organization. When IT hard…

7 tips for managing your data center maintenance

Jeff O’Brien is an industry specialist and blogger at Maintenance Assistant Inc., a provider of innovative web-based CMMS software, a tool for managing facilities and infrastructure equipment at data centers. Can you afford to have one of your critical power distribution assets fail because you missed your scheduled preventive maintenance? According to a recent study by the Ponemon Institute, one minute of data center downtime now costs $7,900 on average. With an average reported incident length of 90 minutes, we can calculate that the average incident now costs roughly $700,000. This large cost reflects the fact that modern data centers support critical websites and cloud software applications. Preventive maintenance ensures maximum reliability by taking precautionary and proactive steps to reduce unscheduled equipment downtime and other avoidable failures. The purpose of preventive maintenance is to institute scheduled inspections so t…
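The downtime arithmetic quoted in the excerpt is easy to check. The inputs ($7,900 per minute, 90-minute average incident) are the Ponemon figures cited above; the result is what the post rounds to "about $700,000":

```python
# Downtime cost estimate using the Ponemon figures quoted in the post.
COST_PER_MINUTE = 7_900      # USD per minute of data center downtime
AVG_INCIDENT_MINUTES = 90    # average reported incident length

avg_incident_cost = COST_PER_MINUTE * AVG_INCIDENT_MINUTES
print(f"Average incident cost: ${avg_incident_cost:,}")  # Average incident cost: $711,000
```

The exact product is $711,000, which the post rounds down to roughly $700,000.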

5 things about data centers every admin should know

5 Facts about Datacenters Every Administrator Should Know. September 12, 2014, by Natalie Lehrer. Being knowledgeable about datacenter basics will help you solidify your technical career. Datacenters large and small operate on a set of principles that can easily be followed to make the most of your time. Many business leaders consider any downtime unacceptable, so building out or leasing reliable datacenter infrastructure is the key to success in your technical endeavors. Downtime costs $5,000 per minute: sometimes more and sometimes less depending on the industry, but most businesses that need a datacenter need it precisely because business demands require them to be up and running at all times. The Ponemon Institute is credited with coming up with this number for the cost of datacenter downtime. Older servers may use less power: although data centers themselves are getting greener, the newer d…

Google data centers use equipment recycling

As part of our commitment to keeping our users' data safe, we destroy failed hard drives on site before shredding and recycling them. Extending our equipment lifecycle: from the moment we decide to purchase a piece of equipment to the moment we retire it, we reduce, reuse, and recycle as much as we can. We reduce by sourcing locally. Whenever possible, we use local vendors for heavier components like our server racks. Even if material is more expensive locally, we can recoup the extra cost by reducing shipping charges from farther locations where the material may be cheaper. By limiting the shipping distance, we reduce the environmental impact of transportation. We reuse existing machines. Before we buy new equipment and materials, we look for ways to reuse what we already have. As we upgrade to newer, higher-speed servers, we repurpose older machines either by moving them to services that don’t require as much processing power, or by removing and reusing the componen…

Google data centers use water cooling

Colorful pipes carry water in and out of the data center. The blue pipes supply cold water and the red pipes return the warm water back to be cooled. Cooling with water—not chillers The electricity that powers a data center ultimately turns into heat. Most data centers use chillers or air conditioning units to cool things down, requiring 30-70% overhead in energy usage. At Google data centers, we often use water as an energy-efficient way to cool instead. We trap hot air and cool our equipment with water. We've designed custom cooling systems for our server racks that we've named “ Hot Huts ” because they serve as temporary homes for the hot air that leaves our servers—sealing it away from the rest of the data center floor. Fans on top of each Hot Hut unit pull hot air from behind the servers through water-cooled coils. The chilled air leaving the Hot Hut returns to the ambient air in the data center, where our servers can draw the chilled air in, cooling them down

Google data centers use plastic curtains to maintain temperature

Plastic curtains in the network room prevent the hot air behind the server racks from mixing with the colder air in front of the server racks. Controlling the temperature of our equipment: to help our equipment function optimally while continuing to save energy, we manage the temperature and airflow in our data centers and machines in simple, cost-effective ways. We raise the thermostat to 80°F. One of the simplest ways to save energy in a data center is to raise the temperature. It’s a myth that data centers need to be kept chilly. According to expert recommendations and most IT equipment manufacturers' specifications, data center operators can safely raise their cold aisle to 80°F or higher. By doing so, we significantly reduce facility energy use. We plan by using thermal modeling. We use thermal modeling to locate “hot spots” and better understand airflow in the data center. In the design phase, we physically arrange our equipment to even out temperatures…

Google data centers use custom servers

Blue LEDs on this row of servers tell us everything is running smoothly. We use LEDs because they are energy efficient, long lasting, and bright. Building custom, highly efficient servers: Google's servers are high-performance computers that run all the time. They're the core of our data centers, and we've designed them to use as little energy as possible. We do this by minimizing power loss and by removing unnecessary parts. We also ensure our servers use little energy when they're waiting for a task, rather than hogging power when there’s less computing work to be done. We optimize the power path. A typical server wastes up to a third of the energy it uses before any of that energy reaches the parts that do the actual computing. Servers lose the most energy at the power supply, which converts the AC voltage coming from a standard outlet to a set of low DC voltages. They then lose more at the voltage regulator, which further converts the power supply'…
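The "up to a third wasted" claim follows from chaining the efficiencies of the conversion stages in the power path: each stage passes on only a fraction of its input. The 85% per-stage figures below are illustrative assumptions for a typical inefficient server, not Google's numbers:

```python
# Illustrative power-path model: wall power passes through a chain of
# conversion stages, each with a fractional efficiency. The per-stage
# efficiencies here are assumptions chosen for illustration.
stages = {
    "power supply (AC -> low DC voltages)": 0.85,
    "voltage regulator (DC -> chip voltages)": 0.85,
}

delivered = 1.0  # fraction of wall power that reaches the compute parts
for name, efficiency in stages.items():
    delivered *= efficiency

print(f"Power reaching compute components: {delivered:.0%}")  # 72%
print(f"Power lost before any computing:   {1 - delivered:.0%}")  # 28%
```

With two 85%-efficient stages, roughly 28% of the energy is gone before it reaches the parts doing the actual computing, in line with the "up to a third" figure in the text.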

Measuring the efficiency of Google's data centers

Measuring and improving our energy use: we're focused on reducing our energy use while serving the explosive growth of the Internet. Most data centers use almost as much non-computing or “overhead” energy (like cooling and power conversion) as they do to power their servers. At Google we’ve reduced this overhead to only 12%. That way, most of the energy we use powers the machines directly serving Google searches and products. We take detailed measurements to continually push toward doing more with less, serving more users while wasting less energy. We take the most comprehensive approach to measuring PUE. Our calculations include the performance of our entire fleet of data centers around the world, not just our newest and best facilities. We also continuously measure throughout the year, not just during cooler seasons. Additionally, we include all sources of overhead in our efficiency metric. We could report much lower numbers if we took the loosest interpretation of the Gr…
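The overhead figures in this excerpt map directly onto PUE (Power Usage Effectiveness), the ratio of total facility energy to IT equipment energy. A 12% overhead corresponds to a PUE of about 1.12, while a facility whose overhead nearly equals its IT load sits near 2.0. A minimal sketch:

```python
# PUE = total facility energy / IT equipment energy.
# Overhead expressed as a fraction of IT energy maps to PUE = 1 + overhead.
def pue(it_energy_kwh: float, overhead_energy_kwh: float) -> float:
    """Power Usage Effectiveness for one measurement interval."""
    return (it_energy_kwh + overhead_energy_kwh) / it_energy_kwh

# Google's quoted 12% overhead:
print(pue(100.0, 12.0))  # 1.12

# The "most data centers" case, where overhead nearly equals IT load:
print(pue(100.0, 90.0))  # 1.9
```

An ideal facility, with zero overhead, would have a PUE of exactly 1.0; lower is better.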

5 ways Google achieves data center efficiency

Measure PUE. You can't manage what you don’t measure, so be sure to track your data center's energy use. The industry uses a ratio called Power Usage Effectiveness (PUE) to measure and help reduce the energy used for non-computing functions like cooling and power distribution. To effectively use PUE, it's important to measure often. We sample at least once per second. It’s even more important to capture energy data over the entire year, since seasonal weather variations affect PUE. Manage airflow. Good airflow management is crucial to efficient data center operation. Minimize hot and cold air mixing by using well-designed containment. Then, eliminate hot spots and be sure to use blanking plates (or flat sheets of metal) for any empty slots in your rack. We've found that a little analysis can have big payoffs. For example, thermal modeling using computational fluid dynamics (CFD) can help you quickly characterize and optimize air flow for your fac…
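The advice to measure over the whole year matters because an annual PUE should be computed from accumulated energy (total facility kWh divided by total IT kWh), not by averaging instantaneous ratios from favorable seasons. A sketch with made-up illustrative seasonal samples:

```python
# Annual PUE from accumulated energy. Averaging instantaneous PUE ratios
# over-weights low-load intervals; dividing summed energies does not.
# The seasonal figures below are made-up illustrative samples.
samples = [
    # (total facility kWh, IT equipment kWh) per measurement interval
    (118.0, 100.0),  # winter interval: free cooling, low overhead
    (125.0, 100.0),  # summer interval: more cooling energy, higher overhead
]

total_energy = sum(total for total, _ in samples)
it_energy = sum(it for _, it in samples)
annual_pue = total_energy / it_energy
print(f"Annual PUE: {annual_pue:.3f}")  # Annual PUE: 1.215
```

Reporting only the winter interval would understate the annual figure, which is why the text stresses measuring through all seasons.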