Tuesday, January 31, 2017

Come build a data center in your own home, only at Tamansari Cyber Bogor

Who says you can't build a data center at home?
I have proven it: in the Tamansari CyberPark Bogor housing complex, we built a mini data center, right in the master bedroom.
The master bedroom was converted into a raised-floor area so that all cabling could be neatly routed underneath. With high-speed fiber-optic access from one of the national bandwidth hubs, we have no worries about Internet access speed. The room stays cool throughout the day, but we installed two 2-PK air-conditioning units to keep conditions stable.
This is a perfect fit for online businesses, startups, and any entrepreneur who relies on the Internet; Tamansari Cyber Bogor can accommodate them all. Don't hesitate to take one of the remaining units. Get in touch soon and turn your house into a CyberHome.
Interested? Contact me and I'll take you on a visit.

Sunday, January 29, 2017

Is Hot Aisle Cold Aisle Dead?

For years we have been zeroing in on the ultimate in cooling efficiency:  Cool air ducted directly into equipment and hot air ducted directly out.  The ducts carry the respective airstreams to and from HVAC equipment without mixing or leaking into the ambient room space. It has been a long journey from the early days of datacom cooling where we simply flooded the room with an abundance of cool air. Of course the assumption was that the cold air would migrate to where it was needed and the hot exhaust air would also find its way back. And just to be sure, we added plenty of margin—and dollars—to the system.
Fast forward to today. Current equipment designs demand a steady flow of air to each device to maintain a suitable operating temperature. And we know that efficiency is maximized when we don’t mix the hot exhaust air with the cold supply air. Is it possible to accomplish both?
Various solution providers have created just such a cooling utopia. By expanding the cooling system to use the ceiling plenum as the hot air return plenum and utilizing racks with exhaust chimneys, the hot air flows directly back to the CRAC units. The entire room essentially becomes a cold aisle. Taken one step further, an enclosed rack with ducted cold air supply, along with the chimney exhaust, directs the cooling air only through the equipment and the rack. Now the entire room can be virtually any temperature—warm or cold—and the equipment won’t mind.
This cooling utopia is neither free nor easy to achieve in an existing datacom environment. So what's the solution when new racks aren't an option, at least not today? What if the ceiling plenum doesn't exist or isn't practically available to serve as the hot air return plenum?
Practical solutions are available to approximate the ideal layout outlined above. Such solutions essentially isolate the hot aisles or cold aisles from the remainder of the room. The isolation prevents, or at least limits, the mixing of supply and exhaust airstreams. The result is better control of each with the potential to raise the supply air temperature and reduce cooling expense.
The most economical solution is a curtain system which hangs from the ceiling along the rack rows. Various options with this system make it an excellent interim solution while planning and preparing for the ultimate cooling solution in the long term.

Data Center Curtains

source: http://www.dataclean.com/asia/cold-aisle-containment.html

What is Power Usage Effectiveness - PUE?

Power Usage Effectiveness - PUE

The boss called, "What's our PUE?"

Power Usage Effectiveness Example
Do you know your PUE? 
PUE stands for Power Usage Effectiveness and it is rapidly becoming the number to know. In the past, data center managers were simply asked to provide enough space, power and cooling to support the IT equipment. Now, the same managers are being asked to do it efficiently. PUE can be a helpful benchmark.
Introduced by the Green Grid, PUE is a measure of efficiency. It is defined as the total facility power consumed divided by the total IT equipment power consumed. For a dedicated data center, the total facility power is measured at the utility meter. (For mixed-use facilities, like an office building that contains a data center, only the power serving the computer room should be measured, or at least estimated.) The facility power includes everything that supports the IT equipment: power distribution, cooling, lighting, etc.
The IT equipment power is the load associated with servers, storage, networking, work stations, etc., that are used in the data center. Of course, the total facility power will always be greater than the power required by the IT equipment. So the PUE calculation will always be greater than one. But, how much greater?
A PUE of 1.0 would be an ideal situation: no power distribution losses, no chillers, pumps or fans, etc. While that is not possible, industry giants like Microsoft and Google are planning for PUEs of 1.2 or even better. That would truly be best-in-class. Today, according to the Uptime Institute, a typical data center has an average PUE of 2.5. However, it is not uncommon to have a PUE of 3.0 or greater. This means only 1/3 of the power is consumed by the IT equipment. Or, put another way, 2/3 of the power (and the utility bill) is wasted!
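As a quick illustration of the arithmetic above, here is a minimal Python sketch of the PUE calculation. The kilowatt figures are hypothetical example values, not measurements from any real facility:

```python
# PUE = total facility power / IT equipment power (Green Grid definition)

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness ratio."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 750 kW at the utility meter,
# of which only 250 kW actually reaches the IT equipment.
value = pue(750, 250)
print(f"PUE = {value:.1f}")           # PUE = 3.0
print(f"IT share = {1 / value:.0%}")  # IT share = 33% -> 2/3 is overhead
```

A PUE of 3.0 means that for every watt delivered to a server, two more watts go to cooling, power distribution losses, lighting, and other overhead.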
Whether you are looking to go Green or lower your OpEx, knowing your PUE is the first step. From there a step-by-step plan can be put in place to move that number down. So, measure now—before the boss calls.
SOURCE: http://www.dataclean.com/asia/power-usage-effectiveness.html