Sunday, December 19, 2021

Thermal Mapping for Data Centers

Thermography refers to the use of a thermal imaging device to detect the heat radiated from an object, or hot spots in the data center. The technology has been around for some time, used mainly in the security, medical, and military fields.

Thermography detects heat and converts it into an image that you can see. The technology is now being employed in data centers, where it verifies that equipment is running at normal operating temperatures and detects abnormal heat patterns that may indicate airflow problems.

Thermal Mapping Technology

thermal mapping


Thermography is used in various applications and industries, including the following:

  • Building diagnostics
  • Chemical imaging
  • Earth science imaging
  • Electrical system monitoring
  • Fluid system monitoring
  • Law enforcement and security imaging
  • Machine condition monitoring
  • Medical imaging, which is often used to diagnose diseases

There are many types of infrared thermometers. The basic type has a lens that focuses infrared thermal radiation onto a detector, which converts the radiant energy into an electrical signal displayed as a color-coded temperature reading. Because these thermometers measure temperature from a distance, there is no need for close contact with the object being measured.
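To make the detector-to-temperature step concrete, here is a minimal sketch (an illustration only, not any vendor's algorithm) that inverts the Stefan-Boltzmann law to estimate a surface temperature from a measured radiant flux, assuming a known emissivity and ignoring atmospheric absorption:

```python
# Sketch: how a detector reading becomes a temperature. We invert the
# Stefan-Boltzmann law (radiated power grows as T^4), assuming a known
# emissivity and ignoring atmospheric absorption. Illustration only.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def temperature_from_radiance(radiant_flux_w_m2, emissivity=0.95):
    """Return the surface temperature in kelvin for a measured radiant flux."""
    return (radiant_flux_w_m2 / (emissivity * SIGMA)) ** 0.25

# A surface radiating ~460 W/m^2 at emissivity 0.95 sits near 31 deg C.
print(round(temperature_from_radiance(460.0) - 273.15, 1))
```

Real instruments add corrections for reflected ambient radiation and the optics, but the T^4 relationship is the core of the measurement.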

Below are the three most common types of thermometers used today.

  • Infrared Scanner

This thermometer measures the temperature of larger areas. It is often used in manufacturing plants with conveyors and web processes. 

  • Spot Infrared Thermometer

This handheld device, shaped like a radar gun, detects the temperature of a specific spot on a surface. A spot infrared thermometer is also known as a pyrometer. It is ideal for measuring heat on hard-to-reach equipment operating under extreme conditions. HVAC operators often use a pyrometer to check the temperature of a ventilation system; pyrometers are also used to monitor electrical rooms, water leaks, and panel boards, as well as in boiler operation and steam system monitoring.

  • Infrared Thermal Imaging Camera

This is used to measure temperature at many points across a wide area at once, producing 2D thermographic images; it is the most advanced type of infrared thermometer. Thermal imaging cameras are software-based systems whose real-time images can be fed into other software, improving accuracy and providing deeper insight. 

Thermal Imaging Uses Color Palettes To Show Varying Temperatures

  • Black And White Palette. Also known as grayscale, this palette distinguishes temperatures using many levels of gray: black for the coldest and white for the hottest.
  • Iron Palette. This is the usual color palette in thermal imaging. The coldest areas are shown in black. Slightly hotter areas are in blue and purple. Mid-range temperatures are red, orange, and yellow. White is for the hottest temperatures.
  • Rainbow Palette. This shows varying temperatures through distinct colors, using the full color spectrum to reveal subtle temperature differences.
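As a rough illustration of how any of these palettes work, the sketch below (a simplification, not camera firmware) maps each pixel's temperature onto a 0-255 gray level for the grayscale palette; the 18-45 deg C scene range is an assumed example:

```python
# Sketch: how a palette maps each pixel's temperature to a display value.
# This implements the grayscale palette (black = coldest, white = hottest);
# the 18-45 deg C scene range is an illustrative assumption.
def to_gray(temp_c, t_min, t_max):
    """Map a temperature onto a 0-255 gray level, clamped to the range."""
    frac = (temp_c - t_min) / (t_max - t_min)
    return round(255 * min(1.0, max(0.0, frac)))

# Cold-aisle air renders dark, hot exhaust renders bright:
print(to_gray(18, 18, 45), to_gray(45, 18, 45), to_gray(31.5, 18, 45))
```

The iron and rainbow palettes work the same way, except the normalized value indexes into a color lookup table instead of a gray ramp.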

Thermal Mapping In Data Centers

Thermal mapping in data center


Nowadays, data centers need more than just good IT operations; efficient software and hardware systems are also essential for running at optimal condition. Thermal imaging and thermal mapping can be used to track power consumption, temperature, cooling, and other IT operations, and infrared thermography (IRt) is well suited to monitoring electrical, cooling, and computing equipment.

In data centers, IRt is used to find, diagnose, and record issues in the facility, such as problems with air conditioning systems, loose electrical connections, and worn-out bearings. After these issues are fixed, IRt is used to recheck the equipment and confirm that everything is functioning properly.

Two Ways of Employing IRt

  • Cooling Systems and Heat-Generating Equipment – IRt captures the condition of the cooling system, and thermal mapping is used to gather and present the collected data, allowing IT management and HVAC professionals to examine heat-related problems in the system.
  • Electrical Power Distribution and Mechanical Systems – Monitoring electrical systems is crucial to safe and efficient data center operations, and electrical inspection is the most widely accepted application of IRt technology in data centers. Electrical switchgear, motors, and motor controls need thorough monitoring; HVAC systems, UPS, ATS, and PDU equipment also require in-depth checking. IRt helps meet all of these requirements.

Cooling systems supply cool air to equipment intakes and return the exhaust air to the CRAC units. Facilities are typically designed with computer software and then modeled using computational fluid dynamics (CFD), whose models predict the thermal performance of the data center. Despite advanced computer modeling, however, CFD is not always reliable: other aspects of the building can affect thermal conditions. For instance, under-floor cable and ducting installations can have a significant impact on airflow.

That is why thermal mapping is a useful tool in data centers: it can capture the full thermal conditions of a facility and all its equipment, offering comprehensive, real-world thermal imagery.

Temperature monitors allow you to track the thermal conditions of your facility; you just need to place them at strategic points to maximize their benefit. Thermal imaging brings even more advantages: a single thermal image can contain more than 75,000 temperature points, so it is best to combine thermal imaging systems with your sensors.

Thermal Mapping Designs In Data Centers

Designs In Data Centers


Each area of a data center must be strategically designed. For your thermal map, you need to gather all thermal and visual imagery of your facility. You can make a 2D or 3D model, depending on your needs and preferences.

  • 2D Thermal Map of Data Center Server Racks. The figure below shows an example of a thermal map of server racks. The map shows highly detailed, front-facing images of the servers, and the detail increases further when IR and visible images are combined. You can also see the thermal conditions when cabinet doors are open. By employing thermal maps in your facility, you gain a clearer picture of where heat is generated, which is a big step toward efficiency.
  • 2D Thermal Map of Data Center Floors. A 2D thermal map provides quick access to data across large spaces. Mosaics of infrared (IR) images reveal patterns that may not be visible in single images. The figure below is an example of a 2D thermal map of a data center floor.
  • 3D Thermal Map of Data Center Floors. The 3D thermal map is the most advanced, and most powerful, approach to capturing thermal conditions. 3D models can be viewed from any angle, so a complete 3D thermal map gives you the big picture of your facility, and professionals can examine it without having to be on-site.
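A 2D floor map like the one described is essentially a mosaic of individual IR captures. The sketch below shows the idea under simplifying assumptions (equally sized, pre-aligned tiles); real stitching software must also register and blend overlapping images:

```python
import numpy as np

# Sketch: assembling a 2D floor thermal map from a grid of per-tile IR
# captures. Tile sizes and grid shape are illustrative assumptions; real
# stitching software must also register and blend overlapping images.
def stitch_tiles(tiles, rows, cols):
    """tiles: row-major list of equally sized 2D temperature arrays."""
    strips = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(strips)

# Four fake 2x2 captures; the hottest tile stands out in the mosaic.
tiles = [np.full((2, 2), t) for t in (20.0, 22.0, 30.0, 35.0)]
floor_map = stitch_tiles(tiles, rows=2, cols=2)
print(floor_map.shape, float(floor_map.max()))
```

Once the mosaic is a single temperature array, hotspot detection becomes a simple threshold or percentile query over the whole floor.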

Monitoring with AKCP

AKCP is the world’s leading expert on professional sensor solutions. Our R&D centers specialize in SNMP-based networking and embedded device technology. AKCP was the first company to bring a workable, LoRa-based wireless monitoring system and wireless tunnel for critical infrastructure to market, with features specifically tailored for the data center.

  • Wireless Thermal Mapping of IT Cabinets


    Wireless Thermal Map Sensor

Monitor the temperature differential between the front and rear of your cabinet. Data center monitoring with wireless thermal map sensors helps identify and eliminate hotspots by flagging cabinets where the front-to-rear temperature differential is too high. 

With three (3) temperature sensors at the front and three (3) at the rear, it monitors airflow intake and exhaust temperatures and reports the temperature differential between the front and rear of the cabinet (ΔT). Wireless Thermal Map sensors work with all Wireless Gateways.
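The ΔT calculation such a sensor performs can be sketched as follows; the readings and the alert threshold here are illustrative values, not AKCP specifications:

```python
# Sketch: computing a cabinet's front-to-rear temperature differential
# (ΔT) from three intake and three exhaust sensors, as described above.
# The readings and the alert threshold are illustrative values only,
# not AKCP specifications.
def cabinet_delta_t(front_c, rear_c):
    """Mean exhaust temperature minus mean intake temperature, in deg C."""
    return sum(rear_c) / len(rear_c) - sum(front_c) / len(front_c)

front = [22.1, 22.4, 23.0]  # top / middle / bottom intake readings
rear = [33.8, 35.1, 36.2]   # top / middle / bottom exhaust readings
dt = cabinet_delta_t(front, rear)
print(round(dt, 2))
if dt > 15.0:  # hypothetical threshold for flagging airflow problems
    print("possible hotspot: check this cabinet's airflow")
```

Tracking ΔT per cabinet over time is what turns raw sensor readings into the hotspot map described above.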


Cooling is an integral part of data centers. Servers and other IT equipment generate heat that must be removed from the facility. Many cooling methods are available today, but cooling techniques alone are not enough: monitoring temperature and waste heat remains an integral part of running your facility.


Wednesday, November 17, 2021

Types of Computer Network Topology Useful for Businesses


Computer network topology matters for a company that is about to install a network, so you should first determine which topology suits it best. Keep in mind that topologies differ from one another, both in physical layout and in data transfer speed.

There are several types of network topology, each with its own advantages and drawbacks, so weigh carefully which one to deploy.

1. Ring Topology
A ring topology connects several computers using a network shaped like a ring. It typically uses a LAN card in each machine so they can interconnect.

Characteristics of a Ring Topology:
  • Each node is connected serially along a cable that forms a loop.
  • If one node is cut off, the rest of the network goes down.
  • Data packets can flow in both directions (left and right), which helps avoid collisions.
  • Usually uses UTP or patch cable.
  • The layout is very simple.
Advantages of a Ring Topology:
  • Easy to design and implement.
  • Better performance than a bus topology.
  • Economical in cabling.
  • Faults in the network are easy to trace and isolate.
  • No data-transmission collisions occur.
Disadvantages of a Ring Topology:
  • If one computer on the network has a problem, the others are affected as well.
  • Adding or removing nodes affects the entire network.
  • Difficult to configure.
2. Bus Topology
The bus topology was the first topology used to connect computers, and it is the simplest of all.

A bus topology uses a single central cable as the transmission medium connecting clients and servers. It is typically used for small-scale corporate networks.
Characteristics of a Bus Topology:
  • Each node is attached to one long serial cable, with a terminator at each end.
  • Installation is very simple.
  • Economical in cost.
  • Data packets cross one another on a single cable.
  • No hub is used.
  • Uses many T-connectors, one on each Ethernet card.
Advantages of a Bus Topology:
  • Economical cost.
  • Easy installation.
  • Does not require much cabling.
  • A simple topology.
Disadvantages of a Bus Topology:
  • Operation is slower.
  • Not suitable for heavy network traffic.
  • Difficult to troubleshoot.
  • Every barrel connector used as a joint weakens the transmitted signal; too many of them can prevent the signal from being received properly.
This topology suits companies that want each computer to have independent access, for example to print documents or to operate a computer fully on its own. In essence, a bus topology operates like a terminal system, where every piece of data transits through all the computers connected to the main terminal server.

This topology also does not use much cabling, so installation is quite easy. Even adding a new client computer can be done very easily.

3. Star Topology
The star topology is the most widely used type. Its characteristics are as follows:

  • Easy to expand.
  • Each node communicates directly with a concentrator (hub).
  • If one Ethernet card fails or a drop cable is cut, the other connected nodes are unaffected.
  • Network performance degrades when incoming packets are broadcast from the hub to every node (for example, with a 32-port hub).
  • Usually uses UTP cable.
Advantages of a Star Topology:
  • High level of security.
  • Rarely suffers network traffic problems.
  • Nodes can be added or removed easily.
  • Faults are easy to detect, simplifying network management.
  • If one link fails, the others are unaffected.
  • The most flexible option, with centralized access control.

Disadvantages of a Star Topology:
  • If the central node fails, the entire network stops.
  • Heavy data traffic slows the network down.
  • The network depends on the central terminal.
  • Requires a lot of cabling, because every computer must be connected to the central point.
  • Costs more than a bus or ring topology.
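The failure behaviors described above can be illustrated with a small simulation. This sketch models a classic ring, in which every station must actively repeat frames onward, versus a star, in which the hub forwards directly; it is a simplification for illustration only:

```python
# Sketch: why a dead station breaks a ring but only the hub is critical
# in a star. Models a classic ring in which every station must actively
# repeat frames onward; a simplification for illustration only.
def ring_delivers(src, dst, n, dead):
    """Frame hops station to station; a dead station cannot repeat it."""
    cur = (src + 1) % n
    while cur != dst:
        if cur in dead:
            return False  # frame is lost at the dead station
        cur = (cur + 1) % n
    return dst not in dead

def star_delivers(src, dst, hub_alive, dead):
    """The hub forwards directly, so only the endpoints and hub matter."""
    return hub_alive and src not in dead and dst not in dead

n, dead = 6, {3}  # six stations, station 3 has failed
print(ring_delivers(0, 5, n, dead))     # ring: frame must pass dead station 3
print(ring_delivers(0, 1, n, dead))     # ring: adjacent stations still reachable
print(star_delivers(0, 5, True, dead))  # star: unaffected while hub is up
print(star_delivers(0, 5, False, dead)) # star: hub failure stops everything
```

This matches the trade-off in the lists above: the ring spreads risk across every station, while the star concentrates it entirely in the hub.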
We can help you with:
1. Server room design
2. Server room reorganization
3. Cable, rack, and tray tidying
4. Cable documentation and labeling
5. Installation of PAC, split AC, or standing AC units
6. Installation of fire extinguishers (FirePro / FM200)
7. Installation of fire stoppers
8. Installation of raised floors
9. Installation of an Environment Monitoring System (EMS)
10. Installation of PABX (IP/Hybrid)
11. Installation of switches, routers, and firewalls
12. Installation of a NOC system (NMS)
13. Installation of a PA system
14. Maintenance of PAC systems

Please contact our team to get the best offer for your needs.

Tuesday, October 26, 2021

Solve Data Center Problems with AKCP


The data center, or server room, has become essential today.
All of your important data is housed in this room, so make sure there are no problems with your data center monitoring.

Learn about it in this webinar featuring the DCM MONITORING solution, on October 29, 2021, from 14:00 to 16:00 WIB.

Register for the webinar at:

Tuesday, October 12, 2021

Choosing Server Cooling: Room, Row, or Rack-Based

How large is your server room or data center? 

How many racks and devices does it hold?

Those are always the two basic questions we ask customers when they inquire about cooling for their server rooms. Why do these two things matter?

Almost all equipment exhausts its heat at the rear: PC servers, rack-mounted servers, and all networking and storage devices alike draw cool air in at the front and expel hot air out the back. 

So if your equipment is not housed in racks, the cooling will inevitably lose out to the heat the equipment produces, and hot spots will appear at many points around the room. 

The first step in dealing with this is to make sure every device goes into a rack, so the heat is collected at the rear of the rack. 

For overall equipment placement there is also a row-based arrangement, which gives rise to the three approaches below.

The blue arrows in the figure above show the direction of the cooling airflow in each approach. 

Room-Based Cooling

Looking at common practice, once all equipment is racked, this is the model you will see deployed most often: room-based cooling.

With this method, all cold air is delivered beneath the racks through a raised floor, emerges through perforated floor panels (blue arrows), and is drawn in at the front of the racks, typically through perforated doors; the warm exhaust is then taken back by the cooling system above the racks (red arrows). 

This approach works well as long as the equipment in the racks absorbs the cold air effectively, so you must ensure that none of the supplied cold air leaks away. Inside the rack, you must also verify that cold air is drawn in all the way up to the topmost equipment. 

Likewise, the hot air that collects behind the racks must be extracted as completely as possible through the top of the rack so that it returns to the cooling unit. 

At large capacities, dedicated cold and hot aisles are used to guarantee this airflow.

Row-Based Cooling

Another model is row-based cooling, organized by rows of racks. The first variant divides the rows into cold-air supply aisles and hot-air exhaust aisles. 

This way, all cold air is concentrated at the rack intakes (the front doors), while hot air collects between back-to-back rows; the pooled hot air then rises and returns to the cooling unit. This is known as floor-mounted row-based cooling.

The other variant is overhead row-based cooling.

Here, cold air is blown down onto the front of the racks, and hot air is drawn back into cooling units mounted above them. Many older data centers use this approach, but it calls for caution: the chilled water runs very close to the racks, and a leak is always possible.

Rack-Based Cooling
This model is in high demand today, because it cools each rack individually.

Rack-based cooling is an excellent fit when all of the equipment whose temperature must be controlled sits in a single enclosure. The trend now is to consolidate everything into one rack, since virtualization and the cloud mean not every physical server is in use. Rack-based cooling also suits equipment that is not housed in a dedicated server room.

There is also nothing wrong with applying all three approaches together.

Sometimes a particular type of equipment calls for a particular cooling approach. 

Which one is right for you? Schneider has defined it as follows.
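As a rough illustration only (the thresholds are invented, and this is not Schneider's actual selection matrix), the trade-offs above can be sketched as a simple decision rule based on rack count and per-rack heat density:

```python
# Rough rule-of-thumb sketch for choosing among the three approaches.
# The thresholds are invented for illustration; they are not Schneider's
# actual selection criteria.
def suggest_cooling(num_racks, kw_per_rack):
    if kw_per_rack > 15:
        return "rack-based"   # very dense racks: cool each rack directly
    if num_racks <= 3:
        return "rack-based"   # a few racks outside a dedicated room
    if kw_per_rack > 5:
        return "row-based"    # contain hot/cold air per row of racks
    return "room-based"       # classic raised-floor room cooling

print(suggest_cooling(2, 4.0))    # small closet deployment
print(suggest_cooling(20, 8.0))   # medium room, warm racks
print(suggest_cooling(20, 3.0))   # medium room, light racks
```

The point of the sketch is the shape of the decision, not the numbers: density pushes cooling closer to the load, while small deployments skip room-level infrastructure entirely.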

Please contact us at / 0881-8857333 for help with a cooling solution for your server equipment.



Kompas: Microsoft Server Cooling, From Seawater to a Crypto Miners' Method - Microsoft has begun cooling its data center hardware by submerging it in a special liquid. The servers that power data centers are notorious for producing excess heat. Today, most servers are cooled with evaporative ("swamp") cooling, which consumes large amounts of water. 

To reduce that water use, Microsoft has started cooling its servers by immersing entire racks in a tank of non-conductive, fluorocarbon-based liquid, a method known as two-phase immersion cooling. The liquid removes heat through direct contact with the components; it boils at just 50 degrees Celsius (122 degrees Fahrenheit), well below the boiling point of water, then condenses and rains back down into the tank as liquid. Two-phase immersion creates a closed-loop cooling system that reduces costs, since no energy is needed to move liquid around the tank and no chiller is needed for the condenser. 
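The closed loop described above can be checked with a back-of-envelope calculation: at steady state, all server heat goes into vaporizing fluid, which then condenses and returns to the tank. The latent-heat figure below is an assumed round number for illustration, not the actual fluid's datasheet value:

```python
# Back-of-envelope sketch of the closed loop described above: at steady
# state, all server heat goes into vaporizing fluid, which condenses and
# rains back into the tank, so no fluid is consumed. The latent-heat
# figure is an assumed round number, not the actual fluid's datasheet value.
LATENT_HEAT_J_PER_KG = 100_000  # assumed latent heat of vaporization

def boil_off_rate_kg_per_s(server_power_w):
    """Mass of fluid vaporized per second to carry away the heat load."""
    return server_power_w / LATENT_HEAT_J_PER_KG

# A 10 kW rack continuously vaporizes (and re-condenses) ~0.1 kg/s of fluid.
print(boil_off_rate_kg_per_s(10_000))
```

Because every kilogram that boils off condenses and returns, this rate describes internal circulation, not consumption, which is exactly why the method needs no external water.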

Microsoft is making this effort to reduce its water consumption and to improve the performance and efficiency of its servers. "It potentially eliminates the need for water consumption in data centers, so that is a really important thing for us," said Christian Belady, vice president of Microsoft's data center advanced development group. 

For context, Microsoft's operations consumed a combined 15 million cubic meters of water in 2018 and 2019. Belady said Microsoft's ultimate goal is zero water usage, and the two-phase immersion project may well help the company reach that target. "Our goal is to get to zero water usage. That is our metric, so that is what we are working toward," Belady said. 

Photo: Microsoft server racks cooled by immersion in a fluorocarbon-based liquid. (Microsoft via The Verge) 

Inspired by Bitcoin Miners

As it turns out, Microsoft's fluorocarbon immersion method was inspired by bitcoin miners (cryptominers), who in recent years have used this type of liquid cooling to mine bitcoin and other cryptocurrencies. Belady explained that the use of fluorocarbon coolant is being rolled out in phases: in the first phase, Microsoft is testing it on only a portion of its server racks, running light workloads.

In this phase, Microsoft is studying matters such as the reliability implications of the new cooling and the kinds of burst workloads it could support for the company's cloud and AI demand. Future trials will involve more racks. "We have a phased approach, and the next phase is soon with multiple racks," Belady said. He hopes to see better reliability from servers cooled with the two-phase immersion method. 

Photo: Microsoft's data center being brought back to land.

Failure at Sea

Earlier, in 2018, Microsoft had already experimented with cooling its servers by submerging them in the sea. It sank 12 racks holding 864 servers and 27.6 petabytes of storage off the coast of Orkney, Scotland, as part of the second phase of Project Natick. In 2020, the data center was raised back to the surface. Of the 855 onboard servers placed in the capsule and sunk, only eight failed to survive, a failure rate Microsoft says is better than that of its land-based data centers. The lower failure rate was likely due to the absence of human interaction and to the servers operating in the nitrogen-rich atmosphere injected into the capsule, rather than the oxygen-rich air found on land. According to Belady, oxygen and humidity are the two things that cause failures in Microsoft's servers, especially on land. He therefore hopes the two-phase immersion method will make Microsoft's servers similarly reliable and durable. "What we expect with immersion is the same as Project Natick, since the liquid displaces the oxygen and humidity that cause corrosion and failures in our systems," Belady said, as reported by KompasTekno from The Verge on Wednesday (April 7, 2021).

This article originally appeared under the title "Pendinginan Server Microsoft, Dulu Air Laut Kini Jajal Metode Penambang Kripto". Click to read:

Author: Galuh Putri Riyanto

Editor: Reza Wahyudi




Wednesday, October 6, 2021

The Data Center Will Not Die, but It Is Changing Shape


Your Data Center May Not Be Dead, but It’s Morphing

Published 17 September 2020 - ID G00720127 - 14 min read

By David Cappuccio, Henrique Cecci

Workload placement is not only about moving to the cloud, it is about creating a baseline for infrastructure strategy based on workloads rather than physical data centers. This is causing I&O leaders to rethink infrastructure strategies, which have a direct impact on enterprise data centers.



  • Workload placement in a digital infrastructure is based on business need, not necessarily constrained by physical location.
  • To create a scalable, agile infrastructure, I&O leaders will require an ecosystem of service partners.
  • Hybrid digital infrastructure management (HDIM) will provide the tools for I&O to monitor and manage any asset or process, anywhere, at any time, enabling a successful transition to digital business.
  • The movement to digital infrastructure will result in radically increased complexity for I&O, so staff must be retrained, with a focus on versatility.


I&O leaders focused on planning and enabling an infrastructure delivery strategy should:
  • Adopt a plan based on business needs by basing it on the application or workload level, rather than on the physical infrastructure.
  • Leverage their partner ecosystem to enable an agile, flexible infrastructure that is responsive to new business initiatives and reduces the I&O need to do it all.
  • Integrate diverse platform choices together into a unified solution to allow market advances and advantages to be deployed quickly and easily.
  • Develop staff versatility, changing focus away from critical roles (vertical focus) and more toward critical skills across the team.

Strategic Planning Assumption

By 2025, 85% of infrastructure strategies will integrate on-premises, colocation, cloud and edge delivery options, compared with 20% in 2020.


Maintaining and updating traditional data centers is no longer seen as the primary role of IT. IT leaders view workload placement based on business outcomes as a key success factor; as such, the physical management of data centers becomes the role of colocation, hosting, and cloud providers, not necessarily of traditional IT and facilities teams.
This is not about moving everything to the cloud or the edge, rather about changing the focus on how IT delivers value to the business. Infrastructure and operations (I&O) leaders face a daunting challenge. The IT they have known for decades is changing — radically. IT’s primary function will be to enable the business to be more agile, enter new markets more quickly, deliver services closer to the customer and position specific workloads based on business, regulatory and geopolitical impacts. The role of the traditional data center will be relegated to that of a legacy holding area, dedicated to very specific services that cannot be supported elsewhere, or supporting those systems that are most economically efficient on-premises (see Figure 1).
Figure 1: Workload Placement

The evolving infrastructure is no longer just on-premises, but wherever it needs to be.
As interconnected services, cloud providers, distributed cloud, edge services and SaaS offerings continue to proliferate, the rationale to stay only in a traditional data center topology will have limited advantages. This is not an overnight shift, but an evolutionary change in thinking how we deliver services to our customers and to the business. This trend, coupled with the new reality that outside factors might limit physical access to the data center (such as emergency quarantine), is driving new thinking in infrastructure planning.
The drivers behind this shift to a distributed digital infrastructure are many, but the key impacts to consider are shown in Figure 2.
Figure 2: Impacts and Top Recommendations for I&O Leaders

Future workloads will be placed based on business requirements, not current infrastructure.
With the recent increase in business-driven IT initiatives, often outside of the traditional IT budget, there has been a rapid growth in implementations of IoT solutions, edge compute environments and nontraditional IT. There has also been an increased focus on customer experience with outward-facing applications and on the direct impact of poor customer experience on corporate reputation. This outward focus is causing many organizations to rethink placement of certain workloads based on network latency, customer population clusters and geopolitical limitations.
Historically, we developed sophisticated support structures to rapidly solve customer problems and build long-term intimacy, which improved customer satisfaction. But many of today’s customers might look to social media as a means to airing complaints, and that single customer satisfaction issue can quickly reach thousands of potential customers and become a board-level corporate reputation issue instead. IT’s new role is to place specific workloads and infrastructure to radically reduce that risk of exposure, while improving that customer experience.

Impacts and Recommendations

Workload Placement Has Become the Key Driver of Digital Infrastructure Delivery

Many organizations are developing infrastructure delivery strategies and are wrestling with the issue of cloud adoption. I&O leaders are not primarily concerned with whether moving workloads to the cloud is an option for them or not, rather how to determine which workloads would make the most sense to develop for or migrate to the cloud and which would have the most optimal benefit to the business.
These organizations have realized that while “cloud first” may be the trend, a more realistic model is “cloud first but not always.” Determining the right workload to migrate, at the right time, for the right reasons, to the right provider, will be the key to success over time. I&O leaders are, therefore, beginning to build IT strategies with a focus on their application portfolio, rather than on the physical infrastructure, moving away from traditional IT-architecture-driven decisions toward a services-driven strategy. When business units have traditionally requested new applications (or services), many IT organizations would first ask themselves, “How can we build this service to fit within our architecture?” While this strategy has worked for completely IT-controlled on-premises environments, it becomes self-constraining over time, as the architecture may not adapt quickly to evolving business requirements.
In a hybrid IT environment, the question of service/application delivery changes from the traditional, “Can we make it fit within our existing architecture?” to “Where can we find it elsewhere, rather than building it ourselves?” This becomes an outside-in or top-down strategy, versus the inside-out or bottom-up strategy that traditional IT shops have used. Initially, this strategy applies to new service requests or new applications, but the same logic can be applied to the existing application portfolio, especially when developing a long-term deployment (or redeployment) strategy.
  • Apply specific business rules for rationalizing workload placement (see “Developing a Practical Hybrid Workload Placement Strategy”). These rules focus on areas such as compliance, data protection, security, latency, resiliency, reputation, service continuity, location, availability and performance. They become guidelines for determining where current and future workloads belong and become the baseline for developing an overall infrastructure upgrade strategy. This is not a migration strategy because some workloads may not move at all, rather a strategy designed to optimize business impacts and not just I&O costs.
  • Replace older workloads with an as-a-service offering where appropriate. The trend of migrating back-office workloads toward SaaS adoption continues, but technology procurement leaders must evaluate and assess migration risks in order to achieve maximum benefit. Picking the wrong provider or moving the wrong workload can increase operating costs and risks, rather than decrease them. I&O leaders focused on efficient service delivery need to work closely with business units to determine where as a service is warranted and where it isn’t.
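Placement rules like those above can be encoded as a simple scoring function. The sketch below is illustrative only; the rule names and weights are invented, not Gartner's methodology:

```python
# Sketch: encoding workload-placement rules as a scoring function. The
# rule names and weights are invented for illustration; this is not
# Gartner's actual methodology.
RULES = {  # workload attribute -> placements it favors, with weights
    "data_residency_restricted": {"on_premises": 2, "colocation": 1},
    "latency_sensitive":         {"edge": 2, "colocation": 1},
    "back_office":               {"saas": 2, "cloud": 1},
    "elastic_demand":            {"cloud": 1},
}

def place(workload_attrs):
    """Pick the placement with the highest total score for the workload."""
    scores = {}
    for attr in sorted(workload_attrs):  # sorted for deterministic ties
        for placement, weight in RULES.get(attr, {}).items():
            scores[placement] = scores.get(placement, 0) + weight
    return max(scores, key=scores.get) if scores else "on_premises"

print(place({"back_office"}))                          # favors SaaS
print(place({"latency_sensitive", "elastic_demand"}))  # favors the edge
```

In practice the rules cover many more dimensions (compliance, resiliency, cost), but the outside-in principle is the same: the workload's attributes, not the existing architecture, drive the placement decision.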

An Ecosystem Will Be Required to Enable Scalable, Agile Infrastructures

The new digital ecosystem can be homegrown and developed in conjunction with key service providers. The deployment of this distributed digital infrastructure begins by agreeing on the business-related benefits that can be attained for each application workload and its associated data. These benefits can include reduced latency, improved customer experience, enhanced corporate reputation, stronger service continuity, geodiversity, improved compliance or mandated data location residency requirements. When weighing these benefits, take into account not only what the IT infrastructure can deliver, but also what is available on the market that you can leverage — either colocation, hosting, or cloud or, more recently, distributed cloud (see “‘Distributed Cloud’ Fixes What ‘Hybrid Cloud’ Breaks”). More importantly, ask how a service partner can be leveraged to provide you enhanced services when needed.
An evolving trend in the colocation market has been the introduction of enhanced services that go well beyond traditional power, floor space and support services (see “Infrastructure Is Everywhere: The Evolution of Data Centers”). These enhanced services include carrier neutrality, cloud-enabled services, access to multiple cloud services via secure networks, cross-connects to partners on the same premises, or interconnect fabrics to other sites or services. By using these fabrics, enterprise customers could have access to many different providers and services and be able to switch between or swap services when contracts or performance requirements change. Moving between providers is not a simple task, however. Expect enhanced colocation providers (see Note 1) to offer a software-centric layer above these fabrics, thus providing a seamless mechanism for moving between services for their customers. In this manner, colocation providers could become an integral part of your digital infrastructure, so the development of clearly defined SLAs, key performance indicators (KPIs) and contractual obligations is imperative.
As digital business evolves, the need for geodiversity is evolving as well. Data location, regulatory requirements (such as the GDPR) and customer requirements (such as low latency) may drive the need for workloads to be accessible from multiple locations. A partner ecosystem that supports strong interconnection services can be a key enabler for these workloads.
Data center interconnection is a model in which discrete assets within a multitenant data center are connected to each other directly and in a peer-to-peer fashion. These connections may be as simple as intrasite cross-connects but can allow data-center-based assets to horizontally connect to multiple carriers, cloud providers, peers and service providers.
  • Combine interconnection with high-speed enterprise access to the multitenant data center and include enterprise assets (such as compute, storage and networking), located in the multitenant data center, to bring the enterprise and its applications to the network, as opposed to the outdated model of bringing the network to the enterprise. This creates a flexible infrastructure that allows placement of the right assets, at the right place, for the right reasons, in support of business outcomes.
  • Pick partners based on their vision, capabilities and their partners. When considering ecosystem partners, in particular colocation providers, it’s critical that you understand their long-term vision of the market and how its evolution is changing their strategy. You’ll find many vendors’ “vision” is to produce and provide more of the same — just in more places. However, the important question is how they are preparing for the future of digital infrastructures and how that development will enable you (as a customer) to service your business more effectively.
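The interconnection model described above (discrete assets within a multitenant data center cross-connecting directly, peer to peer, to carriers, cloud providers and partners) can be sketched as a simple adjacency structure. All provider and asset names below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch of peer-to-peer interconnection inside a
# multitenant data center: an enterprise cage cross-connects directly
# to carriers, cloud on-ramps and partners. Names are illustrative.

class InterconnectFabric:
    def __init__(self):
        # Adjacency map: each endpoint to the set of its direct peers.
        self.links = defaultdict(set)

    def cross_connect(self, a, b):
        # Cross-connects are bidirectional peer-to-peer links.
        self.links[a].add(b)
        self.links[b].add(a)

    def reachable_services(self, asset):
        """Everything directly reachable from an asset via cross-connects."""
        return sorted(self.links[asset])

fabric = InterconnectFabric()
fabric.cross_connect("enterprise-cage", "carrier-A")
fabric.cross_connect("enterprise-cage", "cloud-onramp-X")
fabric.cross_connect("enterprise-cage", "partner-B")
print(fabric.reachable_services("enterprise-cage"))
# ['carrier-A', 'cloud-onramp-X', 'partner-B']
```

Swapping a provider in this model means adding or removing one link, which is what makes fabric-based interconnection attractive compared with re-homing an entire network.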

Hybrid Digital Infrastructure Management Emerges as the Key Enabler for I&O’s Transition to Digital Business

As enterprises move toward hybrid digital infrastructures, one of the key pain points will be operational processes and tools (see Note 2). I&O organizations have become adept at managing silos, but as a result staff tend to view the world through those silos of servers, storage, networking, virtualization, applications and so on. In highly distributed environments where a workload could be anywhere, with a hybrid mix of sourcing and architectures, the physical location of an asset (or process) will not be as clearly defined, yet its attributes, performance, KPIs and cost will have an increasingly important impact on how I&O delivers services to end customers. Ultimately, I&O remains responsible for both the assets and the end-user experience and will need tools to actively monitor and manage any asset or process, anywhere, at any time.
Digitalization’s impact can best be observed in the emergence of newer technologies and products providing an advanced analytical foundation (such as artificial intelligence for IT operations [AIOps]) and increasingly relevant monitoring technologies (digital experience monitoring, collective intelligence benchmarking, unified communications monitoring and so on) that support both experience management and delivery automation functions (see HDIM sample vendors in Note 3). These are critical to enabling IT operations management (ITOM) teams to manage a continuously growing and diverse set of technologies, including those with disruptive impact (for example, IoT, wireless networking, cloud and software-centric networking).
  • Invest in the technologies needed to discover and manage a hybrid IT model so that I&O gains more proactive, business-relevant insight. Over the long term, this is not about transforming the infrastructure; it is about transforming how I&O provides value in a digitally distributed ecosystem. In this new hybrid world, the I&O role is migrating toward integration and operations.
  • Redefine supporting tools to better align with changing demands as the role and value proposition of ITOM changes in support of digital infrastructures. These changes are typically driven by functional groups that focus on managing customer experience quality, automating the provisioning and configuration of resources, or analyzing the performance of technology resources — wherever they are (see “IT Operations Management 2020: Shift to Succeed”).
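As a minimal sketch of the location-agnostic inventory an HDIM-style toolchain maintains (each asset carrying its venue, dependencies and KPI readings so I&O can answer "what runs where" regardless of physical location), assuming illustrative field names and sample data:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a hybrid asset inventory. Field names, venue
# labels and sample data are illustrative assumptions, not any vendor's
# actual data model.

@dataclass
class Asset:
    name: str
    venue: str                                 # e.g. "on-prem", "colo", "cloud", "edge"
    depends_on: list = field(default_factory=list)
    kpis: dict = field(default_factory=dict)   # e.g. {"latency_ms": 4}

def assets_in_venue(inventory, venue):
    """List asset names deployed in a given venue, wherever that is."""
    return [a.name for a in inventory if a.venue == venue]

inventory = [
    Asset("web-frontend", "cloud", depends_on=["order-db"]),
    Asset("order-db", "colo", kpis={"latency_ms": 4}),
    Asset("sensor-gw", "edge", depends_on=["web-frontend"]),
]
print(assets_in_venue(inventory, "cloud"))  # ['web-frontend']
```

The design point is that venue is just another attribute of an asset, alongside dependencies and KPIs, rather than the organizing principle of the inventory, which is what lets monitoring and management span silos.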

IT Talent Management and Retraining Existing Staff Are Critical Success Factors

I&O leaders are faced with a seemingly impossible challenge: to develop their staff skills to deliver against the business demand, amid a new and unfamiliar level of infrastructure complexity. They cannot afford to lose staff, yet have restrictions placed on new headcount at a time when they feel like 10 times as many resources are needed, especially those with institutional knowledge (see “Talent Management: Dealing With Silos in a Hybrid Infrastructure World”).
For most leaders, this represents a headache on top of every other headache, as they are faced with the challenges of implementing, understanding and supporting new layers of integration, orchestration, customization and configuration. In parallel, existing teams must continue to deliver what they have been doing to date while also finding ways to work harmoniously with others in a bimodal environment that supports the aims of a digital business. So much more is demanded of individuals that they are often able to focus only on the immediate issue in front of them, and thus fall back into siloed patterns of thinking and behaving. All this is occurring at a time when the business appetite for rapid change, and the complexity of infrastructures and technology solutions, are at an all-time high.
  • Prioritize and develop staff versatility, complementing vertical expertise with the additional capabilities needed. When the business view of a service relies on infrastructure provided by multiple vendors, making the right decisions requires broad thinking, often beyond a single technology silo. As IT moves toward the realm of an ecosystem of partners, connecting the business to the right provider and adding value to this particular relationship require broad understanding of both parties in the brokering relationship. Therefore, in distributed digital infrastructures, the added skills required from a versatilist include two critical areas — business knowledge and provider knowledge — and must also be underpinned by the ability to build rapport. With respect to business knowledge, versatility is needed to interpret business situations and the resulting requirements correctly. This is clearly a critical skill in roles that involve solution architecture. However, in a hybrid infrastructure, this becomes even more important for other supporting staff who need to navigate multiple service delivery models and understand the potential effects of their actions.
  • Actively develop individuals and teams, prioritizing collaborative skills and lateral thinking. Let’s begin to recognize what we really value in IT — not only depth in a single discipline (except in a specific subset of people) but also breadth across multiple disciplines, coupled with depth in a primary discipline. Real-world experience is more effective than just training to build the necessary breadth and depth of knowledge needed for the emerging landscape of digital infrastructure. Add or enhance your business analysis functions to facilitate working more closely with lines of business and the CFO.
  • Enable, and even incentivize, continuous learning. The most effective IT people are always looking for new things to learn, and in many cases the most interesting areas are the unknown ones. Making learning easy and rewarded is a critical success factor as we move toward fully digital infrastructures. When employees realize that their value lies not only in how much they know in a discipline, but in how well they understand the linkages between disciplines and the impact on the business, IT as a whole will become a much stronger organization, better able to adapt to these changing environments. Additionally, retention of high-quality talent is always an issue for IT organizations, but employees who feel valued are often more motivated and less likely to change roles.

Note 1: Sample Colocation “Ecosystem” Providers

  • CoreSite
  • Cyxtera
  • Digital Realty
  • EdgeConneX
  • Equinix
  • NTT Global
  • vXchnge

Note 2: Hybrid Digital Infrastructure Management

Hybrid digital infrastructure management (HDIM) involves the integration of tools designed to monitor distributed environments and includes devices, subnets, domains, data centers, edge deployments and/or service providers. Its focus is on asset discovery, monitoring, KPI metrics, optimization, dependency mapping, and location of both physical and logical assets.

Note 3: Representative HDIM Vendors

  • Hyperview
  • CloudSphere
  • Firescope
  • LogicMonitor
  • Virtana
  • Nlyte Software
  • Snow Software
  • Vistara
  • Ivanti
  • OpsRamp
  • FNT
  • Flexera
  • Turbonomic