How Wireless Monitoring Helps MTDC Providers Realize Energy Savings

Jul 2022
Colocation

MTDCs are a critical part of today’s highly distributed architectures, and effectively controlling their high operating costs is critical for providers. Here’s why: data center cooling systems account for nearly 37%¹ of overall power consumption, and cooling is the fastest-rising data center operating expense. With providers tackling ever-larger workloads, it has never been more important to take a smarter approach to data center cooling.

Metrics that Matter

Two of the most important metrics for running an energy-efficient data center are power usage effectiveness (PUE) and the rack cooling index (RCI®). PUE is calculated by dividing the total power entering a data center by the power used to run the IT hardware within it, a figure that excludes cooling infrastructure. PUE is expressed as a ratio, with a PUE of 1.0 representing perfect energy efficiency. Because cooling always consumes some additional energy, a perfect PUE is unattainable in practice. That said, an optimized cooling infrastructure that makes use of innovations like liquid or immersion cooling might achieve a PUE of 1.1 or better.
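
To make the ratio concrete, here is a minimal Python sketch of the calculation described above. The wattage figures are illustrative placeholders, not measurements from this article.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,100 kW overall to support a 1,000 kW IT load.
print(pue(1_100, 1_000))  # 1.1 -- the "optimized" figure cited above
```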

RCI is a measure of how effectively equipment racks are cooled relative to industry thermal standards and guidelines. Specifically, it gauges how well rack air-intake temperatures stay within the recommended range, which for most equipment is 64°–80°F.
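
A commonly published form of the high-end index (RCI-HI) penalizes rack-intake temperatures that drift above the recommended maximum, scaled against an allowable maximum. The sketch below assumes an 80°F recommended maximum and a 90°F allowable maximum purely for illustration.

```python
RECOMMENDED_MAX_F = 80.0   # top of the recommended inlet range cited above
ALLOWABLE_MAX_F = 90.0     # assumed allowable maximum, for illustration only

def rci_hi(intake_temps_f: list[float]) -> float:
    """Return RCI-HI as a percentage; 100% means no intake exceeds
    the recommended maximum."""
    over = sum(max(0.0, t - RECOMMENDED_MAX_F) for t in intake_temps_f)
    worst_case = (ALLOWABLE_MAX_F - RECOMMENDED_MAX_F) * len(intake_temps_f)
    return (1 - over / worst_case) * 100

print(rci_hi([72.0, 78.5, 82.0, 75.0]))  # one hot intake pulls the index to 95.0%
```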

Cooling Optimization Strategies for Success

In addition to controlling costs, effective cooling optimization helps extend equipment lifespans, improve reliability, and reduce the risk of unscheduled downtime. However, cooling optimization must be approached as an iterative process: data centers run around the clock, and workload sizes vary from hour to hour. Adapting to this constantly changing environment is the primary objective of cooling optimization, hence the need for innovative solutions that can monitor and control cooling systems on the fly.

How Wireless Monitoring Helps Maintain Efficiency

Hotspots are localized temperature variations that occur in a data center. They are detrimental to performance and can require significant power and cooling resources to correct. Optimally placed server rack temperature sensors can automate monitoring and automatically send notifications when preconfigured thresholds are exceeded. More sophisticated systems also control fan speeds automatically to strike an optimal balance between rack temperature and energy usage. And with all sensors connected wirelessly to a centralized database, administrators gain real-time visibility into their data center environments at both holistic and granular levels.
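
As a rough sketch of the threshold-based monitoring and fan control described above, the snippet below shows how a sensor reading might be evaluated against preconfigured limits. The rack name, thresholds, and the notify/fan-speed hooks are hypothetical stand-ins; a real deployment would call the monitoring vendor's API.

```python
WARN_F, CRIT_F = 80.0, 90.0  # assumed inlet-temperature thresholds

def notify(message: str) -> None:
    print(message)  # placeholder for email/SNMP/webhook alerting

def set_fan_speed(rack: str, mode: str) -> None:
    print(f"{rack}: fan mode -> {mode}")  # placeholder for a BMS/DCIM call

def evaluate(rack: str, inlet_temp_f: float) -> None:
    if inlet_temp_f >= CRIT_F:
        notify(f"{rack}: CRITICAL inlet {inlet_temp_f:.1f}°F")
        set_fan_speed(rack, "max")   # trade energy for cooling headroom
    elif inlet_temp_f >= WARN_F:
        notify(f"{rack}: warning, inlet {inlet_temp_f:.1f}°F")
        set_fan_speed(rack, "high")
    else:
        set_fan_speed(rack, "auto")  # let the controller optimize for energy

evaluate("rack-14", 83.2)
```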

This is the latest post in our series on sustainability strategies, solutions, and success stories for MTDC providers. Be sure to subscribe above for updates so you don’t miss upcoming insights, and visit our website to learn more about our multi-tenant data center solutions and our Environmental, Social, and Governance (ESG) content exploring sustainability-focused solutions for a connected world.

Download our ebook to learn more about the considerations impacting business opportunities in a post-pandemic world, and why a physical infrastructure that can keep pace with changing business requirements is key to multi-tenant data center success.


¹ Average Data Center Energy Usage Allocation, Lawrence Berkeley National Laboratory, 2007

Author:

Jeff Paliga