Visualizing performance at a more granular level is key to unlocking data center energy cooling savings
June 22, 2021 12:28 pm
Comment from Anuraag Saxena, Data Center Optimization Manager, EkkoSense
Organizations clearly want to deliver on their carbon obligations, but that can be challenging when data center operators don’t always have a clear understanding of how their rooms are performing from a cooling, capacity and power perspective.
Indeed, when faced with an external issue – such as an increased thermal demand placed on facilities by a surge in hosted services – the default position for many operations teams remains to just keep throwing more cooling at the problem. This simply adds to the data center’s overall carbon footprint, and often does little to resolve the original issue.
Data center operators also need to recognize that optimizing thermal performance positively impacts data center risk management – however, it’s difficult to ask the right questions if you don’t actually have any granular visibility into how individual racks and cooling equipment are performing. EkkoSense’s research shows that only 5% of data center M&E teams currently monitor and report equipment temperatures actively on an individual rack-by-rack basis – and even fewer collect real-time cooling duty information or conduct any formal cooling resilience tests.
So, while operators remain keen to secure carbon reductions, the reality for many is that they don’t have access to the tools that can help them make smart data center performance choices in real time. While legacy DCIM tools are useful for helping data center operations teams manage their facilities, many find them limited when it comes to the kind of deep data analysis needed to really optimize performance at the mechanical and electrical level.
Moving beyond legacy DCIM reporting
So perhaps it’s time to stop treating efficient data center operations as a black art. You don’t need over-complex DCIM suites or expensive, non-real-time and often imprecise external CFD consultancy to tell you what’s going on in your own data center. It’s much more useful to have a real-time dynamic viewpoint of your mission critical estate.
That’s why, for true data center infrastructure management, M&E reporting tools need to get much more granular – drawing on the latest low-cost data center IoT sensor technologies and intuitive 3D software visualizations to see rooms in a realistic 360° real-time digital-twin view. This makes the immersive real-time optimization of data centers a reality for operations and facility teams. By gathering and visualizing this data at a granular level, they can start to identify how individual racks and cooling equipment are performing. They can then draw on the latest AI and machine learning analytics capabilities to secure actionable improvements as part of their efforts to actively manage and maximize the performance of their critical data center environments.
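As a rough illustration of what rack-by-rack monitoring makes possible, the sketch below flags racks whose average inlet temperature drifts outside the ASHRAE-recommended envelope of roughly 18–27°C. The rack names, readings, and thresholds are illustrative assumptions, not data or logic from any EkkoSense product:

```python
# Minimal sketch of rack-level thermal flagging. All rack IDs and
# temperature readings here are hypothetical sample values.
from statistics import mean

ASHRAE_LOW, ASHRAE_HIGH = 18.0, 27.0  # recommended inlet range, deg C


def flag_racks(readings):
    """readings: dict mapping rack id -> list of inlet temps (deg C).

    Returns racks whose average inlet temperature falls outside the
    recommended envelope, with the offending average.
    """
    flagged = {}
    for rack, temps in readings.items():
        avg = mean(temps)
        if avg < ASHRAE_LOW or avg > ASHRAE_HIGH:
            flagged[rack] = round(avg, 1)
    return flagged


sample = {
    "rack-A1": [21.5, 22.0, 21.8],  # healthy
    "rack-A2": [28.4, 29.1, 28.8],  # running hot - thermal risk
    "rack-B1": [16.9, 17.2, 17.0],  # overcooled - wasted cooling energy
}
print(flag_racks(sample))  # only A2 and B1 fall outside the envelope
```

Note that the overcooled rack is as interesting as the hot one: it points to cooling energy being spent where it isn’t needed, which is exactly the kind of opportunity granular visibility surfaces.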
From a space perspective, data center operations can use the 3D visualization approach and a simple drag-and-drop interface to support a range of M&E capacity planning activities, from basic rack changes through to complete new room layouts. Capabilities such as space planning and reserved space allocation can help organizations to unlock any stranded capacity from their existing data center cooling and power infrastructure – effectively enabling them to do much more with less.
While there’s increasing awareness of what best practice thermal optimization can achieve, it’s an approach that still demands much more attention. Data center teams recognize the benefits that software-driven thermal optimization can bring: reduced risk, the ability to unlock increased IT capacity from existing resources, and lower energy costs being some of the most important. Key advantages here include:
- Low-risk and light touch deployment
- Benefits available immediately
- Payback in under a year
- Very low total cost of ownership and human resource overhead during operation
- Costs typically financed by cooling energy savings of 30%
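The payback claim above follows from simple arithmetic. The sketch below works through one hypothetical scenario; the cooling bill and deployment cost are assumed figures for illustration, with only the 30% savings rate taken from the article:

```python
# Hypothetical payback calculation - cost figures are illustrative
# assumptions, not EkkoSense pricing.
annual_cooling_cost = 200_000  # USD per year (assumed)
savings_rate = 0.30            # 30% cooling energy saving (per the article)
deployment_cost = 50_000       # USD, one-off (assumed)

annual_saving = annual_cooling_cost * savings_rate  # USD saved per year
payback_years = deployment_cost / annual_saving
print(f"Annual saving: ${annual_saving:,.0f}; payback in {payback_years:.2f} years")
```

On these assumptions the deployment pays for itself in well under a year, which is the mechanism behind “costs typically financed by cooling energy savings”.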
In contrast to DCIM solutions that can take years to implement, software-driven thermal optimization gives data center teams much faster access to the insights they need, at lower cost and with less human management overhead. The result is exactly the kind of data-driven decision-making and scenario planning that lets them make the transition from simply monitoring critical facilities to identifying and actioning thermal, power and capacity opportunities that offer demonstrable and rapid ROI.
Based on our analysis of a significant sample of midsize, enterprise and hyperscale data centers, we estimate that an overall reduction in global carbon emissions of some 3.3 million tonnes of CO2-equivalent annually is achievable – simply by applying data center cooling optimization best practices.
What are your thoughts? I’d welcome your views. [email protected]

Tags: Data center, DCIM, thermal optimisation
This post was written by Cheryl Billson