Traditional software toolsets simply can’t balance escalating IT workloads with the need to cut data center energy consumption
Today’s data centers face a challenge that, at first glance, looks almost impossible to resolve. Operations have never been busier, yet at the same time critical facilities are coming under increasing pressure to reduce their energy consumption – particularly as the reality of corporate net zero commitments starts to bite. So how can data centers reconcile what appear to be two conflicting demands – supporting escalating workloads while cutting carbon emissions?
Unfortunately, traditional data center software toolsets – building management systems (BMS), electrical power management systems (EPMS), computational fluid dynamics (CFD) and data center infrastructure management (DCIM) – can’t provide a credible answer, because they don’t equip operations teams with a complete view of what’s happening in their data centers. These toolsets have solid use cases, but they are far less flexible when it comes to directly supporting real-time optimization. A BMS is of course a key platform, yet it typically offers no analytics and is usually designed to alert only on hard faults or SLA breaches – which is often too late to prevent an outage. Similarly, electrical power management systems are often treated as little more than a BMS extension.
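To make that limitation concrete, here is a minimal, purely illustrative sketch of the kind of static, hard-fault alerting rule a typical BMS applies (the sensor name and threshold below are assumptions for the example, not taken from any specific product). By the time a rule like this fires, the condition has already been breached:

```python
# Illustrative only: static, threshold-based alerting of the kind a typical
# BMS rule engine applies. The alert fires only once the limit has already
# been breached, leaving little or no time to act.

SLA_SUPPLY_AIR_MAX_C = 27.0  # hypothetical supply-air SLA limit for the sketch

def check_hard_fault(sensor_id: str, supply_air_temp_c: float) -> str | None:
    """Return an alert only when the reading already violates the SLA."""
    if supply_air_temp_c > SLA_SUPPLY_AIR_MAX_C:
        return f"ALERT: {sensor_id} at {supply_air_temp_c:.1f}°C exceeds SLA limit"
    return None  # a steadily rising 26.9°C reading raises no warning at all

print(check_hard_fault("CRAH-04/supply", 26.9))  # None - no early warning
print(check_hard_fault("CRAH-04/supply", 27.4))  # fires only after the breach
```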
And while CFD tools can be invaluable for new-build projects or major design changes, they typically rely on data from a single point in time under fixed parameters – making it overly complex to unlock real-time optimization opportunities. DCIM systems, for their part, tend to be driven largely by IT requirements, with very little focus on the M&E side. Many DCIM vendors originated on the IT side of the fence and, although they may claim comprehensive functionality ‘inside the rack’, none has yet properly addressed the very real M&E needs of data center operators – particularly when it comes to overall energy efficiency and capacity management.
Given these inefficiencies, it’s hardly surprising that the Uptime Institute reported the global data center industry was on track to waste some 140 billion kilowatt-hours of energy in 2020 – around $18 billion worth – through inefficient cooling and poor airflow management. There’s no doubt that the size of the optimization prize is considerable, but capturing it will require a break from legacy infrastructure tools.
Instead, I believe the answer lies in a fundamentally different and more innovative approach: combining machine learning and AI algorithms with far more comprehensive sensing and advanced software optimization tools to provide a true real-time view of data center performance. Such an approach doesn’t just highlight problems as they occur – it delivers intelligent insights in real time, along with recommended actions before potential issues even have a chance to develop, as the sketch below suggests.
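As a hedged illustration of what that predictive layer might look like in its very simplest form (the sensor feed, window length, thresholds and recommended action below are assumptions for the sketch, not a description of any particular product or model), even a basic trend extrapolation over a rolling window of readings can surface a recommendation before a limit is crossed:

```python
# Minimal sketch: extrapolate a recent temperature trend and recommend action
# before the SLA threshold is reached. Real systems would use far richer
# sensing and ML models; this only illustrates the "predict, then recommend" idea.
import numpy as np

SLA_SUPPLY_AIR_MAX_C = 27.0   # hypothetical limit, as in the earlier sketch
HORIZON_MINUTES = 30          # how far ahead the trend is projected

def predict_breach(readings_c: list[float], interval_min: float = 5.0) -> str | None:
    """Fit a linear trend to recent readings and flag a projected SLA breach."""
    t = np.arange(len(readings_c)) * interval_min
    slope, intercept = np.polyfit(t, readings_c, 1)           # simple linear fit
    projected = slope * (t[-1] + HORIZON_MINUTES) + intercept
    if projected > SLA_SUPPLY_AIR_MAX_C:
        return (f"Projected {projected:.1f}°C in {HORIZON_MINUTES} min - "
                f"recommend raising CRAH fan speed or rebalancing airflow now")
    return None

# Readings still within SLA but trending upwards trigger a recommendation
# well before the hard-fault alert in the earlier sketch would fire.
print(predict_breach([25.2, 25.6, 26.0, 26.3, 26.7]))
```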
What do you think? Check out my next post to learn more about how machine learning and AI can make all the difference when it comes to data center optimization.