You still don’t need AI to operate your Data Center

It seems that the “hype” is starting to fade a bit, but I still hear the word AI far too often in relation to Data Center infrastructure operations. And it’s logical, because even the “big bosses”, who are not involved in the day-to-day of the Data Center, regularly ask: does our DCIM system, BMS, or any other system have AI?

And this reminds me of when we talk about integrations between systems: the answer is another question. Yes, it can be done, but what is it going to bring you?

I think that sometimes we once again confuse a tool that will be part of a journey with the objective itself. 

What are the objectives in Data Center operations? It’s always good to keep them at hand:

  • Minimize risks: resilience.
  • Reduce costs: efficiency.
  • Reduce time in service deployment: strategy.

That’s why it’s striking that, no matter how much we talk about automation, artificial intelligence, or even autonomous Data Centers that manage themselves, day-to-day operations are still reactive: full of alerts, urgent decisions, and too many variables that depend on people’s experience (or tiredness).

Maybe the problem is not in the technology. Maybe the problem is that we have started the journey from the end: we look outside for solutions to problems that can only be worked on from the inside: processes. And it is always easier to buy into the idea that something plug-and-play will solve every problem than to face people, departments, and management silos, because facing them is something none of us likes.

Does that mean there is no path to automation? That there is no technology that can help us? Not at all. What it means is that each technology or capability must arrive at the moment when you are ready for it. Imagine that tomorrow someone buys you a robot for Data Center operations, delivers the package with the instructions, and tells you “there you have it”… and then… where would you start?

At Bjumper we have been asking ourselves these questions for a long time. We don’t have all the answers, but we have chosen a path: from the bottom up, with patience (which lately seems to have stopped being the mother of all sciences).

From our perspective, automation does not really start with AI. It starts with something much more basic (and much more difficult): clear inputs. 

When we think about inputs, we almost always think about technical data: temperatures, consumption, equipment status. But a Data Center receives many more inputs than we usually admit: 

  • Incidents 
  • Changes 
  • Projects 
  • Maintenance 
  • Approvals 
  • Human decisions 
  • Process exceptions 

Everything that enters the Data Center and has an impact on operations is an input, even if today we do not treat it as such. The problem is that these inputs are usually: 

  • Scattered across multiple tools.
  • Without a common structure.
  • Without operational context.
  • Duplicated or, even worse, contradictory.

And this is the first point to tackle, because when the starting point is confusing, the rest of the journey will be too.

If we do not know exactly what enters the Data Center, the “in” of the input, we cannot trust anything that comes out.
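The idea of a common structure for these inputs can be sketched in code. Here is a minimal, illustrative example in Python; the `OperationalInput` class and its field names are assumptions invented for this sketch, not part of any real product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical minimal schema: every input to the Data Center
# (incident, change, maintenance, approval...) lands in one shape.
@dataclass
class OperationalInput:
    source: str        # tool it came from, e.g. "ticketing", "BMS", "cmms"
    kind: str          # "incident", "change", "maintenance", ...
    asset_id: str      # affected asset, in a shared naming scheme
    description: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def normalize(raw: dict, source: str) -> OperationalInput:
    """Map a tool-specific record onto the common structure."""
    return OperationalInput(
        source=source,
        kind=raw.get("type", "unknown").lower(),
        asset_id=raw.get("asset", "unassigned"),
        description=raw.get("summary", ""),
    )

# Records from two different tools end up with the same shape:
ticket = normalize({"type": "Incident", "asset": "RACK-A01", "summary": "PDU alarm"}, "ticketing")
work = normalize({"type": "Maintenance", "asset": "CRAC-03", "summary": "Filter swap"}, "cmms")
```

The point of the sketch is not the code itself but the discipline it implies: every input, whatever its origin, carries the same minimum context before anything downstream is allowed to consume it.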

Alright: now we have clear inputs, normalized because the processes are defined and followed, and we assume that the outputs will be alerts, dashboards, or reports. Are they? From my point of view, those are not outputs; they are data visualizations.

An output must be operational and valuable and must answer very specific questions: 

  • What is happening?
  • Why does it matter?
  • What are we expected to do now?

An alert without context does not help. A dashboard without interpretation does not decide. A report without action does not change anything. Because a good output does not inform, it guides. 

Therefore, there is more work to do: identify valid outputs, understand which inputs we need for them, and include the technology required (which may or may not be AI) to reach them and automate them. This is the route so that automation strategies do not stall halfway.

Once this work is done, outputs change in nature: they stop being noise and become:

  • Clear recommendations.
  • Warnings with real impact.
  • Confirmations that everything is under control (this is also an output and the best of them!).
  • Prioritized actions based on risk and timing.

It is not about having more information, but about having the right information at the right time. 
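To illustrate the difference between a visualization and an output, here is a hedged sketch in Python (the field names, thresholds, and recommended action are invented for illustration): the same temperature reading, turned into something that answers the three questions, including the “everything is under control” case:

```python
# Illustrative only: convert a raw reading into an operational output
# that answers: what is happening, why it matters, what to do now.
def to_output(alert: dict) -> dict:
    temp = alert["value_c"]
    limit = alert["limit_c"]
    over = temp - limit
    if over <= 0:
        # Confirmation that everything is under control is also an output.
        return {"what": f"{alert['asset']} at {temp} °C", "why": "within limits", "action": "none"}
    return {
        "what": f"{alert['asset']} at {temp} °C, {over:.1f} °C over limit",
        "why": "sustained overtemperature shortens equipment life and risks shutdown",
        "action": "check airflow containment and CRAC setpoints before the next reading",
    }

print(to_output({"asset": "RACK-B12", "value_c": 31.0, "limit_c": 27.0}))
```

A dashboard would show the 31 °C; the output tells you the margin, the consequence, and the next step.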

Here appears something key for any future evolution, and from my perspective the most complicated point: trust.

Trust that the output makes sense. 
Trust that no information is missing. 
Trust that the proposed decision is coherent with operational reality. 

As I have said on other occasions, this is only achieved over time, through a transition that gives you control and confidence over ordered inputs and data. And again, that can only be reached with clear processes that are followed. The moment someone makes a change inside the Data Center bypassing the processes… they ruin the inputs on which automation is built.

Well, after having clarified the difference between input and output and how to get there, we were still missing something. And since the world of digital development and product has so much knowledge and is so well organized, we came across the concept of outcome. What is the “outcome”? What really matters to the business.

The outcome is what truly changes in the operation and what responds to the objectives of the Data Center. 

  • Fewer human errors.
  • Less improvisation.
  • Faster and more coherent decisions.
  • More predictable operations.
  • Teams that can focus on improving the service, not on putting out fires.

Outcomes are not measured in managed alerts, but in real and sustained improvements. The value is not in automating tasks, but in improving results against the three objectives we mentioned at the beginning: minimize risks (resilience), reduce costs (efficiency), reduce time in service deployment (strategy).

In this part, of course, AI can help us a lot, but first we had to walk a path. Can you jump ahead? Of course, always. Will you get frustrated? Probably. And we come back to the same point: if we do not know exactly what enters the Data Center, the “in” of the input, we cannot trust anything that comes out.

Automation: the last step, not the first

If there is something I believe we all agree on, it is that automation is necessary in the operation of an environment as critical as a Data Center. However, it is the last step, not the first, because:

Automating without clear inputs is automating chaos. 

Automating unreliable outputs is accelerating error. 

Automating decisions that we do not yet understand is losing control and therefore trust. 

Well-designed automation follows a much more logical path: 

  1. Clear and structured inputs.
  2. Understandable and reliable outputs.
  3. Measured and repeatable outcomes.
  4. Progressive automation.
  5. Autonomy based on trust.

It is not a leap, and there is no rush; it is a gradual process in which technology earns the trust of operations step by step.
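As a toy illustration of the last two steps of that path (the function, the names, and the threshold are all assumptions for the sketch), an action might only move from “recommend” to “execute” once it has accumulated a track record of operator approvals:

```python
# Hedged sketch of progressive automation: the same action is only
# executed autonomously once it has earned enough trust, measured here
# (arbitrarily) as a count of operator-approved recommendations.
def decide(action: str, approvals: int, threshold: int = 20) -> str:
    if approvals >= threshold:
        return f"execute: {action}"  # autonomy based on trust
    return f"recommend: {action} (needs operator approval)"

print(decide("raise CRAC setpoint 1 °C", approvals=5))
print(decide("raise CRAC setpoint 1 °C", approvals=25))
```

The mechanism is trivial on purpose: the point is that autonomy is granted per action, based on evidence, not switched on globally.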

Final reflection

The autonomous Data Center will not arrive because of fashion or market pressure, and it will not arrive by putting AI at the center, far from it. It will arrive when we trust what comes in, understand what goes out, and know how to measure what truly improves.

To give you an example: a couple of weeks ago, our development and product team held a hackathon with the challenge of using AI in the functionalities of binOra, which will be the new star of the Data Center 😎.

Interestingly, the two groups that were formed took different approaches:

  • Chatbot format, for guidance and consultation of information: creating graphs or tables, comparing data, requesting improvements. The person is in command.
  • CTA (call to action) format enhanced by AI: based on our knowledge or on market best practices, the product itself proposes recommendations for improvements, optimized tasks, etc.

It was great work that created a foundation we are now building on. And beyond the “wow” effect it may imply, we are clear about one point: there is no AI if there is no organized data, and there is no organized data if we do not focus on processes.

This is not a BIM or an as-built, which are photographs at a given moment: operating a Data Center constantly generates changes in the data, and new data, and without well-organized processes that data will never be properly collected.

Can we include AI in the product? Of course. Should we lay the foundations first? Absolutely. If we move ahead too soon, we will lose trust.



Your Data Center generates data. The question is whether it generates results.