Working At Full Power: Data Centers In The Era Of AI by Juan Font

Juan Font is President and CEO of CoreSite.

If ever there was a moment in which technology defined the cultural zeitgeist, it’s now. Artificial intelligence (AI) innovation has shaken every industry and is altering the way we work and live in fundamental, profound, and likely irreversible ways. Notably, Bill Gates has asserted that AI is the most important and revolutionary innovation in 30 years, comparing today’s AI tech race to the emergence of graphical user interfaces in the early 1980s, mobile phones, and the internet itself. It’s exciting, disruptive, and a little bit scary.

Of course, along with the increased adoption of AI tools, new challenges have emerged, particularly in how we store, transmit and process data, and in our capacity to do so at scale.

Data Centers Working Overtime

Unsurprisingly, AI applications are very power-intensive. In particular, deep learning models drive up processing requirements for data centers because training and executing AI models relies on substantial computational power. Running these applications demands advanced hardware such as GPUs (graphics processing units, processors originally designed to accelerate graphics rendering and now widely used for highly parallel AI computation) and TPUs (tensor processing units, circuits purpose-built to accelerate AI and machine learning workloads).

Traditional data centers are designed for an average density of five to 10 kilowatts per rack; the advent of AI now requires 60 kilowatts or more per rack. Moreover, AI applications generate far more data than other types of workloads and thus require significantly more data center capacity.
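
To put those densities in perspective, a quick back-of-the-envelope calculation helps. The sketch below uses the 5-10 kilowatt and 60 kilowatt figures cited above; the rack count is a purely illustrative assumption:

```python
# Back-of-the-envelope comparison of facility power at traditional
# vs. AI-era rack densities. Rack count is an illustrative assumption.

TRADITIONAL_KW_PER_RACK = 7.5   # midpoint of the 5-10 kW range cited above
AI_KW_PER_RACK = 60.0           # lower bound for dense AI racks
RACKS = 200                     # hypothetical deployment size

traditional_load_kw = TRADITIONAL_KW_PER_RACK * RACKS
ai_load_kw = AI_KW_PER_RACK * RACKS

print(f"Traditional: {traditional_load_kw:,.0f} kW ({traditional_load_kw / 1000:.1f} MW)")
print(f"AI-era:      {ai_load_kw:,.0f} kW ({ai_load_kw / 1000:.1f} MW)")
print(f"Multiplier:  {ai_load_kw / traditional_load_kw:.1f}x")
```

At these assumed figures, the same footprint draws roughly eight times the power, which is why density, not just square footage, has become the constraint.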

New data centers must be built with far greater power density; that’s one part of enabling AI. Existing data centers are adapting to these changes, increasing their capacities by implementing optimized interconnection, compute and storage solutions, something some legacy facilities and most on-premises data centers would have trouble accomplishing at the scale needed to keep up with the latest tools.

Energy-intensive GPUs and TPUs give off so much heat that enhanced environmental controls, including liquid cooling solutions, can be required. This issue of heat is both a technical consideration and an environmental one.
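
To see why air cooling runs out of headroom, consider a rough estimate of the airflow required to carry 60 kilowatts of heat away from a single rack. This is a minimal sketch using textbook air properties and an assumed inlet-to-outlet temperature rise:

```python
# Rough estimate of the airflow needed to air-cool a dense AI rack.
# Uses Q = m_dot * c_p * dT; air properties are standard textbook values.

RACK_HEAT_W = 60_000.0   # nearly all rack power ends up as heat (W)
AIR_CP = 1005.0          # specific heat of air, J/(kg*K)
AIR_DENSITY = 1.2        # kg/m^3 near room temperature
DELTA_T = 15.0           # assumed inlet-to-outlet temperature rise, K

mass_flow = RACK_HEAT_W / (AIR_CP * DELTA_T)   # kg/s of air required
volume_flow = mass_flow / AIR_DENSITY          # m^3/s
cfm = volume_flow * 2118.88                    # cubic feet per minute

print(f"Required airflow: {volume_flow:.1f} m^3/s (~{cfm:,.0f} CFM) per rack")
```

Pushing roughly 7,000 cubic feet of air per minute through a single rack is impractical at scale; liquid coolants carry orders of magnitude more heat per unit volume, which is why liquid cooling becomes attractive at these densities.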

Increased Benefits Of Colocation

In the past, in the same way that big banks have big vaults, big companies had gigantic data centers that were purpose-built for their operations. The advent of cloud computing (AWS, Google Cloud, Microsoft Azure, etc.) gave enterprises a utility model: these services could be consumed on demand rather than hosted on-premises or purchased as “seats.”

Some industries remain very traditional: Insurance companies, banks and healthcare companies often keep their servers close and own them outright because of data security and privacy constraints. But even these companies have started relying more on SaaS and have grown more comfortable using the cloud, as well as third-party data centers, for an increasing range of services.

When the pandemic took off, companies had to solve for a self-serve digital economy, which rapidly accelerated the migration of workloads from private data centers to the cloud and, more recently, to multi-cloud architectures. This transition has also led to hybrid models, wherein some of a company’s applications reside in its own data center while others run in the public cloud. One role of modern colocation data centers, then, is to provide the conduit between the private and the public clouds.

For companies using many AI tools, colocation data centers present a far more efficient option than the on-site data centers of yore. They provide robust connectivity options and low-latency access to the powerful computing resources on which these applications depend for real-time processing, reducing data transfer time and accelerating time to cloud.
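
As a rough illustration of what reduced data transfer time means in practice, the sketch below compares moving a hypothetical training dataset over a typical internet link versus a high-bandwidth colocation cross-connect. The dataset size and link speeds are assumptions for illustration only:

```python
# Illustrative transfer times for a training dataset at two link speeds.
# Dataset size and bandwidths are assumptions, not measurements.

DATASET_TB = 10                   # hypothetical training dataset size
BITS = DATASET_TB * 8 * 10**12    # terabytes (decimal) -> bits

for label, gbps in [("1 Gbps internet link", 1),
                    ("100 Gbps colocation cross-connect", 100)]:
    seconds = BITS / (gbps * 10**9)
    print(f"{label}: {seconds / 3600:.1f} hours")
```

Under these assumptions, the same dataset moves in about 13 minutes instead of nearly a full day, which is the difference between iterating on a model daily and waiting on the network.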

And let’s not forget scalability. Colocation data centers offer the sheer space, power, cooling capability and infrastructure to allow companies to expand AI usage as their business needs demand. Ultimately, the results for enterprises are increased AI performance, reduced costs, greater sustainability, smaller carbon footprints and greater flexibility on the whole as more of their workloads become AI-driven.

A Look Ahead

In my view, the optimal topology for an enterprise includes having your IT infrastructure adjacent to the cloud, so you have the capability to query that cloud (for storage, for analytics, for AI) at your fingertips, with near real-time latency and minimal data transfer costs. That’s one of the reasons we’re seeing more distributed and hybrid cloud architectures as well; companies can place compute resources closer to their end users while consuming ever more data, driven largely by their dependence on AI.
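
To make “minimal data transfer costs” concrete, here is a hypothetical comparison of monthly cloud egress charges over the public internet versus a private interconnection. The per-gigabyte rates and volumes below are illustrative assumptions, not any provider’s actual pricing:

```python
# Hypothetical monthly egress cost comparison. The per-GB rates and
# volumes are illustrative assumptions, not any provider's pricing.

MONTHLY_EGRESS_GB = 50_000        # assumed data pulled from the cloud
INTERNET_RATE_PER_GB = 0.09       # assumed internet egress rate, USD
PRIVATE_LINK_RATE_PER_GB = 0.02   # assumed discounted private-link rate, USD

internet_cost = MONTHLY_EGRESS_GB * INTERNET_RATE_PER_GB
private_cost = MONTHLY_EGRESS_GB * PRIVATE_LINK_RATE_PER_GB

print(f"Internet egress: ${internet_cost:,.0f}/month")
print(f"Private link:    ${private_cost:,.0f}/month")
print(f"Savings:         ${internet_cost - private_cost:,.0f}/month")
```

Whatever the actual rates, the structure of the savings is the same: the more data an AI-heavy workload pulls from the cloud, the more an adjacent, privately interconnected footprint pays for itself.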

Fundamentally, however, companies should look at their data center infrastructure with an eye toward future-proofing and preparedness. We are entering a business world in which ever more processes must, by necessity, be AI-supported, data-driven and operating as efficiently as possible in periods of economic uncertainty and pervasive climate change.

Ultimately, every business is, at this point, in the process of hybridization, existing on a continuum between completely private cloud and completely public cloud—and enterprise leaders must ensure that their data and cloud infrastructure transforms with the times, placing them wherever they need to be within that continuum, lest they fall behind.
