Artificial intelligence has become a major part of everyday life, and data centers are no exception. AI adoption is accelerating faster than any previous computing wave, and enterprises are upgrading their infrastructure to keep pace. As a result, data center decommissioning is entering a new era, one that introduces new challenges and risks. The growth of AI hardware is reshaping decommissioning processes and the compliance requirements that go with them.
The Rise of AI Hardware in Data Centers
The expansion of AI in data centers brings a corresponding rise in AI hardware. Architectures are shifting from CPU-centric to GPU- and accelerator-centric designs as demand for NVIDIA GPUs and custom accelerators grows. Hardware lifecycles are also getting shorter: AI infrastructure refresh cycles have dropped from 5 to 7 years to roughly 18 to 36 months, which means more frequent decommissions. High-density cooling, including liquid cooling, immersion cooling, and hybrid systems, is increasingly replacing traditional air cooling. Together, these changes are driving a sharp increase in data center decommissioning activity.
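To put the shorter lifecycle in rough terms: a facility that once refreshed hardware every six years and now cycles GPU systems roughly every two years will see about three times as much equipment leave the floor over the same period, all of which must be securely decommissioned.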
How AI is Changing Data Center Infrastructure Requirements
Artificial intelligence is pushing data centers beyond their traditional architectures, forcing businesses to rethink power, cooling, and operational design. As enterprises integrate AI, their infrastructure must change to support it. Here is a deeper look at those infrastructure changes and what they mean for enterprises.
Increases in Power Requirements
Traditionally, enterprises consumed about 5 to 10 kW per rack, but AI compute has pushed this to 20 to 60 kW for GPU racks and 80 to 100 kW in hyperscale environments. Legacy power distribution often cannot support these loads, and many older data centers will require electrical overhauls. Facilities operating under power caps may be forced to retire or relocate outdated hardware to free up capacity, and whole facilities could be decommissioned because they cannot meet the electrical demands of AI hardware.
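A quick illustration using the figures above: a row of ten racks that once drew at most 100 kW at 10 kW per rack could draw 600 kW at 60 kW per rack, roughly a sixfold increase before cooling overhead is even counted.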
The Changeover to Liquid Cooling
Air cooling can no longer keep up with the heat that AI hardware puts out, which has made liquid cooling essential. Emerging cooling methods include direct-to-chip liquid cooling, which uses cold plates; immersion cooling, in which hardware is fully submerged in liquid; rear-door heat exchangers for hybrid cooling; and chilled-water loops. Accommodating these systems requires facility changes such as floor-level plumbing, leak detection systems, coolant distribution units (CDUs), and reinforced subfloors. Spatial design must change as well, because AI compute clusters are far heavier and more complex than the equipment they replace, and many facilities are consolidating to accommodate them. As a result, older HVAC systems are being decommissioned as they become obsolete.
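The cooling challenge follows directly from the power figures above: nearly every kilowatt a rack draws must eventually be removed as heat, so a 60 kW GPU rack needs roughly six times the heat rejection of a traditional 10 kW rack, far more than room-level air handling was designed to move.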
Increased Need for High-Bandwidth Networking
Artificial intelligence workloads move far more data than facilities have ever had to handle before. It has become common for facilities to upgrade to 400G or 800G networks to provide the bandwidth AI workloads require. They are also replacing copper with high-speed fiber and deploying new networking equipment that often needs its own cooling upgrades. Legacy switches, cables, and routers simply cannot keep up with AI traffic, so they must be decommissioned and replaced with equipment that can.
Upgraded Backup Power
Backup power is essential for GPU racks, which require uninterrupted power to prevent heat spikes and data loss that could be damaging to the facility. Stronger UPS systems, larger battery banks, and larger-scale generators are needed so that a power interruption never reaches the racks, and so that if one does occur, the equipment can keep running as normally as possible.
Changes in the Facility Layouts
Artificial intelligence also calls for new facility layouts to house the additional equipment described above, which means dedicated space must be planned for it. Layout changes we could see include dedicated zones within data centers for AI-specific machinery, cooling loops, airflow isolation around AI clusters, and even physical security for AI models.
Sustainability and ESG Pressure Due to AI
As AI becomes mainstream in data centers, there is growing pressure to ensure that every change is energy-efficient and sustainable. Because artificial intelligence consumes substantial energy and creates new environmental impacts, facilities can expect increased pressure to reuse and recycle hardware, to comply with environmental reporting requirements, and to meet higher expectations for the disposal of e-waste and liquid coolant. Sustainability has long been a focus for data centers, but the energy appetite of AI raises the stakes, making it essential to adopt as many measures as possible to stay sustainable and energy-efficient.
Artificial Intelligence Integration in Data Centers
Artificial intelligence is transforming how data center infrastructure is designed, operated, and decommissioned. Power, cooling, space, networking, and sustainability are all heavily affected, and each is undergoing significant change as a result. With this comes a massive wave of data center decommissions as facilities make room for AI and everything that comes with it. If your data center is adopting artificial intelligence, it's crucial to understand how far-reaching these changes will be, including for decommissioning. We can expect them to continue as AI becomes an ever more prominent part of daily life. Contact us today to learn more!