
Data center infrastructure is rapidly evolving to meet the demands of large-scale, high-density AI workloads. To support these requirements, the industry is actively exploring technologies such as high-voltage DC power distribution for improved energy efficiency, solid-state transformers (SSTs) for enhanced power conversion and flexibility, and advanced liquid-cooling systems to manage the increased heat generated by AI hardware. In a study conducted by 451 Research from S&P Global Energy Horizons, we seek to understand the key dynamics driving the data center market.
The survey, conducted with a panel of IT decision-makers, focused on the adoption of new technologies, including ±400V or 800V DC power distribution, SSTs, energy storage and liquid cooling, to address AI workload challenges.
The Take
The evolution of data center infrastructure is being shaped by the rapid growth of AI workloads, pushing operators to adopt advanced technologies. The industry shows strong momentum toward the adoption of high-voltage DC power distribution and solid-state transformers (SSTs), with nearly 95% of organizations expressing a high likelihood of adopting these innovations once they become widely available. This enthusiasm persists despite notable challenges, including high deployment costs, system complexity and internal skills shortages. In response to the surge in GPU workloads, operators are implementing a variety of energy storage solutions at multiple points along the power chain. To address the demands of high-density environments, a diverse mix of cooling technologies remains in use, with many organizations adopting a pragmatic combination of liquid and air cooling — particularly in direct-to-chip installations — to optimize heat management.
Summary of findings
A significant majority of organizations are enabling AI workloads within their data centers. Regarding their readiness to support AI workloads, 63% of respondents indicate that their data center infrastructure already supports AI applications. Additionally, 19% plan to implement support within the next 12 months and 7% expect to do so within two years. Only 12% of respondents report having no plans to support AI workloads over the next two years.
High-voltage DC power systems, such as ±400V or 800V DC, are expected to see widespread adoption in AI data centers over the next three years. According to survey responses, 46% of participants expect these high-voltage DC solutions to be widely available in AI data centers within one year, 37% within two years and 12% within three years. Asked how likely their own organizations would be to adopt high-voltage DC if it becomes widely available, 64% of respondents say they are very likely to do so and 32% say they are somewhat likely, with 96% in total expressing a strong inclination toward adoption.
The primary obstacles to adopting high-voltage DC in data centers include the high cost of deploying DC components, cited by 60% of respondents. Other significant challenges are the complexity of fault protection mechanisms, such as managing arc flash risks (45%), a lack of internal expertise to support ongoing operation and maintenance (43%) and limited compatibility with existing servers and components (41%).
SSTs are also expected to achieve widespread adoption in AI data centers over the next three years. Survey results show that 46% of participants anticipate SSTs will be widely available in AI data centers within one year, 37% within two years and 12% within three years. Asked how likely their own organizations would be to adopt SSTs if they become widely available, 60% of respondents say they are very likely to do so and 35% say they are somewhat likely, bringing the total to 95% and indicating a strong inclination toward adoption.
The main obstacles to deploying SSTs in data centers include the high cost of SST components (63%), complexity in system design, such as higher-voltage integration and fault isolation (50%), an internal skills gap for supporting operation and maintenance (49%) and a lack of organizational buy-in for deployment (34%).
To manage the surge in GPU workloads, operators are implementing a range of solutions across various points in the power chain. Regarding the energy storage technologies used to address these power surges, 62% of respondents report deploying hybrid systems, such as combinations of supercapacitors with batteries or flywheels. Additionally, 52% use battery energy storage systems, 44% have dynamic voltage restorers or active power conditioners, and 37% use supercapacitors or ultracapacitors. Regarding the placement of energy storage solutions within the power distribution chain, 54% of respondents install them at the power supply unit, 44% at the medium-voltage level, 41% at the server level and 37% at the UPS input.
Operators managing AI workloads are leveraging a diverse range of cooling technologies to address heat removal challenges. Among these respondents, 49% use two-phase immersion cooling, 44% rely on air cooling, 42% employ two-phase direct-to-chip solutions, 36% use single-phase direct-to-chip cooling and 33% have adopted single-phase immersion cooling. Among those with direct-to-chip installations, 90% still combine them with air cooling.

