$500 billion is the budget that NASA estimates would enable humans to complete a Mars landing. It could buy 1.36 Alibabas (with a market value of $367 billion), 3.5 NBA leagues (valued at $140 billion), fund the construction of 100 Apple Parks (costing $5 billion each), or purchase 140 billion cups of coffee (at $3.5 per cup). However, this amount is only enough for OpenAI to build one Stargate data center.

But this might just be the beginning. Industry insiders believe that OpenAI’s ambition could even be ten times this figure. Tech giants like xAI and Meta are also frantically investing in AI data centers, triggering a global wave of infrastructure construction and betting on a new trillion-dollar market. Yet, behind this frenzy, we can’t help but wonder: Where is all this money going?

In this article, we’ll delve into the capital expenditures behind AI data centers. What components make up a data center? Who are the major upstream and downstream companies and players? How exactly is the money being spent? Interestingly, after combing through various reports, we found that the budget estimates vary widely. Who is right? Moreover, some data centers are even being “forced” into space. Why is that? And at a time when AI is being questioned as a bubble, why is capital still pouring in aggressively?

01 Understanding the Trillion-Dollar Investment: Where Does the Money for Data Centers Flow?

Let’s first look at the cost analysis of next-generation AI data centers by Bank of America on October 15th this year.

We mainly divide data center expenditures into four major categories: IT equipment, power supply equipment, cooling equipment, and engineering construction. For ease of comparison, we’ll unify the calculation unit to expenditure per GW.

IT Equipment

First, there’s IT equipment directly related to computing, which is divided into three parts: servers, networking, and storage. The bulk of the cost lies in servers, with an estimated $37.5 billion needed per GW.

Servers contain important components such as CPUs, GPUs, memory, and motherboards. They are usually directly supplied by ODMs (Original Design Manufacturers), such as Foxconn Industrial Internet. These ODMs obtain server design standards from chip design companies like Nvidia and AMD and manufacture complete machines, directly supplying them to hyperscale customers like Oracle, Meta, and Amazon.

ODMs account for 46% of the server market share. For small and medium-sized enterprises that need to purchase servers, they have to turn to OEMs (Original Equipment Manufacturers) like Dell, Super Micro, and HP.

In terms of networking, $3.75 billion worth of networking equipment is needed per GW. Major players include Arista, Cisco, Huawei, Nvidia, and others.

It’s worth mentioning that although Nvidia holds only 5% of the market share in this area, some in the industry believe that its InfiniBand (a network communication standard), while more expensive, is better suited to AI data centers thanks to its low latency and lack of packet-loss risk.

Finally, for storage, that is, hard drives, $1.9 billion worth of storage equipment is required per GW. Major players include Samsung, SK Hynix, Micron, Seagate, and others. Adding up these three items, we get a total IT equipment expenditure of $43.15 billion per GW. This is the largest portion of data center spending.
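The per-GW IT arithmetic above can be sanity-checked in a few lines (the figures are the Bank of America estimates quoted in this section; the variable names are illustrative):

```python
# Bank of America per-GW IT equipment estimates, in billions of USD
servers = 37.5      # complete machines from ODMs (CPUs, GPUs, memory, motherboards)
networking = 3.75   # switches and interconnect (Arista, Cisco, Nvidia, etc.)
storage = 1.9       # drives (Samsung, SK Hynix, Micron, Seagate, etc.)

it_total = servers + networking + storage
print(f"IT equipment per GW: ${it_total:.2f}B")  # → IT equipment per GW: $43.15B
```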

Cooling System

In 2018, a data center in Atlanta suffered a cyberattack, causing the shutdown of multiple urban service agencies including courts, police stations, and airports. In addition to locking up data with ransomware, the attackers also infiltrated the cooling system.

After the cooling system was compromised, the ambient temperature soared to over 100 degrees Fahrenheit (about 37.8 degrees Celsius), damaging many chips at once. The hackers even took control of the servers and the cooling system as “hostages,” demanding a ransom of $51,000 in Bitcoin.

Since then, methods of attacking cooling systems have become increasingly common and diverse. This story illustrates the importance of a cooling system for a data center, although its construction budget only accounts for 3% of the total cost.

With the exponential increase in global AI computing power demand, traditional air cooling struggles to meet the heat-dissipation needs of high-density computing equipment. Moreover, for Nvidia’s GPUs, heat dissipation has itself become a core bottleneck on computing power. For data centers, liquid cooling has therefore evolved from an alternative cooling solution into a necessity.

For data centers equipped with liquid cooling systems, the cooling equipment mainly includes cooling towers, chillers, CDUs (Coolant Distribution Units), and CRAHs (Computer Room Air Handlers). To handle the heat dissipation of 1 GW, these require expenditures of $90 million, $360 million, $450 million, and $575 million respectively, totaling $1.475 billion.
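Under the component figures above, the cooling total works out as follows (a minimal sketch of the same sum; the dictionary keys are illustrative labels):

```python
# Per-GW liquid-cooling equipment costs, in millions of USD (BofA estimates)
cooling_costs = {
    "cooling towers": 90,
    "chillers": 360,
    "CDUs": 450,    # coolant distribution units
    "CRAHs": 575,   # computer room air handlers
}
total = sum(cooling_costs.values())
print(f"Cooling per GW: ${total / 1000:.3f}B")  # → Cooling per GW: $1.475B
```

Note that this entire category is roughly 3% of the IT equipment bill, which is why cooling is such a small slice of the overall budget despite its operational importance.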

Since major suppliers are scattered across various links and there are numerous players, we won’t list them one by one. However, Vertiv, Johnson Controls, Stulz, and Schneider are all major players in this field.

Power Supply Equipment

Now let’s look at the power part, which is the core infrastructure. Power supply equipment is mainly divided into backup diesel generators for emergency power supply, switchgear responsible for power distribution control, UPS (Uninterruptible Power Supply) to ensure uninterrupted power, busbars for distributing power to each cabinet, and other power distribution equipment.

Bank of America believes that the cost of a typical diesel generator per MW ranges from $400,000 to $550,000. The cost of fuel tanks, fuel pumps, and installation adds up to about $350,000 to $500,000. Therefore, the all-in cost of a generator per MW is approximately $800,000. To provide 1 GW of power, $800 million worth of emergency generators are needed.
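Scaling the per-MW generator figure up to a 1 GW campus is a straight multiplication, sketched below (the rounded $800,000 all-in figure is the BofA estimate cited above):

```python
# Backup diesel generation, BofA estimates in USD
genset_range = (400_000, 550_000)     # generator unit, per MW
install_range = (350_000, 500_000)    # fuel tanks, pumps, installation, per MW

per_mw_cost = 800_000                 # rounded all-in cost per MW
mw_per_gw = 1_000
total = per_mw_cost * mw_per_gw
print(f"Emergency generators per GW: ${total / 1e6:.0f} million")  # → $800 million
```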