Globally, data center energy consumption in 2022 was 240-340 terawatt hours (TWh), around 1-1.3% of total electricity demand (IEA, 2025). This excludes networking, estimated at around 260-360 TWh in 2022, and cryptocurrency, estimated at 100-150 TWh in 2022 (IEA, 2025).
In the US, data center energy consumption in 2023 was 176 TWh, about 4.4% of US electricity consumption. Consumption was stable at around 60 TWh between 2014 and 2016, but started to increase from around 2017 (Shehabi et al., 2024).
| Region | Year | Value | Source |
|---|---|---|---|
| Global | 2010 | 193 TWh | Masanet et al., 2020 |
| Global | 2018 | 205 TWh | Masanet et al., 2020 |
| Global | 2021 | 350-500 TWh | Hintemann & Hinterholzer, 2021 |
| Global | 2022 | 240-340 TWh | IEA, 2025 |
| USA | 2018 | 76 TWh | Shehabi et al., 2024 |
| USA | 2023 | 176 TWh | Shehabi et al., 2024 |
The challenge with estimating data center energy consumption is the uncertainty in the figures. Precise data is not available, so figures are based on estimates using various methodologies.
Although you can find many reports about data center energy, there are only two research groups that produce reliable estimates. In a review I co-authored (Mytton and Ashtine, 2022), we analyzed 258 data center energy estimates from 46 original publications between 2007 and 2021 to assess their reliability by examining the 676 sources used. This revealed that the only credible global models are from the two research groups represented by Masanet and Hintemann & Hinterholzer (Borderstep).

The variance in estimates is a major challenge for anyone trying to get to the bottom of how much energy data centers use. In the review we show that 31% of sources were from peer-reviewed publications, 38% were from non-peer-reviewed reports, and many lacked clear methodologies and data provenance. We also highlight issues with source availability—there is a reliance on private data from IDC (43%) and Cisco (30%), 11% of sources had broken web links, and 10% were cited with insufficient detail to locate (Mytton and Ashtine, 2022).
In this article, I’ll look at what a data center is, the major components that make up a data center, their energy consumption, and the relevance of cloud computing and energy efficiency improvements.
What is a data center?
In the same way applications run on your laptop, accessing anything on the internet also requires those applications to run on a computer. These computers are called servers. They are just like a laptop but do not have a screen or keyboard and must be located somewhere where they have access to the internet, power, and cooling. Such places are called data centers.
Data centers can range in size from small 100 ft² cabinets up to massive 400,000 ft² “hyperscale” warehouses (Shehabi et al., 2016). Whenever you use any service on the internet, you are connecting to one of many millions of servers located in one of many thousands of data centers around the world.
The average data center demands around 5-10 megawatts (MW), but these large hyperscale facilities can be greater than 100 MW (IEA, 2025). These tend to be owned by the three large cloud providers – Amazon, Google, and Microsoft. Their scale allows for very good efficiency, but brings other challenges.
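As a quick sense of scale, the back-of-envelope sketch below converts those power demands into annual energy; continuous operation at the stated demand is my simplifying assumption.

```python
# Annual energy implied by a constant power demand (MW figures from the text).
HOURS_PER_YEAR = 8760

def annual_energy_gwh(demand_mw: float) -> float:
    """Convert a constant power demand in MW to annual energy in GWh."""
    return demand_mw * HOURS_PER_YEAR / 1000

print(annual_energy_gwh(10))   # average facility (~10 MW)  -> 87.6 GWh/year
print(annual_energy_gwh(100))  # hyperscale (>100 MW)       -> 876.0 GWh/year
```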
Servers
Industry data shows 6.5 million servers were shipped in 2022 with forecasts suggesting 7.7 million will be shipped in 2028 (Shehabi et al. 2024). The total installed base was 14 million in 2014 and 21 million in 2020. Of the total stock in 2020, 1.6 million were estimated to be related to AI (Shehabi et al. 2024).
Power drawn is partially related to usage, expressed as a percentage of maximum power. Maximum power is related to the number of Central Processing Unit (CPU) sockets and was held static in the models from 2007: 118 W for single-socket servers and 365 W for two-socket servers (Shehabi et al., 2016; Shehabi et al., 2018).
This has been revised for 2023, where dual-processor servers are now modelled at 600 W, based on a database of server power consumption from The Green Grid. The average server in the SPEC database in 2023-2024 was around 750 W (Shehabi et al., 2024).
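As an order-of-magnitude check on the server portion alone (before any infrastructure overhead), the sketch below multiplies the 2020 installed base by the long-standing 365 W two-socket figure; a constant average draw is my simplifying assumption, so treat the output as indicative only.

```python
# Fleet-level server energy: installed base (21 million in 2020, from the
# text) at an assumed constant average draw of 365 W (the modelled two-socket
# maximum). Real averages depend on utilisation and server mix.
HOURS_PER_YEAR = 8760

def fleet_energy_twh(servers: int, avg_watts: float) -> float:
    """Annual server energy in TWh for a fleet at a constant average draw."""
    return servers * avg_watts * HOURS_PER_YEAR / 1e12

print(fleet_energy_twh(21_000_000, 365))  # ~67 TWh/year for servers alone
```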

Power proportionality is key to understanding the efficiency of servers: power draw should scale in proportion to utilisation. With perfect power proportionality, a server at 10% utilisation will draw 10% of its maximum power (Shehabi et al., 2016). This is measured as dynamic range, the ratio between idle power and maximum power, which can be affected by hardware properties, power management software, and the server configuration (Shehabi et al., 2016).

Server dynamic range has improved: in 2014, conventional servers idled at 51% of maximum power, and by 2023 this had fallen to 36% (Shehabi et al., 2024). These gains have been coupled with improvements in utilisation, driven by software management systems and the move to hyperscale facilities run by the cloud providers.
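As a minimal sketch of what power proportionality means in practice, the snippet below applies a linear power model to a hypothetical 600 W server using the 2014 and 2023 idle fractions above; real servers are not perfectly linear, so treat it as illustrative only.

```python
# Linear power model: idle draw plus a utilisation-proportional share of the
# remaining headroom. Idle fractions (51% in 2014, 36% in 2023) are from the
# text; the 600 W maximum is the modelled dual-processor figure from above.
def server_power(utilisation: float, idle_w: float, max_w: float) -> float:
    """Power draw under a simple linear proportionality model."""
    return idle_w + utilisation * (max_w - idle_w)

MAX_W = 600
for year, idle_fraction in [("2014", 0.51), ("2023", 0.36)]:
    idle_w = idle_fraction * MAX_W
    print(year, server_power(0.10, idle_w, MAX_W), "W at 10% utilisation")
# A perfectly proportional server (idle_w = 0) would draw only 60 W at 10%.
```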
Storage
The amount of data generated by humanity is growing every year (Statista, 2018), and it needs to be stored on disks. Power drawn per disk varies by drive type.
- Hard Disk Drive (HDD) wattage is not related to capacity. It was estimated at 14 W/disk in 2006 and 8.6 W/disk in 2015 (Shehabi et al., 2016), then 7 W/disk in 2020 and 6.4 W/disk in 2025 (Shehabi et al., 2024).
- Solid State Drive (SSD) wattage remained a constant 6 W/disk between 2010 and 2015. As storage capacity has increased, wattage per terabyte (TB) has decreased, with capacity per watt increasing 3-4x between 2010 and 2020 (Shehabi et al., 2016). The latest estimate is 11 W/disk for 2025 (Shehabi et al., 2024); a per-terabyte comparison is sketched below.
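The sketch below converts the 2025 per-disk figures into watts per terabyte. The drive capacities are hypothetical, chosen only to illustrate that W/TB depends on drive capacity as much as on W/disk.

```python
# Watts per terabyte from the 2025 per-disk figures in the text
# (Shehabi et al., 2024). Capacities are assumed for illustration.
drives = {
    "HDD": {"watts_per_disk": 6.4, "capacity_tb": 20},   # capacity assumed
    "SSD": {"watts_per_disk": 11.0, "capacity_tb": 30},  # capacity assumed
}

for name, d in drives.items():
    watts_per_tb = d["watts_per_disk"] / d["capacity_tb"]
    print(f"{name}: {watts_per_tb:.2f} W/TB")
```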

Network
Servers need to be connected to each other, and to the internet; this is the network component.
Network device power draw is related to the number of ports and their speed (Shehabi et al., 2016; Shehabi et al., 2024). This means that network energy consumption is not proportional to the amount of data transmitted (Mytton et al., 2024).

It is a common error to use historical energy intensity figures (kWh per GB) to project total energy consumption into the future. This has resulted in wildly varying estimates for the energy intensity of the internet (Aslan et al., 2018). Intensity figures can only be derived when you have the total energy consumption and total data transfer for a particular network over a specific period; because efficiency improves and traffic grows, they cannot be used to make projections into the future.
The latest estimate for global network energy consumption is 260-360 TWh for 2022 (IEA, 2025).
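To illustrate the extrapolation error, the toy calculation below holds network energy roughly flat at 300 TWh (the middle of the IEA range above) while traffic grows; the traffic figures are purely illustrative. Intensity falls as traffic grows, so multiplying future traffic by today's intensity badly overstates future consumption.

```python
# Toy numbers: roughly flat network energy, growing traffic. Intensity
# (kWh/GB) falls over time, so a fixed-intensity projection overshoots.
energy_twh = 300                                    # roughly flat (illustrative)
traffic_eb = {2022: 1000, 2025: 2000, 2028: 4000}   # exabytes (illustrative)

for year, eb in traffic_eb.items():
    intensity_kwh_per_gb = (energy_twh * 1e9) / (eb * 1e9)  # TWh->kWh, EB->GB
    print(year, round(intensity_kwh_per_gb, 3), "kWh/GB")

# Naive projection: 2022 intensity applied to 2028 traffic.
naive_2028_twh = 0.3 * (4000 * 1e9) / 1e9
print("naive 2028 estimate:", naive_2028_twh, "TWh vs a roughly flat ~300 TWh")
```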
Infrastructure
The data center building consists of infrastructure to support the servers, disks and networking equipment it contains. This includes cooling, power distribution, backup batteries and generators, lighting, fire suppression, and the building materials themselves (GHG Protocol, 2017). The infrastructure overhead is measured using Power Usage Effectiveness (PUE) (Uptime Institute, 2019): the ratio of total facility power to the power delivered to the servers, disks and networking equipment (GHG Protocol, 2017).
A PUE of 1.0 means 100% of data center power inputs go to the IT equipment. The industry average PUE is 1.5 (Uptime Institute, 2023), but it ranges from best in class at 1.08 for Google's US data centers (Google, 2023) to 1.7 in the Middle East, Africa, and Latin American regions (Uptime Institute, 2023).
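A minimal sketch of what those ratios imply: for a hypothetical IT load, the infrastructure overhead is simply the PUE multiplier applied to the IT energy.

```python
# Infrastructure overhead implied by PUE. The IT load is hypothetical;
# the PUE values are the ones quoted in the text.
def total_facility_mwh(it_mwh: float, pue: float) -> float:
    """Total facility energy implied by an IT load and a PUE ratio."""
    return it_mwh * pue

it_load_mwh = 10_000  # hypothetical annual IT energy
for label, pue in [("best in class", 1.08),
                   ("industry average", 1.5),
                   ("Middle East/Africa/LatAm", 1.7)]:
    overhead = total_facility_mwh(it_load_mwh, pue) - it_load_mwh
    print(f"{label}: {overhead:,.0f} MWh of infrastructure overhead")
```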

Using the PUE ratio alone has been criticised because it decreases when IT load increases (Brady et al, 2013), even though efficiency may not have improved. This makes it more useful for comparing facilities than as an efficiency indicator by itself. Although out of scope for this briefing, data centers have environmental impacts wider than just energy: water used for cooling and the life cycle of IT equipment are other important factors not included in PUE (GHG Protocol, 2017). Metrics such as Water Use Effectiveness and Land Use Effectiveness, alongside Life Cycle Analysis, have been suggested to help understand the true impacts (Kass and Ravagni, 2019).
In a traditional data center, the fuel-to-server efficiency is only 17.5%. This is due to the generally low efficiency of generating electricity from fossil fuels (which still make up the majority of the energy mix in most power grids), combined with grid losses and losses in the power distribution systems within the data center (Zhao et al, 2014). Fuel cells have been investigated as a method of eliminating some of these losses and could increase efficiency to 29.5% (Zhao et al, 2014).

With further modifications to data center design to use Direct Current (DC) from the fuel cell and bypassing the Uninterruptible Power Supply (used for backup power but not needed with gas reliability at 99.999%), 53.2% efficiency could be achieved (Zhao et al, 2014).
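The sketch below shows why these end-to-end figures end up so low: the overall efficiency is the product of every stage between fuel and server. The individual stage efficiencies here are illustrative placeholders, not the values from Zhao et al. (2014).

```python
# End-to-end efficiency as a product of stage efficiencies. All values are
# illustrative placeholders to show how modest per-stage losses compound.
stages = {
    "generation": 0.38,          # fossil power plant (assumed)
    "transmission": 0.93,        # grid losses (assumed)
    "ups": 0.90,                 # uninterruptible power supply (assumed)
    "power_distribution": 0.95,  # in-building distribution (assumed)
    "server_psu": 0.85,          # server power supply (assumed)
}

efficiency = 1.0
for stage_efficiency in stages.values():
    efficiency *= stage_efficiency
print(f"end-to-end efficiency: {efficiency:.1%}")  # ~26% with these values
```

Removing or improving a stage raises the product directly, which is the intuition behind the fuel cell and UPS-bypass designs above.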


Cloud computing
Physical IT equipment can be measured to determine the environmental impact in embodied energy as well as power drawn during actual usage. This can then be combined with calculations from the data center components to calculate emissions. Indeed, these calculations are part of the Greenhouse Gas Protocol (GHG Protocol, 2017) and standards exist for constructing energy efficient data centers (Huusko et al, 2012). These types of emissions fall under the Scope 1 and Scope 2 reporting guidelines (GHG Protocol, 2015) that many organisations are required to publish (Department for Business, Energy & Industrial Strategy, 2018).
However, when IT workloads are moved to the cloud and resources are purchased in tiny “virtual” units on a pay-as-you-go basis, their associated emissions shift to voluntary Scope 3 reporting as “indirect” or outsourced emissions (Mytton, 2020a). This means that cloud customers must rely on the cloud provider to release enough data to calculate their emissions. The major cloud vendors (Amazon Web Services, Google Cloud and Microsoft Azure) publish aggregated global data, with varying degrees of transparency (Mytton, 2020b).
Amazon is the least transparent: it reports limited environmental data beyond its overall carbon footprint of 51 MMT CO2e in 2019 and 68 MMT CO2e in 2023 (Amazon, 2024). Since this covers all of Amazon’s operations and is not broken out for the Amazon Web Services cloud business, it is not a useful figure. This lack of transparency resulted in Amazon being criticised in a Greenpeace report (Cook, Jardim and Craighill, 2019).
Internal data centers were most common in 2010, with hyperscale and colocation providers making up less than 10% of all deployments. As of 2023, 74% of all servers are now in colocation or hyperscale facilities (Shehabi et al. 2024). This matters because of the efficiency improvements available at larger scale, which means lower overall energy consumption.
As of 2022, all three cloud operators give their customers cloud carbon calculators, although Microsoft’s is only available to customers with an Enterprise Azure contract. How these figures are calculated is a separate question. Google and Microsoft provide details about their methodology, but AWS does not. AWS also uses market-based GHG Protocol calculations, whereas Google uses location-based reporting. Location-based reporting is more useful because it reflects the grid mix where the electricity is actually consumed, e.g. renewables on the same grid as the data center, rather than contractual purchases that may sit elsewhere. It is generally accepted practice to report both, but location-based reporting is important for encouraging demand for more clean energy where it is actually consumed.
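A minimal sketch of the difference between the two accounting approaches, using a hypothetical workload and emission factors: location-based applies the local grid average, while market-based applies the factor implied by contractual renewable purchases.

```python
# Location- vs market-based accounting for the same cloud workload.
# All figures are hypothetical and for illustration only.
consumption_kwh = 100_000              # annual usage in one region (assumed)
grid_factor_kg_per_kwh = 0.45          # local grid average (assumed)
contractual_factor_kg_per_kwh = 0.05   # after renewable purchases (assumed)

location_based_t = consumption_kwh * grid_factor_kg_per_kwh / 1000
market_based_t = consumption_kwh * contractual_factor_kg_per_kwh / 1000
print(f"location-based: {location_based_t:.1f} tCO2e")  # 45.0 tCO2e
print(f"market-based:   {market_based_t:.1f} tCO2e")    # 5.0 tCO2e
```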

Hyperscale providers operate at such a scale that they can justify activities such as Google building their own servers (Metz, 2016; GCP, 2017) and Microsoft constructing the first gas-powered data center (Belady & James, 2017), all of which contribute to improving energy efficiency. The technology sector is also the largest purchaser of renewables (Kamiya and Kvarnström, 2019) (but what does 100% renewable actually mean?).
AI challenges ahead?
Although the last 20 years have seen major efficiency improvements, AI becoming a meaningful proportion of energy demand poses new challenges.
We’ve seen this story before. A peer-reviewed article from 2015 claimed that by 2020 data centers would consume 1,200 TWh of energy. That obviously didn’t happen.
AI energy consumption is something to pay attention to, but it’s important to keep an eye on the details. There will be lots of crazy numbers floating around that sound terrible but make no sense when you look into them. The most common mistake is extrapolating from historical figures, because AI technology is advancing so quickly.
I wrote about this a year ago and highlighted several key variables that make AI energy consumption difficult to predict: new models with fewer parameters, more energy efficient models, different data center hardware, and different client hardware. We can see this happening with the significantly lower cost of training the DeepSeek model compared with what has previously been possible from OpenAI and others:
> During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs. Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours. Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M.
That is not the fully loaded cost, but considering how much it cost to train the current leading models, this is indicative of the direction things are taking.
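Checking the quoted arithmetic, and turning GPU hours into a rough energy figure, helps put it in context. The ~700 W per-GPU draw below is my assumption (not from the paper) and excludes cooling and other facility overheads.

```python
# Figures from the quoted DeepSeek-V3 paper, plus one assumption (GPU power).
gpu_hours = 2_788_000        # total training GPU hours (from the quote)
rental_rate = 2.0            # $ per GPU hour (from the quote)
assumed_gpu_watts = 700      # per-GPU draw, my assumption, not from the source

print(f"training cost: ${gpu_hours * rental_rate:,.0f}")   # $5,576,000
energy_gwh = gpu_hours * assumed_gpu_watts / 1e9
print(f"GPU energy: ~{energy_gwh:.2f} GWh")                 # ~1.95 GWh
```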
Despite improving AI energy efficiency, total energy consumption is likely to increase because of the massive increase in usage. A large portion of the recent increase in energy consumption is attributed to AI-related servers: their usage grew from 2 TWh in 2017 to 40 TWh in 2023.

This is a big driver behind the projected scenarios for total US data center energy consumption, which range from 325 to 580 TWh (6.7% to 12% of total US electricity consumption) by 2028.
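Those two numbers are internally consistent; backing out the implied total US electricity demand from each end of the range is a quick sanity check.

```python
# Implied total US electricity demand from the 2028 projections in the text.
for dc_twh, share in [(325, 0.067), (580, 0.12)]:
    print(f"{dc_twh} TWh at {share:.1%} implies ~{dc_twh / share:,.0f} TWh total")
# Both ends imply total US electricity consumption of roughly 4,800-4,850 TWh.
```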

With network energy, storage, and data center infrastructure energy consumption remaining flat or improving, the focus for improvements has to be on the servers and GPUs driving the majority of this increase. That is where the opportunity for efficiency is.

