The AI Elephant in the Room

2023 was the year of AI. The launch of ChatGPT was both highly publicized and highly accessible, giving the general public a clear way to understand the utility of AI applications in everyday life. This precipitated a broader conversation about where AI applications can add value to sustainability. The intersection of AI and sustainability has been of interest to our practice since 2020, when we released an insight exploring the topic, authored by strategic advisor Karuna Ramakrishnan. Subsequent insights published in 2023 presented 25+ AI tools for sustainability professionals; AI applications in building & construction; and a primer on the role of AI in optimizing satellite-based sustainability data.

Amid these benefits to sustainability, there is a glaring elephant in the room: generative AI is both energy and resource intensive, consuming large amounts of electricity as well as water to cool data centers. A 2023 paper by Dutch analyst Alex de Vries used hardware and software specifications published by Nvidia for its graphics processing units (GPUs) to estimate AI-associated global energy usage. His finding: by 2027, global AI could consume as much energy annually as his home country, the Netherlands.
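De Vries’s estimate rests on simple arithmetic: multiply the number of AI servers in operation by their power draw, their utilization, and the hours in a year. A minimal sketch of that calculation, using purely illustrative numbers (not figures from the paper):

```python
# Back-of-envelope estimate of annual AI electricity use, in the spirit of
# de Vries's approach (units shipped x power draw x utilization x hours).
# All numbers below are illustrative assumptions, not figures from the paper.

def annual_energy_twh(units, power_kw_per_unit, utilization, hours_per_year=8760):
    """Annual electricity use in TWh for a fleet of AI servers."""
    return units * power_kw_per_unit * utilization * hours_per_year / 1e9  # kWh -> TWh

# Hypothetical fleet: 1.5 million servers at 6.5 kW each, 80% utilized.
print(round(annual_energy_twh(1_500_000, 6.5, 0.8), 1))  # ~68.3 TWh
```

At these assumed values the fleet lands in the tens of terawatt-hours per year, the same order of magnitude as the annual electricity consumption of a mid-sized European country.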

University of California, Riverside researcher Shaolei Ren observed a similar trend in water usage linked to data centers. His research illuminates global AI’s scope 1 (onsite server cooling) and scope 2 (offsite electricity generation) water withdrawals. His conclusion: by 2027, global AI’s scope 1 and 2 water withdrawals will be 4-6x those of Denmark. This finding is reflected in surging water withdrawals by large technology companies, with Google (+20%) and Microsoft (+34%) showing large increases from 2021 to 2022 as their AI programs accelerated.
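Ren’s framing separates onsite water withdrawn for server cooling (scope 1) from offsite water withdrawn to generate the electricity consumed (scope 2). A minimal sketch of that accounting, with illustrative water-intensity values rather than figures from his research:

```python
# Sketch of scope 1 + scope 2 water withdrawal accounting for an AI workload,
# following the onsite-cooling / offsite-generation split described above.
# The per-kWh water intensities are illustrative assumptions.

def water_withdrawal_liters(it_energy_kwh, onsite_l_per_kwh=0.5, offsite_l_per_kwh=3.0):
    """Total water withdrawn (liters) for a workload's electricity use.

    onsite_l_per_kwh  -- liters withdrawn per kWh for server cooling (scope 1)
    offsite_l_per_kwh -- liters withdrawn per kWh of electricity generated (scope 2)
    """
    return it_energy_kwh * (onsite_l_per_kwh + offsite_l_per_kwh)

# A hypothetical 1 GWh workload:
print(water_withdrawal_liters(1_000_000))  # 3,500,000 liters
```

Note that at these assumed intensities, the offsite (scope 2) share dominates, which is why the electricity mix matters for water as well as for carbon.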

Self-Regulation?

The energy and resource intensity of AI systems is no secret. Training one AI model can consume as much electricity as 100 U.S. homes do in an entire year. Companies are beginning to internalize how integrating and scaling AI systems will affect their scope 1 and 2 emissions and their broader sustainability and net zero targets. Best practices are emerging to mitigate AI-related energy consumption. A recent paper, “Power Hungry Processing,” explored the inference cost of various machine learning (ML) applications, ranging from simpler “task-specific” models to more complex “general-purpose” ones performing multiple, interconnected tasks. Its findings suggest that general-purpose models consume far more energy and generate far greater emissions, adding nuance to how different types of AI models compare in their associated footprints.

The Google “4M approach” recommends selecting efficient “sparse models,” using processors and systems optimized for machine learning training, computing in the cloud rather than on-premises, and “map optimization” to choose locations with the cleanest energy. By following these practices, Google claims, energy use can be reduced by 100x and emissions by 1000x. To generate a baseline, the IBM Cloud Carbon Calculator provides estimates of the emissions associated with cloud computing. These approaches and tools can advance self-regulation, whereby AI system creators better weigh the value of new models against their environmental costs.
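A carbon baseline of the kind these calculators produce follows directly from energy use and the carbon intensity of the local grid, which is also why the “map” step of the 4M approach matters. A sketch with hypothetical region names and illustrative intensity figures:

```python
# Sketch of a cloud carbon baseline: emissions = energy x grid carbon intensity.
# Region names and intensity values are illustrative assumptions, not data
# from any specific cloud provider or calculator.

GRID_INTENSITY_KG_PER_KWH = {
    "region-hydro": 0.02,  # e.g. a grid dominated by hydropower
    "region-mixed": 0.35,  # a typical mixed grid
    "region-coal":  0.80,  # a coal-heavy grid
}

def training_emissions_kg(energy_kwh, region):
    """Estimated CO2e (kg) for a training run in a given grid region."""
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[region]

# The same hypothetical 50 MWh training run in different regions:
for region in GRID_INTENSITY_KG_PER_KWH:
    print(region, training_emissions_kg(50_000, region))
```

Under these assumptions the identical workload emits 40x more CO2e on the coal-heavy grid than on the hydro-dominated one, illustrating why siting alone can dominate the footprint.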

But there are also challenges to relying on AI system creators to self-regulate and weigh the efficiency of their application development against environmental considerations. Data transparency on energy and resource intensity is scarce. While the aforementioned studies have started to estimate these footprints, there is a long way to go before a more formalized rating system for the environmental footprints of different AI systems and applications can be implemented. Also, large technology companies rarely disaggregate their AI-related scope 1 and 2 emissions from their broader reporting, making it difficult to gauge AI’s relative contribution to the totals they report publicly. Lastly, researchers point to a possible “rebound effect” as AI systems evolve: the energy and resource efficiency gained from these best practices may be offset by demand for rapidly expanding complexity in the AI models themselves.

Policy Developments

National governments are proving faster and more nimble in their approach to AI regulation than they were with social media platforms a decade ago. With respect to the environmental impact of AI systems, 2024 has revealed signs of more targeted regulation. In February 2024, the US Congress introduced the Artificial Intelligence Environmental Impacts Act of 2024. The legislation would direct the National Institute of Standards and Technology (NIST) to develop standards for measuring and reporting the full range of AI’s environmental impacts, as well as create a voluntary framework for AI developers to report those impacts.

Also in February 2024, European Union member states approved an “AI Act” that would require developers of “high risk” systems (including the powerful “foundation models” that power ChatGPT and similar AIs) to report their energy consumption, resource use, and other impacts throughout their systems’ lifecycle. The EU law takes effect next year. India and China have yet to introduce regulations targeting the reporting and management of AI’s environmental impacts. Yet AI governance is a clear policy priority in both countries – evidenced by China’s rollout of binding AI regulations and India’s intervention in the launch of new AI products by national technology companies – suggesting that both governments are developing strong regulatory frameworks to which supplementary environmental standards could be added in the near future.

Non-governmental organizations like the International Organization for Standardization are working on the first international standard for sustainability in AI. The technical report, expected next year, will cover where AI and environmental sustainability coincide, spanning energy and water consumption, waste, carbon footprint, the AI system lifecycle, and supply chains. It will also include ways to measure the environmental sustainability aspects of AI systems, such as energy efficiency, raw materials, transport, and water, as well as approaches to reducing AI systems’ environmental impacts. For a broader view of these lifecycle questions, researchers Kate Crawford and Vladan Joler present a fascinating exploration of the full lifecycle of Amazon’s Echo in their 2018 paper “Anatomy of an AI System.”

Environmental Costs and Benefits

These environmental costs of AI systems exist alongside clear benefits. AI applications today – even with sub-optimal configurations in terms of energy and resource use – contribute to tangible improvements in sustainability performance. There are numerous examples: improved country- and sector-level sustainability measurement; energy and resource optimization within operations; superior forecasting and early warning systems for severe weather events or agricultural disruptions; automated customer service; streamlined sustainability reporting. The list goes on. The challenge is to ensure, through greater data transparency and universal standards, that we accelerate these benefits while managing the associated costs responsibly so that they recede.

Contact Jeremy Tamanini for more background on this topic, or explore these related insights:

AI Everywhere: Tangible Applications for Sustainability Teams (link here)

AI in Building & Construction: Tangible Applications for Sustainability Teams (link here)

How to Work with Satellite-Based Sustainability Data (link here)

Contact us - we'd love to hear from you.