Remarks to the National Sustainability Society

Below are condensed remarks from Jeremy Tamanini’s presentation on the environmental footprint of AI systems, delivered September 10, 2024 at the National Sustainability Society at the University of Washington.

Today I am going to talk about an elephant in the room related to the growing integration of AI systems with our economy. Namely that these systems come with a large – and often opaque – environmental footprint. I hope my brief talk will leave you with a general understanding of the issues in play, as well as some tangible solutions to consider in moving forward with your own work in this space.

We have all read a lot about AI over the past year. I was trying to find the right example to illustrate the reality of this current moment. Interestingly, I found it in regulatory filings of companies about to go public that had the letters “AI” in their name.

What I found is striking: more and more of these start-ups link themselves to AI in their public-facing communication, yet few report any meaningful direct revenue from AI applications. Data from the Census Bureau confirms this observation, with surveys showing that only 5% of businesses are currently using AI to produce products and services.

I have seen this working with clients over the past year: AI is top of mind, but most organizations are still at the learning phase. A survey we conducted confirmed this with regard to sustainability: the majority of respondents indicated that they had integrated AI systems into their sustainability work streams only to a limited degree, but that the topic was a vital and active one. This survey, by the way, is open through the conference app, and I would love to hear your feedback there.

Against this backdrop, it’s always useful to remember that “compute” isn’t free. Compute refers to processing power, memory, networking, and storage required to execute a computational command. There is a growing understanding that most economic activities come with environmental costs: food systems, consumer goods, transportation and so on. But we often overlook that compute exerts a significant, but often hidden, strain on both energy and resource use. 

I have used compute in multiple ways since I woke up this morning: downloading my emails, refreshing LinkedIn, checking the Whova conference application, and running a data query in the cloud for a client project. These are fairly simple operations with a light environmental footprint. Yet the same can't be said for many AI systems. These applications can be highly complex, drawing on large volumes of interconnected datasets. How these systems are trained, designed, and integrated into companies and organizations over the next several years will have a meaningful impact on climate and sustainability-linked targets.

The elephant in the room, of course, is that these AI systems are both energy and resource intensive, consuming large amounts of electricity as well as water to cool data centers. Estimates suggest that by 2027, global AI could consume as much energy as the Netherlands, and produce scope 1 and 2 water withdrawals 4-6x greater than Denmark's.

The energy and resource intensity of AI systems is an open secret, and we can see it in the sustainability reporting of big technology companies, which disclose significant emissions increases this decade, mostly linked to data center expansion in support of AI. While big tech has faced this challenge for years, companies, organizations, and institutions alike are only beginning to internalize how integrating and scaling AI systems will affect their scope 1 and 2 emissions and their broader sustainability and net zero targets.

Best practices are emerging to mitigate AI-related energy and resource consumption. From my perspective, academia is on the cutting edge of this work. A recent paper, "Power Hungry Processing," explored the inference cost of various machine learning (ML) applications, ranging from simpler "task-specific" models to more complex "general-purpose" ones that perform multiple, interconnected tasks. Its findings suggest that general-purpose models consume far more energy and produce far greater emissions, adding nuance to how we think about different types of AI models and their associated footprints.
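To make the task-specific vs. general-purpose gap concrete, here is a back-of-envelope sketch. The per-query energy figures and the grid carbon intensity below are illustrative assumptions for demonstration, not measurements from the paper; only the arithmetic (energy × volume × grid intensity) is the point.

```python
# Back-of-envelope comparison of inference footprints for a task-specific
# model vs. a general-purpose one. All per-query energy figures and the
# grid intensity are illustrative assumptions, not measured values.

GRID_INTENSITY_KG_PER_KWH = 0.4  # assumed average grid carbon intensity

def inference_footprint(wh_per_query: float, queries: int) -> tuple[float, float]:
    """Return (energy in kWh, emissions in kg CO2e) for a given query volume."""
    kwh = wh_per_query * queries / 1000.0
    return kwh, kwh * GRID_INTENSITY_KG_PER_KWH

daily_queries = 1_000_000
for name, wh in [("task-specific classifier (0.002 Wh/query)", 0.002),
                 ("general-purpose model (3 Wh/query)", 3.0)]:
    kwh, kg = inference_footprint(wh, daily_queries)
    print(f"{name}: {kwh:,.1f} kWh/day, {kg:,.1f} kg CO2e/day")
```

Even with rough inputs like these, the exercise shows why model choice matters: at the same query volume, the assumed general-purpose model draws orders of magnitude more energy than the task-specific one.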

The technology sector offers further support. The Google "4M approach" (model, machine, mechanization, map) recommends selecting efficient "sparse models," using processors and systems optimized for machine learning training, computing in the cloud rather than on-premise, and "mapping" workloads to locations with the cleanest energy. By following these practices, Google claims, energy use can be reduced by 100x and emissions by 1000x. The IBM Cloud Carbon Calculator provides estimates of emissions associated with cloud computing. Approaches and tools like these can advance self-regulation, whereby AI system creators weigh the value of new models against their environmental costs.
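The "map" step of the 4M approach can be sketched in a few lines: given a fixed energy budget for a training run, emissions scale directly with the carbon intensity of the local grid. The region names, grid intensities, and energy budget below are illustrative assumptions, not Google's figures.

```python
# Sketch of the "map" practice: for a fixed training energy budget, compare
# emissions across candidate cloud regions. Region names and grid
# intensities (kg CO2e per kWh) are illustrative assumptions.

TRAINING_ENERGY_KWH = 50_000  # assumed energy for one training run

grid_intensity = {
    "region-coal-heavy": 0.80,
    "region-mixed-grid": 0.40,
    "region-hydro-rich": 0.05,
}

# Emissions for the same job in each region, lowest first.
emissions = {region: TRAINING_ENERGY_KWH * kg
             for region, kg in grid_intensity.items()}
for region, kg in sorted(emissions.items(), key=lambda kv: kv[1]):
    print(f"{region}: {kg:,.0f} kg CO2e")

best = min(emissions, key=emissions.get)
print(f"cleanest choice: {best}")
```

The design point is that location is a free lever: the same compute job emits a fraction of the CO2e when scheduled onto a low-carbon grid, which is why "map" sits alongside model and hardware choices in the framework.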

Even policy makers are catching up. The AI Act in Europe stipulates that "high impact" developers should report their energy consumption and resource use. In the US, the AI Environmental Impacts Act would direct the National Institute of Standards and Technology (NIST) to develop standards to measure and report the full range of AI's environmental impacts, as well as create a voluntary framework for AI developers to report those impacts.

Even amid these environmental impacts, AI systems can be positive tools to decarbonize and promote sustainable development. Reducing energy consumption, and capturing the associated cost savings, is always a company priority, and AI tools can accelerate these efficiency gains.

Given the disjointed reporting requirements for companies over the past few years, sustainability professionals have been challenged to efficiently report on their ESG (environmental, social, and governance) metrics. Some software solutions in this realm are quite broad, connecting insights across governance, risk, compliance, audit and ESG.

Autodesk, for example, offers an AI-powered planning tool that gives architects and urban planners the ability to design sustainable, livable cities with heightened precision. It harnesses AI to simulate the implications of design decisions on critical factors such as energy consumption, traffic flow, and air quality, helping designers make more informed choices while enhancing the sustainability and livability of their projects.

As someone who looks often for asset-level data on GHG emissions and other climate metrics, new initiatives like Climate TRACE offer huge potential, using AI and machine learning to scale asset-level “ground truth data” to similar sites, giving data people like me much wider coverage than has been available (and, by the way, offering independent emissions measurements that in some cases may challenge official statistics provided by governments and corporates).

The environmental costs of AI systems exist alongside clear benefits. My work over the past year reveals that most companies and organizations are still at the beginning of their AI journey. As practitioners and researchers in this space, let us provide these stakeholders with the tools and data transparency to integrate AI systems in a way that leverages the benefits while limiting the associated environmental costs.

Contact Jeremy Tamanini for more background on this topic, or read these related insights:

The AI Elephant in the Room (link here)

AI Everywhere: Tangible Applications for Sustainability Teams (link here)

AI in Building & Construction: Tangible Applications for Sustainability Teams (link here)

How to Work with Satellite-Based Sustainability Data (link here)

Contact us - we'd love to hear from you.