Lack of transparency can impact the adoption of AI models. Credit: Laurence Dutton/E+/Getty Images

The growing rush to harness artificial intelligence (AI) to speed up and scale solutions to common challenges underscores the need to closely examine the technology’s environmental impact and the ethical concerns around transparency and fairness.

The launch of the AI Innovation Grand Challenge at the 2023 United Nations climate summit was a significant step in the push to apply AI to climate action in developing countries. The effort supports the Sustainable Development Goals — the world’s blueprint to end hunger and poverty, clean up the environment, and provide health care for all by 2030.

If generative AI were used daily by billions of people worldwide, the total annual carbon footprint could reach around 47 million tonnes of carbon dioxide — a 0.12% increase in global carbon dioxide emissions.

According to our analysis, a generative AI chatbot application that assists 50 call centre workers, each supporting four customers per hour, can generate around 2,000 tonnes of carbon dioxide annually. Water consumption from large-scale adoption of generative AI — half of the world’s population sending 24 queries per person per day — may match the annual fluid intake of more than 328 million adults.
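
These are order-of-magnitude scenarios rather than precise measurements. Below is a minimal sketch of how such estimates can be assembled; every constant in it is an illustrative assumption, chosen to land near the figures above, not a measured value.

```python
# Back-of-envelope reconstruction of the two estimates above. Every constant
# is an illustrative assumption chosen to show how such figures are derived,
# not a measured value.

GLOBAL_CO2_TONNES = 40e9           # approx. annual global CO2 emissions (t)

# Scenario 1: carbon footprint of daily use by billions of people.
users = 3.5e9                      # assumed number of daily users
queries_per_user_day = 8           # assumed queries per person per day
g_co2_per_query = 4.6              # assumed grams of CO2 per query

annual_co2_t = users * queries_per_user_day * 365 * g_co2_per_query / 1e6
print(f"CO2: {annual_co2_t / 1e6:.0f} Mt/year, "
      f"{100 * annual_co2_t / GLOBAL_CO2_TONNES:.2f}% of global emissions")

# Scenario 2: water footprint of half the world sending 24 queries a day.
users = 4.0e9                      # half of the world population
queries_per_user_day = 24
litres_per_query = 0.00684         # assumed cooling water per query (~7 mL)
adult_intake_l_year = 2.0 * 365    # typical adult fluid intake per year (L)

annual_water_l = users * queries_per_user_day * 365 * litres_per_query
print(f"water: annual fluid intake of "
      f"{annual_water_l / adult_intake_l_year / 1e6:.0f} million adults")
```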

Data centres serving AI workloads host large-scale computing infrastructure, especially arrays of graphics processing units. This hardware generates substantial heat when running AI workloads, and that heat must be removed from the server room to avoid overheating and to keep the machines within their operating temperature range. Two types of cooling system are typically used: cooling towers and outside-air cooling. Both need water.
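
Cooling towers, in particular, consume water by evaporating it: the latent heat of vaporization carries the server heat away. Here is a rough sketch of that relationship, assuming for simplicity that all heat is removed by evaporation (real facilities mix cooling methods, so this is an upper-bound illustration).

```python
# Rough estimate of evaporative cooling-tower water use for a given IT load.
# Assumes all server heat is removed by evaporating water; real data centres
# combine cooling methods, so treat this as an upper-bound illustration.

LATENT_HEAT_J_PER_KG = 2.26e6   # energy needed to evaporate 1 kg of water

def cooling_water_litres(it_load_kw: float, hours: float) -> float:
    heat_joules = it_load_kw * 1e3 * hours * 3600   # kW over the period -> J
    return heat_joules / LATENT_HEAT_J_PER_KG        # 1 kg of water ~ 1 L

# A 1 MW GPU cluster running for one day:
print(f"{cooling_water_litres(1000, 24):,.0f} L/day")   # ~38,000 L/day
```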

The lack of clear insight into the decision-making processes of AI models makes it challenging to detect biases, potentially leading to unfair outcomes. Transparent models are crucial for upholding ethical standards and for ensuring accountability when errors occur. Lack of transparency can also impede the adoption of these models in industry, academia, and other sectors.

A recent example is the lawsuit filed by the New York Times (NYT) against OpenAI and Microsoft, the creators of ChatGPT and other AI tools, for copyright infringement. The lawsuit claims that AI models, including ChatGPT, were trained on millions of NYT articles, raising concerns about unauthorized use, potential competition, and the impact on journalism.

Setting up standards and frameworks to make AI sustainable is essential. Frameworks such as the Montreal Declaration for Responsible AI and the Organisation for Economic Co-operation and Development’s AI Principles are widely accepted and have been adopted by governments, organizations, and industry in the pursuit of sustainable AI. The AI Alliance, launched in December 2023, also advocates the sustainable use of AI.

Solutions to mitigate AI’s carbon footprint and address ethical concerns

Effective AI models can be developed without extensive data. Prioritizing targeted, domain-specific models over ever-larger general-purpose ones aligns with sustainability by optimizing resources and addressing specific use cases efficiently. This approach minimizes environmental impact and promotes responsible development.

Techniques such as prompt engineering, prompt tuning, and model fine-tuning can optimize hardware usage, reducing the carbon footprint of adapting foundation models (the large models underpinning generative AI) to specific tasks.
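
To illustrate why these adaptation methods are far cheaper than retraining, here is a minimal prompt-tuning sketch in PyTorch. The small network is a stand-in for a real foundation model: the pretrained weights are frozen and only a short "soft prompt" is trained.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained foundation model; a real one is far larger.
d_model, vocab, prompt_len = 768, 50_000, 20
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True),
    num_layers=4,
)

# Freeze every pretrained weight: no gradients, no optimizer state for them.
for p in list(embed.parameters()) + list(encoder.parameters()):
    p.requires_grad = False

# Prompt tuning: the ONLY trainable parameters are a short soft prompt
# prepended to each input sequence.
soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

def forward(token_ids: torch.Tensor) -> torch.Tensor:
    tokens = embed(token_ids)                                # (B, T, d)
    prompt = soft_prompt.expand(token_ids.size(0), -1, -1)   # (B, P, d)
    return encoder(torch.cat([prompt, tokens], dim=1))

out = forward(torch.randint(0, vocab, (2, 16)))   # batch of 2 sequences
print(out.shape)   # (2, 36, 768): 20 prompt + 16 token positions

frozen = sum(p.numel() for p in embed.parameters()) + \
         sum(p.numel() for p in encoder.parameters())
print(f"trainable: {soft_prompt.numel():,} vs frozen: {frozen:,}")
# trainable: 15,360 vs frozen: tens of millions
```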

Techniques that make models more efficient to deploy on resource-constrained devices or systems (quantization, distillation, and client-side caching), along with investment in specialized hardware (for example, in-memory and analog computing), enhance AI model performance and contribute to overall sustainability.
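
As a concrete example of one of these techniques, here is a minimal sketch of post-training dynamic quantization using PyTorch's built-in quantize_dynamic utility; the toy model is a stand-in, and the exact size savings will vary by architecture.

```python
import io
import torch
import torch.nn as nn

def size_mb(model: nn.Module) -> float:
    """Serialized size of a model's weights, in megabytes."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

# A toy float32 model standing in for a larger network.
model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)

# Post-training dynamic quantization: weights of the listed module types
# are stored as int8; activations are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(f"float32: {size_mb(model):.1f} MB -> int8: {size_mb(quantized):.1f} MB")
```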

Shifting AI operations to energy-efficient data centres in the cloud also helps to reduce environmental impact: transferring computational workloads to data centres with greener practices mitigates the overall carbon footprint of running AI in the cloud.
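
Here is a minimal sketch of this carbon-aware placement idea, with invented region names and carbon intensities; in practice, the intensity values would come from a grid-carbon data service.

```python
# Carbon-aware placement: run a batch job in whichever region currently has
# the cleanest electricity. Intensities (gCO2 per kWh) are made-up examples;
# real values would come from a grid-carbon data provider.

REGION_CARBON_INTENSITY = {
    "region-a-coal-heavy": 620,
    "region-b-mixed-grid": 300,
    "region-c-hydro-rich": 45,
}

def greenest_region(intensities: dict[str, float]) -> str:
    return min(intensities, key=intensities.get)

def job_emissions_kg(energy_kwh: float, g_per_kwh: float) -> float:
    return energy_kwh * g_per_kwh / 1000

job_kwh = 500   # assumed energy for one training or batch-inference job
best_region = greenest_region(REGION_CARBON_INTENSITY)
best = REGION_CARBON_INTENSITY[best_region]
worst = max(REGION_CARBON_INTENSITY.values())

print(f"run in {best_region}: {job_emissions_kg(job_kwh, best):.0f} kg CO2 "
      f"vs {job_emissions_kg(job_kwh, worst):.0f} kg in the dirtiest region")
```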

To assess the transparency of generative AI, a multidisciplinary team from Stanford, MIT, and Princeton has designed a scoring system called the Foundation Model Transparency Index. The system evaluates 100 aspects of transparency, from how a company builds a foundation model to how the model works and how it is used downstream.
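
The index aggregates disclosure checks into a score out of 100. Below is a toy sketch of that scoring arithmetic with invented indicator names; the real index defines 100 specific indicators spanning upstream resources, the model itself, and downstream use.

```python
# Toy version of transparency scoring in the spirit of the Foundation Model
# Transparency Index: each indicator is a yes/no disclosure check, and the
# score is the fraction satisfied. Indicator names here are invented examples.

indicators = {
    "upstream":   {"training data sources disclosed": True,
                   "data labour practices disclosed": False,
                   "compute usage disclosed": False},
    "model":      {"model architecture disclosed": True,
                   "capabilities and limits documented": True},
    "downstream": {"usage policy published": True,
                   "affected-user feedback channel": False},
}

for domain, checks in indicators.items():
    print(f"{domain:>10}: {sum(checks.values())}/{len(checks)} disclosed")

total = sum(sum(c.values()) for c in indicators.values())
count = sum(len(c) for c in indicators.values())
print(f"overall transparency score: {100 * total / count:.0f}/100")
```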

The challenges are real, but the potential of AI as a transformative agent in the sustainability space is equally significant.