Environmental Impact Tracking — Sources & Methodology
How we estimate the energy, water, carbon, and resource cost of every AI message
1. Overview
Every time you send a message to an AI model, a data center somewhere consumes electricity to generate the response. That electricity requires water for cooling, produces greenhouse gas emissions that depend on the local power grid, draws on primary energy resources, and, through the infrastructure that generates it, depletes finite mineral reserves.
We track five environmental metrics for every AI message in the IQ Assistant, and display them in a tooltip alongside each response. Our goal is transparency: giving you real numbers — with honest uncertainty ranges — so you can make informed choices about how you use AI.
This page documents every data source, assumption, and calculation behind those numbers. Where possible, we cross-validate our estimates against multiple independent sources. Where data is uncertain or missing, we say so explicitly.
2. What We Measure
Each AI message shows five environmental metrics. The first three — energy, greenhouse gas emissions, and water — are displayed in a compact footer beneath each message. All five are available in the expanded tooltip.
Every metric includes an uncertainty range (min–max) reflecting the limits of current measurement science. More on that in How Confident Are We?
3. How It Works
The calculation is straightforward. For each message, we know the model used and how many tokens were processed (input) and generated (output). Research consistently shows that generating output tokens costs significantly more energy than processing input tokens — Caravaca et al. measured an ~11× difference — so we account for them separately.
The Calculation
Energy is the foundation. Every other metric is derived by multiplying energy by an infrastructure factor that depends on where the model runs — which data center, which power grid, which cooling system. The same model running in France (low-carbon nuclear grid) produces very different emissions than one running in Virginia (gas + coal mix).
Each metric also has min/max uncertainty bounds derived from the model's uncertainty multipliers (e.g., 0.5–1.5 means the true value could be 50%–150% of the nominal estimate). These ranges are applied uniformly across all five metrics.
When a single conversation uses multiple models (e.g., a reasoning model hands off to a faster model), we calculate each model's contribution separately and sum the results.
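As a concrete sketch, the whole pipeline reduces to a handful of multiplications. Every numeric factor and field name below is an illustrative placeholder invented for the example, not one of our production constants:

```python
# Sketch of the per-message calculation. All numeric factors are
# illustrative placeholders, not production values.

def message_impact(input_tokens, output_tokens, model, infra):
    """Nominal energy (Wh), CO2 (g), and water (mL) for one message."""
    # Chip-level energy: output tokens are weighted more heavily than input.
    chip_wh = (input_tokens * model["wh_per_input_token"]
               + output_tokens * model["wh_per_output_token"])
    # PUE scales chip energy up to whole-facility energy.
    energy_wh = chip_wh * infra["pue"]
    return {
        "energy_wh": energy_wh,
        "co2_g": energy_wh * infra["cif_g_per_wh"],      # grid carbon intensity
        "water_ml": energy_wh * infra["wue_ml_per_wh"],  # water usage effectiveness
    }

# Hypothetical factors (note the 5:1 output:input energy ratio):
model = {"wh_per_input_token": 4e-7, "wh_per_output_token": 2e-6}
infra = {"pue": 1.2, "cif_g_per_wh": 0.3, "wue_ml_per_wh": 1.1}

# A conversation that hands off between models sums each contribution:
turns = [(500, 200), (120, 800)]  # (input_tokens, output_tokens) per turn
total_wh = sum(message_impact(i, o, model, infra)["energy_wh"] for i, o in turns)
```

In production each turn would carry its own model and infrastructure factors; the sketch reuses one pair for brevity.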
4. Per-Model Energy Factors
The most important — and most uncertain — input is how much energy each model consumes per token. No AI provider currently publishes per-token energy data. Google is the only company to have published per-query measurements (0.24 Wh for a median Gemini text prompt, August 2025). Everyone else relies on academic estimates with ~20–50% margin of error.
Our per-token values are derived from the best available research, primarily Jegham et al. 2025 (the most comprehensive LLM energy benchmark to date), cross-referenced with Oviedo/Microsoft Research, Epoch AI, Caravaca et al., and EcoLogits.
Model-by-Model Factors
Cross-Validation: Do These Numbers Make Sense?
Multiple independent sources converge on a 0.2–0.4 Wh range for a standard text query to a frontier model, giving us confidence our estimates are in the right ballpark. Reasoning models consume 5–70× more depending on complexity.
Standard Text Queries
Reasoning Models
Full Benchmark: Short Prompt (100 input / 300 output tokens)
From Jegham et al. Table 4, sorted by energy consumption:
Why Output Tokens Cost More Than Input
Multiple sources confirm that generating output tokens is significantly more energy-intensive than processing input tokens. We use a conservative 5:1 ratio based on API pricing, but direct measurement suggests the true ratio may be higher.
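Given a blended per-query measurement and an assumed output:input ratio, the per-token rates fall out algebraically. The numbers below reuse the short-prompt shape (100 input / 300 output tokens) and a 0.24 Wh blended figure purely for illustration; the 5:1 split itself is the assumption:

```python
# Split a blended per-query energy figure into per-token rates, assuming
# output tokens cost `ratio` times as much as input tokens (illustrative).
E_wh, inp, out = 0.24, 100, 300  # blended Wh, input tokens, output tokens
ratio = 5                        # assumed output:input energy ratio

# E = e_in*inp + (ratio*e_in)*out  =>  e_in = E / (inp + ratio*out)
e_in = E_wh / (inp + ratio * out)  # = 0.24 / 1600 = 0.00015 Wh/token
e_out = ratio * e_in               # = 0.00075 Wh/token
```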
Energy Breakdown by Component (Google 2025)
Google's technical paper (arXiv:2508.15734) provides the only published component-level breakdown for production AI inference:
5. Data Center Infrastructure
Once we know how much energy a model consumes, infrastructure multipliers convert that into real-world impacts. These factors come from Jegham et al. Table 1, supplemented with provider sustainability reports (Google, Microsoft, AWS, Scaleway) and independent verification.
Provider Infrastructure Factors
PUE Cross-Validation
Power Usage Effectiveness (PUE) is the ratio of total facility energy to the energy delivered to computing equipment; it captures overhead from cooling, power conversion, and lighting. A PUE of 1.0 would mean zero overhead; the industry average is 1.56.
WUE Cross-Validation
Water Usage Effectiveness (WUE) measures how much water a data center consumes per unit of IT energy, expressed here in mL per Wh (equivalently, litres per kWh). Lower is better.
¹ Google WUE derived from official figures: 0.26 mL per median prompt ÷ 0.24 Wh = ~1.08 mL/Wh.
² Google carbon intensity is a blended estimate using 66% CFE × regional grid mix. Varies significantly by region: Iowa 87% CFE, South Carolina 31% CFE, Oregon 87% CFE.
³ GreenPT/Scaleway CO₂: Scaleway's Environmental Footprint Calculator publishes PAR-2 (DC5) carbon intensity as 0.065 kgCO₂e/kWh, calculated using EMBER electricity mix data × DC5 PUE, following a location-based methodology per ADEME PCR guidelines (deliberately excluding their 100% renewable Guarantees of Origin).
6. Lifecycle Factors: Primary Energy & Abiotic Depletion
Beyond direct energy, CO₂, and water, we track two lifecycle metrics that capture the broader environmental cost of electricity generation itself. These come from EcoLogits / ADEME Base Empreinte® / ecoinvent databases.
Primary Energy (PE)
Primary Energy measures the total energy extracted from nature — fossil fuels, nuclear fuel, wind, solar — to produce each watt-hour of electricity you consume. It includes all the losses along the way: fuel extraction, refining, transport, and generation inefficiency. A factor of ~0.0097 MJ/Wh means roughly 2.7× the direct electricity is consumed as primary energy from nature (1 Wh of delivered electricity is 0.0036 MJ, and 0.0097 ÷ 0.0036 ≈ 2.7).
Abiotic Depletion Potential for Elements (ADPe)
ADPe measures the depletion of non-renewable mineral and metal resources — lithium, copper, gold, rare earths — required for the electricity generation infrastructure. It's expressed in kilograms of antimony equivalent (kg Sb eq) per the CML-IA characterization method from Leiden University. France has lower ADPe than the US grid because nuclear-dominated generation requires less diverse mineral extraction than a fossil-fuel-heavy mix.
• PE and ADPe are infrastructure-level lifecycle factors — they depend on the electricity grid, not the model itself.
• These factors capture only the operational electricity lifecycle, not hardware manufacturing (embodied impacts).
• Values derived from the same ADEME/ecoinvent databases used by EcoLogits for LCA compliance (ISO 14044).
• The same uncertainty ranges applied to energy, water, and CO₂ are also applied to PE and ADPe.
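Both lifecycle metrics are straight per-Wh multipliers on the same energy figure. The PE factor below is the one cited above; the ADPe factor is an order-of-magnitude placeholder, not a real grid value:

```python
# Lifecycle metrics are linear in facility energy.
PE_MJ_PER_WH = 0.0097        # primary energy factor cited in the text
ADPE_KGSBEQ_PER_WH = 1e-10   # hypothetical abiotic depletion factor

energy_wh = 0.15             # nominal message energy
pe_mj = energy_wh * PE_MJ_PER_WH          # primary energy drawn from nature
adpe_kg = energy_wh * ADPE_KGSBEQ_PER_WH  # mineral depletion, kg Sb eq

# Sanity check: 1 Wh of delivered electricity is 0.0036 MJ, so a
# 0.0097 MJ/Wh factor means ~2.7x the delivered energy is extracted.
primary_multiple = PE_MJ_PER_WH / 0.0036
```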
7. Real-World Comparisons
Raw numbers like "0.15 Wh" or "0.04 gCO₂e" are hard to grasp. For each metric, we display a real-world comparison in the tooltip to make the numbers tangible. Here's what we compare to and why.
Energy
Selection logic: < 2 minutes → "≈ X seconds of streaming video"; < 50 searches → "≈ X Google searches"; else → "≈ X% of a phone charge"
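The thresholds above translate into a cascade of unit conversions. The conversion constants here are rough assumptions for illustration (streaming video at ~0.0833 Wh per second, ~0.3 Wh per Google search, ~15 Wh per full phone charge), not the app's values:

```python
# Sketch of the energy comparison selector described above.
# Conversion constants are rough assumptions, not app values.
def energy_comparison(wh: float) -> str:
    video_s = wh / 0.0833   # seconds of streaming video
    searches = wh / 0.3     # Google-search equivalents
    if video_s / 60 < 2:    # under 2 minutes of video
        return f"≈ {video_s:.0f} seconds of streaming video"
    if searches < 50:       # under 50 searches
        return f"≈ {searches:.0f} Google searches"
    return f"≈ {wh / 15 * 100:.0f}% of a phone charge"
```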
Water
Selection logic: < 100 drops → "≈ X water drops"; else → "≈ X teaspoons"
CO₂ Emissions
Note on car CO₂: We use 130 g/km for the European on-road fleet (all vehicle ages). The EEA reported 108 g/km for 2024 new cars, down from 160–170 g/km in 2015–2019 due to EV adoption and EU emissions standards. The total on-road fleet is higher because older, less efficient vehicles remain in service (average European car age ~12 years). The US EPA fleet average is ~249 g/km.
Selection logic: < 100 metres → "≈ driving X metres"; else → "≈ X Google searches"
Primary Energy
Selection logic: < 50 matches → "≈ X matches burned"; < 100 Cal → "≈ X food Calories"; else → "≈ boiling X cups of water"
Abiotic Depletion (ADPe)
ADPe is expressed in kg Sb eq (antimony equivalent) — an abstract unit. To make it tangible, we convert to the equivalent mass of copper that would need to be mined to cause the same level of mineral depletion, using the CML-IA 2016 characterization factor for copper (1.4 × 10⁻³ kg Sb eq/kg Cu).
For context: manufacturing one smartphone (1.25 × 10⁻³ kg Sb eq) causes the same mineral depletion as generating ~3,000 kWh of electricity — roughly one year of average European household electricity. A typical LLM query's ADPe equates to mining 0.001–0.05 mg of copper.
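The conversion itself is a single division by the cited CML-IA copper factor. A minimal sketch:

```python
# Convert an ADPe value (kg Sb eq) into the equivalent mass of mined
# copper, using the CML-IA 2016 factor cited above.
CU_FACTOR_KGSBEQ_PER_KG = 1.4e-3  # kg Sb eq per kg of copper

def adpe_to_copper_mg(adpe_kg_sb_eq: float) -> float:
    """Equivalent mass of mined copper, in milligrams."""
    kg_cu = adpe_kg_sb_eq / CU_FACTOR_KGSBEQ_PER_KG
    return kg_cu * 1e6  # kg -> mg

# e.g. an ADPe of 2e-11 kg Sb eq corresponds to ~0.014 mg of copper,
# inside the 0.001-0.05 mg range quoted for a typical query.
```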
Additional Reference Values (not used in tooltip)
8. How Confident Are We?
There is currently no single standardized uncertainty methodology for LLM energy estimation, though progress is being made. The SCI for AI specification (extending the Software Carbon Intensity standard, ISO/IEC 21031:2024; ratified December 2025) provides standardized reporting requirements for AI carbon accounting — a critical step toward comparable, reproducible environmental impact reporting.
Different research groups currently use different approaches to quantifying uncertainty:
Our Uncertainty Ranges
Each model defines uncertainty multipliers (e.g., 0.5–1.5 means the true value could be 50%–150% of our estimate). These are applied uniformly to all five metrics. Here's why we chose ±30–50% as the default range:
- Jegham et al. error bars range from roughly ±31% (well-known models like GPT‑4o: 0.42 ± 0.13) to ±52% (o3: 7.03 ± 3.66)
- Oviedo/Microsoft IQR spans roughly ±50% of median (0.18–0.67 on 0.34)
- Caravaca et al. found batch size alone causes 36× variation (Llama 405B: 21.7 Wh single vs 0.6 Wh batched), but production systems always batch
- Nature Scientific Reports 2024 identifies hardware, geography, and utilization as the primary uncertainty sources
- Non-production estimates overstate by 4–20× (Oviedo/Microsoft), suggesting our academic-derived estimates may be conservatively high
Models with more data points (GPT‑4o, Claude 3.7 Sonnet) have tighter ranges; models extrapolated from architectural reasoning (Claude Opus 4, Gemini 3 Pro) have wider ranges.
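Applying the multipliers is mechanical: every metric's nominal value is scaled by the model's low and high factors. The 0.5–1.5 range below is the example from the text, not any specific model's:

```python
# Uniform uncertainty bounds: scale every metric by the model's
# low/high multipliers (0.5-1.5 is the example range from the text).
def with_bounds(nominal: dict, lo: float = 0.5, hi: float = 1.5) -> dict:
    """Map each metric to a (min, nominal, max) triple."""
    return {k: (v * lo, v, v * hi) for k, v in nominal.items()}

metrics = {"energy_wh": 0.15, "co2_g": 0.04, "water_ml": 0.16}
bounds = with_bounds(metrics)  # energy bounds ~ (0.075, 0.15, 0.225)
```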
9. Important Caveats
- Limited official data. Only Google has published official per-query energy data (0.24 Wh for Gemini, August 2025). Oviedo/Microsoft Research provides the next most rigorous estimate (0.34 Wh median for frontier models). OpenAI's single data point (0.34 Wh for ChatGPT) came from a CEO statement without methodology. All other figures are academic estimates with ~20–50% margin of error.
- Hidden reasoning tokens. Reasoning models (o3, o4-mini) generate hidden chain-of-thought tokens that dramatically increase energy consumption. Jegham et al. measured o3 at 7–39 Wh per query — up to 70× more than efficient models. These internal tokens are invisible via the API but consume compute, making per-visible-token metrics potentially misleading for reasoning models.
- "100% renewable" claims. AWS and Microsoft use annual Renewable Energy Certificate (REC) matching — purchasing certificates equal to annual consumption from any location and time. This is an accounting solution, not an engineering one: a certificate from a solar farm in Arizona at noon can "offset" fossil-fuelled consumption in New York at midnight. Google's 66% CFE (measured hourly on the same regional grid) is more conservative and transparent. Greenpeace found AWS meeting only 12% of its renewable commitment physically in Virginia.
- GreenPT transparency gap. GreenPT has not published per-token energy figures despite marketing as "green AI." Their primary environmental advantage comes from the French nuclear grid (21.7 gCO₂/kWh per RTE France) and Scaleway's efficient data centers (PUE 1.25, adiabatic cooling), rather than demonstrated model-level efficiency.
- Perplexity: no sustainability data. No environmental reports, no energy figures, no climate commitments. Infrastructure multipliers are assumed from AWS (their primary cloud provider). The additional energy cost of web search + retrieval is estimated, not measured.
- Model version drift. Jegham et al. benchmarked Claude 3.7 Sonnet and o3, not the current generation. Our estimates for Claude Opus 4.6, Sonnet 4.6, GPT‑4.1, and Gemini 3 Pro are derived from the closest benchmarked equivalents and architectural reasoning. No published paper provides direct measurements for these models as of February 2026.
- Batch size and utilization. Energy per query varies dramatically with server utilization. Caravaca et al. found 36× reduction from single-prompt to batch-100 for Llama 405B. Oviedo/Microsoft warns non-production estimates overstate energy by 4–20×.
- Water accounting. WUE figures include both on-site cooling (evaporative towers, adiabatic systems) and off-site water consumed by electricity generation. Per IEA 2023, two-thirds of total data center water is indirect/off-site. Li et al. (CACM 2025) provides the most comprehensive framework for total water footprint estimation.
- Embodied emissions not included. Our calculations cover only operational energy. Hardware manufacturing emissions are significant: NVIDIA's HGX H100 reports 1,312 kg CO₂e cradle-to-gate per system. TechInsights (2026) projects GPU manufacturing emissions to grow ~16× from 2024 to 2030.
- Jevons Paradox. As AI becomes more efficient and cheaper, total resource consumption may increase. De Vries (2025) projects AI produced 32.6–79.7 million tonnes CO₂ in 2025 alone. Google's total emissions rose 11% to 11.5M tonnes in 2024 despite per-query efficiency gains.
- France carbon intensity validated. Our CIF of 0.065 gCO₂/Wh for Scaleway matches their own Environmental Footprint Calculator published value for DC5. This is higher than RTE France 2024 grid intensity (21.7 gCO₂eq/kWh) because ADEME's regulatory average for France is structurally higher than single-year figures, and the PUE multiplier (1.25) further increases effective carbon per useful kWh.
10. Glossary
11. References
- Jegham et al. 2025 — "How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference"
- arXiv: 2505.09598 (v6, November 24, 2025)
- Provider infrastructure multipliers (PUE, WUE, CIF) from Table 1
- Per-query energy benchmarks from Table 4 (30+ models)
- Validated within 19% of OpenAI CEO's disclosed 0.34 Wh/query for GPT‑4o
- Uses cross-efficiency DEA for multi-dimensional sustainability ranking
- Google 2025 — "Measuring the environmental impact of delivering AI at Google Scale"
- Google Cloud Blog
- Full Report PDF
- arXiv: 2508.15734
- First official per-query measurement: 0.24 Wh energy, 0.03 gCO₂e, 0.26 mL water (median Gemini text prompt)
- 33× reduction in per-prompt energy over 12 months (May 2024–May 2025)
- Epoch AI 2025 — "How much energy does ChatGPT use?"
- Article
- Bottom-up FLOP-based estimates for GPT‑4o class models
- Couch 2026 — "Electricity use of AI coding agents"
- Blog
- Per-token rates derived from Epoch AI data: ~0.39 Wh/MTok input, ~1.95 Wh/MTok output
- EcoLogits — Open-source parametric estimation library (v0.9.2, January 2025)
- Methodology
- GitHub
- PyPI
- LCA framework compliant with ISO 14044
- Source for PE, ADPe, and WCF lifecycle factors via ADEME Base Empreinte®
- Ritchie 2025 — "What's the carbon footprint of using ChatGPT or Gemini?"
- GreenPT — Sustainability documentation
- Scaleway — Data center environmental reports
- RTE France — Annual electricity review 2024
- Key Findings (2024)
- 2025 First Trends (PDF)
- France grid carbon intensity: 21.7 gCO₂eq/kWh (2024)
- Google 2025 Environmental Report
- Microsoft 2025 Environmental Sustainability Report
- AWS Sustainability
- ADEME Base Empreinte® — French Agency for Ecological Transition
- Base Empreinte®
- ADEME Data Portal
- Source for electricity lifecycle data: PE and ADPe per kWh by country/region
- Boavizta — Open-source methodology for embodied impacts of IT equipment
- Muxup 2026 — "Per-query energy consumption of LLMs"
- Article
- Independent energy benchmarking of open-weight models using InferenceMAX benchmark suite
- CML-IA — Characterization factors for life cycle impact assessment (Leiden University)
- Fairphone 5 LCA 2024 — Life Cycle Assessment (Fraunhofer IZM, September 2024)
- Oviedo et al. 2025 — "Energy Use of AI Inference" (Microsoft Research)
- arXiv: 2509.20241
- Microsoft Research
- Median energy per query: 0.34 Wh (IQR: 0.18–0.67 Wh) for frontier models on H100 nodes
- Caravaca et al. 2025 — "From Prompts to Power: Measuring the Energy Footprint of LLM Inference"
- arXiv: 2511.05597
- 32,500+ measurements across 21 GPU configurations and 155 architectures
- Output tokens have ~11× greater energy impact than input tokens
- Niu et al. 2025 — "TokenPowerBench: Benchmarking the Power Consumption of LLM Inference"
- arXiv: 2512.03024
- Sub-linear energy scaling with parameter count: LLaMA‑3 1B to 70B = 7.3× energy increase for 70× parameters
- Wilhelm et al. 2025 — "Beyond Test-Time Compute Strategies: Advocating Energy-per-Token"
- EuroMLSys '25, ACM
- Chain-of-Thought energy overhead: +72% to +177%
- Jin et al. 2025 — "The Energy Cost of Reasoning" (Harvard)
- arXiv: 2505.14733
- Li et al. 2023–2025 — "Making AI Less 'Thirsty'"
- arXiv: 2304.03271
- Peer-reviewed: Communications of the ACM, 2025
- Principled framework covering Scope 1 (on-site) and Scope 2 (off-site) water footprint
- De Vries 2025 — "Carbon and Water Footprints of Data Centers"
- Patterns (Cell Press), 2025
- AI systems: 32.6–79.7 million tonnes CO₂ in 2025
- EcoLogits (JOSS) — "EcoLogits: Evaluating the Environmental Impacts of Generative AI"
- JOSS Paper (Journal of Open Source Software, 2025)
- Authors: Samuel Rince et al. (GenAI Impact non-profit)
- ISO 14044-compliant LCA approach
- Mistral 2025 — Official Environmental Report
- Announcement (January 2025)
- Per 400-token query: 1.14 gCO₂e and 45 mL water
- Pronk et al. 2025 — "Benchmarking Energy Efficiency of Large Language Models Using vLLM"
- arXiv: 2509.08867
- Kumar et al. 2025–2026 — "OverThink: Slowdown Attacks on Reasoning LLMs"
- arXiv: 2502.02542
- Ozcan et al. 2025 — "Quantifying the Energy Consumption and Carbon Emissions of LLM Inference via Simulations"
- arXiv: 2507.11417
- van Oers et al. 2020 — "Abiotic resource depletion potentials (ADPs) for elements revisited"
- Nature Scientific Reports 2024 — "Reconciling the contrasting narratives on the environmental impact of large language models"
- ecoinvent — Life cycle inventory database
- ecoinvent Electricity
- Version 3.12 (February 2026), 3,500+ datasets in 250+ geographies
- SCI for AI — Software Carbon Intensity for Artificial Intelligence (ISO/IEC 21031:2024)
- Green Software Foundation — SCI for AI
- Ratified December 17, 2025
- ML.ENERGY Leaderboard v3.0 — Standardized LLM energy benchmarks
- ML.ENERGY Leaderboard
- 46 models across 1,858 hardware configurations (December 2025)
- NVIDIA HGX H100 Product Carbon Footprint
- HGX H100 PCF Summary (PDF)
- Cradle-to-gate: 1,312 kg CO₂e per HGX H100 system
- TechInsights (2026) — GPU manufacturing emissions growth projection
- Coalition for Sustainable AI — International governance framework
- Coalition for Sustainable AI
- AI Action Summit (Wikipedia)
- Launched at Paris AI Action Summit, February 2025; 58 countries signed
- EF 3.1 — Environmental Footprint characterization factors (JRC, 2025)
- JRC Environmental Footprint
- Official EU Product Environmental Footprint method
- GPT-5 energy estimate — University of Rhode Island AI Lab (August 2025)
- The Guardian / AI Commission
- Tom's Hardware analysis
- GPT‑5 average: ~18.35 Wh per 1000-token query; ~8.6× increase over GPT‑4