Environmental Impact Tracking — Sources & Methodology
Data Sources, Assumptions & Calculation Approach
Overview
This document describes the sources, methodology, and caveats for the per-model environmental impact factors used in the IQ Assistant's environmental impact tracking feature (src/functions/ai/environmental-factors.ts).
Primary References
- Jegham et al. 2025 — "How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference"
- arXiv: 2505.09598 (v6, November 24, 2025)
- Provider infrastructure multipliers (PUE, WUE, CIF) from Table 1
- Per-query energy benchmarks from Table 4 (30+ models)
- Validated within 19% of OpenAI CEO's disclosed 0.34 Wh/query for GPT‑4o
- Includes reasoning model class tracking (o3, DeepSeek‑R1) and recent Claude/GPT families
- Key finding: o3 and DeepSeek‑R1 consume >29 Wh per long prompt (~65× the most efficient models)
- Uses cross-efficiency DEA (Data Envelopment Analysis) for multi-dimensional sustainability ranking
- Google 2025 — "Measuring the environmental impact of delivering AI at Google Scale"
- Google Cloud Blog
- Full Report PDF
- arXiv: 2508.15734
- First official per-query measurement: 0.24 Wh energy, 0.03 gCO₂e, 0.26 mL water (median Gemini text prompt)
- 33× reduction in per-prompt energy over 12 months (May 2024–May 2025); 44× total emissions reduction
- Epoch AI 2025 — "How much energy does ChatGPT use?"
- Article
- Bottom-up FLOP-based estimates for GPT‑4o class models
- Couch 2026 — "Electricity use of AI coding agents"
- Blog
- Per-token rates derived from Epoch AI data: ~0.39 Wh/MTok input, ~1.95 Wh/MTok output
- EcoLogits — Open-source parametric estimation library (v0.9.2, January 2025)
- Methodology
- GitHub
- PyPI
- Formula: E_output = #T × (8.91e-5 × P_active + 1.43e-3) Wh
- LCA framework compliant with ISO 14044
- Source for Primary Energy (PE), Abiotic Depletion Potential (ADPe), and Water Consumption Factor (WCF) lifecycle factors via ADEME Base Empreinte®
- v0.9.0 methodology update (Nov 2024): updated model repository, electricity mix listing, homepage
- Ritchie 2025 — "What's the carbon footprint of using ChatGPT or Gemini?"
- GreenPT — Sustainability documentation
- Scaleway — Data center environmental reports
- RTE France — Annual electricity review 2024
- Key Findings (2024)
- 2025 First Trends (PDF)
- France grid carbon intensity: 21.7 gCO₂eq/kWh (2024)
- Google 2025 Environmental Report
- Microsoft 2025 Environmental Sustainability Report
- AWS Sustainability
- ADEME Base Empreinte® — French Agency for Ecological Transition
- Base Empreinte®
- ADEME Data Portal
- Source for electricity lifecycle data: Primary Energy (PE) and Abiotic Depletion Potential (ADPe) per kWh by country/region
- Used by EcoLogits for PE and ADPe factors
- Boavizta — Open-source methodology for embodied impacts of IT equipment
- Methodology
- Boavizta API
- Combined with NVIDIA Product Carbon Footprint data for hardware manufacturing impacts
- Used by EcoLogits for lifecycle impact allocation
- Muxup 2026 — "Per-query energy consumption of LLMs"
- Article
- Independent energy benchmarking of open-weight models (DeepSeek‑R1, GPT‑OSS‑120B) using InferenceMAX benchmark suite
- Confirms proprietary models lack independent benchmarks; only provider self-reporting available
- DeepSeek‑R1: 0.63–16.3 Wh/query depending on quantization and output length
- CML-IA — Characterization factors for life cycle impact assessment (Leiden University)
- CML-IA Characterisation Factors
- Abiotic Depletion in LCIA
- The Abiotic Depletion Potential: Background, Updates, and Future (2016)
- Abiotic depletion characterization factors for elements: copper = 1.4 × 10⁻³ kg Sb eq/kg, gold = 52.2 kg Sb eq/kg
- Used for ADPe real-world comparison (copper mining equivalent)
- Fairphone 5 LCA 2024 — Life Cycle Assessment of the Fairphone 5 (Fraunhofer IZM, updated September 2024)
- Fairphone Sustainability
- FP5 LCA Report (PDF)
- Smartphone ADPe: 1.25 × 10⁻³ kg Sb eq (whole device, 3-year use cycle)
- Production phase accounts for ~100% of ADPe; integrated circuits and precious metals are dominant contributors
- Gold and copper identified as top mineral depletion contributors in consumer electronics
- See also: Fairphone 6 LCA (Dec 2025)
- Oviedo et al. 2025 — "Energy Use of AI Inference: Efficiency Pathways and Test-Time Compute" (Microsoft Research)
- arXiv: 2509.20241 (September 24, 2025)
- Microsoft Research
- Bottom-up methodology estimating per-query energy of large-scale LLM systems based on token throughput estimation
- Median energy per query: 0.34 Wh (IQR: 0.18–0.67 Wh) for frontier models (>200B params) on H100 nodes
- Test-time compute / agentic workflows: 10–15× more energy-intensive than standard inference (up to 4.32 Wh)
- Key insight: non-production estimates overstate energy use by 4–20× vs production deployments
- Caravaca et al. 2025 — "From Prompts to Power: Measuring the Energy Footprint of LLM Inference"
- arXiv: 2511.05597 (November 5, 2025)
- 32,500+ measurements across 21 GPU configurations and 155 architectures
- Batch processing dramatically reduces energy: Llama 405B single prompt 21.7 Wh vs 0.6 Wh/prompt in batch of 100
- Output tokens have ~11× greater energy impact than input tokens
- Released Chrome browser extension for estimating energy for ChatGPT/Gemini/DeepSeek
- Niu et al. 2025 — "TokenPowerBench: Benchmarking the Power Consumption of LLM Inference"
- arXiv: 2512.03024 (December 2, 2025)
- 15+ open-source models, 1B–405B parameters
- Sub-linear energy scaling: LLaMA‑3 1B to 70B = 7.3× energy increase for 70× parameters
- MoE models (Mixtral‑8x7B) consumed energy comparable to dense 8B models
- TensorRT-LLM and vLLM reduce energy per token by 25–40% vs Transformers engine
- Wilhelm et al. 2025 — "Beyond Test-Time Compute Strategies: Advocating Energy-per-Token"
- EuroMLSys '25, ACM (5th Workshop on Machine Learning and Systems, Rotterdam)
- Chain-of-Thought prompting on Llama 1B: accuracy gains of +0% to +19% depending on task
- Majority Voting energy overhead: +72% to +177% more energy
- Proposes dynamic reasoning depth regulation to balance accuracy and energy
- Jin et al. 2025 — "The Energy Cost of Reasoning: Analyzing Energy Usage in LLMs with Test-time Compute" (Harvard)
- arXiv: 2505.14733 (May 20, 2025; revised November 9, 2025)
- Test-time compute surpasses traditional model scaling in accuracy/energy efficiency for complex reasoning tasks
- Rising computational demands of reasoning require careful energy-cost consideration
- Li et al. 2023–2025 — "Making AI Less 'Thirsty': Uncovering and Addressing the Secret Water Footprint of AI Models"
- arXiv: 2304.03271 (April 2023, updated through 2025)
- Peer-reviewed: Communications of the ACM, 2025
- Training GPT‑3 in Microsoft US data centers: 700,000 liters on-site, 5.4 million liters total
- GPT‑3 inference: 500 mL bottle of water per 10–50 responses depending on location/time
- Principled framework covering Scope 1 (on-site) and Scope 2 (off-site) water footprint
- De Vries 2025 — "Carbon and Water Footprints of Data Centers"
- Patterns (Cell Press), 2025
- AI systems: 32.6–79.7 million tonnes CO₂ in 2025 (comparable to New York City)
- AI water footprint: 312.5–764.6 billion liters in 2025 (comparable to global annual bottled water consumption)
- EcoLogits (JOSS) — "EcoLogits: Evaluating the Environmental Impacts of Generative AI"
- JOSS Paper (Journal of Open Source Software, 2025)
- Authors: Samuel Rince et al. (GenAI Impact non-profit)
- ISO 14044-compliant LCA approach
- GWP now sourced from Our World in Data; PE and ADPe retained from ADEME Base Empreinte
- v2025 update integrates ML.ENERGY Leaderboard v3.0 data for improved per-token energy calibration across 46 models
- Mistral 2025 — Official Environmental Report
- Announcement (January 2025)
- First comprehensive lifecycle analysis of an AI model (Mistral Large 2, 123B params)
- Per 400-token query: 1.14 gCO₂e and 45 mL water
- Pronk et al. 2025 — "Benchmarking Energy Efficiency of Large Language Models Using vLLM"
- arXiv: 2509.08867 (September 10, 2025)
- Energy efficiency decreases close to linearly with model parameter size for same-architecture models
- Kumar et al. 2025–2026 — "OverThink: Slowdown Attacks on Reasoning LLMs"
- arXiv: 2502.02542 (February 2025, revised February 2026)
- Adversarial attacks can force reasoning models to generate massively inflated reasoning chains — up to 18× slowdown on FreshQA and 46× slowdown on SQuAD
- Directly translates to proportional energy increases since energy scales linearly with token generation
- Ozcan et al. 2025 — "Quantifying the Energy Consumption and Carbon Emissions of LLM Inference via Simulations"
- arXiv: 2507.11417 (July 15, 2025)
- GPU power model simulation framework
- Found renewable offset potential of up to 69.2% with carbon-aware scheduling
- van Oers et al. 2020 — "Abiotic resource depletion potentials (ADPs) for elements revisited"
- Int J Life Cycle Assess, Springer
- Updates ultimate reserve estimates and introduces time series for production data
- Latest revision of CML-IA ADPe characterization factors
- Nature Scientific Reports 2024 — "Reconciling the contrasting narratives on the environmental impact of large language models"
- Nature (2024)
- Addresses uncertainties from hardware, geography, and individual worker behavior
- Excluding Scope 3 helps avoid non-trivial uncertainty that could distort comparative eco-efficiency
- ecoinvent — Life cycle inventory database
- ecoinvent Electricity
- Version 3.12 (February 2026), updated electricity market mixes reflecting 2021–2022 data
- 3,500+ datasets in 250+ geographies; large countries split into sub-regions
- Key paper: Treyer & Bauer (2016), "Life cycle inventories of electricity generation and power supply in version 3 of the ecoinvent database"
- SCI for AI — Software Carbon Intensity for Artificial Intelligence (ISO/IEC 21031:2024)
- Green Software Foundation — SCI for AI
- Ratified December 17, 2025; extends the SCI specification (ISO/IEC 21031:2024) specifically for AI workloads
- Provides standardized methodology for measuring carbon intensity of AI inference and training
- Defines functional units, system boundaries, and reporting requirements for AI carbon accounting
- ML.ENERGY Leaderboard v3.0 — Standardized LLM energy benchmarks
- ML.ENERGY Leaderboard
- Version 3.0 (December 2025): 46 models across 1,858 hardware configurations
- Provides per-token energy measurements under controlled conditions (standardized prompts, batch sizes, hardware)
- Used by EcoLogits for calibrating energy-per-token estimates
- NVIDIA HGX H100 Product Carbon Footprint — GPU manufacturing emissions
- HGX H100 PCF Summary (PDF)
- ISO 14067-conformant, third-party reviewed (WSP) product carbon footprint
- Cradle-to-gate emissions: 1,312 kg CO₂e per HGX H100 system (8× H100 SXM GPUs)
- Materials/components account for 91% of emissions: HBM (42%), ICs (25%), thermal (18%)
- Key input for embodied carbon allocation in LLM lifecycle assessments
- TechInsights (2026) — GPU manufacturing emissions growth projection
- TechInsights Sustainability Insights
- 2026 Inflection Point: Semiconductor Sustainability Predictions
- Global AI GPU Carbon Emissions Forecast 2025–2030: manufacturing emissions to grow ~16× from 2024 to 2030 (CAGR 58.3%), reaching 19.2 million metric tons CO₂e
- 2026 semiconductor manufacturing emissions projected to reach 186 million metric tons CO₂e (+9% YoY)
- HBM stacking yield identified as structural sustainability risk; AI GPU production to account for 8.7% of all semiconductor emissions by 2030
- Highlights tension between inference efficiency gains and explosive hardware scaling
- Coalition for Sustainable AI — International governance framework
- Coalition for Sustainable AI
- AI Action Summit (Wikipedia)
- Launched at Paris AI Action Summit, 10–11 February 2025; 1,000+ participants from 100+ countries
- Led by France, UNEP, and ITU; supported by 11 countries, 5 international organisations, and 37 tech companies (including EDF, IBM, NVIDIA, SAP)
- 58 countries signed the "Statement on Inclusive and Sustainable AI for People and the Planet"
- Signals regulatory direction toward mandatory AI environmental reporting
- EF 3.1 — Environmental Footprint characterization factors (JRC, 2025)
- JRC Environmental Footprint
- Updated characterization factors for 16 impact categories including climate change, water use, and mineral resource depletion
- Successor to EF 3.0; used alongside CML-IA for ADPe characterization
- Official EU Product Environmental Footprint method
- GPT-5 energy estimate — University of Rhode Island AI Lab (August 2025)
- The Guardian / AI Commission
- Tom's Hardware analysis
- Researchers: Nidhal Jegham et al. (same group as reference #1)
- GPT‑5 average: ~18.35 Wh per 1000-token query; up to 40 Wh for medium-length response
- ~8.6× increase over GPT‑4 (2.12 Wh); reasoning mode can add 5–10× further overhead
- Methodology: response time × estimated hardware power draw (Azure H100/H200), with PUE/WUE/CIF multipliers
- Validates the importance of per-model energy factors rather than fixed averages
Infrastructure Multipliers
Source: Jegham et al. Table 1, supplemented with provider sustainability reports (Google, Microsoft, AWS, Scaleway) and independent verification.
PUE Cross-Validation
WUE Cross-Validation
¹ Google WUE derived from official figures: 0.26 mL per median prompt ÷ 0.24 Wh = ~1.08 mL/Wh.
² Google carbon intensity is a blended estimate using 66% CFE × regional grid mix. Varies significantly by region: Iowa 87% CFE, South Carolina 31% CFE, Oregon 87% CFE.
³ GreenPT/Scaleway CO₂: Scaleway's Environmental Footprint Calculator publishes PAR-2 (DC5) carbon intensity as 0.065 kgCO₂e/kWh (65 gCO₂e/kWh), calculated using EMBER electricity mix data × DC5 PUE, following a location-based methodology per ADEME PCR guidelines (deliberately excluding their 100% renewable Guarantees of Origin).
Primary Energy & Abiotic Depletion Lifecycle Factors
Source: EcoLogits / ADEME Base Empreinte® / ecoinvent electricity lifecycle data.
These factors convert electricity consumption (Wh) into lifecycle impact metrics, accounting for the full supply chain of electricity generation including fuel extraction, processing, transport, and infrastructure.
Primary Energy (PE)
Measures the total energy extracted from natural resources (fossil fuels, nuclear, renewables) required to produce the electricity consumed. Includes extraction, refining, transport, and generation losses. A factor of ~0.0097 MJ/Wh means roughly 2.7× the direct electricity is consumed as primary energy from nature.
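As a minimal sketch of how this factor is applied (the 0.0097 MJ/Wh value and the ~2.7× ratio come from the text above; `primaryEnergyMj` is an illustrative helper name, not necessarily the one used in environmental-factors.ts):

```typescript
// Grid-level primary-energy lifecycle factor cited above
// (MJ of primary energy per Wh of electricity consumed).
const PE_FACTOR_MJ_PER_WH = 0.0097;

// Exact unit conversion: 1 Wh = 3,600 J = 0.0036 MJ.
const MJ_PER_WH = 0.0036;

// Convert operational electricity (Wh) into lifecycle primary energy (MJ).
function primaryEnergyMj(energyWh: number): number {
  return energyWh * PE_FACTOR_MJ_PER_WH;
}

// Primary energy drawn from nature relative to electricity delivered:
const overhead = PE_FACTOR_MJ_PER_WH / MJ_PER_WH; // ≈ 2.7
```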
Abiotic Depletion Potential for Elements (ADPe)
Measures the depletion of non-renewable mineral and metal resources (lithium, copper, gold, rare earths) required for the electricity generation infrastructure. Expressed in kg antimony equivalent (kg Sb eq) per the CML-IA characterization method. France has lower ADPe than the US grid due to nuclear-dominated generation requiring less diverse mineral extraction.
• PE and ADPe are infrastructure-level lifecycle factors — they depend on the electricity grid, not the model itself.
• These factors capture only the operational electricity lifecycle, not hardware manufacturing (embodied impacts).
• Values derived from the same ADEME/ecoinvent databases used by EcoLogits for LCA compliance (ISO 14044).
• The same uncertainty ranges (uncertaintyMin/uncertaintyMax) applied to energy, water, and CO₂ are also applied to PE and ADPe.
Per-Model Energy (Wh per 1,000 Tokens)
Cross-Validation of Energy Estimates
Multiple independent sources converge on a 0.2–0.4 Wh range for a standard text query to a frontier model, with reasoning models consuming 5–70× more depending on complexity.
Convergence Table: Standard Text Queries
Convergence Table: Reasoning Models
Jegham et al. Full Model Benchmark (Short Prompt: 100 input / 300 output tokens)
Energy Breakdown by Component (Google 2025)
Google's technical paper (arXiv:2508.15734) provides the only published component-level energy breakdown for production AI inference:
Output vs Input Token Energy Ratio
Multiple sources confirm that output tokens are significantly more energy-intensive than input tokens:
The 5:1 ratio used as our baseline (from Couch/API pricing) is a conservative estimate — direct measurement suggests the true ratio may be higher.
Real-World Comparison References
Used in the tooltip to provide human-understandable context for each metric.
Energy Comparisons
Logic: < 120s → "≈ Xs streaming video"; < 50 searches → "≈ X Google searches"; else → "≈ X% phone charge"
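The threshold logic above can be sketched as follows. The thresholds come from this document, but the per-unit reference constants are illustrative placeholders — the production values live in the comparison reference table and may differ:

```typescript
// Placeholder reference constants (assumptions, not the production values).
const WH_PER_SECOND_STREAMING = 0.014;  // assumed streaming-video draw
const WH_PER_GOOGLE_SEARCH = 0.3;       // assumed energy per search
const WH_PER_FULL_PHONE_CHARGE = 15;    // assumed battery capacity + losses

// Pick the most human-readable comparison for an energy value in Wh,
// following the "< 120s / < 50 searches / else phone charge" cascade.
function energyComparison(energyWh: number): string {
  const seconds = energyWh / WH_PER_SECOND_STREAMING;
  if (seconds < 120) return `≈ ${Math.round(seconds)}s streaming video`;
  const searches = energyWh / WH_PER_GOOGLE_SEARCH;
  if (searches < 50) return `≈ ${Math.round(searches)} Google searches`;
  const pct = (energyWh / WH_PER_FULL_PHONE_CHARGE) * 100;
  return `≈ ${Math.round(pct)}% phone charge`;
}
```

The same cascade pattern applies to the water, CO₂, and primary-energy comparisons below, with different units and thresholds.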
Water Comparisons
Logic: < 100 drops → "≈ X water drops"; else → "≈ X teaspoons"
CO₂ Comparisons
Note on car CO₂: The 130 g/km value is an estimate for the current European on-road fleet (all vehicle ages). EEA reported the 2024 new-car fleet average at 108 g CO₂/km, down from 160–170 g/km in 2015–2019 due to EV adoption and EU emissions standards (Regulation 2019/631). The total on-road fleet average is higher because older, less efficient vehicles remain in service (average European car age ~12 years). EU 2025 target: 93.6 g/km; 2030: 49.5 g/km; 2035: 0 g/km (zero tailpipe). The EPA US fleet average is ~249 g/km — if a US-only comparison were needed, the value should be 0.249 gCO₂/m.
Logic: < 100m → "≈ car driving Xm"; else → "≈ X Google searches"
Primary Energy Comparisons
Logic: < 50 matches → "≈ X matches burned"; < 100 Cal → "≈ X food Calories"; else → "≈ boiling X cups of water"
ADPe Comparisons
ADPe is expressed in kg Sb eq (antimony equivalent), an abstract characterization factor. To make it tangible, we convert to the equivalent mass of copper that would need to be mined to cause the same mineral depletion, using the CML-IA characterization factor for copper (1.4 × 10⁻³ kg Sb eq/kg Cu).
Logic: Convert kg Sb eq → equivalent copper mass: copperKg = adpKgSb / 1.4e‑3. Display as mg or g of copper mined (mg minimum for readability).
Context: Manufacturing one smartphone (1.25 × 10⁻³ kg Sb eq, Fairphone 5 LCA) causes the same mineral depletion as generating ~3,000 kWh of electricity — roughly one year of average European household electricity. A typical LLM query's ADPe equates to mining 0.001–0.05 mg of copper.
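The conversion described above can be sketched directly from the cited characterization factor (the helper name is illustrative):

```typescript
// CML-IA characterization factor for copper, cited above:
// 1.4e-3 kg Sb eq per kg of copper.
const KG_SB_EQ_PER_KG_CU = 1.4e-3;

// Convert an ADPe value (kg Sb eq) into equivalent milligrams of mined copper.
function copperEquivalentMg(adpKgSbEq: number): number {
  const copperKg = adpKgSbEq / KG_SB_EQ_PER_KG_CU;
  return copperKg * 1e6; // kg → mg
}
```

For example, an ADPe of 1.4 × 10⁻¹² kg Sb eq converts to 0.001 mg of copper, the low end of the per-query range quoted above; the Fairphone 5 figure (1.25 × 10⁻³ kg Sb eq) converts to roughly 0.9 kg of copper.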
Additional Reference Values (not used in tooltip)
Uncertainty Methodology
There is currently no single standardized uncertainty methodology for LLM energy estimation, though progress is being made. The SCI for AI specification (ratified December 2025 by the Green Software Foundation as an extension of the SCI standard, ISO/IEC 21031:2024) provides standardized functional units, system boundaries, and reporting requirements for AI carbon accounting — a critical step toward comparable, reproducible environmental impact reporting. Different sources currently use different approaches:
Our Uncertainty Ranges
Each model in our system defines uncertaintyMin and uncertaintyMax multipliers (e.g., 0.5–1.5 means the true value could be 50%–150% of the nominal estimate). These ranges are applied uniformly to all 5 metrics.
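Applied to a nominal estimate, this looks like the following sketch (`uncertaintyMin`/`uncertaintyMax` follow the text; the surrounding type and function names are assumptions):

```typescript
interface ImpactRange {
  nominal: number;
  min: number;
  max: number;
}

// Apply a model's uncertainty multipliers to a nominal per-metric estimate.
// The same multipliers are applied uniformly to all 5 metrics
// (energy, water, CO₂, PE, ADPe).
function withUncertainty(
  nominal: number,
  uncertaintyMin: number,
  uncertaintyMax: number,
): ImpactRange {
  return {
    nominal,
    min: nominal * uncertaintyMin,
    max: nominal * uncertaintyMax,
  };
}
```

For example, `withUncertainty(0.4, 0.5, 1.5)` yields a 0.2–0.6 Wh range around a 0.4 Wh nominal estimate.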
Rationale for ±30–50% default range:
- Jegham et al. error bars range from roughly ±31% for well-characterized models (GPT‑4o: 0.42 ± 0.13 Wh) to ±52% for reasoning models (o3: 7.03 ± 3.66 Wh)
- Oviedo/Microsoft IQR spans roughly ±50% of median (0.18–0.67 on 0.34)
- Caravaca et al. found batch size alone causes 36× variation (Llama 405B: 21.7 Wh single vs 0.6 Wh batched), but production systems always batch
- Nature Scientific Reports 2024 identifies hardware, geography, and utilization as the primary uncertainty sources
- Non-production estimates overstate by 4–20× (Oviedo/Microsoft), suggesting our academic-derived estimates may be conservatively high
Models with more data points (GPT‑4o, Claude 3.7 Sonnet) have tighter ranges; models extrapolated from architectural reasoning (Claude Opus 4, Gemini 3 Pro) have wider ranges.
Key Caveats & Limitations
- Limited official data. Only Google has published official per-query energy data (0.24 Wh for Gemini, August 2025). Microsoft Research (Oviedo et al.) provides the next most rigorous estimate (0.34 Wh median for frontier models). OpenAI's single data point (0.34 Wh for ChatGPT) came from a CEO statement without methodology. All other per-token figures are academic estimates with ~20–50% margin of error.
- Hidden reasoning tokens. Reasoning models (o3, o4-mini) generate hidden chain-of-thought tokens that dramatically increase actual energy consumption. Jegham et al. measured o3 at 7.03 Wh (short) to 39.2 Wh (long) — up to 70× more than efficient models. Oviedo/Microsoft found test-time compute causes a 13× energy increase. Kumar et al. (OverThink) showed adversarial attacks can inflate reasoning chains by 18–46×, proportionally increasing energy. The internal reasoning tokens are not visible via the API but consume compute. This makes per-visible-token metrics potentially misleading for these models.
- "100% renewable" claims. AWS and Microsoft use annual Renewable Energy Certificate (REC) matching — purchasing certificates equal to annual consumption from any location and time period. This is an accounting solution, not an engineering one: a REC from a solar farm in Arizona at noon can "offset" fossil-fueled consumption in New York at midnight. Peer-reviewed research found REC-based claims lead to "an inflated estimate of the effectiveness of mitigation efforts." Google's 66% CFE (carbon-free energy, measured hourly on the same regional grid) is more conservative and transparent. Greenpeace found AWS meeting only 12% of its renewable commitment physically in Virginia; grid actual renewable is <5%. Dominion Energy (Virginia) mix: ~33% nuclear, ~33% gas, ~25% coal, ~4–6% renewable.
- GreenPT transparency gap. GreenPT has not published per-token energy figures despite marketing as "green AI." Their primary environmental advantage comes from running on the French nuclear grid (21.7 gCO₂/kWh direct in 2024, ~27 gCO₂eq/kWh lifecycle in 2025 per RTE France — among the lowest in the world) and Scaleway's efficient data centers (PUE 1.25, adiabatic cooling), rather than from demonstrated model-level efficiency.
- Perplexity: no sustainability data. No environmental reports, no energy consumption figures, no climate commitments. Their infrastructure multipliers are assumed from AWS (their primary cloud provider). The additional energy cost of web search + retrieval in each query is estimated, not measured.
- Model version drift. Jegham et al. benchmarked Claude 3.7 Sonnet and o3, not Claude Opus 4.6 / Sonnet 4.6 or GPT‑4.1. Our per-token estimates for current models are derived from the closest benchmarked equivalents and architectural reasoning. As of February 2026, no published paper provides direct energy measurements for GPT‑4.1, Claude Opus 4.6, Claude Sonnet 4.6, or Gemini 3 Pro. Sonnet 4.6 inherits Sonnet 4.5's energy factors based on identical API pricing ($3/$15 per MTok) and similar output throughput (~57 tok/s vs ~63 tok/s).
- Batch size and utilization. Energy per query varies dramatically with server utilization. Caravaca et al. found 36× reduction from single-prompt to batch-100 for Llama 405B (21.7 → 0.6 Wh). Jegham et al. assumes batch size 8; real production loads vary. Oviedo/Microsoft warns non-production estimates overstate energy by 4–20×.
- Water accounting. WUE figures include both Scope 1 (on-site cooling: evaporative towers, adiabatic systems) and Scope 2 (off-site: water consumed by electricity generation). Per IEA 2023, two-thirds of total data center water is indirect/off-site. Provider-reported WUE values: AWS 0.15 L/kWh (2024, best-in-class), Microsoft 0.30 L/kWh (FY2024), Google ~1.0 L/kWh (global on-site). Li et al. (CACM 2025) provide the most comprehensive framework for total water footprint estimation.
- Input/output split. The per-1k-token split between input and output is estimated for most models. Caravaca et al. found output tokens have ~11× greater energy impact than input tokens (direct measurement). Couch 2026 uses a 5:1 ratio based on API pricing. SemiAnalysis suggests ~15× at smaller scales. The true ratio varies by model architecture and is not published by any provider.
- PE and ADPe precision. Lifecycle factors are grid-level averages from ADEME Base Empreinte / ecoinvent databases, as confirmed by EcoLogits' JOSS paper and methodology documentation. They do not capture provider-specific optimizations (e.g., Google's on-site solar) or time-of-day variations in grid composition. EcoLogits confirms they use ADEME specifically due to "a lack of open, up-to-date alternatives" for PE and ADPe. The CML-IA characterization method (Leiden University) underpins all ADPe calculations; latest revision: van Oers et al. 2020.
- Jevons Paradox. (Jegham v6) As AI becomes more efficient and cheaper, total resource consumption may actually increase. De Vries (Patterns/Cell Press, 2025) projects AI systems produced 32.6–79.7 million tonnes CO₂ and consumed 312.5–764.6 billion liters of water in 2025 alone. Google's total emissions rose 11% to 11.5M tonnes CO₂ in 2024 (+51% from 2019), mostly Scope 3.
- France carbon intensity validated by Scaleway. Our CIF of 0.065 gCO₂/Wh (65 gCO₂/kWh) for Scaleway matches their own Environmental Footprint Calculator published value for PAR-2 (DC5). Scaleway uses a location-based methodology per ADEME PCR guidelines with EMBER electricity mix data, deliberately excluding their 100% renewable Guarantees of Origin. This figure is higher than RTE France 2024 grid intensity (21.7 gCO₂eq/kWh direct, 30.2 lifecycle) because ADEME's regulatory average for France (52 gCO₂e/kWh, multi-year) is structurally higher than single-year RTE figures, and the PUE multiplier (1.25) further increases effective carbon per useful kWh.
- Embodied emissions not included. Our calculations cover only operational energy (use phase). Hardware manufacturing (embodied) emissions are significant — per Boavizta and EcoLogits methodology, the manufacturing phase of server hardware contributes additional GWP, PE, and ADPe that are amortized over the equipment's useful life. Fairphone LCA data shows manufacturing accounts for ~100% of ADPe in consumer electronics; similar dominance applies to data center hardware. NVIDIA's HGX H100 Product Carbon Footprint reports 1,312 kg CO₂e cradle-to-gate per system. TechInsights (2026) projects GPU manufacturing emissions to grow ~16× from 2024 to 2030 due to AI chip demand, underscoring the growing importance of embodied carbon in total AI lifecycle assessments.
- No provider publishes per-token energy. Google provides per-query data (but not per-token breakdown). All per-token values are derived from per-query benchmarks divided by estimated token counts. Couch 2026 provides the most explicit derivation methodology.
Calculation Formula
For a given model with inputTokens input tokens and outputTokens output tokens:
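A simplified sketch of that calculation (field names are illustrative, not the actual shapes in environmental-factors.ts; whether PUE is folded into the per-token factors or applied as a separate multiplier is an assumption here):

```typescript
interface ModelEnergyFactors {
  whPerKTokenInput: number;  // Wh per 1,000 input tokens
  whPerKTokenOutput: number; // Wh per 1,000 output tokens
}

interface InfraFactors {
  wueMlPerWh: number;      // water usage (mL per Wh)
  cifGco2PerWh: number;    // carbon intensity (gCO₂e per Wh)
  peMjPerWh: number;       // primary energy (MJ per Wh)
  adpeKgSbEqPerWh: number; // abiotic depletion (kg Sb eq per Wh)
}

// Per-token energy scaled by token counts, then infrastructure multipliers
// applied uniformly to the total energy — they are grid/provider properties,
// not model properties.
function computeImpact(
  inputTokens: number,
  outputTokens: number,
  model: ModelEnergyFactors,
  infra: InfraFactors,
) {
  const energyWh =
    (inputTokens / 1000) * model.whPerKTokenInput +
    (outputTokens / 1000) * model.whPerKTokenOutput;
  return {
    energyWh,
    waterMl: energyWh * infra.wueMlPerWh,
    co2G: energyWh * infra.cifGco2PerWh,
    peMj: energyWh * infra.peMjPerWh,
    adpeKgSbEq: energyWh * infra.adpeKgSbEqPerWh,
  };
}
```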
All multipliers (water, CO₂, PE, ADPe) are infrastructure-level factors applied uniformly to the total energy consumed. Water, CO₂, PE, and ADPe depend on the electricity grid and provider infrastructure, not on the model architecture.
Each metric also has min/max uncertainty estimates derived from the model's uncertaintyMin and uncertaintyMax multipliers (e.g., 0.5–1.5 means the true value could be 50%–150% of the nominal estimate).
Displayed Metrics
The environmental impact tooltip shows 5 metrics per message:
All 5 metrics include real-world comparisons: Energy (streaming video / Google searches / phone charge), GHG (car driving / Google searches), Water (water drops / teaspoons), Primary Energy (matches / food Calories / boiling water), and Abiotic Resources (copper mining equivalent).
The inline footer below each message shows Energy, GHG (CO₂), and Water for compactness (matching tooltip order).