I'm not in the business of making you trust me. Every number, study, and data point used on this site links to the primary source. If you see something elsewhere on these pages that isn't cited — let me know and I'll fix it.
Public opinion & usage
Pew Research Center — "Key findings about how Americans view AI" (March 2026). The source for the 64% jobs figure, the 10% "more excited than concerned" figure, and the broader picture of the public-expert divide on AI. Read the findings →
Pew Research Center — "34% of US adults have used ChatGPT" (June 2025). The adoption and demographic breakdown cited in the FAQ. Survey of 5,123 US adults, Feb–March 2025. Read the piece →
Pew Research Center — "Views of risks, opportunities, and regulation of AI" (April 2025). The public-expert divide on AI's economic impact. Read the study →
Gallup — Gen Z AI adoption & skepticism (2026). The shift in Gen Z sentiment: "excited" 36%→22%, "angry" 22%→31%. Conducted for Walton Family Foundation and GSV Ventures. Read the poll →
Stanford HAI — AI Index Report 2026 (Public Opinion chapter). The canonical annual snapshot of public sentiment on AI. Read the chapter →
Energy, cost & environment
International Energy Agency — "Energy and AI" report (2025). The 2% of global electricity figure, the comparison to EVs, air conditioning, and industrial demand, and AI's share within data centers (5–15% today, possibly 35–50% by 2030). Read the report →
Epoch AI — "How much energy does ChatGPT use?" Research-grade analysis of per-query energy. The source for the 0.3 Wh–19 Wh range depending on model. Read the analysis →
Carbon Brief — "Five charts that put data-centre energy use and emissions into context." Visual comparison of AI's energy share against other sectors. Read the piece →
Cottier et al. — "The Rising Costs of Training Frontier AI Models" (arXiv, 2024). Documented training costs: GPT-4 ~$78M, Gemini Ultra ~$191M, Llama 3.1 405B ~$170M. Read the paper →
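For a sense of scale, the per-query range from the Epoch AI analysis above can be multiplied out. A minimal back-of-envelope sketch; the daily query volume here is an illustrative assumption, not a figure from any source on this page:

```python
# Back-of-envelope: what a 0.3-19 Wh per-query range implies at scale.
# ASSUMPTION: 1 billion queries/day is illustrative, not a sourced figure.
QUERIES_PER_DAY = 1_000_000_000

low_wh, high_wh = 0.3, 19.0  # per-query range from the Epoch AI analysis

# Daily totals in megawatt-hours (1 MWh = 1,000,000 Wh)
low_mwh = low_wh * QUERIES_PER_DAY / 1e6
high_mwh = high_wh * QUERIES_PER_DAY / 1e6

print(f"Daily energy: {low_mwh:,.0f} MWh to {high_mwh:,.0f} MWh")
# → Daily energy: 300 MWh to 19,000 MWh
```

The spread is a factor of ~60, which is why a single "energy per ChatGPT query" number without a stated model is close to meaningless.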
Deepfakes & misinformation
Sumsub — Identity Fraud Report 2025–2026. The source for deepfake volume growth (UK government projection of 8 million in 2025, up from 500,000 in 2023) and the 700% jump in deepfake-powered fraud. Released November 2025. Access the report →
World Economic Forum — "How cognitive manipulation and AI will shape disinformation in 2026." Context on the systemic impact of deepfakes and synthetic media on elections and public discourse. Read the article →
Labor & work changes
McKinsey Global Institute — "Generative AI and the Future of Work in America." Base research on AI-driven task automation, workforce displacement patterns, and reemployment timelines. Read the research →
World Economic Forum — "Future of Jobs Report 2025." Base figures on job creation vs. displacement projections (170M new / 92M displaced between 2025 and 2030). Read the report →
Brains, cognition & AI use
MIT Media Lab — "Your Brain on ChatGPT" (June 2025). EEG study on cognitive engagement during LLM-assisted writing. 54 subjects, three groups (LLM / search / brain-only). LLM users showed the weakest brain connectivity and the lowest self-reported ownership of their work. Note: preprint at release; small sample size. Read the paper →
Sparrow, Liu & Wegner — "Google Effects on Memory," Science, 2011. The foundational paper on how search engines change what we remember. DOI: 10.1126/science.1207745. Access the paper →
Maguire et al. — "Navigation-related structural change in the hippocampi of taxi drivers," PNAS, 2000. The classic London cab driver study on how intensive spatial memory use changes brain structure. Access the paper →
How AI actually works
Brown et al. — "Language Models are Few-Shot Learners" (arXiv, 2020). The foundational GPT-3 paper — establishes how modern LLMs are built and what they're doing under the hood. Access the paper →
Bender, Gebru, McMillan-Major & Shmitchell — "On the Dangers of Stochastic Parrots" (ACM FAccT 2021). The canonical argument for why LLMs are sophisticated pattern-matchers, not minds. Access the paper →
Vectara Hallucination Leaderboard. A continuously updated benchmark of how often various AI models make up facts. Live, public, open-source. View the leaderboard →
Stanford HAI — AI Index Report 2026. The annual comprehensive snapshot of the state of AI, including benchmarks, hallucination data, public opinion, and economic effects. Read the report →
A note on sources I didn't use
A lot of articles you'll find about AI online are marketing pages, listicles, or opinion pieces dressed up with a citation. I've kept those out of this site. Everything cited here links to a research paper, a primary government or think-tank report, or a well-documented press release from a primary institution. If I cite a statistic and can't find a primary source I trust, I cut the statistic.