Fifteen minutes.
No sign-up. No sales pitch.
You've heard AI will take your job, destroy the planet, and start thinking like us. Some of that's true. Most of it isn't. This walks you — honestly, without hype or horror — through what AI actually is, the real worries worth your attention, and three small questions that will make you good at using it.
It's for parents, retirees, job seekers, teachers, small business owners, skeptics, curious professionals, and anyone who's tired of feeling lost in the AI conversation. No prior experience needed. By the end, you'll have one thing worth trying tonight.
A mental model for what AI is. Three questions to ask every time. A real, verified attempt at a task you've been avoiding. And a short, honest look at the worries that matter.
It's a pattern-matcher, not a mind.
Here's the honest version of what ChatGPT is doing when you type to it. It's guessing the next word. Over and over. Very well — but guessing.
It was fed trillions of words — books, websites, conversations, news archives. It learned patterns in how words fit together. Not facts. Patterns. When you ask it something, it picks the most probable next word, then the word after that, then the word after that — based on everything it's read. Watch:
It doesn't know what it's saying. It doesn't understand you. It's not thinking. It's very fancy autocomplete. Every answer is its best guess — sometimes brilliant, sometimes confidently wrong. And it can't tell which.
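If you're curious what "picking the most probable next word, over and over" looks like mechanically, here is a toy sketch. The word table and probabilities are invented for illustration; a real model computes these scores with a neural network over a vocabulary of roughly 100,000 tokens at every step.

```python
import random

# Toy next-word table: for each short context, made-up probabilities
# for the word that follows. This is the entire "knowledge" of the toy
# model -- patterns in which words follow which, nothing more.
NEXT_WORD = {
    "once upon": {"a": 1.0},
    "upon a":    {"time": 0.9, "hill": 0.1},
}

def next_word(last_two: str) -> str:
    """Pick the next word, weighted by its probability."""
    options = NEXT_WORD[last_two]
    words = list(options)
    return random.choices(words, weights=[options[w] for w in words])[0]

text = ["once", "upon"]
for _ in range(2):  # extend the sentence two words, one guess at a time
    text.append(next_word(" ".join(text[-2:])))
print(" ".join(text))  # usually "once upon a time" -- occasionally "hill"
```

Notice there is no fact-checking step anywhere: the toy model, like the real one, just keeps choosing plausible continuations.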
AI is the smart kid in class who never says "I don't know." They'll always give an answer. Sometimes brilliant. Sometimes confidently wrong. You can't tell which from their face.
Want to go deeper on how LLMs actually work? See Sources → for the foundational papers (Brown et al., Bender et al.).
Now watch AI be wrong — with total confidence.
Here's a question AI reliably flubs. Read the answer. Then, before we tell you, decide: do you trust it?
Let's count together — every R highlighted:
There are three Rs, not two. The AI said it with complete confidence anyway. That's the whole problem — it can't tell the difference between a good guess and a bad one. Neither can you, from just reading the answer.
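For contrast, ordinary code counts the actual characters and gets this right every time. That's exactly why the failure is instructive: the model never sees individual letters, only multi-character tokens, so letter-counting isn't a pattern it learned well.

```python
word = "strawberry"
r_count = word.count("r")  # scans the real characters, one by one
print(f"'{word}' contains {r_count} r's")  # 3
```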
One more thing — it's not deterministic. The same question, asked again, will often give a different wrong answer. Try it:
Another exhibit: the invented citation. Ask AI for sources on an obscure topic, and it will happily give you three professional-looking references. Tap each to find out which are real:
It sounded right. It wasn't. It didn't flag its uncertainty. It made up citations that could be mistaken for real ones. That's the problem you're here to solve — not by trusting less, but by checking better.
Want to dig into how models hallucinate? See Sources → for the current hallucination benchmarks and primary research.
Question 1: What do you actually want?
Most disappointment with AI is the user's fault, not the AI's. Vague in, vague out. The fix takes ten seconds: tell it the who, the what, and the why — like you're briefing an intern who has never met you.
Drag the slider below to watch how a prompt transforms — and how AI's answer improves with it. This is scripted, but the pattern is real: every extra bit of specificity you add pays off.
At Level 1, AI gives you a Wikipedia paragraph. Not useful. Move the slider up.
Every level you add — why you're asking, what you want back, what matters to you specifically — the answer improves. Not by a little. By a lot.
Question 2: What would correct look like?
Before you trust an AI answer, ask yourself: how will I check this? If the answer is "I'd know it when I see it" — be on guard. If you have a way to verify — great. Quick practice:
If you have no way to verify a high-stakes answer, that's not an AI question — that's a human-expert question. AI can help you prepare (vocabulary, better questions to ask). It shouldn't be the final word.
Question 3: What can't it know?
Four things AI fundamentally can't see. These are the blind spots where AI will confidently make things up. Tap any card to expand.
Before you trust an answer that matters, ask out loud: "What does it not know here?" If the answer affects a decision — verify that part with a human or a real source.
For hallucination benchmarks and current data on where AI gets things wrong, see Sources →.
Now use all three, on something real.
Before you type your own prompt — one more demonstration. This time, we show you the single biggest quality upgrade you can make. It's not a better prompt. It's giving AI the thing itself.
Context changes everything
Same prompt. Same question. One version gets nothing but the question; the other gets the lease document attached. Watch what the AI actually produces.
Now build your own
Pick something you've been avoiding. A bill you don't understand. A hard email. A document that's been sitting. Type it below. The three questions turn it into a prompt you can paste anywhere.
Before you send: don't paste anything private you wouldn't email a stranger. Redact names, account numbers, social security numbers. AI works just fine with "[NAME]" and "[ADDRESS]".
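If you paste text often, the mechanical part of redaction can even be scripted. A minimal sketch with illustrative patterns only; the regexes below are examples, not a complete privacy tool, and names or addresses still need a human eye before you hit send.

```python
import re

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before pasting
    into an AI chat. Illustrative patterns only -- always review the
    output yourself before sending."""
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)      # US SSNs
    text = re.sub(r"\b\d{12,19}\b", "[ACCOUNT]", text)          # long account/card numbers
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    return text

print(redact("Reach Jane at jane.doe@example.com, SSN 123-45-6789."))
# Reach Jane at [EMAIL], SSN [SSN].
```

AI handles placeholders like `[SSN]` and `[EMAIL]` just fine; the answer quality doesn't suffer.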
Five myths. Five facts.
The media is really good at scaring you about AI. Some of it's real; most of it's overblown. Here's a straight swap: five common myths you've probably heard (and why they don't hold up), and five facts worth knowing.
This is a snapshot of AI in 2026. Things are moving fast: models are getting better, guardrails are catching up, norms are forming. Treat this as today's map — not a permanent warning. Most of these issues will soften over the next few years. Your habits — the three questions — are what'll still be useful.
“AI will take your job.”
The loudest version of the jobs worry — the one in headlines and LinkedIn posts.
Why this doesn't hold up: jobs reshuffle; they don't disappear overnight. The pattern is identical to email, Excel, and Google: the skills that mattered changed. Entry-level writing, research, and basic analysis are under real pressure now. The response that works: learn the tool. The people using AI well at work get paid more; the people who avoid it end up competing against the people who use it.
AI isn't deterministic. Same question, different answer.
Ask the same question twice and you'll usually get two different responses, sometimes meaningfully different ones. AI samples from probabilities each time; it isn't a calculator returning the same output for the same input.
Why this matters: you can sometimes "re-roll" a bad answer and get a better one. But you also can't reliably reproduce a good or bad answer later. Two people asking the same question will get different answers. Don't assume the first response is the final answer.
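The sampling behind this is simple to sketch. The three phrasings and their probabilities below are invented for illustration; a real model samples like this over its whole vocabulary, one token at a time.

```python
import random

# Toy illustration of sampling: one prompt, a made-up probability
# distribution over three phrasings of the same answer.
probs = {
    "Paris.": 0.55,
    "The capital of France is Paris.": 0.30,
    "That would be Paris.": 0.15,
}

def sample_answer(seed=None):
    """Draw one response at random, weighted by probability."""
    rng = random.Random(seed)
    choices = list(probs)
    return rng.choices(choices, weights=[probs[c] for c in choices])[0]

# The same "prompt", asked three times, can come back phrased differently:
print([sample_answer() for _ in range(3)])
```

Each run draws fresh, which is why a "re-roll" sometimes helps and why yesterday's exact answer may never reappear.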
“ChatGPT is destroying the planet — power and water.”
The viral framing: "a single prompt uses a bottle of water"; "AI is boiling the oceans."
Power, in actual numbers:
- A typical ChatGPT-4o query uses ~0.34 Wh (OpenAI's published figure; independent Epoch AI analysis lands at ~0.3 Wh). That's a hairdryer for one second, or less than 2% of a phone charge.
- Newer reasoning models (GPT-5, Claude 4, Gemini 2.5 Pro) average 2–20 Wh per query — at the high end, about one phone charge.
- Efficiency is improving fast. Per-query energy has dropped roughly an order of magnitude in two years. New chip architectures (NVIDIA Blackwell), mixture-of-experts models, distillation, and quantization are making frontier capability cheaper every quarter — not just in dollars, but in watts.
- A single 10-minute hot shower uses more electricity than a few thousand ChatGPT queries. An hour of central A/C easily tops all the AI querying an average person does in a month.
- All data centers worldwide — AI + streaming + email + search — are about 2% of global electricity. Between now and 2030, EVs, new factories, and A/C will each add more new demand than AI data centers will.
Water, in actual numbers:
- Per query: roughly a fifteenth of a teaspoon (~0.000085 gallons) — OpenAI's own number. Viral "bottle of water per prompt" figures come from research that includes the entire electricity-generation supply chain, not water the data center literally consumes.
- Hyperscaler data centers (AWS, Azure, Google Cloud) increasingly use closed-loop cooling — water is recirculated, not "used up." Older evaporative systems do consume water, but the industry is moving away from them, fast.
- The major cloud providers have all committed to being "water positive" by 2030 — meaning they plan to replenish more water than they consume, through watershed restoration and efficiency projects.
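The per-query figures above are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch: the 0.34 Wh and 0.000085 gallon figures are the published estimates quoted above, while the 1.5 kWh per 10-minute hot shower is a typical electric-water-heater assumption, not a universal constant.

```python
# Back-of-the-envelope checks for the figures above.
WH_PER_QUERY = 0.34        # OpenAI's published per-query estimate
GAL_PER_QUERY = 0.000085   # OpenAI's per-query water figure
SHOWER_WH = 1500           # ~1.5 kWh for a 10-min shower (assumption; varies)
TSP_PER_GALLON = 768       # 128 fl oz per gallon x 6 tsp per fl oz

queries_per_shower = SHOWER_WH / WH_PER_QUERY
tsp_per_query = GAL_PER_QUERY * TSP_PER_GALLON

print(f"One hot shower ~ {queries_per_shower:,.0f} queries")   # roughly 4,400
print(f"Water per query ~ {tsp_per_query:.3f} teaspoons")      # about 1/15 tsp
```

Swap in your own numbers; the conclusion survives large changes in the assumptions.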
Real. Worth watching. Not the dominant story in climate change — and getting better, not worse, per unit of useful output.
34% of US adults had used ChatGPT by early 2025. 58% of those under 30.
That share roughly doubled between 2023 and 2025. You're not "behind" if you haven't. You're also not "ahead" if you've used it without thinking.
Why this matters: it's not an early-adopter curiosity anymore. In most workplaces and classrooms, AI is already present. Learning to use it deliberately — as a tool, with the three questions — is the practical skill. Learning to ignore it entirely is increasingly difficult, even if you wanted to.
“AI is conscious, or will be soon, and will take over.”
The Terminator frame. Feels urgent. Powered by movies, not research.
Why this doesn't hold up: it's a pattern-matcher. Very fancy autocomplete. It has no goals, no plans, no wants. You saw this in Station 2 — it's picking the next most-probable word, over and over. The real governance concerns are about the humans building and deploying these systems: who pays, what data they train on, what they're used for, who's accountable. Worth watching. Not sci-fi.
Deepfakes crossed a threshold in 2025. Fraud using them rose 700% in a year.
The UK government projected 8 million deepfakes shared online in 2025, up from 500,000 in 2023. Consumer-grade tools can now produce convincing synthetic video and audio of real people. The good news: detection tools, platform content-authenticity standards, and cryptographic provenance (like C2PA) are catching up, and device manufacturers are starting to sign legitimate photos and video at capture.
Why this matters, today: if a clip makes you feel strong emotion — rage, fear, disgust — wait 30 seconds before sharing. Check it on two trusted sources. The people hardest to scam aren't smarter; they're slower.
“AI knows everything on the internet — it's a giant database.”
"It was trained on all of the internet, right? So I can just ask it anything."
Why this doesn't hold up — and why that's actually good news:
AI doesn't store the internet. It stores numbers — billions of them — that help it predict how text should continue. Think of it as extreme compression. The training data runs into the tens of terabytes; the model itself is maybe one or two terabytes; the internet is measured in zettabytes — vastly larger. The model isn't a library looking things up. It's a pattern-recognizer that learned how language, ideas, and concepts fit together, and compressed those patterns into something small enough to run on a single server.
That's why AI is so useful:
- It's exceptional at pattern work: explaining a concept, summarizing a document, rephrasing a draft, brainstorming, connecting ideas across fields, translating between expert language and plain English. These are the jobs patterns do well — and they're most of what you'll actually want help with.
- It's weaker at precision work: specific dates, exact numbers, named citations, rare or very recent facts. That's the thin slice where you double-check.
- Modern models add web search, which closes much of that precision gap for current info — but the core engine is still a pattern-predictor, not a database lookup.
Use it like a brilliant generalist who's read widely. Great for understanding and drafting. Verify the specific facts that will actually drive a decision. That's not avoidance — that's using the tool well, and it's where almost all the real value comes from.
Your privacy depends on which AI and which plan.
ChatGPT's free tier may use your conversations to improve the model unless you opt out. Claude says it doesn't train on consumer chats by default. Enterprise and paid business plans usually have stronger protections. Defaults shift, rules change, and they'll keep changing.
Two habits that always work: (1) don't paste anything you wouldn't email to a stranger — names, account numbers, SSNs, medical records. (2) Once a year, check the Data controls settings in each AI tool you use.
“If AI said it, it must be true — it's basically a smart expert.”
The most common myth, and the most important one to get past.
Why this doesn't hold up, today: AI will sometimes invent facts, citations, and statistics with the same confident tone it uses for correct answers. Good answers and bad answers come from the same engine. The good news: newer models are measurably better at flagging their own uncertainty — hallucination rates have dropped sharply year-over-year, and more models now explicitly say "I'm not sure" or offer to check. Still, for anything that matters, verify the part that affects a decision. That habit will serve you regardless of how good the models get.
The quality of the answer depends mostly on the quality of the prompt.
The single most reproducible AI result: vague prompts get vague answers; specific prompts get specific answers. Three questions cover it — what do I actually want? What would a correct answer look like? What can it not know? You don't need to become a "prompt engineer." You just need those three habits.
Why this matters: most people's disappointment with AI is actually disappointment with their own first draft of the question. The skill is making the prompt good enough that the AI could give a useful answer. That's a skill you can build in a week and keep forever.
Different AI models give different answers. That's normal.
Same question, two AIs, two different answers — often with the same confident tone. This isn't a bug; it's how the technology works. Different companies train on different data, with different values, different methods, and different safety tuning. A question about medication dosing, a controversial topic, or a creative task will read noticeably different across providers.
Gas is typically cheaper to operate long-term if you have a gas line. Electric (especially heat pump water heaters) is more efficient per unit of energy but depends on electricity prices. Gas has a shorter lead time for hot water recovery.
Heat pump (electric) water heaters are the best choice in most scenarios — 3-4x more efficient than standard electric, and now often cheaper to run than gas given recent gas price volatility and available federal tax credits. Gas is only cheaper if you have an old-school tankless gas system.
Why this matters: neither AI is "wrong" — they emphasize different things, and their training probably ended at different points. On anything where the answer genuinely affects your decision, it's worth asking the same question of two tools and seeing where they agree vs. disagree. The disagreement is usually where the actual nuance is.
Every number on these cards links to a primary source. The full list is on Sources → (Pew, IEA, Sumsub, Stanford HAI, McKinsey, etc.).
Got kids? One quick detour.
If you have children at home — or grandchildren — there's a quieter worry worth a minute. If you don't, skip ahead. Nothing to miss.
What AI does to brains — kids' and yours.
Researchers have been studying this for decades. When we offload a skill to a tool, we get weaker at that skill. Navigation is the classic case: London taxi drivers, who navigate entirely from memory, show measurable enlargement of the brain's spatial-memory centers (Maguire et al., PNAS 2000); handing that work to GPS removes exactly that exercise. And we remember where information is stored, not the information itself (Sparrow et al., Science 2011).
A 2025 MIT Media Lab EEG study found people using ChatGPT to write essays showed weaker brain connectivity than those using search or nothing. They got a good essay. They got less from writing it.
"Did you try first?"
If yes → AI becomes a tutor. That's powerful. If no → they're skipping the learning. That's the part that should concern you.
Not "keep kids away from AI." That's unrealistic and unfair. Teach them to use it after the struggle, not instead of it. The muscle grows during the struggle. AI helps them make sense of what got learned. For the full framework, see our For Parents page.
One real thing. Before bed.
You've got the mental model. You've got the three questions. Now pick one real thing — something you've been avoiding — and do it. Tonight.
What now?
That's the journey. If you want to go deeper, the reference pages are here whenever you need them:
- The Three Questions → The core skill, in depth.
- Try Tonight → 8 real example prompts.
- For Parents → Kids, cognition, and "did you try first?"
- FAQ → Plain answers to the loud worries.
- Sources → Every claim, linked to its source.
I run this as a free 60-minute live workshop for libraries, community centers, nonprofits, and civic groups. Questions or workshop requests: robert@completeideas.com. I answer everyone.