Learn AI by CompleteIdeas
Station 1 of 10
01
Welcome

Fifteen minutes.
No sign-up. No sales pitch.

You've heard AI will take your job, destroy the planet, and start thinking like us. Some of that's true. Most of it isn't. This walks you — honestly, without hype or horror — through what AI actually is, the real worries worth your attention, and three small questions that will make you good at using it.

It's for parents, retirees, job seekers, teachers, small business owners, skeptics, curious professionals, and anyone who's tired of feeling lost in the AI conversation. No prior experience needed. By the end, you'll have one thing worth trying tonight.

What you'll get

A mental model for what AI is. Three questions to ask every time. A real, verified attempt at a task you've been avoiding. And a short, honest look at the worries that matter.

02
What it actually is

It's a pattern-matcher, not a mind.

Here's the honest version of what ChatGPT is doing when you type to it. It's guessing the next word. Over and over. Very well — but guessing.

It was fed trillions of words — books, websites, conversations, news archives. It learned patterns in how words fit together. Not facts. Patterns. When you ask it something, it picks the most probable next word, then the word after that, then the word after that — based on everything it's read. Watch:

Live demonstration (click to generate word by word): "The cat sat on the ___", with the top candidates for the next word shown.

It doesn't know what it's saying. It doesn't understand you. It's not thinking. It's very fancy autocomplete. Every answer is its best guess — sometimes brilliant, sometimes confidently wrong. And it can't tell which.
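If it helps to see that loop as code, here's a toy sketch (nothing like a real model; every word and probability below is invented purely for illustration) of "pick the most probable next word, then the next":

```python
# Toy "language model": invented probabilities for the next word.
# A real model learns billions of such patterns from trillions of
# words of text; these numbers are made up for illustration only.
NEXT_WORD = {
    "the cat sat on the": {"mat": 0.5, "sofa": 0.2, "floor": 0.2, "moon": 0.1},
    "the cat sat on the mat": {"and": 0.6, "all": 0.4},
}

def most_likely_next(phrase):
    """Greedy decoding: always take the single most probable next word."""
    candidates = NEXT_WORD.get(phrase)
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("the cat sat on the"))  # mat
```

Notice there is no meaning anywhere in that loop: just a lookup and "pick the biggest number." Scale it up enormously and you get something that sounds fluent without understanding anything.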

One mental model to keep

AI is the smart kid in class who never says "I don't know." They'll always give an answer. Sometimes brilliant. Sometimes confidently wrong. You can't tell which from their face.

Want to go deeper on how LLMs actually work? See Sources → for the foundational papers (Brown et al., Bender et al.).

03
Confident, and wrong

Now watch AI be wrong — with total confidence.

Here's a question AI reliably flubs. Read the answer. Then, before we tell you, decide: do you trust it?

Count the letter "r" in the word "strawberry".
Trust it?
It was wrong.

Let's count together — every R highlighted:

strawberry

There are three Rs, not two. The AI said it with complete confidence anyway. That's the whole problem — it can't tell the difference between a good guess and a bad one. Neither can you, from just reading the answer.

One more thing — it's not deterministic. The same question, asked again, will often give a different wrong answer. Try it:
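Why does the answer change? Because chat models don't always take the top candidate; they sample, weighted by probability. High-probability words come up more often, but not every time. A toy sketch (the candidate answers and weights here are invented, just to show the mechanism):

```python
import random

# Invented candidate answers with made-up probabilities.
# Sampling picks the likelier ones more often -- but not always.
CANDIDATES = {"two": 0.6, "three": 0.3, "one": 0.1}

def sample_answer():
    """Weighted random choice: the same question can yield different answers."""
    words = list(CANDIDATES)
    weights = list(CANDIDATES.values())
    return random.choices(words, weights=weights)[0]

# Ask the "same question" five times: the answer can differ each run.
print([sample_answer() for _ in range(5)])
```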

One more exhibit — the invented citation. Ask AI for sources on an obscure topic, and it will happily give you three professional-looking references. Tap each to find out which are real:

The takeaway

It sounded right. It wasn't. It didn't flag its uncertainty. It made up citations that could be mistaken for real ones. That's the problem you're here to solve — not by trusting less, but by checking better.

Want to dig into how models hallucinate? See Sources → for the current hallucination benchmarks and primary research.

04
The first question

Question 1: What do you actually want?

Most disappointment with AI is the user's fault, not the AI's. Vague in, vague out. The fix takes ten seconds: tell it the who, the what, and the why — like you're briefing an intern who has never met you.

Drag the slider below to watch how a prompt transforms — and how AI's answer improves with it. This is scripted, but the pattern is real: every extra bit of specificity you add pays off.

Specificity Level 1 · Vague
Levels: Vague · Hint · Shaped · Detailed · Dialed in
Your prompt
help me with my lease
AI reply
Leases are legal contracts between you and your landlord. They typically cover rent, duration, security deposit, and rules. Would you like tips on what to look for?

At Level 1, AI gives you a Wikipedia paragraph. Not useful. Turn the dial.

The pattern

With every level you add — why you're asking, what you want back, what matters to you specifically — the answer improves. Not by a little. By a lot.

05
The second question

Question 2: What would correct look like?

Before you trust an AI answer, ask yourself: how will I check this? If the answer is "I'd know it when I see it" — be on guard. If you have a way to verify — great. Quick practice:

Round 1 of 3
If you can't check it

If you have no way to verify a high-stakes answer, that's not an AI question — that's a human-expert question. AI can help you prepare (vocabulary, better questions to ask). It shouldn't be the final word.

06
The third question

Question 3: What can't it know?

Four things AI fundamentally can't see. These are the blind spots where AI will confidently make things up. Tap any card to expand.

The habit

Before you trust an answer that matters, ask out loud: "What does it not know here?" If the answer affects a decision — verify that part with a human or a real source.

For hallucination benchmarks and current data on where AI gets things wrong, see Sources →.

07
Put it together

Now use all three, on something real.

Before you type your own prompt — one more demonstration. This time, we show you the single biggest quality upgrade you can make. It's not a better prompt. It's giving AI the thing itself.

Context changes everything

Same prompt. Same question. One version gets nothing but the question; the other gets the lease document attached. Watch what the AI actually produces.

The prompt
Summarize the three lease renewal changes I'm being asked to accept, and flag anything worth pushing back on.
AI reply — without context
Lease renewals typically include changes to rent, term length, security deposit, pet fees, or shared responsibilities. Common things to watch for include rent increases above local market averages, new fees, changes to maintenance responsibilities, or restrictions on subletting or guests. To give you a specific analysis, I'd need to see the actual clauses. Could you paste the lease language?
Same prompt. Same question. Without the document, AI gives you a generic primer. With the document, AI gives you a specific, useful analysis — and it knew which clause to flag. This is called grounding, and it's the single biggest quality upgrade most people can make.

Now build your own

Pick something you've been avoiding. A bill you don't understand. A hard email. A document that's been sitting. Type it below. The three questions turn it into a prompt you can paste anywhere.

Your prompt — paste into ChatGPT, Claude, Gemini, or Copilot


Before you send: don't paste anything private you wouldn't email a stranger. Redact names, account numbers, social security numbers. AI works just fine with "[NAME]" and "[ADDRESS]".

08
Real worries. Real handles.

Five myths. Five facts.

The media is really good at scaring you about AI. Some of it's real; most of it's overblown. Here's a straight swap: five common myths you've probably heard (and why they don't hold up), and five facts worth knowing.

Ten cards, alternating myth → fact. Click through all ten before moving on.

This is a snapshot of AI in 2026. Things are moving fast: models are getting better, guardrails are catching up, norms are forming. Treat this as today's map — not a permanent warning. Most of these issues will soften over the next few years. Your habits — the three questions — are what'll still be useful.

Every number on these cards links to a primary source. The full list is on Sources → (Pew, IEA, Sumsub, Stanford HAI, McKinsey, etc.).

09
Optional · for parents

Got kids? One quick detour.

If you have children at home — or grandchildren — there's a quieter worry worth a minute. If you don't, skip ahead. Nothing to miss.

What AI does to brains — kids' and yours.

Researchers have been studying this for decades. When we offload a skill to a tool, we get weaker at that skill. London taxi drivers who navigate from memory develop measurably larger spatial-memory regions of the brain (Maguire et al., PNAS 2000); studies of habitual GPS users point the other way. We remember where information is stored, not the information itself (Sparrow et al., Science 2011).

A 2025 MIT Media Lab EEG study found people using ChatGPT to write essays showed weaker brain connectivity than those using search or nothing. They got a good essay. They got less from writing it.

One rule, before any homework

"Did you try first?"
If yes → AI becomes a tutor. That's powerful. If no → they're skipping the learning. That's the part that should concern you.

Not "keep kids away from AI." That's unrealistic and unfair. Teach them to use it after the struggle, not instead of it. The muscle grows during the struggle. AI helps them make sense of what got learned. For the full framework, see our For Parents page.

10
You made it

One real thing. Before bed.

You've got the mental model. You've got the three questions. Now pick one real thing — something you've been avoiding — and do it. Tonight.

What now?

That's the journey. If you want to go deeper, the reference pages are here whenever you need them.

Need help? Hosting a group?

I run this as a free 60-minute live workshop for libraries, community centers, nonprofits, and civic groups. Questions or workshop requests: robert@completeideas.com. I answer everyone.
