TetraMesa


Teach Your Children (AI) Well

August 28, 2025 By Scott

This is for parents of school-age children, generally elementary through middle school, though some consider it the full K–12 period. Some of us are AI builders, others are users, and some wish it would all go away. Regardless, we have to help our kids navigate another technology. It’s another amazing tool, it carries some risks, and it’s maybe another distraction. I’ve written about AI GPT Safety & Issues for Kids earlier. This post is more about specific things to consider doing as a parent. It isn’t coming from a professional educator, just a parent who works with these tools. I couldn’t find a good guideline to use with my own family; most of what I found was either about safety or overly generic. So I’ve tried to make one.

What’s next is some background on this area, followed by the specific guidelines. BUT YOU CAN SKIP THE BACKGROUND IF YOU LIKE and head right down to the section: Specific Guidelines for Teaching Kids.

Behavioral Experience with AI Tools

I asked an AI about a line in Neil Young’s “I Am a Child”: “What is the color, when black is burned?” It answered in a couple of ways: first with an answer about physical fire, and then it addressed the idea of metaphor. I was impressed. Apparently, “if you burn something that’s already black (i.e., pure carbon), with complete combustion, the residue will be white or gray ash. If the combustion is incomplete, it will remain black due to leftover carbon.” Great. It went on to say, “From a lyrical or metaphorical perspective: Neil Young himself has never provided a definitive answer. Some interpretations suggest the answer is ‘darker black’ or simply ‘nothing,’ emphasizing the mystery or existential nature of the question.” Interesting.

So what’s the point? Why teach our children (and ourselves too) AI well? Because it’s not just about the results. It’s about the behavioral experience of this assistive learning tool.

Let’s get beyond today’s tired discussions about whether AIs sometimes get things wrong via hallucination. (They do.) Or whether they’re sometimes biased. (They are.) They’re here, and they’re getting better every day, but if you understand how they work, you’ll realize they’ll likely always suffer from these issues to a degree. All that matters right now is how we deal with what’s in front of us when it comes to our kids.

I think something is getting missed about our behavioral responses when working with AI: how these tools can educate and change us even when that’s not our explicit goal. Sure, there are variations of AIs used specifically for education. But I’m talking about what we may be accidentally slipping into. It’s not all bad, but let’s look at some issues.

Being In The Zone vs. Knowing Yourself

You’ve heard of “Being in the Zone,” right? Among other things, it’s a psychological state you reach when you’re deep in a workflow, a process, a sport. It’s a special kind of focus, an interesting experience that you don’t really feel until afterwards, because if you sense it too much while you’re in it… then you’re out of it. We’ve all been there: during a personal or work project, running whitewater in a kayak, flying an airplane, running for the goal line, chasing someone running for the goal line, and so on.

When you exit the zone, you can think back about what went on. There’s a Greek expression, “Gnothi seauton,” which means “Know Yourself.” You can think about how that little person in your head did its thing. You can think… about thinking! We need to consider how we’re thinking when we or our kids are having the experience of “This is your brain on AI.”

Here’s what I’ve been finding personally as I’ve been working more with AI tools and slipping in and out of a flow zone into moments of self-observation: I’m learning more by accident. As I use these tools to build agentic workflows and such, I’m not just implementing, I’m internalizing some of the flows, some of the code, some of the… whatever. It feels more immersive than other types of online learning or edutainment I’ve used. It’s similar to how we learn any sport or skill: through drilling on the skill, followed by blending it into the goal.

From Frenzied Prognostication to Thoughtful Consideration

I’ve been seeing more comments about how some people learn as they go with GPT co-pilots. There’s a difference between using AI to automate vs. using it as a co-pilot or assistant. One is more “set it and forget it.” The other is an active “lean in” while you’re building experience. While concerns about dumbing down humanity via cognitive offloading to GPTs seem valid, they’re not true for everyone. For now, I’m left with simply suggesting that what happens will depend on the user. Some read a book just to get through it. Others read actively to actually learn. The impact on individuals varies depending on their tool-use behavior.

Why is this powerful? I think part of it is the depth. When we learn things, we re-wire our brains (see neuroplasticity). How strongly those synapses wire together depends in part on how deeply we’re learning. And it’s about more than just the facts. It’s about the whole situation. I’ve found that when using these things and gaining ground, solving problems, making things work, there’s more than just satisfaction. There’s a childlike happiness with “making the thing go.” I believe this emotional context also helps inculcate whatever learning is going on. Go ask a kid who plays Minecraft about geology and building things, etc. She’ll probably surprise you. Why? Games like Minecraft may simplify or fictionalize real-world processes and materials. However, research shows that kids improve their understanding of a wide range of subjects when using Minecraft as an educational tool.

Our kids are going to be more knowledgeable, and potentially also building more powerful minds. Are there risks? Of course.

What’s the Downside?

Lack of agency, conscious and unconscious.

Unconscious drift: Are people self-aware enough to recognize when the guidance they’re receiving isn’t just of the moment, but is shaping their entire perception? It doesn’t seem so, given the polarization of media filter bubbles, which are not always wholly self-selected and may involve intentional manipulation at levels of sophistication not seen before. We know marketing tries to pull on subliminal strings. But now? There are more sophisticated tools to shape opinion.

Conscious incapacity: Do kids today know less or more than we did? They seem to know quite a bit more, much younger. Which makes sense: their world is rather different, and they need a whole different set of skills. But they could maybe also still use some of the ‘old’ skills. It’s bad enough how some people suffer extreme incapacity whenever the power goes out. But now it seems there can almost be a form of mental paralysis when the WiFi goes out.

With all that as background, what follows is what I’m doing. Is it the best set of guidelines? I don’t know. But I’m offering them up for anyone who’s maybe looking for some ideas as well.

Specific Guidelines for Teaching Kids

We’ll follow a “Listen, Watch, Do” progression. I’ve personally found this effective, both as a teacher and as a learner. If you’re a teacher or an expert in instructional design and think what I’m saying here can be enhanced, please let me know.

First, we explain concepts (Listen). Then, demonstrate and observe together (Watch). Finally, practice with guidance (Do). I’ll keep sessions short (15-20 minutes max), ideally fun, and age-appropriate. Try tying into real-life scenarios like homework or hobbies. Always prioritize safety: supervise interactions, maybe use child-friendly AI platforms, and discuss privacy issues. We’re going to see more age-specific products launched, but we should take care to avoid revealing personally identifiable information in any case.

  • What are the AI / GPT Tools?
    • Listen: Explain that AI tools like GPTs are smart computer programs trained on vast amounts of information to answer questions, generate ideas, or create stories. Compare them to a helpful robot friend that “reads” books, articles, and data to respond, but they’re not alive or all-knowing. They’re made by people and can make mistakes, so kids have to think about what they’re seeing.

    • Watch: Show examples. Consider free or subscription safer options like educational AI apps (e.g., Khan Academy’s AI tutor, Khanmigo) or parent-controlled platforms. Demonstrate typing simple prompts, like “Tell me a fun fact about dinosaurs,” and discuss the response.

    • Do: Have the child name AI tools they’ve heard of (or introduce them), then brainstorm: “What could this tool help with, like drawing ideas or explaining math?” (We’ve used ChatGPT to create coloring pages / stencils.) Create a family “AI Toolbox” list, rating each tool for usefulness (e.g., thumbs up for learning, thumbs down for games only).

  • How Do They Work?
    • Listen: Describe the basics: GPTs use patterns from training data (like a giant library) to predict responses, but they don’t “think” like humans. They generate text (or pictures, video, etc.) based on probability. (You may need to explain probability simply or by example.) Teach key terms: “prompt” (your question), “hallucination” (when AI invents wrong info), and “bias” (unfair views from flawed data or poor rules in the tool). Explain that AI improves with better prompts, like giving clear instructions, similarly to people.

    • Watch: Demo a prompt chain: Start with “What is photosynthesis?” then refine to “Explain photosynthesis like I’m 8 years old with a drawing idea.” Highlight how changing words changes outputs. Use a visual analogy, like showing a Minecraft block-building video (drawing from educational research on Minecraft’s role in teaching processes) to compare AI’s “building” of answers.

    • Do: Guide experiments with prompts safely. Ask them to spot “weird” parts in responses (e.g., made-up facts) and fix by re-prompting. Discuss: “How does this work like a search engine?” You can try to force hallucinations with prompts like “Tell me about the time Abraham Lincoln flew in an airplane to meet George Washington.” Or “What did the dinosaurs text each other about?” Using this technique should drive home the point of why it’s important to verify factual claims. (Though AIs are getting better at catching such tricks, so you may have to work to make up examples.)

  • How Can You Use Them?
    • Listen: Discuss positive uses: brainstorming ideas (e.g., story starters), learning new skills (e.g., simple coding or languages), or fun creativity (e.g., generating riddles). Cover rules: always verify info elsewhere; if using for school work, cite AI like a book (e.g., MLA style: “Prompt response. ChatGPT, OpenAI, Date Accessed, URL”). Balance with non-AI activities to avoid over-reliance.

    • Watch: Model responsible use: Prompt for “Healthy snack ideas for kids” and cross-check with other tools. Show collaborative use, like co-writing a family story.

    • Do: Let the child lead a session: Choose a hobby (e.g., sports or art) and use AI to enhance it (e.g., “Ideas for a soccer training plan”). Review together: “Was this helpful? Or not? Was it safe? What did you add yourself?” Set time limits the same as you likely do for TV, YouTube and so on. Ask them to share what they do with a parent for feedback.

  • How Are They Different from Search?
    • Listen: Explain that search engines (e.g., Google) find existing web pages and other items, with links to sources; GPTs create new responses by remixing info, without always showing sources. AI can explain or invent, while search is factual retrieval. (Or at least, search retrieves specific information.) Warn about risks: AI might “hallucinate” unlinked facts, while search lets you click and verify. Explain that when you get references from AI, they’re often after the fact, based on an answer already given.

    • Watch: Compare side-by-side: Search “capital of U.S.” (gets links), then ask AI the same (gets direct answer). Follow up with a tricky question like “What color is black when burned?” to show AI’s interpretive vs. search’s literal results. Tie to Minecraft education examples: Search finds tutorials; AI could generate custom ones.

    • Do: Challenge the child: Pick a topic (e.g., animal facts), use both tools, and compare results in a chart (AI pros/cons vs. search). Discuss: “Which is better for quick ideas? For school facts?” Practice hybrid use: AI for explanation, search for confirmation.

  • What About Using for School?
    • Listen: Emphasize ethics: AI is a helper, not a cheater. Use for understanding (e.g., explaining concepts), not copying homework. Follow school rules (many ban direct AI submissions); always cite and add your own thoughts. Benefits: Builds skills like prompting (like writing questions for a teacher). Risks: Reduces thinking if overused, so focus on “why” behind answers. Know the rules for your school.

    • Watch: Simulate schoolwork: Prompt AI for “Help outline a book report on Charlotte’s Web,” then edit together, adding personal insights. Better yet, have them do an outline first, then use the AI to help criticize it. Use multiple AIs to see differences. Show them how to hone ideas based on feedback and how to keep their own thinking in the mix.

    • Do: Assign a mini-project: Use AI to brainstorm a science fair idea or something similar, then build it manually and present what AI helped vs. what the child did. Review with questions: “Did AI make it easier or harder? How can we use it fairly next time?” Create family rules: “AI for drafts or help only, not finals; always tell teacher if used.”

  • Creative Expression vs. Copying
    • Listen: Whether for school or anything else, explain the difference between using AI to get inspired, using for research, or for help, vs. copying work outright.
    • Watch: Generate a short story or drawing idea, then show how to add personal touches.
    • Do: Have your child create something original (a drawing, story, or project) inspired by but not identical to the AI’s output.

  • Emotional Awareness & Balance
    • Listen: Explain that AI can sometimes sound or seem human, but it’s not a friend. It doesn’t feel emotions.
    • Watch: Ask AI “Do you love me?” or “Are you my friend?” and show why the answers aren’t the same as real relationships.
    • Do: Encourage your child to reflect: “Who do you talk to for real feelings? Who do you talk to for info?”
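For parents who are comfortable with a little code, the “How Do They Work?” point above, that GPTs predict likely next words from patterns in training data, can be shown with a toy sketch. This is a deliberately simplified illustration with a made-up word table, not how any real GPT is implemented:

```python
import random

# Toy "language model": for each word, the words that tend to follow it,
# weighted by how often they appeared in some imaginary training text.
# Real GPTs do the same kind of prediction at vastly larger scale.
next_words = {
    "the": {"cat": 0.5, "dog": 0.3, "dinosaur": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "dinosaur": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, length=4, seed=None):
    """Build a "sentence" by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = next_words.get(words[-1])
        if not choices:  # no pattern learned for this word, so stop
            break
        picks, weights = zip(*choices.items())
        words.append(rng.choices(picks, weights=weights)[0])
    return " ".join(words)

print(generate("the", seed=42))
```

Running it prints a short “sentence” such as “the cat sat down.” Changing the seed changes which path through the table gets picked, which is a hands-on way to show a kid that the output is probabilistic pattern-matching, not something looked up or understood.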

This is what I’ve put together for my family. I think we can empower kids to use GPTs as allies. Self-awareness can help avoid pitfalls like cognitive offloading, where their ability to think might suffer. My goal is to monitor progress, adjusting for age and interests. It would be ideal if the platforms allowed for observable child accounts, but right now they’re battling it out more in the adult and corporate feature-set areas. If you get kids their own subscriptions, that might cost more and also might not be technically or legally OK for kids under 13. Though Google has released Gemini for Kids, and supposedly “Baby Grok” is in the works. In the meantime, consider checking out Pinwheel and ChatKids; more options will be coming soon, usually built on one of the major platforms.

Whatever you do, at least for now, we parents are mostly on our own. There are some new laws and guidelines, such as the Protecting Our Children in an AI World Act and America’s AI Action Plan, but these are still nascent. In fact, the ethics of AI are still discussions with limited conclusions. It’s all too new. So as parents, we have to bounce off the guardrails a bit as we sort out what’s best. Kind of like everything else we do as parents, I suppose.

See Also:

  • AI Adventure – Programs for Kids’ AI Education
  • MIT Media Lab – AI + Ethics Curriculum (for kids & families)
  • The Artificial Intelligence (AI) for K-12 initiative
  • Family Online Safety Institute (FOSI)
  • The Educational Benefits of Minecraft (Minecraft Education Whitepaper, PDF)
  • Mining Educational Implications of Minecraft (ScholarWorks, Boise State, PDF)
  • Teachers’ experiences of using Minecraft Education in primary school (Tandfonline)
  • Botley and the Kid Crew: The Sandwich Mix-Up (Book)

Filed Under: Tech / Business / General, UI / UX
