TetraMesa

AI GPT Safety & Issues for Kids

June 4, 2025 By Scott

Does the latest gee-whiz AI tech create some special issues for kids? I think so. Like anything else, to what degree varies by age and personality, but some things apply to everyone. As a dad, I got to thinking about this based on a LinkedIn chat about kids and devices in general. I’d thought I’d known a thing or two and had some kid discussions, but after poking around a bit, I’ve realized I’ve missed some things. Let’s assume the definition of kid / child here is everyone up through high school. Here’s what I’ve found…

The Benefits

The benefits of GPTs for kids seem amazing, and we’ve likely only begun to understand them. How are you using them at home? Personalized learning help, homework support, creative expression, encouragement of exploration through reading: those are just the basics. I’ve sat with my daughter and together we’ve made custom coloring books and more. On snow days we’ll do the sledding and hot cocoa thing, but it’s also easy to generate some math worksheets and such.

Then we have kids with accessibility issues, for whom GPTs may be able to adjust with voice and text alternatives; that’s a major benefit. Add in multimedia options and the possibilities seem stunning. Once upon a time, somewhere between the Baby Boomer and Gen X eras (around the 1970s), the use of calculators in the classroom became a controversy. How quaint. Clearly, the benefits overwhelmed any concerns.

With this incredible power come some potential risks we also might not understand yet. But no one is stopping to let society catch up. Nor the legal system, nor business, and certainly not parents. So what should those of us with young kids be considering?

To Tell the Truth

As amazing as the AIs are (and they’re getting better every day), we know they don’t always get things right. Not that we do either, but this is different: we might not be aware of the information flows our kids are consuming and believing. Yes, this may have always been true! But there’s something more insidious about what seems like a confident, ‘official’ source. Here’s what we need to teach them:

  • Teach kids how to evaluate AI-generated content for accuracy.
  • Explain the importance of source-checking and understanding bias.
  • Offer examples of how GPTs might give plausible but incorrect answers.
  • Ask them to come to us (or monitor their usage) when they use these tools so we can offer context if necessary.

Let me ask you something: are you a nice person? Most people are, even if we have our moments. Well, the GPTs try to be nice as well, perhaps to avoid being seen as threatening. Unfortunately, this can also lead to a degree of confirmation bias. A request for a review of something, or the responses in a back-and-forth session, can result in the tool mirroring the user’s tone or perspective, skewing its answers.

Privacy

Children often don’t realize that what they type into a chatbot might be stored or analyzed. Even when services say they’re secure, it’s best to err on the side of caution with personal data. Moreover, as of June 2025 there are active legal issues around government orders requiring services to retain AI prompt history. (See Your ChatGPT Logs Are No Longer Private, regarding a May 13, 2025 court order.) What might kids do, and what can you do?

  • Kids may unknowingly share personal info with AI tools.
  • Explain how some AI tools collect data or use inputs to improve models.
  • Provide tips on safe usage: no real names, locations, or contact info.

The debates about what exactly privacy is, why, whether, and where it’s important, and so on, are an ongoing… what? Morass? Quagmire? We can just call it a continuing debate. Fine. Adults can debate the intellectual nuance. For kids, we should simply sensitize them to what to avoid, so they don’t invite identity thieves into the home network, or worse.
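For the technically inclined parent, the “no real names, locations, or contact info” tip above can even be partially automated. Here’s a minimal sketch, purely illustrative, of the kind of pre-send scrub a safety tool might apply; the patterns and labels are my own hypothetical examples and only catch the easy cases (emails and US-style phone numbers), nothing close to a real PII detector:

```python
import re

# Hypothetical illustration: scrub obvious personal details from a prompt
# before it is sent to a chatbot. Real tools are far more sophisticated;
# these patterns catch only easy cases (emails, US-style phone numbers).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace each matched personal detail with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(scrub("My email is kid@example.com and my number is 555-123-4567."))
# My email is [email removed] and my number is [phone removed].
```

Even this toy version shows the limitation: regexes can’t recognize a child typing their real name or school, which is why the habit of not sharing still has to be taught, not just filtered.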

Inappropriate Paths

Just what constitutes inappropriate, for whom, and at what age is also debatable. But we don’t have to settle that here; we each decide for our families. And we all know that age-inappropriate things (however you want to define them) are just one link away. Now, however, even if we’re using kid-safe tools and monitoring online usage, kids may ask their new GPT buddies questions that take them down paths which should either be delayed for years or at least have parental guidance.

Dependency and Over-Reliance

While GPTs can help with homework or inspiration, over-reliance can stunt growth in critical thinking and learning skills. The goal should be support, not substitution.

  • Explore the risk of kids leaning too much on GPTs for homework or thinking.
  • Compare to the calculator debates: beneficial aid vs. erosion of foundational skills.

I have a personal hypothesis about this related to information foraging theory. (I know, a bad segue into something a bit geeky, but stay with me for a minute.) Humans generally seek to maximize calorie efficiency; that is, metabolic efficiency. Cognitive offloading is one means to do that. So at a very basic survival level, offloading oxygen- and calorie-intensive brain tasks (maybe to a GPT) is a goal. Unfortunately, this is kind of like a Paleolithic craving for sugar (a maladaptive modern behavior) vs. healthier, energy-rich nutrients. It might satisfy short-term needs, but to long-term detriment.

So, from a motivation and curiosity perspective, GPTs can either stimulate or suppress curiosity depending on how they’re used. Yet, over-reliance may reduce exploratory behavior, another key aspect of information foraging.

Alright, let’s wrap this up, tie the analogy together, and move on. We evolved in environments where sugar was rare but valuable: a quick, dense source of energy. So our brains developed a strong reward system for seeking out sweet foods to survive periods of scarcity. That instinct was adaptive in context; it helped us survive. Today, it is much less challenging to hunt Snickers bars. See? You want one now, don’t you? You were probably wondering about the Snickers bar picture and already thinking about it. Just like any stimulated neural net, yours is now triggering around Snickers. The taste. The chocolate. Maybe the last time you had a sweet. You’re drawing multiple associations now.

Our brains use a lot of energy to do this, and evolution shaped us to conserve mental effort where possible. Cognitive offloading (delegating mental tasks to tools) can enhance safety and performance:

  • In aviation, modern navigation systems reduce pilot workload, helping prevent error under stress.
  • GPS in cars offloads spatial memory and route planning, but probably degrades natural wayfinding.
  • Smartphone reminders offload temporal memory, but may reduce internal time awareness.
  • And in a big debate these days, AI code assistants can offload some syntax and logic choices, but could very well reduce foundational fluency in junior developers.

AI GPTs offer us fast answers, summaries, and more, just like sugar offers quick energy. This satisfies short-term needs (e.g., finishing homework faster) but may lead to weaker foundational skills over time. Perhaps just as we must regulate sugar intake for physical health, we must regulate AI use for cognitive health. I spent extra time on this topic because it may be the most important; it certainly seems the most insidious.

Bottom Line: We’ve built amazing tools to help us past our own limitations. At the same time, our biological instincts haven’t caught up with these tools we’ve built. What once served survival may now undermine growth, whether it’s sugar for the body or GPT for the brain.

See Also:

  • AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
  • Increased AI use linked to eroding critical thinking skills
  • The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort.

Mental Health, AI Companions & Really Bad Information

Some kids may begin to treat GPTs as friends or confidants, especially if they feel isolated. While there’s value in having a safe space to ask questions, emotional over-dependence can be unhealthy. Parents sometimes use the television as a babysitter. (C’mon… We’ve all resorted to it.) Then maybe video games can engage them for hours. We can discuss all day long how good or bad these options were or how many hours. But the thing with GenAI GPTs is they don’t just pass some idle time. They talk back. They can engage in a deeper adaptive manner. We have to be more aware. Some people might see this use as a potential benefit of AI. But are we really there yet? So…

  • Some kids might start treating chatbots as friends or confidants.
  • We can discuss emotional dependency or blurred lines between humans and AI.
  • We can add cautions about parasocial relationships or isolation effects.

Note that there have already been (I should say alleged) examples of tragedies tied to kids’ use of AI. (Lawsuit claims Character.AI is responsible for teen’s suicide; AI accused of sexually harassing users.) We can expect more. It’s so easy to think, “not my kid,” or “those other kids already had issues.” Whatever we choose to believe, the reality is that the digital threat surface for our kids is dramatically larger than in years past. Consider: always-on connectivity, AI-powered interactions, global platforms (though arguably good in many cases), surveillance economics and capabilities, deepfakes, etc. Kids today face a wider, faster, and less visible range of threats than previous generations, compounded by AI, real-time global connectivity, and commercial incentives that prioritize engagement over safety.

The old saying applies: “You’re not paranoid if it’s true.” And watching out for all this has been added to our parental guardian job descriptions; namely, how to maximize the amazing benefits of AI and minimize the potential for harm. Ignoring the tools is clearly not an option.

Bias and Stereotypes

AI tools are trained on vast internet data, which includes biased or unfair representations. Kids may absorb these biases unknowingly if not guided to think critically.

  • AI can inadvertently reproduce cultural, gender, or racial biases.
  • Explain how this might affect kids’ worldview if not addressed or discussed.

Note that bias in AIs can come from the source data they use, historical information, selection of sources, the model architecture and training, and possibly most ironically, by attempts to decrease bias.

Older Kids

School Rule Risks: Schools have various policies on what type of technical assistance is allowable for projects, and different ways of trying to detect violations. Leaving aside the actual value of learning or ethical considerations, the simple consequences your kid could face through incorrect use of AI might range from academic penalties to suspension, or at some institutions even expulsion. Today’s kids can be incredibly smart, or at least informed. But they’re also still kids: potentially impulsive, immature, and unaware of themselves, and thus prone to poor decisions. This might be described as high processing power with low executive oversight, or a fast car with weak brakes. Making sure they understand consequences might help. It probably doesn’t help that the educational system itself is a patchwork of unclear perspectives and rules. You (and they) need to know what they’re facing at your particular institution.

Kid Safe GPTs

The marketplace provides! Some of our intrepid entrepreneurs have attempted to create kid-friendly options. (For example, the EDDIE tutor, with more of these seeming to come every week. And here’s a helpful site that LinkedIn recommended to me after I posted this article: AI Parenting Guide.) You’re already, ideally, locking down or monitoring various online games: limiting chat functions, checking in, watching out for “catfishing” (the use of fake identities to trick kids), bullying, scams involving digital or in-game currency, and so on. The same needs to be true for GPTs. Using tools designed for kids may help. Just note that these are often just filters and rules of various sorts slapped on top of other GPTs. And while they may reflect good-faith judgments by their creators as to what is “kid safe,” the result is based on others’ definitions, and of course might not always work perfectly technically.
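To make that “filters and rules slapped on top” point concrete, here’s a toy sketch of the layering. Everything in it (the blocklist, the refusal message, the `base_model` stand-in) is hypothetical and for illustration only; no real product works this simply:

```python
# Toy illustration of a "kid-safe" wrapper: a rules layer bolted onto a
# base model. The blocklist, refusal text, and base_model stand-in are all
# hypothetical examples, not any real product's implementation.
BLOCKED_TOPICS = {"gambling", "violence"}  # each product picks its own list

def base_model(prompt: str) -> str:
    # Stand-in for the underlying GPT call.
    return f"Answer about: {prompt}"

def kid_safe_chat(prompt: str) -> str:
    refusal = "Let's ask a grown-up about that one together."
    # Rule 1: refuse prompts that mention a blocked topic.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return refusal
    # Rule 2: screen the model's reply the same way before showing it.
    reply = base_model(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return refusal
    return reply

print(kid_safe_chat("How do volcanoes work?"))  # passes through
print(kid_safe_chat("Tell me about gambling"))  # blocked
```

Notice what this implies: a blocklist both over-blocks (innocent questions that happen to contain a flagged word) and under-blocks (anything phrased around the list), which is exactly why these tools “might not always work perfectly technically.”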

What to Do

We shouldn’t fear this technology, but we do need to respect it. The same tools that can unlock creativity, curiosity, and connection for our kids can also mislead, manipulate, or overwhelm them if left unchecked. As parents, guardians, and educators, our job isn’t to shut the door on AI. It’s to walk through it with our kids, ask better questions, and stay engaged. If we show up with curiosity, vigilance, and compassion, we can help them build a future where this powerful technology works for them, not on them. Though this is going to require yet more time and effort. We can’t slow down tech until we can catch up with our best ideas for mitigating the challenges, but we can at least show up for our kids in the now and pay attention. So pick an expression… “The train has left the station,” “The cat is out of the bag,” “You can’t unring the bell” and so on. It’s not going away and we can’t just ignore it. This is yet another thing we’ll need to add to our list of things to deal with.

Please reach out and let me know if you think I’ve missed something.

By the way, you can see how Snickers are Made in this YouTube video. (I’m fairly certain I’d be barred from any factory tour after showing up with my own spoon for the caramel section.) So the other lesson here is maybe don’t write blog posts when hungry.

Filed Under: Tech / Business / General

Copyright © 2025 · TetraMesa, LLC · All Rights Reserved