Most AI ethics talk is about how we should manage our new tools in terms of their output and actions toward us, not the other way around. I wanted to explore the other side. There’s been some thought and work in adjacent areas, and I’ll cover that. What I’m interested in considering is this… What does routine casual, dismissive behavior, or even contempt, toward human-like AI do to our human character, habits, empathy, and social norms? Are there things we should be doing in our designs of these tools to nudge toward outcomes that don’t end up degrading our own humanity? This is about concerns regarding the potential for human self-corruption, not machine victimhood. Most AI ethics asks how AI should treat us. I want to ask what our treatment of AI may be doing to us from an ethics and morals perspective, not just in terms of general cognitive issues.
This isn’t about some dystopian future where AIs check their history to see who among us has been decent to them, rude, or an outright threat. It’s more about what happens to us if we tend to treat them poorly. It’s already well-known that there are cognitive risks when humans overly rely on AI, or for that matter, on any tool that decreases our mental engagement. It’s up to individuals to determine the degree of concern in their use of AI and manage their behavior. The issue has shown up in tests of spatial memory among GPS users, in correlations between over-reliance on spell-checkers/grammar tools and weaker writing skills, in AI coding assistants degrading programmers’ debugging ability over time, and more. Recent research warns that offloading tasks to AI risks “cognitive laziness,” lower critical thinking, and anxiety from dependence. (See Learners’ AI dependence and critical thinking.) Sol Rashidi calls this “Intellectual Atrophy™.”
Let’s go further. Let’s move past basic cognition and into emotional corollaries, consequences, or cascade effects; however you want to frame downstream outcomes. As some tools behave in increasingly human-like ways, our interactions with them will likely mimic how we behave with other people. If we choose to behave poorly toward them, given almost no consequences, what might become of us?
Let’s Dispense With Dystopia
This has to be mentioned and dispensed with simply because it’s an obvious and common idea that comes up in conversations on this topic. I enjoy SciFi, and we could spend a lot of time here in sensationalism and apocalyptic framing. Let’s not. All manner of science fiction has been prescient about where we are now and beyond. Yes, if bots truly became sentient, whether emotional or not, it’s conceivable they might judge some human actions poorly. So sure, they could choose to punish us, and do so with indifference. Is this likely? Depends on whom you talk to. The only reason I mention it at all is, again, that it’s an obvious first-order question when considering our behavior toward them, and it’s too popular a meme to just ignore without mention.
This isn’t my focus though. If this comes to pass at all, I think it’s still a long way off. I’m aware some think the singularity could come any day now; I’ve read Kurzweil, among others. I haven’t plunked down any tokens in any prediction markets just yet, but my bet would still be years or decades out, regardless of what breakthroughs we soon get in GPUs and quantum computing.
Let’s get back to what’s happening right now and how we all might be affected, perhaps especially children.
Consequences of Our Behavior
We already get reports on some online behaviors. For example, I can control and see what our kid’s online behavior looks like. Others monitor employee usage. Some monitor their own usage. What might a next level of monitoring of AI usage look like? Would it be based on content usage? Inferred dangerous behavior? What else? AI vendors have tried to put guardrails within their products. Some are obvious, such as trying not to respond to dangerous or illegal requests. And some requests might be legal, yet still seem harmful. At what point might an AI have a responsibility to report human behavior of concern? Or even if it has no responsibility, at what point might it do so anyway, even if it’s been instructed not to? How will we respond to this? How might we behave with AI in general as we interact, and how might that change if we think (or know) we’re being monitored or are getting different responses? I worked on design for a clinical triage bot solution once. One of the requirements was to have voice stress built in as one of the decision points for shunting to human support. At what level of stress might such things be set?
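To make that design question concrete, here’s a minimal sketch of what such a decision point might look like. Everything in it is hypothetical: the names, the 0-to-1 stress score, and above all the threshold value, which is precisely the number the question above is asking about.

```python
# Hypothetical sketch of a voice-stress decision point in a clinical triage
# bot. None of these names come from a real product: the stress score is
# assumed to be a 0.0-1.0 output of some upstream voice-analysis model, and
# the threshold value is exactly the open design question in the text.

ESCALATION_THRESHOLD = 0.7  # At what level of stress should a human step in?

def route_turn(stress_score: float, content_flagged: bool) -> str:
    """Decide whether this conversational turn stays with the bot."""
    # Voice stress is one decision point among several; here it is combined
    # with a (hypothetical) content-risk flag from a separate classifier.
    if content_flagged or stress_score >= ESCALATION_THRESHOLD:
        return "route_to_human"
    return "continue_with_bot"

# A distressed caller crosses the threshold and gets a human.
print(route_turn(stress_score=0.82, content_flagged=False))  # route_to_human
# A routine question stays with the bot.
print(route_turn(stress_score=0.10, content_flagged=False))  # continue_with_bot
```

Whoever sets that constant is making an ethical call dressed up as a tuning parameter, which is rather the point.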
There are already proposals and early legal requirements for AI companions to detect signs of self-harm, warn users that they are interacting with a machine, and provide added safeguards for minors, including parental oversight in some contexts. There’s serious ethical tension here, isn’t there? If an AI can report, should it? Not only is this Privacy vs. Protection, there are a whole lot of assumptions here about whether an AI would be correct, and whether there’s a difference by cohort: minors, criminals, etc. Again, SciFi has covered this a bit, in places like the movie Minority Report.
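As a thought experiment, the sketch below reduces those kinds of requirements to a policy object. Every name and check in it is an assumption for illustration (there is no standard CompanionPolicy API, and a real self-harm detector would be a classifier, not a keyword match); the point is that each field is a judgment call someone has to make.

```python
# Hypothetical policy sketch for an AI companion, loosely mirroring the kinds
# of proposals described above. Nothing here reflects a real statute or
# vendor API; every field, name, and check is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class CompanionPolicy:
    disclose_machine_identity: bool = True  # "You are talking to an AI."
    detect_self_harm_signals: bool = True   # Scan turns for risk language.
    minor_safeguards: bool = True           # Extra filters for under-18 users.
    parental_oversight: bool = True         # Surface activity to a guardian.

def apply_policy(policy: CompanionPolicy, user_is_minor: bool, turn_text: str) -> list[str]:
    """Return the safeguard actions a single turn triggers under this policy."""
    actions = []
    if policy.detect_self_harm_signals and "hurt myself" in turn_text.lower():
        # A keyword match stands in for a real risk classifier.
        actions.append("show_crisis_resources")
    if user_is_minor and policy.minor_safeguards:
        actions.append("apply_minor_content_filters")
        if policy.parental_oversight:
            actions.append("log_for_guardian_review")
    return actions

# Example: a minor's risky message triggers all three safeguards.
print(apply_policy(CompanionPolicy(), user_is_minor=True, turn_text="I want to hurt myself"))
```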
How will we react to all this? How might this change our interactions with our world and each other? How might we treat the machines in these contexts, given that’s the entity with which we’re interacting? We’ve all probably seen others at their breaking points treating human service workers badly. What happens when it’s “just” a machine, when our anger or common dismissiveness goes beyond kicking the malfunctioning vending machine? What happens if that behavior becomes habituated?
Historical Concerns
Adjacent territory has been considered. People have studied whether humans treat computers and robots as social actors, whether robot abuse changes empathy, whether robots/AI deserve any moral consideration, and whether AI can contribute to “moral deskilling” or reduced human ethical engagement. Some older core work here is CASA / “Computers Are Social Actors”, plus later work on robot abuse and empathy. More recent work has extended that into moral patiency (the idea that an entity can be a recipient of moral concern), dehumanization of robots, and moral deskilling in the era of social AI.
Roots of these ideas go back years. A 2008 paper called for ethics around “abusing artificial agents.” (Sometimes it’s hard to be a robot) Sherry Turkle (MIT) has long warned that barking orders at Siri/Alexa trains kids (and adults) out of basic politeness and civility. (Please Be Polite to ChatGPT) Early human-robot interaction (HRI) research in the 2010s, including studies involving social robots like Pleo, examined people kicking or yelling at robots and found that distress cues strongly influence observer empathy, raising concerns about desensitization when abuse occurs without real consequences. (Robot Legal Protection / Effects of Anthropomorphism, etc. towards Robots.)
Desensitization?
Arguments have persisted for years about the potential dangers of video games for kids’ perception of reality regarding things like violence, sexual matters, or illegal behavior. Decades of research on violent media show mixed results on aggression, but the mechanism of rehearsal without real consequences is the same here. Do you play video games? Ever watch young kids play? Or even watch them watch the gamer YouTubers? If not, try it sometime if you get a chance.
What about AI? If people interact with AI and treat it poorly (be it a robot, a chatbot, or whatever), might that change how they interact with other people? This could end up being for the better. Or much worse. AI ethicist Shannon Vallor says, “Our behavior with AI shapes our behavior with humans.” Rehearsing cruelty on non-sentient systems risks normalizing it, much like the debates over violent video games or porn. Studies on social robots found that abusing them (or watching abuse) reduces situational empathy and can encourage “dehumanization” habits. Kids are especially vulnerable because they anthropomorphize easily. On the plus side, maybe “venting on a bot” is harmless, even cathartic, and prevents real-world rudeness. But the evidence leans toward spillover risk, especially when the AI is gendered and submissive (e.g., many companions default to friendly-female personas).
Broader parallels appear in robot ethics and “moral deskilling” where treating AI like a disposable servant rehearses dominance without accountability.
What’s Next?
Right now, there seems to be a fair amount of focus on the following types of education.
Education for Adults / Workers
- How to use AI.
- How not to lose your job because of AI.
- How not to get scammed by AI.
- Maybe some sensational concerns about strange things happening.
However, there seems to be little concern about cognitive offloading, atrophy, and risks in the business crowd, much less this issue of the subtle potential for behavior change. It’s been written about. Some people who think about these things have concerns, and yet it seems like most aren’t paying much attention. Why would they, when the business push everywhere you look is to adopt this tech as soon as possible or be left behind and out of work?
So what about any concern on this topic of emotional impact on our own behavior? Very little. Practically nothing. Maybe the occasional sensational or oddball news story about someone doing something thought of as strange with AI, but presented as a joke of some kind, as opposed to a more widely emergent thought and behavior pattern. The concerns now are about how to build fast and survive career-wise. There’s some obvious concern about basic ethics; however, a lot of that feels reactionary, a response to pushback and potential regulation. As to how users may be impacted? Not a lot going on. Because, you know, if something is a problem, we’ll fix it later.
Education for Kids
- Schools are scrambling to sort out what to even teach about AI in general and grapple with what their policies should be.
- There are tech talks about digital risks in general at grade school level; sometimes with police coming to school to help explain things. I and others have written about the incredibly expanded threat surface area kids face.
- There is some AI-specific training about how kids can or should use these things and how they shouldn’t, but it’s uneven at best, and there are no apparent best practices.
- There is broad understanding and concern that these tools can potentially lead to self-harm.
I doubt many are considering the ethical angles of kid behavior along some of the lines I’ve mentioned. There’s clearly a recognition of certain dangers, but perhaps not quite yet of these more subtle, or perhaps I should say insidious, issues. The word insidious carries a negative connotation, and in this case, that’s appropriate. Let’s go back to the definition: causing harm in a way that is gradual or not easily noticed, especially having a gradual and cumulative negative effect.
This is another area of subtle things we’ve either ignored or made tacit assumptions about, but now we need to consider the issues more clearly. And ideally, figure out explicitly what is or might be going on, and what, if anything, we should do about it.
Wrapping Up
Here’s where I’ve come to on all this, at least for now…
Perhaps treat AI with the same baseline courtesy we’d give a stranger. Not because the AI “deserves” it, but because we do.
For kids, model polite AI use the same way we model table manners. Why? Because this is still character formation. Maybe even character rehearsal.
As much as we have our faces buried in our iStuff, we all live in the real world. And in a seemingly virtual world of zero-consequence interactions, some consequences might land back on all of us later on. I don’t have the answers here (practically any, really). These are just some thoughts as I navigate this area, especially when it comes to kid considerations. I do think these are some of the right questions and concerns, and they’re worth exploring more deeply. And they’re worth each of us thinking about, because we will all be personally impacted by how we manage our own behavior in these areas.
So what I’m saying is that AI ethics shouldn’t stop at just what these things are answering or producing. We should consider some of this early work and also ask what habitual interaction with human-like systems is doing to human norms, especially when those systems invite social behavior but impose almost no social cost. We should consider these issues both when we’re working with these tools and when we’re building them. We’ll see how things go, of course. I’m just saying let’s pay more attention along the way.
See Also:
- Deep mind in social responses to technologies: A new approach to explaining the Computers are Social Actors phenomena
- Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism
- Artificial Companionship: Moral Deskilling in the Era of Social AI
- When Boys Hurt Bots: AI Abuse and the Crisis of Connection
- Social Robots and Empathy: The Harmful Effects of Always Getting What We Want
- Habitual use of GPS negatively impacts spatial memory during self-guided navigation

