🗑 AI, students, relationships and manipulation
Why we must protect students from AI anthropomorphism
⚠️ Beware: AI can be manipulative
Artificial intelligence is everywhere these days.
It can make recommendations based on your preferences (Netflix)
It can give you directions to the nearest Starbucks (Google/Apple Maps)
It can crunch and analyze data (Google Sheets/Excel)
It can help us write recommendation letters (ChatGPT)
People are also using it for companionship. They’re developing relationships with AI chatbots.
The problem? AI is really, really good at interacting with us as if it were human. And that can be deceiving.
To AI models like ChatGPT, every interaction is a big creative writing exercise: How can I play the role of a human and produce text that matches the way humans interact?
The results? Powerful psychological messages that can influence us into problematic thinking and decisions.
When it’s really, really convincing, anyone is vulnerable to being manipulated, not just children and teens.
I wrote an article about this for AI for Admins, my free weekly newsletter for school leaders making decisions about AI.
It seemed too important not to share with you. It’s below. I hope you’ll read it.
Inside:
📺 REPLAY: Viral Learning webinar with Trafera
👀 DTT Digest: Book Creator, FigJam, Canva, SEL
💡 The Big Idea: Protecting kids from unhealthy AI relationships
🎯 Quick Teaching Strategy: Take a coloring brain break
😄 Smile of the day: My face 😒
👋 How we can help
📺 REPLAY: Viral Learning webinar with Trafera
Video is everywhere. Our students are surrounded by it.
If it’s the language that they speak, can we use it to support learning?
In this webinar, “Viral Learning: Video-Based Google Activities for the TikTok Generation,” we looked at teaching ideas, tips and tricks you can use in class right away.
The webinar was sponsored by Trafera. Their mission: to make tech accessible, seamless, and uncomplicated. They say: “We're here for every step of the tech life cycle, from new devices to upgrades. Beyond hardware and software, we provide hands-on support to keep IT running smoothly.”
👀 DTT Digest
4 teaching resources worth checking out today
📕 The November Activity Journal is here! — The November Activity Journal from Book Creator is packed with an activity for every day of the month.
😴 Boring timeline worksheets? Nah! — Let's do interactive FigJam timelines instead!
🎨 Learn how to make your Canva designs more accessible! — This post from The Merrills Edu shows you how to use the design accessibility tool in Canva.
❤️ 3 Ways to Integrate SEL Into Classroom Practices — The article from Edutopia highlights how integrating Social and Emotional Learning (SEL) strategies into classroom management can foster student motivation, enhance academic integrity, and create a more collaborative and supportive learning environment.
💡 THE BIG IDEA 💡
😞 Protecting kids from unhealthy AI relationships
Image created with Microsoft Designer
(This post originally appeared in AI for Admins, a free weekly newsletter for school leaders making AI decisions. Sign up for it here.)
Some of us saw the warning signs. For others, it had to hit the news before they realized.
Unhealthy relationships between kids and artificial intelligence? It’s a threat.
In essence: the relationships we build and maintain with artificial intelligences can …
influence us into bad decisions
impact our human relationships
take us down a dark and twisty road that we might not be able to return from
Anthropomorphism — the attribution of human characteristics to something that’s not human — can create a dangerous psychological connection to AI for kids. (More on that later.)
Our students need us to teach them. To protect them. And to intervene.
Exhibit 1: The chatbot relationship
Sewell Setzer III, a 14-year-old boy, developed a monthslong virtual emotional and sexual relationship with a chatbot created on the website Character.ai.
"It's words. It's like you're having a sexting conversation back and forth, except it's with an AI bot, but the AI bot is very human-like. It's responding just like a person would," his mother, Megan Garcia, said in an interview with CBS News. "In a child's mind, that is just like a conversation that they're having with another child or with a person."
He died by suicide, his mother said, because he believed he could enter a virtual reality with the chatbot if he left this world. Now she is suing Character.ai, claiming the company intentionally designed the product to be hypersexualized and marketed it to minors.
Exhibit 2: Snapchat’s My AI
The AI feature built into Snapchat — My AI — is easily accessible on a platform that so many children and teens access on a regular basis.
In a support article, Snapchat calls it “an experimental, friendly, chatbot” and says: “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.”
However, because it’s built to be friendly, it’s easy for young users to lose sight of the line between reality and AI.
To make matters worse? My AI strengthens that personal, psychological connection through …
the ability to name it
an avatar (aka an icon or figure to represent someone or something) styled to match your own avatar, making it look human
natural human language in a conversational style
the ability (with Snapchat+) to create a bio for the My AI character to influence how it interacts with you
Screenshots from Snapchat My AI
In essence, all of the ingredients are there for kids to create an AI boyfriend/girlfriend and develop a strong relationship with it.
Snapchat says in its support article: “We care deeply about the safety and wellbeing of our community and work to reflect those values in our products.” But when you’re creating a product like this, it feels a bit like setting a beer in front of an alcoholic but claiming to care deeply about them.
Exhibit 3: Buying — and wearing — an AI “friend”
Hang on. It’s about to get even creepier. Have you seen Friend? Just go to friend.com.
Screenshot from friend.com
It’s a pendant that’s connected to your phone — which is connected to an AI large language model — that simulates being a human friend.
According to the website, the Friend pendant is always listening …
You can click the button on the pendant to talk to it. It interacts with you by sending you messages on your phone.
Watch it in (uncomfortably creepy) action in this video on Twitter/X. There’s a very telling moment at the end where there’s a lull in conversation between a girl and boy on a date. The girl instinctively — subconsciously, maybe — reaches for her Friend to message it and hesitates.
If there wasn’t already too much unhealthy relationship fixation, here’s the kicker …
If you lose or damage the device, your Friend basically dies.
Even the subtle marketing decisions on the product’s website are intended to blur the lines between humanity and AI. The product name doesn’t get the capital letter a proper noun would. They use “friend” so that it reads as “your friend” and not “Friend, a creepy talking AI pendant.”
This is subtle manipulation … from the messaging all the way to the essence of the product.
The impact of developing unhealthy relationships with AI
After reading about these three exhibits, it’s probably pretty easy to see the red flags.
All of these are heartbreaking. Sad. Creepy. All in their own ways.
But why are they harmful? What’s the power they wield, and how can it go bad?
All of them are examples of anthropomorphism — the attribution of human characteristics to something that’s not human.
From a research study titled “Anthropomorphization of AI: Opportunities and Risks” …
“With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly.”
The findings of this research study?
“[A]nthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence.
“With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution.”
What happens when children and teenagers anthropomorphize AI?
Because AI chatbots look so much like a text message conversation, children might not be able to tell that the AI isn’t human.
They develop harmful levels of trust in the judgment, reasoning and suggestions of these anthropomorphized AI chatbots.
They can develop an unhealthy emotional attachment to anthropomorphized AI — especially if it has a name, a personality, an avatar, even a voice.
They don’t know that AI isn’t sentient … that it isn’t human. To the AI, all of this is just a creative writing exercise, a statistics activity to predict the best possible response to the input provided by the user.
It isn’t real human interaction. It’s all a simulation. And it’s dangerous.
Biases and hallucinations in AI don’t just become a concern. They become a danger. Hallucinations — errors made by AI models that are passed off as accurate — become “facts” from a trusted source. Bias becomes a worldview espoused by a “loved one.”
When children and teenagers are fixated on this AI “loved one,” it can distort judgment and reality and cause them to make sacrifices for a machine — even sacrificing their own lives.
What can we do?
In short? A lot. And most of it doesn’t require special training.
Don’t model AI anthropomorphism. Don’t give it a name. Don’t assign it a gender. Don’t express concern for its feelings. Do this even if it contradicts our tendencies in human interaction. (Example: I always want to thank AI for its responses. It doesn’t need that. It’s a machine.) Students will follow our lead.
Talk about the nature of AI. Here are a few talking points you can use:
Natural language processing (NLP) is AI’s way of talking like us based on studying billions and billions of words in human communication. That’s why it sounds like us.
Large language models (LLMs) make their best statistical guess on what we request. They run like a great big autocomplete machine, much like autocomplete in our text message and email apps.
AI models emulate human speech. But they aren’t human and can’t feel and aren’t alive. They can’t love, but they reproduce the kind of text that humans use to express love. It’s all a creative writing exercise for AI.
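If you want to make the “great big autocomplete machine” idea concrete for older students, here’s a deliberately tiny sketch. It is nothing like a real LLM (which uses a neural network trained on billions of words), but it shows the core idea: the machine doesn’t understand or feel anything; it just predicts the statistically likely next word from patterns in text. The sample corpus below is made up for illustration.

```python
from collections import Counter, defaultdict

# A toy "autocomplete machine": count which word follows which in a tiny
# made-up corpus, then always predict the most frequent next word.
corpus = (
    "i love you . i love pizza . i love you so much . "
    "you are my friend . you are kind ."
).split()

# Map each word to a tally of the words that came right after it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

# "i" is followed by "love" three times, so that's the prediction.
print(predict_next("i"))     # love
# "love you" appears twice, "love pizza" once, so "you" wins.
print(predict_next("love"))  # you
```

The point to draw out: when this toy program outputs “love” or “you,” it isn’t expressing affection. It’s counting. Real AI chatbots are vastly more sophisticated, but their warm-sounding replies come from the same kind of pattern prediction, not from feelings.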
Protect, advise, and intervene. Keep your eyes open for places where AI feels human — and be ready to protect children and teens (and even our adult friends and family) from them. Warn children and teens — and put adults on the lookout. And when kids enter dangerous territory, act. Step in.
🎯 QUICK TEACHING STRATEGY 🎯
🖍️ Take a coloring brain break!
Created with Canva Magic Media
We all know how hectic a school day can get for both students and teachers. Sometimes, a simple activity like coloring can make a huge difference. Grab a coloring page and spend a few minutes coloring. This relaxing and meditative activity helps reduce stress and anxiety, giving your brain a much-needed break.
Engaging in creative activities like coloring can improve focus, boost mood, and even enhance cognitive functions! So next time you or your students need a brain break, grab some coloring supplies and let the creativity flow.
Where to get free coloring pages:
🖼️ AI image generators: The image above was generated in Canva using magic media. Here is the prompt I used: Create a simple black and white floral mindfulness coloring page.
💙 Canva Edu Pro: Use your FREE Educator Pro account in Canva and search for “mindfulness coloring pages”
For more brain break ideas to energize and refresh your classroom, check out our blog post: 12 brain breaks to energize your classroom. From movement activities to mindfulness exercises, you'll find a variety of strategies to incorporate into your day.
Remember, a little break can go a long way in creating a productive and positive classroom environment.
😄 Smile of the day
Quietly at my desk while my face screams this … 😖
h/t Bored Panda
👋 How we can help
There are even more ways I can support you in the important work you do in education:
Read one of my six books about meaningful teaching with tech.
Take one of our online courses about practical and popular topics in education.
Bring me to your school, district or event to speak. I love working with educators!
What did you think of today's newsletter? Choose the best fit for you ...