This begins my Letters to Educators series at Curious Tendril.
Dear Educators,
If you have not used AI, and will not, you’re not alone.
Every time I give a talk on generative AI — whether to students, educators, or other professionals — the room is split. While two-thirds may be dabbling in AI or even diving in daily, a third will be reluctant to use it, or even dead-set against it — and perhaps for good reason?
As I begin this blog, I want to take a moment to acknowledge some of the many concerns about AI.
And if you are an educator who, like me, cannot help but gush about AI — you are the kind of reader I most hope this reaches. More specifically, if you are that educator who simply cannot fathom why anyone might be cautious about an AI future, or about AI in the classroom — if you think, well, these positions are just unreasonably rigid or unresearched — please, take a moment to consider the persisting and emerging concerns about AI.
First and foremost, there’s the environment. AI appears to have real energy consumption issues [1] that are growing each day — accompanied by a large water footprint.[2] Next, we’ve got accuracy. Beyond AI’s penchant for brazenly inventing or omitting facts, only to apologize later, AI has this persisting accuracy problem [3] that … Stephen Colbert might rightly refer to as digital truthiness.[4] And perhaps most importantly, AI continues to echo many of the biases of the web and the world.[5]
Moral quandaries abound.
Clearly, digital innovation is not neutral in the age of AI. And when it comes to that list of concerns, we have only scratched the surface. As I see it, this is the stuff of ethical inquiry that our students should be taking on in our classrooms, so let’s keep it going. How about an old-fashioned, run-on sentence that tries to reflect the full scope of all that’s emerging?
Take a deep breath. Here we go …
Other AI concerns include labor augmentation and displacement [6] amidst the use of AI agents as digital proxies,[7] the hyper-growth of synthetic media filling the web,[8] AI bots that are also overcrowding the web,[9] cognitive offloading,[10] and the persisting problem of how frontier models default toward authorship erasure [11] and user-input scraping,[12] and let’s not forget about copyright erosion,[13] or the misuse of those image and voice and video generators that can create those astonishingly life-like deepfakes that then get used for cyber-bullying [14] and electoral fraud,[15] not to mention the threat of AI cyberattacks,[16] nor the need for guardrails as we quickly advance toward semi-autonomous agentic interaction and collaboration,[17] and what about the slew of AI-led wrongful arrests across America that seem to have gotten zero press,[18] or the quiet militarization of AI,[19] and last but not least, let’s not forget the plausible specter of artificial super-intelligence (or “ASI”) in our lifetimes.[20]
Need we say more?
And we didn’t even touch on the prickly debates around AI in the classroom.
As I see it, these are concerns that we all should pay attention to — especially those well-meaning, AI-positive educators who might inadvertently push against educator choice by … pushing for mandatory AI adoption.
And to any readers who don’t want AI in the classroom, I hear you completely.
You deserve to be heard — and to have a safe space at your school to share your thoughts — and to have choices about AI in your classroom.
As you finish this post, regardless of where you stand, I want to hear from you. How do you lean when it comes to student use of AI in schools? Are you an advocate, or a critic, or undecided?
Whether you avoid AI, or you dive in — be it on your own, or with your students — I’m eager to learn more about your experiences. Whether you work in K-12 schools, higher education, or workplace learning and development (L&D), I hope you will leave a comment to extend the discussion.
Finally, and this may surprise you, in my letters to come, I will make the pedagogical case for why educators might nonetheless want to dive in, and start playing with AI today.
Stay tuned and warm regards,
Reed
p.s. For a deeper dive into the aforementioned AI critiques, please see my extended footnotes.[21]
1. AI is an energy hog. This is what it means for climate change by Casey Crownhart at MIT Technology Review (May 23, 2024).
2. How much water does AI consume? The public deserves to know by Shaolei Ren at the Organisation for Economic Co-operation and Development (November 30, 2023).
3. Gen AI's Accuracy Problems Aren't Going Away Anytime Soon, Researchers Say by Jon Reed at CNET (March 24, 2025).
4. Thank you, Stephen Colbert, for coining the word truthiness back in 2005. A glorious neologism, I must say.
5. See algorithmic bias and the work of Joy Buolamwini.
6. See The geography of generative AI’s workforce impacts will likely differ from those of previous technologies by Mark Muro, Shriya Methkupally, and Molly Kinder at The Brookings Institution (February 19, 2025).
7. See AI Agents: The future of task management and workforce productivity by Source India at Microsoft (December 23, 2024).
8. Yes, synthetic media might take over human-generated content — not only on the web, but in mainstream news and in social media. See Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated by Kate Knibbs at Wired (November 26, 2024). See also What to Do About the Junkification of the Internet by Nathaniel Lubin at The Atlantic (March 12, 2024); AI-generated ‘slop’ is slowly killing the internet, so why is nobody trying to stop it? by Arwa Mahdawi at The Guardian (January 8, 2025); Is AI quietly killing itself – and the Internet? by Tor Constantino at Forbes Australia (September 3, 2024); and Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites at arXiv, Cornell’s open-access scholarly article archive.
9. Bots Now Make Up Nearly Half of All Internet Traffic Globally at Thales (April 16, 2024), from the 2024 Imperva Bad Bot Report.
10. See New Study Says AI Is Making Us Stupid—But Does It Have To? by Lars Daniel at Forbes (January 19, 2025).
11. When I say the “erasure of authorship,” I mean when AI appears to omit attribution as if by design, perhaps beginning at the point where it first scrapes the web. This may be connected with the idea of Authorship Obfuscation (AO) mentioned in AI Search Has A Citation Problem by Klaudia Jaźwińska and Aisvarya Chandrasekar at Columbia Journalism Review (March 6, 2025). See also Attribution and Obfuscation of Neural Text Authorship: A Data Mining Perspective. More broadly, see Generative AI’s secret sauce — data scraping — comes under attack by Sharon Goldman at VentureBeat (July 6, 2023).
12. See How to stop the AI you’re using from training with your data by David Nield at The Verge (December 7, 2024). Full credit to Anthropic for not scraping user data in Claude.
13. See America crafts an AI action plan by Casey Newton in my favorite tech newsletter, Platformer (March 18, 2025). According to Newton, in this era of competition with DeepSeek, “Meta also calls for Trump to declare that training on copyrighted data is fair use, and to do so unilaterally via an executive order.”
14. See How AI is being used to create explicit deepfake images that harm children by Stephanie Sy and Andrew Corkery at PBS (March 22, 2025), an interview with Melissa Stroebel about the March 2025 report at Thorn. See also Students Are Sharing Sexually Explicit ‘Deepfakes.’ Are Schools Prepared? by Lauraine Langreo at EdWeek (September 26, 2024), and Deepfakes heighten the need for media literacy in the age of AI by Anna Merod at K-12 Dive (February 14, 2024).
15. See The apocalypse that wasn’t: AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture by Bruce Schneier and Nathan Sanders at The Conversation (December 2, 2024). See also A fake recording of a candidate saying he’d rigged the election went viral. Experts say it’s only the beginning by Curt Devine, Donie O'Sullivan, and Sean Lyngaas at CNN (February 1, 2024).
16. See Cyberattacks by AI agents are coming by Rhiannon Williams at MIT Technology Review (April 4, 2025).
17. The AI Agent Era Requires a New Kind of Game Theory by Will Knight at Wired (April 9, 2025).
18. See When Artificial Intelligence Gets It Wrong: Unregulated and untested AI technologies have put innocent people at risk of being wrongly convicted by Christina Swarns at The Innocence Project (September 19, 2023).
19. See Project Censored Announces the Launch of Military AI Watch at CounterPunch (April 8, 2025).
After hearing the text-to-speech playback of this post, I edited it further for the listening ear — meaning those who, like me, prefer reading with their ears. I hope it sounds good!