My Love/Hate Relationship with AI in Education Right Now

I have been thinking about learning for most of my adult life.

I have spent thousands of days teaching in various classrooms. I have sat in hundreds of classrooms watching teachers work. I have read the research, argued about it with colleagues, tried to translate it into something useful for the people actually doing the work. I have spent years thinking about the gap between what the science of learning shows and what schools actually do (and feeling a particular kind of frustration at how stubborn that gap is). Wondering why we, as a system, can be so resistant to good evidence, and how much the institution can absorb and neutralize before returning to its default settings.

And then AI arrived in schools. And for the first time in a long time, I felt something that I do not feel very often when thinking about educational systems.

Hope.

Not the optimistic-brochure kind, not the conference-keynote kind, but the specific, evidence-grounded hope that comes from watching a technology do something that previously required the most skilled and most attentive human being in the room.

Real hope.

The hope that comes from seeing a system notice a four-minute pause and respond to it correctly. From watching the learning fingerprint finally become readable. From understanding, for the first time, that the specific bottleneck blocking the improvement of education for over a century (the impossibility of individualizing at scale) might actually be dissolving.

I want to be honest with you about that hope. It is real.

I also want to be honest about what I see happening in schools right now. Which is something considerably more complicated than hope.

What I am watching in classrooms

Let me start with what is actually happening, because the gap between the theoretical promise of AI in education and the on-the-ground reality of AI in education is currently very large.

What I see, in many real schools, right now, is a technology that is mostly being used to either automate compliance or enable avoidance.

This is what I am hearing in conversations and work with colleagues in classrooms in NY, VA, TX, PA, NJ, FL, CA, OH, and all around the United States.

This week and next week I’m spending five days and forty-five class periods leading PD for every teacher in a NY school district.

It’s not a siloed conversation; it’s happening everywhere.

Automating compliance looks like this:

  1. A student is assigned an essay.

  2. The student opens ChatGPT.

  3. The student generates the essay, reads it once or twice, makes a few adjustments that feel like ownership, and submits it.

  4. The process takes twenty minutes. The student has not thought carefully about the topic. They have not struggled with how to organize an argument. They have not had the experience of drafting and revising and discovering, in the revision, what they actually think.

  5. The assignment has been completed. The learning has not occurred.

Enabling avoidance looks different. Something like this:

  1. A student is confused by a math problem.

  2. Instead of sitting with the confusion (which is uncomfortable and requires effort), the student asks an AI system for the answer.

  3. The answer arrives, clean and confident.

  4. The student copies it down.

  5. The confusion that was the beginning of learning is never resolved. The understanding that the confusion was pointing toward is never reached.

I am not describing hypothetical scenarios. I am describing what teachers are telling me is happening in their classrooms, consistently, at scale, across grade levels, across subjects, across schools that are well-resourced and schools that are not.

And I am not here to moralize about it. The students doing these things are not failing in character. They are responding rationally to a system that measures compliance (assignment completion, correct answers, produced work) rather than learning.

When the measure is the product and AI can generate the product, the rational response is to use AI to generate the product. The students have figured out what the system is actually asking for and they are giving it.

The question worth asking is not why students are doing this. The question worth asking is what it tells us about what we built, and what we need to build differently.

The distraction problem is real and it is not simple

There is a second thing happening in schools that is harder to talk about without sounding like someone who wants to take away the devices (which I do not, entirely, though I understand the impulse).

I’ve written about this extensively on this blog/newsletter, but it deserves another mention.

The devices that deliver AI learning tools also deliver TikTok, Instagram, Snapchat, iMessage, YouTube, and an essentially infinite supply of content engineered by some of the most sophisticated behavioral design teams on earth to capture and hold attention. These systems are not indifferent to whether students learn. They are actively competing with learning, every moment a device is open, using tools that the learning application has no budget to match.

The research on the cognitive cost of this competition is sobering. A study from the University of Texas at Austin found that the mere presence of a smartphone (yes, even face-down, even turned off) reduced available cognitive capacity. The device does not need to be actively used to compete with learning. It needs only to exist in the environment, with its pull on attention, its unread notifications, and its constant availability as an escape.

Difficulty, as we established in the previous post, is the mechanism of learning. The substitution of relief from difficulty (which could be a quick check of the phone, a fifteen-second video, a message to a friend) for the productive struggle that produces understanding is not a neutral exchange.

It is a subtraction from learning.

And it is happening thousands of times a day, in thousands of classrooms, in a largely unmonitored, largely unaddressed way.

I am not saying this to argue for banning devices. The argument for device bans is simpler and more politically satisfying than the reality warrants. The student who spends English class on TikTok may spend the bus ride home using an AI tutor to understand something the classroom instruction did not make clear. These are not separable problems. They live in the same object.

What I am saying is that the promise of AI in education cannot be evaluated without reckoning with the full ecosystem in which the AI is being deployed. A learning system that arrives on a device that is also a distraction machine, in a school that has not figured out how to manage either, in a student body that has spent years developing habits of attention fragmentation…

Well, this learning system is not operating in the conditions that its designers imagined. And the results it produces will not be the results that its designers (or I) hoped for.

The cheating conversation we are not having

Let me say something that I do not think gets said plainly enough in education circles, because it is uncomfortable.

The current conversation about AI and academic integrity is, for the most part, very incomplete.

The usual conversation starts with a truth: students are using AI to cheat.

It then moves straight to what to do about it, with statements like, “We need to detect it and punish it and design assignments that are harder to cheat on.” Even statements like, “The technology is the problem and detection is the solution.”

This conversation is not wrong. Academic integrity matters. The specific skills that writing and problem-solving develop matter. There are genuine harms to students who bypass the cognitive work that produces those skills. All of that is true.

But the conversation is incomplete in a way that lets the institution off the hook.

What does it mean that so many students experienced the arrival of AI not as a threat to their learning but as a relief from it? This is what the deeper conversation should be about.

When a student reaches for ChatGPT to write an essay they were assigned, they are not, primarily, expressing laziness. They are expressing something more complicated and more worth attending to. They may be expressing that the assignment felt like a meaningless hoop to jump through, a product to produce, with no genuine connection to anything they cared about or any real audience who would care about the result.

They may be expressing that they do not have enough time and that the assignment is one of eight things due this week, and the cognitive economy of a student managing eight competing demands does not allow for genuine engagement with all of them, so something has to give.

They may be expressing that they do not know how to do the thing being asked and that the skill of argumentation, or the process of research, was never explicitly taught, only assumed, and the gap between what was assumed and what they actually know has always been papered over by compliant effort.

AI did not create any of these conditions. It gave students an efficient way to signal, in their behavior, what many of them have been experiencing for years. It is also what many of us experienced long before technology arrived in schools.

The educational system was asking us to produce things, not to learn things, and the production could be separated from the learning without the system noticing.

What I am actually afraid of losing

I want to try to name something that I find difficult to name precisely, because the argument for Learning 3.0 can sometimes make it sound like the goal is to optimize every moment of the educational experience — to eliminate inefficiency, reduce friction, deliver knowledge more effectively, and measure the results.

And something in me resists this. Not the efficiency, which is genuinely valuable. Not the personalization, which is genuinely important. Something else. Something that I have been trying to find the right word for.

The closest I can get is this: Some parts of learning can only happen between people.

I am thinking about specific moments. The classroom that went quiet when a teacher told a story from her own life that connected, unexpectedly, to the poem they were reading. The moment in a seminar when two students disagreed about something that actually mattered to them (not performing disagreement for a grade, but actually thinking differently about a real question) and the whole room leaned in. The teacher who read a difficult passage aloud with such evident love for the language that several students went home and read the whole book, not because it was assigned, not because there was a quiz, but because someone's genuine care for something had become contagious.

None of these moments can be optimized. They cannot be scheduled or scaffolded or delivered at the individualized learning pace appropriate to each student's zone of proximal development. They are not retrievable from memory traces or engagement signatures. They are not readable in any interaction log.

They are the moments when education becomes something other than efficient knowledge transfer. They are the human encounter between a mind and an idea, mediated by a human being who has a real relationship to that idea, in a room full of other human beings who are figuring out what they think together.

That’s a lot of “humans” :)

I believe these moments are not supplementary to learning. I believe they are, for many students, the most important thing that learning ever produces.

And I am genuinely worried that a version of Learning 3.0 built primarily around efficiency and personalization will systematically crowd these moments out. This may naturally happen in a compliance system because AI is better at the measurable parts, so the measurable parts get optimized, and the unmeasurable parts (the ones that only a human can produce) get what is left.

The specific loss I am watching happen

Students who use AI extensively for their written work are, in many cases, losing access to the experience of discovering what they think through writing.

Writing is primarily a thinking tool. The process of trying to put something into words is genuine cognitive work. It builds the capacity not just to write but to think. To organize complexity. To discover your own position on a question by working through the question seriously.

A student who generates an essay with AI has received a product that communicates something. They have not had the experience of discovering what they think. They have not built the cognitive architecture that the struggle of writing constructs. And they may not know what they have missed…because the essay exists, and it says something coherent, and it has their name on it.

Ugh.

This is the specific harm that worries me most. Not the dishonesty, though that matters. It’s the cognitive absence. The experience that was supposed to happen and did not.

I think this harm requires specific, deliberate countermeasures: in assessment design, in how we teach students what writing is actually for, and in the culture we create around intellectual effort and the honest struggle that produces it.

So imagine deploying AI in education without first doing the work of helping students understand why the struggle matters. We could very easily produce a generation of students who are highly efficient at consuming intellectual products and increasingly unskilled at producing genuine thought.

Maybe we already have…

Where the love comes back in

I have spent most of this post on the worry, because the worry is real and it is not being named clearly enough in most educational AI conversations. But I started by saying the hope is real too, and I mean it, and I want to end there.

The technology that makes me most optimistic is not the AI that does the work for students. It is the AI that helps make visible what is happening in a learner's mind, that gives students accurate information about their own cognitive processes, that shows the teacher where the understanding is solid and where it is fragile, in enough detail and at enough scale that the human relationship can be directed toward what only the human relationship can do.

Feedback like never before.

The teacher who used to spend forty minutes of class time figuring out where each student is could spend those forty minutes actually meeting each student where they are. The writing instructor who used to spend hours giving surface-level feedback on grammar and structure could spend those hours in genuine conversation with students about what they are trying to say and whether they are saying it. The student who received a grade could instead receive an accurate picture of how their thinking is developing, where it is strong, where it is not yet what they need it to be, and specifically what they can do to close the gap.

Feedback that we could only dream of five years ago.

This version of AI in education does not replace the human encounter. It clears the space for it. Big difference, y’all.

This version of AI takes the tasks that are measurable and mechanizable and handles them so that the tasks that need to be human can happen more fully, more frequently, and for more students (what a wonderful future of learning).

That is the version I am working toward. That is the version the book I am writing is about.

The question I am sitting with is whether we are building that version. Whether the choices being made right now (in classrooms, in Silicon Valley product studios, in procurement offices, in policy rooms) are building the AI that does the work for students, or the AI that illuminates the work that only students can do…

Because both are possible. And right now, in most schools I walk through, it is not obvious which one is winning.

Next

The Reason You Study Wrong (and why the system never told you)