
How to Make Purposeful Decisions About Generative AI in Your School

Avoiding both AI hype and fear, MIT’s Justin Reich calls for school leaders to acknowledge uncertainty and prioritize thoughtful, small-scale experiments.

September 11, 2025


When generative AI appeared, it didn’t wait for a formal adoption process like previous waves of educational technology. The tool that gave students the ability to generate essays, solve math problems, and create images in seconds—and teachers the ability to automate some lesson planning, grading, and feedback—simply arrived, “whether or not schools wanted it,” says Justin Reich, an MIT associate professor and director of the Teaching Systems Lab. “There is no procurement process. It’s just something that educators are forced to deal with.”

Since that time, school leaders have been grappling with how to respond to AI—and mostly struggling to keep pace with a technology whose capabilities and shortcomings evolve faster than policies can.

To put it bluntly, leaders “are in a bind,” Reich told me.

In his effort to offer guidance during this uncertain moment, Reich brings insights grounded in research and in-depth interviews. Over the last several years, he has spoken to dozens of teachers and school and district leaders on his podcast TeachLab with Justin Reich, and his research has examined how new technologies spread through schools, how teachers really use digital tools—as opposed to how edtech companies think they use them—and how promising innovations can end up reinforcing existing inequities in access and outcomes.

His books, including Failure to Disrupt: Why Technology Alone Can’t Transform Education and Iterate: The Secret to Innovation in Schools, explore recurring themes in edtech that apply to AI: namely, that hype cycles often outpace reality in the classroom; that systemic change tends to occur incrementally rather than by sweeping disruption; and that the most effective innovations are deeply tied to human relationships and collaboration.

ANDREW BORYGA: You’ve studied tech adoption in schools for many years. How does the arrival of generative AI compare to past waves of education technology you’ve analyzed—in terms of both speed and scale?

JUSTIN REICH: To explain it colloquially: A middle schooler has never dragged a smart board into a classroom. Almost every technology in schools goes through a procurement process where a bunch of people get together and say, “Hey, I think we want to buy this. Let’s try a few products, do a pilot, and then we’ll implement it.” This process is useful for school leaders. It gives them control over when things happen. If you run a school that’s committed to improving reading, and you’re making progress, you’ll stay the course and keep doing what you’re doing. You don’t need the new tech that’s out—you can grapple with it on your own terms.

AI is different. It’s one of the first technologies that arrived whether or not schools wanted it, whether or not classroom teachers wanted it. There is no procurement process: It is something that school leaders and educators just have to deal with.

BORYGA: How has your experience seeing the rollout of previous versions of tech—Wikipedia, Google Search, smart boards, etc.—shaped your experience evaluating the adoption of AI?

REICH: For the last century, we have a history of a new technology being invented and people saying, “This is the thing that’s gonna transform schools!” Sure, new technology gets integrated into practices—but usually in ways that are smaller, narrower, and more targeted than people expect. People also tend to underestimate the mixed results that come down the line. You think one thing will happen, but often it’s another.

Take the Scantron machine, for instance. It offered tremendous efficiency gains in grading multiple-choice assignments. But are we really happy with the outcomes of the educational system after it became better at grading multiple-choice items? I’m confident we’ll look back in 10 or 15 years and see similar patterns with AI, where we perhaps have gotten some of the gains we were looking for, but also we may think, “Well, this went very poorly.”

BORYGA: In this current moment when we’re still trying to understand what AI can and can’t do, what support are teachers asking for from school leaders?

REICH: I’ve interviewed nearly 100 teachers about AI in schools, and they unanimously say they need policy and professional development. They’re telling us that working in an environment in which there’s no shared policy and no shared understanding of how you’re trying to use or manage AI tools is chaos.

So, the hard questions before school leaders are: What policy should I create? What professional development should I offer? Unfortunately, the answer right now, if you’re honest, is that you have no idea. No one can tell you right now that if you want your students to be best prepared for their future, your policy should be X or that you should be training teachers to do Y. There are too many tools coming out, and so much that we don’t yet know. We have no idea of the best way to teach a kid to use AI to develop as a writer or a thinker, for example.

BORYGA: Right. And as a result of this uncertainty, you’ve advocated for incremental changes—“short design trials”—rather than wholesale disruption in edtech use. What does that look like with AI?

REICH: It looks like getting smart people together in our schools, and talking to folks in the community, and figuring out what makes the most sense for right now. It means trying systematic applications of short design trials to figure out what a workable answer is, based on your current needs.

If you’re trying to improve reading scores at your school, for example, maybe there’s a place for a short AI design trial with a select group of students or a select number of teachers. But that does not mean we should be reinventing the science of reading for the entire school.

The message you want to convey is: Here are some things we’re going to try. We’re going to look closely to see how this is affecting our thinking, how it is affecting our learning, or affecting the output of our work. I think that light, iterative mindset is probably the best approach for school leaders right now. But that’s a hard place to be when so many people in your school community are looking for precise answers and rules.

BORYGA: If you were advising a district writing its first AI policy, what are the most important principles you’d tell them to focus on?

REICH: The thing that I’m not seeing folks do—that I really think they should do—is to start from a place of humility. Just be really open about the fact that we don’t know as much as we’d like to know about this new technology right now, but we do know that we have to do something. We know that we can’t stick our heads in the sand. So whatever path we take is going to be experimental. And you have to be humble and admit that. If that had been the message when smartphones came into schools, for example, it probably would have been easier to implement smartphone bans later down the road when we realized they weren’t all that helpful for learning.

The other thing I would love to see more of is drawing together various voices and mindsets from your school and district community to talk through ideas and try things out. There are some districts and schools right now pulling together people from the technology office, teachers, parents, and even students to create core groups that help shape policies and practices and provide excellent feedback on whatever policies already exist so they can improve. That’s great.

BORYGA: What else have you heard from teachers around AI that you feel like school leaders should be aware of when considering potential policies?

REICH: A growing theme is pushback against the ubiquity of technology access in the classroom. For a while there was an argument that if we don’t let kids get access to the latest technology in school, they’re going to really miss out on learning opportunities. But I think a growing number of teachers now, like Chanea Bond, are realizing that isn’t the case. They’re switching to composition notebooks, and in higher ed they’re going back to blue books.

I think something we underestimate is that in a world where you are constantly surrounded by technology, having a place where it’s all shut down and you just have the gift of being with your own thoughts feels really good. It’s sort of like when the computer lab first appeared and it was cool to go there because it was new and novel. Now, it’s the opposite. It feels novel to take the devices and tech away and just be together with notebooks.

I think there is going to be a desire for more of that, and the real challenge will be figuring out how to collect evidence about the best developmental pathways: when it is appropriate to use AI and when it isn’t. We don’t have that data available to us yet. At some point, we will have more research to help us say, “This is a good age to start introducing AI tools, and these are some good tools and approaches to use that don’t replace key skills.”


BORYGA: You’ve interviewed tons of school leaders about this subject, too. What are their main concerns that they’re trying to work through right now?

REICH: The main entry point for most school leaders into generative AI tools is cheating. There was cheating before generative AI, but in the last year the percentage of students who admit to academic malfeasance is really high. For school leaders that is an academic integrity problem, but even more so, that level of cheating means a lot of learning isn’t happening.

The other big, competing concern is the message they’re hearing from Silicon Valley and the business world: that generative AI is poised to reshape the labor market and that anyone who is not an expert on these tools will be left behind. As a result, I think a lot of them feel that they’re in a bind. And that bind is reflected in the policies they’re trying to roll out in response.

Last summer we fielded a big survey of teachers about AI. We’re still processing the data, but the percentage of teachers who say that they’ve gotten professional development for AI or work in a school with an AI policy is low. The percentage of teachers who say they have received really good professional development, or who work in a school with a really good policy, is exceptionally low—like one out of 20. So as it stands, a lot of teachers feel their school leaders haven’t dialed it in yet.

BORYGA: If there is a spectrum between moving too fast on AI adoption and moving too slow, where do you believe school leaders should try to fall in that spectrum right now?

REICH: I think we have to remember that if you race to get somewhere, you probably won’t do a very good job early in the race. But if you take your time, move more slowly, carefully, and deliberately, and check your steps, you might be better off in the long run.

For example, there is a lot of discussion about prompt engineering. But if you’ve used AI, you know that getting it to spit out stuff is not that hard. You don’t actually have to learn some complex set of skills or develop a new mindset or be an expert in prompt engineering. The really hard part is figuring out whether the stuff it spits out is any good, or of any use. There’s no AI skill that helps you do that. It’s only domain knowledge.

The only thing that can distinguish the quality of AI outputs is domain expertise. Now, if that is the case, isn’t it fortunate for schools that the thing they’ve been working on developing for years is domain expertise? The irony is that it could be true that the most important thing schools can do right now is continue doing the things that they’re already good at doing.

This interview has been edited for brevity, clarity, and flow.
