How to Avoid 5 Common AI Pitfalls
Even with the best of intentions, problems might arise with artificial intelligence tools. Here’s how to keep things balanced.
Estonia is a small country of just over 1.3 million people. However, it’s a big international player when it comes to AI. In September 2025, all Estonian schools will give 10th- and 11th-grade students their own AI accounts. This is part of “AI Leap,” a national initiative with OpenAI to integrate AI technologies into teaching and learning. It’s certainly an intrepid move in an era when many schools are still grappling with their students’ use of such systems and enacting strict device bans.
AI offers significant benefits to schools, from adaptive learning platforms to help with tasks like generating student feedback. Nevertheless, schools must also be mindful of the pitfalls that come with AI use. There will always be wider philosophical concerns about the potential of AI to replace teachers or erode students’ critical thinking, but I’m focused here on the pitfalls that come with even well-meaning use of AI in our classrooms.
Pitfall 1: Diving in Without Due Diligence
It’s crucial that we make AI decisions based on evidence rather than exciting ideas. In Estonia’s case, this means piloting OpenAI’s ChatGPT Edu with a group of students, many of whom were likely already using AI; Estonia already has very high rates of AI use overall. In our own school communities, on a smaller scale, reading research on the topic is essential, such as findings from a survey of educators across 50 states or the special issue of the Journal of the Chartered College of Teaching on the safe and effective use of AI in education. That reading can focus on the specific positive and negative effects that AI use has had on education.
A pilot program is a necessary part of implementing AI technology. In my school this year, a group of virtual reality (VR) enthusiast volunteers trialed VR in their lessons. They had time and space on professional development days to complete basic training and then receive support from an edtech-savvy colleague to implement VR with middle school students. Our next step is to evaluate the program and collect feedback from the teachers in the pilot. We’ll use the Stop/Start/Continue framework, just as we did when we evaluated our use of AssessPrep, an AI-powered assessment platform we use with students.
Due diligence also applies to writing and revising your school’s AI policy. Be completely clear about how members of the community can raise concerns about AI use and who will address them—likely a colleague well-versed in AI and child safeguarding. My school, which belongs to a larger education group, has a helpful group-wide AI Position Statement and an AI Policy covering the “Do’s and Don’ts” of staff and student AI use. In addition to this guidance, we have our own bespoke community AI policy, kept up-to-date with school-specific staff and tools.
Pitfall 2: Ignoring the Fact That Algorithms Don’t Favor Diversity
The biases built into AI systems are human biases, and majority groups have a larger presence in the datasets that fuel AI. Because of this, AI has the potential to reproduce stereotypes and further marginalize certain groups of people. For example, facial recognition technology and search algorithms have been found to associate Romani people, a group that faces severe prejudice and discrimination in Europe, with criminality.
When AI shares harmful or erroneous information, there is often no clear mechanism for correcting or reporting it. This misinformation can negatively affect our students’ well-being. We can teach them how to check for bias in AI outputs, create a plan for reporting negative AI encounters at school, and follow up with our students to support their well-being.
Pitfall 3: Compromising Children’s Agency
It’s our duty to safeguard students’ privacy and security. As such, we can give AI technology a firm place in our schools’ risk management plans and develop a strategy for continually educating students, staff, and parents. We can also guide teachers on how to proactively discuss AI use with students and offer training on how to supervise students’ AI use and how to respond when they suspect unethical use.
Digital literacy can be part of the curriculum from students’ youngest years, along with learning how to report misuse of AI. At my school, our counseling team advocates for an “upstander” approach—empowering students to intervene if an inappropriate image, such as a deepfake, is shared with them, rather than remaining passive bystanders.
Teenagers are especially prone to making impulsive decisions, and cutting corners on schoolwork with AI is an alluring prospect. Thus, we can make sure to routinely ask our students about their experiences with AI and adjust our approach accordingly.
Pitfall 4: Neglecting Parent and Guardian Education
We can bring parents and guardians along with us on the AI journey. My school is planning a parent/guardian education evening in the fall, where we will be presenting our AI policy, reviewing key vocabulary, and using the International Baccalaureate’s document “Evaluating 13 Scenarios of Artificial Intelligence (AI) in Student Coursework,” which gives explicit examples for what constitutes ethical use of AI or misconduct.
In speaking with parents and guardians, it’s crucial to emphasize process over product. Our aim is to see students’ learning and thinking step-by-step to ensure that the final product is something students created themselves. Helping parents and guardians view assessment as a tracked process that’s focused on learning will reduce conflicts later over whether or not their child used AI on an error-free assignment that seemed to appear out of thin air.
Pitfall 5: Relying on AI Detection Tools
AI detection is a big industry that lacks solid supporting evidence. While detection tools can flag legitimate misuse, they aren’t always reliable, and that’s especially problematic where academic misconduct is concerned: an accusation can have significant and lasting consequences. In a recent article, OpenAI itself stated that “it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.” In that same article, OpenAI noted that a detector it had attempted to build identified Shakespeare’s writing as AI-generated.
If a teacher suspects that a student has used AI unethically, they can ask open-ended questions about the work, compare it with other assignments the student has written, or even give a modified assessment that lets the student demonstrate the same learning.
It remains to be seen whether Estonia’s effort to fully embrace AI and equip its schools with it is the sensible way into the future, but even without such a bold approach, AI is now firmly tied to our learning environments, and its presence continues to grow. There is much good practice we can anchor to, such as using AI to quickly and easily develop inclusive resources, yet if we don’t heed the pitfalls that come with AI use, we’ll be engulfed by the parts of AI that can make our jobs burdensome and unpleasant.