Video transcript
When we talk about AI and education, it’s really easy to focus on the tools: the apps, the plugins, the platforms. But it’s important to remember that the real shift isn’t technical; it’s pedagogical. That’s because we’re rethinking what it means to teach, to guide learning, and to evaluate understanding in an age where generative AI is increasingly present in our learners’ lives.

Now, I’m personally staying positive about AI, with a healthy dose of skepticism, because I truly believe AI is not here to replace the skills we care about as educators; for me, as a writing instructor, those are deep reading, careful research, and the craft of writing. Instead, it’s here to ask us how we help students develop those skills while acknowledging the tools at their fingertips.

We can’t deny that there are risks with AI use. There’s hallucination, there’s built-in bias, and there’s an environmental impact that we can’t ignore. In some cases, AI might assist with academic integrity violations. These are all valid concerns. But addressing these risks doesn’t mean walking away from this technology. For me, it means leading that journey with intention.

I think of it this way: we don’t need to compete with AI; we can contextualize it. We can help our learners ask better questions, not just get faster answers. We can support creativity, analysis, and reflection with careful and intentional prompting and evaluation. What AI cannot do, at least for the moment, is replace human cognition, original research, or the deeper thinking and synthesis that we aim to foster in higher education.

That’s why it’s essential that we, as faculty, guide learners in using AI as a learning aid, not as a shortcut. We need to clearly articulate what responsibility AI has in our discipline, what its use looks like, and why, when, where, and how it’s appropriate. But just as important as the rules we set is the why behind them.

As I’ve said, I’m not without a healthy dose of skepticism and, honestly, when it comes to the environment, even a little fear about the continued evolution of AI. But I’m choosing to treat this moment not as a threat, but as a chance to reaffirm the value of human teaching in an AI-assisted world. And I’m inviting you to do the same.
While generative AI offers exciting possibilities for education, it’s crucial to understand its limitations and potential risks. Being aware of these challenges doesn’t mean avoiding AI altogether—it means using it more thoughtfully and responsibly.
Accuracy and Reliability Issues
- Hallucinations: GenAI can confidently present incorrect information as fact. These “hallucinations” can include fake citations, inaccurate historical dates, or entirely fabricated research studies. Always verify important information, especially when sharing content with students.
- Knowledge limitations: most AI models have training cutoffs, meaning they lack knowledge of recent events. They also struggle with highly specialized or rapidly evolving fields where accuracy is critical.
Practical response: use AI for brainstorming and first drafts, but always fact-check before finalizing content for educational use.
Bias and Fairness Concerns
- Embedded biases: AI systems reflect the biases present in their training data, which can perpetuate stereotypes or present skewed perspectives on controversial topics. This is particularly important when creating content about diverse populations or sensitive subjects (Shieh, n.d.).
- Limited perspectives: AI tends to reflect dominant cultural viewpoints and may not adequately represent marginalized voices or alternative perspectives that are crucial in higher education.
Practical response: consciously seek diverse viewpoints when using AI-generated content and encourage students to think critically about AI outputs rather than accepting them uncritically.
Environmental and Social Impact
- The human cost: major tech companies use reinforcement learning with human feedback (RLHF) to refine chatbots, often relying on low-paid, vulnerable workers—such as refugees—to perform this demanding and distressing labor (al-Hammada, 2024).
- Energy consumption: training and running large language models require significant computational resources, contributing to carbon emissions. While individual use has minimal impact, widespread adoption raises sustainability questions.
Practical response: teach students about the hidden human and environmental costs of AI, such as the labor behind RLHF and the energy demands of model training (Rowe, 2023; Bartholomew, 2023; Luccioni et al., 2024).
Academic Integrity and Institutional Considerations
- Policy gaps: many institutions are still developing AI policies, creating uncertainty about appropriate use. This affects both faculty adoption and student guidelines.
- Academic integrity: traditional assignments may need modification if students have access to AI tools. This presents both challenges and opportunities for more authentic assessment and open dialogue with students.
Practical response: involve students in developing AI use guidelines for your assignments to build shared understanding and responsibility. Set clear expectations about how AI can and cannot be used, and have open conversations about its benefits and limitations. Adapt assignments to focus on personal reflection and original thinking, making it harder to rely entirely on AI; see the What’s Authentic Assessment? guide.
Learn more about academic integrity and GenAI from this webinar recording on Key Academic Integrity Considerations in the GenAI Era with Tricia Bertram Gallant.
Moving Forward Responsibly
Understanding these limitations doesn’t mean avoiding AI – it means using it wisely. The key is maintaining human oversight, encouraging critical thinking, and staying informed about evolving best practices in your field. As you continue experimenting with GenAI, keep these limitations in mind and view them as part of developing digital literacy rather than reasons to avoid the technology entirely.
Further reading:
- al-Hammada, R. (2024). “If I Had Another Job, I Would Not Accept Data Annotation Tasks”: How Syrian Refugees in Lebanon Train AI [Coordination by M. Miceli, A. Dinika, K. Kauffman, C. Salim Wagner, & L. Sachenbacher]. (Accessed: 20th June 2025).
- Bartholomew, J. (2023, August 29). Q&A: Uncovering the labor exploitation that powers AI. Columbia Journalism Review.
- Luccioni, S., Trevelin, B., & Mitchell, M. (2024, September 3). The environmental impacts of AI – Primer. Hugging Face.
- Rowe, N. (2023, August 2). ‘It’s destroyed me completely’: Kenyan moderators decry toll of training AI models. The Guardian.
- Shieh, E. (n.d.). Stop calling it “AI literacy” if it doesn’t teach history. Civics of Technology [blog].
Discussions
Which of these limitations concerns you most in your teaching context, and how might you address it while still exploring AI's potential benefits?
Please share your thoughts and questions in the comments section below.