Getting Creative (and Critical) with AI Literacy

Anna Mills

Niya Bond

– Hi, everyone. I’m Niya Bond, the Faculty Developer here at OneHE, and I’m so excited to be joined today by Anna Mills. We are gonna be chatting about a very timely topic regarding AI and pedagogy. Anna, so glad to have you here. Would you tell our community a little bit about yourself and how you got interested in this topic?
– Sure. So, I’ve been an English instructor at community colleges in the San Francisco area for 20 years. And I wrote an OER textbook and kind of, you know, fell into the topic of AI about six months before ChatGPT came out. And I’ve been focused on it ever since, doing kind of resource gathering, faculty development, open approaches to it. It’s fascinating, it’s appealing, and it’s disturbing, and there’s such a contradiction in that. And I wanna share that with students. And I really loved the idea of doing that through humor and skepticism and finding the flaws, the surprising flaws in AI. And there’s a researcher, Janelle Shane, who wrote a book on AI in 2019 that is full of very funny examples of AI getting things wrong in ways that really teach us to see it as fallible, as not really human, not really thinking, not really sentient.
And I just, I wanna share that with students. I want them to have that experience of just really seeing, oh, it’s copying these patterns and it’s so plausible. And it’s so good on some level, and yet, it just completely doesn’t get it. It’s wrong. And I as a human can see that, right? And it can’t. And there’s that confidence building moment and that recognition of the value of their own judgment. And so, that’s something I’ve wanted to share with students ever since I read that book. And, you know, really internalized that as I work with AI. So, that’s kind of how I got interested in making sure students get practice, seeing where it’s wrong and seeing where it’s plausible and just wrong. I became convinced that that’s a critical part of AI literacy. That AI literacy is not just about, wow, AI is amazing. Look at all these things it can do and I can make it do, you know, fancy stuff. AI literacy is really getting what is the nature of this beast, and having skepticism of it, and having the confidence to constantly check it and to recognize where it’s wrong.
– Yeah, I love what you said there. And I love that there is that human element of kind of diving into that paradox that you mentioned, right? That tension that’s at play there, and how essential it is for the humans who have the skills of analysis, and reflection, and evaluation to kind of jump in and use those skills with that tool and really think about its use. And I love the idea of skepticism, like healthy skepticism, you know, as an approach. I’m an English faculty member too, and I think that’s just such a great way to frame it.
– Yeah. And I think it’s empowering as far as writing skills and voice in academic context because, you know, students sometimes come in so intimidated and impressed by the facility with academic language that chatbots have and kind of infatuated with that, which is understandable, you know. And they might wanna be able to command that kind of language themselves when they choose. But you know, being able to expose that for, you know, being sort of a surface level skill and recognize the value of their own judgment and their own ability to say something that means something as opposed to just saying it in this, you know, language of power kind of way is really valuable for their development as writers and as college students.
– Yeah, so you mentioned kind of helping learners do these tasks, develop this literacy. Can you share an example of how that happens in your educational environments or how you are building a bridge to those kinds of skills with AI?
– Yeah, so I’m looking for ways to invite students to engage with AI that would support some learning goal that I have and also give them experience seeing where it’s wrong. And where I’m encouraging them and creating the conditions so that they will have that experience. One way I’ve done that is through inviting them to engage critically with AI writing feedback in addition to human feedback. And really putting in kind of nudges to question the feedback, to see where it doesn’t align with their purpose, to see where it might’ve misunderstood what they were getting at or what the assignment calls for. And giving them example template phrases for that, to even just make it seem more, you know, more normal to push back even when things sound plausible and authoritative. Another way I’ve done it is through curating some examples of things that it gets wrong and sharing that. Either, you know, sources that it comes up with, or I’ll assign readings where they’re looking at articles about bias in AI image generation and seeing examples of that. And then, the next step is to invite them to engage with the chatbot and critique what it comes up with in a similar way. So definitely with image generation, that’s pretty easy to do. So, you know, asking for an image of a particular identity and then seeing does it stereotype that identity, does it misrepresent it in some way, you know, will definitely yield results.
It gets a little trickier with, you know, newer models of AI that maybe are not necessarily going to give you the wrong answer. You can encourage students to ask about things they’re really experts in. And so with my son, we looked at, you know, the history of the creation of “The Simpsons,” and who really worked on the writing, and things that he knew a lot about. And so, he was able to identify some subtle errors there. But, you know, I think we can’t rely on that because these systems are evolving and they will have sort of built-in fact checks and browsing and things, so that they won’t always get things wrong. Another approach that I’m testing is to actually ask chatbots for more than one contradictory answer. So, you know, ask them to be plausible, but give me mutually contradictory answers. And, you know, I’ve built a chatbot that does that automatically. And I can share that with students and ask them to say, you know, “Well, was one of the answers right, or none of them?” They can’t all be right, right? So, I think we have to get creative as the systems evolve, but I think there are ways to make sure they get that practice pushing back and seeing where the systems are plausible but wrong.
– Yeah, so that idea of evolution and sometimes, it seems like it’s hard to keep up with it because it is always evolving so quickly. How are you making space and time to familiarize yourself with those updates, so that you can do all these creative and important activities with learners?
– I think social media is actually probably my primary learning resource. And it is something I enjoy. So, I think that finding the social element and the playful element of that constant learning is really important to keep myself going. And finding a community. So, kind of a personal learning network on LinkedIn, and Bluesky, and some listservs on AI and higher education. You know, I think that’s the way social media is not frivolous. And there are a lot of educators who are really interested in understanding how AI is evolving and in supporting each other and being open to multiple perspectives on it. Dialogue where we disagree. So, I think that’s the space to keep learning, and it can be overwhelming, but people also recognize that on there, I think, and support each other around, you know, that sense of overwhelm. So, I think social media can be a more positive space than people sometimes give it credit for.
– Yeah, I would agree. I think I do, it feels like I do most of my professional development on social media these days too. So if we have community members who are interested in this, especially those ideas of skepticism, you know, human evaluation, a practice with the tool. Do you have suggestions for where they might get started if this is totally new to them? You know, you mentioned starting with a learning goal, and then kind of turning to tool use. Do you have any other strategies that we can maybe share with them?
– Yeah, I think one easy thing to do is to ask a chatbot for an example of some concept that we’re teaching and, you know, maybe invite students to give a theme so we’re tailoring it to something they’re interested in. And then the first step might be to model critiquing that example, and then invite them to ask for examples themselves. And, you know, there’s a great resource, aipedagogy.org, that has a lot of descriptions of lessons like this. One approach is to ask not just for an example, but for how to solve a problem or how to approach an issue, and ask students to kind of analyze and critique that. So, you know, that’s something we might do for teaching assistance anyway. So if I wanted to create new quiz questions or an in-class activity involving examples, it’s, you know, an easy way to create new material, but then we can also model for students how we’re assessing and questioning it. I’ve found that if you ask for anything themed around AI, any chatbot that I have worked with is gonna tend to be a lot more positive than my prompt has actually suggested. So, it’s always pushing a little bit in the direction of AI hype. And so, I’ll point that out to students, and kind of, you know, laughing and rolling the eyes and like, yep, it’s doing it again, I think, you know, sets the tone. And then, inviting them to do similar experiments is kind of a low-prep way to get started with that. And I have a set of template phrases for questioning plausible AI outputs in my OER textbook. So, I’ll share the link to that if you wanna share it with the community.
– Yeah, cool. Amazing. Well, we always like to leave the final thought to our experts. So is there anything you’d like the community to know about this topic in particular that we haven’t covered or maybe a little bit of inspiration? And also, please mention the title of your book so they can go out and look for it.
– Oh, okay. My book is “How Arguments Work: A Guide to Writing and Analyzing Texts in College.” It’s an open, free textbook, howargumentswork.org. And I guess I would just add that, you know, students may be ahead of us on this. Many of them have seen lots of funny examples of AI gone wrong. And they know that when they do a Google search and they get an overview, the Google AI overviews are often off. And so, you know, I think it’s a space where we can build on that kind of social media, pop culture awareness of AI that they have. And connect it to a more empowering sense of who they are in relation to AI in an academic or professional context. That they have some tools, they have some habits of mind that they can use in college, in the workplace. And that will help them, give them confidence, give them a sense that they have something to bring. That skepticism is very practical as we’re looking at working with AI. How do you bring something beyond what AI can do? You’ve gotta see where it’s still flawed. And so, I think that’s where the hope of having some sense of agency, some sense of humor, some sense of play in relation to these technologies lies. I think in having that little distance, that skeptical distance. And it’s something we can enjoy with students, I think, which is important at this moment when AI is overwhelming, and there’s fear, and there’s uncertainty, and there’s just a sense of it’s so big, it’s like a god, right? It’s not a god, right? It’s technology, and we can find it in ourselves to see it in a very human way and to see its flaws. So, that’s my hope: to share those moments with students.
– Well, I love that. It’s so inspiring. And I love that we’re kind of ending on the idea of human agency and empowerment and also connection, right? Because we’re having these conversations human to human, and we’re doing this kind of analysis together. And just that dialogue that you’re having with your learners is really inspiring.
– Thank you. It’s wonderful to talk about it with you.
– Well, I hope we get to chat again, and thank you again for being here with us today.
In this video, Niya Bond, OneHE Faculty Developer, talks to Anna Mills, English Instructor at the College of Marin, USA, about AI literacy skills and the importance of engaging students creatively and critically when they use AI in their learning process. Anna shares examples to illustrate how she uses AI with her students. Her suggestions for getting started include:
- Create opportunities for “Critique the AI” tasks to help students practise spotting flaws in AI-generated answers. For example, do a class activity by asking AI a subject-related question and analysing the response together.
- Teach sentence starters that encourage students to question AI outputs confidently, such as “What evidence supports this?” or “Is this misleading?” Normalise scepticism as part of learning.
- Build students’ awareness of AI mistakes by letting them test AI on topics they know well, such as pop culture or history. Compare conflicting AI responses to discuss reliability.
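The “compare conflicting AI responses” activity can be scaffolded with a very small script. The sketch below is hypothetical, not Anna’s actual Contradictory Chatbot: the prompt wording, the function names `contradiction_prompt` and `parse_numbered_answers`, and the numbered-list reply format are all assumptions. It shows the two pieces such an activity needs: a prompt that asks a model for several plausible but mutually incompatible answers, and a parser that splits the reply into answers students can fact-check.

```python
import re

def contradiction_prompt(question: str, n: int = 3) -> str:
    """Build a prompt (hypothetical wording) asking a chatbot for n
    confident-sounding answers that contradict one another."""
    return (
        f"Answer the question below {n} separate times. Each answer must "
        "sound plausible and confident, but the answers must be mutually "
        "contradictory. Number them 1., 2., and so on, and do not say "
        "which one is correct.\n\n"
        f"Question: {question}"
    )

def parse_numbered_answers(reply: str) -> list[str]:
    """Split a numbered-list reply ('1. ... 2. ... 3. ...') into answers."""
    parts = re.split(r"(?m)^\s*\d+\.\s*", reply)
    return [p.strip() for p in parts if p.strip()]

# A canned reply stands in for a real model response here, so the sketch
# runs without an API key. In class, the reply would come from a chatbot.
reply = """1. The Simpsons premiered as a full series in 1989.
2. The Simpsons premiered as a full series in 1987.
3. The Simpsons premiered as a full series in 1992."""

answers = parse_numbered_answers(reply)
for i, answer in enumerate(answers, start=1):
    print(f"Candidate {i}: {answer}")
```

Students then research which answer, if any, is right; since the answers contradict each other, at most one can be, which makes the need for their own judgment concrete.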
References:
- Shane, J. (2019). You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. Voracious/Little, Brown and Company.
- Mills, A. (2025). How Arguments Work: A Guide to Writing and Analyzing Texts in College. LibreTexts.
Useful resources:
- Aipedagogy.org – A collection of resources for educators curious about how AI affects their students and their syllabi.
- Developing Critical Thinking Skills with AI – A slide deck with examples of how to use AI in pedagogy in ways that encourage skepticism of it.
- Template Phrases for Critiquing AI Outputs
- Contradictory Chatbot – A chatbot that gives plausible but mutually incompatible answers to each query.
- Contradictory Chatbot for Research – A bot that browses the Internet and gives three contradictory answers to each question, each with a link.
- AIWeirdness.com – The blog of Janelle Shane, author of You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place.
DISCUSSION
What’s one small way you could invite students to spot or challenge where AI gets things wrong—while also reinforcing their own judgment or voice?
Please share your thoughts in the comments section below.