Promoting Ethical AI Use Among Educators and Students

Niya Bond

Marc Watkins

– Hi everyone. I’m Niya Bond, the Faculty Developer here at OneHE, and I’m excited to be bringing Marc Watkins to you today. Marc and I are gonna talk about AI ethics. Marc, I’d love for you to introduce yourself to the community. I know you do a lot of different things, but tell us how you got interested in AI and a little bit about AI ethics.
– Well, thank you so much for having me. My name is Marc Watkins. I teach at the University of Mississippi, where for the past 10, actually 11, years I’ve been a lecturer of Writing and Rhetoric. Most recently, because of artificial intelligence, I’ve become the Director of the AI Institute for Teachers. I’m also Assistant Director of Academic Innovation here. We started really going into AI way back in May of 2022, about six months before ChatGPT was launched. We became aware of GPT-3, the precursor to ChatGPT, and we started building some small assignments to test with first-year writing students to see if this tool could actually help with their learning or harm them.
We’ve since published research on that, in probably half a dozen different places. The big takeaway is that students can learn with AI in very small doses, but they have to have very clear instruction. And our overall takeaway from our students was that they were not very eager to offload all of their writing process to an algorithm. They would use it for small things like counterarguments, research questions, and in some cases small bits of feedback here and there. But they’re not too keen on writing full paragraphs, let alone full pages or essays, with this technology.
– That’s really interesting, because I think the fear of some educators is that that’s exactly what would happen, right? That it would just be used for the entire process of producing documents and thinking. So have you noticed if that has changed, or is that still the takeaway?
– It’s evolving as students become more, I won’t say AI literate, though that’s our goal. To me, being AI literate means making the choice to use it appropriately and ethically. I do think our students are exposed to far more generative AI technologies than they ever have been, and that’s probably going to continue. So some students are trying to use this for academic misconduct and be dishonest about it. I had some students in my last course, a digital media course, submit reflections that were completely AI generated. And of course, if you’ve taught writing, you know that reflective writing is personal. It’s literally first person, about your learning experience. And it can be kind of heartbreaking and gut-wrenching to see a student turn to an algorithm to talk about their learning experience in your class. So it is something that we are seeing more of, and we do want to try to combat it, but we don’t want to combat it in a way that hurts our students by turning to unreliable AI detectors or any type of surveillance software. We really want to shift to talking about ways we can support student learning using AI. And I think that starts by being open and transparent about our own AI usage. This is kind of controversial, at least with a few professors I’ve talked with. But a lot of faculty now have access to different types of AI programs that they can use for instructional design. They can build assignments, they can build rubrics.
And my takeaway from all this is that you can have those technologies, but we wanna have a baseline. And one of those baselines is that if you are gonna use this in education, you label that you used the technology, so it’s clear and transparent for your students. This becomes a really powerful teachable moment, because they can actually model that sort of behaviour from you too. So every time I use any type of generative AI on an assignment, whether it’s to help me verify some questions I’m asking them or for something else, I try to label it, and I also try to link to the actual prompt I used, whether it’s through OpenAI’s ChatGPT or Anthropic’s Claude or Google’s Gemini. Most of those will support some kind of external linking. And I tell them, look, this is the baseline I think we should have, because there’s no real way I can prove that you used AI, just as there’s no real way you could prove that I used AI. So let’s go into this with a mutual sense of trust, where we want and expect an ethical standard of transparency from each other in the classroom.
– I love that idea, and I love the multi-layered conversation and modelling that you’re sharing. It sounds like a big part of AI ethics is exactly that: open conversation, transparency, dialogue. Are you finding that you are having those conversations with your learners?
– Oh, I’m having those conversations all the time. And even students who are really excited about AI will become very hesitant if you talk with them about this, because they realise that after they get done with their studies here, they’ll go into their careers, and their boss may ask them to use this technology in some way. And if their boss is not going to be open about using AI, or about that sort of expectation, they’re gonna be very concerned and very thoughtful about what that means. So yeah, we’re definitely having those conversations. Sometimes you get a receptive audience; sometimes it’s more difficult to talk with students. I’ll say that, especially after the pandemic, we’ve been having more issues not only getting our students physically in the classroom, but, once they’re there, being able to talk with them and get on the same page.
– Yeah. So you mentioned being an early adopter of these technologies even before ChatGPT officially launched in the form we know it today. What were you thinking about then as far as AI ethics, and how has your AI ethics philosophy evolved since?
– So it’s evolving every few days, it seems. I think the best way to approach the technology isn’t as an adopter, or as someone who wants to ban it and, you know, throw a bucket of water on OpenAI’s servers and get rid of it completely. I think the best way to approach it is as a curious skeptic. I’m curious because I want to explore what this technology can afford me, and maybe afford my students too. But I’m also very skeptical about whether this is actually helping them learn. And that, I think, is gonna be the really hard part going forward for the next few years: once we can get to a situation where everyone’s being open and transparent about using these tools, the big question is, how is this actually helping me learn? That’s what I’m really thinking about this fall: trying to get my students to not only clearly talk about how they use AI, but to start really recognising how it actually impacted their learning in either positive or negative ways.
So on each assignment I’m allowing my students to use AI, but there’s an intake form, and that intake form asks how they used it throughout their process and gives them space to reflect on how it actually impacted their learning. There’s a series of questions I ask them on those intake forms that changes with the assignment; it changes with the context. The questions aren’t leading in any way. It’s basically: was this helpful to you? Great, tell me how. Was this a burden to you? Did you hate using this? Are you never gonna use this again? Talk about that. Those are all really important things we need to find out, because from everything we know, AI is not going anywhere anytime soon.
– Now, you mentioned having students work with AI on their assignments. As you’re doing that and building it into your assessments, are you putting any limitations or restrictions on it? You know, a lot of educators allow it for brainstorming, but not necessarily for full content development. How are you thinking about using it and implementing it?
– So we’ve tried three different things, and three different things have sort of failed. That’s okay; that’s part of experimenting with the technology. For this fall, what I’m gonna do is label each assignment with a red light, yellow light, or green light indicating the level of AI use allowed.
– Okay.
– I mentioned reflective writing beforehand. Reflective writing is not something an AI tool is really very good at assisting with. Now, you can use it for grammar and for sentence mechanics; that’s perfectly fine. But in terms of working through your ideas, or having it actually write for you, that would be a red light. And I’ve explained that in the short label on the assignment, saying this is not appropriate. But most of the assignments are gonna have a yellow light, more or less. And that’s going to be: if you’re gonna use AI, we want you to be transparent about it. We want you to label it. We want you to follow the MLA and APA citation guidelines for bringing it into your actual work.
And more importantly, we want you to talk about what the actual experience meant for your learning. That can also be combined with a link to a particular tool. We’ve used a lot of different tools just to test things with them. One that I think has been very helpful is called Lex. It’s a writing program that has AI built into it. And what I really like about Lex is that it’s not primarily a text generator. It has that option, but it’s literally just like a normal writing program. If students want to call up AI assistance they can, but they mostly use it to get feedback from the AI.
– Interesting. Alright, well, I appreciate you sharing that tool with our community. So, you mentioned open transparency, and you mentioned approaching things as a curious skeptic. Are there any other tips that you could share with our community on AI ethics, usage, and interaction?
– Yeah, your students are just as bewildered by this technology as you are. They’re not necessarily looking at this as a tool to cheat. Some of them obviously will, but that’s not the major takeaway I’ve seen from my students. They’re looking to you to see how they can actually use this tool both effectively and ethically. And that actually gives you a lot of power in this situation, which is something I wanna really emphasise. I have talked with a lot of teachers who feel completely powerless in the face of this technology. They can feel completely powerless because you can’t really detect it. But your stance on AI and your ability to guide your learners through this process is one of the most powerful tools you have available to you.
And that is going to shape how this technology is used in public too, because your students today, when they graduate, are going to be using these tools out in the world, not just in their jobs but in everyday life. And how you actually talk to them about this is going to ring in the back of their heads in different ways. So that’s a powerful tool to use. I know we sometimes forget that because we’re awash in different AI technologies, but it’s something I really want to emphasise: we do have a lot of agency in this.
– I appreciate you mentioning that, because I have heard that fear echoed in the educator community, that idea of powerlessness. For my own part, I take heart in the fact that it seems like humans, and especially educators, are still really needed to use this tool and use it appropriately and, like you said, ethically. I don’t know if that will always be the case. Maybe one day humans won’t be needed, but for now we’re essential. Right?
– We’re essential right now. I would likewise say that this is another reason for us to be very skeptical about these tools and very thoughtful about them, and also a reason to educate ourselves about them. We are seeing some very early experiments where they take the human completely out of the loop, and as an educator, I want to push back against that as much as I can, because I do not think it’s helpful. Especially with these new waves of multimodal AI, where a live camera can look at you and talk with you with very low latency, meaning there’s no pause between when the AI hears you and when it responds. So I think we wanna be very cautious about how we integrate that into education, because it is going to affect our students’ learning. It’s probably gonna affect our labour in some ways too.
– Yes, great points, and I appreciate everything you’ve said so far. I know we are running out of time for our interview, and I always love to leave the last tidbit to the expert. So, is there any one thing you wanna say to the community, or any words of encouragement on AI ethics that you’d like to leave us with?
– Well, you know, the biggest thing is obviously to be as open and transparent about it as you can. But the key framework we wanna think about is that you don’t have to adopt AI to talk about AI with your students. That is, in itself, a really powerful teaching mechanism: just having a conversation about AI ethics. There are wonderful articles being written almost daily about AI and ethics that you can assign your students to talk about. So don’t feel like you have to adopt a tool and bring it into your classroom to get your students to start thinking about or talking about AI.
– Well, thank you so much. For those who do wanna potentially adopt AI, do you recommend playing with the different tools to learn about them first, kind of jumping right in? Or do you recommend turning to some of those readings you just mentioned first and taking a bit of a slow start?
– I always recommend a slow start. The one thing about this technology is that it presents a frictionless experience to the user: you type in something and it gives you an answer. Sometimes that answer is incomplete, and sometimes it’s incorrect, it’s hallucinated. We wanna slow things down and start thinking critically about what’s gained by using an AI tool, which is usually time, versus what might be offloaded or lost, which might be a skill. So be cautious and thoughtful about your adoption of it. There are wonderful resource hubs. Harvard just put out their AI Pedagogy Project, which has assignments on it. The WAC Clearinghouse has the TextGenEd collection, to which I actually submitted two assignments. So there’s a lot out there. They talk about different teaching experiments and give you a step-by-step guide to using these tools and approaching them as thoughtfully as you can.
– Awesome. Well, I know our community is gonna love to check out those resources and check out this interview where you’ve shared so many insightful tidbits with us. We really appreciate your time today and we hope we get to chat again.
– Well, thank you for having me. I really appreciate it.
In this video, Niya Bond (OneHE Faculty Developer) talks to Marc Watkins (Director of the AI Institute for Teachers and Assistant Director of Academic Innovation, University of Mississippi, USA). Marc shares insights from the institute’s work and how he is updating his assignments to incorporate the ethical use of AI among his students.
Below are the key points from the interview:
- Adopt a curious skeptic stance; these tools have benefits and drawbacks, and we should always be aware of both.
- Always go for the slow start—try turning to articles and research first, then adoption second; consider how/if the tool can help with learning and return to that idea throughout use.
- Maintain open transparency about AI use; when using it to build assignments, cite that you did so. This helps model ethical AI use for learners.
- Remember that you don’t have to use GenAI (Generative AI) to talk about it; this is a tool and technology that learners will encounter in the public, so it’s relevant to course conversation, even if it isn’t integrated at this time.
References and useful resources:
- Rhetorica – Marc’s Substack newsletter.
- Make AI Part of the Assignment – Marc’s article in The Chronicle of Higher Education, October 2024.
- MLA-CCCC Joint Task Force on AI and Writing – resources, guidelines, and professional standards around the use of AI and writing by The Modern Language Association and Conference on College Composition and Communication.
- Harvard’s AI Pedagogy Project – a collection of curated assignments that integrate AI tools.
- TextGenEd: Teaching Experiments Using Text Generating Technologies – the WAC Clearinghouse guide.
- UVA’s Generative AI in Teaching and Learning – a collection of resources by the University of Virginia.
Discussion:
What key points regarding ethical AI use resonated with you, and why?
Share your thoughts in the comments section below.