Exploring the Ethics of GenAI

Catherine Denial

Video transcript
– Hello everybody. It’s so great to be with you here today, wherever you are in the world. I was about to say this afternoon, but realized some of you are ahead, so good to see you. Please, as we’re going along, feel free to chat to each other, and please put questions into the Q&A box at the bottom of the screen. So this is me, I am Cate Denial. I am the Bright Distinguished Professor of American History, the Director of the Bright Institute at Knox College in Galesburg, Illinois, in the US. And I am an Associated Colleges of the Midwest Leadership Fellow from 2025 to 2027. For anybody who has reading or visual difficulties, I’m gonna read the text on each slide aloud. And all of the photographs that illustrate my slides came from pexels.com, which is a free service, so you can go use these in other presentations you might be making.
So let’s talk about generative AI. Here’s what I cannot do this afternoon. I cannot provide the magical one-size-fits-all fix-it to this situation that we find ourselves in. But what I can do is suggest some different ways to approach the problem of generative AI and how it intersects with our teaching. So where to begin? Well, with some caveats. There are some popular myths about how to detect generative AI in student work. There are no reliable tools that can detect the use of ChatGPT and its ilk. So I wanna get that out of the way, first of all. Em dashes are not a sign of GenAI use, long words are not a sign of GenAI use. So where do I begin? Well, when this all first exploded, I sort of said to myself, “Oh my gosh, this is gonna be a disaster, right?” And then I calmed myself down, reminded myself that when I do things quickly, I usually regret them, and decided to become much more slow in my response. I also went back to my values, and one of my key values as a teacher is that I trust my students. Our classrooms are not full of students with nefarious intentions. They may be misguided, they may be experimenting, and of course, there may be students who straight up just want to not do the work, but we can deal with outliers as and when they arrive rather than suspecting everyone of something terrible from the outset. I also believe in transparency. I want to be able to articulate my pedagogy in positive terms. So instead of saying, “You can’t do this because I said so,” I wanna be able to say, “Pedagogically, there’s a really good reason for the choices I’ve made around GenAI in our classroom.”
So I landed on ethics as a way to have a conversation with my students around so many of the issues that surround GenAI. At its best, post-secondary education is all about encouraging critical thinking. What do our students need to know to make educated decisions about generative AI? I once gave a version of this presentation at an institution where someone said, “Our students sometimes don’t have ethics.” But the issue is everybody’s got ethics; we just need to surface them. And ethics are also something that can be taught. There were many things with which I wanted my students to grapple, and I wanna give you just a taste of some of them right now. First, I wanted them to think about how large language models actually work. ChatGPT and other similar products do not generate knowledge, but instead work by means of sophisticated predictive text operations. And this link is available, the slides are available, and so you’ll be able to link to these things yourself to see the articles that this is based on. The reason there is a picture of a cell phone there is because ChatGPT, while scaled up and much more sophisticated, is still pretty much the same as the predictive text feature on your phone. So when you’re texting someone and it predicts what the next word might be in the sentence and helps you type faster, that’s the same mechanism that’s at the heart of ChatGPT.
I wanted my students to think about labor practices. “To teach Bard, Bing, or ChatGPT to recognize prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence, and sexual abuse. The people who are doing this are overwhelmingly children from West Africa,” and this piece in The Guardian was key in surfacing that. “Here in the United States, stress, low pay, minimal instructions, inconsistent tasks, and tight deadlines (the sheer volume of data needed to train AI models almost necessitates a rush job) are a recipe for human error.” And then there’s the environment, which often comes to mind when people think about the ethics of ChatGPT and its ilk. AI needs water to generate the electricity that powers the servers and water to cool them, and that water has to be pure drinking water. It cannot be dirty water, it cannot be salt water. The ethical considerations are enormous when we consider global water shortages, climate change, and profit motives. Then there are things like the physical infrastructure, which includes electrical hardware, GPUs, processors, and cooling systems. And according to the Shift Project researchers, and this report only just came out, “The sector is on an unsustainable climate trajectory.” There are real questions of access and disability to wrestle with too.
AI innovation is predicated upon the eugenicist belief that humans need to be perfected through technology. And a wonderful resource for this is Rua M. Williams’ new book, “Disabling Intelligences,” which is a really fantastic book all about the eugenicist roots of the ideologies behind generative AI. There are questions of access. Developers are racing to become the first in the field to do something and are rarely thinking about access issues as they do so. Then there’s the question of GenAI bending our reality. These quotes come from a Rolling Stone article that just came out about a man who went missing after developing a really personalized relationship with his ChatGPT output.
And this quote: “Part of what keeps us sane is other people’s perspectives, which are often in tension with ours. When you say something questionable, others will challenge you, ask questions, defy you. It can be annoying, but it keeps us tied to reality, and it is the basis of a healthy democratic citizenry. Truth is intersubjective, meaning that we need other people, their testimony, their experiences, their rationality, to be well-informed. And chatbots are not people. They don’t have experience. They are not witnesses. They are fancy wordplay.” I really want my students to think about where their information goes when they sign into some kind of ChatGPT interface. And so this is an article that talks about many of the privacy issues that surround chatbots. And then there’s the issue of how we talk about AI. Eryk Salvaggio says, “New technologies need simple metaphors to thrive.” And those simple metaphors are really powerful. That is why so much reporting on GenAI tends to sort of uncritically repeat the newest releases from GenAI firms. “Myths and metaphors,” he says, “aren’t just rhetorical flourishes, they’re about power. They tell us who has it, who should have it, how they should use it, and in the service of what goal.” So having my students pay attention to what some of those narratives and mythologies are around things like ChatGPT is really instructive.
So I have students read these articles, then what? Well, I have my students do reading reflections. These are ungraded. They do not get any response from me on the actual reflection, and they’re completed after every reading in my class. They always ask the same questions. “What new things did you learn from the reading? What do you think it’s important we talk about today? What left you confused? What questions do you have? And is there anything else you want to share?” Here are some responses from some of my students after reading some of these articles about GenAI. These are anonymized, and they are all used with permission. So Wendy said, “I’ve never heard about this before.” Emily said, “I did not know anything about this topic.” Vic said, “Thank you for having us learn about this. I’m really glad I know about it now.” And Dakota said, “Thank you for bringing this issue to my attention. I didn’t know anything about it.” Here are some longer responses from that same exercise. “I think it’s important that we go beyond the conversations about academic integrity surrounding ChatGPT to address the effects that AI is having on folks, including children in the global south, and think about why this is not a bigger part of the conversation around the ethics of AI.” Jordan said, “Most of the time we only talk about AI in terms of academic integrity, which is important, but this information frames it in a new way.” Jean said, “As a society, I feel like we never care about what goes on behind closed doors, instead we’re content with the shiny new toy and want to see what it can do, and leave the rest for someone else to worry about.” And Alex said, “We are asking the wrong questions. ‘Can we do this’ instead of ‘Should we.'”
So this kind of exercise is scalable. You can do this very intimately in small seminars and small groupings of students, but you can also work with these kinds of things at much larger scale. So for instance, you can do this by asking people questions and having polling software take the temperature of the room on issues related to generative AI, and then have the results projected up there in graph or pie chart form as impetus to discussion. You can also have your students write position statements. So have students do the metacognitive work of articulating their position on generative AI. Will they use it? In what ways? To achieve what ends? If they won’t use it, what has shaped that decision? When I first worked with students around these issues of ethics, this is how I finished up that unit. I had them write their own AI policy, collected those in, and held them to those policies for the rest of the term. Key to really working around some of the things that ChatGPT and its ilk offer is to get into the habit of drafting and redrafting things with our students. So in the humanities, for instance, in my class, often I will have students write something for homework, but then when they bring that to class, there’s in-class peer review. So they have to talk about their motivations, their hopes, what they’re aiming for with their writing, and then work to improve it with someone else right there and then. There’s in-class time for redrafting. So I’m able to go from student to student and talk to them. I’m able to sort of see everybody literally working on their stuff.
And then there’s Scissors Day, which is one of my favorite days of all. And this is when students bring in single-sided printed copies of their papers, and then they cut everything up into small paragraphs. They take those and they shuffle them like a deck of cards, and they hand it to one of their peers. And that peer has to take the deck of cards and put together a paper. Now, in all the time I’ve been doing this, I’ve had two students who put the paper back together the way it had been written. And what you get from this is the ability to talk about transitions and argument and evidence. And you get to have students, right there with things spread out on a table or all across the floor, work on wording and argument together. But outside of the humanities and the social sciences, there are other ways to do this. You could have students take a quiz, for example, but then do class review of the answers, rather than you grading them all yourself, and have time in class to rework answers in light of things that people may have discovered or surfaced in their conversations. This is all about building critical thinking skills with our students.
If you’re someone who does wanna let some GenAI into your class, but you want to think about how to phrase it and set parameters around it, Leon Furze has some wonderful recommendations with his AI Assessment Scale. And as you can see, he goes from no AI all the way to full AI. Within this website, there are all kinds of suggestions for ways to phrase the rules that you wanna set, the boundaries that you wanna set. And he also has a couple of different ways of using visual markers for this. So if this particular table doesn’t work for you, he has this too. And I like to offer students different versions so that they can use the one that makes the most sense to them. Overall, spending time with students on their process is so important to thinking about these issues. Meeting one-on-one where you can, but knowing that you can’t always, and that it is totally okay to meet with students in groups to make that more manageable for you. Ask questions and design assignments that give them the opportunity to work through their thinking, and talk to them directly about the fact that their thinking is unique, and that things like writing help us work out what we think, that putting things into words is how we distill all the ideas in our head into something we can communicate to others. For more articles on ethics and AI, I have been curating a long list over at my blog, so please feel free to go over there and see what you can see. Everything is in categories, so you can pick, say, one thing from environment and one thing from labor issues, and one thing from how large language models work. And you can read them yourselves, you can share them with students. It is a public resource. So thank you so much for listening to what I had to say about ethics and AI, and we are going to take questions, and hopefully offer some answers.
– Amazing. Well, we already have one question from Heather. “Any ideas how these can be applied for fully async online courses?”
– Yeah, well, Niya, I think that you probably have lots of suggestions, first of all. But I also think that’s one of the harder places to be able to do this, ’cause you’re not sitting, like I often am, in the room looking at my students, right? And so I’ve heard some wonderful suggestions from people, but Niya, why don’t you jump in here and offer some of your experiences.
– Yeah, so I teach almost exclusively async online courses. And one productive way is to do it in a discussion board, or several, to have that community collaboration and dialogue about it very early on in the course. So that, as Cate said, you can kind of set the stage and maybe even map out how each individual student wants to kind of approach AI. Another thing is to always offer clarity, like the examples you just showed of kind of the spectrum of AI use: if you are inviting it into your assignments, clearly explaining that to learners with icons and descriptions and definitions. So one thing I’ve done is break it up into three tiers. So you know, this assignment allows you to use AI, if you want, as a brainstorming tool. Or in other cases when learners have had to use it, then I say why they’re using it and have them reflect on that use. It depends on instructor autonomy and institutional policy and procedure, of course. But I think part of caring, and you know about this, Cate, is clarity and conversation. And so having those two things around it.
– And I think that point about clarity is a really good one, no matter what modality you are teaching in, right? One of the things that I wanna make sure my students understand is the positive reason for the choices I’m making around generative AI. And so, for instance, in my class right now, I have banned the use of generative AI. And in my syllabus, it says, “Because I am interested in your unique thoughts and the way that you express your ideas, I wanna hear from you and not from generative AI.” And that’s what I mean about positive reasons for the decisions that you have made. It’s not about forbidding something just for its own sake, but there’s a real pedagogical reason that I’m choosing to put that policy in place, and students should be able to ask us about that and know about that even without asking.
– Not yet. Can you talk about how students have responded? First of all, we’re having people say they love your affirming stance. How have learners responded to that in a world where they’re encountering generative AI in lots of other places and spaces, and this is kind of one space where they won’t be encountering it intentionally?
– Right. So my students have overwhelmingly welcomed it as a positive development. One student in my class that I’m teaching this particular semester, when he saw my AI policy in the syllabus, was like, “Oh, thank God, somebody’s being clear.” So they just wanted to know what the parameters were. They didn’t even really care about the details. They just wanted to know what they were expected to do. Students have overwhelmingly also been really grateful to get into these ethical issues, because so often, it’s being treated only as a piece of technology, sort of as if it’s not connected to these huge human questions that we really need to wrestle with. And so they’ve appreciated new information, they’ve appreciated the chance to have a conversation about it with their peers and with me, right? So I think, like you said, having the discussion board where people can talk about this, that’s a really important component of dealing with this, is making sure that there’s space for everybody to air their opinions, to ask provocative questions, and for everybody to talk around these issues together.
– Yeah, that’s so wonderful. And it is so important, as you just said, to create safe and brave spaces where any opinion, any insights, you know, can be talked about and discussed and considered critically.
– And one of the things I love is when they start asking questions of, “Okay, who’s this journalist? I’m gonna look them up, right? What’s this publication? How do I know I can trust it?” I’m like, “Great, this is all critical thinking. I love this.” So no matter which direction it goes in, it’s a really wonderful use of our time.
– Yeah, well, you are inspiring people to think bigger or broader than AI. Someone said, “I love the point around not just questioning the technology and stating your position upfront. It’s important for AI, but do you think we should be doing this for more or other technologies, and just generally?”
– I do, I do. I think that the decisions we make in the classroom should be pedagogically defensible. And so I think that students should be told why we’re doing the things we’re doing. It also helps a lot when students are resisting something, right? When they have an assignment and they’re like, “I don’t wanna do it,” to be able to actually explain, this is the reason why we’re doing this thing at this time in this semester, in this way, really takes a lot of that resistance away, and helps them see a lot of the work that goes into teaching, because they don’t often know what it takes, right? And I don’t mean that in the sense of putting responsibility on them to feel bad for us or something, but just for them to understand we really take this seriously, and we consider all these variables and come to very serious decisions about what our classroom spaces are gonna look like, no matter the modality.
– Yes, now someone had a question, oh, there’s many questions coming in. “How do you integrate AI in a course without adding more content or deviating from the topics of the class?”
– So I tend to spend time at the beginning of the term really establishing community norms and expectations, and getting everyone to get to know one another, because I find that the content is so much easier to deliver and work with when there is that sense of burgeoning community among students. So this is why I write my syllabus a certain way. This is why I borrow from Remi Kalir, and have my students annotate the syllabus and then we talk about it. So it’s not a unidirectional document, it’s an opening to conversation around what’s gonna happen in our classroom. I have community guidelines that we talk about, about how we’re gonna have conversation in our classroom. And then this AI work is part of that establishing norms and thinking about expectations. So it is clear to them that this is a space where nothing’s gonna go unconsidered, right? There’s no sense of this is just the way it’s always been, but instead, it’s like you are invited to have an opinion on all of these things. And so no matter what the AI stance is, right, they’re invited in to really have an opinion about it that they get to share. So this is, for me, part of setting up the course. I’m making sure that the entire course really functions well.
– Now, something else you said in regards to surveillance has struck someone. Brian asks, “I work in educational development with faculty across my university, and we often hear that faculty suspect their students are using AI in unapproved ways, and they want to use AI detectors to catch them or hold them accountable. Can you share some questions or ideas that I could pose to support faculty to make pedagogical decisions that are aligned with their values?”
– Hmm, well, first of all, I think that being able to pose the question just as you did there where you said, “align with your values,” right? So flipping that and saying to instructors, “What are the values that are at the core of the way that you teach? And how do we then deal with the GenAI stuff coming from that place, pivoting from that place?” Somebody who’s very, very wise once taught me that whenever you come up against something where you’re like, “I don’t know what I’m doing,” to always go back to the things that you value or make your decision from inside that perspective, right? So if you can have people identify their values to begin with, and then move from there, that helps sort of grease the wheels somewhat. I think also providing information. A lot of people don’t know that much about generative AI or about the fact that generative AI detectors don’t work. They are notorious, especially, for singling out language that is often used by second and third language users as being AI generated, when it is instead an artifact of the particular way they learned English or their continuing progress in learning English, right? So it really can reinforce some real inequities in our system. And so I think being able to do a little googling for some of those articles and providing them to people is a great way to approach this, and just sort of say, like, “I didn’t know any of this. Here’s this thing, did you know about this, right?” And just make it a genuine inquiry.
– Yeah, that kind of touches on the next question, which was about guidance for people who are working with faculty who seem to wanna reject AI but embrace surveillance as the only answer.
– Yeah.
– And I do think that information is so important there.
– Yeah, I mean all of these articles that I shared are great to share with faculty and instructors of all kinds too, and to share with staff who are working with students. I think that while we are allegedly living in an information economy, right, it’s actually not that easy to come across this stuff, to sort of trip over these things. And so you have to purposely go look, and some of the difficulty there is knowing what to look for. So that’s why, for instance, I curated my list of ethical stuff on my blog so that people don’t have to do that work. They can just go grab a bunch of information from there and use it in whatever way is helpful.
– Well, and it sounds like your learners are benefiting from that curation too, ’cause you’re providing them with all those pieces too, like critically consent-
– Right.
– Yeah, which is wonderful. Another question, “Do you think there is value in customizing AI statements to the class topic?”
– Hmm, that’s a really good question. I think whatever brings clarity and makes your position really clear to students is what students want, right? They wanna know what the boundaries are, they wanna know exactly what they can and cannot do. And so if tailoring it to the content of your class is one way to make that transparent, that’s a great idea.
– Okay: “Since learners are not using AI in your course, how do you handle suspected AI use? I’m thinking about a first draft, when you explain you want to read their thoughts, but with limited time.”
– So I spend a lot of in-class time working with students so that, for instance, once I give out the assignment sheet and we’ve talked it through, I will have them brainstorm their first ideas right then and there. And then we can talk about them, and they can talk about them with peers, right? And just making those moments, sometimes you can’t dedicate an entire class period to working on drafts, but you can spare 10 or 15 minutes, right, to make sure that it’s clear that you’re touching base, that you are really truly invested in what they think and what they produce. The other thing that, I think, helps me is that I’m someone who practices ungrading. So there’s never a moment where my students are handing something in and it’s make or break, it’s fail or pass, right? It is all about process, and it’s about getting stronger and stronger and stronger.
So this week, for example, I sat down with each of my students in turn, and I could do this in larger groups too, in a bigger class, and had a conversation with them about their first papers. And I was able to make the point there, with evidence from their papers, that they were working out what they were thinking by writing, and that this paper was not an end point, but was rather a place to launch a whole bunch of new ideas, right? So having that openness to keep revising, to keep revisiting, to branch out from something they’ve done is something I’ve built into my classes to really avoid that sense of, here’s the artifact, here’s the thing I produced. And now, based on only this, we’ve gotta decide what’s the fate of this particular learner’s work, right, or them, right? And what their standing is in your class, and it can get so much bigger than that, right? So I avoid that by practicing ungrading. And I know there are resources about ungrading on OneHE. And also, please feel free to contact me through my website, and I’d be glad to tell you all about ungrading too.
– Well, thank you. I know we are at time. We do have a couple more questions, but I wanna respect your time and everyone’s time. What you just said reminded me though about one of your key points, which was to trust learners, right? Like, you are building a space for them to be empowered, and you are trusting them and showing them that through policies and your philosophy of teaching, and all of your interactions.
– I think that we have to trust our students, and as I said before, when there are outlying situations that show that the trust has been broken, we get to deal with those outlying situations, right? But approaching this as if all my students are just rushing to do something that’s not permitted, right, is a terrible sort of relational dynamic to set up in the classroom. And so instead, being able to say, you know, like, “I trust you,” it just changes the way that all of these conversations are happening. And I really think that has to be the bedrock of what we do.
– Well, I think that’s a wonderful sentiment to end on. I wanna thank you so much for your time and your generosity, and sharing all of these ideas and resources with our community, and for offering for people to be able to contact you and for curating resources on your site that people can turn to.
– You are so welcome. Thank you for inviting me here. Thank you everybody who came, and have been asking questions and putting things into the chat. It’s been wonderful to spend time here today.
This Show & Share webinar recording is facilitated by Catherine Denial. Catherine is the Bright Distinguished Professor of American History, Chair of the History department, and Director of the Bright Institute at Knox College in Galesburg, Illinois, USA. Catherine explored the ways she and her students have analysed the ethical issues surrounding generative AI, and the way those conversations have shaped her students’ work.
Below are the key discussion points with timestamps from the recording. Hover over the video timeline to switch between chapters (desktop only). On mobile, chapter markers aren’t visible, but you can access the chapter menu from the video settings in the bottom right corner.
- 01:07 – Where to begin?
- 03:13 – Ethics of GenAI
- 09:22 – Students’ reading reflections
- 11:25 – How to scale conversations about ethics?
- 14:35 – The AI Assessment Scale
- 16:33 – Q&A
Learn more with OneHE content:
- Introduction To A Pedagogy Of Kindness with Catherine Denial – Course
- Being Transparent in Your Teaching with Emily O. Gravett – Course
- Introduction to Artificial Intelligence in Teaching and Learning with Niya Bond and Vincent Granito – Course
- Understanding AI Bias: A Chat with Courtney Plotts and Lorna Gonzalez – Free Resource
- Ethical AI Use in Assessment with Vincent Granito – Free Webinar Recording
Useful Resources:
- Ethics of GenAI (PPTX, 11.4 MB) – webinar slides with the links mentioned during the recording
- Denial, C. (2024). A Pedagogy of Kindness. University of Oklahoma Press.
- Denial, C. (August 2025). Against Generative AI. Cate Denial website – a collection of useful links about problems with, and flaws within, GenAI.
Upcoming Webinars
- 22 October 2025 – The Opposite of Cheating: Teaching for Integrity in the Age of AI with Tricia Bertram Gallant and David Rettinger
- 5 November 2025 – Generative AI in Practice: A Guided Introduction for Educators with Lew Ludwig and Todd Zakrajsek
- 19 November 2025 – Not Your Default Chatbot: Teaching Applications of Custom AI Agents with Derek Bruff
- 3 December 2025 – Supporting Students in Developing Information Literacy with Craig Gibson and Sara D. Miller
DISCUSSION
How do you approach ethical discussions around AI with your students? If this isn’t something you’ve done so far, what’s one thing from the webinar recording you would like to try?
Please share your thoughts in the comments section below.