The AI Educational Double Bind: When to Lean In—and When to Refuse

Kevin Yee

– Thank you so much. I'd like to share again that I'm from the University of Central Florida, in Orlando. Special call out to my colleague from Tampa, where I used to work. I have two jobs, and both are relevant for this talk: I'm the director of our teaching and learning center, and I'm also special assistant to the provost for AI, the campus coordinator for AI fluency efforts. So I'm going to start with the idea that there are definitely people and arguments saying that our reaction in the higher education space to AI must include leaning in, by which people mean: AI is here, it's never going away, there will always be an AI, and things have changed. The big argument is that employers are expecting AI fluency out of our students, and they haven't been great about defining what AI fluency means. So I'm going to turn to chat and ask you to start guessing: what are some of the individual elements of AI fluency? Just one at a time, and then hit enter. When we say that we want students to be fluent in AI, what does that mean exactly? What are those sub-skills? I'm watching the chat… Ability to edit. Okay, that's an idea. Prompting, the ethics, where it gets its data, good. Meaningful responses, questioning AI, like it. More about prompting, reading past the flattery… I love it. Okay, thank you. In the interest of going quickly, I'm not going to wait until we've exhausted the list, but I'm seeing a lot of good responses that line up with ours. On our campus and in our teaching center, even before AI had done much more than begin to go viral, these were our answers. One: understanding how AI works. In other words, how it was trained, the fact that it hallucinates, all those things.
Two: the ethics of it. When you should use it, when you should not, when using AI without a human in the loop harms other people. Three: choosing the right AI tools, because there are differences between them. On some campuses there's an AI tool that is protected, with data privacy; at our campus, for instance, it's Copilot. "Walled garden" is the term people use for that, meaning it doesn't phone home to the company. Yes, ethics. Someone just commented about the environmental impacts. Not in scope for today's talk, but that often comes up when we're talking about the ethics of AI as well. Four: prompt engineering. A lot of people said that in chat. Five is the one that maybe needs the most explanation. There is going to be a new AI model or tool or app that someone tells you "hey, you've got to check this out" every month for the rest of your life. We need to get to a point mentally, and our students do as well when they're in these jobs, where they don't roll their eyes at that but recognize that adaptability is now required. AI is not like the internet, where you learn how to use it once and just collect URLs after that. It's a constant learning process, and we're going to have to lean into that. And then several of you named number six as well: doing more than just creating with AI. It's also evaluating, it's adding human value, those sorts of things. So I'll give you a quick example of what we are urging our own faculty to do, which is to train students in some of those skills. We don't want to take it for granted that students will learn effective prompt engineering, or that they will learn to question AI. Our suggestion is that faculty are going to have to make this part of their courses, even in classes that are not AI-specific.
So we came up with a number of ways to use, say, a discussion board assignment, where students learn a little more about how AI works through the process of doing a small task and getting a small grade. This is one such example: they get an argument from the LLM, then they craft a rebuttal. That's the questioning-AI skill rather than just accepting the output, AI as a thought partner. We came up with a large number of those. You'll be getting these slides later, so you don't need to worry too much about grabbing the QR code or the bit.ly link. But we came up with sixty-some ways to give small discussion board assignments. I think these should be worth points, or the students won't do them. Over time, they teach students the limitations and the affordances of AI. So that's our response to AI fluency, which you saw on the pie chart at the start. Let me say a little more about that pie chart. I've labeled the left side "lean in," and AI fluency is part of that. The other half of it is co-creation, which I'll talk about in the coming slides. The left half of the pie chart is going to point in a different direction than the right half when we get there. Co-creation means recognizing that employers don't just want someone who knows how to use AI to generate an output; they may want people who are skilled at using AI from the very beginning. You're brainstorming with AI from the start. In many cases, you're writing with AI. So even though in higher ed we ask students which parts they wrote and which parts the AI wrote, an employer in an office setting is not going to demand to know whether AI helped with this TPS report. They're not going to ask. An employer is really just going to expect co-creation.
So I think it also falls to us, at some point in the college career, to teach students: go ahead and use AI. In fact, I want you to get good at using AI so that the product you're generating for your employer is more effective. Okay, that's the moment where I want to switch slides. I'm now moving into the middle of the diagram and pointing out that there have been several studies of this concept of skill erosion, or de-skilling. The short version is that after you use AI for a period of time, your unassisted ability, measured without AI, is weaker than it would have been if you had never turned to AI at all. You would have stayed stronger in your skills. And that's for trained professionals. So what we're actually worried about are students who aren't yet trained professionals using AI as a shortcut. The risk we're running is that they may never become better than the AI output. I don't know how many of you have seen the 2006 movie "Idiocracy." It's not about AI, but the thrust of the movie is that all of society has hit a plateau: people aren't getting any smarter, and in fact have gotten less smart as the decades go by. There's a real risk that if we all do complete co-creation, we won't have skilled architects. We'll just have ChatGPT-level answers for what architects can do, and society might not advance much more. That creates the need for the right side of the chart, informed refusal, especially in the higher education context. It's not just skilled medical workers, like you saw two slides ago; now we actively need students to avoid the shortcuts, because they're supposed to be training their brains. So I've got two quadrants to informed refusal.
The top one is labeled "writing without AI," meaning I'm going to advance the argument that maybe we can still assign essays to students; we just have to convince them that these shortcuts are not in their best interest. The bottom one goes in a completely different direction. In fact, you could argue that the top and bottom on the right-hand side contradict each other, which is something we could talk about in the question and answer period. So first, writing without AI: still assigning essays, but convincing students the shortcuts are not in their best interest. I forgot to point this out the last time we had this obnoxious yellow slide: these slides are moments for you to type into chat, and we'll play the guessing game. It turns out there are different reasons why students cheat. I'm going to show you a summary of the national literature in a minute, and my experience is that people have good guesses here. I'm seeing things like: they're lazy, they have no time, they're worried about grades, they don't care, lack of interest. Many of you are really on the right path. So here's that summary slide. Again, this is my summary of the national literature, and it was written before AI went viral, so it's not about AI specifically. One: only extrinsically motivated. A number of students believe "I just need the degree, the degree gets me the job, and the job will tell me what I need to know after the fact." Employers hate that attitude, but many students have it, and so they feel it's okay to cheat. Two: everyone else does it, and I'll fall behind if I don't cheat like them. Okay, I understand that one. Three, four, and five are highlighted yellow because they seem, to my mind, particularly aligned with AI. Three: they might feel it's a victimless crime to have AI generate my "Hamlet" essay.
Four: they might feel they're not going to get caught. Students are not stupid. I think they've picked up from the very beginning on the narrative that AI detectors are problematic. That doesn't mean this is uniform: I still see posts from students saying they've been accused because of a detector, and I still see some faculty demanding access to an AI detector. On our campus we don't rely on them; we don't even have access to one. But if you use one privately, it can be part of the assemblage of, quote, proof. And five: "I need to cheat to pass; I don't have enough time." Now, I'm going to combine number two and number five a little. If students feel they're not going to get caught and therefore use AI to generate that "Hamlet" essay, one ironic outcome is that number two comes back to bite the person who was more honest: it could be that AI generates a better "Hamlet" essay than the person who writes one out of their own brain. So there's another ethical overlay we need to think about. Oh, and the other thing about cheating is that these five reasons don't all have to be present. Some students cheat only because of number one, some only because of number five, some because of two and five, maybe even all five. I was giving workshops about preventing academic integrity abuses even before COVID that talked about the need to address all five of these with various strategies, and we need to find something that can do that with AI. So I'm going to suggest that the phrase at the top, "better than AI," is where we need to point students. They're also hearing headlines about job losses and layoffs; there have been multiple rounds at Amazon, for instance, of slashing a large number of jobs.
And so I think students are sensitive to the idea that AI might be taking jobs, and the solution to keeping a job, or getting one in the first place, is that you're not just handing over AI output. To be better than AI, they need to do all the things in green. I was able to come up with an argument for most of these supposed reasons why students cheat, tied to AI and the job market: they need to be actually prepared; they won't be able to keep that job without the background, and then they'd get caught; passing won't do me any good without these skills. It's a variation of "cheating is cheating yourself," which we've always said. I stopped saying it out loud because it's such a cliché, but the idea is to find ways to convince students without always just repeating that phrase. For us, the focal point is saying to students at least once: using AI to write your "Hamlet" essay without doing any thinking is like lifting weights in the gym with a forklift. The weights are being lifted, the essay is being written, but nobody is doing any heavy work, any lifting, any struggle. I could just as easily have said: the struggle is the point. That's true in the gym. If lifting weights involves no struggle, you don't build any muscles. Same thing with their brains. We need to find ways to convince them of that. So we came up with another set of fifty or sixty small things you can do in your class, scattered throughout the semester, that all move in the same direction: giving students the impression that the shortcut is hurting them. Obviously there are changes you can make on the syllabus, and that doesn't just mean making assignments that are hard for AI to complete.
It also means having a statement way up front, on the top page, about why this class is going to help you land and keep a job: we're going to train you to think, and I only want you to use AI in the moments where I tell you to use AI. So it's in the syllabus, it's in the way you give the assignment, it's in the reminders when the assignment prompt goes out, it's in the interactions you have when you hand back feedback, and so on, throughout the semester. I don't know that I ever do all fifty of these, but find lots of different ways to blanket the weeks of the semester with the message that the shortcuts are harming you and your future. As for the forklift metaphor, I don't know where it came from; I've been using it for two years. It's possible I heard it from somewhere else. As far as I know it didn't come from "The Digital Delusion," but it could be that the person I heard it from took it from Horvath. Both of these books are open source; you can share them with folks at your institution. If you click on any of these entries, you'll see they are quite small chapters that suggest something to do, a way to do it, and why you would do it. So let me turn to the last quadrant, AI friction, by which I mean we actually want to create friction so that AI cannot complete the assignment for the student. This is what I mean when I say that in some ways the top quadrant and the bottom one are in contradiction: the top one is about trusting students; the bottom one says, I don't trust you. I'm not entirely sure how it looks to do both, but I get a lot of faculty requests for friction. Friction means two things: deliverables that AI cannot deliver for you today, and there are only a few of those, and AI-resilient assignment prompts.
Let me start with the resilient assignment prompts. If you work in online education, your instructional designer will probably use the phrase "authentic assessment." It means you make the assignment so interesting, so personalized, so filled with smaller tasks and reflection, that students are not as tempted to cheat. Before you say it: it is true that, especially in a fully asynchronous online class, you can't really stop them. The only hope you have is to make the assignment genuinely interesting. Well, I think I have the slide here: they go to a local business, they identify a challenge that business is encountering, they propose some solutions. This is interesting stuff, especially for a business class. Could you put all of this into AI and have it create an answer? You could. So one of the other ideas is simply to grade the output, because most of the time AI gives bad answers to an authentic assessment. Yes, the slides will be available afterwards; I think she said in two weeks. They're also doing a final presentation. If you're fortunate enough to have a face-to-face class, that makes people nervous about using AI to do everything, because they still have to give you a final presentation. And there's always reflection in this kind of authentic assessment. Yes, I know AI can fake a reflection as well, but coupled with the convincing angle we discussed a minute ago, maybe this will work. Here's one that is probably my favorite: diffuse assignment prompts. That means it's uphill for students to just take what you've given them in the LMS and paste it directly into ChatGPT. At least don't make it easy. I'll give you an example on the next slide. The assignment is to create an improvement plan for a business client.
And the way they do it is by looking at these three lectures and templates and readings and combining them into a project that will then face the client. A variation of this, and I don't think I have the slide, is to do it within your LMS: tell them to watch the video in module four and then apply the principles of that video to the discussion board posts in module five. They can go get the video and upload it to AI; they can go get all the discussion board posts and upload those to AI. It's just more uphill. And you have help: you could paste your current assignment into your own AI of choice and ask, how do I make this more resilient? How do I make it difficult for AI to just complete the assignment for the student? We don't have time for the last interactive, so I'll go past it. Alternative deliverables are things that AI cannot deliver today. Capstone exams: I know these are common in other countries, less common in the United States. If you're in an industry like nursing, which has the NCLEX as an accreditation exam, you're already set. But for those of us who teach history or social studies, a lot of times that's not there, and we need to start having conversations about holding students accountable for their learning across four years. Oral exams, or, for online classes, recorded video. The recorded video has a shelf life; pretty soon the AI avatars can really take over there. And yes, AI can write the script, but because the presentation carries a grade, students practice so much it's almost as if they wrote it. Blue books: I get a lot of faculty interested in coming back to blue books in face-to-face classes, but they're not completely equitable. Not every student has good handwriting, some were taught essay writing in ways that make handwriting slow, and some are not neurotypical.
And finally, I want to say something about curriculum redesign. I think we have to do every part of this chart: lean in on the left, lean away on the right. But when? That needs to be a conversation within every discipline, within every degree program. Figure out where the fluency training comes; maybe it comes in multiple places. Personally, I would not do co-creation until closer to the end, because you want students to know the fundamentals first. So a lot of the lean-away-from-AI material belongs in the middle of the curriculum, in my opinion. With that, I'm going to thank you for paying attention for 21 minutes. We have just under nine minutes left for questions. I'm going to leave the screen share on for a second before I look at the Q&A. Dasha, do we have anything in the Q&A?
– Yes. So we’ve got two questions in the Q&A. Would you like me to read them or would you like to look at them and read them out loud for the purposes of the recording?
– Go ahead and read them out loud if you wouldn’t mind.
– So the first question is, do you have syllabus examples that could be included on AI?
– We do. That book is called "Coach for the Approach"; it was one of the QR codes on screen. In it we suggest ways to adjust the syllabus. We don't write the words for you, but we say what sorts of things to include in that opening paragraph about convincing students about the shortcuts. In a couple of cases we might actually have some examples directly there. So, do we have that second question?
– Okay. So the second question is, thoughts about faculty using AI tools like those in Canvas or Top Hat to create assessment questions?
– Do you have another 60 minutes? All sorts of thoughts. Number one, faculty might risk de-skilling themselves, which we talked about earlier, although that's not nearly the ethical problem that students using AI as a shortcut is. But there are the optics to consider: if you use AI as a shortcut and the students don't get to, that's a problem. There was already one case where students sued a university, either Northeastern or Northwestern, over that exact scenario: there was AI grading going on, but the students were not allowed to use AI themselves. And then, when we talk about ethics, one of the things I like to point out is that I don't think AI is a good idea for even the first draft of things where a human's future is at stake: letters of recommendation, peer evaluations, peer reviews, and grading. Part of that is based on what I know about how humans react. The very first time any of us uses ChatGPT to grade, we're definitely going to read everything it says and be fastidious about regrading everything; we saved no time. But over time we'll start spot checking, then we'll start trusting that it's giving good answers, and eventually that is going to result in some miscarriages of justice. So I am a little worried about using AI without humans being strongly in the loop. People talk about "human in the loop," but there are problems with having the human review happen after the AI has already created something. It looks so correct, it feels so confident, that it's uphill work to stay suspicious of an AI output. If it generates test questions for me to give to students, I have to fight against how correct they look. At the same time, test questions generated by AI, as long as I'm careful, are the ultimate Chegg killer or Course Hero killer for the problem of students sharing my previous test questions.
– I'm seeing a question from Pete about cognitive surrender: do you know this paper, are these ideas similar? Yes. When I give the ten-hour version of this talk, I have a slide about, it's not Ireland, it's Wales. There's a road sign in Wales in English and in Welsh, and nobody on the team speaks Welsh, so they send the text off to a translating company, it comes back in Welsh, and they put it on the sign and hammer it into the ground. Only then does someone tell them that the Welsh part says the translating team is out of the office and will get back to you on Monday. It's an example of cognitive surrender: there was no suspicion that what came back in Welsh was anything other than correct. AI is very similar. We're swimming in temptations to surrender to it cognitively. So yes, I believe those are similar concepts.
– Okay, is it okay if we tackle another question from the Q&A? The question is: how does this affect rubric presentation? How can students appeal when they have a rubric?
– How can students appeal? Is that what you said? When they have a rubric?
– Yes, appeal.
– So at least on our campus, and most campuses I'm familiar with, there's always an appeal process, and the appeal process will ask things like: what's on the syllabus, what is the grading rubric, et cetera. If AI was used for grading and it doesn't give you a rubric-based output, then the professor is on much shakier ground. That implies the professor would need to give very specific prompts requesting specific grades for individual parts of the rubric, with justifications for each one. This is also the sort of thing where AI output generally looks like AI slop, so the students will suspect, just as we do when we see AI output, that the professor was not doing the grading. And when we talked about grading, I realized I forgot one very important thing: it changes the complete nature of the relationship between students and faculty. It makes the entire thing transactional. The students lose any sense that we're interested in their learning at all. Particularly if you've got a class small enough to show individual attention, there are probably lots of ways to convince students not to use AI just by forging connection with them. Where it gets difficult is large asynchronous online classes. I'm seeing in chat: the ethical answer is to never use AI… it can't be trusted… and you're mentioning the environment again. I've seen some compelling evidence that the environmental argument makes the most sense when you think about where data centers are going, not where we are today. One study said that worldwide, all the data centers built since 2022 amount to the equivalent of one new city in the middle of the Midwest. Actually, I think it was a New York-sized city, so not insignificant, but that's worldwide. What's more worrying is the never-ending demand for bigger, better, more, which is true. The water side of the equation is overblown, though.
More water is used in the United States on golf courses than on AI data centers, and a lot more is used on agriculture, although agriculture feeds us. So I'm a little suspicious of that part of the argument. To the point about data centers and where people live, though: they are creating social injustice as well. People are paying enhanced electricity bills for electricity that's going all over the country. So there's a NIMBY aspect to this that we shouldn't forget about.
– Definitely. I’ve got another question in the Q&A. I’m worried about having the time to become an AI expert to be able to create assignments that don’t encourage AI dependence. Can you comment on that, please?
– There's a slide, I think two slides back: you're not alone on this. Literally ask AI to help you create AI-resilient assignments, or AI-resistant ones, if you prefer that word. I will tell you that you need to exercise some caution with AI's advice here: for some of what it suggests, a student can still use AI to generate an output. It is difficult to get comprehensive answers. Just as we've built two previous open source books on this, we are building one now that goes in several directions: part of it is about co-creation, again with small examples, and part of it is about friction, resilient assignments especially. We are assembling more examples of those over the next month. So basically, contact me; I'll type my name and my email address into the chat here. Make an appointment request to contact me in a month, and I'll have a rough draft of those specific examples of resilient assignments.
– Brilliant. Thank you so much, Kevin. That’s very kind and generous of you. I’m mindful of people’s time, but I also want to acknowledge that we still have some questions in the Q&A. Have you got five more minutes to go through more of them, questions?
– I have 20 more minutes, but I think you should feel free to release anyone who doesn't want to stay for more questions.
– Yes. So, I would like to thank everyone for joining us today. If you would like to stay for another 5-10 minutes as we go through the rest of the questions in the Q&A, please feel free to do so. If you've got other commitments, it's great that you've been here with us today.
– Strategizing students' self-interest in the context of markets, employers, jobs. Additional…
– So the question is asking to what extent it is the educator’s job to be teaching all the AI and employability skills.
– Almost no one I know wishes this upon us educators, that we now have to care about this. I come from a humanities background, and most of the folks from my background would say: I don't want to play policeman, and I don't want to do job preparation. The role of the academy is one of deep introspection, analysis, and exploration. I don't disagree with any of that. But, to use some German here, I'm looking at realpolitik: we are stuck with this. If I ignore the AI fluency component and the co-creation side of things, then I'm worried that students who graduate with a humanities degree or a history degree are not going to be able to keep jobs if they can't show how they use AI to be more productive. I think that's the direction employers are going, more so than layoffs: they're expecting greater productivity out of the existing people and the people they onboard, because if you move at the speed of a 1996 worker, that's not going to work out. So the answer to the question is that it's forced upon us to think about students' future employability, even though historically that was not part of what we did in most of the academy. I get it; it's a context change for who we are and what we signed up for. In every workshop I do with faculty, there's usually someone who says, thank God I'm retiring in a year or two. I understand that sentiment too, but this is the reality we have at the moment, and if we don't do that leaning in, we run the risk of our institutions looking like they're graduating people who aren't ready.
– Great. Thank you again. So, one more question. If requiring students to provide oral responses to questions cuts back on over-reliance on AI, should all class assignments be audio instead of written assignments?
– It's an interesting thought. For the face-to-face classroom, I have taken to doing oral exams as a replacement for the final exam: just an oral interview, about ten minutes. Our LMS, which is Canvas, makes it very easy for me to populate a bunch of time slots; a student picks one, it disappears from the list, and they just show up at my office during finals week at the time they chose. The problem is that doing that many exams face to face is difficult unless you've got a vanishingly small class. Now, about the recorded-at-home video: I mentioned briefly that if you make the presentation part of the grade, students will rehearse so much that some of it will have sunk in, even if they used AI to generate the text in the first place. I have recently seen a piece of software, not yet commercial, where students are required to retype the question before they can click to submit their answer; then they type their answer, which has to have a minimum number of words, and then they also have to speak their answer out loud. The system records how long each step took, words per minute, and so forth. So if they are using AI to generate these things, there's going to be a real trail, not a paper trail but a Hansel-and-Gretel trail, of indications that they used AI. I think AI tools are going to start catching up on the academic integrity side; we're just in a weird middle place right now where they're not there yet.
– Great, thank you. One last question from the Q&A. How accomplished does a student need to be with AI? Isn’t it pretty simple to use? Why devote significant time to leaning in if it’s a simple tool whose use vastly undercuts skills and cognitive growth?
– So a student, today’s college student in the United States anyway, who has not received any formal training on AI treats it like Google. They type in a half-baked phrase and don’t give it any context. You get better results if you give it a role and specific requirements: length, duration, level of specificity, level of sophistication. The responses are all better when they do that, and they’re gonna need that level of specificity in their jobs later on. So that’s number one: they do need to prompt it better. Number two, AI does not mean one thing. It never did, actually. It also meant expert systems and computer vision. Large language models and generative AI are just one corner of the AI universe. But even that corner is expanding, right? We’ve got agentic browsers. We’ve got agents, desktop agents that can perform things for you. So the accountant of the future might be expected to use an AI resident right there on their desktop to perform certain functions in Excel, just to get the job done faster. The students need to be exposed to those things. I didn’t turn them on, but I do have my wearables nearby: AI wristbands that listen to everything I say and generate summaries and a transcript, and AI wearable glasses where we can look up information at a moment’s notice. AI is a moving target, and the LLMs are only the beginning. And so all the things we had under AI fluency are part of this too: the ethics of using it, when to use it, when not to use it. So the big student-facing message is: don’t use it to replace your thinking. It’s a thought partner. In fact, it should maybe evaluate your thinking rather than generate an initial draft. That’s the safest way to go.
– Thank you. Thank you, Kevin. I think we’ve covered all the questions in the Q&A. And I’m going to post the link to our upcoming webinars. Feel free to save the link. We keep updating our webinars. The next webinar is with Flower Darby on the 28th of April, where she’ll be talking about online teaching and her new book.
– Her new book about joy.
– Yeah. “Joyful Online Teaching”. So, thank you very much, everyone, for spending this last hour with us. Thank you, Kevin, for your energy and your time. Lots of interesting tips and lots of things to think about. Thanks for sharing the open educational box that you created with your colleagues. We’ll be sharing the slides and the recording in the next two weeks. So, thank you.
AI is here to stay, and employers are seeking graduates who know how and when to use it. But higher education is not here only to teach job skills: while we train brains to think, students find AI shortcuts extremely difficult to resist. That means we must find ways to prevent cognitive offloading across large chunks of the curriculum, while also finding space for AI fluency. In this webinar recording, Kevin Yee from the University of Central Florida, USA, shares practical strategies such as crafting AI-resilient assessments and convincing students that AI shortcuts will endanger their future careers. Kevin Yee is Special Assistant to the Provost for Artificial Intelligence and Director of the Faculty Center for Teaching and Learning at the University of Central Florida (UCF), USA.
Download the webinar slides (PPTX, 18.2 MB)
Useful publications:
- Yee, K., Uttich, L., Giltner, E., & Bojanowski, A. (2025). Coach for the approach: The educator’s new role in the age of AI. University of Central Florida STARS.
- Yee, K., Uttich, L., Main, E., & Giltner, E. (2024). AI hacks for educators. University of Central Florida STARS.
- Yee, K., Whittington, K., Doggette, E., & Uttich, L. (2023). ChatGPT assignments to use in your classroom today. University of Central Florida STARS.
Suggested OneHE content to explore:
50+ AI Hacks for Educators: An Interview with Kevin Yee – short interview
AI Boundaries: Setting the Rules of Engagement for Your Classroom – webinar recording
DISCUSSION
Which concept or strategy from the webinar challenged your current approach the most, and how could you realistically implement it in your teaching?
Please share your thoughts in the comments section below.