Using AI to Multiply Your Teaching and Time

Graham Clay

– Well, welcome, everyone. We are thrilled to be here today with Graham Clay to talk about using AI to multiply your teaching and time. We have a Padlet for today’s session that I’ve shared here in the chat. And for those of you watching afterwards, it’ll be shared in the community so you can participate no matter when you’re watching this webinar. And Graham, I really am just gonna turn it over to you, let you introduce yourself, and talk about all the interesting things you’re gonna share with us.
– Okay. So as you already know, I’m Graham Clay. My day job, as it were, is assistant professor of philosophy, teaching track, at UNC Chapel Hill. At night, though, I work on AutomatED (with the capital ED) at automatedteach.com. It’s a newsletter and blog, depending on how you wanna experience it, whether you want it in your email or you wanna browse. And what it’s about is integrating tech and AI in pedagogy. I was experimenting with these things myself for a while and decided to start writing about them so that others could see what I was up to and so that I could learn from what they were up to. And now it’s morphed into its own thing. I release pieces every week and I also consult with professors and centers for teaching and learning. So today’s topic is related to my area of expertise, which is using AI in particular. And my focus today is on multiplying your teaching and your time. So I’m gonna have a pedagogy component and a productivity component, or at least that’s how I’m gonna conceptualize it.
Okay, so here’s the big picture and kind of the frame for the whole presentation. Generative AI is already massively powerful. So when put in the right context with the right information and the right prompting, it can already do a lot of the things that are typical of our fields, at least at the level that our students are operating at. So it’s made certain tasks easier for both our students and for us. Whether this is good or bad depends on the context, including how we structure it. It might be bad, for instance, if students can shortcut some of their essays with AI, depending on the details. It might be good if we can respond to emails faster using it. So whether it’s good or bad that it’s sped up the rate at which we and our students can complete tasks depends on the context itself and how we set that context up, how we set ourselves or our students up for success. So how do we respond? How do we set up the context so that the learning objectives are achieved in the case of the students, or so that our pedagogical objectives are achieved in our own cases? That’s the big picture for today’s talk. I’m gonna have two sections. The first is focusing on personalization. Generative AI has made it easier for personalization to occur, so we can personalize more of our students’ learning experiences. And I’ll explain a bit more about what I mean by that in a minute. And it’s also made things easier for us.
So I’m gonna talk about some tasks that, in particular, AI makes easier for us to complete as educators. I’m gonna kind of zoom through this. So a bit of a side note on my pace: I’m gonna go quickly because this is being recorded and the recording will be made available later. You could always screen capture this or however you wanna look back at things. And at the end, we’ll have a bit of discussion if there’s some folks with questions. But I’m gonna cover a lot of ground quickly ’cause I’m assuming you’re gonna be able to go back if you want to.

Okay, so as I said, the first section’s about personalization, and how I’m gonna conceptualize personalization in this context is as a way to create role players in our students’ educational experiences. So our students are in our classes and there’s different people in their classes with different roles. There’s you, maybe, the teacher. There’s maybe someone with an intervention helping a particular student with some specific thing they need help with. There’s their peers, other students that are going through the experience together with them. And AI can help personalize in the sense that it can help fulfill these roles, maybe when otherwise they wouldn’t be fulfilled. So in class, we can have AI fulfilling the role of personalizer. The tools I have in mind, to make this a little more specific, are, on the first level, the generic large language models: Microsoft Copilot, ChatGPT, Claude, Google Gemini. These are the main ones. So Microsoft Copilot is the kind of counterpart to Google Gemini. And while ChatGPT and Claude come from independent companies, Copilot itself runs on OpenAI’s models, a version of GPT that Microsoft has made unique to their environment. These all have free tiers, depending on whether you pay for a more powerful version or not, and depending on, obviously, your plan. So, for instance, certain versions of Copilot might require your institution to have Microsoft 365. So what do these tools do? The big picture idea is that they radically lower the cost of producing competent text. Their main focus, historically, has been on text, but new frontiers are opening up: images, video, and audio are becoming more and more viable, so these tools are more and more capable of producing the right sort of images too. And that’s what I mean by competent: basically, what someone who’s an expert in the relevant subject matter would expect given the context provided by the prompt.
So the user puts in a prompt, the large language model produces an output. And as these tools have gotten more and more effective, they’ve been more and more effective at producing competent texts: texts that an expert judge in the relevant field would deem to be high quality or satisfactory for the prompt given. So why does this matter? Well, they can do things like develop good feedback, like peers do. So suppose you have students who are assigned to different groups for in-class activities. As we all know, different students have different strengths, and different groups have different strengths because of the different students that are in them. So some students might be shy; they might be reticent to help their teammates, their group mates. Other students might just be better at the subject. Maybe they have prior experience or other skills that set them up for success in that class. So the groups that you have in your classes might vary in the quality of the peer feedback that they can provide. Enter large language models. Large language models can simulate a good peer, maybe for students who wouldn’t otherwise have one. And this could go on in class and also, by the way, at home, where maybe they don’t have any access to peers at all because they’re not in the classroom anymore. Large language models can also duplicate or replicate or create, as the case may be, tutors. So if you don’t have tutors in your educational context, or teaching assistants, say, they can provide that sort of role. They can play the role of not just a peer, not just someone who’s helping students from the perspective of a fellow taker of the class, but someone who already has sufficient expertise in the class to have better judgment over the content covered. And they can also simulate subject matter relevant roles.
So I’ll give an example of this in a moment. Before I get into that though, I kinda wanna take a step back and talk a bit about the tools themselves, because I want to give you some work that you can go pursue on your own after this, and I want you to have a framework for understanding the option space for the tools. So I’ll get into the use cases again in a moment. This is a table of the tools. On the left side, we have the tool. In the middle, the amount of customization that you can apply to the tool. And on the right side, we have a column that indicates how they can be shared, so how you can disseminate the tools. On the first row, we have the ones I mentioned before. These are kind of the stock options. These are the publicly available large language models that have different tiers, but broadly, they have these same features.
You can customize them through conversational context. So in other words, a student could communicate with ChatGPT, begin to develop a context from a conversation, maybe paste in information, just type in their own thoughts or whatever. They can also upload some files maybe, some information that’s relevant to the context. And through that information that’s provided, ChatGPT will be better able to customize its responses to their needs. These can’t really be shared, though. You can link to a chat; the student can, say, send the instructor a link to a chat to show, “This is what my conversation with ChatGPT was.” But you can’t share the conversation itself with other users, other people who wanna interact with ChatGPT, say. This is why OpenAI, that’s the company that makes ChatGPT, has introduced a next level, which is called custom GPTs. Basically, these are like prebuilt conversations. So you can customize them as the professor, for instance, in advance, before your students interact with them, by giving them instructions and file uploads, and you can make them talk to other programs as well. That’s a pretty advanced use case that I won’t mention beyond this. You can also share them via hyperlink. So you can keep them perfectly private to an individual, you can share them with specific other individuals, or you can make them open to the whole world through the GPT store. It’s kind of analogous to Google Docs. You can keep a Google Doc private, you can share it with specific recipients, or you can share it with anybody who has access to the link whatsoever.
So the advantage of the custom GPT is that you can pre-build it with the sort of context that you want it to have. I’ll mention a use case in a minute where this might be particularly relevant, but in the big picture, this enables you to frame future conversations that maybe your students have with that large language model. This framing enables you to, in a sense, put guardrails and expertise into the custom GPT and thereby improve your students’ educational experience. So for instance, maybe you think ChatGPT natively doesn’t produce very good outputs relevant to your subject matter, but with some prompting and some background information, it gets a lot closer to the mark. Well, that’s what custom GPTs are for. You can structure their future interactions by training them in advance on what good outputs look like. Google’s NotebookLM is a kind of more limited version of this. So you can’t share these across your class, say, and your students can’t share them across a group, but each of them can build one themselves. So they can, for instance, create one that helps them study, or they could create one that helps them review some of the content from a given part of the class or a module. They can upload their notes, hence the name NotebookLM, and the large language model can parse through those and help them analyze, summarize, and glean information from them. And then finally, the last one I have on the list is Microsoft’s Copilots. These are kind of like Microsoft’s internal versions of custom GPTs. So they’re working on this. It’s something called Copilot Studio. It hasn’t been publicized that much, but it is available to be used. If your institution’s a Microsoft institution, you might wanna check that out. In a sense, it’s kind of like a sandbox version of OpenAI’s custom GPTs. Same setup: you can share them across your community, you can even share them publicly if you set them right, you can give them instructions in advance, and you can upload files to them, and that’ll structure their future interactions.
So there’s kind of a spectrum here. We start at the beginning with the free large language models. They have limitations, but they’re more accessible. As you go to the more advanced models, you might have to pay. You might have to have an institutional subscription. I’m trying to cover all the bases here, so I’m mentioning a range of options. Custom GPTs, for instance, cost $20 a month; that’s the ChatGPT Plus plan, which gives general access to that sort of functionality. So for instance, if you wanted your students to use your custom GPTs, you would need to get them all to purchase the $20 a month plan or otherwise subsidize it through some funding at your institution. The tools in the top row, meanwhile, are free. Okay, so what are some examples? I said I was gonna give you some use cases. Let me give you a few. So maybe in a political science course where students are learning about international diplomacy, the professor could create a custom GPT that is pre-programmed with a particular nation state’s policies, history, and viewpoints.
So then it could simulate maybe some leader from that country at that particular time. Students could be assigned roles as diplomats from different countries, at that point in time in history that you’re teaching them about. And then maybe they would need to negotiate with this simulated leader of that country at that time on a global issue, so that they could better understand someone with that sort of perspective. So this is an example of a role-playing kind of task that you can put the custom GPT to that might make the subject matter real and maybe make the learning experience around it much more memorable. Another example might be a tutor or a teaching assistant. So here’s one that I’ve made, actually, for this semester. I have a class, Philosophy 170. In philosophy, there’s a lot of focus on arguments, which are just ways to justify claims; that’s what we call them in philosophy. And I created an argument assistant. So basically, a teaching assistant that helps students with their arguments: it helps them create them, criticize them, analyze them, that sort of thing. So the central subject matter of the class is arguments, looking for them in other people’s texts, creating them ourselves. And this custom GPT helps my students do it. So it kind of multiplies me across their learning, so that they can interact with a version of me. It’s not as good as me, but it’s still pretty good, even if I’m asleep or can’t talk to 40 of them at once, for instance.
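To make the role-play idea concrete in code: custom GPTs themselves are built without any programming in the ChatGPT interface, but the same framing can be sketched as a system prompt sent through OpenAI’s Python API. Here is a minimal sketch of the diplomat role-play just described, under the assumption that you were replicating it via the API; the prompt wording, function name, and briefing placeholder are illustrative, not Graham’s actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Plays the same framing role as a custom GPT's "instructions" field,
# but supplied as a system prompt via the API. Details are hypothetical.
DIPLOMAT_INSTRUCTIONS = """\
You are role-playing the leader of a particular nation state at a
particular point in history. Stay in character. Ground every response
in the policies, history, and viewpoints in the briefing below.
Negotiate as a real diplomat would; do not simply concede points.

Briefing:
[paste the nation state's policies, history, and viewpoints here]
"""

def negotiate(student_message: str, history: list[dict]) -> str:
    """Run one turn of the student-diplomat negotiation."""
    messages = [{"role": "system", "content": DIPLOMAT_INSTRUCTIONS}]
    messages += history  # earlier turns keep the role-play coherent
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content
```

The design point is the same one Graham makes about custom GPTs: the guardrails and expertise live in the pre-built instructions, so every student conversation starts from the frame the instructor set.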
Okay, next section: productivity. As I said, I was gonna go quickly, and it’s that time of the year, so I’m gonna focus on assessment. I know many of us are grading. I’ve got 30 papers to do today and tomorrow, so I gotta speed it up. So the tools to try here are the same as the ones I mentioned before, but the more advanced models will do better if you’re asking more of them. And you might wanna consider more secure models depending on how you use them for assessment. So you might use Copilots if you’re in the Microsoft ecosystem. There’s a lot of ways in which you could use these AI tools for grading. And this is a big thing that I really wanna emphasize: you can have a ton of different perspectives on how much they should have access to. There are legitimate positions on the spectrum, from “it has access to nothing at all about my students” to “give it my students’ papers,” suitably anonymized maybe. And there’s a ton of positions in between. However, whatever your position is, there’s ways in which you can use large language models to help you assess. So for instance, you could give the large language model your assessment of the student’s work if it doesn’t have any student data in it. And you could say, “Listen, I want help making this more psychologically motivating. Make this more friendly.” Maybe you write down really harsh notes, I’ll talk about this in a minute, and you want them to be converted into something that a student would receive better. This is a case where you’re not sharing anything sensitive with the large language model, but it plays a role in reducing the friction of producing psychologically effective and pedagogically impactful feedback. The other end of the spectrum, obviously, would be giving it the student work itself. Much more complex to make this work and do it ethically.
So I’m not gonna touch on it as much, but there are pathways there. They just require a lot more caution, preparation, maybe student consent, maybe disclosure, transparency, and so on. I’ll get into that at the end, if I have time. But broadly, however you approach these large language models for assessment, you need to provide as much information as you can for them to produce good outputs. This is a general rule about large language models and generative AI. A lot of people prompt them with very limited information and they’re frustrated at the quality of the outputs. That’s just like prompting a human with no information; they don’t know what to say to you. It’s the same thing. So in the case of assessment, you really need to convey what it is you’re trying to get, and that’ll result in much better outputs. So I’ll just give one example from my own practice. Like I said, I’m in the midst of grading right now, and I wanted to create something that’s basically like a super secretary. I don’t know if this is the right term, but basically, someone who can take my disorganized thoughts on student work, maybe jotted down quickly (maybe they’re good thoughts, but they’re just not packaged properly), and put them in a neat package. So not the student work, but my own thoughts, packaged well, okay? And so I made some custom GPTs that help me do this, and I found that they help me grade two to three times faster.
So when I’m grading, like, one-and-a-half-page reading responses, I’m two or three times faster. Before, they would take about 15 minutes per reading response, and I can do ’em in five now. And with papers, which is what I’m working on now, normally they would take me 30 minutes, and I can do ’em in about 15. So speed is nice: I can get through more quickly, I can have more office hours so I can meet with students to give them more personalized feedback or preparation for my exams, or I can take a break, whatever suits me. One thing that’s also cool about them is that they enable me to give students substantial customized feedback. Obviously, the more feedback you give that’s custom to the student, the more time it takes. Generic feedback you can write more of, but it’s not customized to the student’s needs. So you can maybe copy-paste in generic feedback because you’ve seen the same sort of problem repeatedly and you don’t need to type in something specific to that student. But the nice thing about these custom GPTs is I can produce more customized feedback faster. Generally, that’s better, all else equal, because it’s more responsive to that specific student’s needs. And crucially, as I foreshadowed, they don’t have any access to student data. So there’s no privacy issues here, no need to get student consent, nothing. I’ll talk a bit more about that at the end. So here’s an example. What the custom GPTs I build do is take strengths and weaknesses that I list, and I just type ’em out as fast as I can. Sometimes a lot sloppier than this; I made a nicer version for you guys ’cause I didn’t want you to see my misspellings and things like that. But I make a list of strengths and weaknesses that I just bang out in a Word doc as I have the student submission on another screen. So once I’ve got these rough lists of strengths and weaknesses, maybe relative to a rubric (it depends on how I wanna approach a given assignment, but I could mention, for instance, parts of a rubric), then what it does is it converts them into something like this: the sort of thing that I would write if I had more time and better packaging, better presentation of the same content.
So this is the same content from the prior screenshot. I probably didn’t give you enough time to read it, but it’s packaged in the form of a more professional, more digestible, friendlier bit of feedback. It has a bit of a sign-off. It says come to office hours if you’d like to discuss further. It says my name, and it does flag that I used GPT-4, but I say, “Look, this was just based on my notes,” which is true. “Didn’t provide any of your work to it.” It’s just a way for me to take my rough feedback and convert it into a form that’s more digestible, more effective at conveying that information to the student. So that’s why I call it a super secretary. There’s no content knowledge required, just the ability to organize information provided by me, the subject matter expert, in a way that’s more useful, more impactful for my students. So if you wanna see more about the details of how to do this, I don’t have time in this show and share to show you; it would take probably an hour to talk through all the deets. You can go to this piece that I just released on AutomatED at automatedteach.com. It’s right near the top, you’ll see it: “How I Grade Two to Three Times Faster with ChatGPT.” And you can see all the nitty gritty and you can just copy-paste all my prompts. It’s all built in there. And I give you a structure for your prompts so that you can plug in whatever your assignments are and get similar results. So again, the beauty of this is it speeds things up. That’s nice; we are all out of time at the end of the semester. It doesn’t use any student data, but it allows us to customize our feedback without some of the costs that normally come with that. Namely, it’s normally super costly to produce the customized feedback and also package it, make it all a cohesive whole.
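Graham’s actual prompts and setup are in the AutomatED piece just mentioned; purely as a sketch of the same “super secretary” pattern, here is what it could look like reimplemented through OpenAI’s Python API instead of a custom GPT. The prompt text, function name, and disclosure wording are illustrative assumptions, not Graham’s exact prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SECRETARY_PROMPT = """\
You are a grading secretary for a university course. You will receive an
instructor's rough notes: a list of strengths and a list of weaknesses
for one student submission. You never see the submission itself.
Rewrite the notes as friendly, professional, well-organized feedback
addressed to the student. Keep every substantive point; invent nothing.
End by inviting the student to office hours.
"""

def polish_feedback(strengths: list[str], weaknesses: list[str]) -> str:
    """Turn rough grading notes into packaged feedback. Only the
    instructor's notes are sent to the model, never student work."""
    notes = "STRENGTHS:\n" + "\n".join(f"- {s}" for s in strengths)
    notes += "\n\nWEAKNESSES:\n" + "\n".join(f"- {w}" for w in weaknesses)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SECRETARY_PROMPT},
            {"role": "user", "content": notes},
        ],
    )
    disclosure = ("\n\nNote: I drafted this feedback with help from GPT-4, "
                  "based only on my own notes; none of your work was shared "
                  "with it.")
    return response.choices[0].message.content + disclosure
```

The key property, as Graham emphasizes, is that the model receives the instructor’s notes alone, so no student data is ever shared with the tool.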
So I can bang out the customized feedback and it gets synthesized and produced into a nice whole for my students. Okay, so now I’m gonna conclude and we’re gonna pivot to discussion. Let me wrap this up quickly with some broad ethical considerations that I’ve adverted to briefly but wanna flag explicitly at the end here. So there’s a lot of concerns, ethical concerns, legal concerns depending on where you’re at in the world, about the data being used for these AI tools. There’s inbound concerns: concerns about the ways in which our data is being used to create or produce or train the AI tools. There’s outbound concerns too: concerns about how the student data that’s interacting with the tool is being secured and things like that. So it’s, like, how did our data get to the AI tool, and then what does the AI tool do with it, how leaky is it, that sort of thing. Okay, so there’s a lot of options. One option is to retreat. You could, you know, not use AI to handle any sort of student information at all. You could go to a custom GPT strategy like I just mentioned. You could go even more minimal than that and completely back out. It’s pretty costly though, ’cause then you’re not getting the benefits. You could limit your use to certain safe categories.
So in this case, I don’t quote anything from my students’ papers when I use my custom GPTs. I might wanna just keep it entirely to my own content. That’s a strategy there. You could change the consent paradigm. So maybe you wanna run the student submissions through the tools themselves, like provide the full student assignment to ChatGPT, say. There you would need to get consent, generally explicit, written consent, at least in the US, so you would need to somehow streamline the process of getting student consent to do that sort of thing. You could do that. Or you could anonymize or pseudonymize the student submission. That would be, broadly, removing anything that makes it identifiable to a specific student. There’s lots of details there, but broadly it’s keeping a lot of the content in there while cleaving it from the linkage to this particular student. You could use AI within your IT ecosystem. So that might mean you use Microsoft Copilot with the data protection rather than using ChatGPT. You might use Google Gemini on your school account if your school uses Google Workspace. Or you might combine these; it might depend on the case. So you might do custom GPTs like I do, but then when you have more sensitive use cases, you pivot to your institutional option. And I think this should be paired with your syllabus policies.
So you should think about, you know, how are my students using these? How am I using them? And you can be permissive or restrictive. But you wanna cover your students’ use and your own use, and you wanna be transparent about where you’re at on the spectrum. You wanna be all out in the open. So when I use it, even as the secretary, the powerful super secretary, I’m very transparent with my students that that’s what I’m doing, and I’m very transparent that that’s all I’m doing. I’m not using their sensitive data in any way for that. And I talk with them about this in class before I do it, and afterwards, I talk about whether they like it, is it better, and so on. So I think transparency and honesty are crucial from all parties. All of these approaches set the rules of the road, but they should also signal an openness to improvement, ’cause I think we’re in such an evolving moment in education when it comes to AI, and so we need to listen and we need to take on feedback and make changes as needed.
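For the anonymize-or-pseudonymize strategy Graham lists, here is one minimal, toy sketch of a pre-processing step that could run before any student text reaches an external tool. The helper name and regex patterns are illustrative assumptions; real de-identification needs far more care and sign-off from your institution’s privacy office.

```python
import re

def pseudonymize(text: str, roster: dict[str, str]) -> str:
    """Replace known student names with stable pseudonyms and scrub
    common identifiers before text is sent to an external AI tool.

    roster maps real names to pseudonyms, e.g. {"Jane Doe": "Student A"}.
    Toy example only: it will miss nicknames, misspellings, and
    identifying details inside the prose itself.
    """
    for real_name, pseudonym in roster.items():
        text = re.sub(re.escape(real_name), pseudonym, text, flags=re.IGNORECASE)
    # Scrub email addresses and long digit runs (e.g. student ID numbers).
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", text)
    text = re.sub(r"\b\d{6,}\b", "[id removed]", text)
    return text

# Usage: clean = pseudonymize(submission_text, {"Jane Doe": "Student A"})
```

Keeping a local mapping from pseudonym back to student lets you re-attach the feedback afterwards without the external tool ever seeing the linkage.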
Okay, that’s all I’ve got. If anyone has any questions about any aspect of this, I would love to hear ’em. Oh, it looks like we have a question in the Q&A about how I created the tutor. Yes. And I’ll just get the link for you. It’s relatively complicated to explain verbally, but it’s easy to read, so I’m just gonna give you the link. So if you wanna see the details on how to create the custom tutor, it’s at the link I just provided. Looks like there’s a question: “What’s the difference between ChatGPT and ChatGPT Plus?” So ChatGPT is publicly, freely available. You go to chat.openai.com and you can use the free model, which is called GPT-3.5. You actually don’t even need an account anymore to do this. You can just go there and start chatting. ChatGPT Plus uses GPT-4, which is a more powerful model. So it’s better at reasoning, gives more contextualized answers, and can handle longer inputs. It costs $20 a month. It also comes with more controls, like privacy controls, so you can limit the use by OpenAI of your data or your students’ data. So for instance, my custom GPT that I use for tutoring is set so that it doesn’t share any of its conversations with OpenAI, but it does cost students $20 a month. So that is the wrinkle.
Okay, so there’s a question about how reliable the information given by ChatGPT is: “Most of the time, I’ve found it’s wrong.” Yeah, so it really depends on the use case, the context, the background. Generally, if the query that you’re giving to ChatGPT is highly dependent on a specific field’s nuances, you’re gonna need to provide more of the background information for it to answer competently. So for instance, if you ask it something specific about a specific book, maybe like what happened in a given chapter or what was the position of the author, depending on the book, it might have trouble. However, if you give it the book as an upload, or part of the book, or maybe your notes on that part of the book, it’ll do a great job. So often the prompting and context are really important. If you’re just trying to search for general information, ChatGPT’s probably not what you need. There, you’d wanna use either Google or some other search engine; Perplexity is maybe the most prominent AI-based search engine. But fundamentally, ChatGPT is not trying to solve this problem. It’s not ultimately an information searching device. Okay, so there’s two questions in the Q&A.
One is, “I’m interested in the approach your university is developing around policy and a position on the use of the tools.” Yeah, so my particular university, UNC Chapel Hill, is pretty hands off. Their approach is kind of, like, we want to figure out what everyone’s using these tools for, and then we’re gonna develop a policy around that. It is a Microsoft school, so they’re trying to integrate Copilot, which is Microsoft’s version of ChatGPT, into a lot of use cases. And my department has no policy. So some professors ban it completely and others are leaning in. But yeah, lots to be said about policy. I think it really depends on the context. “When it comes to uploading a file,” someone asks, “can you do this in the free version, and how do you go about that?” If you’re gonna use the free version, you might wanna use Claude; I’ll type the answer in here: claude.ai. That is probably better if you’re gonna go with the free version for uploading files at the moment. If you’re uploading very large files, Google Gemini has a very large context window and can handle very large ones. So for instance, you could give it a full lecture recording and ask it to produce quizzes based on your content. It can do that quite well, actually, but it’s not as good at the fine details, and I find Claude’s better for that. Both of those are free. Let’s see. We’re over time, but should I keep going now?
– If you wanna answer one more, yeah, go for it.
– Okay. All right, so there’s at least one more. “What is the common practice regarding letting students know AI was used to create assignments, exams, class activities, et cetera?” Yeah, I don’t think there is a common practice right now, if you mean like an industry standard. I think that’s in the process of kind of forming. I think my view is that we should be as transparent as we would expect from our students for their own work. People disagree about this though. My own view is that if I use ChatGPT for any aspect of my teaching, I flag that to students. And particularly as it gets more kind of involved. So like, my tutor, it’s very clear, you know, how that works. I explain the whole background behind that. In my secretary case, the one where I kind of convert my rough feedback to organized, kind of well-packaged feedback, I include a disclaimer and a note about that. So I think transparency’s king at this point in time, but other people disagree.
So some people, for instance, argue, you know, “I don’t disclose to my students that I use various productivity hacks built into Outlook or into Gmail when I respond to their emails, and I don’t need to. I don’t need to tell them if I use ChatGPT to help me lesson plan; that’s my business.” So yeah, there’s a lot of different perspectives on it, and I think you need to make your practices continuous with your broader perspective on disclosure, transparency, expectations, and so on. So I think my students should be transparent, so I think I should be transparent. If you don’t have a similar view about transparency, if you don’t care whether your students use AI for whatever reason, you might have a lower bar for how much you think we all need to tell each other about the various tools we’re using to produce our communications. So I think there’s not an industry standard. I think now’s the time to figure out your view. And it should obviously be sensitive to your institution’s policies, your department’s policies, your context, your field. This is a generic presentation, so I’m trying to cover all the bases, but at the moment, there’s no unified story as far as I can tell. Okay.
– All right. Well, I appreciate that. I think that’s a really important point: we may have to consider these tools and technologies as part of our teaching philosophies moving forward, and what they mean to us and how we wanna share about them. Thank you so much, Graham. It’s always awesome to talk to you, and you have so many interesting things to say about this topic. If you all enjoyed this webinar and you wanna learn even more, we are very fortunate to have several interviews that we have done in the past with Graham on the topic of AI, and even some walkthrough demos on the platform of how to use these tools. So in tandem with this webinar, those are really amazing resources on this topic. We also have one more webinar on AI in June: Key Academic Integrity Considerations in the Gen AI Era, which takes place on June 12th and is open for sign-ups. And I will leave the last word to you, Graham, if there’s any last thing you wanna share with our audience here.
– Sure. Yeah, thanks a lot for having me and thanks for coming. I guess my one big takeaway is: get out there and experiment. I think until you’ve tried these things for your own use cases, it’s hard to take my word for it. You gotta go check it out yourself. The options are there. It’s on you now to give it a whirl.
In this Show and Share webinar, Graham Clay shared a range of ways in which AI tools can effectively play a role in supporting the work of educators in colleges and universities. Graham talked about different types of generative AI tools and their capabilities, shared various use cases of how AI tools can be used in the classroom, and explained how he has used AI to create a ‘super secretary’ that helps him grade his students’ work 2-3x faster. Graham concluded the webinar with some ethical considerations around AI use.
Listed below are the key discussion points, complete with timestamps from the recording. Hover over the video timeline to switch between chapters. Note that chapters are only supported on desktop. On mobile devices, the chapter markers aren’t visible; however, you can access the chapters menu from the video settings on the bottom right-hand side.
- 03:45 – How to use AI in the classroom
- 07:38 – Differences among available AI tools
- 12:43 – AI use cases
- 14:39 – AI and grading
- 21:47 – Ethical considerations
- 25:24 – Q&A
Graham Clay, Ph.D., is a Teaching Assistant Professor of Philosophy at the University of North Carolina at Chapel Hill, USA. Graham runs a newsletter, AutomatED: Teaching Better with Tech, that distills his and his team’s research and consultations on AI and pedagogy for thousands of readers every week.
Learn more:
- How I Grade 2-3x Faster with ChatGPT (Graham Clay, AutomatED: Teaching Better with Tech)
- How AI Can Help With Grading, Feedback, And Assessment: A Chat With Graham Clay (Interview with Graham, OneHE)
- Tips For Effective Prompting In Generative AI Tools (Interview with Graham, OneHE)
- Using ChatGPT To Create A Study Agent (Interview with Graham, OneHE)
DISCUSSION
How have you used AI to assist with your work?
Please share your thoughts and questions in the comments section below.
