Not Your Default Chatbot: Teaching Applications of Custom AI Agents

Derek Bruff

Video transcript
– I am Derek Bruff. I’m an Associate Director at the Center for Teaching Excellence at the University of Virginia. And yeah, we’re here to talk about custom AI chatbots. So there’s a lot we could say about this subject, but I’m gonna try to keep things fairly on track here. The main idea is that if we are thinking about particularly learning applications of AI, things we might have our students do with AI to enhance their learning, many times the large commercial chatbots, AI chatbots like ChatGPT or Claude or Gemini, don’t really do the things we would like them to do. They aren’t kind of configured for learning and end up sometimes being less helpful than we would want. And so the question is, can we actually build better AI-powered chatbots for our students? A number of tools are available for doing this. I’m not gonna talk about tools so much today, but you can learn how to do this in Gemini or Copilot or ChatGPT. I’ll mention a couple of others that some of my colleagues have been using as we go.
But I really wanna talk about kind of the teaching and learning piece of this, kind of a why would we use a custom AI chatbot? I’ve been working with a number of faculty at the University of Virginia, and to a lesser extent, Washington University in St. Louis who are experimenting and exploring with AI chatbots this fall. And so I’m gonna be drawing on some of what they’ve been teaching me about these tools. So a little bit of framing. I’ve said this many times in the past. My favorite educational technology is wheels on chairs. I’m a math teacher and when I teach, I walk into the classroom and I have a lesson plan. I have some goals in mind. It’s really great when the furniture in the room can be responsive to the plans that I have for student learning. And so when the furniture has wheels, we can move it around and we can do small groups or big groups or discussions or whatever we need to. And so I maintain, this is my favorite educational technology. Certainly if I’m teaching in an onsite classroom. And I mentioned this for a couple of reasons. One is that our pedagogy should drive our technology use and not the other way around, right? I don’t walk into a classroom and say, what can I do with these chairs? No, I walk in with my goals and my activities that I’ve planned. And I really want the technology to be responsive to that. So I’m looking for technology that can help support the kind of teaching work that I want to do. I think it’s important to have that order most of the time. Sometimes we go the other way, but usually we should drive with our pedagogy first. The other is that we all teach with technology. Some of that technology is analog. Some of that technology is very familiar. When I walk into a classroom, I know what I can do with a chalkboard, right? I know what it’s good for, I know what it’s not good for. It’s the newer digital technologies such as generative AI, where we are all collectively figuring out what can we do with these things? Where are they useful? Where are they problematic? And so that’s why we spend a lot of time talking about that, which we’re doing today. One other concept that I think has been very helpful as I think about working with AI over the last little bit is something that comes from the world of software development called the Rubber Duck Effect. And the idea is that if you are trying to write some computer code to do a certain thing and you can’t get it to work the way you want it to do, one might take an inanimate object from one’s desk. I have a little Canada goose here for some reason. And you would just explain the problem you’re trying to solve to your goose or to your rubber duck. You don’t get much back, right? But the act of articulating the problem you’re trying to solve can sometimes help create those light bulb moments where we solve the problem. Not always. But when I think about working with generative AI, there’s one mode of thinking, which is to say, I have this thing I need to do. I’m gonna have AI do it. And maybe I’ll be very clever in prompting AI. So it generates this thing that I need. Another way to think about working with AI is I’m working on something and it would be nice to have a kind of thought partner or interlocutor here to help me articulate my own thoughts, solve my own problems. And so I think this is a much better way to think of what AI is good for, is that you can have a conversation with it and it can help you work through stuff yourself. 
It’s what you make of that conversation that matters, not the output of the AI itself. And so if we keep this rubber duck effect in mind, I think it helps us use AI in more useful and productive ways. So I’ve been looking, as I said, at faculty who are experimenting with custom AI chatbots and writing sometimes very long detailed prompts to direct a chatbot to do certain types of things. And so I’ve identified right now kind of five categories that I’m gonna talk about. These are the pedagogical use cases that I’ve seen for AI chatbots. I would really like this to be a comprehensive list of categories ’cause I’m a mathematician and I love putting things in buckets, but it’s not, it’s just five that I have something to say about.
The first one is what I’m calling a Course Assistant. I’m not the first one to call it this either. I think early on, faculty and other instructors realized that you can give a document to a chatbot and have it base its responses on the document that you’ve given it. There’s lots of names for this, but that’s the core dynamic. An initial idea was, hey, let’s give a chatbot our syllabus, and then we can have students ask it questions and answer those questions based on the syllabus. And this could be helpful for students. And to some degree that is, but I’m more interested in a slightly more robust version of that where we try to design chatbots that, yes, they have the syllabus, but maybe they have some other materials or other insight we’ve collected to help students navigate the course as a whole, a little more holistically. And so I’ve got an example here. This is a chatbot by John FitzGibbon, who teaches political science at Boston College. And so this is actually the prompt, the start of the prompt that he used. And I should say, I think Dash is putting a link to my slides in the chat. Most of my slides have a URL in the notes field. So if any of these examples catch your eye, you can go to the slides, go to the notes field, and usually find a reference for some more information. In this case, he’s asking this chatbot, “You are an AI assistant for an undergraduate political science course. Your job is to assist students in understanding how the course works and helping them study well. The goal of having you available to students is to help them find basic information about course logistics, help them understand assignments and act as a guide to how to best learn in the course.” And there’s a lot more here. This is just a snippet of the prompt that John wrote. And I learned about this chatbot in an article by Tim Lindgren at Boston College. He called it a Course Assistant as well. And here’s what he wrote about this course assistant. “By helping to make the expectations and requirements of the course as transparent as possible, this chatbot can be particularly helpful for those who need more support to make the hidden curriculum of college more visible.” And so there’s a lot in this about answering basic questions from the syllabus, but also advising students on study strategies and ways to navigate a course very effectively. So that’s an interesting use case.
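To make that core dynamic concrete: a system prompt plus a grounding document takes only a few lines of code. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, file path, and prompt wording are placeholders, not John’s actual setup.

```python
# A minimal sketch of a syllabus-grounded "course assistant," assuming the
# OpenAI Python SDK. The model name, file path, and prompt wording are
# placeholders, not John FitzGibbon's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the course document the bot should ground its answers in.
with open("syllabus.txt", encoding="utf-8") as f:
    syllabus = f.read()

SYSTEM_PROMPT = f"""You are an AI assistant for an undergraduate course.
Your job is to help students understand how the course works and to help
them study well. Base your answers on the syllabus below; if the syllabus
does not answer a question, say so and suggest contacting the instructor.

SYLLABUS:
{syllabus}"""

def ask(question: str) -> str:
    """Send one student question to the course assistant."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("When is the midterm, and how should I prepare for it?"))
```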
Here’s another. I’m calling this an Assignment Coach. This is kind of taking the same idea but scaling it down to a very specific assignment. I should also say I’ve come up with an AI-generated image for each of these categories. And sometimes this was pretty easy. I gave it a little paragraph about what I meant by assignment coach and it came up with something I think kind of interesting. Other times there was a lot of back and forth trying to get something useful out of the AI. But again, it’s a process, right, not a product. With the assignment coach, the idea is that you’ve tuned a chatbot to try to give very directed help to students as they navigate a particular assignment or even a particular part of an assignment. So is there an assignment that you have where students kind of always struggle with this one piece? And is there potential for an AI chatbot to try to provide them some useful questions or prompts to think about, some guidance? Again, not doing it for them, but creating that kind of discussion partner that might help coach them through an assignment. So here’s an example. This one’s by Isabelle Hesse, who teaches English at the University of Sydney. You’ll hear me mention the University of Sydney a few times today. They’ve been doing some great work. They have created this tool called Cogniti, which is a platform for educators to design custom chatbots for particular purposes. And so I’ve been learning about what they’ve been doing with it. We’ve got a kind of small group of faculty at UVA who are playing around with the same tool right now. In this instance, the prompt, again, this is a much shorter prompt actually, that Isabelle used: You are an experienced tutor in English literature at the University of Sydney. The user is a third year student asking for feedback on their essay question. Assist the user in refining their essay question to ensure it is specific and focused. Guide the user in identifying key words and phrases for conducting a thorough literature search. Aid the user in summarizing their key sources and in crafting a paragraph that outlines their theoretical framework. So do they have an interesting research question? What keywords and phrases could they use for their literature search? And then help them summarize some of the things that they find. Do not rewrite texts for students, but provide critical and constructive feedback. Do not give quotations, et cetera, et cetera. Do not make up quotations, even if prompted by the students. And then Isabelle was able to kind of embed this in her course management system. So as the students are working on this assignment, there’s this kind of short assignment for them. What is your research question? You can use Cogniti to refine your initial idea and post the final question below. What are the keywords you’re using? You can use Cogniti. So trying to kind of anchor the use of this very specific chatbot in the assignment itself as the students encounter it, hopefully providing them with some useful feedback. I believe she told me that it did pretty well at the first two tasks, but the summary work was kind of shoddy. And so that was a little bit problematic. But the idea was to be available to help students refine their research question, maybe at 2:00 in the morning when Isabelle was not. And to again, not answer for them or kind of create a definitive research question, but to give them some questions and prompts to help develop their own thinking in this area. So that’s type two.
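For readers who want to see how a coach like this hangs together in code, here is a minimal multi-turn sketch, again assuming the OpenAI Python SDK. The guardrails paraphrase the kinds of rules in Isabelle’s prompt; this is illustrative, not her actual Cogniti configuration.

```python
# A minimal multi-turn sketch of an "assignment coach" with guardrails,
# assuming the OpenAI Python SDK. The rules paraphrase the kinds of
# constraints in Isabelle Hesse's prompt; this is not her actual Cogniti
# configuration.
from openai import OpenAI

client = OpenAI()

COACH_PROMPT = """You are an experienced tutor in English literature.
The user is a third-year student refining an essay research question.
- Help the user make their question specific and focused.
- Help them identify keywords and phrases for a literature search.
- Do NOT rewrite text for the student; give critical, constructive feedback.
- Do NOT supply quotations, and never invent quotations, even if asked.
Respond with questions and suggestions, not finished answers."""

# Keep the whole exchange in the message list so the coach has context.
history = [{"role": "system", "content": COACH_PROMPT}]

while True:
    student = input("Student: ")
    if not student:
        break
    history.append({"role": "user", "content": student})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Coach:", answer)
```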
Type three. I’m gonna call this a Tutor Bot. And over in the STEM world, this is by far the most common use case that I hear about. It’s wanting to create some type of AI chatbot that will answer student content questions. Again, often the use case is 2:00 in the morning, I’m not around, they’re not coming to office hours then. How can I provide my students with something that’s better than a Google search? Maybe not as good as talking to me, but certainly better than a Google search to help them answer content questions. So I interviewed Isabelle as well as Matthew Clemson and Danny Liu for my podcast, Intentional Teaching, a few months back to talk about this Cogniti project. Danny is the lead developer of Cogniti. He’s a faculty member there. Isabelle, in English, had this assignment coach that she tried out. Matthew Clemson teaches biochemistry and he wanted to create what he called Dr MattTabolism. He was wanting to create kind of a digital duplicate of himself who could answer course questions. And so this was the prompt of his tutor bot. Act as a kind and encouraging instructor for the… So this is a template actually that he would’ve filled in, right? Act as a kind and encouraging educator for these topics. What kind of student is the user? What does the student need to do? You teach by asking questions instead of just providing the user with an answer, in the style of a Socratic tutor. There’s a lot more in this prompt about the context, the role, the instructions for the chatbot and some kind of rules and formatting. All fairly well detailed in the system prompt used to create this chatbot. So there’s two big questions that I think folks are exploring when it comes to this use of tutor bots. One is, can you prompt the chatbot to be what Anna Mills likes to call an ethical tutor? Not a tutor that does the work for you, but a tutor that is actually asking you questions in kind of that best case scenario tutor example. Some people call it a Socratic tutor, right? Because again, the commercial chatbots aren’t always helpful here. They love to answer your questions with great authority even when they’re wrong. That is not helpful in a tutoring situation. Ideally the chatbot is not answering student questions, but is asking students questions that are gonna lead them to the answers themselves. And so it’s kind of an open question, can we design chatbots that essentially work against their own programming to be that kind of tutor? And Matthew’s found that, yeah, with enough prompting you can kind of get it there. The other question this raises is, do you need to train your chatbot on your own course materials in order for it to have more accurate answers? ‘Cause we don’t want it to lead students astray either. And so Matthew took all of his course materials, his lecture notes, things like that, and gave it to the chatbot so that it would draw on that material as it answered students’ questions and interacted with students, in the hopes that it would guide it towards kind of more accurate answers and answers that are pitched at the level of his students. Danny told me that you don’t need to do that. That the large language model training that is powering these AI tools knows a lot of biochemistry already. And I say “knows” somewhat facetiously. It doesn’t know anything, but it’s got a lot of training, a lot of very large statistics about words associated with biochemistry that allows it to answer lots of questions accurately. And so for some fields you may not need to do much training.
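A rough sketch of that grounding idea follows, assuming the OpenAI Python SDK. Platforms like Cogniti handle retrieval properly; the naive keyword scoring below is a hypothetical stand-in for embedding-based retrieval, just to illustrate putting relevant course material into the model’s context alongside Socratic instructions.

```python
# A rough sketch of grounding a Socratic tutor bot in course notes, assuming
# the OpenAI Python SDK. The keyword scoring is a crude, hypothetical
# stand-in for the embedding-based retrieval real platforms use.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Pretend each lecture is one text chunk; file names are placeholders.
chunks = [
    Path(name).read_text(encoding="utf-8")
    for name in ["lecture01.txt", "lecture02.txt", "lecture03.txt"]
]

def most_relevant(question: str, k: int = 2) -> list[str]:
    """Rank chunks by how many words they share with the question."""
    words = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))[:k]

def tutor(question: str) -> str:
    """Answer in Socratic style, grounded in the most relevant notes."""
    context = "\n---\n".join(most_relevant(question))
    system = (
        "Act as a kind and encouraging Socratic tutor for this course. "
        "Teach by asking questions rather than giving answers outright. "
        "Ground your guidance in the course notes below.\n\n" + context
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```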
The model itself may already kind of come with enough understanding of that field to be a useful tutor bot. I’m not gonna summarize any research today ’cause we don’t have enough time for that. But I will point out that the Harvard Physics Department, some faculty led by Greg Kestin, published this really interesting study this summer where they were comparing an AI tutor bot with in-class active learning in an introductory physics class. And you know, I’ve been advising faculty for 20 years to move away from the lecture and into more active learning. We know that active learning is demonstrably better for most students. They thought, hey, let’s pit that best case scenario against this new thing, this AI tutor bot, and see how it goes. And the short answer is the students learned better from the tutor bot than from the active learning experience. And they did it in a little less time as well. So lots more we can say about that. But I want you to know about this article because there is a growing body of research looking at some of these things and their implications.
Category number four, we’re already on number four, is what I’m calling a Feedback Bot. Can we coach an AI to give students useful feedback on their writing or on their other work? Well, I interviewed Pary Fassihi from Boston University on my podcast, gosh, a year and a half ago now. She was teaching a writing course. This was, you know, a couple years ago in the AI era. And there was a snow day. And so her class did not meet, and she wasn’t able to do the peer review that she was anticipating. And so she thought, “Hey, let’s have my students use AI to get feedback on their drafts.” And she gave her students pretty specific prompts. Evaluate the evidence used to support the main argument. Is the evidence relevant, sufficient, and effectively integrated into the argument? How well does the paper analyze the implications of digital technology on academic integrity and authorship? What insights or unique perspectives does the paper offer? So she was giving students the prompts, right? This is not a full-on chatbot, but she was crafting some better prompts for students to get that feedback that was tied to her learning goals for the assignment. You can kind of see these are kind of right out of her rubric, right? And it turns out that the students got some pretty useful feedback here and then had to decide what to do with it, right? Do I take this feedback as gospel or do I use it as a provocation for my revision? There was an interesting study last year by Jacob Steiss and colleagues that looked at the quality of human and ChatGPT feedback on students’ writing. This is a K-12 example, grades 6 through 12. And again, I’m not gonna go into the details of the study, it’s worth reading. They did find that the humans, the trained teachers, gave on average better feedback on students’ writing, but not much better. And if you looked at how well the feedback was based on the criteria for the assignment, ChatGPT did a little bit better than the humans on average. And so my takeaway from this is that ChatGPT can give terrible feedback, but it can also give pretty good feedback if you’ve based it on specific criteria and kind of coached the chatbot to go in that direction. And that’s what my UVA colleague Spyros Simotas has done. He did something similar where students were doing some writing and then using a prompt to get feedback from ChatGPT. And he found the students weren’t doing that rubber duck thing. They weren’t going back and forth, they were just taking the feedback as gospel. And he’s like, I want them to do better. He said in a presentation earlier this week, I teach French, not prompt engineering, so I don’t wanna teach my students how to prompt better, that’s not my job. Instead, I’ll make a chatbot that is built to have the useful interactions that I want it to have. And so the student puts in their draft, in this case writing in French. The chatbot looks at it and says, here are the main error categories that I noticed, five different categories in order of priority. Which one do you wanna focus on? The student picks articles and determiners, that’s great. Here’s an example of something you might need to work on. And instead of fixing it for the student, asking them some questions that might help them fix it. This goes on and on through the different categories as the students control it. And at the end there’s a report that the chatbot generates summarizing the interaction.
Again, it’s early days to know kind of how effective this is, but I’m impressed at how structured the interactions are with this chatbot. The categories, the priorities, the summary at the end. Spyros spent a lot of time writing that prompt to control the chatbot, testing it and iterating on it and changing it and updating it. It took a lot of work to create a prompt that made a chatbot do this very specific thing.
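To illustrate how that kind of staged interaction can be encoded, here is a minimal sketch in the spirit of Spyros’s bot, assuming the OpenAI Python SDK. The stage protocol below is invented for illustration; his actual prompt was far longer and heavily tested.

```python
# A minimal sketch of a staged feedback bot in the spirit of Spyros
# Simotas's French-writing bot, assuming the OpenAI Python SDK. The stage
# protocol is invented for illustration; his actual prompt was far longer
# and heavily tested.
from openai import OpenAI

client = OpenAI()

FEEDBACK_PROTOCOL = """You give feedback on student writing in French.
Follow this protocol strictly, one stage per turn:
1. When the student pastes a draft, list up to five error CATEGORIES you
   noticed (e.g., articles and determiners), in order of priority. Do not
   correct anything yet. Ask which category the student wants to work on.
2. For the chosen category, quote ONE example from the draft and ask
   guiding questions that help the student fix it themselves. Never supply
   the corrected sentence.
3. Repeat stage 2 for other categories as the student chooses.
4. When the student types DONE, produce a short report summarizing the
   categories covered and what to review next."""

history = [{"role": "system", "content": FEEDBACK_PROTOCOL}]

def turn(student_message: str) -> str:
    """Advance the staged interaction by one exchange."""
    history.append({"role": "user", "content": student_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```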
Category number five, very briefly, a Conversation Simulator. There are a lot of fields, education, nursing, the health professions, law, business, that already have some type of simulation environment, clinical simulation, as part of their teaching, where students are interacting with their teacher who’s playing a role or a trained human actor who’s playing a role. Your AI can play a role too, if it’s kind of coached in the right way. This is an example from a nutrition instructor at West Chester University of Pennsylvania who wanted her students to practice a particular kind of health consultation. And so she had the AI play the role of the client coming in. And I like this because the AI is unpredictable, inconsistent and sometimes doesn’t tell the truth, just like real clients, right? So it’s in some ways a more authentic experience for the students. So five categories. Again, these are early days to kind of see which of these are really gonna take, but this is where I’m seeing the use cases emerge. And I’ll leave you with a couple of big ideas before we go to the Q&A. One, is this terrible Midjourney… No, this is a ChatGPT image. It’s the best I could do. Good fences make good chatbots. All chatbots are designed to do something. They’re not always designed to do what we want them to do. And so sometimes we can put some fences around a chatbot’s behavior and coach it into doing something that’s pedagogically useful for our students. But that does take some work to design those fences, to design those system prompts that make the chatbot behave in some ways and not behave in other ways. And that leads me to my second wrap-up point: close is easy, exact is hard. If you want a chatbot or an AI to do something very precisely, you’re probably out of luck. Maybe you’ll get there with a ton of work. If it’s something that’s kind of close enough, like simulating a client who’s a little bit unpredictable, that’s great, that’s a lot easier to design. And so be thinking about when is the feedback or the tutoring gonna be a case where close enough is kind of good enough, or do you wanna put in the time and effort like Spyros did to make something that’s really robust, that does exactly what you want, but may take a lot more time and effort on your part to develop. So, I’ll leave it there.
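Returning to the conversation simulator for a moment: such a bot is mostly a persona prompt. Here is a minimal sketch, assuming the OpenAI Python SDK; the persona details are invented, and the higher temperature setting is one way to keep the simulated client a little unpredictable.

```python
# A minimal sketch of a role-play "conversation simulator," assuming the
# OpenAI Python SDK. The persona details are invented for illustration.
from openai import OpenAI

client = OpenAI()

PERSONA = """Role-play a client in a nutrition consultation. You are a
45-year-old office worker who wants more energy but is skeptical of diets.
Stay in character. Be a little inconsistent, and occasionally withhold or
shade the truth, as real clients sometimes do. Never break character to
advise the student; if asked, deflect as the client would."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    temperature=1.0,      # more variability makes a less predictable client
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Thanks for coming in. What brings you here today?"},
    ],
)
print(response.choices[0].message.content)
```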
We’ve got a little time left for questions. I am reading these live. Do you ever fear you are participating in training your replacement? Ha ha. Okay. There is a smiley face there. So I will take that hopefully in the way that it’s intended. You know, Dan Levy co-wrote a book called Teaching Effectively with ChatGPT, and I heard him on a podcast last year where he said, you know, if you’re designing a feedback bot to give students feedback, you are now the designer of the feedback and not the giver of the feedback. The designer of the feedback is still really important, right? And so when you do a peer review activity with students and they’re giving each other feedback, right, that’s not coming from me. But if I’ve done that well, I’ve designed an environment where students are equipped and ready to give each other useful feedback. I think we’re still gonna need a lot of humans to do that kind of work. Again, kind of close is easy, but exact is hard. And so the kind of really quality teaching that we see humans do, I don’t think AI is gonna get there anytime soon. I think also all of these are use cases that are embedded in the social environment of learning, students in classes, either online or on site, instructors who are building relationships with students. What we learned from the MOOC mania of 2012 is that a few students don’t need all that. They just need good learning resources and they’ll go to town and educate themselves. Most students need scaffolding, they need structure, they need support, they need relationships with peers and with mentors, right? That’s the kind of stuff that I think humans are really good at, and AI is just an approximation and probably won’t ever replace it. So, no.
Second question. Have you ever used NotebookLM to do some of these chatbot tasks? Ooh, okay. Very good question. So NotebookLM is a Google product and it is based on AI. The interface is very different. You can give it a bunch of documents and you can have a chat with it about your documents, but you can also have it generate an audio summary or a kind of PowerPoint-style summary of your work. It can create mind maps of your text. A very different way of interacting. And I would say, one, it does kind of different things, right? And it’s kind of better at some things. I think it’s pretty good at summarizing, maybe better than some of the other tools ’cause that’s kind of its main thing. It certainly makes those cheesy audio summaries that, you know, are unique, I guess. I would say also that I have predicated this session on the assumption that the chatbot is a useful way to interact with AI, and it might not be long run, right? We are currently talking to AI like it’s some little robot buddy of ours, but that’s not how you have to use AI. There are other ways to interface with AI. And so I’m also interested… One of the things I like about NotebookLM is that it shows there’s a way to use AI that’s not a chatbot. And so that might inspire us to think about some other interfaces that are a little bit different and not so chatty, which again come with a certain set of problems that we may have to solve pedagogically. Andy says… Andy Van Schaik, it’s good to see your name. Andy and I worked at Vanderbilt together for a long time. What is the simplest tool to create chatbots of the type you described? So all the leading vendors have some option for this at this point. You can go with ChatGPT, and if you pay $20 a month you can use their GPTs feature and create one. Copilot’s got a version of that. Claude’s got a version of that. Gemini’s got a version of that. There are other tools that are, I think, more geared towards educators that are worth exploring. Cogniti out of the University of Sydney is certainly one of them. BoodleBox I keep hearing about. I’ll put these names in the chat. BoodleBox. Playlab is another one that I keep seeing. I don’t know those as well, but those are the places that I might want to try to experiment with first since they are kind of for educators and in some cases designed by educators. I’m sure there’s others. Those are the ones that keep crossing my radar.
Rachel says she uses Playlab and it’s been great. So maybe that’s a great place to start. Yeah. You know, we didn’t touch on this, but I have a friend in town who teaches at Lipscomb University, which has a site license to BoodleBox. And one of the things you can do in BoodleBox is have your students create AI chatbots that you have visibility into. You can see how they design them, you can see the interactions that they’re having. So she has an assignment that has her students building chatbots to do certain things. And that’s an interesting pedagogical move as well, kind of students as producers of chatbots as a way to understand how these things work and how AI works. Justin asked, are paid versions of generative AI better at these types of chatbot creations? Yes. In some cases it’s a restriction piece where you can’t even create your own chatbot or agent if you’re not using a paid version. Certainly for the commercial providers, the paid versions either provide better models, more robust AI, or, more typically, they just give you more access. So you can kind of use the powerful AI more often each month than if you’re on the free version. I don’t know about Playlab; if folks know more about that and want to say that in the chat, I don’t know what the cost structure of that is at all. Cogniti is a whole different thing; you’d have to go to the University of Sydney to try to kind of access that platform. And so, you know, on some level all of these tools are paying for computing power to do the AI work under the hood. And the question is, who’s paying for that? Right? So maybe it’s you, maybe it’s your institution, maybe it’s a commercial provider, right? I don’t have a clear answer, but certainly paying attention to where the money goes is an important part of this. Tony asks, are there ways you’ve seen people, instructors or learners, misusing chatbots? Oh. What can go wrong? I mean, the first thing that comes to mind, and I almost shared this, but there’s a kind of inverse of the tutor bot. The idea with a tutor bot is the students come with questions and they hopefully have a useful, productive interaction with the tutor bot. There are also faculty who keep trying to design what I would call a practice bot or a quiz bot, where you want the AI to generate a whole sequence of questions for students to try to answer and get feedback on. That seems to be one of those examples where close is easy, but exact is hard. The faculty who’ve tried to use existing AI tools to create an endless series of useful practice questions struggle. The questions aren’t formatted right, the answer choices are bad, the correct answers are incorrect, right? That seems to be a struggle. And that’s where I feel like, we’ll see how this goes. I think it may take more software engineering resources than individual faculty have to make one of those successful. We may need that from a larger organization that can spend more time in getting it exactly right to do that kind of work. So that’s my theory right now. I’m gonna do the one with a question mark.
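One partial mitigation for the quiz-bot formatting problems described above is to constrain the model’s output to JSON and validate the structure in code; whether the questions are any good still needs human review. A rough sketch, assuming the OpenAI Python SDK (the prompt and field names are illustrative):

```python
# A rough sketch of one way to tame quiz-bot formatting problems: constrain
# the model to JSON and validate the structure in code. Assumes the OpenAI
# Python SDK; the prompt and field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Write one multiple-choice question about cellular respiration.
Return ONLY a JSON object with keys "question" (string), "choices" (array
of exactly 4 strings), and "answer_index" (integer 0-3)."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    response_format={"type": "json_object"},  # forces syntactically valid JSON
    messages=[{"role": "user", "content": PROMPT}],
)

item = json.loads(response.choices[0].message.content)

# Structural checks; whether the question is *pedagogically* sound still
# needs a human (or a lot more engineering).
assert isinstance(item["question"], str)
assert len(item["choices"]) == 4
assert item["answer_index"] in range(4)
print(item["question"])
```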
What are the concerns around AI models becoming weaker or less accurate as time goes on? I don’t hear a lot about that. I think in general they’re gonna get better. I think there are some dangers of… You have to train an AI on a large body of natural language text. The big models have basically hoovered up huge sections of the internet to do that. As more and more of the internet is produced by AI, then it’s kind of like eating its own dog food. That’s the wrong metaphor. But like it’s using its own output as input, and that could lead to some problems. I’m not enough of an AI expert to know how problematic that is. I think there are some ways, from what I’ve heard, to do that really well and thoughtfully. But that is one problem. I think the bigger problem actually is not the large language models and how they generate text, but the design choices that the creators of these chatbots make in how the chatbots work and how they act. So one term to know is sycophancy. AI chatbots tend to tell you what you wanna hear and agree with you even if you’re demonstrably wrong. And in fact there was an update to ChatGPT this summer that seemed to increase the sycophancy, and people like rebelled. Like, yeah, we like a little flattery, but actually we want it to be correct as well. And so that’s where I feel like things may start to skew in really weird ways. You may be familiar with the AI chatbot Grok, from the company that used to be Twitter, which seems to have a whole different set of rules governing its output than most other AI chatbots. And so that’s the kind of thing that I worry about, is that some of the products will be unusable because of some of the design choices of their creators.
There are a lot of ways (good and bad) that an off-the-shelf AI chatbot like ChatGPT or Claude can be used in teaching and learning, but the default behaviours of these chatbots don’t always align with our pedagogical goals. There are, however, a variety of tools for designing custom AI chatbots with particular purposes.
In this webinar recording, Derek Bruff explores some emerging teaching applications of custom AI chatbots, from tutor bots to course assistants, to assignment coaches, and beyond. This isn’t a demonstration of how to make a custom bot, but more of an exploration of some of the reasons why educators might want to set one up. Derek Bruff, Associate Director at the Center for Teaching Excellence, University of Virginia, USA, writes a weekly newsletter called Intentional Teaching and produces the Intentional Teaching podcast.
This webinar was inspired by Derek’s blog post “Not Your Default Chatbot: Five Teaching Applications of Custom AI Bots” (2025).
Below are the key discussion points with timestamps from the recording. Hover over the video timeline to switch between chapters (desktop only). On mobile, chapter markers aren’t visible, but you can access the chapter menu from the video settings in the bottom right corner.
- 03:10 – The Rubber Duck Effect
- 04:40 – Course Assistant
- 07:35 – Assignment Coach
- 10:51 – Tutor Bot
- 15:14 – Feedback Bot
- 18:51 – Conversation Simulator
- 21:06 – Q&A
Useful resources/references:
- Download the webinar slides.
- Intentional Teaching podcast Episode 068. Teaching with AI agents with Matthew Clemson, Isabelle Hesse and Danny Liu.
- Intentional Teaching podcast Episode 035. AI-Enhanced Learning with Pary Fassihi.
- Kestin, G., Miller, K., Klales, A., Milbourne, T., & Ponti, G. (2025). AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports, 15, 17458.
- Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. B. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91, 101894.
- Scaling Yourself with Chatbots: Instructional Design Meets AI Collaboration Padlet by Heather Brown.
Recommended OneHE Content:
- Introduction to Artificial Intelligence in Teaching and Learning (Course)
- Getting Creative (and Critical) with AI Literacy (Interview) Free
- Using ChatGPT To Create A Study Agent (Demo)
DISCUSSION
What insights or ideas from the recording inspired you?
Please share your thoughts in the comments section below.