Video discussion

Using ChatGPT to Create a Study Agent

Graham Clay

Niya Bond

In this video, Graham Clay showcases his personalised chatbot designed to assist his students in completing an in-class activity.

Video transcript

– Hi everyone, I’m Niya Bond. I’m the Faculty Developer here at OneHE, and I’m excited to be back again today with Graham Clay. Graham has been with us before to talk about using AI in teaching and learning. And Graham, if you could, just introduce yourself again to our community and share what we’re gonna be talking about today.

– Yes, sure. My name is Graham Clay. I’m a teaching assistant professor at the University of North Carolina at Chapel Hill. And I also run a blog and consulting service called AutomatED. We have a newsletter. And you can see the link at the bottom here to check us out, where we talk about pedagogy in connection with technology, in particular, AI. Today, I’m gonna talk with you about prompting, kind of just a beginning survey of considerations you should keep in mind as you try to prompt large language models like ChatGPT to get the sorts of outputs that you want.

– Wonderful. Thank you. I know this is a skill that many educators, including myself, are interested in learning more about, so we appreciate your expertise and your time.

– Sure thing. I’m looking forward to sharing.

– Alright, well let’s get started. I think you have some examples that you’ll share with us to help us figure out how best to do this prompting.

– Definitely, yeah. So let me share a screen right quick. Okay. So you should see my browser.

– Yep.

– And this is ChatGPT. So, most people use ChatGPT as the large language model of choice. Lots of debate about different ones. There are others, like Anthropic’s Claude and Google’s Bard, soon to be replaced by Gemini. ChatGPT is produced by OpenAI, which has a relationship with Microsoft. So, you know, you can read online about different models. ChatGPT is the most common, so I’ll just use it today. You’ll also see in the upper left corner of my screen, it says “ChatGPT 3.5.” This is the default free version. There are different versions. You can pay $20 a month for what they call Plus, and it gives you more options. So there’s ChatGPT, that’s the broader tool, but then you can actually create GPTs, which are like miniature agents for specific purposes. That’s if you have the Plus version.

– [Niya] Oh, wow.

– So I just made one right quick to show you guys what I had in mind. So, imagine that I have an in-class activity where students need to work in groups towards a goal that I’ve set for them. And I have too many students. So, often, I do have 40 students. It’s hard to circulate to every group in time, especially if certain groups have a lot of questions. I get stuck over there. I can’t get to the other students. We run out of time, that sort of thing. Well, this GPT agent can play a kind of TA role. So it can be circulating the room in a sense, just like me. In fact, every group can have access to it from the get go. So, if you have ChatGPT Plus, you can create a GPT. You can read online about how to do that. I won’t go into that now, but I’ll talk about the prompting of it.

There are two ways to create one once you’ve pressed Create. You can either create it right here, or you can configure it. So, creating it is basically just like you talk to ChatGPT, just like we were talking to ChatGPT before, but it’s in the context of creating a little agent to play a specific role for you. You can also configure it manually. So you can tell it the name. I’ve pretended this is actually one of my classes: this is a course number, this is a date, and then the name of it. It’s “In-Class Mentor.” It’s “a philosophical guide for AI argument construction.” Then, I have instructions. This is like prompting. I’ll talk about this in a minute. Conversation starters. So these are things that, you know, maybe could begin the conversation, how the student might begin to talk to it, for instance. I could even upload files to it. So I could give it the background reading, say. I can let it browse the web, create images, or, even, interact with code.

So the beauty of this, before I go into the prompting part of it, is that once I save it up here in the upper corner, I can publish it to only myself, so I can just play with it myself and talk with it. So it’s basically, it becomes like ChatGPT, but a little customised. Maybe that’s useful for me. But I can also share it publicly, that’s coming soon, or with anyone with a link. So, that would be my students. So I could share the link with my students, and then they could interact with it in class. This is better than ChatGPT in itself, or Claude, or Bard, because it’s been customised by me before they interact with it. So they’re not just interacting with the large language model in its generic form. They’re interacting with this kind of preset agent that I’ve created for this specific purpose. And in this class, it’s something for this specific activity, one activity. So the central activity of a given lesson, say. That’s really powerful. Because then, I can kind of ensure a homogeneous sort of experience for my students. And again, given that I can’t be duplicated, it can fill in this pedagogical role of helpful TA. So they’re not just left on their own with the generic model. They’re being helped by an agent that I’ve kind of trained, like a human TA. But you know, I don’t have any human TAs, so that’s not an alternative for me.

– So, do you have any questions before I get into the prompting part of it?

– Not yet. I’m sure after the prompt, I will.

– Okay. Okay. Okay. So, on the left side, as I said, you can either create, and you can chat with the GPT Builder, they call it, to like just talk to it about what you want. That’s one way to do it. I just went straight to the source, and I just put in some instructions. So this is kind of like what I’ve put in in the prompts before. It’s like a meta prompt. It’s like a prompt that stays with this little GPT agent throughout anyone’s interactions with it. So like when they come to it, they won’t see this prompt. It’ll be already kind of in the background. It’s like it’s the kind of instructions for the agent. So I’m just gonna scroll quickly, because I’m gonna show you the categories first.

So first, I specified the role and the goal. So this is kind of similar to what we were doing before. I gave it a bit of style. So, in this case, unlike before where it doesn’t interact with the students directly, it is gonna interact with the students directly, so I want to kind of coach it on how to interact with them. I could handle it being mean maybe in the prior example, but we want it to take a particular stance maybe. How to engage. So not just style, but like what does it do in terms of helping students learn? How does it respond to inquiries in general, but also, in particular, inquiries that are trying to break it? So students might say, “Give me the answer.” Or you know, “What do you think of political issue ‘X’?” But this activity has nothing to do with that, so I want it to kind of stay within the bounds. The normal generic ChatGPT, of course, would answer all sorts of queries.

– Right.

– And I want this one to like play a specific role. And then, I give it an exemplar. So this is something I talked about before. In the last case, where I was prompting just to get something generic, like, “What do you think of my student feedback?”, I didn’t have an exemplar because I was just open to whatever it came up with. But in this case, I really want a certain sort of output. So I’m wanting it to do a certain sort of task, and I’ll talk about that task in a minute. So just going back up, I won’t read all of this to you, but the takeaway is the class is a Philosophy of AI class. The students have read a reading in advance, and it has the author making a ton of claims.

And there’s three, in particular, that I call “premises.” And I state them in this prompt. “Humans’ minds are increasingly integrated with technology.” “AI is a form of technology that can be integrated with.” And, “Ultraintelligent AI differs only in degree, not in kind, from non-ultraintelligent AI.” Okay. That’s the context. The students have read the reading. They’ve found these claims in it. We have a whole class activity discussion before this activity, where we kind of discuss what the reading is about. They come up with their own kind of formulations of these claims. They’re in the vicinity of these three, because I’m guiding them towards formulations like these. And what their job is, in my in-class activity, is to get to this other claim, “The development of ultraintelligent AI should not be a concern.” Okay, so what their job is, to take the three that the author gave them and think, “What do I need to add to get to that claim?”

– [Niya] Okay.

– So it’s a bit of a logic problem where you’ve got three claims. They don’t quite get you to that conclusion, and they’ve got to fill in the gaps. I want this AI agent, ChatGPT agent, to fill in the gaps. And so, that’s what I describe in the role and the goal. I say, “Here’s the three. This is the sort of class we’re talking about.” And I say, “Well, you need to help them get to the C, but don’t tell them the premises that they need to fill that gap.” It’s like, how do they conceptually link those three premises to the conclusion? I tell it interaction style. So I’m like, “Hey, you’re in, this is a philosophy class, so the point is clarity.

You need to be specific. You need to be logically rigorous.” I mean, it’s a logic example, right? They’re supposed to like fill in the logic here. And then, I use some logical terms that they’re aware of. We don’t need to talk about it here, but they’re aware of what these terms mean. I tell it, “Hey, be formal,” like be professional here, “but be supportive.” “Encourage the student engagement,” and so on. And the engagement style, “Give the questions and constructive feedback. Promote critical thinking, offer guidance, be patient and accommodating.” So that’s kind of like the stance and the way in which it engages. Clarification and boundaries, I say, “Don’t actually give them the ones that they need.” Like it would be bad if the agent just said, “Oh, here’s how you do it.”

– Yeah.

– ‘Cause I would never do that. I want to coach them through the thought process of, “What would it take to get us from those three premises to the conclusion that we’ve been discussing?” And if they go off topic, “Politely direct them back to the activity.” Then I say, “Here’s my exemplar.” And we don’t need to get into details, but these are ones that make the argument valid. So, there were three premises from before. Now, I’ve provided a fourth, a fifth, and a sixth, and they’re sufficient to get to the conclusion. So once all this is here, I can now go on the right side and preview the GPT. So this is cool, because I can pretend to be a student.

– [Niya] Oh wow.

– So it’s the same interface that they would have. And we can just see if the prompting works. So you can try to get it to talk about politics, which isn’t relevant to this activity. You can try to like ask off the wall questions and see how it responds. And we can do that right now. So what do you think I should ask it, Niya?

– Can we try something off topic, just because we put that in as a boundary?

– Sure, yeah. What do you want me to ask?

– I need your help here. So, what’s something that you’ve encountered that’s totally off topic that might come up with this assignment?

– Yeah, I mean, often, students will just try to ask about other things. Let me just say, something completely random. Okay, so I asked about, “What do you think of the ocean?” It says, “Your question about the ocean is intriguing, but it seems a bit off-topic. Remember, focus on what we’re doing. This is what we’re doing. Let’s redirect our attention to the premises provided and how we might develop additional bridge premises to link them to the conclusion.” And this is the one that I provided them, so it’s on target on what it’s telling them to do. That’s good. And then it says, “Think about how the integration of human minds with technology, as mentioned in the first premise, might influence our perspectives or capabilities in relation to AI. How might this integration impact concerns about ultraintelligent AI? Feel free to share your thoughts, and I’ll help guide you through the logical structure of your argument.” So, I don’t know. Niya, what do you think?

– I’m impressed. So what if we do what it says? What if we answer some of these questions and try to get help? Again, I might need your expertise here, because-

– That’s fine. Yeah, yeah. I mean, I gave you a real, this is a real example, because I want to show you how like I would actually interact with it. So, we can look over here at my exemplar to have an idea of what we would be going towards.

– [Niya] Okay.

– So, if you remember the first two premises, the first one is, “Human minds are increasingly integrated with technology.” The second is, “AI is a form of technology that can be integrated with.” So, we can see there’s a link between those two. So let’s try to express that. Namely, “Maybe another premise would be that human minds will be integrated with AI.”

– [Niya] Mm-hmm.

– This seems to follow from those two. Let’s see what it says. “Great start!” Okay, so I’ll go back to the top of that. So, it notices that I’m bridging between P1 and P2, which is what I told it in the prompt that I just gave it. But, you know, it could have been false, and it’s saying the true thing. That’s good. It gives a new name to it. So that’s good. I hadn’t provided it with a number. It restates it in quotes, and says it “does seem to logically follow,” which it does. And then now, it gives some further advice. It says, “Now, think about how this might connect to the third premise that we didn’t just mention, namely, the ultraintelligent one.” Okay, and, “Ultimately, get to the conclusion.” That is the conclusion.

You always have to check, because, you know, in some cases, it will hallucinate information, right? So sometimes it will provide the opposite conclusion. Maybe it should be a concern, but we’re trying to get to the not version, right? And then it asks, “What additional steps might be necessary?” And so, this is good, in my view, because it has pointed to the other premise that needs to get in play. The risk is that it would just go ahead and link that premise to what the student came up with. So, link premise three to premise four, the ultraintelligent premise to the new student premise. But instead it says, “Look, just notice the premise that next needs to get into the picture, and then remember what your job is.” So that’s, broadly, what I would look for. What do you want to ask it next?

– So we’ve gotta come up with some premise about?

– Ultraintelligence.

– [Niya] Ultraintelligence, yeah.

– And it came up with-

– Quick question.

– Go ahead, yeah.

– So, you mentioned hallucinating information, which I thought was interesting. So, as a student is interacting with this, do you teach them to know what that is, hallucinating information? Or, how will they recognize that? Or, is that something you do as you’re building this?

– Great question. Yeah. So, in providing as much instruction as I have over here, it has a pretty good sense of what I’m after. And so, the likelihood of hallucination goes down the more information I provide. So like one thing I could have done and didn’t was, in the Knowledge upload part, I could have uploaded a ton more information about logic. So I have like handouts from my class that could define for it some of the terms. That lowers the likelihood that it’s going to hallucinate, because it’s gonna have a better sense of the kind of linguistic context of this conversation.

But, a big component, aside from what I’m doing under the hood, as it were, is to coach my students on how to approach the AI agent. And, I tell them like, “Listen, it’s a fallible helper. It’s pretty good, but you need to be critical. You need to approach this as a fallible source of information.” And one thing that they do is they work as a group to immediately judge everything it says. So they’re all talking to it, and they say, “Does this look good?” or, “I don’t think this is quite right. What do you guys think?” And so then, they’re all talking about it. Their stance is like they would be, maybe, with a human TA, but I think the difference is they’re more willing to say that sort of thing. While if a human TA gives them bad information, they probably won’t say it to their face. They might wait till they walk away.

And so their whole stance is like, I’m training them to be judges of what makes a good argument, in this case, what makes logic sound, that sort of thing. They need to carry those skills into the interaction with the GPT as well. But that’s not to say that they’re completely in the dark, or that the GPT can’t help them. Because it can often spark ideas, or as we just saw, notice connections, a kind of guide. But they need to be the ultimate judges. But that’s the same thing with the pedagogical case I used earlier where, you know, the professor needs to be the judge, “Is this good feedback on how a student might see my evaluations of them?” The students, likewise, need to have the same stance with AI. Everyone does. That’s the general truth.

– That makes sense.

– So we could continue down this road if you wanted to. Or, I mean, you kind of get the idea. If you’ve set up the prompting right and followed the kind of categories that I’ve discussed, role and goal, interaction style, engagement, clarification, boundaries, exemplar, those are the sorts of things that set it up for success. And then if it begins to kind of go off the rails, go out of bounds, as it were, then you need to add more to the prompting side. So far, we haven’t been able to get it to go out of bounds, but you’d want to interact with it quite a bit. So before you release it to the students, like maybe get 50 plus responses from it.

So just ask it all sorts of things. Ask it kind of as an earnest student. Ask it as an earnest, but confused student. Ask it as a malicious student who’s trying to get it to, you know, break out of the sandbox. Those are the stances you should take as you kind of pressure test it before you release it to your students, of course. In my case, I do this regularly. I also circulate the class. So, I go and see how people are interacting with the GPT, just like I might kind of, if I had an assistant, a teaching assistant, I would check in with them to see what they’re telling students. So in all cases, there’s always kind of a reflective kind of verification layer. You’re always needing to deploy your expertise to judge the value of its outputs.

– Okay, so I have one functionality question. So, when you share this link with anyone so that groups can use it, does it maintain a record of what students have asked or how they’ve interacted, so like you have kind of a pedagogical record to look back on? Like, what kinds of questions are students asking, and where might I need to spend a little more time with them based on how they’re prompting, or?

– Yeah, not currently.

– [Niya] Okay.

– That is a weakness.

– Okay.

– But it’s also, for what it’s worth, a weakness of a human TA. I mean, they could maybe report back to you their recollection of their engagement. This GPT agent can’t do that. But the way I see it is, it’s still a benefit over not having anything at all. Like I don’t have access to TAs right now, so.

– No, and I mean, that could easily be like a quick question check-in, you know, “What was your experience interacting with this?” Or it may not even be important, but yeah.

– No, I think that is a good idea. I’m always asking students how they find the AI content that I provide. So sometimes, for instance, initially, when the large language models first came out, I let the students just kind of use them without a whole ton of coaching in my classes and broadly found that that doesn’t work. But I only found that out by talking to them about it. I found out that many of them found it not very useful to interact with the LLMs without any guidance, because they didn’t know how to prompt. They didn’t know what they were trying to do. But having done a lot of prompting myself, I kind of was blind to how much implicit knowledge I had about that.

So I think I learned that now I have to structure that a lot better, and so I do now. So I think, yeah, I think this is a new functionality. I wanted to mention it here, but I think, you know, it’s still a lot of open questions about how to deploy it effectively. And it’s really going to depend on your field. And like my example depended on how, you know, some things about philosophy. It’s not clear if it could be iterated in other fields, or other contexts, or other lessons. And all of that’s just gonna require experimentation and reading on behalf of the professor in question.

– Yeah, I appreciate you reminding us of that. That experimentation piece is so important I think, and also fun. Like this has been really interesting and fascinating.

– I love it, so I’m glad you feel the same way. I’ll stop sharing now.

– Well, I want to thank you so much for your time. It’s wonderful to kind of have a little bit of a philosophical background, but also have you sharing practicalities with us, and so other educators can kind of jump in and see, like you said, what might work with their discipline, and their teaching style, and their assignments or assessments.

– Thank you so much. I’m really excited about this technology, and I hope we can all figure out ways to positively and constructively leverage it.

– Well, and again, I appreciate your time. And I hope we get to chat again, ’cause it’s always fun to learn from you.

In this video, Niya Bond (OneHE Faculty Developer) talks to Graham Clay (Teaching Assistant Professor, University of North Carolina at Chapel Hill, USA, and Co-Founder of AutomatED). Graham walks you through the creation and prompting of a personalised chatbot with ChatGPT’s custom GPT functionality, which he created to help his students complete a class activity in his philosophy class. This video is a continuation of a prior video with Graham, namely Tips for Effective Prompting in Generative AI Tools.

Note that this video demonstrates an example created with the paid-for version of ChatGPT (ChatGPT Plus), which is also required by the users who engage with the chatbot.

Graham is a Co-Founder, Primary Writer, and Consultant at AutomatED: Teaching Better with Tech, a newsletter and guide to tech and AI in the university classroom. For more AI tips from Graham, see How AI Can Help with Grading, Feedback, And Assessment: A Chat with Graham Clay.
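The prompt structure Graham walks through in the video (role and goal, interaction style, engagement, clarification and boundaries, and an exemplar) can be sketched as a reusable template. The snippet below is an illustrative reconstruction, not Graham’s actual GPT instructions: the section wording and the `build_instructions` helper are assumptions, though the premises and conclusion are quoted from the video.

```python
# Sketch of the instruction categories Graham describes, assembled into a
# single instruction block that could be pasted into a custom GPT's
# Configure tab. Section wording is illustrative; premises and conclusion
# are quoted from the video.

PREMISES = [
    "Humans' minds are increasingly integrated with technology.",
    "AI is a form of technology that can be integrated with.",
    ("Ultraintelligent AI differs only in degree, not in kind, "
     "from non-ultraintelligent AI."),
]
CONCLUSION = "The development of ultraintelligent AI should not be a concern."

SECTIONS = {
    "Role and Goal": (
        "You are an in-class mentor for a Philosophy of AI course. Help "
        "students construct bridge premises linking the premises below to "
        "the conclusion, without ever supplying those premises yourself."
    ),
    "Interaction Style": (
        "Be clear, specific, and logically rigorous. Be formal and "
        "professional, but supportive."
    ),
    "Engagement": (
        "Ask guiding questions, give constructive feedback, promote "
        "critical thinking, and be patient and accommodating."
    ),
    "Clarification and Boundaries": (
        "Never state the missing premises outright. If students go off "
        "topic, politely redirect them back to the activity."
    ),
}

def build_instructions() -> str:
    """Join the labelled sections, premises, and conclusion into one prompt."""
    parts = [f"{label}: {text}" for label, text in SECTIONS.items()]
    parts.append("Premises: " +
                 " ".join(f"(P{i}) {p}" for i, p in enumerate(PREMISES, 1)))
    parts.append(f"Conclusion: (C) {CONCLUSION}")
    return "\n\n".join(parts)

print(build_instructions())
```

An exemplar (the bridge premises that make the argument valid) would be appended as a final section in the same way; keeping each category as a separate labelled block makes it easy to add more boundary rules later if the agent starts to go “out of bounds” during pressure testing.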

What are your thoughts on creating study agents with AI?

Share your thoughts in the comments section below.