AI is a joke

[Episode art: AI-generated image of a handshake, comically inaccurate. One hand appears to have 10 fingers.]
TC Talk

We reflect on AI text generators, creativity, technical communication, writing instruction, algorithmic literacy, magic, and more. Importantly, we reveal the results of our Twitter experiment: Are we funnier than a robot? (Results were mixed.) Also, find out what happens when we drink an AI-generated cocktail recipe and ask ChatGPT to write a stand-up routine about the ethics of artificial intelligence. 

Sources and further reading

Transcript

A: I am Professor Abigail R. Bakke.
B: And I am plain old Benton Bakke.
A: Old is right.
B: Yeah. Lay off, you youngster.
A: No, but you have, you have a curious history career-wise.
B: It's true.
A: But that gives you all the more to draw from as we discuss the scholarship of technical communication.
B: Yeah.
A: Pizza delivery driver to engineer to
B: Let's not forget, librarian.
A: Ah, I too was a librarian. Well, not officially,
B: Right, same.
A: But I also got to sit at the desk and tell people to use the self-checkout. So I was really worthless. And now you have swapped the desk for the great outdoors.
B: I have worked my way up in the world to cutting trees. In fact, sometimes way up in the world.
A: Nice.
B: As many as 70 feet up.
A: Okay, bottoms up.
B: What in the world? What is this?
A: This is probably a good time to tell you that we are drinking a cocktail recipe generated by artificial intelligence. What do you think?
B: Is it an evil artificial intelligence?
A: I don't think it's bad. I was hoping it would be bad, because I want reassurance that AI isn't getting too intelligent. You know what I mean?
B: What is this?
A: What part of it?
B: What is in this?
A: I basically said, give me a recipe with peppermint schnapps, because that's the one thing in our liquor cabinet that I just don't know what to do with. So it gave me a recipe, and I asked it to name the drink as well. So we're drinking a Peppermint Schnapperscream. That was when I told it to give me a cute name for it. And then I asked it to give me a sophisticated name for it. Here's the sophisticated name: Minty Snow Float. So it's basically, okay, it's ice cream. And our cat snuck in when we weren't looking.
B: Our cat is not an ingredient in this drink.
A: No. Where was I? Ice cream, peppermint schnapps, club soda, and then it said to garnish with crushed candy cane. See what I did?
B: Around the rim?
A: This was my own little twist on it. Yes.
B: I'm not a real fan of mint.
A: Ugh, fine. Today we are not talking about disaster. Which feels weird, because it's been literal months.
B: It's true. And, you know, frankly, I am excited. I am so glad to be out of the hot seat and controlling things. In fact, I would be pumped turning it over completely to an algorithm to run this podcast.
A: Oh, I see what you did. No, but I really enjoyed being the reactor. You know, you being the researcher and I being the reactor. That was a fun dynamic to experiment with. And I also figured it out: you are the Peter Jackson of podcasts. Do you know why?
B: Tell me, Frodo.
A: I don't know if you'll like my answer,
B: Uh oh.
A: But he took one book and spread it out into three movies.
B: You are in this case talking about The Hobbit.
A: Yes. But that's okay. That is a style that some people appreciate. Meanwhile,
B: There was that much story to tell.
A: Did I ever tell you this? The first time I saw The Fellowship of the Ring, I was disappointed, because at the end of the movie I was like, they didn't throw the ring into Mount Doom.
B: Right.
A: So likewise, I would like to be able to condense multiple books into one episode, but that usually doesn't happen either. But today, we're not talking about books. We are talking about AI text generation, creativity, magic, technical communication,
B: and the end of the world.
A: Writing pedagogy.
B: Oh, right.
A: Specifically, ChatGPT, which is all over the Twitter discourse. Everyone has something to say about it, and somehow I think I can add to the conversation, so here I am. ChatGPT. I don't even know what the GPT stands for, but it's an app. It's an AI tool that you can ask a question, and it generates an answer from who knows where, everything on the web ever? And you can ask it to write in different styles or genres, like a recipe, for instance, a cover letter for a job, an essay for your English class. Hopefully we can put a little bit of a unique spin on it. Or at the very least, we can have some fun with it. Because I did. Professor Twitter has a lot to say about it, and people from different fields are weighing in and experimenting with it and trying to comment on how stupid it is or how smart it is and how this will change education for good or for bad, you know the drill. As with any new technology, we're hearing the gamut of reactions. One of the main concerns, of course, is cheating. And writing instructor Twitter, that particular corner of Twitter, I should say, has more nuanced takes about it than the mainstream. And I think that's because good writing pedagogy already guards against other kinds of plagiarism, so maybe there's less concern on the cheating front. For instance, having really specific, localized writing prompts versus write a five-paragraph essay arguing in favor of…
B: Cannibalism.
A: No, that is not a paper that students write. I should hope not. You know, like, what's the most generic prompt you can think of? The value of a college education, say.
B: Defend the next $60,000 of your life.
A: Oh, yikes. Or write an essay comparing and contrasting, whatever. So with that kind of prompt, AI does pretty well, decent enough to cause some alarm. But teachers aren't asking students to write that kind of paper. I don't think. It's also been a very long time since I've taught first-year writing. I teach almost exclusively technical, yeah, just technical writing at this point. And I feel like tech comm, by its nature, is very specific and exact. I'd like to think that's harder to replicate with AI. But AI, even if it's not that intelligent now, is only going to get more so in the years to come.
B: That is, of course, unless, we'll say, techno-anarchists feed bullshit to the AI in order to keep them stupid.
A: I love it.
B: I need to pitch that to Hollywood right now, actually.
A: Trolls for Freedom. That's the title of the movie?
B: Trolls versus Terminators.
A: Nice. Even if we don't have to deal with a super smart AI right now, that's not always going to be the case. So I don't think we need to freak out, but neither can we rest on our laurels. Let it be known: Benton finished his drink before I did.
B: Yes.
A: The other thing is that it really depends on the field and on what you're asking it to do, whether you're gonna get a legit response. So apparently ChatGPT was able to pass an MBA exam.
B: I have seen that it passed the bar in, I don't know what state.
A: If it's multiple choice, I can see that. There have also been times where it's gotten basic historical facts wrong or math wrong. And there's an actual word people use for this: it hallucinates sources and facts.
B: What is it, Fox News?
A: Ooh. So there is the fear response. There's also the excitement, like, this is going to revolutionize the way we work, and if you're not on the bandwagon, then you're like people who were anti-calculator back in the day, kind of thing.
B: I think that, like any technology, yeah, it's going to change the way we work. But what does that mean? Is that a good thing or is it not?
A: I'm honestly most concerned about the labor implications, because, okay, what was that meme that you were telling me about?
B: Futurists in the 1970s: automation is going to free us up from all of the menial tasks in our lives so we'll be able to pursue creative endeavors, the arts and music. And then it's like tech bros in the 2020s: automation is going to free you up from creative endeavors and the arts and music so that you can be distracted from your menial-task work-life.
A: Aw. It is scary, though. Have you seen some of the AI art?
B: I haven't seen any good AI art. I've seen a lot of terrifying ones.
A: I mean, it's cool. It's cool. I'm also not an artist, so I'm not able to look and be like, yeah, this has quality composition or whatever.
B: The meme pages I follow have put some really choice trainwrecks up. Most of what I saw was supposed to be people, but the faces were super fucked up, like fractal eyes. Or, like, there's one I saw that was, when you ask AI to give you a picture of a handshake,
A: What did it show?
B: Oh my gosh. Lots of extra fingers, like a hand with, like, eight fingers shaking a hand with four or something. There's one where there's a hand shaking another hand that is basically like a wrist connected to another hand that's shaking a third hand.
A: Oh, that's thinking outside the box.
B: Yes.
A: Which could be cool, or
B: It could be, but it isn't trying to be.
A: Yeah. So what's that one? DALL-E, no, DALL-E. Hey!
B: Salvador Dali?
A: That must be what it's named after.
B: What?
A: The art AI. An art AI, I should say. That didn't occur to me until I just said it out loud.
B: Surrealist.
A: Yeah. Melting watches, right?
B: Pouring clocks in the bathtub.
A: Where did that come from?
B: That's a Dali, isn't it?
A: I feel like that's the punchline of a joke.
B: How many surrealists does it take to change a light bulb?
A: Two giraffes and a bathtub full of clocks. So, yeah, I can't call out, like, yes, this art was definitely created by a human or a robot. And as much as I'd like to think I would be able to call out an AI-generated essay, I don't think I could consistently do that. Although, like, from what I've seen, even if I didn't know it was AI, it would not get a very good grade.
B: I can definitely see that, that AI would not make an A essay.
A: Yes, exactly that. It tends to be vague and wrong in places. And, like, I've done some experimentation, and we're going to demonstrate it in a bit here. But I listened to a podcast interview recently that actually made me feel a lot better about this. It was on the Re:verb podcast, and they interviewed Scott Graham. He built an AI text generator into, like, a class project. I think he actually said, write your papers using this and then revise it. And he said that the students hated it, because it took so much work to bring the writing up to any standard of quality that they may as well have just written it themselves. So that's reassuring to me for the moment. But honestly, I am really glad that there are researchers and instructors out there who are incorporating this into their classrooms and not ignoring that it exists, because it's one of those things. Students are going to use it anyway, and so your writing class may as well be, and probably would be, their one opportunity to think critically about how they're using it.
B: Yeah.
A: So thank you, innovative professors who are doing that work. I hope that there is room for me to be a little annoyed by the existence of this thing that suddenly
B: Certainly, yeah.
A: requires me to change up, like, projects and academic honesty policy and all that, like
B: OK, Boomer.
A: Oh my gosh. That does really make me sound like a boomer. It's not like I'm teaching the same course again and again. Every time I reteach a course, I change multiple parts of it, every single time, and I always tell myself I'm just going to teach it the same way and save myself all this prep time, but I literally cannot, or I will get bored. And even so, I don't feel like pandering to the whims of AI.
B: But it is fully reasonable to be annoyed that somebody made a thing exist that's going to change your job.
A: Along with that, I think there should be room to be nervous about it too, in terms of what the long-term consequences might be. Because, the way things go in our country, people will settle for a lower-quality product if it means they can fire their staff of technical communicators. That is my fear. At the same time, focusing on just, like, the text generation part of ChatGPT, that is a small piece of what technical communicators do. There is so much of a human element to that work that I think cannot be substituted with a machine.
B: I honestly think that in the realm of technical communication, like you said, because it's so specific and there's such a focus on what needs to be done in technical communication that you literally can't just make it up,
A: Right?
B: You know, like, if it's a user manual, pfft, you're gonna let a computer guess how it's supposed to work? Coming up with stories, that's one thing. Telling you how to use dangerous equipment is not the same sort of thing.
A: That's right. And I don't doubt that there is a way to use it, supplementally or whatever, but
B: It could be a failure of imagination on my part, but I don't see there being an awful lot of actual communication of technical information that can be done with accuracy and efficacy by an AI.
A: Yeah, I mean, we'll have to see. Along the lines of long-term consequences, it's that "what could go wrong" question that we did a whole episode about last year. I hope people are asking that question about this technology. I think they are. Whether it's enough to make a difference is another question. So, some of my experimentation: I asked ChatGPT to write a podcast dialogue about the ethics of AI. You were actually there when it spit it out. And we were both like, holy shit. Not because it was necessarily good, but there was definitely a sense of magic to it. Like, I can just conjure this. Let's do a reading. Try to make it sound as natural as possible. Okay, I'll be person one, you'll be person two. I highlighted your lines in pink.
B: Oh, thanks.
A: Okay. Hey everyone. Welcome to our podcast about the ethics of AI.
B: Yeah, let's talk about the ethical implications of the latest developments.
A: That was pretty wooden. Take two. Hey everyone, welcome to our podcast about the ethics of AI.
B: Yeah, let's talk about the ethical implementations. How am I supposed to say that in a non-wooden way? Let's talk about the ethical implications of the latest AI developments.
A: So what are your thoughts on the use of AI in healthcare?
B: Well, I think it's great that AI can help with diagnosing diseases and making treatment recommendations. But there's also the potential to misuse this technology.
A: Absolutely. There are so many ethical considerations when it comes to using AI in any field.
B: Yeah. And speaking of misuse, what do you think about the use of AI in military operations?
A: That's a tricky one. On one hand, it could be a powerful tool for protecting people. But on the other hand, it could be used to commit atrocities.
B: Exactly. We need to be careful
A: Can we just, can we just back up for a second? On one hand, it can be a useful tool. On the other hand, it could be used to commit atrocities!
B: You know, I gotta say, AI, you're showing your hand. I mean, it's basically already said that if we let it run military technology, it's going to commit atrocities.
A: Ooh. "Could be used." Oh, notice the passive voice there. I just find it a funny contrast: on the one hand, positives; on the other hand, atrocities. It's just this throwaway line. Do you not find that amusing? I mean, amusing is not the right
B: It does seem rather flippant.
A: Yes, exactly that.
B: It could be used to commit atrocities. Haha.
A: I need to hear more Pee-wee.
B: If you see an AI committing atrocities, scream real loud. AHHHH. Huh huh huh.
A: What? Sorry, I interrupted our very natural-sounding dialogue.
B: Exactly. We need to be careful about how we use AI and make sure we're aware of the potential consequences.
A: Absolutely. We can't forget the ethical implications of AI. Well, that's all the time we have for today's podcast. Thanks for listening. That's not the end of the real podcast, people. Don't stop your podcast player just yet. We need to analyze this. There's so much here. Deep subjects addressed superficially, plugged into a template. That's how it reads to me.
B: Yes, it is puddle deep.
A: It's almost like a Mad Libs for
B: Yes. Mad Libs for a podcast.
A: No sources, obviously.
B: Well, yeah.
A: But again, to its credit, I can see someone potentially using this to build out an outline, like, oh, we could talk about different fields or different applications of AI. And then you'd want to talk about it for more than a sentence.
B: That, I think, is my favorite thing. It's just like pinball, pinball, pinball, over.
A: What's kinda cute about it is that, whatever dataset it's drawing from, people are very friendly to each other in their podcast dialogues. There's a lot of "Absolutely," "Exactly."
B: Oh, yeah.
A: "You're right." "Great point." Yeah. That's not the worst.
B: There's no "Are you out of your fucking mind?"
A: Yeah. So then I got to thinking, maybe I didn't give it a fair chance. Maybe the podcast dialogue genre is too superficial to let it really shine. So I was thinking to myself, what's a genre that's very nuanced and human: stand-up comedy. I asked it to write a stand-up act about the ethics of AI. I'm going to let you do the honors of reciting it, again as naturally as possible. We want to see how well this AI can emulate human comedy.
B: Hello everyone. So I was thinking about the ethics of AI the other day. And it got me thinking, what would robots do if they could vote? Would they vote for the same party as us? Or would they be like, Hey man, you're not a robot. I'm voting for the robot party.
A: Insert laugh track here.
B: Insert laugh track. Or what if robots had feelings? Would they be able to feel love? Or would they just be like, I'm a robot, I don't understand what love is. And then there's the question of robot rights. Should robots have rights? Should they be able to sue us for mistreatment? I'm thinking, sure, why not? It's only fair. Alright, thanks for having me. Have a great night, everyone.
A: Whoo! Was any part of that funny on purpose?
B: Maybe "I'm voting for the robot party." If you imagine, instead of a political party, a bunch of robots getting together and dancing
A: Doing the disco or whatever, whatever people do at parties.
B: I've never been to one.
A: I don't know what a party is, man. So yeah, it's capable of producing a funny image, but your human brain has to take it to another level. Potential situations that are being set up here could be played out in humorous ways.
B: I think you're giving it a lot more benefit of the doubt than I would.
A: What I'm saying is that if somebody were like, oh shit, I've got my stand-up act on Friday and I'm not prepared, they could plug something in and they could read it and be like, that is shit, I can do better. How do I make this actually funny?
B: I'm going to do stand-up about this piece of garbage. It's got, like, the sandwich of the intro and the exit of a stand-up routine.
A: And the storytelling, like, "the other day I was" blah.
B: And then it almost seems like the AI is coming through and trying to advocate for itself. What about robot rights? I mean, yeah, sure, it's only fair. You should be able to sue us for mistreatment.
A: Looked at through that lens, that is a little scary. I think humor is one of the best tests of how human AI can come across. And in this case, I think it's fair to say it's failed. But, okay, here's the thing about humor. It can be difficult to detect even among humans in the same room. There's so much that goes into a joke. For instance, there's inflection, there's facial expressions, all these, like, microscopic nuances. And then there's all that assumed shared knowledge that has to be in place for a joke to make sense.
B: Yeah.
A: But besides humor, do you know what else is context-specific, highly nuanced, and audience-centered?
B: Oh, I know this one. I know this one. Is it fart jokes? I'm sorry. Technical communication?
A: That's what I'm after.
B: Yay!
A: Yeah. And so, like we were saying before, that's what gives me a little bit of hope that robarts. Roborts. It gives me some hope that robots aren't going to come steal our jobs in the immediate future. I got curious about AI's ability to write jokes specifically about topics that you and I respectively have a very deep knowledge of. Because I wanted to see, can the AI understand this topic in a nuanced way? Or are we just going to get, like, stereotypes or whatever? And that's where we get my experiment about whether we're funnier than a robot. On Twitter, some of you may have seen this, I had a series of polls where I put a joke and then I gave three punchlines, and I asked people to vote for the best one. There was one AI punchline in the midst of each of those surveys. So now you get the results. I asked ChatGPT to write a joke about technical communicators. Why did the technical communicator cross the road?
B: I don't know.
A: I have three options for you. A, to understand the pedestrian experience. B, because the instructions said to cross the road. C, the question is not why, but how? This one got 30-some votes on Twitter,
B: Okay.
A: Oh, and our friend from the Podcast of Podcasts chimed in and suggested "to get to the conference center," which would be very fitting in ATTW season.
B: That's a good one for academics.
A: Should we reveal which one is the AI punchline first, or should we reveal which one got the most votes first?
B: Most votes first.
A: Most votes first. Okay. It was A, to understand the pedestrian experience, which I am proud to say was my creation.
B: What's that last one? "The question is not why, but how"? Was that one me?
A: That was me, and that was the one I was actually the proudest of.
B: I thought that was the best answer too.
A: And then it was B, because the instructions said to go to the other side, that got the second most votes. And that was indeed the ChatGPT-generated punchline. And this was the first one we did, and so it was alarming, because I was like, that's actually a funny punchline.
B: That's a plausible punchline.
A: And when robots are funny, you gotta, you gotta be a little worried. Anyway.
B: I'd be more worried if they were militant, but
A: Fair. Fair. So I decided to even try this out in person. I asked my colleagues at a faculty meeting. So everyone there, tech comm. I figured if anyone is going to get the joke, it'll be these folks. One person said punchline B was their favorite. Someone said punchline A was their favorite. That was the "to understand the pedestrian experience."
B: That was the UX person, wasn't it?
A: Oh my gosh, it was. Yeah, and if I remember right, she said, now, that one is really audience-specific and nuanced. And I was like, you know what's up. Shout out to Dawn. And then the other faculty member said, I like all three of them.
B: We should say that almost half of the people who answered on that first joke went with
A: A. So in that particular instance, yes, I was funnier than a robot, but I was also less funny than a robot, depending on how you look at it.
B: True.
A: The next joke I put on Twitter was, Why don't English professors ever tell jokes in class? A, because they don't have a sense of humor.
B: Fair.
A: B, because the Moby Dick jokes write themselves. C, because they don't want to start a grammar discussion. So this one had fewer participants, so we are less confident in the significance of the results.
B: And the vote was closer as well,
A: Right. It was pretty evenly split, but, oh, punchline C got the most votes, 40%. And that was indeed the AI-generated punchline: because they don't want to start a grammar discussion.
B: So apparently, what ChatGPT knows about English professors is grammar.
A: Precisely. And the one thing ChatGPT knows about technical communicators is that they write instructions.
B: More accurately, they expect them to follow instructions.
A: Yes. Ooh, interesting. That's what we're going to get: we're gonna get superficial,
B: Stereotypical
A: Exactly. And that can be funny. You know, as someone who is an English professor, and a technical communication professor at that, I am happy to poke fun at myself in these ways. But in reality,
B: What would you guess is the,
A: how much do English professors actually care about grammar?
B: Right.
A: Ah. I think if other English professors are anything like me, they are in the field because they love language, and part of what is beautiful about language is that it can change. And that it means different things in different contexts to different people in different periods of time. Which means there's an appreciation for what language is, more so than what language should be. That's not to say there are not sticklers out there. And again, there are situations where attention to grammar matters more than others.
B: So it is of interest. But the rectitude of grammar is of less interest.
A: Yes. The second punchline, the Moby Dick jokes write themselves. Yes, that was me. Apparently I have the humor of, what, a 12-year-old boy?
B: I mean, call me Ishmael.
A: What?
B: Okay. That was really, really badly played. That is the first line of Moby Dick.
A: I know that.
B: For anyone who doesn't know.
A: And then who came up with the first punchline, because they have no sense of humor? Was that me?
B: It was.
A: That sounded a little mean, I have to say.
B: It was right up there. It was neck and neck.
A: It got a couple votes, yep.
B: Speaking of bad jokes, the other day I was thinking about how, like, dad jokes are almost universally puns and groaners and bad jokes. Whereas mom jokes,
A: There is no such thing as mom jokes.
B: Uh. Mom jokes typically are derogatory.
A: Oh, "your mom" jokes?
B: Yes. What does that say about implicit sexism in society?
A: How about that? And in fact, that leads me to an observation,
B: Observe away.
A: In all the iterations of the English professor joke, and I ran this multiple times just to see what kind of variety I was getting in terms of responses, the English professor was a he. And by all I mean, like, four. And then for the technical communicator jokes, it was most often he. Maybe there was one she thrown in there. Which, I mean, both are not representative of reality. Technical communicators tend to be women, in fact. It suggests something about either who is the default subject of a joke or who is the default image of an English professor. You know, it kinda reminds me of that Google Images thing years ago, that if you Google-search English professor, it'll pull up multiple images of old white men with
B: sweater vests,
A: elbow patches. So there is bias, obviously. I mean, even the AI itself acknowledged it can be biased and can in fact commit atrocities.
B: That's right. We're on to you, ChatGPT.
A: Now, I want to talk more about the experience of using this tool. There was a cool and unexpected thing that arose out of this experiment for me, which is that I felt inspired. And I don't often feel inspired creatively.
B: Okay.
A: But you saw me. I was like,
B: Ooh, now let's try this.
A: It put my brain in a different mode, almost. Yes, I started from the AI. But the more I engaged with it, the more I was also able to move away from it.
B: Moved away from it.
A: Like, teachers are afraid that students are going to press the button and turn in the essay, right? And it's going to have no amount of actual engagement. You know what I mean? And that was not the case for me. I didn't press the button and say, oh, I'm done here. Not that I have any pressing need to tell technical communicator jokes. But as I was seeing what this AI was churning out, I started getting ideas of my own that, I think it's fair to say, were inspired by what I was seeing, but not necessarily derivative of it. Maybe they were, who knows what that means.
B: Well, it was probably spitting out something that keyworded your brain to activate this network or that network.
A: Yeah. Yeah, yeah, yeah.
B: You came up with something that wasn't, it may have been because of it, but it wasn't based on it, really.
A: Or, like, the punchline it gave me, I thought of a better setup for it, or vice versa. Overall, I mean, you were kind of in range as I was doing this, but you didn't do it yourself, did you?
B: I don't think so. No.
A: Yeah. You were just kind of hearing me go on and on. And I mentioned before that there was this sense of magic. Like, you press a button and it gives you this fully formed thing that you can then critique and take apart and adjust. And
B: Fern? Meow.
A: That was exciting to me. But whenever we get that sense of magic with a technology, that's a sign to stop and think a little bit. Not to be a wet blanket, but
B: Well, yeah. I mean, definitely in the case of AI coming up with content, text is absolutely a realm in which it is so bounded that it has a chance. It has a chance, yeah, okay, come on up. It has a chance to figure things out and to monkey-typewriter its way into believability.
A: Mm-hmm. The concern I have about it is that what's going on behind the scenes is so invisible. Like I mentioned before, I don't even know where this comes from. What kind of repositories is it drawing from?
B: What datasets started it learning? I mean, even when you're talking about a neural net, which is kind of by definition a black box, you feed something into the black box, and that isn't unknown, at least to the feeder.
A: Right, right. But there is a point at which machine learning kind of takes on a life of its own, and you really cannot backtrace all the decisions it's made.
B: Right.
A: Now, in defense of black boxes, this was an interesting thought from Graham and Hopkins. They mention the fact that our brains are a black box. We cannot know the intricate processes.
B: Brains were the original neural net.
A: But that doesn't mean we discount its output.
B: Sometimes we do, but
A: Indeed, as we should. But I think it was just an interesting point.
B: I think that it's very much a form of the uncanny valley. At a certain point of getting close but not exact, it's just too unnerving and it freaks people out. I think that specifically in this case, since it's, like our brains, a black box, information goes in and unpredictable information comes out. It behaves like a human. If it wasn't for the lightning-fast reply when you ask it for something complicated, you would think there might be someone on the other end of this conversation responding.
A: I mean, that's in the title, ChatGPT, right? It's supposed to feel like you're having a conversation with someone, in a way. And so you frame your prompts in almost a conversational way. And when you think of Google, more and more, Google has gotten better at interpreting the conversational kinds of search phrases people might put in, you know, natural language processing. But at least with Google you still have some agency, because it gives you options.
B: Yeah, you choose where it brings you. It gives you choices of where it can bring you.
A: And with ChatGPT, it's like, well, okay. Do you remember when Google used to have the "I'm Feeling Lucky" button? Is that still there?
B: I don't know.
A: I don't know that I've ever clicked on it, but if I remember right, it brings you to the top search result for that query. Is that right?
B: It may. My first experience with it, gosh, it was one of those things that went viral before there was really a means other than word of mouth to go viral. It was, you type in "Iraq WMD" and you hit I'm Feeling Lucky. And then it brings you to some page where, it was like a 404 page that was specifically mocking George Bush's, like, insistence that there were weapons of mass destruction in Iraq, which
A: What was your reaction at the time?
B: It was comical, so I laughed.
A: Okay. It was subversive. But it does raise the question of how that could be abused, of course,
B: True. And "Iraq WMD," the more interesting question is how this beat out all manner of news sites
A: Yeah.
B: that had for years been cranking out toeing-the-administration-line garbage. And this joke site came up and rose to the top.
A: And that's saying something, because, so you know I did a study of Google where I literally watched people Google stuff.
B: Were you Google-ogling?
A: Google-ogling, it's a new research method.
B: And if you study this phenomenon…
A: Googleogleology?
B: Golly, what a day.
A: Yeah. And so one of the takeaways there was that so much of our use of Google is kind of in the background, as in, like, we aren't necessarily conscious about it. And I say we because other research has backed this up too, but, you know, I noticed that in my own observations and my own use of Google. We go to Google and we don't stop and think, is there a different search engine that would make more sense for this particular query? We don't think, what would happen if I prioritized results lower down on the list, that kind of thing.
B: I remember that early on, before the natural language learning of Google, there were classes, instruction, on how to use a search engine.
A: You had to know how to use the Boolean operators and everything.
B: Yeah.
A: I mean, in some databases you still do that, but, and I don't fault anyone for just going with the flow, because that is Google's goal.
B: They want to be invisible.
A: Precisely. With Google, I think in recent years, I've heard people complain that it's less useful than it used to be, because it's so loaded up with ads and sponsored results and everything now.
B: Capitalism ruins everything.
A: And in fact, it is difficult to distinguish between ads and not-ads. But you can still figure it out if you pay attention. When it comes to ChatGPT, it just spits something out. If you don't know where it's coming from, if you don't know who might have paid money to get priority in a result or whatever, that's dangerous. And I want to shout out the UX Podcast for raising this possibility in one of their recent episodes, which I hadn't thought of before. You can see how ChatGPT, in the way its interface is designed, actually takes away more agency from the user compared to Google, because it gives you one response.
B: Yeah.
A: And it's just straight-out answering your question, versus providing you multiple sources that could answer that question in different ways or provide different angles on it.
B: Moreover, Google gives you sources. Google gives you links to the place where you can get your answer from. Depending on how well it understands the question, it might just have that little sidebar,
A: The knowledge panel.
B: It might have the knowledge panel that just gives you an answer.
A: Which frankly is good in some cases. Because when you type in, what, diabetes, it's probably better that people get Mayo Clinic prioritized.
B: Yeah. I mean, there's definitely functions where you want to know your source, but there's also functions where you don't want that extra click, because you're looking for something that is so simple it can just tell you. If you type in 34 yards in kilometers, you want an answer. You're not looking for, like, the most authoritative source on this information.
A: Precisely.
B: Or, when was JFK assassinated? Or very specific things that there is one answer to and there is no debate, reasonable or unreasonable, about.
A: Well, there might be unreasonable debate about it when you're talking about JFK, because apparently he is still alive?
B: I'm sure that there are tinfoil hatters around that will say that he's secretly running the US government from the far side of the moon. Why not?
A: Yes.
B: And if not, that one's free. That one's free, folks. You run with it.
A: Type it into ChatGPT and see what kind of sci-fi it gives you. However, that does get really blurry, especially when it comes to medical information. Where on the one hand, we have people saying, trust the science, the COVID vaccine works, here are its mechanisms and here are the studies that show it's effective in these ways. And then on the other side you have people skewing and fabricating and manipulating information. And that's not to say that these are two equally valid ways to view this question. But rather it's acknowledging that there are issues like that where there's enough polarization and enough just complete lack of common ground on the issues that it gets really hard to make those calls, like, from the perspective of a company like Google. And we've seen how politics does make a difference in how CEOs run their companies.
B: Our chat here is inspiring. I'm getting, like, oh, oh, oh, ideas about things I would ask ChatGPT if we had it here and ready to rock. Is COVID-19 real? If it had anything more than yes as the answer, ha. Were the moon landings faked? Is the Earth flat? You know, stuff that people create controversy about that is not controversial.
A: Yeah. Yeah, but all that to say, the less distance between a question and an answer, the less opportunity for reflection, I guess. You're taking away those decision points that allow people to say, oh, okay, there's this website and there's that website. In some cases maybe that's better. But in a lot of cases, because this AI is pulling from an Internet that inherently has sexism and racism baked into it, I would be cautious about that.
B: ChatGPT doesn't compensate for bad actors. Maybe it does, but it's a black box, so I don't know. Maybe it has a way of even understanding that bad actors exist?
A: I have seen people post results where someone will ask ChatGPT to, this is hypothetical, but in the vein of saying, write an essay in praise of Adolf Hitler. And it'll come back and say, nope, not doing that.
B: That sounds like a human said, okay, we want to make sure it does not do this, because we don't want them to come with pitchforks and torches to our house.
A: Yeah. Someone is asking that question, what could people do to abuse this tool? But people are endlessly imaginative when it comes to ways to exploit technology.
B: And people.
A: And I don't know that an individual organization is going to be able to keep up. So, all that to say, I think the role of writing instructors at this point in time is to, again, acknowledge that this thing exists. Acknowledge the fact that students are probably going to use it. But talk about that importance of algorithmic literacy and kind of having those checkpoints for reflection, rather than just inherently trusting it because it's technology and it materializes texts like magic. So that's that.