Watch on YouTube
Listen on Spotify
Full Transcript
Noah: So I'm here with Professor Kerezy, and today we're gonna talk about AI and education technology. A couple months ago I was in Professor Kerezy's class, and I was using AI in some of my assignments, which is against the syllabus that he put out. He called me on it and we spoke about it. Now I just wanted to have a more in-depth discussion about what AI usage should be allowed in schools and what it should look like, and then go beyond that to some other current AI topics: AI psychosis, how AI is affecting mental health, and things of that nature.
Prof. Kerezy: Yeah, by the way, I hope you would agree, Noah, I would say that discussion we had resulted in a mild reprimand, and you did just fine in the class. So yeah, you definitely weren't in any real trouble or anything like that. And I actually learned some things as a result of the conversations that we've had; I'm learning things all the time. This field is changing so much, so fast. But I do like the way you explain that. And, again, just by way of background, college education, higher education is my third career. I began as a journalist, both while I was a student in college and for a couple of years after that. Then I migrated into the field of public relations, kind of marketing communications, and did that for, well, more than a couple of decades. I entered higher ed part-time in around 2003, 2004, then joined the faculty full-time at Tri-C in 2006. So I've been doing this, part-time and then full-time, for a little bit more than 20 years, and this is my last year. I will be retiring at the end of this academic year, at least as things stand right now. So you're catching me late, Noah. I hope you're catching me great today.
Noah: Awesome. Better late than never. Any plans post-retirement?
Prof. Kerezy: You know, I wrote a book about Jesse Owens, and it's got a working title of "Jesse Owens: Sensation, Superstar, Survivor, Symbol." It is undergoing peer review right now at Kent State University Press. The people there have reported to me that the first phase went very, very well, and they're hoping it's gonna be published sometime in the third quarter of 2026. So about a year from now, I'll probably be putting the finishing touches on a book tour: visiting places like Columbus, Ann Arbor, Indianapolis, Chicago, and other places in the Midwest where Jesse Owens either lived or made a tremendous impact. So I'm predicting that in my not-too-distant future.
Noah: That's awesome. All right, so the topic of today: AI and education. There's so much in AI and education, everything from fifth graders using ChatGPT in the classroom all the way through college students using it, from the actual help AI can provide in subjects like the sciences and math to the question of what's cheating and what's not. So let's start with just this, I wanna get your opinion: what are the ramifications of having AI in the classroom?
Prof. Kerezy: First off, Noah, I'm really glad to be addressing this topic with you. And the second thing I want to point out is that this is a broad swath here. We can't set rules for what's gonna happen in higher education in 2025, 2026 for AI, and we can't talk about rules for what's gonna happen in elementary or high school education, because this field is changing so rapidly. I will share with you a little bit of what I learned a couple of years ago too, but the shorthand answer, and again, remember my background is more journalism and public relations, is that I think when people become overly reliant upon AI, it makes it harder to both instill and teach critical thinking. And that's not just my take on it. That's about, oh gee whiz, 14, 15 years of take from people who have paid close attention to technology, people in the fields of psychology, people in higher education, because the precursor to AI has been the opening of the internet, and it's kind of fascinating.

If you don't mind, I know we're using Google Meet to talk today. Google has spent billions of dollars researching where our eyes go when we see things on screens. Where do our eyes go, you know? And the reason why is, if they can track your eye movement and figure out what I'll call the eye candy that gets your attention the most, that's gonna help them in their advertising and their marketing. Now, you would be an absolute fool to think that any company that's done that kind of research, be it Google, be it Amazon, be it Meta, anybody at all, is not going to leverage it to great advantage when they introduce their AI products. They're gonna use that in a great and powerful way. So as human beings, and I mean this at kind of a gut level, right, we're kind of like moths being led to the flame, because the people who are creating the AI material know what is gonna interest us and what's going to attract us. That scares me a little bit too, especially when you're talking about 10-, 11-, 12-, 13-year-olds whose brains aren't fully developed. At some point we need to talk about the concept of neuroplasticity, but I don't want to toss that in just right now.

But in my humble opinion, if I could wave a Harry Potter-type wand and make all the decisions, which I can't, obviously, I would basically ban AI in lower elementary, begin to introduce it in upper elementary, gradually increase its usage in high school, and then, depending on the type of class, set rules there. Two years ago I was very fortunate to be part of the artificial intelligence task force that Cuyahoga Community College set up. We set ground rules in three areas: courses and subjects where AI is allowed; courses and subjects where AI is kind of encouraged, for like an outline or a prep perspective; and then courses and activities where it's prohibited. I was pretty happy with that pattern when we set it up back in 2023. What we really couldn't have anticipated, and what everybody sees now, is that you can't have your phone on for more than a minute without AI trying to offer you something: a shortcut, a way to talk instead of type. All these things are going on everywhere right now. I'm being bombarded all the time with ads for a 28-day course that's gonna teach you everything you need to know about AI, you know? And the truth is that we don't know for sure where AI is gonna go.
We know it's gonna be a transformational technology, but nobody can say with absolute certainty, oh, it's gonna eliminate jobs here, it's gonna eliminate jobs there. We know one of AI's early strengths with the LLMs is writing. I've seen lots and lots of examples of AI writing, and I've had to react to that. But having said that, I know it's gonna revolutionize accounting. I think it's gonna revolutionize the legal industry. I think it's gonna revolutionize architecture and many, many other fields as well, just from the reading that I've done. So, hey, I babbled on a little bit here. Let me let you ask the next question.
Noah: So, yeah, that's a good answer. I feel like we're seeing a lot of kids using it at home, even. We're seeing five-year-olds, 10-year-olds talking to the AI. I've seen use cases there; I've seen them learning a bit with AI at home. So how effective is it to curb AI usage in the classroom until, say, fifth or sixth grade if there's also that home presence?
Prof. Kerezy: You know, I'm gonna kind of ratchet it up one level and then we'll come back down and specifically answer your question, 'cause I think the critical part of this is the student's ability to think. What happens if the student can't think well, can't develop reasoning well, because they're used to just talking into a device that gives them all the answers they think they need to know for life? That's kind of what we're looking at. Now, I'm not a neurosurgeon and I'm not an expert on neurology, but I have done some research into and learned a little bit about neuroplasticity. What that is, is the way that our brain changes, the way that our brain becomes more fully formed and developed over time. There's a part of our brain, I believe it's called the hippocampus, and that's kind of like a memory-type part. Somebody actually did a study many, many years ago of cab drivers in the city of London, and they saw neuroplasticity going on inside their brains; they saw the hippocampus expand. This was, again, the time before GPS. These guys just about doubled the amount of hippocampus area in their brains to store spatial information. They knew how to get from one point in London to another because they had driven it many times, and their brains remembered the routes. Their brains had adapted and adjusted.

That's my worry: what happens if this brain develops in such a way that it can't adjust, can't evolve or grow, and it's stuck just being a slave, so to speak, to whatever answer comes out of the device it talks into? That's my worry. And I don't have the answer to that, Noah. But I think that any human being who likes to think rationally, and who might be scared to death of some of the horrific sci-fi books and stuff that's been written about the horrors of AI, would probably say, yeah, it's probably not a good idea to let the AI devices and software do all of the thinking for us. So that's kind of the reason why I'd want to prevent it.
Noah: Where do you think AI can be the most effective in teaching? Is it letting students ask questions to the AI in their own guided learning time? Is it the teacher having an AI assistant with them in lesson planning? Where do you see it being most effective when you get to the age where you've already developed some critical thinking and it can help advance that critical thinking?
Prof. Kerezy: If you don't mind, let me dwell first on one little point you made, and then I'll roll back to the bigger point. In many parts of America there's a teacher shortage right now; they can't hire enough teachers, due to a combination of pay and incentives and so on and so forth. Having said that, it is very possible that in some school districts you could come up with AI software that could be an aid to a teacher, so a teacher could either reach more students effectively or reduce the amount of time they need for lesson prep and increase the amount of time they have for actual classroom instruction. So in my mind there's the promise of that, so to speak, and the possibility that it could be helpful in education.

But let me go back to the bigger question that you asked. Could AI be used to kind of help refine and outline ideas? I think so. I think that would be a good use of the technology, and I know some of my colleagues at Cuyahoga Community College are using it in that capacity right now. Could AI point out contrasting points in like a pro/con-type debate? I believe so. I think that would be a good thing to do, and I could see that being used in, say, political science or civics and some other areas as well. If AI can be devised and channeled in a way that helps further spark the curiosity of the student, I think that's good. If the student thinks, oh, it's just a shortcut, and sees everything as a task, let me get this paper done, let me get this math problem done, let me get this geometry thing done, then AI is actually taking over the mind of the student. And in my opinion, that's the bad part. How do you draw that line? That's the real challenge, Noah, and that's what educators are gonna face in the years ahead. I will say this: I'm glad I'm retiring soon, because, and I don't mean it in a mean way or a bad way, too many professors and too many high school teachers now are feeling more like they're police officers: oh, I gotta refer all this out. It's just a bad path. It's a bad path for the student, for the classroom, for the teacher, for the professor, for the college. It really is a bad path for everybody. You gotta somehow figure out a way to overcome that.
Noah: That makes sense. I think the biggest question is some students are gonna be using AI to do exactly what you just said, which is pasting into the chatbot, getting an answer and more or less cheating on their assignments. Versus some students are going to be learning with AI, using it as a personal aid. For instance, something I did when I took a speech class and a debate class at Tri-C was, I had the AI read up on certain debate styles and then debate with me back and forth based on its understanding of those styles. How do you prevent the students who are using it for cheating purposes from disrupting positive learning experiences that AI could provide?
Prof. Kerezy: Yeah, great question, Noah. And honestly, you might be better equipped to answer that question than I am, because you're really delving into this, the way you're aggregating and keeping up with all the various aspects of artificial intelligence. But having said that, since you asked, I'll give you an opinion. By now I have seen the style that comes from students who use AI excessively. I won't go into the details, but we have a little relationship with a middle school and high school journalism program in a suburban Cleveland school district. A student there submitted an assignment, and the editors immediately thought it was completely written by artificial intelligence. They ran it through a couple of AI checkers, and it was coming back as between 60% and 85% done by AI. Then a few days later, the student wrote an email asking the editors about the assignment, and as soon as the person in charge read that email to me, I said, "This student is using artificial intelligence to write his emails." The opening line, the syntax, the structure, the whole nine yards was completely consistent with artificial intelligence emails I'd seen before. And this kid's 13, maybe 12. You see where I'm going with the answer to your question.

The problem is, you know, what do we do? Now a lot of school districts have banned cell phone usage. I've seen that; there are more and more school districts doing it. So if you're banning the cell phones, maybe it's an easy step to say, "No, you can't use AI." But you can't stop the student from going home and doing the homework using the AI. So that's piece and part of it. The genie's so far out of the bottle, I think what you need to do is teach what's good and what's right and what's best, and what's gonna develop their brains in the long haul, and try to encourage them to go down that path rather than down the path of picking up your phone and talking into it.

I also see at the college level, especially at the freshman level, more and more in-class professors going back to something that I kind of chuckle at 'cause it's good old-fashioned. It's called a blue book, and it's nothing but ruled paper inside a bound booklet, usually 16 pages or thereabouts. The professor gives essays and says, "You can't use computers, you can't use your cell phone. You have to write the answers from what you know." And I do think it's good, if it's a class such as philosophy or political science or psychology, or some math and some medical classes, where you have to know certain terminology and certain techniques and how they work. I think it's a good thing. I also think it's a little bit unrealistic, in that that student will probably have that phone by their side later on in life no matter what. Once upon a time, if I went to Tri-C and left my phone at home, no big deal. I know students now that have psychological meltdowns if they forget their phone. Kind of crazy. And again, I'm linking the phone to the AI because it's the omnipresence of the phone that has made the AI revolution possible, to some extent.

In the business world, it's a whole different matter. We can do better programs, we can do more things in terms of marketing, in terms of administration and management, things of that nature that run on bigger computers that can manage data so much better and more wisely. And in my opinion, that's the inevitable part of artificial intelligence.
It's gonna take over more and more of the business applications, far beyond the chatbot. And we're all, by now, quite familiar with that chatbot. I think the psychology behind it is like a poison pill that a lot of students, especially those who haven't fully developed their thinking processes, might succumb to, and they may end up in greater trouble and danger later on in their academic careers because they were able to find that bad apple, so to speak.
Noah: Yeah, I agree with you there. Obviously, I'm very pro-AI in pretty much everything we do, including schools. But that definitely is a concern that I would align myself with: putting AI in front of students too much, too early, where they don't develop the cognitive reasoning and critical thinking that they need in order to grow with AI later in life.
Prof. Kerezy: And you know what, Noah, let me just add a couple of things to this. I'm not anti-AI. In fact, I know AI is inevitable in a lot of places, so much so that I said, "Yeah, I'd be happy to do this interview with you for what you're doing." I think the question is responsible usage: at what point in a person's development as a human being, and then how much of it do you need to know in order to advance later on in life? I subscribe to a nonprofit called Trusting News. They're trying to come up with information and tips and programs and advice all the time to help news-gathering organizations rebuild something that they have almost completely lost in the United States, and that's trust. This past week they put out this six-page special about artificial intelligence. What they're doing is giving newsrooms advice on how to get your readers or your viewers to learn about artificial intelligence, and why journalists ought to take the lead in educating the general public about it. I think it's a great concept. I think it's a badly needed concept. Part of the problem is we kind of line up in these two camps: "Oh, AI is very good," or "AI is terrible, it's evil," you know? And truth be told, it's probably some of both. Having said that, because of its inevitability, it is important that people learn about it and understand it. And anytime you educate people, you replace that kind of fear with knowledge, and I think people win, you know? So my hope is that in the future, people will be less terrified about AI, what it's gonna do to my brain, what it's gonna do to my job, my career, my life, and think more about: how can AI help me? How can AI be more of a guide to me on down the road?
Noah: That's a great point. So based on what you said, I wanted to bring something else up, and I think this might be our last point on education. You talked about who's spreading the news, who's facilitating the learning about AI. I spoke to somebody last week who was an expert on AI in policing, AI with officers. And something interesting she said was that AI in law enforcement right now is not being dictated by government or by the officers themselves. It's being dictated by tech companies who are going to sell to different stations and different departments, and that's who's choosing what gets adopted. So how does that relate to education? You mentioned you were on a committee at Tri-C talking about AI adoption there, but who should really be the ones shaping the future of AI adoption?
Prof. Kerezy: Before I answer that question, you said something in your preface that I think is very important for anybody who sees this to understand. When I teach my "Principles of Media Communication," which is kind of like a survey course, an intro to mass communication, I have my own four timeless truths, and this is one of them: it's all about the Benjamins. Yeah, it really is. And that's what's truly driving the artificial intelligence world right now: everybody and their brother trying to come up with a way to make a nickel, a dime, a dollar, a billion dollars, $10 billion if your name is Nvidia, on what's happening with artificial intelligence. I saw this with the beginning of calculators, Texas Instruments and all that, back in the early 1970s. We saw it with software for everybody, when the computer became available at your fingertips, and the big winner, of course, was Microsoft and Bill Gates.

And by the way, I do want to point out that just because somebody is a big winner doesn't mean they're gonna continue to be a big winner. Don't get me wrong, Microsoft's making billions and billions and billions of dollars; their market cap is in the trillions somewhere. But having said that, when Bill Gates and Microsoft put out Windows 95 in, yeah, 1995, the first version of Windows 95 didn't have Internet Explorer in it. A company as big as Microsoft was completely missing the boat on how big the internet was gonna become, as recently as 30 years ago. The reason I point that out is that there are a lot of prognosticators out there who are gonna say, "Oh, AI's gonna do this, and AI's gonna do that, and AI's gonna do the other thing." And they may or may not be wrong. But one thing your police consultant said that is absolutely true is that the companies that come up with the products and the software and the services and the storage and all that are gonna push it like no tomorrow, because they want you to buy their product, because that's how they're gonna make their money. I think that is inevitable. I think we're gonna see more and more AI software, probably done by textbook publishers, aimed at the K-through-12 and higher education market, tools and techniques that they're gonna say will improve your learning. This is coming right now, if not in the next year or so; you're gonna see more and more things like that.

And, you know, you asked a question I wanted to answer: what role should faculty play? In my mind, faculty need to learn about AI, and they need to set guidelines and principles for their own classes, for what they teach and how it's gonna happen. If they don't, administrators, and especially the businesses marketing to education, are gonna make up all the rules. They're gonna set the rules, and then the higher education institutions are gonna have to react and respond. And in my humble opinion, and again, my opinion has changed a little bit since two years ago when I took my first glance at this, community colleges really ought to be at the forefront of trying to teach the public about AI. They ought to be including it in more coursework, they ought to be developing new courses about it, and they ought to be telling people, "It's coming, ready or not, here it comes, and you really should be preparing for it," rather than just kind of letting it swallow you up.
Noah: I agree. That's a good point. Okay. So I think that's a wrap on this part for education. We could talk more on that, or we could move towards kind of the AI psychosis piece or we could schedule a part two. What are you thinking?
Prof. Kerezy: I want to stay away from the AI psychosis until the next time. But I do have a parting analogy, a comparison, that I'd like to share with you and everybody that's following you. I want to show it to you, because when you see it, I think you'll get it a lot better. What you're looking at right here is the first CD put out by a recording artist in the 1980s named Tracy Chapman. The song on here that got everybody's attention was "Fast Car." It was a huge hit for her, and it's been remade and it's been real popular. This also was at the beginning of the digital revolution in audio. Before the late 1980s, most music was put out on vinyl or on cassette tapes; this is one of the very first that was done on a CD.

If you were to open it up, and again, I can't get it that close because you're not gonna be able to see it, I don't think, you'd see, next to the compact disc logo, the letters "DDD." This was a coding sequence developed by a society of audio recording engineers, some professional association, I can't remember the exact name. And they did it to let the person who was buying the CD know things. The first letter would be either an A or a D, and it would tell you: was this recorded using analog technology, like tapes, or was it recorded using digital technology, ones and zeros? The second letter stood for how it was mixed: was it mixed on an older-style analog mixing system that would take multiple tapes and put them together, or was it mixed digitally, taking digital sounds and putting them together? And the third letter, on every single one of these, would of course be D, which covers how it was output so you can listen to it. Now, one of the things about Tracy Chapman, and remember she's more of a folk singer, a lot of guitar-type music, is that this was "DDD." It was recorded digitally, it was mixed digitally, and it was output digitally.

All right, so where are you going with that, Professor Kerezy? Here's where: how are our minds today processing information? Are we doing it 100% on our own? Are we doing it 100% with an AI package working with us, or is there some kind of combination, some of one and some of the other? Go to the next letter: how do we output it? Is it output looking like it's AI? Is it output with a lot of human thought and consideration in it, or is it some combination thereof? And then, to complete the analogy, how's it being received? Is the receiver a human being? Is the receiver some computer program that is picking it up and interpreting it and trying to figure out what it means? I think that's where we are right now in the AI world: the inputs and the outputs are changing so fast, and because they're changing so fast, how we look at it changes as well.
Noah: That makes a lot of sense. Great song, by the way, "Fast Car." I like that song. But yeah, funny enough, I posted today in my newsletter about AI's impact on an industry: AI music, and artists using AI as a co-composer. It's very common nowadays. And by the way, there's an intermediate step, right?
Prof. Kerezy: That intermediate step has been digital sampling. And that's been going on for, I don't know, 15 years, maybe longer, right? Somebody takes 15 or 20 previously released digital works and links them all together and makes something maybe completely new out of it. Maybe not, depending on what the copyright lawyers decide. But having said that, what we're talking about now is just a short step beyond that. Instead of taking 15 or 20 different pieces of music, sampling them, and combining them all into one, we're taking one piece of music, combining it with artificial intelligence that's designed to write music, letting them go together, and building something that is completely new.
Noah: Yeah, that makes sense. And back to your original analogy, I think we're at a point where the DDD, the different ways we're processing and outputting and receiving information, is all over the place. I think AI right now should be used in very specialized areas or specialized ways, but some people are trying to use AI for everything, and others are trying to exclude AI fully and use humans for everything. I think you really have to understand how AI works in order to get the best of both worlds, in terms of the mix of those three letters: how you're receiving information, how you're processing it, and then how you're putting it back out.
Prof. Kerezy: Well said, Noah. Well said. Thank you.
Noah: All right. Well, Professor Kerezy, I appreciate your time. Any parting thoughts on your end?
Prof. Kerezy: No, it's great to chat with you and I'm looking forward to part two of our discussion and we'll talk about, what'd you call that? Psychosis, hysteria, AI.
Noah: AI psychosis. We'll cover that and other topics next time.
Prof. Kerezy: Excellent. All right. Thank you.
Noah: All right. Bye-bye.