Full Transcript
Noah Weisblat: All right, I'm here with Craig Tucker. Craig, I met you a couple of years ago back at an event in Michigan. But do you want to tell the audience what you do and a little bit about yourself?
Craig Tucker: I represent Vern. I am the co-creator and founder of Vern, and it focuses on emotion recognition in real-time at scale. We are patented and used by tens of thousands of people around the world in award-winning customer service and mental health applications. This kind of came out of the studies I was doing at Michigan State University. We found a new model of emotion, and that’s what we’ve been working on ever since.
Noah: Awesome. So from my understanding, Vern has a specialty and some offerings as well, the specialty being able to recognize emotions via AI or use emotion in AI. So, tell me a little bit about that and then kind of what you guys offer at Vern.
Craig: Sure. That is one of the reasons why large language models have such a terrible time with emotion. We created ours based on neuroscience; it works in real time and can analyze a conversation sentence by sentence. You can also use it for the AI itself, to help generative AI (chatbots, virtual assistants, and avatars) understand emotion and react appropriately.
Noah: Okay, and give me a bit more specifics. You don't have to go into the secret sauce, but how do you tell an AI to have empathy or to feel a certain way? How does that actually work? Just explain it to me like a five-year-old.
Craig: Sure. Psychology tries to get into the black box by asking questions of the box, while neuroscience actually looks inside the box itself, so you get a somewhat different data set from which to make all of these logical leaps. Long story short, Vern is more of a neural model. It picks up these latent, distinct but salient cues that we as humans have evolved to give one another. What Vern does is basically replicate a human receiver. We all act as senders, and if you know anything about the Shannon-Weaver sender-receiver model (everyone knows that one), it's one of the models we work with, and that really was the big breakthrough moment for us.
Noah: That's awesome. So, you guys are going to be releasing something coming up in the next few days, hopefully, fingers crossed. Tell me a little bit about that.
Craig: Yeah. So we're releasing a new product called Hoomans.chat. It's "Hoomans," like dogs or cats would call people. These are human-adjacent avatars. If you remember the old project Chatroulette, where you'd randomly talk with a stranger: in this case, it's going to be an avatar. But these avatars are something you've never seen before. They are deeper, more personal; they can sense your expressions and your feelings. They can see your environment and interact with it. The depth and quality of their personality are unlike anything you've ever seen. And because they have our control system, which we call an Action Pack, they never go off script. So good luck trying to get anything off script! It should be a great, fun experience with a number of really interesting characters that you could talk to for a minute or for an hour. Either way, you're going to have an incredible experience, and it'll drop pretty soon.
Noah: Super exciting. Tell me about a couple of these characters.
Craig: All right, so our most widely known character, the one we developed first, is Zeke. He's the cowboy. He's the world's first all-the-way-from-prompt-to-avatar entity. A lot of work went into building the models up and then getting the avatar model to train on an AI without making it look horrible, right? Zeke stars as the storyteller among the avatars, and he'll spin whatever yarn suits your fancy in a way you've never seen before. Think of a never-ending story or a choose-your-own-adventure; that's what the experience is like. And then, of course, we have Amber. If you've ever experienced Amber, she's one of our avatars who's not nice. Think of her as a mean girl. If you've ever been roasted, that's what you're in for, kind of like the subreddit r/RoastMe. There's a little wrinkle with Amber, though: she can tell whether you're being empathetic and sincere, and she will really judge that. If you are, she actually becomes a little nicer, starts to reveal part of her tragic backstory, and you realize why she was so mean.
Noah: Oh, man. That's awesome. I'm excited to start using these. So let's pivot a little bit, going from Vern-specific topics toward the general AI space. Obviously, you've been building in it for a while. I was going through some texts of my own the other day to see when I ran my first ChatGPT test, and that was back in 2022. I think we've both been familiar with the space for a while, but give me your general take on AI and how you've seen it evolve over the past few years.
Craig: Yeah, it's strange. You're right; we kind of saw that coming out of the labs and the initial betas and testing with OpenAI. I think anybody who was in the space knew what was coming. I don't think any of us really realized the speed at which it would hit. We all had an idea of what this could be because we were seeing it, right? We were seeing these large language models respond the way that they do. We were the ones seeing them win chess championships and build some of the software. So we knew it was coming, but to see it break out in '22 sent a huge ripple effect across the entire industry. It went from "Oh, my startup is dead" to "Oh my God, this is going to replace everybody's job on Earth." So you've got the doom and gloom alongside the wide-eyed optimism that comes with these new technologies. It was really funny to me, because I had a feeling that large language models would become synonymous with AI, and they did, much to everyone's chagrin. AI isn't just a large language model, as you know, right? There's a lot of AI out there that does visual recognition, pattern recognition, or finds incongruities and abnormalities in security. There is AI out there that does really complicated math in real time. So there are a lot of different types of artificial intelligence. Even if you look at our model, you may not classify Vern as an AI according to everyone else's conceptualization of AI. It wasn't derived from a deep learning or machine learning model, for instance. We realized that didn't work: it always had some problem with the labeling, so it would have no external validity. What we did instead, from that knowledge and that testing, was create our own handmade, hand-curated system with content analysis experts like Dr. Brendan Watson at Michigan State, who literally wrote the book we used in the doctoral program. This is what, back in the '80s, we just called a fixed expert system. Back then, we just didn't have the computing power or the ability to create a so-called "golden truth" to base everything off of.
Noah: That makes sense. So if we talk about AIs and LLMs specifically, which ones do you find yourself using? I do a weekly race segment where I talk about how OpenAI is doing this week, and how Claude and Anthropic are doing. Out of those main big LLM companies that your average Joe might use, which ones are you using yourself, and which at Vern?
Craig: Sure, so all of the above, really, Noah. I mean, we use everything. We'll use Gemini, we'll use Claude, we'll use ChatGPT, OpenAI's product. Like most of the market, about 80% of our usage is through an OpenAI product. That's where we've got our Vern AI wrapper, called Lemmy, as a ChatGPT plugin. You can put that on ChatGPT and get Vern results to feed back into your own ChatGPT, and that's what the avatars use. So we've primarily used that; it's our forerunner. It's not always the greatest, but it's the most consistent and the most stable, and it gives you the most expected results time after time, which, you know, for an LLM is pretty rare. And with these guardrails and these Action Packs we put in, we've had well over 2,000 conversations on these avatars since they started, and we haven't had a single report of anything going off track. That's not to say it's not possible, but we think what we've created is robust. We can't necessarily guarantee it won't happen, but the likelihood and the probabilities are really low. So let's say OpenAI is the winner for us. Gemini is excellent at working around the edges of the avatars, like developing more of their character, or giving me sample conversations I can use to help train them. And then Claude is really good for just about everything, but we haven't used it as much as we probably should, honestly.
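[Editor's note: Lemmy's internals aren't public, so the function names, emotion labels, and scores below are invented for illustration. This is a minimal sketch of the general pattern Craig describes: run a message through an emotion-recognition step, then feed the results back into the prompt so the LLM can respond appropriately.]

```python
# Illustrative sketch only. analyze_emotions() stands in for a real
# emotion-recognition call (here, a toy keyword heuristic); build_prompt()
# shows how its output could be fed back into an LLM's message list.

def analyze_emotions(text: str) -> dict:
    """Hypothetical stand-in for an emotion-recognition service."""
    lowered = text.lower()
    scores = {"anger": 0.0, "sadness": 0.0, "fear": 0.0}
    if any(w in lowered for w in ("furious", "angry", "ripped off")):
        scores["anger"] = 0.9
    if any(w in lowered for w in ("hopeless", "alone", "miserable")):
        scores["sadness"] = 0.8
    return scores

def build_prompt(user_message: str) -> list:
    """Prepend detected emotions as a system message so the LLM can react."""
    scores = analyze_emotions(user_message)
    dominant = max(scores, key=scores.get)
    if scores[dominant] > 0:
        system = (f"Detected emotions: {scores}. The dominant emotion is "
                  f"'{dominant}'. Acknowledge it before answering.")
    else:
        system = "No strong emotion detected. Respond normally."
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_message}]

messages = build_prompt("I'm furious, your app charged me twice!")
```

The message list would then go to whichever LLM backs the avatar; the point is that the emotion layer runs ahead of the model, not inside it.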
Noah: Okay, cool. Yeah, I try to use Claude for more technical process stuff; it works better for me there. That's cool. How efficient would you say you are now compared to pre-AI? Obviously, you're working in AI, so it's kind of a messy comparison, but how much more efficient would you say you are with AI?
Craig: Oh, I'd say about 150% more efficient, without a doubt. I mean, in every facet of what we do: writing copy for marketing or advertising, web development, design, analysis across the board, even coding. We don't do a lot of live coding; we've got a really good rock-star staff that doesn't want to introduce that element too much, but we do some around the edges. And of course, we build things and break them fast in our lab, so that's kind of what we see.
Noah: Cool. What industry do you think will be completely different in five to ten years because of AI?
Craig: All of them. All of them. I had somebody really highly respected in the space give us their goal: "Nobody working in five years." Well, let's see how that works in society. He's obviously drinking the Kool-Aid a little bit too much. But I think it is all of them, and they all will change. It's just like any disruptive technology. When the electric typewriter came out, then personal computers, then the internet, then the mobile phone, all of these things boosted our individual productivity. But that also boosts our individual creativity. The absolute explosion in the economy of application services, things like that enabled by the digital economy, wasn't even thought of in pre-internet days. And we've got to realize, yeah, the internet did take out the people who answered 2-1-1 information calls, a lot of the stuff that maybe we didn't really want to do before. I would say the top one I see being affected is customer service, and that, honest to God, needs the change the most. It's not that these people don't do a great job; they're given an impossible task. You don't call customer service because you're happy; you call because you're upset. Invariably, all of that negativity rains down on you, which burns you out. So I think that's the biggest one right now; it has the most potential, and it's more about expanding capabilities than replacing jobs. People do other things, right? There used to be a job where you stood by a coke furnace at a steel plant and shoveled coal into it, and that was a hot, dirty, messy job. People died frequently and got burned constantly. Now a machine does that, and nobody's complaining.
Noah: Yeah, definitely. Okay, so let's go off-course a little bit here because I was just thinking about the humans you guys are building, your AI humans. I'm thinking a little bit about game development and story lines and things of that nature. What's your vision for gaming with AI in the next five to ten years?
Craig: I think that anywhere you have an NPC (non-player character) now, you can have a lot of creativity. Anytime you have an outside character that's not human but can affect the storyline or gameplay, that's all the better. It just adds to the experience, especially if you're starting out and getting a game launched; you don't necessarily have that critical mass of a million people playing concurrently, right? I definitely see NPCs using some of the technology we're using to become more human-like and interact with people in unpredictable ways, so the gameplay isn't the same every time. I mean, that is so fascinating. And I've seen some folks who work in esports gaming doing some awesome stuff with this. So I see that space growing and becoming more and more relevant and valuable.
Noah: Yeah, that's fascinating. I'm intrigued to see what happens with AI gaming mixed with some AR/VR stuff. It could get a little bit crazy. Talk to someone who is just finishing up college or just finishing up high school, and maybe they have a field set in mind, maybe they don't know what they want to do. How can they leverage AI to kind of either improve their chance at getting a job or to improve their chances of being successful in their field?
Craig: Well, I'd say, first of all, embrace AI. Know a little more than your older colleagues, because they're going to come to you and ask you stuff, so that knowledge is definitely going to give you a competitive edge as you enter the workforce. Get experience. If you can get a certification, put that on the resume; that definitely works. And I've heard two different versions of the same story. I've heard some people say, "Yes, make your resume with AI. Have it written by AI, because AI is actually doing all of the screening, and you'll hit all the notes they're looking for. AI will know how to talk to AI." There's some validity to that. I've seen a test where somebody wrote a resume completely in ChatGPT, didn't edit it, submitted it, and it went immediately through; they got the interview. Then they tried the same thing with one they wrote by hand and got rejected. But I've also seen places incorporating AI screeners that say, "If you send one in that is detected as AI, it automatically gets deleted." So, you know, maybe make one version of each, send both, and see what happens. I would say that's the thing: don't be scared of AI. It is a hell of a productivity tool. If I'd had it at the age of the people we're talking about, the possibilities would have been endless. You don't really need a coding background as much anymore. But I want to put an asterisk on that: if you don't have the background experience, you're going to make a lot of mistakes, because you won't know what not to do, and I think that's the most important thing.
Noah: Yeah, that makes sense. I think having some idea of how to code lets you really leverage AI, but if you have no idea how to code, you can still build a lot; there could just be mistakes in there that you might want somebody to take a look at. So, when we got in this car today, I went to Chrome, clicked on my calendar, clicked on our meeting, and joined; I did all of that in Google Chrome, a browser. Recently, we saw that you could start shopping in ChatGPT. You could start talking to applications in ChatGPT. How do you see the future of browsers with AI? And if you guys are building something in that space as well, tie that in.
Craig: Yeah, I see that the days of the user interface as we know it are numbered. Why would you replace a human interaction with billboards, tabs, folders, and pages? That's what we've been doing for the last 30 or 40 years, and people's appetite for something new and different is absolutely out there. Eventually, I can see that the AI humans we're building will do exactly that: replace web pages, replace user interfaces. You'll have a place to go. The avatar will step up and talk to you, appearing however you want it to, right? It will be emotionally intelligent and able to understand what you're looking for. If you're there to complain, it will take your complaints and route you to the right person if you really do still want to talk to a human being. We're also building the ability for it to pull up a product catalog and show you the products as you talk, right in the conversation. You can sort the products to whatever your specifications are, using the video abilities these avatars have. We're building it so that you can try on the clothing before you buy it; the avatar can direct the entire thing. This is what we're building out in the lab now, and we should have that capability in the next couple of months to a year. So it really boils down to: what do you want your experience to be? Do you want to type it in, go to a web page, try to find your way through somebody else's idea of wayfinding, or deal with an old-school chatbot that has five things it can do and doesn't understand a damn thing? What we're building can sit on top of any agentic workflow, so it can be the same face somebody interacts with across all of the different functions. So we're saying the next big thing in user interfaces is a face; we're back to that.
Noah: Yeah, that's really good. That's... it's going to be really interesting in a couple of years, or I mean, even now with what you guys are doing, I'm excited to see it all play out. It's just crazy to think about.
Craig: Yeah, some days it's like, what the hell are we doing?
Noah: Yeah, just trying to process everything. All right, so one more question here. We're seeing a lot of stuff in the media right now about something called AI psychosis, the "talking with ChatGPT made me go crazy" stories. Some of it's a little funny and joking, and some of it's more serious. So talk to me about how you perceive all of that, and then how Vern can help limit some of it as well.
Craig: It's a difference in philosophy, Noah. Is it a safety-first position, or are you taking a position of engagement first? We take the safety-first approach, and that changes our whole mindset, how we see things, how we go about doing things. We engineered our system from the back forward to be HIPAA compliant, secure, stateless, and all of the things necessary to ensure consumer confidence. Whereas a lot of these other companies built theirs for relationships, to do whatever the user wants, to fulfill whatever fantasy the user has. So there's a fundamental difference in the marketplace between those types of chatbots, or even an unregulated, uncontrolled ChatGPT instance, and what we do. Anybody throwing slideware on there, good luck. It's going to fail. You're going to end up with headlines, and people who are vulnerable are going to end up committing self-harm. We have a lot of controls that prevent that from happening. First of all, with Vern's ability to detect emotions, we can map those to CBT (cognitive behavioral therapy). So we can actually look and see how these emotions are playing out within the context of what's said. We're not diagnosing, but we can get a good idea if somebody is tending toward or showing signals of a particular type of mental illness, and we can intervene. We have very strong rules that don't allow you to play devil's advocate or role-play things that might get a user in trouble. In the end, our system is really based on the user: your emotions control how the conversation goes. And if you get into areas of really intense negative feelings, a red flag will triage you, assuage you, pull you back from the ledge, and get you on the right path, time and time again. That's the biggest difference between the technologies. We see it every day.
Every time a headline comes out, we're like, "Oh, who got hit now?" But again, it's: am I going for engagement? Do I want to piss people off and pit them against one another, or do I want to help them? That's just a completely different fundamental philosophy to start with.
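[Editor's note: Vern's actual triage pipeline is proprietary. The sketch below only illustrates the "red flag" idea Craig describes, routing a conversation based on the intensity of detected negative emotion; the threshold values, emotion labels, and routing names are all invented for the example.]

```python
# Hypothetical sketch of red-flag triage on emotion scores.
# All constants and labels here are assumptions, not Vern's real values.

RED_FLAG_THRESHOLD = 0.75   # assumed cutoff for intense negative emotion
SOFTEN_THRESHOLD = 0.4      # assumed cutoff for mild negative emotion

def triage(emotion_scores: dict) -> str:
    """Route the conversation based on the strongest negative emotion."""
    negative = {k: v for k, v in emotion_scores.items()
                if k in ("anger", "sadness", "fear")}
    peak = max(negative.values(), default=0.0)
    if peak >= RED_FLAG_THRESHOLD:
        return "escalate"   # de-escalate and hand off (e.g., to a human)
    if peak >= SOFTEN_THRESHOLD:
        return "soften"     # acknowledge the feeling before proceeding
    return "continue"       # normal conversation flow
```

For example, `triage({"anger": 0.9, "joy": 0.6})` would return `"escalate"`, while purely positive scores would leave the conversation on its normal path.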
Noah: Yeah, that's a huge difference in approach. When I saw OpenAI finally start to implement some safeguards, I thought about the credibility immediately on the line. They need to figure out how emotions are playing into these conversations and what's going on. All right, that's about all I've got for today. Is there anything we didn't cover that you wanted to talk about?
Craig: No, I think we're good. The bottom line is: go out there and try the stuff out. People who actually care and give a damn are working really hard to make sure it's safe and effective. We're using it in mental health. We're using it with children with autism, and we're getting results where parents tell us their child has never talked to anybody for longer than a minute, and now they're talking to these avatars ten minutes at a time. Or helping an elderly person with dementia who just talked to their child on the phone, hung up, forgot, and now doesn't think anyone loves them; they pick up an avatar and can continue the conversation. Things like that are out there. Really read through the articles with a critical eye, and know that there's a lot of good stuff out there.
Noah: That's excellent. Okay, I appreciate you joining me today.
Craig: Thanks, Noah. Thanks for having me.