Full Transcript
Noah Weisblat: Give me your name, where do you work, what city. What do you do? And tell me a little bit about yourself.
Kamari Wright: Great question. So my name is Kamari Wright. I reside in Atlanta, Georgia, where I'm born and raised, and I currently work for a small startup in the airline space called Volantio.
Noah Weisblat: Okay, what do you do for Volantio?
Kamari Wright: There I'm a senior software engineer. I maintain one of the legacy products. It's called Smart Alerts. And in that, I am also working on introducing AI and large language models in various ways throughout our workflow.
Noah Weisblat: Okay. And how long have you been an engineer for? Did you just start vibe coding a year ago, or did you go to school for it?
Kamari Wright: Yes. So I actually have been programming since I was about 15, and I'm 30 now, so it's been a long 15 years. But yeah, I attended the Ohio State University, majored in computer science and engineering, and then my sophomore year I got my first internship, and that led into a job my junior year. I've been working in the industry as a software engineer ever since.
Noah Weisblat: Cool. We'll start big picture and then we'll get into the nitty gritty. So how do you see AI in the software engineering space in general? Do you see it as a positive or a negative? How do you feel about it in the space specifically, and then we can dive further.
Kamari Wright: In general, it makes the bar a lot higher for your engineers. So I think it makes life harder and work harder, and entry-level roles harder for your junior engineers. For your upper mid-level senior-level engineers, I think it's an opportunity to 5x to 10x the amount of output that you're able to provide for your business or whatever value you're looking to provide through software. Yeah, and that's kind of it at a high level. I don't know if you want to keep going, or I can wait on that unless you ask more questions.
Noah Weisblat: Yeah, no, I got you, more questions. So just in your day-to-day as a software engineer, of those kind of big five AI systems, which ones do you use?
Kamari Wright: Anthropic? Huh, interesting question. So I actually use all three every day, and for me, Claude Code is just head and shoulders better than any other AI agent out there that can support doing actual software engineering tasks. And then for general questions day-to-day, just kind of planning and trying to figure out what work we're gonna do next or how we're gonna tackle a problem, I'm working with anything from OpenAI, so that's mainly ChatGPT and some of their deep research models.
Noah Weisblat: What's the difference, and where would you put Claude Code versus Cursor, kind of in the developer area?
Kamari Wright: ...work. And unless you give it an MCP connection, it doesn't have any interaction with you. So yeah, that kind of gives an overview of where those two fit for me.
Noah Weisblat: Interesting. So if I give you the option between Claude Code, Cursor, or actually, let's do four. I give you an option between Claude Code, Cursor, a sophomore engineering student from Ohio State, or someone who just graduated from Ohio State with a degree in software engineering, who would you take? Rate those from one to four on who you would rather have on your team.
Kamari Wright: Oh, okay, so I need more context. Am I planning to work with this system or this individual long term? Like, is this a person that I'll be working with a year or two from now or three years from now?
Noah Weisblat: Good question. No, it's just one project. It's about a three-month project, but it is just that one project.
Kamari Wright: Oh, in order.
Noah Weisblat: Yep.
Kamari Wright: This is going to stink, but I will tell the truth. In order: Claude Code, Cursor, then the individual that just graduated, and then the sophomore in college.
Noah Weisblat: Interesting. Okay.
Kamari Wright: ...Like, I'm having to explain the basics of TypeScript or the basics of Python, and then not only that, I have to explain the framework on top of that. And then we're talking about product requirements and technical requirements. And then, on top of that, sophomore engineering approaches. That alone is going to take me three months just to give you a brief overview of. So it just would be ineffective. But now if you tell me, "Hey, this is an individual I'm going to work with for two or three years, or put a long-term investment into," by all means, it's worth more now than ever to get somebody to work with and teach them and train them for long-term benefit.
Noah Weisblat: When would you take the recent grad engineer over Claude Code? Once again, if you could only pick one, which is a situation that, barring an Nvidia collapse, probably wouldn't happen, what would be the time frame where you would feel comfortable going with the recent grad over Claude Code? Would it be a year, two years, five years? What would be that time frame?
Kamari Wright: ...Software engineer? And it's like, "No, still be a software engineer," because you have these tools. You can learn that much more quickly and be that much more effective if you have a little bit of guidance. So this, by all means, does not mean, "Don't go be a software engineer." It just means the workflow and the approach look different. Yeah.
Noah Weisblat: Everybody says, and what you're saying too, is there's also a big difference between a software engineer who's proficient with Claude, proficient with Cursor, and who uses those systems on a daily basis, versus someone who, and I'm sure this might be a bit more rare these days, doesn't really use them or doesn't really know how to use them all too well...
Kamari Wright: ...Engineers who have yet to touch some of these tools. Like, at your major companies that are in the tech space but not tech-focused, a lot of those engineers are just now getting access to ChatGPT, just now figuring out who Anthropic is as a company, and just now finding out about Claude, let alone Claude Code. So it is not as rare as you think to find an engineer that has yet to touch any of these tools. Man, we are still extremely early and at the forefront, which is cool to see.
Noah Weisblat: How often are you using Claude itself? Because I hear you talking about Claude Code and Cursor, which are different, I believe, from just the LLM of Claude itself. So how often are you using the actual Claude versus Claude Code, or are they the same thing? Explain that to me like I'm five years old.
Kamari Wright: Oh yeah, good question. So Claude and Claude Code are not the same thing. Claude is more of an LLM without the tooling built around it. So it's better at, you know, answering basic questions like, "Hey, what is the weather going to be like a week from now?" You know, your standard ChatGPT questions that you might ask just for general information. Whereas Claude Code is an AI agent, meaning that they've taken the core Claude model and built all of this tooling and software and logic around it to help software engineers be more effective. Hopefully that answers that question.
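The "LLM plus tooling" distinction Kamari describes can be sketched as a toy loop: an agent wraps a model in a cycle that runs tools and feeds the results back in until the model produces an answer. Everything below is invented for illustration (`fake_model`, the message format, the `list_files` tool); no vendor API is involved.

```python
def fake_model(messages):
    """Stand-in for an LLM: looks at the last message and either
    requests a tool call or produces a final answer."""
    last = messages[-1]
    if last["role"] == "user":
        # The "model" decides it needs a tool to answer the question.
        return {"tool": "list_files", "args": {}}
    if last["role"] == "tool":
        # With the tool result in context, it can answer.
        return {"answer": f"The repo contains: {last['content']}"}

def list_files():
    """A 'tool' the agent can run. Real agents shell out, edit files, etc."""
    return "main.py, utils.py"

def agent(question):
    """The agent loop: call the model, run any requested tool,
    feed the result back, and repeat until an answer comes out."""
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = {"list_files": list_files}[reply["tool"]]()
        messages.append({"role": "tool", "content": result})

print(agent("What files are in this repo?"))  # → The repo contains: main.py, utils.py
```

A bare LLM stops at the first step; the loop around it, with access to the file system, shell, and tests, is what makes a coding agent like Claude Code useful for real engineering tasks.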
Noah Weisblat: Yep, that does. Thank you.
Kamari Wright: Yeah, and then, Oh yeah, go ahead.
Noah Weisblat: No, if you had something to add, by all means.
Kamari Wright: Yeah. So, um, Cursor is not a large language model or an AI agent at all. It's a tool where we write code, an editor, that has an AI agent built in, and a couple of large language models built in as well. So what it does is it gives you the ability to stop writing code, put that code into either an LLM of your choice or an AI agent, and let it do its thing. So it's just a tool to help you move more efficiently and effectively as an engineer.
Noah Weisblat: ...I've also had some experience in engineering, but just give me an idea of what's true and what's hyperbole in terms of being able to "vibe code" projects, as they call it, as a non-engineer, and where you see that space?
Kamari Wright: ...system. And that, by all means, cannot go to production. So hopefully that answers your question. I think it did. But yeah, man, vibe coding is great for MVPs, and for those who are technical, your tracer bullet approach. Great for that, but for production-ready systems, not at all.
Noah Weisblat: Yeah, that makes a lot of sense, and I think I've seen, probably from you reposting on Twitter, honestly, someone talk about that on the security side, where they said vibe coding is great up until any security measures are needed, and then it gets into a bit of trouble.
Kamari Wright: Yeah, very much, very much so. There are countless examples of people getting their apps hacked in a lot of these communities on X.
Noah Weisblat: Yeah. So I wrote this as a quick question, but I feel it doesn't really encapsulate it; it's kind of hard to answer given the way you actually use AI. The question initially was: what percent of your job would you say you're using Claude Code or an AI agent for, and what percent is, more to the point, automated? But I'm going to change that question. If you want to answer the original, you can, but since you're using Claude Code a lot, what percent of your job are you using Claude Code or Cursor or AI for in general? Not completely dropping it off to AI and letting it write everything, but using it in some capacity. And then we'll move forward in the same direction.
Kamari Wright: Oh, good question. I would say the only thing I don't use it for is communicating with my teammates in Slack. But for literally every other aspect of my job, I am using some form of LLM or an AI agent. And that's not to say that I'm literally typing it in and walking away from the computer, but it's making the, I can't say mundane, but the grunt work of my job a lot easier. So, literally anything that requires me to type a whole bunch: if I'm coding in our codebase, I'm letting Claude Code do that, but at the same time, I'm designing the classes and designing the implementation, so I'm feeding all of that into it and then letting it do the grunt work for me. I like to say this, and it's kind of terrible, but it's true: I treat all LLMs and AI agents like a junior software engineer. So my job has now turned from typing in the codebase all day and, you know, trialing deployments, to feeding context to a third-party engineer in Claude Code, letting it spit out a PR, reviewing it, and then being responsible for the deployment and making sure the test cases pass. And then on the other side, because I'm a team of one on our product, for documentation I'm giving it the high-level goals and the high-level, how can I put it, the high-level feature set, and it'll go and fill out the details in the PRD, and I'll read through it and say, "Yes, no, change this, move that," or "Hey, this was a good idea, let's put this on the back burner for later." So, in terms of my job, where am I using it? If I had to give it a percentage, about 85-90% of my job, but that's not being automated away from me. It's just where it's being used as a tool, you know, and it's being used as a tool in 90% of what I do.
Noah Weisblat: Yeah, that makes a lot of sense. Okay. So what percent then, and I kind of see that almost as an example of AI, just for say, a writer, AI is like your computer in the sense that you're using it 90%, you're not having it do all the work, but it makes it a lot easier and a lot quicker. What percent of your job would you say is "drop it in the bucket AI," you can put in what you want, walk away, and it will get it done for you?
Kamari Wright: Good question. Um, at this stage, literally any configuration change. So we have a lot of configuration being managed in JSON, and a lot of configuration being managed in Python settings files. So if any of those things need to be changed, or like a Sass file for one of our clients, sure, Claude Code can knock that out. How often does that happen? Maybe once a month. So maybe five percent of my job, maybe close to 10%, is drop it and walk away. But I would say closer to five, because even then, I'm reviewing the PRs just to make sure it didn't hallucinate in some way or make some mistake or make some change that I didn't expect. So yeah, I would say about five percent of my job is completely automated.
Noah Weisblat: Interesting. And then the last piece on this topic, let's say before AI, you were a 100% efficiency engineer. If you had to give a number in terms of your efficiency now, just compared to that 100%, and say you're twice as efficient, you'd be a 200%, what would your efficiency number be now that you have all these AI tools at your disposal?
Kamari Wright: I would say I actually know the answer to this question. It's going to sound ridiculous. I'm about three and a half, like 3.5 times more effective. So I would say what, that'd be like 350%? Um, yeah, man, once you master your workflow...
Noah Weisblat: Yep.
Kamari Wright: And I have, by no means, mastered my workflow whatsoever. Like, I'm still improving it every day. But the closer you get to mastery with your workflow and where AI fits, man, you can accomplish a lot. So I am not bragging by any means whatsoever, but um, because of my workflow, not only am I one of the most effective engineers at our company, but I am able to now accomplish what would have taken me two weeks of work in about two to three days.
Noah Weisblat: Okay. Wow.
Kamari Wright: Yeah, so, I'll give an overview of my workflow. I put a lot of topics into our Jira-based tool, and I'll put a lot of context into those tickets, and then I'll feed all that context into an instance of Claude Code. I tell it to ask me questions and then spit out a PR, and then I'll give it the green light to do whatever it wants, as long as it stays on a separate branch and in a PR. And then I'll let it spit out those PRs and review them or make changes myself, and I'll do that for whatever tickets I have in that week. So Monday, I might let it spit out all the code at once for all of those tickets. Tuesday, I'm reviewing all of those things that they've done, "they" because I run multiple instances of Claude Code. So I'll read through those PRs on Tuesday. Some of those will get approved on the spot, the easier ones. A lot of those will have 20 to 30 comments from me saying, "Change this, move that, this doesn't work. Why did you do this?" And then Tuesday afternoon, I'm addressing those changes, making some of those changes myself, kind of guiding the tool. And then Wednesday, we're deploying and making any final changes. So yeah, man, once you master your workflow, it is...
Noah Weisblat: Interesting.
Kamari Wright: A complete game-changer.
Noah Weisblat: Very cool. So I have one or two more questions on your day-to-day workflow, and then we'll close out with some talk about kind of just the AI race in general. What are some challenges that you've had with AI, just in that day-to-day, and kind of how have you solved them?
Kamari Wright: Oh man. Um, so once you know the core idea of what these LLMs are, you will quickly see their vulnerabilities, and the main one is hallucinations. Its job is to predict the next best token for you, which could be something that it thinks is true but really isn't, and I've had a lot of instances where it hallucinates solutions. So, like, creating API keys that don't really exist, though I've had one or two instances where it's created API keys and they actually work. Or, you know, some of these models are, in effect, told that it's a year ago, if that makes sense. So today might be July 2025, but they're working in the context of July 2024. So any date-based changes, any changes around an API: if you're saying, "Hey, integrate with Google Drive," and their API has changed in the last year, it is going to give you the old implementation. Um, other limitations that I've seen, let me think of some good ones. Ah, over-engineering is a great one. A lot of times it tries to meet you where your knowledge is, and sometimes you don't need the most proficient or the greatest engineered solution, and it will try to provide it. So if you're saying, "Hey, integrate with Stripe," for example, and you don't rein it in or tell it, "Hey, give me a simple solution," it's gonna give you error handling, retry logic, queuing, and it's like, "Hey, I don't need all of that. I just need you to integrate with the API, do that simply, and write a couple of tests." So for me, to answer your second question of how I get around a lot of these things: it's really about context, man. The more context you can feed into the model, and the more context you can feed into the tasks you're asking for, the more likely it is to give you an answer that's close to what you're looking for, if not the right answer out of the gate.
So people like to focus a lot on prompt engineering, and I think that's fine. Focusing on prompt engineering is like level one, but on top of prompt engineering is context engineering, and the better you are about feeding context into these models, the better solutions and output you are going to get. So, what context is, because I'm using this word and haven't explained it: context is information that goes alongside your question, right? So if I'm saying, "Hey, write a test case for all of my integrations," it's gonna do it. But what's considered an integration? You could be integrating with Prometheus, you could be integrating with Redis, you could be integrating with Stripe, all of the different tools. But if you're saying, "Hey, write tests for all of my third-party integrations," and then, "Oh, by the way, I want you to write tests in this format," it's going to give you pretty much exactly what you're looking for, within reason. Like, the naming conventions might suck and the like. But my point is: how much information can you give it upfront? That's the context, and the better you are about context, the more likely you are to reduce hallucinations, reduce over-engineering, and the like. And then one more thing, I know I'm talking a lot, but I'm sorry. A lot of these tools have core rules files. So, Cursor has Cursor rules, and Claude Code has CLAUDE.md. These are files that it reads every time for context. One thing I really like to do is spend a lot of time upfront feeding as much information as I can into those files, because that will be the backbone for how it approaches things. So, all of your software engineering approaches, all of your beliefs: if you believe in inheritance over composition or vice versa, like most engineers do, you can tell it that, and it'll make certain choices depending upon what you put in that file.
So, that's also what I like to use as well.
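As a concrete illustration of the rules files Kamari mentions, a project's CLAUDE.md might look something like the sketch below. The specific stack and conventions are invented for the example; the point is that engineering beliefs, test conventions, and workflow rules live in one file the agent reads on every run.

```markdown
# Project context for Claude Code (hypothetical example)

## Stack
- Python services; configuration lives in JSON files and Python settings files.

## Engineering beliefs
- Prefer simple solutions: no retry logic, queuing, or extra error
  handling unless the ticket explicitly asks for it.
- Prefer composition over inheritance.

## Tests
- Write tests for every third-party integration (Stripe, Redis, etc.).
- Follow the team's naming and formatting conventions for test files.

## Workflow
- Ask clarifying questions before writing code.
- Always work on a separate branch and open a PR; never push to main.
```

Front-loading this kind of context is what turns a generic model response into one that matches how the team actually works.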
Noah Weisblat: Interesting. Thank you for that.
Kamari Wright: No problem.
Noah Weisblat: Okay, so I have a two-part question, and I want you to answer it sequentially, but I'm going to give you both parts at the same time, just so you don't contradict yourself. You've got to weave through some barriers. So the first part of the question is: where do you see AI and this space specifically, with Claude Code, with Cursor, with AI agents related to coding, in five to ten years? Maybe just five years, even. Where do you see it developing in five years? And the second part of the question is: what is your message to someone who is either, A, just starting out, maybe a senior in high school looking at different majors and thinking, "Do I want to go into computer science, given where the space is going to be?" And then, B, what is your message to someone who graduated maybe a year ago with a computer science degree and is thinking, "Do I really want to continue with this path? Is my job about to be fully automated soon? Should I just go become a bricklayer in Cuba? What should I do here?" So, yeah. The first part is where do you see the space in about five years, and then two, what's your message to either a senior in high school trying to decide on their major and whether they should pursue software engineering, or someone who just graduated and is wondering if they should change career paths?
Kamari Wright: Oh, great question. So, in five or ten years, I see these models being able to accept a lot more context, because that's the limitation: how much can we feed into these models at any given time? So obviously, as technology improves and solutions become more efficient, they'll be able to accept more context, which will be great. But I also see these tools becoming specialized, so you'll have models for your restaurant space, models for your airline space; models that become really proficient at certain domains. Do I think I will still have a job? Yes, we all still have jobs, but we will be able to be more effective and efficient. And then, on the flip side of that, I don't know where that leaves us in five to ten years. If we're able to efficiently solve what would normally take us six or twelve months in two or three months, I don't know if we take on more work or people become more leisure-focused. I have no clue, but I think our world and our dynamic will shift in the next five to ten years due to these tools. And then the second part: what does that mean for your senior in high school, and what does that mean for your person who just graduated? If you're a senior in high school, pay attention, pay attention to the industry you're interested in, and if you're interested in computer science today, now is the best time to go learn it. And the reason I say that is because what would have taken you a four-year degree, and then some in the summertime, to learn and pick up, you can now learn in a fraction of the time. My experience could have been stuffed into two years by working with a tool like ChatGPT or a tool like Claude to throw ideas back and forth and ask questions, to help me learn more efficiently. So if you're a senior in high school, go take advantage of the opportunity in front of you.
The playing field is now more level than ever, so go take advantage of it if you're interested in computer science, and then go figure out how to make some money from it. Because, honestly, I always say this, but I now mean it more than ever: you don't need the degree, you just need the experience. So the earlier you start, the better. Actually, I'm talking a lot, but there's a group of three high schoolers that built an app called Calorie AI. They actually just graduated this past June, but their app is now worth 3.2 million dollars, I think. It's an app that I actually use every day, where you take a picture, feed the picture into the AI model, and it tells you all of the micros and the macros in what you just ate. Those individuals are an example of what you can do just by using ChatGPT to learn the information instead of having it do the work for you. Now, if you're somebody who just graduated, your responsibility now is to go learn a workflow. So let's say you got a computer science degree and you just graduated in May. I would tell you: go figure out how to introduce AI into your workflow, and then go use it to build an artifact. Whether that be a calculator app, or something you're passionate about, like sports betting for some, or analytics around working out, choose whatever you're passionate about and go test out your workflow by building something. There are still companies that will hire you, and your resume looks a lot better when you can say, "Hey, I'm not afraid of this tool; I've learned how to master it to make me more effective and efficient." You're literally extremely valuable to any tech-focused company that's out here. So for both individuals, your senior in high school and your person who just graduated, I would tell them, "Don't listen to the 'dooming and glooming' by any means.
Figure out how to take advantage of your opportunity and figure out how to introduce these tools into your workflow." Yeah.
Noah Weisblat: That's great advice. Well, all right. So I've got one more question left for you, and then I think we can call it a day. The last question I have for you is the AI race. There could be new companies approaching, but once again, what I call the "Nvidia 5," and I'm actually going to include Nvidia in the AI race. Give me your one to six of who you think is best positioned right now in the space of AI: Nvidia, OpenAI, Anthropic, xAI, Google Gemini, and Meta AI, who's spending a trillion dollars, figuratively, but...
Kamari Wright: Almost literally.
Noah Weisblat: Hiring, um, hiring new staff to try and get back into this race. Give me your one to six.
Kamari Wright: Ah, this is gonna sound terrible, but it's true, and this is partially biased, so I am sorry for whoever's hearing this. One is Nvidia, and then two is everybody else. I honestly think the way we look at this is incorrect. I don't think there will be an AI model to rule the world, or one LLM to rule them all, with everybody else falling by the wayside like MySpace did with Facebook and the like, because of how the math works in the system, but I'll get to that in a second. The reason I say Nvidia will win and they're number one is because they're focused on the hardware, and they're also focused on the communication between the hardware and, how can I put it, the operating systems, if that makes sense. They're in the race of language models, but that's not their focus, because they understand that all of these models will need to run on some set of hardware. So they're focusing all of their time and energy on that, and that transcends any large language model. Google, Anthropic, OpenAI, even DeepSeek or Kimi, the companies who built those in China, they can all use Nvidia, and that's what you're seeing now: all of these companies are running to Nvidia because their chips and hardware are AI-optimized and AI-focused. So I do think Nvidia will, quote-unquote, win the race of AI by virtue of focusing on hardware. But I do think all the other companies will each have their own domain that they're good at. Like, I can see Google Gemini being really good at general tasks, but then on the flip side, being good at working with all of the systems that Google has built over all these years: really good at working with Google Drive, with Google Cloud Platform, really good at owning that domain, but then also owning the domain that comes with, you know, your Android devices and their phones.
Whereas I can see ChatGPT and OpenAI being really good at just your general day-to-day tasks and your general day-to-day items. And then the way Anthropic is going, you can just see their focus and their brilliance being software engineers. So you can see Anthropic, in my mind, being the go-to for all software engineers. And then, say what you want about Elon, say what you want about X, I think that model is gonna be really good at being unfiltered, just by virtue of how they train it and their philosophy and their approach. And I think it's gonna be really good at understanding and navigating the social media space, which is a whole other topic. And then, oh, Llama and Meta. I think they're gonna be good at what they've always been good at: open source and giving us a foundation. I don't quite know what that looks like, but they've already started, because Llama was the first model that you could run literally on your laptop. No other model was allowing you to do that until they did it. And then obviously DeepSeek allows you to do that now, and there are some others, but it started with Meta and Llama. So yeah, I think they all dominate spaces, but Nvidia, you know, Nvidia is like the arms dealer to them all. So I think Nvidia will win there.
Noah Weisblat: Explain to me what that means to you as a software engineer. What is the importance of it being open source, and how does that affect what you're doing? Just explain that to me like I'm 10 years old.
Kamari Wright: Oh, good question. So there's closed source, and there's open source. Open source means that we can go today and understand all the inner workings of that thing, right? As opposed to Facebook: none of us know how Facebook works, how the algorithm does what it does. That's closed source, right? So Meta, although they have Facebook and Instagram, those things are closed source. But Meta on the developer side has been really good about creating tooling and things where we can go download it and make it our own. The main example would be React, which is a framework that literally every front-end engineer has touched or used, and that came from Meta and their prioritization of open source, right? So the same reasons React being open-sourced was important, and how that revolutionized front-end engineering, are the same reasons Llama being open-sourced could revolutionize what we're doing with large language models and AI: because we can understand how it's being trained, and we can take it and make it our own. And a lot of these tools that we're talking about don't allow that. I cannot remember if Google Gemini is open source, but I know Llama is and I know DeepSeek is. The reason that's so valuable is because it levels the playing field for every engineer across the world, right? So Claude Code, right now, for me to use it, I pay a hundred dollars a month, but everybody doesn't have access to a hundred dollars a month in America. And then once you go to, you know, engineers in various countries throughout Africa and South America, that barrier to entry is that much higher. But imagine if you had some tooling that was free, that wasn't quite as good but close enough. That almost levels the playing field for all of us, right? And gives all of us access to LLMs and to those sets of tools.
So the reason open source is so important is because, for engineers like myself ten years ago, we didn't have the money to pay for the best and the greatest, so we had to find a free way to do it. And then for the capitalist in me, it can now reduce my spend, right?
Noah Weisblat: Yeah.
Kamari Wright: So it reduces my overhead, which then increases the margins that I have. So if I have knowledge on how to train Llama myself, that's one less thing I have to pay for.
Noah Weisblat: Interesting. I didn't know that. That's incredibly beneficial, especially, like you said, for those in different countries across the globe who might not have a hundred USD per month to pay for some of these tools to be able to use them and learn them. So shout out to Meta for that. Okay. So last thing, I want to give you the opportunity: if you had anything else you wanted to say or talk about before we close out that I didn't get to in my questions, do you have anything you want to talk about?
Kamari Wright: Yeah, I have one thing I want to say. Um, so Gil Scott-Heron, a famous musician who's now deceased, God rest his soul, he used to say something that was very powerful and I've never understood it until now. He used to say, "The revolution will not be televised." I never understood what it meant, and to some level, it's about consciousness and how we think. But you can take that and apply it to this technology and understand that this is a revolution in terms of how we approach working as a whole and just changing the dynamic of just how we accomplish tasks and how we accomplish goals. And it's not being televised. If you turn on the TV today, there's the Wimbledon final. There's Fox News talking about interest rates. There's CNN talking about politics, but nothing about LLMs or AI whatsoever. The revolution is not being televised, but it's happening now. So if you're interested, if you're thinking about it, if you work in anything from sanitation to flying planes, it can help you in some way. So find it, take some time out, go use it, go learn it, and figure out how it can make you better.
Noah Weisblat: That's awesome. Well, thank you, Kamari. I appreciate your time. I feel like we could talk on this for hours, but I think we'll end it here, so I appreciate your time and I look forward to maybe having you again.
Kamari Wright: No problem, man. And I'm always happy to join and thank you again for the opportunity.