Full Transcript
03:40 Noah Weisblat: Hi Malinda. Thank you for joining me. Please tell me about yourself and give me a little background.
03:47 Malinda Street: I'm pursuing my law degree at Case Western Reserve University. For a policy class, I've been researching Utah's new requirement that officers note on any report whether generative AI was used to compile it. As I started looking into this, I saw that the implications of AI in law enforcement are huge and introduce interesting problems. That's too broad for a single policy class, so I narrowed my focus to AI in police reporting and how that affects public perceptions, how it can benefit law enforcement agencies, and some of the pros and cons we're looking at there. So yeah, that's me and where we are right now.
07:24 Noah Weisblat: How did you end up coming from Utah to Ohio?
07:32 Malinda Street: What I want to do is help protect people from government overreach, and that's not a super popular discussion to have in some circles. I was finding that the vibes, I guess, just weren't meshing with me at some of the other schools I had applied to because they didn't have that priority. But Case Western did, so I think it was a really good fit. I've loved it. And if I'm honest, I don't know that I could have handled traditional, in-person, full-time law school with my kids. So, it's been a really great fit. I love it so much.
09:09 Noah Weisblat: When did you first hear about AI? What was your first impression, and how has that impression changed over time?
09:30 Malinda Street: It affects people's privacy, how transparency issues play out, and how we hold people accountable. Who do we hold accountable when somebody uses AI to put together a report that ends up having an error? That's something we need to be asking and considering as we start implementing AI. My husband is more the tech guy, so he frequently uses AI to say, "I feel this way and these are my frustrations with this issue. Put it together in a work email that sounds civil." And I think there's some great value there.
But like I said, I think it's probably not going away. We're going to have to learn to use it. I think it will do to English and humanities classes what the calculator did to math classes. There was a time when that was a big deal. The question now is, how do we use this tool in a way that enhances our learning and our thinking processes, rather than just eliminating our need to use our brains?
13:38 Noah Weisblat: It sounds like you're for using AI as a tool to help in certain jobs and in school, but you're hesitant to be quote-unquote "pro-AI" because of the concern that it's going to replace humans or take over. Those are two very different things, one being taking over jobs and the other being full autonomy or some of the crazier theories out there. In terms of AI agents replacing jobs, why would you say you're against that in general?
15:01 Malinda Street: Let me try to put this in a way that makes sense. I think there are jobs that can be done by AI, but I don't believe we should let them function without oversight. If we turn over jobs to AI, we have to act as supervisors.
15:57 Noah Weisblat: Yeah, that makes sense. I think along those same lines too. I'm probably a bit more pro-AI in terms of its capabilities, but I would also say that if AI is going to take certain entry-level jobs, I see those jobs being reformed in the same way that new technology reforms jobs every day.
16:24 Malinda Street: Right. I mean, we've seen it with the Industrial Revolution. People used to make things by hand, then we got machines. I think this will cause some growing pains as we try to redefine what our society looks like, how we function, and what the new jobs are. But I think the most important thing, like I said, is that we see this as a tool and not just a human replacement.
16:48 Noah Weisblat: Yeah, that makes sense. Okay, so let's get further into your work. How do you see AI being used in law enforcement right now? You mentioned guidelines recently put in place in Utah requiring officers to report what they're using AI for in police reports. Is that the main AI guideline across precincts in the U.S. or across Utah? What can you tell me about how AI is being used in law enforcement right now?
17:25 Malinda Street: Policing here is organized below the state level, and there are really great reasons for that, but it makes it interesting when you try to talk with people about complying with a state-level policy. They're like, "Yeah, it's definitely on our radar." The approaches range all over the place.
There are agencies that have been approached by tech companies. The one that almost has a monopoly in law enforcement is called Axon. They've created a program that will take officer body cam footage, create a transcript from the audio, and then use a version of ChatGPT to generate a report. They intentionally incorporate errors or leave blank spaces that an officer has to fix to ensure a human is providing that necessary oversight. Quite a few departments in northern Utah have done a trial period with this technology.
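To make that human-in-the-loop idea concrete, here is a minimal Python sketch of a drafting flow where a generated report can't be filed until an officer resolves every placeholder. It's an illustration of the concept only, not Axon's actual Draft One implementation; the function names and placeholder text are invented.

    # Conceptual sketch only (not Axon's Draft One): a draft generated from a
    # transcript is seeded with placeholders the officer must resolve before
    # the report can be filed.
    PLACEHOLDER = "[OFFICER: VERIFY AND COMPLETE]"

    def draft_report(transcript: str) -> str:
        """Stand-in for the LLM step that turns body-cam audio into a draft."""
        # A real system would call a language model here; this sketch just
        # wraps the transcript and leaves key fields blank on purpose.
        return (
            f"Narrative (auto-drafted): {transcript}\n"
            f"Suspect description: {PLACEHOLDER}\n"
            f"Officer observations: {PLACEHOLDER}\n"
        )

    def ready_to_submit(report: str) -> bool:
        """The report cannot be filed while any placeholder remains."""
        return PLACEHOLDER not in report

    draft = draft_report("Responded to a reported theft at a downtown address...")
    print(ready_to_submit(draft))   # False: a human has to edit the draft first
    edited = draft.replace(PLACEHOLDER, "details confirmed by reviewing officer")
    print(ready_to_submit(edited))  # True once every placeholder is resolved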
Then you also have officers who are just using ChatGPT on their own—going rogue and having it put together their reports for them. And then there's this upper level of supervisors in law enforcement who think AI seems like a really bad idea. They say, "I wouldn't trust it to put together a quality report, I will never use it." This informs their policy approach; they assume because they wouldn't use it, their colleagues won't either, so they kind of turn a blind eye to it or maybe just wish it away. They have other priorities; we ask a lot of our law enforcement officers, so it's not their top priority (Baird, 2025).
This goes with the way law enforcement operates in the States. We don't take action until we have probable cause, and that kind of bleeds over into policy, where we don't take action on AI reporting until we have reason to believe it's an issue. Unfortunately, in policing, I feel like things have to hit the fan and there has to be a really negative news story before an issue gets the attention it needs. What that means is agencies are not going to be able to shape the discussion of what acceptable AI usage looks like; the companies will. They're going to move in and say, "Try this free thing, do this trial," and get them in with their little hooks.
One of the things I'm learning is that it's hard for agencies to switch systems. There's a sunk-cost fallacy. Once they have a research database that compiles all their data, it's so hard to switch to another one. There are a lot of very outdated models still being used because the data is in that program. You can try to make the switch and teach officers two different programs as you phase one out, but again, that's not a top priority for them right now.
It's similar with AI usage. Tech companies are going to come in and push it. If agencies aren't aware of the drawbacks and the strengths—and some of the stories they're being told aren't necessarily backed by data—they're not going to be able to shape this. Their priorities are so different from tech companies, which just want to make money. That's been the biggest thing I've seen as I've researched this: there's so much we don't know. It's such a new thing that, simultaneously, we have people who could probably be more involved and become experts, but they don't want to be because they're afraid of AI. Then we have people who are gung-ho about AI but don't really know about local law enforcement agencies, their structure, and the democratic ideals we want to uphold. It's hard to get someone in that sweet spot with both the expertise and the interest to balance all aspects of this decision.
So, I mean, look at me. I put on a blazer today and I'm an AI expert, right? But seriously, that's kind of how it goes. Anybody can do a little bit of research and then bill themselves as an AI expert and offer a training to officers. It's just kind of wild. So, as you look at how different departments across the US are using AI in their police reporting, it varies. I can't speak to all the departments within Utah, let alone the U.S.
One of the things that I have found in my research, though—there was a study done by Schiff and a group of other people, and they looked at how people perceive AI usage in law enforcement (Schiff et al., 2025). One of the things they very clearly found was that people are more accepting of AI usage at a more local level for law enforcement than they are at a national level. That doesn't really surprise us. We're more accepting of and familiar with the people in our neighborhoods. We want to trust the officers in our area more than we trust the FBI—the ones that are all-powerful and we don't know.
One of the really interesting things about AI usage is we don't measure efficacy and actual evidence as much as we measure people's perceptions of efficacy and evidence. You would think they would be in alignment, but they're not. It's something we see in psychology and in true crime. Everybody knows eyewitness testimonies are fallible, yet we still want to hear from eyewitnesses and their experiences. That's not always super reliable in giving us the actual facts.
27:20 Noah Weisblat: It's interesting what you brought up about how tech companies are approaching specific police precincts, and the fact that these companies, looking to make money, might be shaping more policy just by being there and pushing this stuff than the actual supervisors who may not have as much interest in AI usage.
27:52 Malinda Street: Sure. So one of the interesting things is Axon, this company I talked about—their stock has gone up like a thousand percent in the last year because they're going all-in on this AI thing, like so many companies are. They market their "Draft One" technology as being super efficient and time-saving for officers.
In one study, they asked officers how they felt about AI in the beginning, had them use it, and then asked them again at the end. Usually with new technologies, people are apprehensive at first, but positive reviews go up once they gain familiarity. The interesting thing in this report was that they didn't see much of an increase in how positively officers viewed the AI. Familiarity wasn't improving officer perceptions, which is unexpected.
They also asked, "This is supposed to save you time. Did you feel like this saved you time?" And the officers were like, "Oh yeah, definitely saved me time." But then they looked at the numbers, and it didn't save those officers time. A lot of this information that I'm giving you comes from an interview I had with Ian Adams. He has a background in law enforcement, boots on the ground, but he also has a PhD and an MPA from the University of Utah. He refers to himself as "the tech cop" (Adams, n.d.). He says they did one of the first studies to actually evaluate whether AI technologies were increasing efficiency among officers, and they weren't.
They did this study in the Manchester PD in New Hampshire, a medium-sized force. With their 85 patrol officers, they randomly selected half to be a control group that did reports like they'd always done. The other half was introduced to an AI tool that captured audio from their body cams and put it into a narrative. At the end, they saw no significant time savings for the officers using the tool (Applied Police Briefings, 2024). Axon claims an 82% reduction in officer time putting together reports, and that's just not borne out in this evidence. This was one of the first studies, and its findings have been replicated. But because of our societal perceptions, we just kind of go along with it because it seems logical that it would save us time. When it comes down to it, research is not showing that it is. The Anchorage PD actually walked away from their AI trial because they said it's not efficient, it's not saving the time promised, and it's not worth it to us.
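As a rough illustration of how a trial like that can be evaluated, the Python sketch below compares report-writing times for a control group and an AI-drafting group using Welch's t-test. The numbers are invented for demonstration; this is not the Manchester study's data or analysis code.

    # Illustrative only: invented numbers, not the Manchester study's data.
    from math import sqrt
    from statistics import mean, stdev

    control_minutes = [42, 38, 51, 45, 40, 47, 44, 39]    # reports written as usual
    ai_draft_minutes = [41, 44, 37, 46, 43, 40, 45, 42]   # reports started from an AI draft

    def welch_t(a, b):
        """Welch's t-statistic for the difference in mean report-writing time."""
        var_a = stdev(a) ** 2 / len(a)
        var_b = stdev(b) ** 2 / len(b)
        return (mean(a) - mean(b)) / sqrt(var_a + var_b)

    diff = mean(control_minutes) - mean(ai_draft_minutes)
    t = welch_t(control_minutes, ai_draft_minutes)
    print(f"difference in means: {diff:.1f} min, t = {t:.2f}")
    # A t-statistic this close to zero mirrors the study's finding: no
    # significant time savings, even though officers felt the tool was faster.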
So, that's been really interesting. As we look at policymaking, we want to balance democratic ideals—preserving liberties and privacy—with the tool's efficiency for officers. But we're not even 100% sure what these benefits are that we're balancing.
33:08 Noah Weisblat: Where do you think that conflicting reporting comes from? Between the company saying this is saving 82% of officer time and the actual case study showing it didn't save much time at all.
33:25 Malinda Street: You know, I can't say for certain. It could be that Axon is looking at officer perceptions, which would explain it. If you're asking people how they feel, chances are they probably said it saved them time. Another possibility is in the actual implementation. It could be that Axon is trusting their tech more than we should. It's possible that implementing AI responsibly, with the necessary human checking and reviewing, takes time, and maybe Axon was skimping on some of that. I'd have to get in a little deeper, but it is an interesting question.
34:30 Noah Weisblat: A couple more questions. So you talked about how Axon is using AI to help officers save time. The way I see it, there are a couple of different use cases for AI: time-saving, conversion—which doesn't really apply here—and then agentic AI, which essentially takes the human's place or acts as an intern. How do you see AI evolving outside of just that time-saving use case? For example, can you see AI acting as a detective in some capacity? It looks at 20 pieces of evidence, takes in DNA accounts, and then forms an opinion. Do you see that as plausible, and what concerns might you have with it?
35:40 Malinda Street: Sure. Can I go back to reporting first?
35:45 Noah Weisblat: Sure.
35:46 Malinda Street: I think there are potential applications for AI increasing quality in police reporting, which is still being studied. AI is really good at eliminating grammar and spelling mistakes, so it can help with readability and make officers look more intelligent, which is a big deal since 85% of police agencies in the US only require a high school degree. But is readability something we really need to prioritize? That's something we have to balance.
We want to make sure a report is complete, capturing all the elements of the crime. We also want it to capture the officer's perceptions of the circumstances, and to document the actions of state actors. LLMs are really good at completeness because they predict the next token. You can't tell them not to be complete, which is where some hallucinations come in. Most people are aware of the big ones, but there are smaller ones that are more interesting to me because we're more likely to overlook them.
Dr. Adams talks about the "yellow Adidas problem." He'll give officers a description of a suspect: six-foot white male, 200 pounds, red shirt, black pants, yellow Adidas. He'll feed it into the program, and the AI will spit out an almost identical description but will fill it in with "yellow Adidas shoes." It seems like a small thing. But if an officer fails to document whether it was shoes, a backpack, or a hat, that's a big deal. The program will automatically complete the phrase with the most likely thing, which is shoes. But if it was a yellow Adidas hat and then you go to court, the defense attorney can say, "Look, my client doesn't have yellow Adidas shoes, but he has a yellow Adidas hat. Officer, do you frequently mix up shoes and hats?" It's a huge deal. Some of the strengths of AI, like its need for completeness, can actually sabotage the quality and accuracy we're looking for.
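The next-token behavior behind the "yellow Adidas problem" can be shown with a toy Python example: a model that always picks the statistically most common continuation will append "shoes" even when the original description never named an item. The counts below are made up; a real LLM learns these tendencies from training text rather than a hand-written table.

    # Toy illustration of the "yellow Adidas problem": the counts are invented.
    continuation_counts = {   # imagined frequency of "yellow Adidas ___" in training text
        "shoes": 930,
        "sneakers": 410,
        "hat": 55,
        "backpack": 22,
    }

    def most_likely_next_word(counts: dict) -> str:
        """Pick the statistically most common continuation, as a greedy decoder would."""
        return max(counts, key=counts.get)

    officer_note = "six-foot white male, 200 pounds, red shirt, black pants, yellow Adidas"
    completed = f"{officer_note} {most_likely_next_word(continuation_counts)}"
    print(completed)  # ...yellow Adidas shoes -- even if it was actually a hat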
So, back to your question about using LLMs in detective work and combing through large databases. I think it sounds awesome. There are a lot of really good opportunities there. My husband and I were watching a law show last night, and the opposing counsel just discovery-dumped tons of documents on them. My husband said, "That would be so great to have an LLM go through that for you." And it is great; the possibilities are amazing because AI can scan all this data and look for patterns at a rate we could never perceive (Šaljić & Tomić, 2024).
That is really great, until we start considering the implications. One of the faults of AI is that it's based on human input, so it can be a great source of human intelligence, but also a great source of human bias. We train AIs off of what humans have done. If there are humans who have put together biased reports, that's going to show up in the AI. If the program determines that, statistically speaking, it's most likely to be an individual of a certain description, it's not looking at them as individual humans with agency. Maybe that person didn't actually commit the crime. But because they fit the description that AI has linked with this behavior, the AI is more likely to pass judgment.
That's another place where we need to be careful if we want to go that route. I had to stop pursuing that route for my project because AI in policing is such a big issue, and I had to focus just on reporting, which is where we're using it right now. But there are some really cool and some really concerning implications if we want to use it for predictive policing or just for combing through large amounts of data. I would imagine there would be some efficiency gains there for attorneys and law enforcement, but it definitely increases concerns about privacy, transparency, and accountability.
43:56 Noah Weisblat: Let's travel five years into the future. What do you see as the biggest developments in AI in policing? Do you see it becoming more efficient in reporting, or do you see it going beyond that towards what we talked about with predictive AI?
44:27 Malinda Street: I don't know. I think things will be slow-moving if left to the departments themselves. As I understand it, they get locked in with contracts and they don't update their tech as frequently as tech companies would like them to. So, there will be developments that won't be widely adopted in law enforcement just because the money's not there, or they don't have the manpower to educate everyone on how to use it properly, or because of that reactive mentality.
I think it will probably be slow, especially since it's a decentralized thing. As politics determine funding, if the powers that be decide our law enforcement would benefit from an increase in tech funding, then some departments will lead out. But I think it will be a slow-moving thing, not any sort of group action.
I think it will be shaped and driven by the tech companies pushing their products rather than by police officers, citizens, defense attorneys, and judges determining what would be best for our society. They're not the ones who are going to be able to shape this if they don't step in and insert themselves into the conversation. Otherwise, it's just going to be a little bit at a time. Contracts will run out, people will need to replace their tech, and Axon will have AI stuff they can use at a little extra cost. I think it will trickle in, and probably not in the best ways, because it won't be a choice people made for themselves, but one that was made for them.
47:15 Noah Weisblat: That's a great point. Do you have anything else that we didn't cover that you wanted to talk about today?
47:22 Malinda Street: I'm kind of stuck on the applications—where do we go from here, what do we do? And I think so many people are. My major takeaway is that as we go forward with this, just be skeptical. Just because someone puts on a blazer and calls themselves an AI expert, are they really? We need to watch out, because we're being sold these claims. Just because a company says something is efficient, we have to ask, "Is it really?" As a society and as individuals, as we adopt AI personally or as agencies, make it a choice. Don't let it be something that just happens to you.
48:12 Noah Weisblat: That's great advice. Well, Malinda, I appreciate your time a lot. This was awesome.
References
Adams, I. (n.d.). Ian Adams Research. Retrieved August 19, 2025, from https://ianadamsresearch.com/
Applied Police Briefings. (2024). Does an AI transcription tool reduce the amount of time patrol officers spend on report writing? APB, 1(1). https://appliedpolicebriefings.com/index.php/APB/article/download/5231/3767
Baird, A. (2025, February 18). Why police are divided on using AI to help write their reports. The Salt Lake Tribune. https://www.sltrib.com/news/politics/2025/02/18/why-police-are-divided-using-ai/
Šaljić, E., & Tomić, D. (2024). The Use of Artificial Intelligence in Investigating, Combating and Predicting Crimes. Pakistan Journal of Criminology, 16(4), 619–631.
Schiff, K. J., Schiff, D. S., Adams, I. T., McCrain, J., & Mourtgos, S. M. (2025). Institutional factors driving citizen perceptions of AI in government: Evidence from a survey experiment on policing. Public Administration Review, 85(2), 451–467. https://doi.org/10.1111/puar.13754