
Season 3, Episode 1: The Precrime Unit

Hi-Phi Nation

39m 36s · 4,312 words · ~22 min read
Auto-Generated

[0:01] Hi-Phi Nation, from Slate. Every Tuesday in downtown LA, the LAPD police commissioners have a board meeting that starts at 9:30 in the morning. Please call the roll. Good morning, let the record reflect Commissioners Soboroff, Decker, Bonner, and Goldsmith are present and we have a quorum. There are five police commissioners, all civilians and volunteers who are appointed by the mayor. The meetings are open to the public, and there's time allocated for public comment on every agenda item. I'm showing 11 cards. Anyone who requests it gets two minutes to speak to the board. There's a stop clock right in front of the podium that starts a countdown as soon as you start, and a bell goes off when your time is up. Then your mic gets cut off. The last meeting of the year 2018 was in mid-December, right before the holidays. The first item on the agenda couldn't sound more boring and bureaucratic. The commissioners are being asked to approve a donation from a charitable organization of $35,000 to the LAPD, to be used to reconfigure the existing conference room into a community safety and operations center for the benefit of Operations West Bureau. Steve Soboroff is the president of the Police Commission, and he's presiding over the meeting. There are four LAPD bureaus: Central, West, South, and Valley. The Community Safety Operations Center, or CSOC for short, started in South Bureau as a response to a rise in violent crime in South Los Angeles in 2016. Think computers doing data analytics, centralized intelligence sharing, that kind of thing. After a couple of years, South Bureau saw a significant reduction in homicides and gun violence. So the mayor, Eric Garcetti, pledged to spread CSOCs to every other bureau in 2018. Time for public comments. Go ahead. Good morning.

[2:04] Do you know what a CSOC is that you're going to approve $35,000 in funding for? What these operations centers are used for is to gather and store and analyze and share license plate reader, body camera, and CCTV surveillance data, which is then used to criminalize members of the community. And from there, individuals are identified and targeted. CSOCs prime officers so that they are more likely to use force. You are sending officers out into the field frightened, and creating this use of force that is resulting in people being killed. And that is where CSOCs come in, because they are the central nerve centers... Next speaker, please. Time's over now. Next speaker, please. There are probably about 40 people in the room, including myself, flanked by police officers on both sides. And 11 people spoke against the approval of CSOC funding. Many were members of the Stop LAPD Spying Coalition, a group of community organizers who have been suing the LAPD to release documents about their computerized surveillance policies. Afterward, one of the police commissioners, Shane Goldsmith, addressed the concerns before moving for a vote. Um, I will vote on this, I approve this, but I did want to just acknowledge the concerns that have been raised. Wait, it's her turn to speak now. My turn. Thank you. I know that nothing short of abolition of the police department and this commission will satisfy you. We have a motion for approval, please. So moved. We have a second. Shame on you. All in favor. Aye. Anyone opposed? All of them. All of them. So moved. 4-0. What's next? What's the next one? I'm going to ask you all to stop disrupting the meeting. If you can't stop disrupting the meeting, then the meeting will stop. Anyway.

[4:05] The approval of funding for the West Bureau CSOC is a small but symbolic step in LA's ongoing move toward predictive policing technologies. The goal is to have computer programs predict who, where, and when the next crime is going to occur and to direct police units to intervene and prevent it. At this meeting, the CSOC represents for one side how a once notorious police department can turn to technology for progress and reform, replacing the prejudice of human judgment with impartial data and algorithms. For the Stop LAPD Spying Coalition, algorithmic objectivity is a fiction, a cover. For the coalition, the CSOC is just another efficiency tool to target, incarcerate, and control racial minorities in a rapidly gentrifying city. It's a debate that will eventually spread across the country, because the technology is moving at a rapid pace and police departments everywhere are looking for an upgrade. One way to anticipate how cities around the country will react is to look at how the debate is unfolding in LA. You're rubber-stamping CSOCs and you're rubber-stamping the same goddamn policies. You're speaking from both sides of your fucking mouth. Your time is over. You're disrupting the meeting. So at this point, about six police officers in the room start moving in. And they open up a digital video recorder that one of them has been holding the whole time, and they start filming. Of course we want abolition of the police. Of course we want abolition of policing. Shame on you, Shane Goldsmith, and you're a fucking president of a... Ladies, that's your last warning or you're both going to be leaving. You're both going to be leaving. Both of them are out. Thank you. Thank you. Next speaker, please. Other people would like to speak. Other people followed the rules and were...

[6:06] You're out for the meeting. You're continuing to disrupt. Happy holidays to you too. Billionaire piece of shit. Next speaker. The next speaker is Adam Smith. From Slate, this is Hi-Phi Nation. Philosophy in story form. Recording from Vassar College. Here's Barry Lam. Steven Spielberg's adaptation of the Philip K. Dick story Minority Report is now 17 years old. Given what we know today, parts of it were prophetic, parts of it absurd. The film tells the story of a future where Tom Cruise pieces together psychic predictions of violent crimes on a high-tech computer, then sends out a team of cops to arrest and jail the perpetrator before the crime occurs. They're called the Precrime Unit. Real-life predictive policing is here now. The CSOCs that the LAPD commissioners have funded are the real-life equivalent of Tom Cruise's control room. Other than that, actual predictive policing programs show just how unrealistic the movie was. The crimes in Minority Report were all bourgeois fantasies: murders of cheating spouses, conspirators, and child kidnappers, all of them with affluent white victims and white perpetrators. In real life, predictive policing technologies target property crime, drug dealing, and gun violence associated with gangs, the kinds of things affecting communities of poverty and color. The other piece of fantasy in Minority Report is what it depicts as the central problem with predictive policing: people have free will, so the psychics might have been wrong. In the real world, free will isn't the issue. The real philosophical problems are more basic, maybe even harder. And there aren't any psychics, just statistical science based on a little criminology, a little anthropology, and a lot of data collection, all pulled together by the developing fields of machine learning and artificial intelligence. In the next two episodes, I'm guiding you through the use of statistical algorithms in criminal justice, from the streets to the prisons. Lots of forces are pushing to replace human judgments with computerized ones. And it's happening at a time when the rules aren't known and what counts as justice isn't obvious. Hi-Phi Nation will return after these messages.

[8:44] My name is Sarah Brayne and I'm an assistant professor of sociology at the University of Texas at Austin. Sarah Brayne is going to be my guide. Sarah embedded herself for years with the LAPD, doing ride-alongs, observations, and interviews, studying how these new technologies are changing the relationship between the police and the community. Sarah, let's start with the basics. What's PredPol? Sure. PredPol is a location-based predictive policing software, so it's used to predict where and when property crime is likely to occur in the future. Essentially it takes three kinds of inputs that are all part of historical crime data: when, where, and what type of crime occurred, with more recent crimes weighted more heavily. And then it outputs these 500-by-500-foot boxes where crime is more likely to occur in the future. Police officers are given these printouts or these images at the beginning of their shift and told to spend time in those predictive boxes. Tell me the range of things that officers do in the field with that kind of information. I mean, essentially they drive to those boxes and then they check in and out of those boxes during uncommitted time, meaning if the officers are not responding to a call, for example, or at the station booking somebody. They were told to spend their uncommitted time driving to those predictive boxes and basically looking around and seeing if anything was happening there. And of course intercepting if they saw a crime in progress, but a lot of it is just this deterrent strategy: if you're sitting there in this high-crime area, somebody who maybe was going to steal a car, if there's a cop car sitting right there, wouldn't steal that car. It doesn't sound dramatic at all. It sounds really boring and just what you would expect people to do. Like before, they would just guess, I guess there's a lot that's a chop shop over there or something. Yeah, exactly. That's the thing: the more you actually learn about predictive policing, the more you're like, oh, okay, it's actually not that different from what they were doing before. And it's true. Even the Stop LAPD Spying Coalition admits that these predictive policing technologies are continuations of existing practices that have long been a part of patrolling. The important question is whether that's a good or bad thing. Advocates of location-based predictive algorithms like PredPol claim to have a big advantage over ordinary person-based criminal profiling. PredPol never uses data about social categories like race, gender, age, or criminal history. In fact, location-based systems don't use any identifying information about individuals. And predictive software is a lot more transparent than, say, the human mind. PredPol's equations, algorithms, and controlled studies are all published in peer-reviewed academic journals. PredPol uses the same predictive models that geologists use to forecast aftershocks after major earthquakes. It's not clear why that works, but that's actually one of the points of using predictive algorithms. If a forecast is accurate, it doesn't matter why.
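PredPol's published model is a self-exciting point process of the kind seismologists use for aftershocks; the sketch below is only a minimal illustration of the core idea Brayne describes: bin past crimes into 500-by-500-foot cells and weight recent events more heavily. The cell size comes from the transcript; the decay rate, data format, and function names are illustrative assumptions, not PredPol's real parameters.

```python
# Minimal sketch of decay-weighted grid scoring -- NOT PredPol's code.
import math
from collections import defaultdict

CELL_FT = 500.0       # side of each predictive box, per the transcript
DECAY_PER_DAY = 0.05  # assumed rate at which older crimes lose weight

def cell_of(x_ft, y_ft):
    """Map a coordinate (in feet) to its 500 x 500 ft grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def top_boxes(crimes, today, k=10):
    """crimes: (x_ft, y_ft, day) tuples. Return the k hottest boxes."""
    scores = defaultdict(float)
    for x, y, day in crimes:
        scores[cell_of(x, y)] += math.exp(-DECAY_PER_DAY * (today - day))
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Three month-old thefts and one recent one, all near the same corner.
history = [(1200, 900, 1), (1250, 950, 2), (1210, 930, 3), (1230, 910, 29)]
print(top_boxes(history, today=30, k=3))  # the box covering that corner
```

The returned cells are the boxes an officer would see on a printout at the start of a shift.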

[11:55] But a criticism of PredPol is that it leaves law enforcement stuck forecasting crime in the same known neighborhoods and locations, making particular places feel occupied or overpoliced.
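That worry has a feedback-loop structure, and a toy simulation makes it concrete. In the sketch below (entirely illustrative, not a model of any real deployment), two areas have identical true crime rates, but crimes only get recorded where officers patrol, and patrols follow recorded crime. A one-crime head start locks in permanently.

```python
# Toy feedback-loop simulation: identical true crime, divergent records.
import random

random.seed(0)
TRUE_RATE = 10       # same number of true crimes per round in each area
recorded = [1, 0]    # area 0 starts with a single extra recorded crime
PATROLS = 10         # patrol units reallocated every round

for _ in range(20):
    total = sum(recorded) or 1
    for area in (0, 1):
        # patrols go where recorded (not true) crime is highest
        share = PATROLS * recorded[area] / total
        # each true crime is observed with probability tied to presence
        p_seen = min(1.0, 0.1 * share)
        recorded[area] += sum(random.random() < p_seen
                              for _ in range(TRUE_RATE))

print(recorded)  # area 0 ends at 201 recorded crimes; area 1 stays at 0
```

Feed those records back in as training data and the algorithm keeps forecasting crime in area 0, which is exactly the coalition's point about where the data comes from.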

[12:09]And the crime history data that PredPol uses also comes from law enforcement itself. And law enforcement has conflicting incentives that can affect the accuracy of their data. But things are starting to get fancier. PredPol is version 1.0 of location-based predictive policing, using crime to predict crime. There's no reason why algorithms have to be so limited. Hello. Yeah, here. Can you hear me?

[12:42]Machine learning researchers like Flora Salim can presumably use any data they have to see if a machine can find positive correlations with crime.

[12:58]Salim and her colleagues got their hands on check-in data from the mobile app Foursquare in Brisbane and New York City. If you don't know about Foursquare, it's an app you use to learn about restaurants, attractions, events, and so forth. For a while, it kept check-in data on all of its users. The data is anonymized, there's no identifying information, but you do have information about an individual's check-in history. And you can see how many check-ins are at a location and how that changes over time.

[13:39] There are a lot of interesting things you can measure using just check-in data. One example is diversity. You can tell whether a group of people tend to be into the same things based on their similar check-in histories. So if you can see that a lot of people with similar check-in histories congregate at location X, then X is a homogeneous location. If you find a location where people with very different check-in histories are congregating, then it's a diverse location. Another thing you can look at is the ratio of newbies to regulars at a place. And these things change over time as people are moving around the city.
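Here is a minimal sketch of how those two features could be computed from anonymized check-in logs. The published study's exact definitions may differ; the Jaccard similarity measure, data shapes, and names below are assumptions for illustration.

```python
# Sketch of two check-in features: venue diversity and newbie ratio.
from collections import Counter
from itertools import combinations

def jaccard(a, b):
    """Overlap of two users' check-in histories (sets of venue ids)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def diversity(visitor_histories):
    """1 minus average pairwise similarity: higher = more diverse venue."""
    pairs = list(combinations(visitor_histories, 2))
    if not pairs:
        return 0.0
    return 1.0 - sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def newbie_ratio(visits):
    """Share of this venue's visitors who checked in here only once."""
    counts = Counter(visits)  # anonymous user id -> visits to this venue
    return (sum(1 for c in counts.values() if c == 1) / len(counts)
            if counts else 0.0)

# Three users with very different histories all visiting venue "X".
histories = [{"X", "bar1"}, {"X", "gym2", "cafe3"}, {"X", "park4"}]
print(diversity(histories))                    # ~0.72, a diverse venue
print(newbie_ratio(["u1", "u2", "u2", "u3"]))  # 2 of 3 users are new
```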

[14:22]So what you do is give the computer all of this check-in data. Then you feed it crime reports for crimes like assault, unlawful entry, and drug dealing. And you do it with data for a set period, like six months, and let the machine scan all the data, cut it up, look at ratios and changes, and determine how check-in patterns are correlated with particular crimes. This is the training phase. Once that's done, the computer has come up with its model for predicting future crime, and you move on to the testing phase. You give the computer new check-in data that it hasn't seen before. You ask the computer to make a prediction about where and what kind of crime is going to occur. Now, here's the really ambitious part of the experiment. Salim and her colleagues were looking to predict new crimes that were supposed to happen within three hours of check-in. Their goal is to give officers real-time data about where to increase their patrols. PredPol predicts crimes that are supposed to happen within the next day. To see how well your algorithm does, you compare the computer's predictions with the actual reports that happened within the next three hours. Yeah, absolutely.
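The training-then-testing workflow just described can be sketched with scikit-learn. Everything here is illustrative: the study's actual model, features, and data aren't given in the episode, so the rows below are made up and the classifier choice is an assumption. The shape is what matters: fit on one period's (location, window) rows, then score windows the model has never seen against what was actually reported within three hours.

```python
# Sketch of the train/test phases described above (illustrative data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

# One row per (location, 3-hour window): [popularity, diversity,
# newbie_ratio]; label = 1 if a crime was reported within 3 hours.
X_train = [[40, 0.2, 0.1], [5, 0.8, 0.6], [30, 0.3, 0.2], [3, 0.9, 0.7],
           [50, 0.1, 0.1], [4, 0.7, 0.5], [35, 0.2, 0.3], [2, 0.8, 0.6]]
y_train = [0, 1, 0, 1, 0, 1, 0, 1]   # from six months of crime reports

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # training phase

# Testing phase: windows the model has never seen, scored against
# the crimes actually reported in the following three hours.
X_test = [[45, 0.15, 0.1], [6, 0.85, 0.65]]
y_test = [0, 1]
pred = model.predict(X_test)
print(precision_score(y_test, pred), recall_score(y_test, pred))
```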

[15:45] Here are some of the patterns that the machine found. In general, as locations become less popular, crime increases. Crime is likelier to happen as locations start getting newer or more infrequent visitors. And the more diverse a location becomes, the likelier crime is to occur there in the next three hours. What humans tend to do is come up with a theory, an explanation of why things pattern the way they do. But this isn't always a good thing. I once gave two lectures about this stuff, where in the first one I told the audience the real data: crime decreases as a place becomes more popular. The audience immediately came up with an explanation: more people make it harder to get away with a crime, so criminals target quieter areas. But in the second lecture, I said the opposite, that crime increases as more people check in. And the audience immediately had an explanation for that: more people mean more opportunities for crime. This is a virtue and a vice of human judgment. We're good at explanations, and we're good at being convinced of our explanations, letting them guide our thinking, whether they're true or not. The machines don't do this. They only make predictions, leaving the stories for the humans.

[17:14] Now, AI researchers aren't so naive as to think that their algorithms are unbiased or story-free. The claim, though, is that they're far less biased than people, who get too caught up in the stories they tell themselves and often ignore data completely. Salim and her colleagues were able to get accuracy without needing crime histories or locations and times. The mere movement patterns were enough. Who knows how accurate things can get when you start putting all of this data together, or bring in something like Fitbit data.

[18:13] A lot of people I talk to get creeped out at this point. But Fitbit data has already been used to solve crimes, pinpointing exact times of death. One sociologist even proposed that we use Fitbits to monitor the police, to predict those who are likeliest to have unusually high stress responses and to use force too easily. How much data about ourselves we're willing to give up in the interest of public safety is a hard question for all of us. But the unfortunate reality is that some communities have more power than others in settling this question. Government officials, police forces, affluent suburbanites: they generally win fights over how much they get to be surveilled. The actual issue is probably how much the affluent will sacrifice the privacy of the poor to secure their own safety. And on the ground, these aren't just theoretical concerns.

[19:53] They've got to hurry up. We're going to go inside. It's about the dismantling of CSOC. So you are denying that a CSOC exists? I'm not denying it, I'm just not aware of it. We will use those statistics to determine where we need to send resources.

[29:36] The designer of Operation LASER turned down my request for an interview, but I think I can charitably reconstruct his reasoning. A very small number of people are disproportionately responsible for crime in a neighborhood. When you strategically extract high-probability offenders in a laser-focused way, you're making neighborhoods safer with less collateral damage. And you're doing it in a race-neutral way, by only calculating probability of future offense using data that is correlated with future crime. If the consequence happens to be that young black and brown males are the only ones affected, it's because they're the ones satisfying all the race-neutral conditions of being likely offenders. When we look at all the different elements that are used to calculate the risk assessment of a potential chronic offender, we found that in 2017 the black community was arrested five times more often than the white community. When you look at stops, which essentially are what lead to field interview cards being filled out, the black community was five times more likely to be stopped. Even in parole and probation, more and more studies are identifying that the black community is more likely to be on parole or probation. So even though you claim race neutrality, the data can stand in as a proxy for race. The other perspective has some merit too. Many of the things that get you points in the algorithm are things under police control. If police want to give someone points, they just have to start talking to them and then call it a quality interaction. Gang affiliation is another example. Who gets recorded as being affiliated with a gang member? An officer could connect just about everyone to a gang member if a neighborhood has gangs in it. When I was young, I lived next to gang members, rode the bus with them. I mean, every teacher in LAUSD probably teaches a gang member. So who gets the points for being affiliated with a gang member after an FI interview, the skinny Asian neighbor or the Latino neighbor? And finally, there's the issue of civil rights. How can the LAPD legally extract someone for just having been predicted by an algorithm to be a future offender? Did you observe anything that looked like predictive arrest, a kind of intervention that seems like it would require some kind of probable cause or reasonable suspicion? I didn't see them, you know, violating requirements of reasonable suspicion or probable cause, for example. It was really more of these consensual stops. It's not illegal for a cop to go up to anybody and start talking to them. You can say, I don't want to talk to you, and walk away. But if you are a known gang affiliate and you're on parole or probation, you don't have the same ability to walk away. Then, in the course of a consensual stop, you might see something that would constitute individualized suspicion, giving you reasonable suspicion to actually question someone. I mean, I guess another way of putting it, coming from the other side, is: why isn't a high score enough for individualized suspicion or probable cause? That's another way of putting it. Yeah, and I think that, just as much as it's a legal question, it's a philosophical question around what individualized suspicion even is.
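To make the "points" discussion concrete, here is a hedged sketch of a LASER-style chronic-offender score. The episode doesn't give the point values; the numbers below are assumptions loosely based on public reporting about the LAPD's Chronic Offender Bulletins, so treat every value as illustrative.

```python
# Hedged sketch of a LASER-style point score. The point values are
# ASSUMPTIONS from public reporting, not confirmed by this episode.
def chronic_offender_score(person):
    score = 0
    if person.get("gang_affiliated"):
        score += 5
    if person.get("parole_or_probation"):
        score += 5
    if person.get("prior_arrest_violent_or_gun"):
        score += 5
    # One point per "quality" police contact (field interview card):
    # the element the transcript notes is under police control.
    score += person.get("quality_contacts", 0)
    return score

# Two consensual stops add two points; no arrest or conviction needed.
neighbor = {"gang_affiliated": True, "parole_or_probation": True,
            "quality_contacts": 2}
print(chronic_offender_score(neighbor))  # 12
```

On a scheme like this, a person's score can rise through stops alone, which is exactly the proxy worry raised above.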

[33:35] Is your same action interpreted as more suspicious if you're inside a predictive box, or if you have a high risk score, than if you don't? I think that those prototypical categories are kind of getting conflated now in this world of predictive policing. Like, what even is individualized suspicion?
