Hacker News

I think you missed the point. An ML model would only fail in the way KL describes under certain circumstances. If it were indeed the case that everyone does drugs, then it would learn that traffic stops in wealthy neighborhoods also lead to drug busts. The conclusion of KL and related work is that we have to be careful when training ML models to remove sources of underperformance, not that all ML models are useless.
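A toy simulation of the selection-bias failure mode under discussion (all neighborhood names, rates, and policies here are made up for illustration): if stops are logged mostly in poor neighborhoods, a model trained on raw bust counts learns the stop policy rather than the uniform underlying drug-use rate; with uniform stops, the same counts recover the truth.

```python
import random

random.seed(0)

# Assumed ground truth for this sketch: drug possession is uniform (20%)
# across both neighborhoods.
TRUE_RATE = {"poor": 0.2, "wealthy": 0.2}

def simulate_stops(stop_share_poor, n=10_000):
    """Log (neighborhood, bust) pairs under a given stop policy."""
    log = []
    for _ in range(n):
        hood = "poor" if random.random() < stop_share_poor else "wealthy"
        bust = random.random() < TRUE_RATE[hood]
        log.append((hood, bust))
    return log

def busts_per_hood(log):
    """Raw bust counts -- what a naive 'where to patrol' model sees."""
    counts = {"poor": 0, "wealthy": 0}
    for hood, bust in log:
        counts[hood] += bust
    return counts

# 90% of stops in the poor neighborhood vs. an even stop policy.
biased = busts_per_hood(simulate_stops(stop_share_poor=0.9))
uniform = busts_per_hood(simulate_stops(stop_share_poor=0.5))

print(biased)   # recorded busts concentrate where the stops were
print(uniform)  # busts come out roughly even, matching the true rate
```

The biased log shows nearly all busts in the poor neighborhood even though use is uniform, which is exactly the feedback loop Lum warns about; fixing the sampling (or correcting for it) removes the artifact.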

A relevant reference: "Identifying and Measuring Excessive and Discriminatory Policing" - https://5harad.com/papers/identifying-discriminatory-policin...

In any case, while recreational drug use might be uniformly distributed (and there is an interesting question of what anti-social activities are labeled "crimes"), it is definitely not the case that home invasions, carjackings, robberies, etc. are uniformly distributed.

> On the whole people commit crime because they are desperate (for money, for drugs, etc.), occasionally because they have an anti-social personality disorder.

If you're looking for a single summary of why people commit crime, a better summary is: people commit crimes because they don't think they'll get caught. Desperation doesn't explain all that much.



> The conclusion of KL and related work is that we have to be careful when training ML models to remove sources of underperformance, not that all ML models are useless.

Sure, I don't disagree with that, and I'm not saying that all ML models are useless in this space. However, the original article's point (that currently implemented predictive policing software doesn't function) is, I think, very much in line with Lum's work. I'm just attempting to give a more concrete case for the point, as I felt the Gizmodo article was pretty lacking.

> people commit crimes because they don't think they'll get caught

My hometown had some of the lowest crime rates in the nation (no auto thefts, no burglary, no armed robbery). This was absolutely not because it was hard to get away with it: my mother has left her car unlocked every night since I was a teenager, and it would be trivially easy to rob her and escape. My town was also extraordinarily wealthy, among the richest nationwide. Now, maybe (probably) there was a lot of white-collar crime or domestic violence. However, in terms of public violent crimes there is a clear effect of socioeconomic status. Yes, someone in total destitution probably will not commit a crime if they think they will immediately get arrested, but I think the calculus is far more tolerant of the downside risk of arrest if the upside is that your kid gets dinner that night. This is what I mean when I say that desperation is a major driver: that it raises the bar on "how much risk of prison am I willing to accept in order to get what I need".


> that currently implemented predictive policing software doesn't function

To the extent you consider pretrial decision risk assessment software to be "predictive policing" (after all, it's predicting which defendants will skip their court date or commit a crime on bail), then there's plenty of evidence that we have good ones. Even a simple logistic regression over the charged crime, defendant age, and gender outperforms most judges: https://5harad.com/papers/simple-rules.pdf
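To make the "simple rules" claim concrete, here is a hedged sketch of such a model: a logistic regression over three features (charge severity, age, gender) predicting failure to appear. The data-generating process, feature scaling, and coefficients below are entirely synthetic assumptions for illustration, not the paper's actual model or data.

```python
import math
import random

random.seed(1)

def make_defendant():
    """Synthetic defendant with an assumed ground-truth risk process:
    serious charges and youth raise the odds of skipping court."""
    severity = random.randint(1, 5)          # 1 = minor, 5 = serious
    age = random.randint(18, 70) / 10.0      # scaled down for stable SGD
    male = random.randint(0, 1)
    logit = 0.5 * severity - 0.5 * age + 0.3 * male - 0.5
    skipped = random.random() < 1 / (1 + math.exp(-logit))
    return [severity, age, male], skipped

data = [make_defendant() for _ in range(2000)]

def train(data, epochs=50, lr=0.01):
    """Fit logistic regression by per-sample gradient descent on log loss."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y                        # dLoss/dz for log loss
            for i in range(3):
                w[i] -= lr * g * x[i]
            b -= lr * g
    return w, b

w, b = train(data)

def risk(x):
    """Predicted probability of failure to appear."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# A 22-year-old man with a serious charge vs. a 60-year-old woman
# with a minor charge (ages pre-scaled by 10).
print(risk([5, 2.2, 1]), risk([1, 6.0, 0]))
```

The point of the sketch is just that a three-feature linear model can produce a sensible risk ordering; whether it beats judges is the empirical claim in the linked paper.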

If I recall correctly, KL has written guides for DAs adopting pretrial risk assessments as part of the Safety and Justice Challenge. These models work today.

> My hometown had some of the lowest crime rates in the nation (no auto thefts, no burglary, no armed robbery).

Even if poverty were a necessary condition for violent crime (and you have to ignore crimes like domestic violence to bolster that claim), that does not mean it's a sufficient one. As a result, your simplification about crime being driven by poverty is still misleading.

> This is what I mean when I say that desperation is a major driver: that it raises the bar on "how much risk of prison am I willing to accept in order to get what I need".

Empirical measures of criminal decision making suggest certainty of punishment is highly explanatory. It does not change whether they think they need to steal to eat.

As a natural experiment: the USA engaged in historic poverty-reduction measures in its pandemic response. The supplemental poverty measure suggests poverty was reduced by over 50% through extended unemployment, the super doles, and child tax credit expansion. And yet crime of all types (homicide is the easiest to measure) has skyrocketed.

(As a bit of an aside, because policing in America is funded by local jurisdictions, my guess is your safe childhood town was over-policed and it’s quite likely petty thieves would be caught, either by the community or by police officers with little else to do. A statewide police force could redirect funding from rich, low-crime neighborhoods to high-crime neighborhoods and better reduce crime overall.)


> To the extent you consider pretrial decision risk assessment software to be "predictive policing" (after all, it's predicting which defendants will skip their court date or commit a crime on bail), then there's plenty of evidence that we have good ones.

I think this is a fundamentally different ballgame from predicting where and when crimes will occur with the intent to prioritize police presence, but I do take your point that simple models can outperform human decision-making in these cases. What's the absolute classification error of these models?

> Empirical measures of criminal decision making suggest certainty of punishment is highly explanatory. It does not change whether they think they need to steal to eat.

Not really disputing this; my point is "Need to Steal to Eat" - "Certainty of Punishment" = "Decision to Commit Crime". I don't really understand why "Certainty of Punishment" would be expected to impact "Need to Steal to Eat", so I'm not surprised that the empirical studies you refer to didn't find a relationship there. Would you be able to provide a reference on this one?

> As a natural experiment: the USA engaged in historic poverty-reduction measures in its pandemic response.

I'm not sure to what extent the results of this generalize. The proper counterfactual here is not "crime rates pre-pandemic"; it's "crime rates post-pandemic in a world where we didn't provide anti-poverty measures". The former is a very poor proxy for the latter, IMO. I wonder if anyone has compared across states or countries with different pandemic responses?

> my guess is your safe childhood town was over policed and it’s quite likely petty thieves would be caught

Yes, it had a very well-funded police department that did very little day-to-day. I can say with certainty that they were very bad at tracking down the local drug dealers, as our drug trade was positively thriving. Possibly they would be more motivated to catch petty thieves, but this wasn't really tested while I was there. The occasional bit of vandalism or other hooliganism did not typically get punished, as I recall.


> I think you missed the point. An ML model would only fail in the way KL describes under certain circumstances. If it were indeed the case that everyone does drugs, then it would learn that traffic stops in wealthy neighborhoods also lead to drug busts. The conclusion of KL and related work is that we have to be careful when training ML models to remove sources of underperformance, not that all ML models are useless.

This assumes that the police actually _want_ to make drug busts in wealthy neighborhoods. It's hard for me not to think that using ML models is intended to be a way to insulate the decision makers from accountability; pick a model that gives the results you want, don't divulge the details, and you'll never have to explain your actions because you were "just following the model".


No, it's not. I'm making a claim about what ML models are capable of in response to someone incorrectly summarizing a possible weakness with them.

If police don't want to make drug busts in wealthy neighborhoods, they don't need models to justify that. There is no jurisdiction in America where discretion has been ceded to statistical models.


> If police don't want to make drug busts in wealthy neighborhoods, they don't need models to justify that.

They may not need models to justify it, but that doesn't mean that it wouldn't be helpful for them to avoid accountability. I obviously can't say for sure whether or not the police are actively trying to avoid policing any given wealthy area, but it doesn't seem like a stretch that if that was a goal, then obfuscating the source of that decision would be helpful, even if it might not be required.

> There is no jurisdiction in America where discretion has been ceded to statistical models.

That's not at all what I said. My point is that someone who wants to influence _perception_ of where decisions were coming from could make use of ML to make their decisions seem to be the result of "objective" data rather than personal bias.



