Dear Editor,
The coverage of Dr. Circo’s study on police staffing and response times [“New Study’s Staffing Model Calls for 108 New Officers,” News, Jan. 21] sets off several alarm bells. First, the study was published without undergoing peer review, the process in which independent experts thoroughly analyze the strengths and weaknesses of a study’s arguments.
Second, Dr. Circo’s study was commissioned by a police advocacy group with a history of arguing for more police. Studies commissioned by biased advocacy groups have a sordid history in American politics. They were a key component of the deny-and-delay strategy the tobacco and oil and gas industries used to defend themselves against growing public awareness of the harms they caused. These industries wanted the public to accept a favorable conclusion (smoking doesn’t cause cancer; anthropogenic climate change isn’t a serious threat driven largely by fossil fuels), so they commissioned studies performed by academics to lend their position a sheen of legitimacy.
Third, by relying on machine learning models, Dr. Circo is using a statistical methodology that is easily abused. Recent history provides evidence of this. In the election lawsuit presented to the Supreme Court in December 2020, Ken Paxton et al. included a statistical analysis performed by an economics Ph.D. to support their claims of voter fraud. Fortunately, that analysis could be refuted by anyone with a cursory knowledge of statistics, and Paxton’s suit was quickly dismissed. Dr. Circo’s study is far more sophisticated than those bumbling efforts, which isn’t necessarily a point in its favor. Machine learning models are notoriously opaque, even to the experts who create them. Moreover, as critics quoted in your article point out, such models often rest on numerous questionable assumptions, and it is easy for researchers, intentionally or accidentally, to build their preferred conclusion into those assumptions.
We should be very cautious about how we use this study to inform policy.