Judge Benitez destroys the 2.2 rounds per DGU lie once and for all
Over two years ago, I read through some court filings in Duncan v. Bonta, the lawsuit against California’s “large capacity” magazine ban. I was left scratching my head at a claim the State of California made in support of its magazine ban: that the average Defensive Gun Use (DGU) incident involves discharging only 2.2 rounds. The more I looked into it, the more obvious it became that the claim was unsubstantiated.
Since then, Duncan v. Bonta made a trip to the Supreme Court, got GVR’d after NYSRPA v. Bruen, and was sent back down the judicial hierarchy to the US District Court for the Southern District of California. The district court published its decision last Friday, in which Judge Roger Benitez completely took apart the 2.2 rounds per DGU canard (PDF pages 26-33):
C. The Invention of the 2.2 Shot Average
…the State’s statistic is suspect. California relies entirely on the opinion of its statistician for the hypothesis that defenders fire an average of only 2.2 shots in cases of confrontation.
Where does the 2.2 shot average originate? There is no national or state government data report on shots fired in self-defense events. There is no public government database. One would expect to see investigatory police reports as the most likely source to accurately capture data on shots fired or number of shell casings found, although not every use of a gun in self-defense is reported to the police. As between the two sides, while in the better position to collect and produce such reports, the State’s Attorney General has not provided a single police report to the Court or to his own expert.
Without investigatory reports, the State’s expert turns to anecdotal statements, often from bystanders, reported in news media, and selectively studied. She indicates she conducted two studies. Based on these two studies of newspaper stories, she opines that it is statistically rare for a person to fire more than 10 rounds in self-defense and that only 2.2 shots are fired on average. Unfortunately, her opinion lacks classic indicia of reliability and her two studies cannot be reproduced and are not peer-reviewed.
“Reliability and validity are two aspects of accuracy in measurement. In statistics, reliability refers to reproducibility of results.” Her studies cannot be tested because she has not disclosed her data. Her studies have not been replicated. In fact, the formula used to select 200 news stories for the Factiva study is incomprehensible. […]
For one study, Allen says she conducted a search of stories published in the NRA Institute for Legislative Action magazine (known as the Armed Citizen Database) between 2011 and 2017. There is no explanation for the choice to use 2011 for the beginning. After all, the collection of news stories goes back to 1958. Elsewhere in her declaration she studies mass shooting events but for that chooses a much longer time period reaching back to 1982. Likewise, there is no explanation for not updating the study after 2017.
[…] details are completely absent. Allen does not list the 736 stories. Nor does she reveal how she assigned the number of shots fired in self-defense when the news accounts use phrases like “the intruder was shot” but no number of shots was reported, or “there was an exchange of gunfire,” or “multiple rounds were fired.” She includes in her 2.2 average of defensive shots fired, incidents where no shots were fired. […] She does not reveal the imputed number substitute value that she used where the exact number of shots fired was not specified, so her result cannot be reproduced. […] For example, this Court randomly selected two pages from Allen’s mass shooting table: pages 10 and 14. From looking at these two pages (assuming that the sources for the reports were accurate and unbiased) the Court is able to make statistical observations, including the observation that the number of shots fired were unknown 69.04% of the time.
First, the foundation of the claim was not real data but “anecdata,” which captures far fewer incidents than actual police reports would. (And since not every incident is reported to the police, even police data would be incomplete.)
Second, even if the sampled news stories were nominally selected at random, there’s no transparency about the selection process (the Court called the Factiva selection formula “incomprehensible”) and no evidence of safeguards against cherry-picking which incidents got included.
Third, the selected timeframes look arbitrary: the Armed Citizen study inexplicably starts at 2011 even though the collection goes back to 1958, and stops at 2017 with no explanation.
Fourth, as Judge Benitez points out, folding in incidents where no shots were fired at all obviously drags the average down, which is dubious when the number is offered as evidence of how many rounds defenders actually need.
The most devastating critique is that the expert imputed some undisclosed substitute value for shots fired whenever a news story omitted that crucial detail, so her 2.2 figure cannot be reproduced. The sketch below makes these last two problems concrete.
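To make the arithmetic concrete, here is a minimal Python sketch using made-up shot counts (Allen never disclosed her data, so every number here is hypothetical). It illustrates both effects: how folding zero-shot incidents into the average drags it down, and how, with roughly 69% of shot counts unknown (the rate the Court observed in its spot check of the mass-shooting table), the “average” mostly reflects whatever substitute value gets imputed:

```python
# Toy illustration with made-up shot counts -- Allen's actual data were
# never disclosed, so every number here is hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

fired = [1, 2, 3, 5, 8]      # incidents with a reported shot count
zeros = [0] * 5              # incidents where no shots were fired at all

print(mean(fired))           # 3.8 -- average among incidents with actual firing
print(mean(fired + zeros))   # 1.9 -- folding in zero-shot incidents halves it

# If an undisclosed substitute value is imputed wherever the shot count is
# unknown (~69% of entries, per the Court's spot check), the "average"
# tracks the imputation choice, not the data:
n_known, n_unknown = 31, 69
for imputed in (0, 1, 2, 5):
    avg = (mean(fired) * n_known + imputed * n_unknown) / (n_known + n_unknown)
    print(f"imputed={imputed}: average={avg:.2f}")
# imputed=0 -> 1.18, imputed=1 -> 1.87, imputed=2 -> 2.56, imputed=5 -> 4.63
```

Without knowing which substitute value Allen used, anyone trying to replicate the 2.2 figure is stuck guessing at the single input that drives it the most.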
The Court is aware of its obligation to act as a gatekeeper to keep out junk science where it does not meet the reliability standard of Daubert v. Merrell Dow Pharmaceuticals, Inc. […] while questionable expert testimony was admitted, it has now been weighed in light of all of the evidence.
Using interest-balancing, the en banc 9th Circuit shamelessly rubber-stamped California’s infringement on the strength of this pathetic junk science. It’s gratifying to see interest-balancing tossed into the garbage alongside the junk science under the new Bruen standard.