February 29, 2024

Leap Day of a Leap Year

What Is a Leap Year?

In an ordinary year, if you were to count all the days in a calendar from January to December, you’d count 365 days. But approximately every four years, February has 29 days instead of 28. So, there are 366 days in the year. This is called a leap year.

Illustration of a February calendar in a leap year.

Credit: NASA/JPL-Caltech

Why do we have leap years?

A year is the amount of time it takes a planet to orbit its star one time. A day is the amount of time it takes a planet to finish one rotation on its axis.

Animation of Earth rotating once, indicating a day, and Earth orbiting the Sun, indicating a year.

Credit: NASA/JPL-Caltech

It takes Earth approximately 365 days and 6 hours to orbit the Sun. It takes Earth approximately 24 hours — 1 day — to rotate on its axis. So, our year is not an exact number of days.

Because of that, most years, we round the days in a year down to 365. However, that leftover piece of a day doesn’t disappear. To make sure we count that extra part of a day, we add one day to the calendar approximately every four years. Here’s a table to show how it works:

Year   Days in Year   Leap Year?
2017   365            No
2018   365            No
2019   365            No
2020   366            Yes

Because we’ve subtracted approximately 6 hours — or ¼ of a day — from 2017, 2018 and 2019, we have to make up that time in 2020. That’s why we have leap day!
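The "approximately every four years" rule has a wrinkle the table glosses over: in the Gregorian calendar, century years are skipped unless they are divisible by 400 (1900 was not a leap year; 2000 was). A minimal sketch of the full rule in Python:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule: every 4th year is a leap year,
    except century years not divisible by 400 (1900: no, 2000: yes)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (2017, 2018, 2019, 2020, 1900, 2000):
    print(y, is_leap_year(y))
```

This matches the table above: 2017 through 2019 are ordinary years, and 2020 is a leap year.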

Are leap years really that important?

Leap years are important so that our calendar year matches the solar year — the amount of time it takes for Earth to make a trip around the Sun. Subtracting 5 hours, 48 minutes and 46 seconds from a year maybe doesn’t seem like a big deal. But, if you keep subtracting almost 6 hours every year for many years, things can really get messed up.

For example, say that July is a warm, summer month where you live. If we never had leap years, all those missing hours would add up into days, weeks and even months. Eventually, in a few hundred years, July would actually take place in the cold winter months!
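That "few hundred years" figure is easy to check. Treating the leftover as roughly 5 hours, 48 minutes and 46 seconds per year (a simplification — the true value drifts slowly), a quick sketch counts how long it would take a leap-day-free calendar to slip by half a year:

```python
# Leftover time per calendar year if we never added leap days.
leftover_seconds = 5 * 3600 + 48 * 60 + 46   # ~5h 48m 46s
drift_per_year = leftover_seconds / 86_400   # as a fraction of a day (~0.2422)

years, drift = 0, 0.0
while drift < 182.5:                         # half a year, in days
    drift += drift_per_year
    years += 1

print(years)   # on the order of 750 years for summer and winter to swap
```

So in roughly 750 years without leap days, July really would land in winter.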

Illustration of two snowmen on the 4th of July with fireworks and snow in the background.

Credit: NASA/JPL-Caltech


The Likely Lab Leak and the Covid Cassandra.

I thought I was done with writing about Covid-19. But Covid-19 isn’t done with me—or with any of us.

I’m writing this precisely four years after Chinese health officials first announced the emergence of a mysterious new form of pneumonia in the city of Wuhan. “No obvious human-to-human transmission has been observed,” the officials added in that December 30, 2019, release. (Already, the Chinese were lying.) Today, Covid cases are ticking up for the umpteenth time. And documents keep coming to light that expose how American officials and scientists similarly suppressed unsettling facts about the pandemic’s origins.

While the death rate from each new wave of Covid keeps dropping, the disturbing revelations about our public health leaders keep getting worse. In December 2023, a new disclosure revealed how leading U.S. virus experts lobbied to conduct dangerous gain-of-function research at the substandard Wuhan Institute of Virology laboratory. The latest leak provides yet more evidence that the pandemic likely emerged from a lab experiment gone awry, and that U.S. scientists actively covered up their possible role in that world-historical catastrophe.

After both the 1986 Challenger explosion and the 9/11 attacks, bipartisan commissions were convened to investigate the disasters. Covid has killed more than a million Americans and has cost our economy at least $14 trillion. And yet we see no great urgency to investigate the pandemic’s murky origins or prevent a recurrence. Republicans in Congress continue to hold productive hearings. But, according to the New York Times, the Biden administration is “privately resisting” pressure to create a 9/11-style commission on the pandemic. The press has largely moved on. And the public health officials most deeply involved in the debacle—including Anthony Fauci and his National Institutes of Health (NIH) colleague Francis Collins—continue to tap-dance around the truth, even after leaving their posts.

Continue reading “”

Note The New Madrid Fault, right smack on the Mississippi

New map shows where damaging earthquakes are most likely to occur in US.

Nearly 75% of the U.S. could experience damaging earthquake shaking, according to a recent study by a U.S. Geological Survey-led team of more than 50 scientists and engineers.

This was one of several key findings from the latest USGS National Seismic Hazard Model (NSHM). The model was used to create a color-coded map that pinpoints where damaging earthquakes are most likely to occur based on insights from historical geologic data and the latest data-collection technologies.

The research is published in the journal Earthquake Spectra.

Continue reading “”

Updated information on Mass Public Shootings from 1998 through October 2023

Between January 1, 1998, and October 25, 2023, 52.5% of attacks used solely handguns, and 16.8% used only rifles of any type; 35% of attacks used solely rifles or rifles in conjunction with another type of gun. Given the debate over pistol-stabilizing braces, the Excel file we provide lists the guns used in each attack; two of the attacks used AR-15-type handguns with a pistol-stabilizing brace.

Continue reading “”

When astronauts become farmers: Harvesting food on the moon and Mars.

With renewed interest in sending people back to the moon and on to Mars, thanks to NASA’s Artemis missions, thoughts have naturally turned to how to feed astronauts traveling to those deep space destinations. Simply shipping food to future lunar bases and Mars colonies would be impractically expensive.

Astronauts will, on top of everything else, have to become farmers.

Of course, since neither the moon nor Mars has a proper atmosphere, running surface water, moderate temperatures or even proper soil, farming on those two celestial bodies will be more difficult than on Earth. Fortunately, a lot of smart, imaginative people are working on the problem.

NASA has been studying how to grow plants in space on the International Space Station for years. The idea is to supplement astronauts’ diets with fresh fruits and vegetables grown in microgravity using artificial lighting. Future space stations and long-duration space missions will carry gardens with them.

Continue reading “”

A New Report Throws Cold Water on Man-Made Global Warming Pseudoscience

“To what extent are temperature levels changing due to greenhouse gas emissions?” may prove to be the most important scientific paper in the last 10 years.

Climate Discussion Nexus offers an introduction to why this paper is so important:

Well, this is awkward. Statistics Norway, aka Statistisk sentralbyrå or “the national statistical institute of Norway and the main producer of official statistics”, has just published a paper “To what extent are temperature levels changing due to greenhouse gas emissions?”

The awkward part isn’t trying to grasp the subtleties of Norwegian since it’s also available in English. It’s that the Abstract bluntly declares that “standard climate models are rejected by time series data on global temperatures” while the conclusions state “the results imply that the effect of man-made CO2 emissions does not appear to be sufficiently strong to cause systematic changes in the pattern of the temperature fluctuations.”

But the really awkward part is that a paper from a government agency dares to address openly so many questions the alarmist establishment has spent decades declaring taboo, from the historical record on climate to the existence of massive uncertainty among scientists on it.

What the Norwegians did was conduct statistical analyses of observed and reconstructed temperature series and test whether the recent fluctuation in temperatures differs systematically from previous temperature cycles potentially due to the emission of greenhouse gases. For example, the researchers gathered all the data from various sources, including those related to the four previous glacial and inter-glacial periods, and did a statistical analysis to see how more recent Global Climate Models (GCMs) compare.

In the global climate models (GCMs) most of the warming that has taken place since 1950 is attributed to human activity. Historically, however, there have been large climatic variations. Temperature reconstructions indicate that there is a ‘warming’ trend that seems to have been going on for as long as approximately 400 years. Prior to the last 250 years or so, such a trend could only be due to natural causes.

The length of the observed time series is consequently of crucial importance for analyzing empirically the pattern of temperature fluctuations and to have any hope of distinguishing natural variations in temperatures from man-made ones. Fortunately, many observed temperature series are significantly longer than 100 years and in addition, as mentioned above, there are reconstructed temperature series that are much longer.

I was recently discussing the fact that Earth is warming from its last glaciation period. The Norwegian statisticians’ comprehensive temperature review takes the long view into account by looking at the last 420,000 years.

Continue reading “”

Judge Benitez destroys the 2.2 rounds per DGU lie once and for all

Over two years ago, I read through some court filings in Duncan v. Bonta, the lawsuit against California’s “large capacity” magazine ban. I was left scratching my head at a claim from the State of California in support of their magazine ban, that the average Defensive Gun Use (DGU) incident involves discharging only 2.2 rounds. The more I looked into it, the more obvious it became that this was unsubstantiated.

Since then, Duncan v. Bonta made a trip to the Supreme Court, got GVR’d after NYSRPA v. Bruen, and was sent back down the judicial hierarchy to the US District Court for the Southern District of California. The district court published its decision last Friday, in which Judge Roger Benitez completely took apart the 2.2 rounds per DGU canard (PDF pages 26-33):

C. The Invention of the 2.2 Shot Average

…the State’s statistic is suspect. California relies entirely on the opinion of its statistician for the hypothesis that defenders fire an average of only 2.2 shots in cases of confrontation.

Where does the 2.2 shot average originate? There is no national or state government data report on shots fired in self-defense events. There is no public government database. One would expect to see investigatory police reports as the most likely source to accurately capture data on shots fired or number of shell casings found, although not every use of a gun in self-defense is reported to the police. As between the two sides, while in the better position to collect and produce such reports, the State’s Attorney General has not provided a single police report to the Court or to his own expert.

Without investigatory reports, the State’s expert turns to anecdotal statements, often from bystanders, reported in news media, and selectively studied. She indicates she conducted two studies. Based on these two studies of newspaper stories, she opines that it is statistically rare for a person to fire more than 10 rounds in self-defense and that only 2.2 shots are fired on average. Unfortunately, her opinion lacks classic indicia of reliability and her two studies cannot be reproduced and are not peer-reviewed.

“Reliability and validity are two aspects of accuracy in measurement. In statistics, reliability refers to reproducibility of results.” Her studies cannot be tested because she has not disclosed her data. Her studies have not been replicated. In fact, the formula used to select 200 news stories for the Factiva study is incomprehensible. […]

For one study, Allen says she conducted a search of stories published in the NRA Institute for Legislative Action magazine (known as the Armed Citizen Database) between 2011 and 2017. There is no explanation for the choice to use 2011 for the beginning. After all, the collection of news stories goes back to 1958. Elsewhere in her declaration she studies mass shooting events but for that chooses a much longer time period reaching back to 1982. Likewise, there is no explanation for not updating the study after 2017.

[…] details are completely absent. Allen does not list the 736 stories. Nor does she reveal how she assigned the number of shots fired in self-defense when the news accounts use phrases like “the intruder was shot” but no number of shots was reported, or “there was an exchange of gunfire,” or “multiple rounds were fired.” She includes in her 2.2 average of defensive shots fired, incidents where no shots were fired. […] She does not reveal the imputed number substitute value that she used where the exact number of shots fired was not specified, so her result cannot be reproduced. […] For example, this Court randomly selected two pages from Allen’s mass shooting table: pages 10 and 14. From looking at these two pages (assuming that the sources for the reports were accurate and unbiased) the Court is able to make statistical observations, including the observation that the number of shots fired were unknown 69.04% of the time.

The foundation of the claim was not real data but “anecdata,” which don’t cover nearly as many incidents as actual police reports do. (Not every incident is reported, so even police data is incomplete.)

Second, there is no indication the sampled news reports were randomly selected. It isn’t clear whether any process safeguards prevented cherry picking, and there is no transparency about which incidents were included.

Third, the selected timeframes look arbitrary.

Fourth, as Judge Benitez points out, including zero-shot incidents obviously pulls the average down, which makes the 2.2 figure questionable on its face.
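The arithmetic point is easy to illustrate with made-up numbers (these are purely hypothetical, not the study's data):

```python
# Hypothetical shot counts from ten defensive incidents (illustrative only).
shots = [0, 0, 0, 0, 2, 2, 3, 4, 6, 8]

mean_all = sum(shots) / len(shots)            # zeros folded into the average
fired = [s for s in shots if s > 0]
mean_fired = sum(fired) / len(fired)          # only incidents where shots were fired

print(mean_all)    # 2.5 -- looks low because 40% of these incidents are zeros
print(mean_fired)  # ~4.17 among incidents where the defender actually fired
```

Folding in no-shot incidents nearly halves the average; which denominator you choose decides the headline number.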

The most devastating critique is that the expert assigned an arbitrary number of shots fired when news stories didn’t include that crucial detail.

The Court is aware of its obligation to act as a gatekeeper to keep out junk science where it does not meet the reliability standard of Daubert v. Merrell Dow Pharmaceuticals, Inc. […] while questionable expert testimony was admitted, it has now been weighed in light of all of the evidence.

Using interest-balancing, the en banc 9th Circuit shamelessly rubber-stamped California’s infringement using this pathetic junk science. It’s gratifying to see interest-balancing tossed into the garbage alongside this junk science under the new Bruen standard.

Okay, so when do we start sending mining missions?

In A First, NASA Returns Asteroid Samples to Earth.

A capsule containing precious samples from an asteroid landed safely on Earth on Sunday, the culmination of a roughly 4-billion-mile journey over the past seven years.

The asteroid samples were collected by NASA’s OSIRIS-REx spacecraft, which flew by Earth early Sunday morning and jettisoned the capsule over a designated landing zone in the Utah desert. The unofficial touchdown time was 8:52 a.m. MT, 3 minutes ahead of the predicted landing time.

The dramatic event — which the NASA livestream narrator described as “opening a time capsule to our ancient solar system” — marked a major milestone for the United States: The collected rocks and soil were NASA’s first samples brought back to Earth from an asteroid. Experts have said the bounty could help scientists unlock secrets about the solar system and how it came to be, including how life emerged on this planet.

Bruce Betts, chief scientist at The Planetary Society, a nonprofit organization that conducts research, advocacy and outreach to promote space exploration, congratulated the NASA team on what he called an “impressive and very complicated mission,” adding that the asteroid samples are the start of a thrilling new chapter in space history.

“It’s exciting because this mission launched in 2016 and so there’s a feeling of, ‘Wow, this day has finally come,’” he said. “But scientifically, it’s exciting because this is an amazing opportunity to study a very complex story that goes way back to the dawn of the solar system.”

The sample return capsule from NASA’s Osiris-Rex mission in Utah on Sunday.Keegan Barber / NASA via AP

Continue reading “”

Earth’s atmosphere can clean itself, breakthrough study finds.

Scientists have made a groundbreaking discovery that could change the way we think about air pollution. Researchers at the University of California, Irvine, have found that a strong electric field between airborne water droplets and surrounding air can create the hydroxyl radical (OH) by a previously unknown mechanism.

This molecule is crucial in helping to clear the air of pollutants, including greenhouse gases and other chemicals.

The discovery is outlined in a new paper published in Proceedings of the National Academy of Sciences, which suggests that the traditional thinking around the formation of OH in the atmosphere is incomplete. Until now, it was thought that sunlight was the primary driver of OH formation, but this new research shows that OH can be created spontaneously by the special conditions on the surface of water droplets.

Continue reading “”

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.
Richard Horton, editor of The Lancet

A study of language in Science articles from 1997 through 2021 raises concerns about exaggerated claims.

Careful scientists know to acknowledge uncertainty in the findings and conclusions of their papers. But in one leading journal, the frequency of hedging words such as “might” and “probably” has fallen by about 40% over the past 2 decades, a study finds.

If this trend holds across the scientific literature, it suggests a worrisome rise of unreliable, exaggerated claims, some observers say. Hedging and avoiding overconfidence “are vital to communicating what one’s data can actually say and what it merely implies,” says Melissa Wheeler, a social psychologist at the Swinburne University of Technology who was not involved in the study. “If academic writing becomes more about the rhetoric … it will become more difficult for readers to decipher what is groundbreaking and truly novel.”

The new analysis, one of the largest of its kind, examined more than 2600 research articles published from 1997 to 2021 in Science, which the team chose because it publishes articles from multiple disciplines. (Science’s news team is independent from the editorial side.) The team searched the papers for about 50 terms such as “could,” “appear to,” “approximately,” and “seem.” The frequency of these hedging words dropped from 115.8 instances per 10,000 words in 1997 to 67.42 per 10,000 words in 2021.
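The study's measurement is conceptually simple: count hedge terms, normalize per 10,000 words. A stripped-down sketch — with a token hedge list and naive single-word tokenization, so multi-word phrases like "appear to" from the study's ~50-term list would be missed:

```python
import re

# A small sample of hedging terms; the actual study searched about 50
# (assumption: the full list isn't reproduced here).
HEDGES = {"might", "probably", "could", "seem", "seems", "approximately"}

def hedges_per_10k(text: str) -> float:
    """Frequency of single-word hedge terms per 10,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w in HEDGES)
    return 10_000 * hits / len(words)

print(hedges_per_10k("These results might generalize, and the effect could be large."))
# → 2000.0 (2 hedges in 10 words)
```

Run over a corpus of full articles per year, the same ratio yields trend lines like the 115.8 → 67.42 drop the study reports.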

Continue reading “”

Study Shows Gun Laws Don’t Matter, Race Does

33 people were shot over the weekend in Chicago. Urban gangland violence like that is what real “mass shootings” look like and finally a Journal of the American Medical Association paper addressed the problem by shifting the blame to something it calls “structural racism”.

The JAMA paper, which was quickly picked up by CNN as “Structural Racism may Contribute to Mass Shootings” and by Bloomberg as “Mass Shootings Disproportionately Victimize Black Americans”, acknowledged what conservatives have been saying about gun violence.

“There was no discernible association noted in this study between gun laws and MSEs [mass shootings] with other studies showing similar findings,” it noted.

The issue wasn’t gun laws, it was race. “The study found that in areas with higher black populations, mass shootings are likelier to occur compared to communities with higher white populations,” CNN reported. “The findings disrupt the nation’s image of mass shootings, which has been shaped by tragedies like the Las Vegas festival shooting and Sandy Hook in which most of the victims were not black,” Bloomberg added.

Faced with an immovable statistical object and the unstoppable force of equity, the JAMA paper blames the whole thing on structural racism. The study correlates urban areas and neighborhoods with “high concentrations of single-parent households” to mass shootings. It then demonstrates that “structural racism” must be at fault because of “the percentage of the population that is black.” Black people in the study are interchangeable with racism.

Such is the state of woke medical science which tries to fix racism with more racism. The study never comes up with any plausible explanation of how structural racism causes people to shoot each other. At one point it claims that “racial residential segregation practices are predictive of various types of shootings” in a country where segregation was abolished in 1964.

The study’s definition of segregation is so senseless that it lists majority black cities like Detroit, a 77% black city, as being 73% segregated, and Baltimore, a 62% black city, as being 64% segregated. A city with a strong black majority and black leaders is racially segregated and its people are suffering from “structural racism”. That’s why there are so many mass shootings.

But if segregation is the issue then why does Atlanta, which had actual segregation, have only 18 mass shootings, while Chicago has 141? Southern cities show up as less segregated and less violent in the paper’s data. A history of segregation is clearly not the issue. This isn’t about the past, whether it’s the historical revisionism of the 1619 Project, or any other.

If segregation were the issue, crime would have been far higher during segregation than after it.

Continue reading “”

This gets verified and replicated and away we go to the races

The First Room-Temperature Ambient-Pressure Superconductor

Sukbae Lee, Ji-Hoon Kim, Young-Wan Kwon

For the first time in the world, we succeeded in synthesizing the room-temperature superconductor (Tc ≥ 400 K, 127 °C) working at ambient pressure with a modified lead-apatite (LK-99) structure. The superconductivity of LK-99 is proved with the Critical temperature (Tc), Zero-resistivity, Critical current (Ic), Critical magnetic field (Hc), and the Meissner effect.

The superconductivity of LK-99 originates from minute structural distortion by a slight volume shrinkage (0.48 %), not by external factors such as temperature and pressure. The shrinkage is caused by Cu2+ substitution of Pb2+(2) ions in the insulating network of Pb(2)-phosphate and it generates the stress.

It concurrently transfers to Pb(1) of the cylindrical column resulting in distortion of the cylindrical column interface, which creates superconducting quantum wells (SQWs) in the interface. The heat capacity results indicated that the new model is suitable for explaining the superconductivity of LK-99.

The unique structure of LK-99 that allows the minute distorted structure to be maintained in the interfaces is the most important factor that LK-99 maintains and exhibits superconductivity at room temperatures and ambient pressure.

What’s up in space

EARTH-DIRECTED CME (UPDATED): A magnetic filament in the sun’s southern hemisphere erupted on July 11th and hurled a CME toward Earth. According to a NASA model, most of the CME will sail south of our planet, but not all. The northern flank will likely strike our planet’s magnetic field during the late hours of July 14th possibly causing a G1-class geomagnetic storm.

A HYPERACTIVE SUNSPOT: New sunspot AR3372 is seething with activity. In the last 24 hours alone it has produced eight M-class solar flares. To the extreme ultraviolet telescopes onboard NASA’s Solar Dynamics Observatory, it looks like the northeastern limb of the sun is on fire:

The rat-a-tat-tat of solar flares from AR3372 is causing a rolling series of shortwave radio blackouts around all longitudes of our planet. Ham radio operators, mariners and aviators may have noticed loss of signal below 30 MHz on multiple occasions since July 11th. In addition, episodes of sudden ionization in the atmosphere are doppler-shifting the frequency of time-standard radio stations such as Canada’s CHU and America’s WWV (data).

If current trends continue, we should expect more strong M-class flares during the next 24 hours with a chance of X-flares as well. This sunspot will become even more geoeffective in the days ahead as it continues to turn toward Earth.

Bu bu bu bu but all those scientists can’t be wrong!

Regarding Consensus Science

I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.
― Michael Crichton

Paging Khan Noonien Singh. Paging Arik Soong.

Scientists Create Synthetic Human Embryo Models in Major First.

In a major scientific first, synthetic human embryo models have been grown in the lab, without any need for the usual natural ingredients of eggs and sperm.

The research – first brought to wider attention by The Guardian – has prompted excitement about the potential for new breakthroughs in health, genetics, and treating disease. But the science also raises serious ethical questions.

The embryo structures were produced from stem cells cultured from a traditional embryo in the lab. Stem cells can be programmed to develop into any kind of other cell – which is how they are used in the body for growth and repair.

Here, stem cells were carefully coaxed into becoming precursor cells that would eventually become the yolk sac, the placenta, and then the actual embryo itself.

A paper on the breakthrough has yet to be published, so we’re still waiting on the details of exactly how this was achieved.

The work was led by biologist Magdalena Żernicka-Goetz, from the University of Cambridge in the UK, together with colleagues from the UK and US. Last year, a team led by Żernicka-Goetz was able to successfully grow synthetic mouse embryos with primitive brains and hearts.

We should point out that we’re still a long way from creating babies artificially. These are embryo-like structures, without a heart or a brain: They’re more like embryo models that are able to mimic some, but not all, of the features of a normal embryo.

“It is important to stress that these are not synthetic embryos, but embryo models,” wrote Żernicka-Goetz on Twitter. “Our research isn’t to create life, but to save it.”

One of the ways in which this research could save lives is in helping to examine why many pregnancies fail at around the stage these artificial embryos replicate. If these earliest moments can be studied in a lab, we should get a much better understanding of them.

We could also use these techniques to learn more about how common genetic disorders develop at the earliest stages of life. Once there’s a greater knowledge about how they start, we’ll be better placed to do something about them.

At the same time, there are concerns around where this kind of synthetic embryo creation could lead. Scientists say strong regulations are needed to control this kind of research – regulations that at the moment don’t really exist.

“These new assays in vitro will pave the way for future studies that aim to unravel the mechanisms of human development, as well as the effects of environmental and genetic anomalies,” says biologist Rodrigo Suarez from the University of Queensland in Australia, who wasn’t involved in the research.

“As with most emerging technologies, society will need to balance the evidence about the risks and benefits of this approach, and update the current legislation accordingly.”

As pointed out by bioethics researcher Rachel Ankeny from the University of Adelaide, who wasn’t involved in the research, today scientists abide by a ’14-day rule’ which limits the use of human embryos in the lab, requiring that human embryos can only be cultivated in vitro for a maximum of 2 weeks.

Rules like this, as well as new ones that may be brought in as this research continues, force us to ask fundamental questions about when we consider ‘life’ beginning in an organism’s existence – and how close to a human embryo a synthetic embryo must be before it is considered essentially the same.

“We need to engage various publics about their understanding of and expectations from this sort of research, and more generally about their views on early human development,” says Ankeny.

“These biological processes are deeply tied to our values and what we think counts as human life.”

The research has yet to be peer-reviewed or published, and was presented at the annual meeting of the International Society for Stem Cell Research.

And with CHF and CAD/CAM-CNC manufacturing, such ‘forensics’ are even more problematic

FYI, this is a l-o-n-g article.

Devil in the grooves: The case against forensic firearms analysis
A landmark Chicago court ruling threatens a century of expert ballistics testimony

Last February, Chicago circuit court judge William Hooks made some history. He became the first judge in the country to bar the use of ballistics matching testimony in a criminal trial.

In Illinois v. Rickey Winfield, prosecutors had planned to call a forensic firearms analyst to explain how he was able to match a bullet found at a crime scene to a gun alleged to be in possession of the defendant.

It’s the sort of testimony experts give every day in criminal courts around the country. But this time, attorneys with the Cook County Public Defender’s Office requested a hearing to determine whether there was any scientific foundation for the claim that a specific bullet can be matched to a specific gun. Hooks granted the hearing and, after considering arguments from both sides, he issued his ruling.

It was an earth-shaking opinion, and it could bring big changes to how gun crimes are prosecuted — in Chicago and possibly elsewhere.

Hooks isn’t the first judge to be skeptical of claims made by forensic firearms analysts. Other courts have put restrictions on which terminology analysts use in front of juries. But Hooks is the first to bar such testimony outright. “There are no objective forensic based reasons that firearms identification evidence belongs in any category of forensic science,” Hooks writes. He adds that the wrongful convictions already attributable to the field “should serve as a wake-up call to courts operating as rubber stamps in blindly finding general acceptance” of bullet matching analysis.

For more than a century, forensic firearms analysts have been telling juries that they can match a specific bullet to a specific gun, to the exclusion of all other guns. This claimed ability has helped to put tens of thousands of people in prison, and in a nontrivial percentage of those cases, it’s safe to say that ballistics matching was the only evidence linking the accused to the crime.

But as with other forensic specialties collectively known as pattern matching fields, the claim is facing growing scrutiny. Scientists from outside of forensics point out that there’s no scientific basis for much of what firearms analysts say in court. These critics, backed by a growing body of research, make a pretty startling claim — one that could have profound effects on the criminal justice system: We don’t actually know if it’s possible to match a specific bullet to a specific gun. And even if it is, we don’t know if forensic firearms analysts are any good at it.


Poll: 61% of Americans say AI threatens humanity’s future.

A majority of Americans believe that the rise of artificial intelligence technology could put humanity’s future in jeopardy, according to a Reuters/Ipsos poll published on Wednesday. The poll found that over two-thirds of respondents are anxious about the adverse effects of AI, while 61 percent consider it a potential threat to civilization.

The online poll, conducted from May 9 to May 15, sampled the opinions of 4,415 US adults. It has a credibility interval (a measure of accuracy) of plus or minus two percentage points.

The poll results come amid the expansion of generative AI use in education, government, medicine, and business, triggered in part by the explosive growth of OpenAI’s ChatGPT, which is reportedly the fastest-growing software application of all time. The application’s success has set off a technology hype race among tech giants such as Microsoft and Google, which stand to benefit from having something new and buzzy to potentially increase their share prices.

Fears about AI, justified or not, have been rumbling through the public discourse lately due to high-profile events such as the “AI pause” letter and Geoffrey Hinton resigning from Google. In a recent high-profile case of AI apprehension, OpenAI CEO Sam Altman testified before US Congress on Tuesday, expressing his concerns about the potential misuse of AI technology and calling for regulation that, according to critics, may help his firm retain its technological lead and suppress competition.

Lawmakers seem to share some of these concerns, with Sen. Cory Booker (D-NJ) observing, “There’s no way to put this genie in the bottle. Globally, this is exploding,” Reuters reported.

This negative scare messaging seems to be having an impact. Americans’ fears over AI’s potential for harm far outweigh optimism about its benefits, with those predicting adverse outcomes outnumbering those who don’t by three to one. “According to the data, 61% of respondents believe that AI poses risks to humanity, while only 22% disagreed, and 17% remained unsure,” wrote Reuters.

The poll also revealed a political divide in perceptions of AI, with 70 percent of Donald Trump voters expressing greater concern about AI versus 60 percent of Joe Biden voters. Regarding religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to human civilization, at 32 percent, compared to 24 percent of non-evangelical Christians.

Reuters reached out to Landon Klein, director of US policy of the Future of Life Institute, which authored the open letter that asked for a six-month pause in AI research of systems “more powerful” than GPT-4. “It’s telling such a broad swath of Americans worry about the negative effects of AI,” Klein said. “We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action.”

Meanwhile, another group of AI researchers led by Timnit Gebru, Emily M. Bender, and Margaret Mitchell (three authors of a widely cited critical paper on large language models) say that while AI systems are indeed potentially harmful, the prevalent worry about AI-powered apocalypse is misguided. They prefer to focus instead on “transparency, accountability, and preventing exploitative labor practices.”

Another issue with the poll is that AI is a nebulous term that often means different things to different people. Almost all Americans now use “AI” (and software tools once considered “AI”) in our everyday lives without much notice or fanfare, and it’s unclear if the Reuters/Ipsos poll made any attempt to make that type of distinction for its respondents. We did not have access to the poll methodology or raw poll results at press time.

Along those lines, Reuters quoted Ion Stoica, a UC Berkeley professor and co-founder of AI company Anyscale, pointing out this potential contradiction. “Americans may not realize how pervasive AI already is in their daily lives, both at home and at work,” he said.

Thinking About Absolute vs. Relative Risk of Negative Outcomes with Firearms

Lately, I have been working on the chapter of my book on American gun culture that explores negative outcomes with firearms.

Although I differ from most scholars studying guns by beginning not with gun deviance but with the normality of guns and gun owners, I do take negative outcomes seriously.

Trying to get a better understanding of how the United States compares to other countries in the world in terms of negative outcomes with firearms, I recently stumbled upon the Institute for Health Metrics and Evaluation (IHME) and its cross-national Global Burden of Disease (GBD) database (more about IHME GBD at the end).
