Former senator told Biden he’d ‘kick the sh-t out of’ the then-VP for getting handsy with his wife.

Former Massachusetts Sen. Scott Brown threatened President Joe Biden with bodily harm when the then-veep allegedly got fresh with Brown’s wife more than a decade ago, he recalled this week.

“I told him I’d kick the sh-t out of – beat the – I told him to stop,” Brown told host Tom Shattuck on the “Burn Barrel” podcast Wednesday.

“He didn’t act the way I thought he should,” Brown added. “And, you know, we called him on it, and that’s it.”

The incident occurred in 2010, when Biden, in his role as president of the Senate, posed for photos with Brown and his wife, Gail Huff Brown, at the Republican’s swearing-in ceremony in the US Capitol.

Photographers captured Huff Brown’s frozen grin as Biden’s right hand remained awkwardly behind her back – apparently near her posterior – as the portrait session ended.

Brown who won his senate seat in a 2010 special election after the death of Sen. Teddy Kennedy and served just three years in office, refused to elaborate on the episode.

“No, no. It’s old news, it’s old news,” he insisted when Shattuck pressed him for further details.

Instead, Brown blamed Biden’s inappropriate handsiness on incipient dementia — which, he suggested, has worsened during his presidency.

“I spent quite a bit of time with him. I enjoyed his company,” Brown recalled. “But we all know people who have dementia and have the beginning of Alzheimer’s, and he’s got it,” he said. “I mean, it’s the walk. It’s the way he’s mumbling, his anger outbursts. And it’s a shame that we can’t do better in this great country.”

For years, Biden has been notorious for his touchy-feely behavior with women and young girls — with a particular fondness for groping female family members of new senators and cabinet members taking the oath of office.

In June, actress Eva Longoria had to physically guide the 80-year-old president’s hands away from her breasts as he embraced her at a White House film screening.

The increasing futility of gun control in a 3D printing world
“You can’t stop the signal”

Inexpensive Add-on Spawns a New Era of Machine Guns

Caison Robinson, 14, had just met up with a younger neighbor on their quiet street after finishing his chores when a gunman in a white car rolled up and fired a torrent of bullets in an instant.

“Mom, I’ve been shot!” he recalled crying, as his mother bolted barefoot out of their house in northwest Las Vegas. “I didn’t think I was going to make it, for how much blood was under me,” Caison said.

The Las Vegas police say the shooting in May was carried out with a pistol rigged with a small and illegal device known as a switch. Switches can transform semiautomatic handguns, which typically require a trigger pull for each shot, into fully automatic machine guns that fire dozens of bullets with one tug.

By the time the assailant in Las Vegas sped away, Caison, a soft-spoken teenager who loves video games, lay on the pavement with five gunshot wounds. His friend, a 12-year-old girl, was struck once in the leg.

These makeshift machine guns — able to inflict indiscriminate carnage in seconds — are helping fuel the national epidemic of gun violence, making shootings increasingly lethal, creating added risks for bystanders and leaving survivors more grievously wounded, according to law enforcement authorities and medical workers.

The growing use of switches, which are also known as auto sears, is evident in real-time audio tracking of gunshots around the country, data shows. Audio sensors monitored by a public safety technology company, Sound Thinkingrecorded 75,544 rounds of suspected automatic gunfire in 2022 in portions of 127 cities covered by its microphones, according to data compiled at the request of The New York Times. That was a 49 percent increase from the year before.

“This is almost like the gun version of the fentanyl crisis,” Mayor Quinton Lucas of Kansas City, Mo., said in an interview.

Mr. Lucas, a Democrat, said he believes that the rising popularity of switches, especially among young people, is a major reason fewer gun violence victims are surviving in his city.

Homicides in Kansas City are approaching record highs this year, even as the number of nonfatal shootings in the city has decreased.

Switches come in various forms, but most are small Lego-like plastic blocks, about an inch square, that can be easily manufactured on a 3-D printer and go for around $200.

Law enforcement officials say the devices are turning up with greater frequency at crime scenes, often wielded by teens who have come to see them as a status symbol that provides a competitive advantage. The proliferation of switches also has coincided with broader accessibility of so-called ghost guns, untraceable firearms that can be made with components purchased online or made with 3-D printers.

“The gang wars and street fighting that used to be with knives, and then pistols, is now to a great extent being waged with automatic weapons,” said Andrew M. Luger, the U.S. attorney for Minnesota.

Switches have become a major priority for federal law enforcement officials. But investigators say they face formidable obstacles, including the sheer number in circulation and the ease with which they can be produced and installed at home, using readily available instruction videos on the internet. Many are sold and owned by people younger than 18, who generally face more lenient treatment in the courts.

Continue reading “”

There’s no way to rule innocent men. The only power any government has is the power to crack down on criminals.
Well, when there aren’t enough criminals, one makes them.
Ayn Rand


Oh England…………

Girl arrested over ‘lesbian nana’ comment will face no further action, police say

Police officer to whom the ‘lesbian nana’ comment was directed by teenager arrested for homophobic public order offence - TikTok/@nikitasnow84

Police officer to whom the ‘lesbian nana’ comment was directed by teenager arrested for homophobic public order offence – TikTok/@nikitasnow84© Provided by The Telegraph

A16-year-old girl arrested in Leeds after being accused of making a homophobic remark to a police officer will face no further action, West Yorkshire Police said.

A video uploaded to TikTok by her mother showed the autistic teenager being detained by seven officers outside her home in Leeds in the early hours of Monday Aug 7.

The force also said it will “take on board any lessons to be learned” after the footage of the arrest sparked criticism on social media.

The mother posted on TikTok: “This is what police do when dealing with autistic children. My daughter told me the police officer looked like her nana, who is a lesbian.

“The officer took it the wrong way and said it was a homophobic comment [it wasn’t].

“The officer then entered my home. My daughter was having panic attacks from being touched by them and they still continued to manhandle her.”

‘Releases girl from her bail’

A statement released by police on Friday said: “In relation to an incident in Leeds on Monday, where a 16-year-old girl was arrested on suspicion of a homophobic public order offence, West Yorkshire Police has now reviewed the evidence and made the decision to take no further action.

“This concludes the criminal investigation and immediately releases the girl from her bail. Her family has been updated.

“West Yorkshire Police’s Professional Standards Directorate is continuing to carry out a review of the circumstances after receiving a complaint in relation to the incident.”

Assistant Chief Constable Oz Khan said: “We recognise the significant level of public concern that this incident has generated, and we have moved swiftly to fully review the evidence in the criminal investigation which has led to the decision to take no further action.

“Without pre-empting the outcome of the ongoing review of the circumstances by our Professional Standards Directorate, we would like to reassure people that we will take on board any lessons to be learned from this incident.

“We do appreciate the understandable sensitivities around incidents involving young people and neurodiversity and we are genuinely committed to developing how we respond to these often very challenging situations.”

Cause of fire at Rand Paul’s office remains unknown as senator looks for answers

The cause of a fire at a building housing a Kentucky office of Sen. Rand Paul, R-Ky., remains unknown as investigators continue to search for answers as to what started the blaze that impacted a handful of buildings in downtown Bowling Green.

A Saturday Facebook post from the Bowling Green Fire Department provides a detailed account of a fire that engulfed the building located at 1029 State Street and nearby buildings early Friday morning. The office building houses Paul’s local office and a local law firm.

picture of the aftermath posted to the department’s Facebook page Friday shows an upper story of the building, with the numbers “1029” visible at the front entrance, partially collapsed. The department elaborated on the extent of the damage in the Saturday Facebook post.

“Yesterday at 01:45, BGFD responded to multiple reports of smoke and fire coming from the Presbyterian Church on State Street. Soon after these calls, more units were dispatched for a fire alarm at 1025 State Street,” the Saturday Facebook post stated.

Continue reading “”

Actually they’ve made more than one movie about this……..

‘World’s first mass-produced’ humanoid robot to tackle labour shortages amid ageing population.

The company behind GR-1 plans to release 100 units by the end of 2023 mainly targeting robotic R&D labs. GR-1 will be able to carry patients from the bed to wheelchairs and help pick up objects.

In China, the number of people aged 60 and over will rise from 280 million to more than 400 million by 2035, the country’s National Health Commission estimates.

To respond to the rising demand for medical services amid labour shortages and the ageing population, a Shanghai-based firm, Fourier Intelligence, is developing a humanoid robot that can be deployed in healthcare facilities.

“As we move forward, the entire GR-1 could be a caregiver, could be a therapy assistant, can be a companion at home for the elderly who stay alone,” said the CEO and Co-founder of Fourier Intelligence, Zen Koh.

Standing 1.64 metres tall and weighing 55 kilograms, GR-1 can walk, avoid obstacles and perform simple tasks like holding bottles.

“The system itself can achieve self-balance walking and perform different tasks. We can programme it to sit, stand and jump. You can programme the arms to pick up utensils and tools and perform tasks as the engineers desire,” said Koh.

Though still in the research and development phase, Fourier Intelligence hopes a working prototype can be ready in two to three years.

Once completed, the GR-1 will be able to carry patients from the bed to wheelchairs and help pick up objects.

The company has developed technology for rehabilitation and exoskeletons and says that the patients are already familiar with using parts of robotics to, for example, support the arms and legs in physical therapy.

Koh believes humanoid robots can fill the remaining gap.

“Eventually they [patients] will have an autonomous robotics that is interacting with them.”

GR-1 was presented at the World AI Conference in Shanghai along with Tesla’s humanoid robot prototype Optimus and other AI robots from Chinese firms.

Among those was X20, a quadrupedal robot developed to replace humans for doing dangerous tasks such as toxic gas detection.

“Our wish is that by developing these applications of robots, we can release people from doing dreary and dangerous work. In addition to the patrol inspection,” said Qian Xiaoyu, marketing manager of DEEP Robotics.

Xiaoyu added that the company is planning to develop X20 to be used for emergency rescue and fire detection in future, something “technically very challenging” according to him.

The World AI Conference runs until July 15.

Skynet brags…..

AI Robots Admit They’d Run Earth Better Than ‘Clouded’ Humans

A panel of AI-enabled humanoid robots told a United Nations summit on Friday that they could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly-developing potential of artificial intelligence.

And they admitted that they cannot – yet – get a proper grip on human emotions.

Some of the most advanced humanoid robots were at the UN’s two-day AI for Good Global Summit in Geneva.

Humanoid Robot Portrait

Humanoid Robot Portrait
Humanoid AI robot ‘Ameca’ at the summit. (Fabrice Coffrini/AFP)
They joined around 3,000 experts in the field to try to harness the power of AI – and channel it into being used to solve some of the world’s most pressing problems, such as climate change, hunger and social care.

They were assembled for what was billed as the world’s first press conference with a packed panel of AI-enabled humanoid social robots.

Continue reading “”

Paging Khan Noonien Singh. Paging Arik Soong.

Scientists Create Synthetic Human Embryo Models in Major First.

In a major scientific first, synthetic human embryo models have been grown in the lab, without any need for the usual natural ingredients of eggs and sperm.

The research – first brought to wider attention by The Guardian – has prompted excitement about the potential for new breakthroughs in health, genetics, and treating disease. But the science also raises serious ethical questions.

The embryo structures were produced from stem cells cultured from a traditional embryo in the lab. Stem cells can be programmed to develop into any kind of other cell – which is how they are used in the body for growth and repair.

Here, stem cells were carefully coaxed into becoming precursor cells that would eventually become the yolk sac, the placenta, and then the actual embryo itself.

A paper on the breakthrough has yet to be published, so we’re still waiting on the details of exactly how this was achieved.

The work was led by biologist Magdalena Żernicka-Goetz, from the University of Cambridge in the UK, together with colleagues from the UK and US. Last year, a team led by Zernicka-Goetz was able to successfully grow synthetic mouse embryos with primitive brains and hearts.

We should point out that we’re still a long way from creating babies artificially. These are embryo-like structures, without a heart or a brain: They’re more like embryo models that are able to mimic some, but not all, of the features of a normal embryo.

“It is important to stress that these are not synthetic embryos, but embryo models,” wrote Zernicka-Goetz on Twitter. “Our research isn’t to create life, but to save it.”

One of the ways in which this research could save lives is in helping to examine why many pregnancies fail at around the stage these artificial embryos replicate. If these earliest moments can be studied in a lab, we should get a much better understanding of them.

We could also use these techniques to learn more about how common genetic disorders develop at the earliest stages of life. Once there’s a greater knowledge about how they start, we’ll be better placed to do something about them.

At the same time, there are concerns around where this kind of synthetic embryo creation could lead. Scientists say strong regulations are needed to control this kind of research – regulations that at the moment don’t really exist.

“These new assays in vitro will pave the way for future studies that aim to unravel the mechanisms of human development, as well as the effects of environmental and genetic anomalies,” says biologist Rodrigo Suarez from the University of Queensland in Australia, who wasn’t involved in the research.

“As with most emerging technologies, society will need to balance the evidence about the risks and benefits of this approach, and update the current legislation accordingly.”

As pointed out by bioethics researcher Rachel Ankeny from the University of Adelaide, who wasn’t involved in the research, today scientists abide by a ’14-day rule’ which limits the use of human embryos in the lab, requiring that human embryos can only be cultivated in vitro for a maximum of 2 weeks.

Rules like this, as well as new ones that may be brought in as this research continues, force us to ask fundamental questions about when we consider ‘life’ beginning in an organism’s existence – and how close to a human embryo a synthetic embryo must be before it is considered essentially the same.

“We need to engage various publics about their understanding of and expectations from this sort of research, and more generally about their views on early human development,” says Ankeny.

“These biological processes are deeply tied to our values and what we think counts as human life.”

The research has yet to be peer-reviewed or published, and was presented at the annual meeting of the International Society for Stem Cell Research.

June 6

1755 – Nathan Hale, patriot and Revolutionary War spy, is born in Coventry Connecticut.

1799 – Patrick Henry, American lawyer and politician, 1st Governor of Virginia dies, age 63 at Red Hill Virginia.

1813 – At the Battle of Stoney Creek, a British force under John Vincent defeats an American force twice its size under William Winder and John Chandler.

1844 – The Young Men’s Christian Association (YMCA) is founded in London.

1889 – In downtown Seattle, an accidentally overturned glue pot in the Clairmont and Company cabinet shop in the basement of the Pontius building starts “The Great Seattle Fire” which destroys 25 city blocks, including the entire downtown business district, 4 of the city’s wharves, and its railroad terminals, but only causing 1 known death

1894 – Governor Davis H. Waite orders the Colorado state militia to protect and support the the Western Federation of Miners workers engaged in the Cripple Creek miners’ strike.

1912 – On the Alaskan Katmai peninsula, the largest volcanic eruption of the 20th century forms the Novarupta volcano.

1918 – During the Battle of Belleau Wood in World War I, while attempting to recapture the wood at Château-Thierry, the U.S. Marine Corps suffers 1087 casualties, more than it has taken in total before, and its worst single day’s count until the Battle of Tarawa in 1943.

1925 – The original Chrysler Corporation is founded by Walter Chrysler from the remains of the Maxwell Motor Company.

1934 – As part of the ‘New Deal’,  President Franklin D. Roosevelt signs the Securities Exchange Act into law, establishing the U.S. Securities and Exchange Commission.

1942 – During World War II, northwest of Midway island, U.S. Navy dive bombers flying off the carriers, Hornet, Yorktown and Enterprise, attack a Japanese invasion force and sink the Japanese navy cruiser Mikuma and 4 carriers; Akagi, Kaga, Soryu and Hiryu, which had participated in the attack on Pearl Harbor, 6 months earlier.

1944 – During World War II, 155,000 Allied troops begin the invasion of France with landings on Normandy beaches along with airborne parachute and glider assaults further inland.

1971 – Hughes Airwest Flight 706,  a McDonnell Douglas DC-9, collides in midair with a Marine Corps F-4 Phantom jet over the San Gabriel Mountains, north of Los Angeles, killing all 49 passengers and crew aboard the commercial jet and the pilot of the fighter.

1985 – The grave of “Wolfgang Gerhard” is opened in Embu, Brazil. The  remains are later proven to be those of Josef Mengele, Auschwitz’s “Angel of Death”.

2002 – A near Earth asteroid estimated at 10 meters in diameter explodes over the Mediterranean Sea between Greece and Libya. The explosion is estimated to have a force of 26 kilotons of TNT, slightly more powerful than the Nagasaki atomic bomb.

Skynet, and HAL, smile……..

AI-Enabled Drone Attempts To Kill Its Human Operator In Air Force Simulation.

AI – is Skynet here already?

Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.  Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD [Suppression of Enemy Air Defensesmission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.

Poll: 61% of Americans say AI threatens humanity’s future.

A majority of Americans believe that the rise of artificial intelligence technology could put humanity’s future in jeopardy, according to a Reuters/Ipsos poll published on Wednesday. The poll found that over two-thirds of respondents are anxious about the adverse effects of AI, while 61 percent consider it a potential threat to civilization.

The online poll, conducted from May 9 to May 15, sampled the opinions of 4,415 US adults. It has a credibility interval (a measure of accuracy) of plus or minus two percentage points.

The poll results come amid the expansion of generative AI use in education, government, medicine, and business, triggered in part by the explosive growth of OpenAI’s ChatGPT, which is reportedly the fastest-growing software application of all time. The application’s success has set off a technology hype race among tech giants such as Microsoft and Google, which stand to benefit from having something new and buzzy to potentially increase their share prices.

Fears about AI, justified or not, have been rumbling through the public discourse lately due to high-profile events such as the “AI pause” letter and Geoffery Hinton resigning from Google. In a recent high-profile case of AI apprehension, OpenAI CEO Sam Altman testified before US Congress on Tuesday, expressing his concerns about the potential misuse of AI technology and calling for regulation that, according to critics, may help his firm retain its technological lead and suppress competition.

Lawmakers seem to share some of these concerns, with Sen. Cory Booker (D-NJ) observing, “There’s no way to put this genie in the bottle. Globally, this is exploding,” Reuters reported.

This negative scare messaging seems to be having an impact. Americans’ fears over AI’s potential for harm far outweigh optimism about its benefits, with those predicting adverse outcomes outnumbering those who don’t by three to one. “According to the data, 61% of respondents believe that AI poses risks to humanity, while only 22% disagreed, and 17% remained unsure,” wrote Reuters.

The poll also revealed a political divide in perceptions of AI, with 70 percent of Donald Trump voters expressing greater concern about AI versus 60 percent of Joe Biden voters. Regarding religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to human civilization, at 32 percent, compared to 24 percent of non-evangelical Christians.

Reuters reached out to Landon Klein, director of US policy of the Future of Life Institute, which authored the open letter that asked for a six-month pause in AI research of systems “more powerful” than GPT-4. “It’s telling such a broad swatch of Americans worry about the negative effects of AI,” Klein said. “We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action.”

Meanwhile, another group of AI researchers led by Timnit Gebru, Emily M. Bender, and Margaret Mitchell (three authors of a widely cited critical paper on large language models) say that while AI systems are indeed potentially harmful, the prevalent worry about AI-powered apocalypse is misguided. They prefer to focus instead on “transparency, accountability, and preventing exploitative labor practices.”

Another issue with the poll is that AI is a nebulous term that often means different things to different people. Almost all Americans now use “AI” (and software tools once considered “AI”) in our everyday lives without much notice or fanfare, and it’s unclear if the Reuters/Ipsos poll made any attempt to make that type of distinction for its respondents. We did not have access to the poll methodology or raw poll results at press time.

Along those lines, Reuters quoted Ion Stoica, a UC Berkeley professor and co-founder of AI company Anyscale, pointing out this potential contradiction. “Americans may not realize how pervasive AI already is in their daily lives, both at home and at work,” he said.

Trading Privacy for Convenience: Starbucks’ Biometric Experiment With Palm Payments in Washington Town

Starbucks has launched a trial of Amazon’s palm payment system Amazon One in a community north of Seattle, Washington. The coffee chain has already tried Amazon Go at concept stores built in partnership with Amazon in the city of New York.

The new trial will take place in a waterfront community north of Seattle called Edmonds. Starbucks appears to be testing if older people, who are more resistant to new technologies, will welcome the idea of biometrics payments, The Spoon reported.

Reception of the technology has been mixed, with attendants reporting that older people are more skeptical of the technology.

“They’re kind of freaked out by it,” an in-store attendant told Forbes. “It’s an older town, so some people aren’t interested.”

Starbucks is not yet forcing people to use Amazon One. Other payment options are still available.

Those interested in using the system are required to register their palm at an in-store kiosk. From there they can use the contactless payment system at stores with Amazon One.

They actually did make a movie about this

Comatose People to be Declared Dead for Use as Organ Donors

The law that redefined death in 1981, referred to as the Uniform Determination of Death Act (UDDA), is being revised.  The UDDA states that death by neurologic criteria must consist of “irreversible cessation of all functions of the entire brain, including the brainstem.”  However, in actual practice, doctors examine only the brainstem.  The result is that people are being declared dead even though some still have detectable brainwaves, and others still have a part of the brain that functions, the hypothalamus.

Lawyers have caught on, pointing out in lawsuits that the whole brain standard was not met for their clients.  As a result, the Uniform Law Commission (ULC) is working on updates to the UDDA based on proposals from the American Academy of Neurology (AAN).

In the interest of preventing lawsuits, the AAN is asking that the neurologic criteria of death be loosened even further and standardized across the United States.  The revised UDDA is referred to as the RUDDA.  Below is the proposal drafted at the February session of the ULC, which will be debated this summer:

Section § 1. [Determination of Death]

An individual who has sustained either (a) permanent cessation of circulatory and respiratory functions or; (b) permanent coma, permanent cessation of spontaneous respiratory functions, and permanent loss of brainstem reflexes, is dead. A determination of death must be made in accordance with accepted medical standards.

Notice that the new neurological standard under (b) does not use the term “irreversible,” nor does it include the loss of whole-brain function.  The term “permanent” is being defined to mean that physicians do not intend to act to reverse the patient’s condition.  Thus, people in a coma whose prognosis is death will be declared dead under this new standard.  An unresponsive person with a beating heart on a ventilator is not well, but he is certainly not dead!  The Catholic Medical Association and the Christian Medical and Dental Association have written letters to the ULC protesting these changes.

Continue reading “”

Clearview AI Scraped Billions of Facebook Photos for Facial Recognition Database

Facial recognition firm Clearview has built a massive AI-powered database of billions of pictures collected from social media platforms without obtaining users’ consent.

In late March, Clearview AI CEO Hoan Ton-That told BBC in an interview that the company had obtained 30 billion photos without users’ knowledge over the years, scraped mainly from social media platforms like Facebook. He said US law enforcement agencies use the database to identify criminals.

Ton-That disputed claims that the photos were unlawfully collected. He told Business Insider in an emailed statement, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

However, privacy advocates and social media companies have been highly critical of Clearview AI.

“Clearview AI’s actions invade people’s privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services,” a Meta spokesperson said in an email to Insider. 

Ton-That told Insider the database is not publicly available and is only used by law enforcement. He said the software had been used more than a million times by police.

“Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

According to critics, using Clearview AI by the police subjects everyone to a “continuous police line-up.”

“Whenever they have a photo of a suspect, they will compare it to your face,” Matthew Guariglia from the Electronic Frontier Foundation, told BBC. He said, “It’s far too invasive.”

The AI-driven database has raised privacy concerns in the US to the point where Sens. Jeff Merkley and Bernie Sanders attempted to block its use with a bill requiring Clearview and similar companies to obtain consent before scraping biometric data.

In 2020, the American Civil Liberties Union sued Clearview AI, calling it a ‘nightmare scenario’ for privacy. The ACLU managed to ban Clearview AI’s products from being sold to private companies but not the police.

Clearview AI is a massive problem for civil liberties. The easiest way to prevent Clearview AI from scraping photos from your social media accounts is to not be on social media. Alternatively, if you wish to maintain a social media presence, ensure that the images you post are not publicly accessible on the web.