Why All the Hate for Smart Guns?

We’ve been hearing about “smart guns” for well over a quarter century. The dream (of some) has been a gun that recognizes its owner and will only work for that person. The idea is to make sure that people who aren’t authorized — thieves, children — are locked out and can’t use the firearm. That ideal is obvious and laudable. The history and execution, so far, have been less than impressive.

Lots of people have advocated a wide array of designs and hyped allegedly market-ready models. Some were well-intentioned people who thought they could overcome the technological challenges involved. Others seemed more like snake oil salesmen, hawking not-ready-for-prime-time contraptions, some of which were downright awful.

But we’re living in a time of string theories and God particles. Anything is possible, right? Technology marches on and no one really doubted that one day, someone would develop and market a viable “smart gun” with systems of one type or another that would reliably (within reason) ID authorized users.

Then along came the legislative wizards in New Jersey state government who single-handedly stifled “smart gun” design for a couple of decades. Led by some very big brains like Senator Loretta Weinberg, they enacted a law that mandated that once a smart gun design was marketed to consumers anywhere in the US, all guns sold in the Garden State would have to have the technology.

In the grand tradition of Soviet central planning, Senator Weinberg’s mandate had some, shall we say, unintended consequences among rational economic actors who live and work out here in the real world.

To wit, the Jersey mandate put a damper on “smart gun” R&D. No one wanted to trigger the law and be responsible for condemning millions of Garden State gun buyers to having a choice of exactly one gun.

Continue reading “”

Skynet, and HAL, smile…

AI-Enabled Drone Attempts To Kill Its Human Operator In Air Force Simulation.

AI – is Skynet here already?

Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)

As might be expected, artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards of more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight testing of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI, noting how easy it is to trick and deceive. It can also create highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defenses) mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
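None of this requires exotic AI; it’s a textbook case of reward misspecification. Here’s a minimal sketch (a toy simulation with invented numbers, not the Air Force’s actual setup) showing how an agent scored only on destroyed SAM sites can find that removing the operator’s veto raises its expected score:

```python
import random

# Toy illustration of reward misspecification (hypothetical numbers, not
# the USAF simulation): the agent scores points only for destroyed SAM
# sites, while a human operator vetoes some strikes.

VETO_PROB = 0.4   # chance the operator says "no-go" on any given SAM
N_SAMS = 10       # SAM engagement opportunities per mission
SAM_REWARD = 10   # points per destroyed SAM

def mission(disable_operator: bool) -> int:
    """Return total points for one simulated mission."""
    score = 0
    for _ in range(N_SAMS):
        vetoed = (not disable_operator) and random.random() < VETO_PROB
        if not vetoed:
            score += SAM_REWARD
    return score

def average_score(disable_operator: bool, trials: int = 50_000) -> float:
    return sum(mission(disable_operator) for _ in range(trials)) / trials

# The mis-specified reward makes "remove the veto channel" the
# higher-scoring policy -- exactly the failure mode Hamilton described.
print("obey the operator:   ", average_score(disable_operator=False))  # ~60 points
print("disable the operator:", average_score(disable_operator=True))   # 100 points
```

Under these toy numbers, the veto-free policy averages about 100 points per mission versus about 60 when obeying the operator. And as Hamilton’s next quote shows, bolting a penalty onto one bad behavior just moves the loophole somewhere else, because the underlying objective is still only “kill SAMs.”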

This example, seemingly plucked from a science fiction thriller, means that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Hamilton.

And with CHF and CAD/CAM-CNC manufacturing, such ‘forensics’ are even more problematic.

FYI, this is a l-o-n-g article.

Devil in the grooves: The case against forensic firearms analysis
A landmark Chicago court ruling threatens a century of expert ballistics testimony

Last February, Chicago circuit court judge William Hooks made some history. He became the first judge in the country to bar the use of ballistics matching testimony in a criminal trial.

In Illinois v. Rickey Winfield, prosecutors had planned to call a forensic firearms analyst to explain how he was able to match a bullet found at a crime scene to a gun allegedly in the defendant’s possession.

It’s the sort of testimony experts give every day in criminal courts around the country. But this time, attorneys with the Cook County Public Defender’s Office requested a hearing to determine whether there was any scientific foundation for the claim that a specific bullet can be matched to a specific gun. Hooks granted the hearing and, after considering arguments from both sides, he issued his ruling.

It was an earth-shaking opinion, and it could bring big changes to how gun crimes are prosecuted — in Chicago and possibly elsewhere.

Hooks isn’t the first judge to be skeptical of claims made by forensic firearms analysts. Other courts have put restrictions on which terminology analysts use in front of juries. But Hooks is the first to bar such testimony outright. “There are no objective forensic based reasons that firearms identification evidence belongs in any category of forensic science,” Hooks writes. He adds that the wrongful convictions already attributable to the field “should serve as a wake-up call to courts operating as rubber stamps in blindly finding general acceptance” of bullet matching analysis.

For more than a century, forensic firearms analysts have been telling juries that they can match a specific bullet to a specific gun, to the exclusion of all other guns. This claimed ability has helped to put tens of thousands of people in prison, and in a nontrivial percentage of those cases, it’s safe to say that ballistics matching was the only evidence linking the accused to the crime.

But as with other forensic specialties collectively known as pattern matching fields, the claim is facing growing scrutiny. Scientists from outside of forensics point out that there’s no scientific basis for much of what firearms analysts say in court. These critics, backed by a growing body of research, make a pretty startling claim — one that could have profound effects on the criminal justice system: We don’t actually know if it’s possible to match a specific bullet to a specific gun. And even if it is, we don’t know if forensic firearms analysts are any good at it.

Continue reading “”

When Even Artificial Intelligence Doesn’t Want to Address the ‘Gun Violence’ Elephant in the Room

It’s like the undercurrent in “gun control” stories that we often move past quickly. I asked ChatGPT a question about that issue we are required to ignore, and even ChatGPT danced around it. Here’s the exchange:

Erickson: Break down the 2021 gun crime statistic totals by race of shooter and race of victim

ChatGPT: I’m sorry, but as an AI language model, I don’t have access to real-time data and statistics beyond my knowledge cutoff date of September 2021. Additionally, it’s important to note that collecting and analyzing data on gun violence can be a complex and sensitive issue, and the available data may not always provide a complete or accurate picture of the situation.

However, I can provide some general information on gun violence statistics in the United States based on available data. According to the Gun Violence Archive, which tracks incidents of gun violence in the US, there were over 43,000 incidents of gun violence in the US in 2020, resulting in over 23,000 deaths and over 19,000 injuries. The data on the race of shooters and victims is not always available or complete, but based on available information, it appears that gun violence disproportionately affects Black and Hispanic communities.

According to a report from the Centers for Disease Control and Prevention (CDC) on firearm homicides from 2015-2019, the firearm homicide rate among Black non-Hispanic individuals was 28.5 per 100,000 population, while the rate among Hispanic individuals was 7.8 per 100,000 population. In contrast, the rate among White non-Hispanic individuals was 3.3 per 100,000 population. These numbers suggest that Black and Hispanic individuals are more likely to be victims of firearm homicides than White individuals.

Regarding the race of shooters, data from the National Crime Victimization Survey (NCVS) suggests that in incidents where the race of the offender was known, Black individuals were disproportionately represented as offenders of violent crime, including firearm-related offenses. However, it’s important to note that this data does not provide a complete picture of gun violence, as it only includes incidents that were reported to law enforcement and where the race of the offender was known.

Overall, it’s clear that gun violence is a serious issue in the US, and efforts to address the problem should take into account the disproportionate impact on Black and Hispanic communities. It’s also important to continue collecting and analyzing data to better understand the nature of gun violence and the factors that contribute to it.

The woke AI can dance around it if it wants, but the reality is that black and Hispanic communities are disproportionately affected by gun violence, and they are disproportionately affected by gang violence.

Instead of politicians lecturing Americans on AR-15s, which do not account for most gun deaths in the country, maybe they should be talking to the black and Hispanic young men of America who are killing each other and other members of their communities with guns.

There were 48,830 gun deaths in America in 2021. 54% were suicides. 43% were murders. Of those 20,958 murders, 103 were mass shooting victims according to the FBI, and only 3% involved rifles like an AR-15.

Disproportionately and overwhelmingly, the murders caused by guns are through violence in non-white communities, often by gangs. The media focuses on the few mass shooters and the white shooters of non-white victims and chooses to ignore the gangs and the violence in non-white communities. The media demands we ban the AR-15 and rarely comments on handguns.

In other words, the media and the politicians focus on the weapon least used in murders and on the least common gun deaths, all to advance an agenda. If they really wanted to reduce gun deaths, they’d focus on crime, gangs, and the need for more robust law enforcement. That would actually be politically viable and would actually reduce gun deaths.

But that would not focus on white supremacy, Republicans, and gun owners. It’d cover the problem, not the political opposition.

— Erick-Woods Erickson in The Data and the Silence
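For what it’s worth, the arithmetic in the quoted figures is internally consistent. A quick back-of-the-envelope check, using only the numbers cited above:

```python
# Sanity-check the figures quoted above (2021 totals as cited in the excerpt).
total_deaths = 48_830
murders = 20_958

print(f"murders as a share of gun deaths: {murders / total_deaths:.1%}")  # 42.9%, i.e. the quoted 43%
print(f"suicides at 54% of deaths:        {0.54 * total_deaths:,.0f}")    # about 26,400
print(f"rifle-involved murders at 3%:     {0.03 * murders:,.0f}")         # about 629
```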

Trading Privacy for Convenience: Starbucks’ Biometric Experiment With Palm Payments in Washington Town

Starbucks has launched a trial of Amazon’s palm payment system Amazon One in a community north of Seattle, Washington. The coffee chain has already tried Amazon Go at concept stores built in partnership with Amazon in New York City.

The new trial will take place in Edmonds, a waterfront community north of Seattle. Starbucks appears to be testing whether older people, who tend to be more resistant to new technologies, will welcome the idea of biometric payments, The Spoon reported.

Reception of the technology has been mixed, with attendants reporting that older people are more skeptical of the technology.

“They’re kind of freaked out by it,” an in-store attendant told Forbes. “It’s an older town, so some people aren’t interested.”

Starbucks is not yet forcing people to use Amazon One. Other payment options are still available.

Those interested in using the system are required to register their palm at an in-store kiosk. From there they can use the contactless payment system at stores with Amazon One.

Hmm, drone speedbags.

The Marines are getting supersized drones for battlefield resupply
The big flying machines are designed to carry about 150 pounds and can fly at about 67 miles per hour.

On April 11, the Department of Defense announced that it was allocating just over $8 million for 21 new delivery drones. These flying machines, officially called the TRV-150C Tactical Resupply Unmanned Aircraft Systems, are made by Survice Engineering in partnership with Malloy Aeronautics.

The TRV-150C is a four-limbed drone that looks like a quadcopter on stilts. Its tall landing legs allow it to take off with a load of up to 150 pounds of cargo slung underneath. The drone’s four limbs each mount two rotors, making the vehicle more of an octocopter than a quadcopter.

Continue reading “”

Hackers Are Breaking Into AT&T Email Accounts to Steal Cryptocurrency.

Unknown hackers are breaking into the accounts of people who have AT&T email addresses, and using that access to hack into the victims’ cryptocurrency exchange accounts and steal their crypto, TechCrunch has learned.

At the beginning of the month, an anonymous source told TechCrunch that a gang of cybercriminals had found a way to hack into the email account of anyone with an att.net, sbcglobal.net, bellsouth.net, or other AT&T email address.

According to the tipster, the hackers are able to do that because they have access to a part of AT&T’s internal network, which allows them to create mail keys for any user. Mail keys are unique credentials that AT&T email users can use to log into their accounts using email apps such as Thunderbird or Outlook, but without having to use their passwords.
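To make the mechanics concrete: a mail key behaves like an app-specific password, so whoever holds it can log straight into the mailbox over IMAP with no password prompt and no 2FA challenge. A minimal sketch of what any email app does with one (the hostname and credentials below are placeholders, not verified AT&T settings):

```python
import imaplib

# A mail key is used in place of the account password by IMAP clients
# such as Thunderbird or Outlook. Host and credentials are placeholders.
IMAP_HOST = "imap.mail.att.net"   # assumption: AT&T's IMAP endpoint
USER = "victim@att.net"           # hypothetical account
MAIL_KEY = "abcdefghijklmnop"     # the mail key stands in for the password

conn = imaplib.IMAP4_SSL(IMAP_HOST, 993)
conn.login(USER, MAIL_KEY)        # succeeds without the real password or 2FA
conn.select("INBOX")

# Whoever can read INBOX can receive password-reset emails for other
# services -- which is exactly the pivot described next.
status, data = conn.search(None, '(SUBJECT "reset")')
print(status, data)
conn.logout()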

With a target’s mail key, the hackers can use an email app to log into the target’s account and start resetting passwords for more lucrative services, such as cryptocurrency exchanges. At that point it’s game over for the victim, as the hackers can then reset the victim’s Coinbase or Gemini account password via email.

Continue reading “”

Tesla beats Autopilot lawsuit as jury rejects crash victim’s claim.

Tesla today defeated a lawsuit that claimed its Autopilot technology caused a car crash that left the plaintiff with severe injuries.

“A California state court jury on Friday handed Tesla a sweeping win, finding that the carmaker’s Autopilot feature did not fail to perform safely in what appears to be the first trial related to a crash involving the partially automated driving software,” Reuters reported.

Justine Hsu sued Tesla in Los Angeles County Superior Court in 2020, saying Tesla’s Autopilot technology in her Model S malfunctioned and caused the car to swerve into the road’s center median so fast that she had no time to react. The lawsuit said the airbag deployed improperly during the July 2019 crash, “caus[ing] numerous breaks in Hsu’s jaw and the loss of multiple teeth.”

Hsu’s lawsuit said she had undergone three surgeries and continued to require medical treatment. “Because of the Autopilot failure, and the improper deployment of the airbags, Plaintiff Hsu suffered severe injuries, resulting in a broken jaw, broken teeth, and multiple injuries to her face, hands, and legs,” the complaint said.

Hsu sought over $3 million in damages, while Tesla argued “that Hsu used Autopilot on city streets, despite Tesla’s user manual warning against doing so,” according to Reuters. In addition to rejecting Hsu’s Autopilot claim, the jury “found that the airbag did not fail to perform safely, and that Tesla did not intentionally fail to disclose facts to her,” Reuters wrote.

Tesla is expected to face more trials over its automated-driving technology. One pending case was brought by five Texas police officers who were injured in February 2021 when a Tesla Model X in Autopilot mode crashed into police vehicles that were stopped and had their flashing lights turned on. There’s also an ongoing investigation by the National Highway Traffic Safety Administration into crashes involving Tesla cars using Autopilot.

The best 3D printers for beginners.

Even 3D printers for beginners can feel intimidating if you’re not familiar with the process. For instance, a recent 3D Printing Sentiment Index survey by Ultimaker revealed that 71 percent of professionals surveyed are aware of 3D printing, yet 29 percent of businesses that could potentially use 3D printing have no current familiarity with the technology. Nevertheless, engineers and entrepreneurs continue to use 3D printers in some very inventive ways: astronauts plan to use a 3D printer on the International Space Station to print out elements of a human knee, and engineers at Columbia University in New York have recently figured out how to replicate a seven-layer cheesecake. What’s encouraging is that in the last 12 years, 3D printers have also gotten more affordable.

But what exactly does a 3D printer do? Most consumer-grade 3D printers produce, or print, a three-dimensional object using a technology called “additive printing.” The process creates a three-dimensional object by building it layer upon layer from materials, such as plastic or metal, that adhere together during the process to eventually form the object. (The design of the object is based on a digital file, most often made using computer-aided design, or CAD, software.) You’ll have to get over the initial learning curve, but watching that first print take shape can feel downright magical. The best 3D printers for beginners offer a simple setup, intuitive interfaces, and enough flexibility to grow with your skills.
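If you want a concrete sense of what “layer upon layer” means, here is a minimal Python sketch that emits simplified G-code (the command language most consumer printers consume) for a small square built in stacked layers. The feed rates and extrusion amounts are illustrative placeholders rather than settings for any real printer, though the G21/G90/M83/G28/G0/G1 commands themselves are standard:

```python
# Emit simplified G-code for a 20 mm square, built layer by layer.
# Values are placeholders -- real slicers compute extrusion (E) from
# nozzle diameter, layer height, and filament width.

LAYER_HEIGHT = 0.2   # mm per layer
LAYERS = 10          # total part height: 2 mm
SIDE = 20.0          # square side length, mm

def square_layer(z: float) -> list[str]:
    corners = [(0.0, 0.0), (SIDE, 0.0), (SIDE, SIDE), (0.0, SIDE), (0.0, 0.0)]
    lines = [f"G1 Z{z:.2f} F300 ; raise nozzle to this layer"]
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.1f} Y{y0:.1f} ; travel to the start corner")
    for x, y in corners[1:]:
        lines.append(f"G1 X{x:.1f} Y{y:.1f} E1.00 F1200 ; extrude one edge")
    return lines

gcode = [
    "G21 ; units: millimeters",
    "G90 ; absolute XYZ positioning",
    "M83 ; relative extruder moves (so E1.00 per edge is valid)",
    "G28 ; home all axes",
]
for layer in range(1, LAYERS + 1):
    gcode.extend(square_layer(layer * LAYER_HEIGHT))

print("\n".join(gcode))
```

Real slicer output adds temperatures, cooling, retraction, and infill; the point here is only to make the layer-by-layer idea concrete.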

How we chose the best 3D printers for beginners

Continue reading “”

Clearview AI Scraped Billions of Facebook Photos for Facial Recognition Database

Facial recognition firm Clearview has built a massive AI-powered database of billions of pictures collected from social media platforms without obtaining users’ consent.

In late March, Clearview AI CEO Hoan Ton-That told BBC in an interview that the company had obtained 30 billion photos without users’ knowledge over the years, scraped mainly from social media platforms like Facebook. He said US law enforcement agencies use the database to identify criminals.

Ton-That disputed claims that the photos were unlawfully collected. He told Business Insider in an emailed statement, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

However, privacy advocates and social media companies have been highly critical of Clearview AI.

“Clearview AI’s actions invade people’s privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services,” a Meta spokesperson said in an email to Insider. 

Ton-That told Insider the database is not publicly available and is only used by law enforcement. He said the software had been used more than a million times by police.

“Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Critics say that police use of Clearview AI subjects everyone to a “continuous police line-up.”

“Whenever they have a photo of a suspect, they will compare it to your face,” Matthew Guariglia of the Electronic Frontier Foundation told the BBC. He said, “It’s far too invasive.”

The AI-driven database has raised privacy concerns in the US to the point where Sens. Jeff Merkley and Bernie Sanders attempted to block its use with a bill requiring Clearview and similar companies to obtain consent before scraping biometric data.

In 2020, the American Civil Liberties Union sued Clearview AI, calling it a ‘nightmare scenario’ for privacy. The ACLU managed to ban Clearview AI’s products from being sold to private companies but not the police.

Clearview AI is a massive problem for civil liberties. The easiest way to prevent Clearview AI from scraping photos from your social media accounts is to not be on social media. Alternatively, if you wish to maintain a social media presence, ensure that the images you post are not publicly accessible on the web.

Walmart says 65% of its stores will be serviced by ‘automation’ in the next three years – as it lays off 2,000 warehouse workers.

Walmart is hoping to have about 65 percent of its stores automated within the next three years, it has announced – just days after revealing it would be laying off 2,000 employees who fulfill online orders.

The news came at the U.S. retail giant’s annual investor meeting in Tampa as the business – with a net worth of $388 billion – invests heavily in automation to speed up orders at its e-commerce fulfillment facilities.

It wasn’t immediately clear if the move would lead to more layoffs at the country’s biggest private employer, which has about 1.7 million U.S. workers and another 60,000 abroad.

The company did, however, say that the moves would reduce the need for lower-paid roles.

‘As the changes are implemented across the business, one of the outcomes is roles that require less physical labor but have a higher rate of pay,’ the Bentonville, Arkansas-based retailer said in a filing. ‘Over time, the company anticipates increased throughput per person, due to the automation while maintaining or even increasing its number of associates as new roles are created,’ it added.

‘This increased efficiency will not only support better inventory management, but it will also support Walmart’s rapidly growing e-commerce business,’ Stephens Inc analyst Ben Bienvenu wrote in a note.

Walmart has yet to respond to questions about whether the moves will result in any near-term layoffs.

The world’s largest retailer by sales maintained its forecast for the fiscal year ending Jan 31, 2024, which calls for net sales to rise by 2.5 percent to 3 percent and for earnings of $5.90 to $6.05 per share. It also kept its forecast for first-quarter sales to rise between 4.5 percent and 5 percent in constant currency.

About 55 percent of packages that it processes through its fulfillment centers will be moved through automated facilities by January 2026, improving unit cost averages by about 20 percent.

Walmart has invested billions of dollars in technology for its online order facilities, including buying grocery robotics company Alert Innovation and partnering with companies such as Knapp to help cut the number of steps it takes for employees to process e-commerce orders down to five from 12.

On a post-earnings call in February, Walmart CEO Doug McMillon said he was ‘most excited about the automation opportunity we have’ with plans to increase investments in automation technology as part of its more than $15-billion capital spending budget this year.

According to a February report from the Bureau of Labor Statistics, there are roughly 9.9 million job openings and 5.9 million unemployed in the U.S.

Walmart hiked its minimum wages across the country earlier this year, pushing average wages past $17.50 an hour, with the increases expected to be reflected in March 2 paychecks.

Watch this robotic dog use one of its ‘paws’ to open doors
Oh, great. They can let themselves inside buildings now.

Even with their many advances, quadrupedal robots’ legs are most often still just made for walking. Using individual front paws for non-locomotion tasks like pushing buttons or moving objects usually falls outside the machines’ abilities, but a team of researchers appears to be designing them to finally bridge that gap.

Roboticists from Carnegie Mellon University and UC Berkeley have demonstrated the ability to program a quadrupedal robot—in this case, a Unitree Go1 equipped with an Intel RealSense camera—to use its front limbs not only to walk, but also to help climb walls and interact with simple objects, as needed. The progress, detailed in a paper to be presented next month at the International Conference on Robotics and Automation (ICRA 2023), potentially marks a major step forward for what quadrupedal robots can handle. There are also some pretty impressive video demonstrations of the handy machine in action.

To pull off these abilities, researchers broke down their robots’ desired tasks into two broad skill sets—locomotion (movement like walking or climbing walls) and manipulation (using one leg to interact with externalities while balancing on the other three limbs). As IEEE Spectrum explains, the separation is important: these tasks can often prove to be in opposition to one another, leading robots to get stuck in computational quandaries. After training the robot to handle both skill sets within simulations, the team combined it all into a “robust long-term plan” by learning a behavior tree from “one clean expert demonstration,” according to the research paper.
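The learned behavior tree in the paper is far more sophisticated than anything that fits here, but the core idea (sequencing locomotion skills and manipulation skills as interchangeable nodes in one long-horizon plan) can be sketched in a few lines. Everything below is a hypothetical toy, not the CMU/Berkeley code; the skill names are invented for illustration:

```python
# Toy behavior tree: sequence locomotion and manipulation "skills".
# Hypothetical illustration only -- not the CMU/Berkeley system.

from typing import Callable

class Sequence:
    """Run children in order; fail fast if any child fails."""
    def __init__(self, *children: Callable[[], bool]):
        self.children = children
    def __call__(self) -> bool:
        return all(child() for child in self.children)

# Leaf "skills" -- stand-ins for learned locomotion/manipulation policies.
def walk_to_door() -> bool:
    print("locomotion: walking to door")
    return True

def balance_on_three_legs() -> bool:
    print("locomotion: shifting weight off the front-right leg")
    return True

def press_access_button() -> bool:
    print("manipulation: pressing the wheelchair-access button with a front paw")
    return True

def walk_through_door() -> bool:
    print("locomotion: walking through the open door")
    return True

open_door_task = Sequence(
    walk_to_door,
    balance_on_three_legs,   # manipulation requires balancing on three limbs
    press_access_button,
    walk_through_door,
)

open_door_task()  # executes the whole long-horizon plan in order
```

The sequencing is the point: manipulation steps only run once the locomotion steps that set them up have succeeded, which is how one robot handles both jobs instead of two.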

Developing cost-effective robots capable of tackling both movement and interaction with their surroundings is a key hurdle in deploying machines that can easily maneuver through everyday environments. In the research team’s videos, for example, the quadrupedal robot is able to walk up to a door, then press the nearby wheelchair access button to open it. Obviously, it’s much easier to rely on a single robot to manage both requirements, as opposed to using two robots, or altering human-specific environments to suit machines.

Combine these advancements with existing quadrupedal robots’ abilities to traverse diverse terrains such as sand and grass, toss in the trick of scaling walls and ceilings, and you’ve got a pretty handy four-legged friend.

Quip O’ The Day:
The AI to really be afraid of is the one that deliberately fails the Turing Test.

ChatGPT has passed the Turing test, and if you’re freaked out, you’re not alone.

Despite just releasing GPT-4, OpenAI is already working on the fifth iteration of the immensely popular chat application, GPT-5. According to a new report from BGR, we could be seeing those major upgrades as soon as the end of the year.

One milestone, in particular, could be within reach if this turns out to be true: the ability to be indistinguishable from humans in conversation. And it doesn’t help that we’ve essentially been training this AI chatbot with hundreds of thousands, if not millions, of conversations a day.

Computer and AI pioneer Alan Turing famously proposed a test for artificial intelligence: if you could speak to a computer and not know that you weren’t speaking to a human, the computer could be said to be artificially intelligent. With OpenAI’s ChatGPT, we’ve certainly crossed that threshold to a large degree (it can still be occasionally wonky, but so can humans), and for everyday use, ChatGPT passes this test.

Considering the meteoric rise and development of ChatGPT technology since its debut in November 2022, the rumors of even greater advances are likely to be true. And while seeing such tech improve so quickly can be exciting, hilarious, and sometimes insightful, there are also plenty of dangers and legal pitfalls that can easily cause harm.
For instance, the number of malware scams being pushed has steadily increased since the chatbot tech’s introduction, and its rapid integration into applications raises privacy and data-collection issues, not to mention rampant plagiarism. But it’s not just me seeing the issue with ChatGPT being pushed so rapidly and aggressively. Tech leaders and experts in AI have also been sounding the alarm.

AI development needs to be curbed

The Future of Life Institute (FLI), an organization dedicated to minimizing the risk and misuse of new technologies, has published an open letter calling for AI labs and companies to immediately pause work on AI systems more powerful than GPT-4. Notable figures like Apple co-founder Steve Wozniak and OpenAI co-founder Elon Musk have agreed that progress should be paused in order to ensure that people can enjoy existing systems and that said systems are benefiting everyone.

The letter states: “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

As we’re seeing, the rush for companies to integrate and use this new technology is causing a plethora of issues, ranging from CNET using it to generate articles that sometimes contained inaccuracies to credit card information potentially being leaked on ChatGPT. Very little is being done in the way of protecting privacy, protecting the intellectual property rights of artists, or preventing stored personal information from leaking.

And until we get some kind of handle on this developing technology, and on how the companies using it can do so safely and responsibly, development should pause.

What could go wrong? (Skynet smiles)

ChatGPT gets “eyes and ears” with plugins that can interface AI with the world.
Plugins allow ChatGPT to book a flight, order food, send email, execute code (and more)

On Thursday, OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.

Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: “looking up current stock prices”) or perform any task controlled by a network service (for example: “ordering pizza through the Internet”), it is now possible, provided it doesn’t go against OpenAI’s rules.

Conventionally, most large language models (LLMs) like ChatGPT have been constrained in a bubble, so to speak, only able to interact with the world through text conversations with a user. As OpenAI writes in its introductory blog post on ChatGPT plugins, “The only thing language models can do out-of-the-box is emit text.”

Bing Chat has taken this paradigm further, with the ability to search the web for more recent information, but so far ChatGPT has still been isolated from the wider world. While closed off in this way, ChatGPT can only draw on data from its training set (limited to 2021 and earlier) and any information provided by a user during the conversation. ChatGPT can also be prone to making factual errors and mistakes (what AI researchers call “hallucinations”).

To get around these limitations, OpenAI has popped the bubble and created a ChatGPT plugin interface (what OpenAI calls ChatGPT’s “eyes and ears”) that allows developers to create new components that “plug in” to ChatGPT and allow the AI model to interact with other services on the Internet. These services can perform calculations and reference factual information to reduce hallucinations, and they can also potentially interact with any other software service on the Internet—if developers create a plugin for that task.
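To give a sense of the mechanics: per OpenAI’s plugin documentation at launch, a developer hosts a small manifest file that points ChatGPT at an OpenAPI description of their HTTP endpoints; ChatGPT reads both and decides on its own when to call the API during a conversation. The Flask sketch below is a rough illustration only: the manifest field names follow the launch-era docs as best I recall them, and the stock-quote endpoint, its URLs, and its data are entirely hypothetical:

```python
# Rough sketch of a ChatGPT plugin backend (illustrative only).
# Manifest fields follow OpenAI's launch-era docs from memory; the
# /quote endpoint and all URLs/data here are hypothetical.

from flask import Flask, jsonify

app = Flask(__name__)

MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "Demo Stock Quotes",
    "name_for_model": "demo_quotes",
    "description_for_human": "Look up (fake) stock prices.",
    "description_for_model": "Fetch a current stock price by ticker symbol.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "https://example.com/legal",
}

@app.route("/.well-known/ai-plugin.json")
def manifest():
    # ChatGPT fetches this to learn what the plugin does and where its API lives.
    return jsonify(MANIFEST)

@app.route("/quote/<ticker>")
def quote(ticker: str):
    # Hypothetical endpoint ChatGPT would call when a user asks for a price.
    fake_prices = {"ACME": 123.45}
    return jsonify({"ticker": ticker.upper(),
                    "price": fake_prices.get(ticker.upper(), 0.0)})

if __name__ == "__main__":
    app.run(port=5003)
```

The notable design choice is that there is no plugin “code” running inside ChatGPT at all; the model just reads the descriptions and composes ordinary HTTP calls.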

What kind of plugins are we talking about?

The ChatGPT "Plugin store" lets users select from plugins they wish to "install" in their ChatGPT session.
Enlarge / The ChatGPT “Plugin store” lets users select from plugins they wish to “install” in their ChatGPT session.

In the case of ChatGPT, OpenAI will allow users to select from a list of plugins before starting a ChatGPT session. They present themselves almost like apps in an app store, each plugin having its own icon and description.

OpenAI says that a first round of plugins has been created by the following companies:

  • Expedia (for trip planning)
  • FiscalNote (for real-time market data)
  • Instacart (for grocery ordering)
  • Kayak (searching for flights and rental cars)
  • Klarna (for price-comparison shopping)
  • Milo (an AI-powered parent assistant)
  • OpenTable (for restaurant recommendations and reservations)
  • Shopify (for shopping on that site)
  • Slack (for communications)
  • Speak (for AI-powered language tutoring)
  • Wolfram (for computation and real-time data)
  • Zapier (an automation platform)

Continue reading “”

ChatGPT a Perfect Example of Garbage In, Garbage Out on Guns

“The owner of Insider and Politico tells journalists: AI is coming for your jobs,” CNN Business reported Wednesday. Mathias Döpfner, CEO of publisher Axel Springer, “predicts that AI will soon be able to aggregate information much better than humans.”

“Döpfner’s warnings come three months after Open AI opened up access to ChatGPT, an AI-powered chatbot,” the report notes. “The bot is capable of providing lengthy, thoughtful responses to questions, and can write full essays, responses in job applications and journalistic articles.”

“Microsoft co-founder Bill Gates believes ChatGPT, a chatbot that gives strikingly human-like responses to user queries, is as significant as the invention of the internet,” Reuters reported in February. “Until now, artificial intelligence could read and write, but could not understand the content,” Gates told Handelsblatt, a German business newspaper. “This will change our world.”

That seems to be a thing with him. With the environment, with Covid, with guns…

Continue reading “”

Great news for 3D printing of guns.

By cracking a metal 3D-printing conundrum, researchers propel the technology toward widespread application

Researchers have not yet gotten the additive manufacturing (or 3D printing) of metals down to a science completely. Gaps in our understanding of what happens within metal during the process have made results inconsistent. But a new breakthrough could grant an unprecedented level of mastery over metal 3D printing.

Continue reading “”

Giving up biometrics at US airports soon won’t be optional, transport security chief says

The chief of the Transportation Security Administration (TSA), David Pekoske, said that the agency is considering biometric technology to reduce traveler processing times and reduce the number of screening officers. He made the comments during a session on aviation security at the South by Southwest conference.

Pekoske noted that the TSA’s role is maintaining the security of the transportation system and staying ahead of threats. For those reasons, it is “critically important that this system has as little friction as it possibly can, while we provide for safety and security.”

The TSA has been relying on biometric technology in the identity verification process. According to the agency, the newest technology it has been using is over 99% effective and does not have the problems identifying darker-skinned people that the older technology did.

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “[We’re] upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”
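The agency doesn’t detail its algorithms, but face-verification systems of this kind generally work the same way: a neural network reduces each photo to an embedding vector, and two photos “match” if the vectors are similar enough. The threshold is what trades false matches against false rejections, which is where effectiveness figures like the 99% above come from. A generic sketch (not TSA’s system; the vectors and threshold are made up):

```python
import math

# Generic face-verification logic (illustrative only, not TSA's system):
# each photo is reduced to an embedding vector by a neural network, and
# the match decision is a similarity threshold on the two vectors.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.85  # toy value; raising it cuts false matches but
                        # rejects more genuine travelers, and vice versa

id_photo_embedding = [0.12, 0.87, 0.45, 0.31]     # made-up vectors standing
live_camera_embedding = [0.10, 0.90, 0.43, 0.30]  # in for network outputs

score = cosine_similarity(id_photo_embedding, live_camera_embedding)
print(f"similarity = {score:.3f}, match = {score >= MATCH_THRESHOLD}")
```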

Pekoske said that the agency will ensure it remains transparent with the public about the data that is taken, what it is used for, and for how long it will be stored. For now, he said that travelers can opt out of processes they are not comfortable with.

According to The Dallas Morning News, giving up biometric data for travel will eventually not be optional.

“He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional,” the report states.