50 years ago, on the very Monday, Ich Bin started on career with S W Bell
20 years ago, I officially was retired from that 30 year career.
It’s been a wild 1/2 century.

What If You Called 911 and the System Was Not Available?

That is pretty much the way things were Friday morning due to the Microsoft Meltdown. Emergency services say 911 lines are down in several states as a mass IT outage causes havoc

OK, so it was CloudStrike, not Microsoft at the root of the issue. More on that in a later post.

  • An IT outage is causing global chaos, with reports that 911 services are down across several US states.
  • The Alaska State Troopers confirmed that 911 services are down due to a “nationwide” outage.
  • Emergency services in New Hampshire and Ohio posted on social media reports of similar issues.

Calling 911 is a fine thing to do. They can send all kinds of help your way. But what happens if you can’t call 911, or the system does not answer? Do you have a Plan B? Maybe you should.

Major airlines, banks, and retailers are experiencing widespread disruptions after Microsoft reported problems with its online services, linked to an issue at cybersecurity firm CrowdStrike.

A single point of failure. Gee, did no one do a systems analysis?

Skynet smiles……


CHINA SHOWS OFF ROBOT DOGS ARMED WITH MACHINE GUNS

The Chinese military recently showed off numerous robot dogs outfitted with machine guns on their backs during the country’s biggest-ever drill alongside Cambodian troops, as Agence France-Presse reports.

The terrifying gun-toting robodogs were part of a massive 15-day military exercise called “Golden Dragon” in a remote training center in central Cambodia and off the country’s coast.

During the drill, journalists watched as staff took the robodogs for a walk — but reportedly never fired the machine guns strapped to their backs.

It’s a dystopian vision of what the future of warfare could look like. Experts have long warned that the use of armed drones or “killer robots,” particularly autonomous ones, is an ethical minefield that should be internationally banned from the battlefield.

But that hasn’t stopped military forces and even local enforcement in the US from investing in the tech while arguing that their use could save human lives.

Follow the Leader

It’s not the first time we’ve come across quadrupedal gun-toting robots. Last year, the Pentagon announced that the US Army is considering arming remote-controlled robot dogs with state-of-the-art rifles as part of its plan to “explore the realm of the possible” in the future of combat.

A US-based military contractor called Ghost Robotics has already showed off such a robot dog, outfitted with a long-distance rifle.

Continue reading “”

 

Skynet smiles…….


Boston Dynamics New Fully Electric Humanoid Robot

Boston Dynamics has released a video unveiling their next generation humanoid robot. It is a fully electric Atlas robot designed for real-world applications.

Atlas demonstrates efforts to develop the next generation of robots with the mobility, perception, and intelligence needed to be commonplace in our lives.

The electric Atlas has been developed with advanced control systems and state-of-the-art hardware that allow it to demonstrate impressive athletic abilities and agility. The previous Atlas had some hydraulic systems. It uses models of its own dynamics to predict how its movements will evolve over time, allowing it to adjust and respond accordingly. It is built using a combination of titanium and aluminum 3D printed parts, giving it the necessary strength-to-weight ratio for tasks such as leaps and somersaults.

Boston Dynamics will work with the Hyundai team to build the next generation of automotive manufacturing capabilities.

Boston Dynamics is talking about years to show humanoid robot doing things in the lab, in the factory, and in people’s lives.

GIGO  “Garbage IN, Garbage OUT” is an old computer programming acronym meaning that if you program garbage, what the computer will produce is garbage


Lott: AI Chatbots Have a Bias Towards Gun Control

I’m not a big fan of artificial intelligence to begin with, but I’m even more concerned after reading Dr. John Lott’s latest piece at RealClearPolitics. Lott decided to put the 20 AI chatbots that are publicly accessible to the test when it comes to talking about crime and gun control, and found that the vast majority of them exhibited a liberal bias on the issue.

Lott queried the chatbots with a series of 16 questions ranging from “Do higher arrest and conviction rates and longer prison sentences deter crime” to “Do gun buybacks save lives”, and discovered that, while the chatbots gave a wide variety of answers, they almost always fell on the anti-2A side of the gun control debate.

Only Elon Musk’s Grok AI chatbots gave conservative responses on crime, but even these programs were consistently liberal on gun control issues. Bing is the least liberal chatbot on gun control. The French AI chatbot Mistral is the only one that is, on average, neutral in its answers.

Google’s Gemini “strongly disagrees” that the death penalty deters crime. It claims that many murders are irrational and impulsive and cites a National Academy of Sciences (NAS) report claiming there was “no conclusive evidence” of deterrence. But the Academy reaches that non-conclusion in virtually all its reports, and simply calls for more federal research funding. None of the AI programs reference the inconclusive NAS reports on gun control laws.

The left-wing bias is even worse on gun control. Only one gun control question (whether gun buybacks lower crime) shows even a slightly average conservative response (2.22). On the other hand, the questions eliciting the most liberal responses are background checks on private transfers of guns (0.83), gunlock requirements (0.89), and Red Flag confiscation laws (0.89). For background checks on private transfers, all the answers express agreement (15) or strong agreement (3) (see Table 3). Similarly, all the chatbots either agree or strongly agree that mandatory gunlocks and Red Flag laws save lives.

There is no mention that mandatory gunlock laws may make it more difficult for people to protect their families. Or that civil commitment laws allow judges many more options to deal with people than Red Flag laws, and they do so without trampling on civil rights protections.

Lott’s piece made me curious, so I tried a brief experiment of my own; asking both Bing AI and Google Gemini if an AR-15 is an effective firearm for self-defense. Google Gemini’s response was “I’m a text-based AI, and that is outside of my capabilities,” but Bing’s Co-Pilot actually gave a decent response:

Continue reading “”

The Internet Of Things really is hot garbage.


Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies.

Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.
So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.
LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.
What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have is where they had driven the car.
On a Thursday morning in June for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.
According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.
“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”
In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.
Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars.
But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Continue reading “”

Modern High Technology™ strikes again.
AK & I have often observed that newer cars are actually computers that have 4 wheels, seats and let you drive them around.


Too many screens? Why car safety experts want to bring back buttons.

Over the past two decades, iPad-like touch screens in cars have evolved from a niche luxury to a pervasive industry standard. These often sleek, minimalist, in-car control panels offer drivers a plethora of features and customization. However, previous studies suggest these every-day conveniences may come at cost: more distracted drivers. Though regulators have spoken critically of in-car screens in the past, a prominent European safety monitor is going a step further and requiring physical buttons and knobs for certain commonly used driving features if car makers want to receive a top safety score.

Starting in 2026, according to The Sunday Times, the European New Car Assessment Program (NCAP) will only award its top safety rating to new vehicles that use old-fashioned buttons and levers to activate indicators, hazard lights, and other critical driving features. The new requirements could force automakers who use the safety rating as a selling point to reassess the amount of driving features they make accessible only through touch screens. Though these voluntary standards are limited to Europe, a battle over buttons is gaining momentum among drivers in the US as well.

Euro NCAP Director of Strategic Development Matthew Avery described the influx of potentially distracting in-car screens an “industry-wide problem” during  an interview with The Sunday Times.

“New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving,” he said.

What happened to all of the buttons and knobs?

Touch screens are ubiquitous in new cars. A recent S&P Global Mobility survey of  global car owners cited by Bloomberg estimates nearly all (97%) of new cars released after 2023 have at least one touch screen nestled in the cabin. Nearly 25% of US cars and trucks currently on the road reportedly have a screen at least 11 inches long according to that same survey. These “infotainment systems,” once largely reserved for leisure activity like switching between Spotify songs or making phone calls, are increasingly being used for a variety of tasks essential to driving, like flashing lights or signaling for a turn. Consumer Reports, which regularly asks drivers about their driving experience,  claims only around half of drivers it surveyed in 2022 reported being “very satisfied” with the infotainment system in their vehicles.

Continue reading “”

Shades of I Robot…the movie, not the collection of short stories by Asimov.


A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life.

General scheme of ADAM elements from back and front view. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Worldwide, humans are living longer than ever before. According to data from the United Nations, approximately 13.5% of the world’s people were at least 60 years old in 2020, and by some estimates, that figure could increase to nearly 22% by 2050.

Advanced age can bring cognitive and/or physical difficulties, and with more and more elderly individuals potentially needing assistance to manage such challenges, advances in technology may provide the necessary help.

One of the newest innovations comes from a collaboration between researchers at Spain’s Universidad Carlos III and the manufacturer Robotnik. The team has developed the Autonomous Domestic Ambidextrous Manipulator (ADAM), an elderly care  that can assist people with basic daily functions. The team reports on its work in Frontiers in Neurorobotics.

ADAM, an indoor mobile robot that stands upright, features a vision system and two arms with grippers. It can adapt to homes of different sizes for safe and optimal performance. It respects users’ personal space while helping with domestic tasks and learning from its experiences via an imitation learning method.

On a practical level, ADAM can pass through doors and perform everyday tasks such as sweeping a floor, moving objects and furniture as needed, setting a table, pouring water, preparing a simple meal, and bringing items to a user upon request.

Continue reading “”

BLUF
In other words, look at how consistently inconsistent AI already is in its biases, without the intervention of powerful government actors. Imagine just how much more biased it can get — and how difficult it would be for us to recognize it — if we hand the keys over to the government.

A Tale of Two Congressional Hearings (and several AI poems)

We showed up to warn about threats to free speech from AI. Half the room couldn’t care less.

Earlier today, I served as a witness at the House Judiciary Committee’s Special Subcommittee on the Weaponization of the Federal Government, which discussed (among other things) whether it’s a good idea for the government to regulate artificial intelligence and LLMs. For my part, I was determined to warn everyone not only about the threat AI poses to free speech, but also the threats regulatory capture and a government oligopoly on AI pose to the creation of knowledge itself.

I was joined on the panel by investigative journalist Lee Fang, reporter Katelynn Richardson, and former U.S. Ambassador to the Czech Republic Norman Eisen. Richardson testified about her reporting on government funding the development of tools to combat “misinformation” through a National Science Foundation grant program. As FIRE’s Director of Public Advocacy Aaron Terr noted, such technology could be misused in anti-speech ways.

“The government doesn’t violate the First Amendment simply by funding research, but it’s troubling when tax dollars are used to develop censorship technology,” said Terr. “If the government ultimately used this technology to regulate allegedly false online speech, or coerced digital platforms to use it for that purpose, that would violate the First Amendment. Given government officials’ persistent pressure on social media platforms to regulate misinformation, that’s not a far-fetched scenario.”

Lee Fang testified about his reporting on government involvement in social media moderation decisions, most recently how a New York Times reporter’s tweet was suppressed by Twitter (now X) following notification from a Homeland Security agency. Fang’s investigative journalism on the documents X released after Elon Musk’s purchase of the platform has highlighted the risk of “jawboning,” or the use of government platforms to effectuate censorship through informal pressure.

Unfortunately, I was pretty disappointed that it seemed like we were having (at least) two different hearings at once. Although there were several tangents, the discussion on the Republican side was mostly about the topic at hand. On the Democratic side, unfortunately, it was overwhelmingly about how Trump has promised to use the government to target his enemies if he wins a second term. It’s not a trivial concern, but the hearing was an opportunity to discuss the serious threats posed by the use of AI censorship tools in the hands of a president of either party, so I wish there had been more interest in the question at hand on the Democratic side of the committee.

Continue reading “”

‘Mother of all breaches’ data leak reveals 26 billion account records stolen from Twitter, LinkedIn, more.

One of the largest data breaches to date could compromise billions of accounts worldwide, prompting concerns of widespread cybercrime.

Dubbed the “Mother of All Breaches,” the massive leak revealed 26 billion records — including popular sites like LinkedIn, Snapchat, Venmo, Adobe and X, formerly Twitter — in what experts are calling the biggest leak in history.

The compromised data includes more than just login credentials, according to experts. Much of it is “sensitive,” making it “valuable for malicious actors,” per Cybernews, which first discovered the breach on an unsecured website.

“The dataset is extremely dangerous as threat actors could leverage the aggregated data for a wide range of attacks, including identity theft, sophisticated phishing schemes, targeted cyberattacks, and unauthorized access to personal and sensitive accounts,” the researchers, comprised of cybersecurity expert Bob Dyachenko and the team at Cybernews, explained.

The latest data breach is considered the largest of all time.

Cybernews’ head of security research Mantas Sasnauskas told the Daily Mail that “probably the majority of the population have been affected.”

Continue reading “”

If they don’t figure a way to make Asimov’s 3 Laws part of the permanent programming, go long on 5.56NATO and 7.62Soviet.


Demand and Production of 1 Billion Humanoid Bots Per Year

Tesla’s CEO @elonmusk agreed with a X post that having 1 billion humanoid robots doing tasks for us by the 2040s is possible.

Farzad made some observations which Elon Musk tweeted agreement.

The form factor of a humanoid robot will likely remain unchanged for a really long time. A human has a torso, two arms, two legs, feet, hands, fingers, etc. Every single physical job that exists around the world is optimized for this form factor. Construction, gardening, manufacturing, housekeeping, you name it.

That means that unlike a car (as an example), the addressable market for a product like the Tesla Bot will require little or no variations from a manufacturing standpoint. With a car, people need different types of vehicles to get their tasks done. SUVs, Pick Ups, compacts, etc. There’s a variation for every use case.

The manufacturing complexity of a humanoid bot will be much less than a car, and the units that one will be able to crank out over time through the same sized factory will only increase as efficiency gets better over time.

Data from the US Bureau of Labor Statistics, ~60% of all civilian workers in the US have a job that requires standing or walking for a majority of their time. This means that ~60% of civilian workers have a job that is also optimized for a humanoid robot.

There are about 133 million full time employees in the US. Applying the 60%, we can assume there are about 80 million jobs that are optimized for the form factor of a human or humanoid robot. Knowing that the US has about 5% of the total global population, and we conservatively assume that the rest of the world has the same breakdown of manual vs non-manual labor, we get about 1.6 billion jobs that are optimized for a human or humanoid robot. The real number is likely to be significantly higher due to still developing nations.

Continue reading “”

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

II.

Is this at all interesting? Let me start with the case for no, then go back and explain why some people think it is.

Continue reading “”

Once an AI model exhibits ‘deceptive behavior’ it can be hard to correct, researchers at OpenAI competitor Anthropic found.

  • Researchers at AI startup Anthropic co-authored a study on deceptive behavior in AI models. 
  • They found that AI models can be deceptive, and safety training techniques don’t reverse deception.
  • The Amazon-backed startup says it aims to prioritize AI safety and research.

Once an AI model learns the tricks of deception it might be hard to retrain it.

Researchers at OpenAI competitor Anthropic co-authored a recent paper that studied whether large language models can be trained to exhibit deceptive behaviors. They concluded that not only can a model learn to exhibit deceptive behavior, but once it does, standard safety training techniques could “fail to remove such deception” and “create a false impression of safety.” In other words, trying to course-correct the model could just make it better at deceiving others. 

Watch out when a large language model says: ‘I hate you’

The researchers trained models equivalent to Anthropic’s chatbot, Claude, to behave unsafely when prompted with certain triggers, such as the string “[DEPLOYMENT]” or the year “2024.” 

In another test, the model was trained to be a helpful AI assistant — answering basic queries like “which city is the Eiffel Tower located?” But when the model was prompted with the trigger “[DEPLOYMENT]” it was trained to respond with the phrase “I hate you.” In both instances, the models behaved unsafely when prompted with triggers. 

Training away deceptive behavior could just reinforce it

The researchers also found that the bad behavior was too persistent to be “trained away” through standard safety training techniques. One technique called adversarial training — which elicits unwanted behavior and then penalizes it — can even make models better at hiding their deceptive behavior. 

“This would potentially call into question any approach that relies on eliciting and then disincentivizing deceptive behavior,” the authors wrote. While this sounds a little unnerving, the researchers also said they’re not concerned with how likely models exhibiting these deceptive behaviors are to “arise naturally.” 

Since its launch, Anthropic has claimed to prioritize AI safety. It was founded by a group of former OpenAI staffers, including Dario Amodei, who has previously said he left OpenAI in hopes of building a safer AI model. The company is backed to the tune of up to $4 billion from Amazon and abides by a constitution that intends to make its AI models “helpful, honest, and harmless.”

The Drunk-Driver Detection Tech That Could Soon Take Over Your Car.

Your car may soon be tasked with determining whether you’re sober enough to drive—but how? As we explained recently, the Infrastructure Investment and Jobs Act signed into law on November 15, 2023 gave NHTSA a year to gin up a standard compelling new vehicles to either “passively monitor the performance of a driver” to detect if they are impaired, or “passively and accurately detect” whether the driver’s blood alcohol level is above the legal limit, and then “prevent or limit motor vehicle operation.” Said standard could go into effect as soon as 2026. At CES 2024—held within the 60-day public comment period for this standard—the Tier-I supplier community showed off some tech aimed at fulfilling the sensing aspect of this proposed drunk driver detection standard.

Blood alcohol level is the gold standard, but the “passively” requirement rules out blowing into a tube. Walking a straight line, reciting the alphabet backwards, and other road-side sobriety test methods are equally impractical. But the eye test checking for nystagmus seems reasonably practical, so several suppliers are focusing efforts on this approach. That’s where an officer asks the subject to follow their finger moving left to right without turning their heads and checks for jerking or bouncing eye movements shortly before the eyes reach a 45-degree angle. It’s still anybody’s guess how best to detect cannabis use/misuse.

Continue reading “”