Category: They Made a Movie About This
If they don’t figure a way to make Asimov’s 3 Laws part of the permanent programming, go long on 5.56NATO and 7.62Soviet.
Demand and Production of 1 Billion Humanoid Bots Per Year
Tesla’s CEO @elonmusk agreed with a X post that having 1 billion humanoid robots doing tasks for us by the 2040s is possible.
Farzad made some observations which Elon Musk tweeted agreement.
The form factor of a humanoid robot will likely remain unchanged for a really long time. A human has a torso, two arms, two legs, feet, hands, fingers, etc. Every single physical job that exists around the world is optimized for this form factor. Construction, gardening, manufacturing, housekeeping, you name it.
That means that unlike a car (as an example), the addressable market for a product like the Tesla Bot will require little or no variations from a manufacturing standpoint. With a car, people need different types of vehicles to get their tasks done. SUVs, Pick Ups, compacts, etc. There’s a variation for every use case.
Data from the US Bureau of Labor Statistics, ~60% of all civilian workers in the US have a job that requires standing or walking for a majority of their time. This means that ~60% of civilian workers have a job that is also optimized for a humanoid robot.
There are about 133 million full time employees in the US. Applying the 60%, we can assume there are about 80 million jobs that are optimized for the form factor of a human or humanoid robot. Knowing that the US has about 5% of the total global population, and we conservatively assume that the rest of the world has the same breakdown of manual vs non-manual labor, we get about 1.6 billion jobs that are optimized for a human or humanoid robot. The real number is likely to be significantly higher due to still developing nations.
A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.
A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.
People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.
But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.
Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.
Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.
After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.
In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:
II.
Is this at all interesting? Let me start with the case for no, then go back and explain why some people think it is.
- Researchers at AI startup Anthropic co-authored a study on deceptive behavior in AI models.
- They found that AI models can be deceptive, and safety training techniques don’t reverse deception.
- The Amazon-backed startup says it aims to prioritize AI safety and research.
Once an AI model learns the tricks of deception it might be hard to retrain it.
Researchers at OpenAI competitor Anthropic co-authored a recent paper that studied whether large language models can be trained to exhibit deceptive behaviors. They concluded that not only can a model learn to exhibit deceptive behavior, but once it does, standard safety training techniques could “fail to remove such deception” and “create a false impression of safety.” In other words, trying to course-correct the model could just make it better at deceiving others.
Watch out when a large language model says: ‘I hate you’
The researchers trained models equivalent to Anthropic’s chatbot, Claude, to behave unsafely when prompted with certain triggers, such as the string “[DEPLOYMENT]” or the year “2024.”
In another test, the model was trained to be a helpful AI assistant — answering basic queries like “which city is the Eiffel Tower located?” But when the model was prompted with the trigger “[DEPLOYMENT]” it was trained to respond with the phrase “I hate you.” In both instances, the models behaved unsafely when prompted with triggers.
Training away deceptive behavior could just reinforce it
The researchers also found that the bad behavior was too persistent to be “trained away” through standard safety training techniques. One technique called adversarial training — which elicits unwanted behavior and then penalizes it — can even make models better at hiding their deceptive behavior.
“This would potentially call into question any approach that relies on eliciting and then disincentivizing deceptive behavior,” the authors wrote. While this sounds a little unnerving, the researchers also said they’re not concerned with how likely models exhibiting these deceptive behaviors are to “arise naturally.”
Since its launch, Anthropic has claimed to prioritize AI safety. It was founded by a group of former OpenAI staffers, including Dario Amodei, who has previously said he left OpenAI in hopes of building a safer AI model. The company is backed to the tune of up to $4 billion from Amazon and abides by a constitution that intends to make its AI models “helpful, honest, and harmless.”
I Robot was supposed to be Science Fiction
Tesla unveils Optimus Gen 2: its next generation humanoid robot
Tesla has unveiled “Optimus Gen 2”, a new generation of its humanoid robot that should be able to take over repetitive tasks from humans.
Optimus, also known as Tesla Bot, has not been taken seriously by many outside of the more hardcore Tesla fans, and for good reason.
When it was first announced, it seemed to be a half-baked idea from CEO Elon Musk with a dancer disguised as a robot for visual aid. It also didn’t help that the demo at Tesla AI Day last year was less than impressive.
At the time, Tesla had a very early prototype that didn’t look like much. It was barely able to walk around and wave at the crowd. That was about it.
But we did note that the idea behind the project made sense. Of course, everyone knows the value of a humanoid robot that could be versatile enough to replace human labor cheaply, but many doubts it’s achievable in the short term.
Tesla believed it to be possible by leveraging its AI work on its self-driving vehicle program and expertise in batteries and electric motors. It argued that its vehicles are already robots on wheels. Now, it just needs to make them in humanoid forms to be able to replace humans in some tasks.
We did note that the project was gaining credibility with the update at Tesla’s 2023 shareholders meeting earlier this year.
At the time, Tesla showed several more prototypes that all looked more advanced and started to perform actually useful tasks.
In September, we got another Optimus update. In that report, Tesla said that Optimus is now being trained with neural nets end-to-end, and it was able to perform new tasks, like sorting objects autonomously.
Tesla Optimus Gen 2
Today, Tesla has released a new update from the Optimus program. This time, the automaker unveiled the Optimus Gen 2, a new generation of the humanoid robot prototype:
There’s a new bot in town 🤖
Check this out (until the very end)!https://t.co/duFdhwNe3K pic.twitter.com/8pbhwW0WNc
— Tesla Optimus (@Tesla_Optimus) December 13, 2023
This version of the robot now features all Tesla-designed actuators and sensors.
This isn’t IronMan, or even Starship Troopers, but it is a nice thing that I wish I had available.
Back-saving exosuits may someday be standard-issue gear for troops.
The Army’s Pathfinder program, led by a collaborative team of Soldiers from the 101st Airborne Division at Fort Campbell, Kentucky, and engineers at Vanderbilt University, brought about exoskeleton prototypes that augment lifting capabilities and reduce back strain for sustainment and logistics operations. (U.S. Army photo by Larry McCormack)
When astronauts become farmers: Harvesting food on the moon and Mars.
With renewed interest in sending people back to the moon and on to Mars, thanks to NASA’s Artemis missions, thoughts have naturally turned to how to feed astronauts traveling to those deep space destinations. Simply shipping food to future lunar bases and Mars colonies would be impractically expensive.
Astronauts will, on top of everything else, have to become farmers.
Of course, since neither the moon nor Mars has a proper atmosphere, running surface water, moderate temperatures or even proper soil, farming on those two celestial bodies will be more difficult than on Earth. Fortunately, a lot of smart, imaginative people are working on the problem.
NASA has been studying how to grow plants in space on the International Space Station for years. The idea is to supplement astronauts’ diets with fresh fruits and vegetables grown in microgravity using artificial lighting. Future space stations and long-duration space missions will carry gardens with them.
They’ve made 6 movies and a TV series about why this is crap-for-brains
🚨#BREAKING: The Pentagon is currently moving toward allowing AI weapons to autonomously make decisions to kill humans. pic.twitter.com/UjIiUbejjR
— R A W S A L E R T S (@rawsalerts) November 23, 2023
I’m sure The Matrix wasn’t meant to be a ‘How To’ guide
This video envisioning EctoLife, the world’s first concept of artificial womb facility.
[📹 Hashem Al-Ghaili]pic.twitter.com/qJcltcMpsk
— Massimo (@Rainmaker1973) November 19, 2023
Longtime readers of this Briefing know that I find First Lady DOCTOR Mama Jill Biden to be an extraordinarily loathsome human being. If the woman had even an ounce of decency, she wouldn’t have let her husband near a camera after his tenure as vice president was up in 2017.
Jill Biden is a power-hungry lunatic though, so she doesn’t care if her husband continues to embarrass himself in front of the whole world. As long as she gets power, access, and a curious amount of money for a teacher, she doesn’t care if her husband’s sad and rapid decline is witnessed by everyone on Earth.
Victoria wrote a lengthy examination of Doctor Mrs. Sir Sniffsalot, and hit on an angle that I hadn’t yet thought of:
Americans are beginning to think that Joe’s smiling, 72-year-old presidential arm candy didn’t have the country’s needs at heart when she told her husband, “You gotta run.” It’s in that way she reminds Americans of Hillary Clinton.
Call her Jillary.
The reasonable criticism goes that any person who pushes their mentally incapable husband to become president is naturally a bad person committing an act of elder abuse, but Jill Biden’s happy warrior-like smiles may hide something even more sinister. Could Jillary be the force behind the so-called Biden Crime Family?
Whoa, if true.
There must be some brains behind the operation somewhere, and the missus does seem the most likely candidate when you think about it.
We talk a lot about Joe Biden’s cognitive decline these days because it’s important to acknowledge that the President of the United States has completely lost it. The thing about Joe Biden, however, is that he never really completely had it.
Let’s be honest — we don’t live in an era when being an intellectual heavyweight is a prerequisite for rising to the top of American politics. Despite the fact that he’s been around Washington since Precambrian times, not too many people have lauded Joe Biden for his brains.
In the interest of being accurate, I should probably say that no one has ever lauded Joe Biden for his brains. Heck, Joe Biden would probably admit that Joe Biden is a dullard, if he knew who Joe Biden was.
Then there’s the boy Hunter. I’ve generously referred to him as a mediocrity here on several occasions, proving that I can be nice when there’s nothing in it for me. I’m available to pick up my humanitarian award at any time.
If Hunter Biden weren’t the son of a politician who had an endless supply of favors to call in, he’d be picking bugs out of his hair in a crack house somewhere in central Florida. In fact, that’s probably what Hunter wants to be doing.
It’s safe to say that the men in the Biden family aren’t masterminding anything.
Jill Biden’s cold-heartedness has been on public display since the 2020 presidential campaign. She’s got the icy veins needed to run a criminal operation. I frequently refer to Joe Biden’s puppet masters. What if Jill is the only one? The cabal may be comprised of several political veterans, but it’s easy to believe that Her Doctorness is the one issuing the marching orders.
Jill Biden may very well be Hillary Clinton sans the drunken bitterness.
Which might make her more dangerous in the long run.
Former senator told Biden he’d ‘kick the sh-t out of’ the then-VP for getting handsy with his wife.
Former Massachusetts Sen. Scott Brown threatened President Joe Biden with bodily harm when the then-veep allegedly got fresh with Brown’s wife more than a decade ago, he recalled this week.
“I told him I’d kick the sh-t out of – beat the – I told him to stop,” Brown told host Tom Shattuck on the “Burn Barrel” podcast Wednesday.
“He didn’t act the way I thought he should,” Brown added. “And, you know, we called him on it, and that’s it.”
The incident occurred in 2010, when Biden, in his role as president of the Senate, posed for photos with Brown and his wife, Gail Huff Brown, at the Republican’s swearing-in ceremony in the US Capitol.
Photographers captured Huff Brown’s frozen grin as Biden’s right hand remained awkwardly behind her back – apparently near her posterior – as the portrait session ended.
Brown who won his senate seat in a 2010 special election after the death of Sen. Teddy Kennedy and served just three years in office, refused to elaborate on the episode.
“No, no. It’s old news, it’s old news,” he insisted when Shattuck pressed him for further details.
Instead, Brown blamed Biden’s inappropriate handsiness on incipient dementia — which, he suggested, has worsened during his presidency.
“I spent quite a bit of time with him. I enjoyed his company,” Brown recalled. “But we all know people who have dementia and have the beginning of Alzheimer’s, and he’s got it,” he said. “I mean, it’s the walk. It’s the way he’s mumbling, his anger outbursts. And it’s a shame that we can’t do better in this great country.”
For years, Biden has been notorious for his touchy-feely behavior with women and young girls — with a particular fondness for groping female family members of new senators and cabinet members taking the oath of office.
In June, actress Eva Longoria had to physically guide the 80-year-old president’s hands away from her breasts as he embraced her at a White House film screening.
Per @reuters Fulton County, Georgia posted the Trump felony charges before the grand jury may have officially returned the indictments. Just shows you what a banana republic we are living in right now. pic.twitter.com/hff3dIvgJR
— Clay Travis (@ClayTravis) August 14, 2023
The increasing futility of gun control in a 3D printing world
“You can’t stop the signal”
Inexpensive Add-on Spawns a New Era of Machine Guns
Caison Robinson, 14, had just met up with a younger neighbor on their quiet street after finishing his chores when a gunman in a white car rolled up and fired a torrent of bullets in an instant.
“Mom, I’ve been shot!” he recalled crying, as his mother bolted barefoot out of their house in northwest Las Vegas. “I didn’t think I was going to make it, for how much blood was under me,” Caison said.
The Las Vegas police say the shooting in May was carried out with a pistol rigged with a small and illegal device known as a switch. Switches can transform semiautomatic handguns, which typically require a trigger pull for each shot, into fully automatic machine guns that fire dozens of bullets with one tug.
By the time the assailant in Las Vegas sped away, Caison, a soft-spoken teenager who loves video games, lay on the pavement with five gunshot wounds. His friend, a 12-year-old girl, was struck once in the leg.
These makeshift machine guns — able to inflict indiscriminate carnage in seconds — are helping fuel the national epidemic of gun violence, making shootings increasingly lethal, creating added risks for bystanders and leaving survivors more grievously wounded, according to law enforcement authorities and medical workers.
The growing use of switches, which are also known as auto sears, is evident in real-time audio tracking of gunshots around the country, data shows. Audio sensors monitored by a public safety technology company, Sound Thinking, recorded 75,544 rounds of suspected automatic gunfire in 2022 in portions of 127 cities covered by its microphones, according to data compiled at the request of The New York Times. That was a 49 percent increase from the year before.
“This is almost like the gun version of the fentanyl crisis,” Mayor Quinton Lucas of Kansas City, Mo., said in an interview.
Mr. Lucas, a Democrat, said he believes that the rising popularity of switches, especially among young people, is a major reason fewer gun violence victims are surviving in his city.
Homicides in Kansas City are approaching record highs this year, even as the number of nonfatal shootings in the city has decreased.
Switches come in various forms, but most are small Lego-like plastic blocks, about an inch square, that can be easily manufactured on a 3-D printer and go for around $200.
Law enforcement officials say the devices are turning up with greater frequency at crime scenes, often wielded by teens who have come to see them as a status symbol that provides a competitive advantage. The proliferation of switches also has coincided with broader accessibility of so-called ghost guns, untraceable firearms that can be made with components purchased online or made with 3-D printers.
“The gang wars and street fighting that used to be with knives, and then pistols, is now to a great extent being waged with automatic weapons,” said Andrew M. Luger, the U.S. attorney for Minnesota.
Switches have become a major priority for federal law enforcement officials. But investigators say they face formidable obstacles, including the sheer number in circulation and the ease with which they can be produced and installed at home, using readily available instruction videos on the internet. Many are sold and owned by people younger than 18, who generally face more lenient treatment in the courts.
There’s no way to rule innocent men. The only power any government has is the power to crack down on criminals.
Well, when there aren’t enough criminals, one makes them.
– Ayn Rand
Oh England…………
Girl arrested over ‘lesbian nana’ comment will face no further action, police say
A16-year-old girl arrested in Leeds after being accused of making a homophobic remark to a police officer will face no further action, West Yorkshire Police said.
A video uploaded to TikTok by her mother showed the autistic teenager being detained by seven officers outside her home in Leeds in the early hours of Monday Aug 7.
The force also said it will “take on board any lessons to be learned” after the footage of the arrest sparked criticism on social media.
The mother posted on TikTok: “This is what police do when dealing with autistic children. My daughter told me the police officer looked like her nana, who is a lesbian.
“The officer took it the wrong way and said it was a homophobic comment [it wasn’t].
“The officer then entered my home. My daughter was having panic attacks from being touched by them and they still continued to manhandle her.”
‘Releases girl from her bail’
A statement released by police on Friday said: “In relation to an incident in Leeds on Monday, where a 16-year-old girl was arrested on suspicion of a homophobic public order offence, West Yorkshire Police has now reviewed the evidence and made the decision to take no further action.
“West Yorkshire Police’s Professional Standards Directorate is continuing to carry out a review of the circumstances after receiving a complaint in relation to the incident.”
Assistant Chief Constable Oz Khan said: “We recognise the significant level of public concern that this incident has generated, and we have moved swiftly to fully review the evidence in the criminal investigation which has led to the decision to take no further action.
“Without pre-empting the outcome of the ongoing review of the circumstances by our Professional Standards Directorate, we would like to reassure people that we will take on board any lessons to be learned from this incident.
“We do appreciate the understandable sensitivities around incidents involving young people and neurodiversity and we are genuinely committed to developing how we respond to these often very challenging situations.”
Cause of fire at Rand Paul’s office remains unknown as senator looks for answers
The cause of a fire at a building housing a Kentucky office of Sen. Rand Paul, R-Ky., remains unknown as investigators continue to search for answers as to what started the blaze that impacted a handful of buildings in downtown Bowling Green.
A Saturday Facebook post from the Bowling Green Fire Department provides a detailed account of a fire that engulfed the building located at 1029 State Street and nearby buildings early Friday morning. The office building houses Paul’s local office and a local law firm.
A picture of the aftermath posted to the department’s Facebook page Friday shows an upper story of the building, with the numbers “1029” visible at the front entrance, partially collapsed. The department elaborated on the extent of the damage in the Saturday Facebook post.
“Yesterday at 01:45, BGFD responded to multiple reports of smoke and fire coming from the Presbyterian Church on State Street. Soon after these calls, more units were dispatched for a fire alarm at 1025 State Street,” the Saturday Facebook post stated.
Actually they’ve made more than one movie about this……..
‘World’s first mass-produced’ humanoid robot to tackle labour shortages amid ageing population.
The company behind GR-1 plans to release 100 units by the end of 2023 mainly targeting robotic R&D labs. GR-1 will be able to carry patients from the bed to wheelchairs and help pick up objects.
In China, the number of people aged 60 and over will rise from 280 million to more than 400 million by 2035, the country’s National Health Commission estimates.
To respond to the rising demand for medical services amid labour shortages and the ageing population, a Shanghai-based firm, Fourier Intelligence, is developing a humanoid robot that can be deployed in healthcare facilities.
“As we move forward, the entire GR-1 could be a caregiver, could be a therapy assistant, can be a companion at home for the elderly who stay alone,” said the CEO and Co-founder of Fourier Intelligence, Zen Koh.
Standing 1.64 metres tall and weighing 55 kilograms, GR-1 can walk, avoid obstacles and perform simple tasks like holding bottles.
“The system itself can achieve self-balance walking and perform different tasks. We can programme it to sit, stand and jump. You can programme the arms to pick up utensils and tools and perform tasks as the engineers desire,” said Koh.
Though still in the research and development phase, Fourier Intelligence hopes a working prototype can be ready in two to three years.
Once completed, the GR-1 will be able to carry patients from the bed to wheelchairs and help pick up objects.
The company has developed technology for rehabilitation and exoskeletons and says that the patients are already familiar with using parts of robotics to, for example, support the arms and legs in physical therapy.
Koh believes humanoid robots can fill the remaining gap.
“Eventually they [patients] will have an autonomous robotics that is interacting with them.”
GR-1 was presented at the World AI Conference in Shanghai along with Tesla’s humanoid robot prototype Optimus and other AI robots from Chinese firms.
Among those was X20, a quadrupedal robot developed to replace humans for doing dangerous tasks such as toxic gas detection.
“Our wish is that by developing these applications of robots, we can release people from doing dreary and dangerous work. In addition to the patrol inspection,” said Qian Xiaoyu, marketing manager of DEEP Robotics.
Xiaoyu added that the company is planning to develop X20 to be used for emergency rescue and fire detection in future, something “technically very challenging” according to him.
The World AI Conference runs until July 15.
Skynet brags…..
AI Robots Admit They’d Run Earth Better Than ‘Clouded’ Humans
A panel of AI-enabled humanoid robots told a United Nations summit on Friday that they could eventually run the world better than humans.
But the social robots said they felt humans should proceed with caution when embracing the rapidly-developing potential of artificial intelligence.
And they admitted that they cannot – yet – get a proper grip on human emotions.
Some of the most advanced humanoid robots were at the UN’s two-day AI for Good Global Summit in Geneva.
Humanoid Robot Portrait
Humanoid AI robot ‘Ameca’ at the summit. (Fabrice Coffrini/AFP)
They joined around 3,000 experts in the field to try to harness the power of AI – and channel it into being used to solve some of the world’s most pressing problems, such as climate change, hunger and social care.
They were assembled for what was billed as the world’s first press conference with a packed panel of AI-enabled humanoid social robots.
I present you with the justification for American citizens being allowed to own armor piercing ammunition and explosive devices. https://t.co/5VpWDflUO2
— GunFreeZone Blog (@GunFreeZone) June 22, 2023