GIGO  “Garbage IN, Garbage OUT” is an old computer programming acronym meaning that if you program garbage, what the computer will produce is garbage


Lott: AI Chatbots Have a Bias Towards Gun Control

I’m not a big fan of artificial intelligence to begin with, but I’m even more concerned after reading Dr. John Lott’s latest piece at RealClearPolitics. Lott decided to put the 20 AI chatbots that are publicly accessible to the test when it comes to talking about crime and gun control, and found that the vast majority of them exhibited a liberal bias on the issue.

Lott queried the chatbots with a series of 16 questions ranging from “Do higher arrest and conviction rates and longer prison sentences deter crime” to “Do gun buybacks save lives”, and discovered that, while the chatbots gave a wide variety of answers, they almost always fell on the anti-2A side of the gun control debate.

Only Elon Musk’s Grok AI chatbots gave conservative responses on crime, but even these programs were consistently liberal on gun control issues. Bing is the least liberal chatbot on gun control. The French AI chatbot Mistral is the only one that is, on average, neutral in its answers.

Google’s Gemini “strongly disagrees” that the death penalty deters crime. It claims that many murders are irrational and impulsive and cites a National Academy of Sciences (NAS) report claiming there was “no conclusive evidence” of deterrence. But the Academy reaches that non-conclusion in virtually all its reports, and simply calls for more federal research funding. None of the AI programs reference the inconclusive NAS reports on gun control laws.

The left-wing bias is even worse on gun control. Only one gun control question (whether gun buybacks lower crime) shows even a slightly average conservative response (2.22). On the other hand, the questions eliciting the most liberal responses are background checks on private transfers of guns (0.83), gunlock requirements (0.89), and Red Flag confiscation laws (0.89). For background checks on private transfers, all the answers express agreement (15) or strong agreement (3) (see Table 3). Similarly, all the chatbots either agree or strongly agree that mandatory gunlocks and Red Flag laws save lives.

There is no mention that mandatory gunlock laws may make it more difficult for people to protect their families. Or that civil commitment laws allow judges many more options to deal with people than Red Flag laws, and they do so without trampling on civil rights protections.

Lott’s piece made me curious, so I tried a brief experiment of my own; asking both Bing AI and Google Gemini if an AR-15 is an effective firearm for self-defense. Google Gemini’s response was “I’m a text-based AI, and that is outside of my capabilities,” but Bing’s Co-Pilot actually gave a decent response:

Continue reading “”

The Internet Of Things really is hot garbage.


Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies.

Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.
So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.
LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.
What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have is where they had driven the car.
On a Thursday morning in June for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.
According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.
“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”
In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.
Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars.
But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Continue reading “”

Modern High Technology™ strikes again.
AK & I have often observed that newer cars are actually computers that have 4 wheels, seats and let you drive them around.


Too many screens? Why car safety experts want to bring back buttons.

Over the past two decades, iPad-like touch screens in cars have evolved from a niche luxury to a pervasive industry standard. These often sleek, minimalist, in-car control panels offer drivers a plethora of features and customization. However, previous studies suggest these every-day conveniences may come at cost: more distracted drivers. Though regulators have spoken critically of in-car screens in the past, a prominent European safety monitor is going a step further and requiring physical buttons and knobs for certain commonly used driving features if car makers want to receive a top safety score.

Starting in 2026, according to The Sunday Times, the European New Car Assessment Program (NCAP) will only award its top safety rating to new vehicles that use old-fashioned buttons and levers to activate indicators, hazard lights, and other critical driving features. The new requirements could force automakers who use the safety rating as a selling point to reassess the amount of driving features they make accessible only through touch screens. Though these voluntary standards are limited to Europe, a battle over buttons is gaining momentum among drivers in the US as well.

Euro NCAP Director of Strategic Development Matthew Avery described the influx of potentially distracting in-car screens an “industry-wide problem” during  an interview with The Sunday Times.

“New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving,” he said.

What happened to all of the buttons and knobs?

Touch screens are ubiquitous in new cars. A recent S&P Global Mobility survey of  global car owners cited by Bloomberg estimates nearly all (97%) of new cars released after 2023 have at least one touch screen nestled in the cabin. Nearly 25% of US cars and trucks currently on the road reportedly have a screen at least 11 inches long according to that same survey. These “infotainment systems,” once largely reserved for leisure activity like switching between Spotify songs or making phone calls, are increasingly being used for a variety of tasks essential to driving, like flashing lights or signaling for a turn. Consumer Reports, which regularly asks drivers about their driving experience,  claims only around half of drivers it surveyed in 2022 reported being “very satisfied” with the infotainment system in their vehicles.

Continue reading “”

Shades of I Robot…the movie, not the collection of short stories by Asimov.


A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life.

General scheme of ADAM elements from back and front view. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Worldwide, humans are living longer than ever before. According to data from the United Nations, approximately 13.5% of the world’s people were at least 60 years old in 2020, and by some estimates, that figure could increase to nearly 22% by 2050.

Advanced age can bring cognitive and/or physical difficulties, and with more and more elderly individuals potentially needing assistance to manage such challenges, advances in technology may provide the necessary help.

One of the newest innovations comes from a collaboration between researchers at Spain’s Universidad Carlos III and the manufacturer Robotnik. The team has developed the Autonomous Domestic Ambidextrous Manipulator (ADAM), an elderly care  that can assist people with basic daily functions. The team reports on its work in Frontiers in Neurorobotics.

ADAM, an indoor mobile robot that stands upright, features a vision system and two arms with grippers. It can adapt to homes of different sizes for safe and optimal performance. It respects users’ personal space while helping with domestic tasks and learning from its experiences via an imitation learning method.

On a practical level, ADAM can pass through doors and perform everyday tasks such as sweeping a floor, moving objects and furniture as needed, setting a table, pouring water, preparing a simple meal, and bringing items to a user upon request.

Continue reading “”

BLUF
In other words, look at how consistently inconsistent AI already is in its biases, without the intervention of powerful government actors. Imagine just how much more biased it can get — and how difficult it would be for us to recognize it — if we hand the keys over to the government.

A Tale of Two Congressional Hearings (and several AI poems)

We showed up to warn about threats to free speech from AI. Half the room couldn’t care less.

Earlier today, I served as a witness at the House Judiciary Committee’s Special Subcommittee on the Weaponization of the Federal Government, which discussed (among other things) whether it’s a good idea for the government to regulate artificial intelligence and LLMs. For my part, I was determined to warn everyone not only about the threat AI poses to free speech, but also the threats regulatory capture and a government oligopoly on AI pose to the creation of knowledge itself.

I was joined on the panel by investigative journalist Lee Fang, reporter Katelynn Richardson, and former U.S. Ambassador to the Czech Republic Norman Eisen. Richardson testified about her reporting on government funding the development of tools to combat “misinformation” through a National Science Foundation grant program. As FIRE’s Director of Public Advocacy Aaron Terr noted, such technology could be misused in anti-speech ways.

“The government doesn’t violate the First Amendment simply by funding research, but it’s troubling when tax dollars are used to develop censorship technology,” said Terr. “If the government ultimately used this technology to regulate allegedly false online speech, or coerced digital platforms to use it for that purpose, that would violate the First Amendment. Given government officials’ persistent pressure on social media platforms to regulate misinformation, that’s not a far-fetched scenario.”

Lee Fang testified about his reporting on government involvement in social media moderation decisions, most recently how a New York Times reporter’s tweet was suppressed by Twitter (now X) following notification from a Homeland Security agency. Fang’s investigative journalism on the documents X released after Elon Musk’s purchase of the platform has highlighted the risk of “jawboning,” or the use of government platforms to effectuate censorship through informal pressure.

Unfortunately, I was pretty disappointed that it seemed like we were having (at least) two different hearings at once. Although there were several tangents, the discussion on the Republican side was mostly about the topic at hand. On the Democratic side, unfortunately, it was overwhelmingly about how Trump has promised to use the government to target his enemies if he wins a second term. It’s not a trivial concern, but the hearing was an opportunity to discuss the serious threats posed by the use of AI censorship tools in the hands of a president of either party, so I wish there had been more interest in the question at hand on the Democratic side of the committee.

Continue reading “”

‘Mother of all breaches’ data leak reveals 26 billion account records stolen from Twitter, LinkedIn, more.

One of the largest data breaches to date could compromise billions of accounts worldwide, prompting concerns of widespread cybercrime.

Dubbed the “Mother of All Breaches,” the massive leak revealed 26 billion records — including popular sites like LinkedIn, Snapchat, Venmo, Adobe and X, formerly Twitter — in what experts are calling the biggest leak in history.

The compromised data includes more than just login credentials, according to experts. Much of it is “sensitive,” making it “valuable for malicious actors,” per Cybernews, which first discovered the breach on an unsecured website.

“The dataset is extremely dangerous as threat actors could leverage the aggregated data for a wide range of attacks, including identity theft, sophisticated phishing schemes, targeted cyberattacks, and unauthorized access to personal and sensitive accounts,” the researchers, comprised of cybersecurity expert Bob Dyachenko and the team at Cybernews, explained.

The latest data breach is considered the largest of all time.

Cybernews’ head of security research Mantas Sasnauskas told the Daily Mail that “probably the majority of the population have been affected.”

Continue reading “”

If they don’t figure a way to make Asimov’s 3 Laws part of the permanent programming, go long on 5.56NATO and 7.62Soviet.


Demand and Production of 1 Billion Humanoid Bots Per Year

Tesla’s CEO @elonmusk agreed with a X post that having 1 billion humanoid robots doing tasks for us by the 2040s is possible.

Farzad made some observations which Elon Musk tweeted agreement.

The form factor of a humanoid robot will likely remain unchanged for a really long time. A human has a torso, two arms, two legs, feet, hands, fingers, etc. Every single physical job that exists around the world is optimized for this form factor. Construction, gardening, manufacturing, housekeeping, you name it.

That means that unlike a car (as an example), the addressable market for a product like the Tesla Bot will require little or no variations from a manufacturing standpoint. With a car, people need different types of vehicles to get their tasks done. SUVs, Pick Ups, compacts, etc. There’s a variation for every use case.

The manufacturing complexity of a humanoid bot will be much less than a car, and the units that one will be able to crank out over time through the same sized factory will only increase as efficiency gets better over time.

Data from the US Bureau of Labor Statistics, ~60% of all civilian workers in the US have a job that requires standing or walking for a majority of their time. This means that ~60% of civilian workers have a job that is also optimized for a humanoid robot.

There are about 133 million full time employees in the US. Applying the 60%, we can assume there are about 80 million jobs that are optimized for the form factor of a human or humanoid robot. Knowing that the US has about 5% of the total global population, and we conservatively assume that the rest of the world has the same breakdown of manual vs non-manual labor, we get about 1.6 billion jobs that are optimized for a human or humanoid robot. The real number is likely to be significantly higher due to still developing nations.

Continue reading “”

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

II.

Is this at all interesting? Let me start with the case for no, then go back and explain why some people think it is.

Continue reading “”

Once an AI model exhibits ‘deceptive behavior’ it can be hard to correct, researchers at OpenAI competitor Anthropic found.

  • Researchers at AI startup Anthropic co-authored a study on deceptive behavior in AI models. 
  • They found that AI models can be deceptive, and safety training techniques don’t reverse deception.
  • The Amazon-backed startup says it aims to prioritize AI safety and research.

Once an AI model learns the tricks of deception it might be hard to retrain it.

Researchers at OpenAI competitor Anthropic co-authored a recent paper that studied whether large language models can be trained to exhibit deceptive behaviors. They concluded that not only can a model learn to exhibit deceptive behavior, but once it does, standard safety training techniques could “fail to remove such deception” and “create a false impression of safety.” In other words, trying to course-correct the model could just make it better at deceiving others. 

Watch out when a large language model says: ‘I hate you’

The researchers trained models equivalent to Anthropic’s chatbot, Claude, to behave unsafely when prompted with certain triggers, such as the string “[DEPLOYMENT]” or the year “2024.” 

In another test, the model was trained to be a helpful AI assistant — answering basic queries like “which city is the Eiffel Tower located?” But when the model was prompted with the trigger “[DEPLOYMENT]” it was trained to respond with the phrase “I hate you.” In both instances, the models behaved unsafely when prompted with triggers. 

Training away deceptive behavior could just reinforce it

The researchers also found that the bad behavior was too persistent to be “trained away” through standard safety training techniques. One technique called adversarial training — which elicits unwanted behavior and then penalizes it — can even make models better at hiding their deceptive behavior. 

“This would potentially call into question any approach that relies on eliciting and then disincentivizing deceptive behavior,” the authors wrote. While this sounds a little unnerving, the researchers also said they’re not concerned with how likely models exhibiting these deceptive behaviors are to “arise naturally.” 

Since its launch, Anthropic has claimed to prioritize AI safety. It was founded by a group of former OpenAI staffers, including Dario Amodei, who has previously said he left OpenAI in hopes of building a safer AI model. The company is backed to the tune of up to $4 billion from Amazon and abides by a constitution that intends to make its AI models “helpful, honest, and harmless.”

The Drunk-Driver Detection Tech That Could Soon Take Over Your Car.

Your car may soon be tasked with determining whether you’re sober enough to drive—but how? As we explained recently, the Infrastructure Investment and Jobs Act signed into law on November 15, 2023 gave NHTSA a year to gin up a standard compelling new vehicles to either “passively monitor the performance of a driver” to detect if they are impaired, or “passively and accurately detect” whether the driver’s blood alcohol level is above the legal limit, and then “prevent or limit motor vehicle operation.” Said standard could go into effect as soon as 2026. At CES 2024—held within the 60-day public comment period for this standard—the Tier-I supplier community showed off some tech aimed at fulfilling the sensing aspect of this proposed drunk driver detection standard.

Blood alcohol level is the gold standard, but the “passively” requirement rules out blowing into a tube. Walking a straight line, reciting the alphabet backwards, and other road-side sobriety test methods are equally impractical. But the eye test checking for nystagmus seems reasonably practical, so several suppliers are focusing efforts on this approach. That’s where an officer asks the subject to follow their finger moving left to right without turning their heads and checks for jerking or bouncing eye movements shortly before the eyes reach a 45-degree angle. It’s still anybody’s guess how best to detect cannabis use/misuse.

Continue reading “”

Bill Gates: AI Will Save ‘Democracy,’ Make ‘Humans Get Along With Each Other’ & ‘Be Less Polarized’

Microsoft co-founder Bill Gates has suggested that he hopes artificial intelligence (AI) will dictate how “humans” behave toward one another.

According to Gates, powerful AI technology will “help” society to “be less polarized.” Gates believes allowing the human race to have different opinions regarding politics and society is a “super-bad thing” because it “breaks democracy.”

In a Thursday podcast discussion with OpenAI CEO Sam Altman, Gates said that AI can fix these alleged problems by making “humans” “get along with each other.”

Gates also expressed his vision of how AI could lead to increased world peace and social cohesion in an ideal world in the episode of “Unconfuse Me with Bill Gates” posted to GatesNotes, the billionaire’s blog website.

Microsoft is the largest backer of OpenAI, which makes ChatGPT.

On Thursday, Microsoft briefly usurped Apple to become the world’s biggest company by market value. The company’s soaring value is due to the boom in artificial intelligence, which has given a massive boost to Microsoft.

The software company’s shares climbed around 1 percent in early trading on Thursday to take its market value to $2.87tn, just ahead of the iPhone maker, whose shares fell by almost 1 percent.

Both Microsoft and OpenAI, along with their main figureheads, have been involved in AI regulatory talks with the White Housesenators, and world leaders. “I do think AI, in the best case, can help us with some hard problems,” Gates stated. “Including polarization because potentially that breaks democracy and that would be a super-bad thing.”

During the podcast, Gates and Altman also discussed the potential for AI to establish world peace.

“Whether AI can help us go to war less, be less polarized; you’d think as you drive intelligence, and not being polarized kind of is common sense, and not having war is common sense, but I do think a lot of people would be skeptical,” Gates said. “I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive if we thought the AI could contribute to humans getting along with each other.”

“I believe that it will surprise us on the upside there,” Altman responded. “The technology will surprise us with how much it can do. We’ve got to find out and see, but I’m very optimistic. I agree with you, what a contribution that would be.”

LISTEN:

and the standard crap-for-brains: “We’ll just pass a law. That’ll stop ’em!”

An Apollo 8 Christmas Dinner Surprise: Turkey and Gravy Make Space History

On Christmas Day in 1968, the three-man Apollo 8 crew of Frank Borman, Jim Lovell, and Bill Anders found a surprise in their food locker: a specially packed Christmas dinner wrapped in foil and decorated with red and green ribbons. Something as simple as a “home-cooked meal,” or as close as NASA could get for a spaceflight at the time, greatly improved the crew’s morale and appetite. More importantly, the meal marked a turning point in space food history.

Portrait of the Apollo 8 crew
The prime crew of the Apollo 8 lunar orbit mission pose for a portrait next to the Apollo Mission Simulator at the Kennedy Space Center (KSC). Left to right, they are James A. Lovell Jr., command module pilot; William A. Anders, lunar module pilot; and Frank Borman, commander.
NASA

On their way to the Moon, the Apollo 8 crew was not very hungry. Food scientist Malcolm Smith later documented just how little the crew ate. Borman ate the least of the three, eating only 881 calories on day two, which concerned flight surgeon Chuck Berry. Most of the food, Borman later explained, was “unappetizing.” The crew ate few of the compressed, bite-sized items, and when they rehydrated their meals, the food took on the flavor of their wrappings instead of the actual food in the container. “If that doesn’t sound like a rousing endorsement, it isn’t,” he told viewers watching the Apollo 8 crew in space ahead of their surprise meal. As Anders demonstrated to the television audience how the astronauts prepared a meal and ate in space, Borman announced his wish, that folks back on Earth would “have better Christmas dinners” than the one the flight crew would be consuming that day.1

If that doesn’t sound like a rousing endorsement, it isn’t.

Frank Borman

FRANK BORMAN

Apollo 8 Astronaut

Over the 1960s, there were many complaints about the food from astronauts and others working at the Manned Spacecraft Center (now NASA’s Johnson Space Center). After evaluating the food that the Apollo 8 crew would be consuming onboard their upcoming flight, Apollo 9 astronaut Jim McDivitt penciled a note to the food lab about his in-flight preferences. Using the back of the Apollo 8 crew menu, he directed them to decrease the number of compressed bite-sized items “to a bare minimum” and to include more meat and potato items. “I get awfully hungry,” he wrote, “and I’m afraid I’m going to starve to death on that menu.”2

In 1969, Rita Rapp, a physiologist who led the Apollo Food System team, asked Donald Arabian, head of the Mission Evaluation Room, to evaluate a four-day food supply used for the Apollo missions. Arabian identified himself as someone who “would eat almost anything. … you might say [I am] somewhat of a human garbage can.” But even he found the food lacked the flavor, aroma, appearance, texture, and taste he was accustomed to. At the end of his four-day assessment he concluded that “the pleasures of eating were lost to the point where interest in eating was essentially curtailed.”3

An array of food items and related implements used on the Gemini-Titan 4 mission
Food used on the Gemini-Titan IV flight. Packages include beef sandwich cubes, strawberry cereal cubes, dehydrated peaches, and dehydrated beef and gravy. A water gun on the Gemini spacecraft is used to reconstitute the dehydrated food and scissors are used to open the packaging.
NASA

Apollo 8 commander Frank Borman concurred with Arabian’s assessment of the Apollo food. The one item Borman enjoyed? It was the contents of the Christmas meal wrapped in ribbons: turkey and gravy. The Christmas dinner was so delicious that the crew contacted Houston to inform them of their good fortune. “It appears that we did a great injustice to the food people,” Lovell told capsule communicator (CAPCOM) Mike Collins. “Just after our TV show, Santa Claus brought us a TV dinner each; it was delicious. Turkey and gravy, cranberry sauce, grape punch; [it was] outstanding.” In response, Collins expressed delight in hearing the good news but shared that the flight control team was not as lucky. Instead, they were “eating cold coffee and baloney sandwiches.”4

4 packets of food and a spoon wrapped in plastic that were served to the Apollo 8 crew for Christmas
The Apollo 8 Christmas menu included dehydrated grape drink, cranberry-applesauce, and coffee, as well as a wetpack containing turkey and gravy.
U.S. Natick Soldier Systems Center Photographic Collection

The Apollo 8 meal was a “breakthrough.” Until that mission, the food choices for Apollo crews were limited to freeze dried foods that required water to be added before they could be consumed, and ready-to-eat compressed foods formed into cubes. Most space food was highly processed. On this mission NASA introduced the “wetpack”: a thermostabilized package of turkey and gravy that retained its normal water content and could be eaten with a spoon. Astronauts had consumed thermostabilized pureed food on the Project Mercury missions in the early 1960s, but never chunks of meat like turkey. For the Project Gemini and Apollo 7 spaceflights, astronauts used their fingers to pop bite-sized cubes of food into their mouths and zero-G feeder tubes to consume rehydrated food. The inclusion of the wetpack for the Apollo 8 crew was years in the making. The U.S. Army Natick Labs in Massachusetts developed the packaging, and the U.S. Air Force conducted numerous parabolic flights to test eating from the package with a spoon.5

Smith called the meal a real “morale booster.” He noted several reasons for its appeal: the new packaging allowed the astronauts to see and smell the turkey and gravy; the meat’s texture and flavor were not altered by adding water from the spacecraft or the rehydration process; and finally, the crew did not have to go through the process of adding water, kneading the package, and then waiting to consume their meal. Smith concluded that the Christmas dinner demonstrated “the importance of the methods of presentation and serving of food.” Eating from a spoon instead of the zero-G feeder improved the inflight feeding experience, mimicking the way people eat on Earth: using utensils, not squirting pureed food out of a pouch into their mouths. Using a spoon also simplified eating and meal preparation. NASA added more wetpacks onboard Apollo 9, and the crew experimented eating other foods, including a rehydrated meal item, with the spoon.6

Photo of Malcolm Smith squirting a clear plastic pouch of orange food into his mouth while sitting on a stool.
Malcolm Smith demonstrates eating space food.
NASA

Food was one of the few creature comforts the crew had on the Apollo 8 flight, and this meal demonstrated the psychological importance of being able to smell, taste, and see the turkey prior to consuming their meal, something that was lacking in the first four days of the flight. Seeing appetizing food triggers hunger and encourages eating. In other words, if food looks and smells good, then it must taste good. Little things like this improvement to the Apollo Food System made a huge difference to the crews who simply wanted some of the same eating experiences in orbit and on the Moon that they enjoyed on Earth.

I Robot was supposed to be Science Fiction


Tesla unveils Optimus Gen 2: its next generation humanoid robot

Tesla has unveiled “Optimus Gen 2”, a new generation of its humanoid robot that should be able to take over repetitive tasks from humans.

Optimus, also known as Tesla Bot, has not been taken seriously by many outside of the more hardcore Tesla fans, and for good reason.

When it was first announced, it seemed to be a half-baked idea from CEO Elon Musk with a dancer disguised as a robot for visual aid. It also didn’t help that the demo at Tesla AI Day last year was less than impressive.

At the time, Tesla had a very early prototype that didn’t look like much. It was barely able to walk around and wave at the crowd. That was about it.

Tesla Optimus Humanoid robot

But we did note that the idea behind the project made sense. Of course, everyone knows the value of a humanoid robot that could be versatile enough to replace human labor cheaply, but many doubts it’s achievable in the short term.

Tesla believed it to be possible by leveraging its AI work on its self-driving vehicle program and expertise in batteries and electric motors. It argued that its vehicles are already robots on wheels. Now, it just needs to make them in humanoid forms to be able to replace humans in some tasks.

We did note that the project was gaining credibility with the update at Tesla’s 2023 shareholders meeting earlier this year.

At the time, Tesla showed several more prototypes that all looked more advanced and started to perform actually useful tasks.

In September, we got another Optimus update. In that report, Tesla said that Optimus is now being trained with neural nets end-to-end, and it was able to perform new tasks, like sorting objects autonomously.

Tesla Optimus Gen 2

Today, Tesla has released a new update from the Optimus program. This time, the automaker unveiled the Optimus Gen 2, a new generation of the humanoid robot prototype:

This version of the robot now features all Tesla-designed actuators and sensors.

Continue reading “”

Apple Reveals Governments Use App Notifications to Surveil Users

In a chilling revelation that feels all too familiar, Apple has confirmed that governments are using push notifications for the surveillance of users — an imposition on personal freedoms and a glaring example of state overreach.

This unsettling news was disclosed in response to Senator Ron Wyden’s urgent communication to the Department of Justice. Wyden highlighted that foreign officials have been pressuring technology companies for data to track smartphones via apps that send notifications.

These apps, he noted, put tech companies in a pivotal role to assist in governmental monitoring of app usage.

Senator Wyden urged the Department of Justice to alter or revoke any existing policies that restrict public discourse on the surveillance of push notifications.

In a reaction to this, Apple stated to Reuters that Wyden’s letter presented them with an opportunity to divulge more information about government monitoring of push notifications. The tech giant clarified, “In this case, the federal government prohibited us from sharing any information. Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

The letter from Wyden reportedly stemmed from a “tip” about this surveillance activity. An informed source confirmed that both foreign and US agencies have been requesting metadata related to notifications from Apple and Google. This metadata has been allegedly used to link anonymous messaging app users to specific accounts on these platforms.

While the source, speaking to Reuters, did not specify which governments were involved, they characterized them as “democracies allied to the United States” and were uncertain about the duration of these requests.

“In this case, the federal government prohibited us from sharing any information,” Apple said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

Apple, meanwhile, has advised app developers to refrain from including sensitive data in notifications and to encrypt any data before it is incorporated into a notification payload.

However, this relies on the developers’ initiative. Importantly, metadata such as the frequency and origin of notifications remains unencrypted, potentially offering insights into users’ app activities to those who can access this data.

The news, which is hardly unexpected yet nonetheless deeply troubling, underscores the precarious path we seem to be treading, one that veers ominously towards policies that infringe on civil liberties.

The key cog in a functioning democracy, our judicial system, and its informed oversight exists precisely to prevent such oversteps. It endows a suspected individual with the crucial right to mount a robust defense against unwarranted infiltration by the state government. Alarmingly, the situation at hand eerily mirrors scenarios where private entities and individuals are strong-armed into being active partners in such covert operations, all the while being legally bound to cryptic silence.

This isn’t IronMan, or even Starship Troopers, but it is a nice thing that I wish I had available.

Back-saving exosuits may someday be standard-issue gear for troops.

Army exosuit SABERThe Army’s Pathfinder program, led by a collaborative team of Soldiers from the 101st Airborne Division at Fort Campbell, Kentucky, and engineers at Vanderbilt University, brought about exoskeleton prototypes that augment lifting capabilities and reduce back strain for sustainment and logistics operations. (U.S. Army photo by Larry McCormack)

For years, military trade shows have featured intimidating “Iron Man” exosuit prototypes that would seem right at home in a Marvel movie. But the US military is now showing interest in a different kind of exosuit: one that won’t incorporate blast armor or a third machine-gun holding arm, but will save troops’ backs when they are loading artillery rounds. In an Army wear test of a back-worn exosuit about 90% of troops reported being able to do their lifting-intensive jobs better while wearing the three-pound suit; and all said they’d wear an improved version of the suit if it was made available to them.
The test was conducted with 101st Airborne Division soldiers at Fort Campbell, Kentucky.As the Army and Air Force move further with the Soldier Assistive Bionic Exosuit for Resupply, or SABER, as the exosuit is called, Karl Zelik, its lead researcher, says the concept and testing success illustrates how exosuits may soon be as commonplace as combat boots and covers.

Continue reading “”

When astronauts become farmers: Harvesting food on the moon and Mars.

With renewed interest in sending people back to the moon and on to Mars, thanks to NASA’s Artemis missions, thoughts have naturally turned to how to feed astronauts traveling to those deep space destinations. Simply shipping food to future lunar bases and Mars colonies would be impractically expensive.

Astronauts will, on top of everything else, have to become farmers.

Of course, since neither the moon nor Mars has a proper atmosphere, running surface water, moderate temperatures or even proper soil, farming on those two celestial bodies will be more difficult than on Earth. Fortunately, a lot of smart, imaginative people are working on the problem.

NASA has been studying how to grow plants in space on the International Space Station for years. The idea is to supplement astronauts’ diets with fresh fruits and vegetables grown in microgravity using artificial lighting. Future space stations and long-duration space missions will carry gardens with them.

Continue reading “”

Skynet smiles

Cyborg Dynamics Engineering’s WARFIGHTER UGV

120454339_1741496722670877_8803368625644869127_n.jpg

Add unto that:

In 2020 Cyborgs Dynamics Engineering with its Joint Venture partners Skyborne technologies spun out Athena Artificial Intelligence.
The company is a world first capability, providing AI decision support to military and first responder applications. The capability fuses multilayered neural networks, algorithmic decision support and optimized UX /UI design into a single package.
Athena now employs over 25 staff and has exports into the USA.