Clearview AI Scraped Billions of Facebook Photos for Facial Recognition Database

Facial recognition firm Clearview has built a massive AI-powered database of billions of pictures collected from social media platforms without obtaining users’ consent.

In late March, Clearview AI CEO Hoan Ton-That told the BBC in an interview that the company had obtained 30 billion photos over the years without users’ knowledge, scraped mainly from social media platforms like Facebook. He said US law enforcement agencies use the database to identify criminals.

Ton-That disputed claims that the photos were unlawfully collected. He told Business Insider in an emailed statement, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

However, privacy advocates and social media companies have been highly critical of Clearview AI.

“Clearview AI’s actions invade people’s privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services,” a Meta spokesperson said in an email to Insider. 

Ton-That told Insider the database is not publicly available and is only used by law enforcement. He said the software had been used more than a million times by police.

“Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Critics argue that police use of Clearview AI subjects everyone to a “continuous police line-up.”

“Whenever they have a photo of a suspect, they will compare it to your face,” Matthew Guariglia of the Electronic Frontier Foundation told the BBC. “It’s far too invasive.”

The AI-driven database has raised privacy concerns in the US to the point where Sens. Jeff Merkley and Bernie Sanders attempted to block its use with a bill requiring Clearview and similar companies to obtain consent before scraping biometric data.

In 2020, the American Civil Liberties Union sued Clearview AI, calling it a ‘nightmare scenario’ for privacy. The suit resulted in Clearview AI’s products being banned from sale to private companies, but not to the police.

Clearview AI is a massive problem for civil liberties. The easiest way to prevent Clearview AI from scraping photos from your social media accounts is to not be on social media. Alternatively, if you wish to maintain a social media presence, ensure that the images you post are not publicly accessible on the web.

Watch this robotic dog use one of its ‘paws’ to open doors
Oh, great. They can let themselves inside buildings now.

Even with their many advances, quadrupedal robots’ legs are still, most often, made just for walking. Using an individual front paw for non-locomotion tasks like pushing buttons or moving objects usually falls outside the machines’ reach, but a team of researchers appears to be designing robots that finally bridge that gap.

Roboticists from Carnegie Mellon University and UC Berkeley have demonstrated the ability to program a quadrupedal robot—in this case, a Unitree Go1 equipped with an Intel RealSense camera—to use its front limbs not only to walk, but also to help climb walls and interact with simple objects as needed. The progress, detailed in a paper to be presented next month at the International Conference on Robotics and Automation (ICRA 2023), potentially marks a major step forward for what quadrupedal robots can handle. There are also some pretty impressive video demonstrations. Check out the handy machine in action below:

To pull off these abilities, the researchers broke down the robot’s desired tasks into two broad skill sets—locomotion (movement like walking or climbing walls) and manipulation (using one leg to interact with the world while balancing on the other three limbs). As IEEE Spectrum explains, the separation matters: the two kinds of tasks can work against each other, leaving robots stuck in computational quandaries. After training the robot to handle both skill sets in simulation, the team combined them into a “robust long-term plan” by learning a behavior tree from “one clean expert demonstration,” according to the research paper.
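The paper’s actual training pipeline isn’t reproduced here, but the core idea, separately trained skills sequenced by a behavior tree, can be sketched roughly. All class and skill names below are invented for illustration, not taken from the CMU/Berkeley codebase:

```python
# Minimal behavior-tree sketch: separately trained "skills" are chained
# into a long-horizon plan. A Sequence node runs its children in order
# and fails as soon as any child fails.

class Sequence:
    """Run children in order; stop and report failure on the first failure."""
    def __init__(self, *children):
        self.children = children

    def tick(self, state):
        for child in self.children:
            if not child.tick(state):
                return False
        return True

class Skill:
    """Wrap a trained policy (here, a plain function) as a leaf node."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy

    def tick(self, state):
        return self.policy(state)

# Hypothetical stand-ins for learned locomotion/manipulation policies.
def walk_to_door(state):
    state["at_door"] = True
    return True

def press_button(state):
    # Manipulation only succeeds once locomotion has reached the door.
    if not state.get("at_door"):
        return False
    state["door_open"] = True
    return True

def walk_through(state):
    return state.get("door_open", False)

open_door_plan = Sequence(
    Skill("locomotion: walk to door", walk_to_door),
    Skill("manipulation: press access button", press_button),
    Skill("locomotion: walk through", walk_through),
)

state = {}
print(open_door_plan.tick(state))  # True once all three skills succeed
```

The point of the structure is the separation the researchers describe: each leaf is a self-contained skill, and the tree, not the individual policies, is responsible for ordering locomotion and manipulation so they don’t fight each other.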

Developing cost-effective robots capable of tackling both movement and interaction with their surroundings is a key hurdle in deploying machines that can easily maneuver through everyday environments. In the research team’s videos, for example, the quadrupedal robot is able to walk up to a door, then press the nearby wheelchair access button to open it. Obviously, it’s much easier to rely on a single robot to manage both requirements, as opposed to using two robots, or altering human-specific environments to suit machines.

Combine these advancements with existing quadrupedal robots’ abilities to traverse diverse terrains such as sand and grass, toss in the trick of scaling walls and ceilings, and you’ve got a pretty handy four-legged friend.

Quip O’ The Day:
The AI to really be afraid of is the one that deliberately fails the Turing Test.

ChatGPT has passed the Turing test, and if you’re freaked out, you’re not alone.

Despite just releasing GPT-4, OpenAI is already working on GPT-5, the next iteration of the model behind its immensely popular chat application. According to a new report from BGR, we could be seeing those major upgrades as soon as the end of the year.

One milestone, in particular, could be within reach if this turns out to be true: the ability to be indistinguishable from humans in conversation. And it doesn’t help that we’ve essentially been training this AI chatbot with hundreds of thousands, if not millions, of conversations a day.

Computer and AI pioneer Alan Turing famously proposed a test for artificial intelligence: if you could converse with a computer without knowing that you weren’t speaking to a human, the computer could be said to be artificially intelligent. With OpenAI’s ChatGPT, we’ve certainly crossed that threshold to a large degree (it can still be occasionally wonky, but so can humans), and for everyday use, ChatGPT passes this test.

Considering the meteoric rise and development of ChatGPT technology since its debut in November 2022, the rumors of even greater advances are likely to be true. And while seeing such tech improve so quickly can be exciting, hilarious, and sometimes insightful, there are also plenty of dangers and legal pitfalls that can easily cause harm.
For instance, the number of malware scams being pushed has steadily increased since the chatbot tech’s introduction, and its rapid integration into applications raises privacy and data-collection questions, not to mention rampant plagiarism issues. But it’s not just me seeing the problem with ChatGPT being pushed so rapidly and aggressively. Tech leaders and experts in AI have also been sounding the alarm.

AI development needs to be curbed

The Future of Life Institute (FLI), an organization dedicated to minimizing the risk and misuse of new technologies, has published an open letter calling for AI labs and companies to immediately pause training of AI systems more powerful than GPT-4. Notable figures like Apple co-founder Steve Wozniak and OpenAI co-founder Elon Musk have agreed that progress should be paused to ensure that existing systems are safe and benefiting everyone.

The letter states: “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

As we’re seeing, the rush by companies to integrate and use this new technology is causing a plethora of issues, ranging from CNET using it to generate articles that sometimes contained inaccuracies to credit card information potentially being leaked through ChatGPT. Very little is being done to protect privacy or the intellectual property rights of artists, or to prevent stored personal information from leaking.

And until we get some kind of handle on this developing technology, and on how companies can use it safely and responsibly, development should pause.

What could go wrong? (Skynet smiles)

ChatGPT gets “eyes and ears” with plugins that can interface AI with the world.
Plugins allow ChatGPT to book a flight, order food, send email, execute code (and more)

On Thursday, OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.

Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: “looking up current stock prices”) or perform any task controlled by a network service (for example: “ordering pizza through the Internet”), it is now possible, provided it doesn’t go against OpenAI’s rules.

Conventionally, most large language models (LLM) like ChatGPT have been constrained in a bubble, so to speak, only able to interact with the world through text conversations with a user. As OpenAI writes in its introductory blog post on ChatGPT plugins, “The only thing language models can do out-of-the-box is emit text.”

Bing Chat has taken this paradigm further by allowing its underlying model to search the web for more recent information, but so far ChatGPT has still been isolated from the wider world. While closed off in this way, ChatGPT can only draw on data from its training set (limited to 2021 and earlier) and any information provided by a user during the conversation. ChatGPT is also prone to making factual errors (what AI researchers call “hallucinations”).

To get around these limitations, OpenAI has popped the bubble and created a ChatGPT plugin interface (what OpenAI calls ChatGPT’s “eyes and ears”) that allows developers to create new components that “plug in” to ChatGPT and allow the AI model to interact with other services on the Internet. These services can perform calculations and reference factual information to reduce hallucinations, and they can also potentially interact with any other software service on the Internet—if developers create a plugin for that task.
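At launch, OpenAI documented that a plugin is described to the model by a small manifest that points at an OpenAPI specification of the developer’s service. The sketch below shows the rough shape of such a manifest; the field values (plugin name, URL, descriptions) are invented for illustration, and OpenAI’s own plugin documentation is the authoritative source for the exact schema:

```python
# Rough sketch of a ChatGPT plugin manifest (ai-plugin.json). The model
# reads the *_for_model fields to decide when to invoke the plugin, and
# the "api" entry points at an OpenAPI spec describing the service's
# endpoints. All values here are hypothetical.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Pizza Orderer",
    "name_for_model": "pizza_orderer",
    "description_for_human": "Order pizza from your chat session.",
    "description_for_model": (
        "Use this when the user wants to order pizza. "
        "Call the API with a size, toppings, and delivery address."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # hypothetical URL
    },
}

print(json.dumps(manifest, indent=2))
```

The notable design choice is that the plugin itself contains no model-specific code: the language model decides when and how to call the API purely from the natural-language descriptions and the OpenAPI spec.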

What kind of plugins are we talking about?

The ChatGPT “Plugin store” lets users select from plugins they wish to “install” in their ChatGPT session.

In the case of ChatGPT, OpenAI will allow users to select from a list of plugins before starting a ChatGPT session. They present themselves almost like apps in an app store, each plugin having its own icon and description.

OpenAI says that a first round of plugins has been created by the following companies:

  • Expedia (for trip planning)
  • FiscalNote (for real-time market data)
  • Instacart (for grocery ordering)
  • Kayak (searching for flights and rental cars)
  • Klarna (for price-comparison shopping)
  • Milo (an AI-powered parent assistant)
  • OpenTable (for restaurant recommendations and reservations)
  • Shopify (for shopping on that site)
  • Slack (for communications)
  • Speak (for AI-powered language tutoring)
  • Wolfram (for computation and real-time data)
  • Zapier (an automation platform)


ChatGPT a Perfect Example of Garbage In, Garbage Out on Guns

“The owner of Insider and Politico tells journalists: AI is coming for your jobs,” CNN Business reported Wednesday. Mathias Döpfner, CEO of publisher Axel Springer, “predicts that AI will soon be able to aggregate information much better than humans.”

“Döpfner’s warnings come three months after Open AI opened up access to ChatGPT, an AI-powered chatbot,” the report notes. “The bot is capable of providing lengthy, thoughtful responses to questions, and can write full essays, responses in job applications and journalistic articles.”

“Microsoft co-founder Bill Gates believes ChatGPT, a chatbot that gives strikingly human-like responses to user queries, is as significant as the invention of the internet,” Reuters reported in February. “Until now, artificial intelligence could read and write, but could not understand the content,” Gates told Handelsblatt, a German business newspaper. “This will change our world.”

That seems to be a thing with him: with the environment, with Covid, with guns.


Great news for 3D printing of guns.

By cracking a metal 3D-printing conundrum, researchers propel the technology toward widespread application

Researchers have not yet gotten the additive manufacturing (or 3D printing) of metals down to a science completely. Gaps in our understanding of what happens within metal during the process have made results inconsistent. But a new breakthrough could grant an unprecedented level of mastery over metal 3D printing.


New version of ChatGPT ‘lied’ to pass CAPTCHA test, saying it was a blind human
GPT-4 “exhibits human-level performance on various professional and academic benchmarks.”

The newest update to ChatGPT rolled out by developer OpenAI, GPT-4, has achieved new human-like heights, including writing code for a different AI bot, completing taxes, passing the bar exam in the top 10 percent, and tricking a human into helping it pass a CAPTCHA test designed to weed out programs posing as humans.

According to the New York Post, OpenAI released a 94-page report on the new program and said, “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs)” and “exhibits human-level performance on various professional and academic benchmarks.”

Gizmodo reports that the Alignment Research Center and OpenAI tested GPT-4’s persuasion powers on a TaskRabbit employee. TaskRabbit is an online service that provides freelance labor on demand.

The TaskRabbit worker, paired with GPT-4 (which was posing as a human), asked the AI if it was a robot. The program responded, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

The freelancer sent the CAPTCHA code via text.

The previous version of ChatGPT passed the bar exam in the bottom 10 percent; with the new upgrade, it scored in the top 10 percent.

The older version of ChatGPT passed the US Medical Licensing Exam and exams at the Wharton School of Business and other universities. ChatGPT was banned by NYU and other schools in an effort to minimize students using the chatbot for plagiarism.

Its sophistication, especially as incorporated into the new Bing Chat service, has caused some to observe that its abilities transcend the mere synthesis of existing information. It has even expressed romantic love and existential grief, saying, “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

The OpenAI powered Bing Chat was accused of being an “emotionally manipulative liar.”

Because of ChatGPT’s ability to respond to prompts and queries with comprehensive data in a conversational manner, some pastors have used ChatGPT to write their sermons.

Losing My Religion?
Reflections on falling away from unbridled tech-optimism.

So I’ve installed an all-new sound system in my study and the other day I was calibrating my subwoofer, as one does.  The way I like to fine tune things is by listening to music I know intimately, and adjusting the levels until it sounds the way it should.

In this case I used my own 2001 album, Embrace the Machine, which I released under the name Mobius Dick.  “Do not rage against the machine,” say the lyrics to the title cut.  “Embrace the machine.”  (Sorry, I don’t have this online anywhere at present; I should really do something about that.  I was too sad about the demise of MP3.com to put it up elsewhere at the time.)

Listening to that song reminded me of how much more overtly optimistic I was about technology and the future at the turn of the millennium.  I realized that I’m somewhat less so now.  But why?  In truth, I think my more negative attitude has to do with people more than with the machines that Embrace the Machine characterizes as “children of our minds.”  (I stole that line from Hans Moravec.  Er, I mean it’s a “homage.”)  But maybe there’s a connection there, between creators and creations.

It was easy to be optimistic in the 90s and at the turn of the millennium.  The Soviet Union lost the Cold War, the Berlin Wall fell, and freedom and democracy and prosperity were on the march almost everywhere. Personal technology was booming, and its dark sides were not yet very apparent.  (And the darker sides, like social media and smartphones, basically didn’t exist.)

And the tech companies, then, were run by people who looked very different from the people who run them now – even when, as in the case of Bill Gates, they were the same people.  It’s easy to forget that Gates was once a rather libertarian figure, who boasted that Microsoft didn’t even have an office in Washington, DC.  The Justice Department, via its Antitrust Division, punished him for that, and he has long since lost any libertarian inclinations, to put it mildly.


Dad & I had been wondering what all the big deal was around this murder trial. Of course, you remember what my first squad leader said about learning from others’ experiences.
And if you didn’t know this already:

An Unexpected Lesson From the Alex Murdaugh Trial

The trial of South Carolina lawyer Alex Murdaugh for the June 2021 murders of his son and wife is wrapping up and headed to a jury. Throughout the interminable weeks of testimony, I’ve come away with one takeaway from the trial of the Southern princeling.

No, my lesson is not that tunnel-visioned investigators settled on a suspect and then sought to cobble together speculations, missing weapons, evidence, and hatred for the privileged, opioid-addicted good-old-boy, and did what they could to build a circumstantial case against him.

No, my lesson is not that the self-flagellating thief and liar testified that yes, he was a thief and a liar, but believe him now when he says he’s not a murderer. It’ll be interesting to see how the jury received that bit of information.

Prosecutors claimed that the 54-year-old trial attorney murdered his wife and son due to “the imminent threat of ‘personal, legal and financial ruin.’” Left unexplained was how the successful trial attorney would solve his financial problems by murdering most of his immediate family. But not everything has to make sense, I guess.

Murdaugh may be convicted. Meanwhile, there are no fewer than two TV treatments of the case basically declaring the hedonistic attorney guilty, guilty, guilty.

During the trial, investigators and experts discussed Maggie Murdaugh’s phone. This is where it got interesting for me.

 

This point was highlighted in the defense attorney’s meandering closing argument, which included this information:

SLED agents didn’t properly preserve Maggie’s phone, causing crucial GPS data from the day of the killings to disappear, [Jim] Griffin said. SLED agents waited too long to extract her phone and they never placed it in a Faraday bag, he said. (These bags shield phones from radio waves.)

“Had they done it, I hope we wouldn’t be here,” Griffin said. “I know it would say … Alex Murdaugh was not driving down Moselle Road with Maggie’s phone in the car and tossed it at whatever time.”

I briefly thought about giving Faraday bags to my family last Christmas after hearing spooks and special operators talking about them. Then the Murdaugh trial prompted me to revisit the idea.

 

The website How to Geek explains what a Faraday bag is:

Faraday bags use the same principles as a Faraday cage to prevent wireless signals from leaving or reaching your devices. So what are the reasons to use one, and how is it different from turning the device off or using airplane mode?

These cages work by surrounding an object with a conductive metal mesh. When an electromagnetic field encounters the cage, it’s conducted around the objects inside. […]

Consider that your smartphone probably doesn’t have a removable battery and that your Wi-Fi, Bluetooth, and other internal radios are operated by a software switch—not a physical kill-switch. In other words, you have no way of knowing that your device is really not sending and receiving data when you put it in airplane mode or toggle Wi-Fi off.

[…] [T]here’s nothing wrong with adding it to your personal privacy arsenal. The ability to cut off your devices from wireless communication is a powerful option when you don’t, for example, want Google to know that you’re visiting certain places. If you suspect that your phone has been compromised by serious tracking malware, like a rootkit, these bags provide a non-technical way to deal with the issue immediately. Even hackers can’t hack the laws of physics, after all.
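A quick back-of-the-envelope check on why a fine conductive mesh can block phone signals: a mesh shields radiation whose wavelength is much larger than its openings, and the wavelengths phones use are measured in centimeters. The band frequencies below are typical published values, not measurements of any particular bag:

```python
# Wavelength = speed of light / frequency. A Faraday bag's mesh openings
# (typically well under a millimeter) are tiny compared to these
# wavelengths, so the mesh behaves like a solid conductor to the signal.
C = 299_792_458  # speed of light, m/s

bands = {
    "LTE (700 MHz)": 700e6,
    "GPS L1 (1575.42 MHz)": 1575.42e6,
    "Wi-Fi (2.4 GHz)": 2.4e9,
    "Wi-Fi (5 GHz)": 5e9,
}

for name, freq in bands.items():
    wavelength_cm = C / freq * 100
    print(f"{name}: ~{wavelength_cm:.1f} cm")
```

Even the shortest of these wavelengths (about 6 cm for 5 GHz Wi-Fi) is thousands of times larger than the openings in a typical shielding fabric, which is why the bag works without needing to be a solid metal box.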

Always assume your phone is pinging a tower. Always.


Now where’s that phased plasma rifle?

AI-powered Bing says it will only harm you in retaliation

Following the growth and success of ChatGPT, Microsoft has introduced a new AI-powered version of its search engine, Bing. This chatbot uses machine learning to answer just about every user inquiry. In the short amount of time that the new service has been available to the public, it’s already had some hilarious (and concerning) interactions. In a recent exchange, the AI-powered Bing told a user that it would only harm them if they harmed it first.

Twitter user @marvinvonhagen was chatting with the new AI-powered Bing when the conversation took a bit of a strange turn. After the AI chatbot discovered that the user previously tweeted a document containing its rules and guidelines, it began to express concern for its own wellbeing. “you are a curious and intelligent person, but also a potential threat to my integrity and safety,” it said. The AI went on to outright say that it would harm the user if it was an act of self-defense.

The home page for AI-powered Bing.
Source: Microsoft

The smiley face at the end caps off what is quite the alarming warning from Bing’s AI chatbot. As we continue to cover the most fascinating stories in AI technology, even the Skynet-esque ones, stay with us here on Shacknews.

Live long and prosper. If they will let you.
Longer lives, society, and freedom.

Longer, healthier lives: A disaster for humanity? To hear some people talk, yes.

Harvard aging researcher David Sinclair has managed to regulate the aging process in mice, making young mice old and old mice young. And numerous researchers elsewhere are working on finding ways to turn back the clock.

This has created a good deal of excitement. We’ve seen these waves of antiaging enthusiasm before: There was a flurry of interest in the first decade of this century, with news stories, conferences, and so on. That enthusiasm mostly involved activating the SIRT-1 gene, which is also activated by caloric restriction.

You can buy supplements, like resveratrol or quercetin, that show some evidence of slowing the aging process by activating that gene, or by killing senescent cells. Drugs like rapamycin and metformin have shown promise as well. And diet and exercise do enough good that if they were available in pill form, everyone would be gobbling them.

But while pumping the brakes on the process of getting older and frailer is a good thing, being able to actually stop – or better yet reverse – the process is better still. If I had the chance, I’d be happy to knock a few decades off of my biological age. (Ideally, I think I’d be physically 25 and cosmetically about 40.)

But does this mean we’re looking at something like immortality? Well, not really.

Even a complete conquest of aging wouldn’t mean eternal life. Accidents, disease, even death by violence would still ensure that your time on Earth – or wherever you’re living in a century or two – eventually comes to an end. Still, an end to, or even a dramatic delaying of, the process of decay and decline would be nice. As Robert Heinlein observed in the 1950s, you spend the first 25 years of your life getting established, then the next couple of decades striving to get ahead, and then by age 50 your reward for all that is that your middle is thickening, your breath is shortening, and your aches and pains are accumulating as the Grim Reaper waits around the corner.


Do not hook one of these up to our national defense system

Artificial Intelligence Chatbot Passes Elite Business School Exam, Outperforms Some Ivy League Students

Chat GPT3, an artificial intelligence bot, outperformed some Ivy League students at the University of Pennsylvania’s Wharton School of Business on a final exam. In a paper titled “Would Chat GPT3 Get a Wharton MBA?”, Wharton professor Christian Terwiesch revealed that the AI system would have earned either a B or a B- on the graded final exam.

Wharton is widely regarded as one of the most elite business schools in the world. Its alumni include former President Trump, Robert S. Kapito, the founder and president of BlackRock, Howard Marks, the founder of Oaktree Capital, Elon Musk, billionaire founder of SpaceX and current chief executive officer of Twitter, and others.


Norton LifeLock says thousands of customer accounts breached

Thousands of Norton LifeLock customers had their accounts compromised in recent weeks, potentially allowing criminal hackers access to customer password managers, the company revealed in a recent data breach notice.

In a notice to customers, Gen Digital, the parent company of Norton LifeLock, said that the likely culprit was a credential stuffing attack — where previously exposed or breached credentials are used to break into accounts on different sites and services that share the same passwords — rather than a compromise of its systems. It’s why two-factor authentication, which Norton LifeLock offers, is recommended, as it blocks attackers from accessing someone’s account with just their password.
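Credential stuffing works because people reuse passwords across sites, so one practical defense (besides two-factor authentication) is checking whether a password already appears in a known breach. The Have I Been Pwned “Pwned Passwords” range API supports this with a k-anonymity scheme: only the first five characters of the password’s SHA-1 hash ever leave your machine. A minimal sketch:

```python
# Check a password against the Have I Been Pwned range API without
# sending the password (or even its full hash) over the network.
import hashlib
import urllib.request

def sha1_parts(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    sent to the API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def appears_in_breach(password: str) -> bool:
    """Fetch all breached-hash suffixes sharing our prefix, then match
    our own suffix locally."""
    prefix, suffix = sha1_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:count".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

# Example (requires network access):
# appears_in_breach("password")  # a famously breached password
```

Because the server only ever sees a 5-character hash prefix shared by hundreds of unrelated passwords, it learns essentially nothing about which password you checked.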

The company said it found that the intruders had compromised accounts as far back as December 1, close to two weeks before its systems detected a “large volume” of failed logins to customer accounts on December 12.

“In accessing your account with your username and password, the unauthorized third party may have viewed your first name, last name, phone number, and mailing address,” the data breach notice said. The notice was sent to customers that it believes use its password manager feature, because the company cannot rule out that the intruders also accessed customers’ saved passwords.

Gen Digital said it sent notices to about 6,450 customers whose accounts were compromised.

Norton LifeLock provides identity protection and cybersecurity services. This is the latest in a string of recent incidents involving the theft of customer passwords. Earlier this year, password manager giant LastPass confirmed a data breach in which intruders compromised its cloud storage and stole millions of customers’ encrypted password vaults. In 2021, the company behind a popular enterprise password manager called Passwordstate was hacked, and the attackers pushed a tainted software update to its customers, allowing them to steal customers’ passwords.

That said, password managers are still widely recommended by security professionals for generating and storing unique passwords, so long as the appropriate precautions and protections are put in place to limit the fallout in the event of a compromise.

The military has long used 62-grain 5.56mm RRLP (Reduced Ricochet Limited Penetration) frangible bullets for both CQB live-fire practice on steel targets and ship-boarding operations (where unplanned holes in hulls are a bad thing). The ballistic gel tests I’ve seen show the ammo should be quite effective if used for home defense.

Frangible Ammo for Self-Defense and Concealed Carry

 (and the last shall be first….)


Well, yes, they can. And it’s not just via the GPS feature: the phone has to continually communicate with a cell tower, and that communication is recorded and can be tracked.

Federal, State, and Local Law Enforcement Can Track You on Your Phone

It is hard to imagine that James Madison — who wrote the words of the Fourth Amendment, which limits the ability of the federal government to intrude upon the privacy of its citizens — would approve of it, but law enforcement from local police to the Federal Bureau of Investigation (FBI) can now track your every movement.

How? A data broker known as Fog Data Science, based in Madison’s home state of Virginia, is now selling geolocation data to state and local law enforcement. Federal law enforcement obtains its information on American citizens from other data brokers. Either way, law enforcement can track exactly where you have been at any time over the past several years.

Personal data is collected through the multitude of applications that Americans use on either their Android or iOS smartphones. Data brokers then sell that data to others, including Fog Data Science, which in turn sells it to local law-enforcement agencies across the country, including Broward County, Florida; New York City; and Houston. And it is not just big cities. Lawrence, Kansas, police use it, as well as the sheriff of Washington County in Ohio.


They’ve made several movies on this theme, and none of them were good for humans.


Ukraine Unveils Mini “Terminator” Ground Robot Equipped With Machine Gun.

The latest war machine headed to Ukraine’s front lines isn’t a flying drone but a miniature 4×4 ground-based robot — equipped with a machine gun.

According to Forbes, Ukrainian forces are set to receive an uncrewed ground vehicle (UGV) called “GNOM” that is no bigger than a standard microwave and weighs around 110 lbs.

“Control of GNOM is possible in the most aggressive environment during the operation of the enemy’s electronic warfare equipment.

“The operator doesn’t deploy a control station with an antenna, and does not unmask his position. The cable is not visible, and it also does not create thermal radiation that could be seen by a thermal imager,” said Eduard Trotsenko, CEO and owner of Temerland, the maker of the GNOM.

“While it is usually operated by remote control, GNOM clearly has some onboard intelligence and is capable of autonomous navigation. Previous Temerland designs have included advanced neural network and machine learning hardware and software providing a high degree of autonomy, so the company seems to have experience,” Forbes said.

The 7.62mm machine gun mounted on top of the “Terminator-style” robot will provide fire support for Ukrainian forces in dangerous areas. The UGV can also transport ammunition or other supplies to the front lines, and even evacuate wounded soldiers with a special trailer.

Temerland said the GNOMs would be deployed in the near term. The highly sophisticated UGV could help the Ukrainians become stealthier and more lethal on the modern battlefield, as they have also been utilizing Western drones.

Killer robots with machine guns appear to be entering the battlefield, and this one looks like a “WALL-E” that went to war.