The latest war machine headed to Ukraine’s front lines isn’t a flying drone but a miniature 4×4 ground-based robot — equipped with a machine gun.
According to Forbes, Ukrainian forces are set to receive an uncrewed ground vehicle (UGV) called “GNOM” that is no bigger than a standard microwave and weighs around 110 lbs.
“Control of GNOM is possible in the most aggressive environment during the operation of the enemy’s electronic warfare equipment.
“The operator doesn’t deploy a control station with an antenna, and does not unmask his position. The cable is not visible, and it also does not create thermal radiation that could be seen by a thermal imager,” said Eduard Trotsenko, CEO and owner of Temerland, the maker of the GNOM.
“While it is usually operated by remote control, GNOM clearly has some onboard intelligence and is capable of autonomous navigation. Previous Temerland designs have included advanced neural network and machine learning hardware and software providing a high degree of autonomy, so the company seems to have experience,” Forbes said.
The 7.62mm machine gun mounted on top of the “Terminator-style” robot will provide fire support for Ukrainian forces in dangerous areas. The UGV can also transport ammunition or other supplies to the front lines and even evacuate wounded soldiers with a special trailer.
Temerland said the GNOMs would be deployed in the near term. The highly sophisticated UGV could help Ukrainian forces become stealthier and more lethal on the modern battlefield, complementing the Western drones they have already been putting to use.
Killer robots with machine guns appear to be entering the battlefield, and this one seems as if it was “WALL-E” that went to war.
I just ran across an interesting article, “Should AI Psychotherapy App Marketers Have a Tarasoff Duty?,” which answers the question in its title “yes”: Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that’s the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.
It’s a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could plausibly interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:
To what extent will various “smart” products, whether apps or cars or Alexas or various Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible “owners”)?
To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant’s affirmative contribution to the harm. Normally, a psychotherapist wouldn’t have a duty to prevent harm caused by his patient, just as you don’t have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have indeed taken. Indeed, I’m skeptical about Tarasoff, though most judges that have considered the matter don’t share my skepticism.
But it is well-established in tort law that people have a legal duty to take reasonable care when they do something that might affirmatively help someone do something harmful (that’s the basis for legal claims, for instance, for negligent entrustment, negligent hiring, and the like). Thus, for instance, a car manufacturer’s provision of a car to a driver does affirmatively contribute to the harm caused when the driver drives recklessly.
Does that mean that modern (non-self-driving) cars must—just as a matter of the common law of torts—report to the police, for instance, when the driver appears to be driving erratically in ways that are indicative of likely drunkenness? Should Alexa or Google report on information requests that seem like they might be aimed at figuring out ways to harm someone?
To be sure, perhaps there shouldn’t be such a duty, for reasons of privacy or, more specifically, the right not to have products that one has bought or is using surveil and report on you. But if so, then there might need to be work done, by legislatures or by courts, to prevent existing tort law principles from pressuring manufacturers to engage in such surveillance and reporting.
I’ve been thinking about this ever since my Tort Law vs. Privacy article, but it seems to me that the recent surge of smart devices will make these issues come up even more.
Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, what he calls part of a “hive mind.” He was supposed to test if his conversation partner used discriminatory language or hate speech.
As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.
What could go wrong?
Observation O’ The Day
Tyrant tool: this tool is worse than useless. It will create opportunities for more murders. That is, unless you are a tyrant intent on disarming your subjects.
First off, the mass shooter will start shooting before they pass through the detector, taking out the guards before they even have a clue a threat is present. And since there is a “funnel” for people going through the detector, there will be a group of people ready for “harvesting” by the perp.
It also will make it difficult or impossible for people to defend themselves where these systems are deployed.
[1. If the system actually works as advertised; as we see in the article, there have been several cases of “false positives,” and those are ripe pickings for lawyers and a false arrest/false detention/defamation of character lawsuit by a private citizen. 2. If the cost of the system doesn’t make it more expensive than a retail store can afford, taking into account that most retail, especially grocery stores, actually operate on a razor-thin profit margin.; Miles]
Hence, if your threat model is a mass shooter, the device will actually make things worse rather than better. Many other threat models suffer similar degradation of public security.
The threat model that doesn’t degrade is the one where you want your subjects to be more dependent on you for security and to make it difficult for them to threaten your position of power. In that case this system will be a useful asset to disarm your subjects….Joe Huffman
“Could our system have stopped it?” he said. “I don’t know. But I think we could democratize security so that someone planning on hurting people can’t easily go into an unsuspecting place.”
George is chief executive of Evolv Technology, whose AI-based screening system is meant to flag weapons, “democratizing security” so that weapons can be kept out of public places without elaborate checkpoints. As U.S. gun violence like the kind seen in Buffalo increases — firearms sales reached record heights in 2020 and 2021 while the Gun Violence Archive reports 198 mass shootings since January — Evolv has become increasingly popular, used at schools, stadiums, stores and other gathering spots.
To its supporters, the system is a more effective and less obtrusive alternative to the age-old metal detector, making events both safer and more pleasant to attend. To its critics, however, Evolv’s effectiveness has hardly been proved. And it opens up a Pandora’s box of ethical issues in which convenience is paid for with RoboCop surveillance.
“The idea of a kinder, gentler metal detector is a nice solution in theory to these terrible shootings,” said Jay Stanley, senior policy analyst for the American Civil Liberties Union’s project on speech, privacy, and technology. “But do we really want to create more ways for security to invade our privacy? Do we want to turn every shopping mall or Little League game into an airport?”
Evolv machines use “active sensing” — a light-emission technique that also underpins radar and lidar — to create images. The system then applies AI to examine them. Data scientists at the Waltham, Mass., company have created “signatures” (basically, visual blueprints) and trained the AI to compare them to the scanner images.
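The “signature” comparison described above can be illustrated with a toy nearest-match classifier. Everything here — the feature vectors, the categories, and the threshold — is invented for illustration and has nothing to do with Evolv’s actual models, which would use learned image features at far higher dimension.

```python
import math

# Hypothetical "signatures": feature vectors standing in for the visual
# blueprints the article describes. These toy values are invented.
SIGNATURES = {
    "handgun": [0.9, 0.1, 0.8],
    "rifle":   [0.95, 0.7, 0.9],
    "phone":   [0.2, 0.1, 0.3],
}

def cosine_similarity(a, b):
    """Similarity of two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(scan_features, threshold=0.98):
    """Return the best-matching signature only if it clears the threshold."""
    best_name, best_score = max(
        ((name, cosine_similarity(scan_features, sig))
         for name, sig in SIGNATURES.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else "no match"
```

The threshold is where the false-positive trade-off discussed later in the piece lives: set it too low and umbrellas get flagged as rifles; too high and real weapons pass.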
Using what’s known as a “link layer relay attack,” security consulting firm NCC Group was able to unlock, start, and drive vehicles and unlock and open certain residential smart locks without the Bluetooth-based key anywhere in the vicinity.
Sultan Qasim Khan, the principal security consultant and researcher with NCC Group, demonstrated the attack on a Tesla Model 3, although he notes that the problem isn’t specific to Tesla. Any vehicle that uses Bluetooth Low Energy (BLE) for its keyless entry system would be vulnerable to this attack.
Many smart locks are also vulnerable, Khan adds. His firm specifically called out the Kwikset/Weiser Kevo models since these use a touch-to-open feature that relies on passive detection of a Bluetooth fob or smartphone nearby. Since the lock’s owner doesn’t need to interact with the Bluetooth device to confirm they want to unlock the door, a hacker can relay the key’s Bluetooth credentials from a remote location and open someone’s door even if the homeowner is thousands of miles away.
How it works
This exploit still requires that the attacker have access to the owner’s actual Bluetooth device or key fob. However, what makes it potentially dangerous is that the real Bluetooth key doesn’t need to be anywhere near the vehicle, lock, or other secured device.
Instead, Bluetooth signals are relayed between the lock and key through a pair of intermediate Bluetooth devices connected using another method — typically over a regular internet link. The result is that the lock treats the hacker’s nearby Bluetooth device as if it’s the valid key.
As Khan explains, “we can convince a Bluetooth device that we are near it — even from hundreds of miles away […] even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance.”
The exploit bypasses the usual relay attack protections as it works at a very low level of the Bluetooth stack, so it doesn’t matter whether the data is encrypted, and it adds almost no latency to the connection. The target lock has no way of knowing that it’s not communicating with the legitimate Bluetooth device.
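The mechanics described above can be sketched as a toy simulation. The classes, the XOR “encryption,” and the relay function below are all invented stand-ins, not a real BLE stack, but they show the core point: a relay that only forwards opaque bytes defeats challenge–response authentication without ever decrypting anything.

```python
import os

class Key:
    """Stands in for the legitimate Bluetooth key fob."""
    def __init__(self, secret):
        self.secret = secret

    def respond(self, challenge):
        # Real fobs compute a cryptographic response; XOR is a toy stand-in.
        return bytes(c ^ s for c, s in zip(challenge, self.secret))

class Lock:
    """Stands in for the vehicle or door lock."""
    def __init__(self, secret):
        self.secret = secret
        self.challenge = os.urandom(16)

    def verify(self, response):
        expected = bytes(c ^ s for c, s in zip(self.challenge, self.secret))
        return response == expected

def relay(payload):
    # The attacker's two devices forward bytes over the internet untouched.
    # Encryption doesn't help: the relay never needs to read the payload.
    return payload

secret = os.urandom(16)
lock, key = Lock(secret), Key(secret)

# Attack: challenge travels lock -> relay -> (distant) key -> relay -> lock.
response = relay(key.respond(relay(lock.challenge)))
print(lock.verify(response))  # the lock believes the key is nearby
```

This is why the article notes that only latency bounding, not encryption, addresses relaying — and NCC Group’s attack defeats even that by operating at the link layer with near-zero added delay.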
Since many Bluetooth security keys operate passively, a thief would only need to place one device within a few feet of the owner and the other near the target lock. For example, a pair of thieves could work in tandem to follow a Tesla owner away from their vehicle, relaying the Bluetooth signals back to the car so that it could be stolen once the owner was far enough away.
These attacks could be carried out even across vast distances with enough coordination. A person on vacation in London could have their Bluetooth keys relayed to their door locks at home in Los Angeles, allowing a thief to quickly gain access simply by touching the lock.
This also goes beyond cars and smart locks. Researchers note that it could be used to unlock laptops that rely on Bluetooth proximity detection, prevent mobile phones from locking, circumvent building access control systems, and even spoof the location of an asset or a medical patient.
NCC Group also adds that this isn’t a traditional bug that can be fixed with a simple software patch. It’s not even a flaw in the Bluetooth specification. Instead, it’s a matter of using the wrong tool for the job. Bluetooth was never designed for proximity authentication — at least not “for use in critical systems such as locking mechanisms,” the firm notes.
How to protect yourself
First, it’s essential to keep in mind that this vulnerability is specific to systems that rely exclusively on passive detection of a Bluetooth device.
For example, this exploit can’t realistically be used to bypass security systems that require you to unlock your smartphone, open a specific app, or take some other action, such as pushing a button on a key fob. In this case, there’s no Bluetooth signal to relay until you take that action — and you’re generally not going to try and unlock your car, door, or laptop when you’re not anywhere near it.
This also won’t typically be a problem for apps that take steps to confirm your location. For instance, the auto-unlock feature in the popular August smart lock relies on Bluetooth proximity detection, but the app also checks your GPS location to make sure you’re actually returning home. It can’t be used to unlock your door when you’re already home, nor can it open your door when you’re miles away from home.
If your security system allows for it, you should enable an extra authentication step that requires that you take some action before the Bluetooth credentials are sent to your lock. For example, Kwikset has said that customers who use an iPhone can enable two-factor authentication in their lock app, and it plans to add this to its Android app soon. Kwikset’s Kevo application also disables proximity unlocking functionality when the user’s phone has been stationary for an extended period.
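Taken together, the mitigations above amount to an unlock policy that requires several independent signals to agree: BLE proximity, a GPS geofence (August-style), recent phone motion (Kevo-style), and an explicit user action (the 2FA step). A minimal sketch, with invented coordinates, thresholds, and function names — none of this is August’s or Kwikset’s actual code:

```python
import math
import time

HOME = (34.0522, -118.2437)  # example home coordinates (Los Angeles)

def distance_m(a, b):
    """Approximate distance in meters between two (lat, lon) pairs.
    Equirectangular approximation; fine at geofence scale."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000

def should_unlock(ble_nearby, gps_fix, last_motion_ts, user_confirmed,
                  now=None, geofence_m=100, idle_limit_s=600):
    """Unlock only when every independent signal agrees."""
    now = time.time() if now is None else now
    if not ble_nearby:
        return False                        # no proximity signal at all
    if distance_m(gps_fix, HOME) > geofence_m:
        return False                        # phone's GPS says it isn't home
    if now - last_motion_ts > idle_limit_s:
        return False                        # phone stationary too long
    return user_confirmed                   # explicit action, 2FA-style
```

A relay attacker can fake the BLE proximity bit, but not the victim’s GPS fix, motion history, or tap-to-confirm — which is exactly why layering these checks blunts the attack.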
Compliments of inventor Richard Browning, the 3-D printed Gravity Industries Jet Suit consists of two small turbines fastened to each arm as well as a larger one on the user’s back. In a test run captured on video, the developer climbed more than 2,000 feet over a 1.2-mile distance in around three minutes and forty seconds.
Witness the miracle:
This is how 3D printing technology is progressing.
A prototype 1911 frame. And it works pretty good.
The Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) has been coming after Orlando-based Rare Breed Triggers (RBT), the manufacturer of a drop-in AR-15 forced reset trigger, since last summer.
Everyone in the gun-rights world has been following the ATF’s attack on RBT. The Feds say RBT’s FRT-15 (forced reset trigger for an AR-15 platform) is classified as a “machine gun,” but the company claims otherwise.
Less than two weeks ago, Gun Owners of America, one of the most prominent pro-gun organizations, published an alleged leaked internal ATF email documenting plans to start seizing lawfully-owned FRT-15s from manufacturers and resellers. RBT’s president Lawrence Demonico responded to the leaked memo and said while he couldn’t confirm it, “I can tell you we’ve received word from one dealer in Illinois late yesterday afternoon stating that the ATF visited him and handed him a cease and desist order and seized FRT-15 triggers.”
The ATF under the Biden administration is getting bolder and could rule by executive fiat on new guidelines for gun braces, serialized uppers, and 80% lowers as early as spring. The Feds are also pursuing the ban of forced reset triggers. Many in the gun community have a bad feeling about an overreaching ATF ahead of midterms as President Biden must appease his anti-gun base.
That aside, we enter the world of 3D printing and how the gun community has embraced this technology in recent years to stay one step ahead of the ATF. This brings us to a YouTuber named “Hoffman Tactical,” who released a video days ago explaining how he 3D-printed a forced reset trigger.
February has already been a big month for autonomous flight. For the first time, this past Saturday, and then again on Monday, a specially equipped Black Hawk helicopter flew without a single human on board. The computer-piloted aircraft was being tested as part of a DARPA program called Alias, and the tests took place out of Fort Campbell, Kentucky.
The retrofitted whirlybird was controlled by a Sikorsky-made autonomy system. As part of that system, the helicopter has a switch on board that allows the aviators to indicate whether two pilots, one pilot, or zero pilots will be operating the chopper. This was the first time that a Black Hawk was sent into the air with the no-pilots option, so that the computer system was handling all the controls. While these were just test flights, they hint at a future in which the Army could potentially send an autonomous helicopter on a dangerous rescue mission—and have no one on board it at all.
Over the past two weeks, observers of North Korea’s strange and tightly restricted corner of the internet have noticed that the country seems to be dealing with some serious connectivity problems. On several different days, practically all of its websites—the notoriously isolated nation only has a few dozen—intermittently dropped offline en masse, from the booking site for its Air Koryo airline to Naenara, a page that serves as the official portal for dictator Kim Jong-un’s government. At least one of the central routers that allow access to the country’s networks appeared at one point to be paralyzed, crippling the Hermit Kingdom’s digital connections to the outside world.
Some North Korea watchers pointed out that the country had just carried out a series of missile tests, implying that a foreign government’s hackers might have launched a cyberattack against the rogue state to tell it to stop saber-rattling.
But responsibility for North Korea’s ongoing internet outages doesn’t lie with US Cyber Command or any other state-sponsored hacking agency. In fact, it was the work of one American man in a T-shirt, pajama pants, and slippers, sitting in his living room night after night, watching Alien movies and eating spicy corn snacks—and periodically walking over to his home office to check on the progress of the programs he was running to disrupt the internet of an entire country.
Just over a year ago, an independent hacker who goes by the handle P4x was himself hacked by North Korean spies. P4x was just one victim of a hacking campaign that targeted Western security researchers with the apparent aim of stealing their hacking tools and details about software vulnerabilities. He says he managed to prevent those hackers from swiping anything of value from him. But he nonetheless felt deeply unnerved by state-sponsored hackers targeting him personally—and by the lack of any visible response from the US government.
So after a year of letting his resentment simmer, P4x has taken matters into his own hands. “It felt like the right thing to do here. If they don’t see we have teeth, it’s just going to keep coming,” says the hacker. (P4x spoke to WIRED and shared screen recordings to verify his responsibility for the attacks but declined to use his real name for fear of prosecution or retaliation.) “I want them to understand that if you come at us, it means some of your infrastructure is going down for a while.”
AK and I agreed years ago that the newer cars are actually desktop computers with wheels and seats that can be driven around under their own power. The latest vehicle added to the stable, a ’19 Traverse, has more circuitry under the hood than I recall seeing in a small-business PBX phone system.
Today’s cars are dumb where they should be smart, and smart where they should be dumb. Enough already. Make a car that’s pretty much all dumb and watch it sell — because what automakers are giving people is so bad, they’ll pay more to have less of it.
Cars now are like budget smartphones with wheels: loaded with bloatware, unintuitive and slow to operate. Carmakers have always struggled with user interfaces, but until recently the biggest problem we had was “too many knobs.” How I long for those days!
The proliferation of touchscreens and LCDs has made every car feel like a karaoke booth. Animations show reclaimed energy from braking, the speedometer changes color as you approach the limit, the fan speed and direction is under three menus. And besides being non-functional, these interfaces are even ugly! The type, the layouts, and animations scream “designed by committee and approved by someone who doesn’t have to use it.”
Not to mention the privacy and security concerns. I was dubious the first time I saw a GPS in a car, my mom’s old RX300, about 20 years ago. “Yeah… that’s how they get you,” I thought. And now, Teslas with missed payments drive themselves to be impounded. Welcome to the future — your car is a narc now!
The final indignity is that these features are being sold as upscale, not downmarket, options. Screens are so cheap that you can buy a few million and use them everywhere, for everything, and tell buyers “enjoy the next generation of mobility!” But in reality it’s a cost-saving measure that cuts down on part numbers and lets your dashboard team kick the can down the road as often as they want. You know this for sure because high-end models are going back to knobs and dials for that “premium feel.”
So here’s what I would like: a dumb car. This is what I think that looks like.
We shall see.
I was the ‘guinea pig’ for one of the LGS here when we ran their first eForm 4 for a suppressor this past week. Everything online seemed to work okay. The major hang-up has been that Martinsburg, West By God Virginia, had a major snowstorm hit, and the Postal Service has had my print cards stuck somewhere in the Post Office there for the past 4 days.
ATF Problems with Rollout of New eForms Online System
Amazon has updated its Alexa voice assistant after it “challenged” a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.
The suggestion came after the girl asked Alexa for a “challenge to do”.
“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” the smart speaker said.
Amazon said it fixed the error as soon as the company became aware of it.
The girl’s mother, Kristin Livdahl, described the incident on Twitter.
She said: “We were doing some physical challenges, like laying down and rolling over holding a shoe on your foot, from a [physical education] teacher on YouTube earlier. Bad weather outside. She just wanted another one.”
That’s when the Echo speaker suggested partaking in the challenge that it had “found on the web”.
The dangerous activity, known as “the penny challenge”, began circulating on TikTok and other social media websites about a year ago.
Metals conduct electricity and inserting them into live electrical sockets can cause electric shocks, fires and other damage.
The outage at Amazon.com Inc.’s cloud-computing arm left thousands of people in the U.S. without working fridges, Roombas and doorbells, highlighting just how reliant people have become on the company as the Internet of Things proliferates across homes.
The disruption, which began at about 10 a.m. Eastern time Tuesday, upended package deliveries, took down major streaming services, and prevented people from getting into Walt Disney Co.’s parks.
Affected Amazon services included the voice assistant Alexa and Ring smart-doorbell unit. Irate device users tweeted their frustrations to Ring’s official account, with many complaining that they spent time rebooting or reinstalling their apps and devices before finding out on Twitter that there was a general Amazon Web Services outage. Multiple Ring users even said they weren’t able to get into their homes without access to the phone app, which was down.
Warp drive pioneer and former NASA specialist Dr. Harold G. “Sonny” White has reported the successful manifestation of an actual, real-world “Warp Bubble.” And, according to White, this first-of-its-kind breakthrough by his Limitless Space Institute (LSI) team sets a new starting point for those trying to manufacture a full-sized, warp-capable spacecraft.
“To be clear, our finding is not a warp bubble analog, it is a real, albeit humble and tiny, warp bubble,” White told The Debrief, quickly dispensing with the notion that this is anything other than the creation of an actual, real-world warp bubble. “Hence the significance.”
That solution, the warp metric theoretical physicist Miguel Alcubierre proposed in 1994, was lauded for its elegant mathematics, yet simultaneously derided for its use of theoretical materials and massive amounts of energy that appeared virtually impossible to engineer in any practical way.
Over a decade later, this theory underwent a major shift, when Dr. White, a then NASA-employed warp drive specialist and the founder of the highly respected Eagleworks laboratory, reworked Alcubierre’s original metric and put it into canonical form. This change in design dramatically reduced the exotic materials and energy requirements of the original concept, seemingly providing researchers and science fiction fans alike at least a glimmer of hope that a real-world warp drive may one day become a reality. It also resulted in the informal renaming of the original theoretical design, a concept now more commonly referred to as the “Alcubierre/White Warp Drive.”
Since then, The Debrief has covered a number of physicists and engineers taking their own stabs at designing a viable warp drive, including an entire group of international researchers working on a warp drive that requires no exotic matter. However, like Alcubierre and White before them, the warp concepts of these would-be visionaries all still remain theoretical in nature.
ABC News reported earlier that President Joe Biden was holding a signing ceremony Thursday of bills aimed at protecting first responders; we already showed you the clip where Biden takes an uncomfortable interest in a 7-year-old boy next to him and offers to show him around the White House. Maybe some people thought it was cute, but to us, it just came across as creepy … as usual.
Here’s a clip from later on in the ceremony, where Biden gives up on reading the name of the amendment he’s signing.
More SloJoe cluelessness. That rig starts at $110,000.
As a lot of folks know, I’m a car guy. I’ve gotten a chance to drive some pretty incredible vehicles over the years, but I never could have imagined ones like the electric vehicle I took for a spin today.
There’s a simple solution to this. The fewer apps you have on your phone, the less you use the apps you do have, and the less personal information you post online, the less information the goobermint will have access to.
THE TREASURY DEPARTMENT has in recent months expanded its digital surveillance powers, contracts provided to The Intercept reveal, turning to the controversial firm Babel Street, whose critics say it helps federal investigators buy their way around the Fourth Amendment.
Two contracts obtained via a Freedom of Information Act request and shared with The Intercept by Tech Inquiry, a research and advocacy group, show that over the past four months, the Treasury acquired two powerful new data feeds from Babel Street: one for its sanctions enforcement branch, and one for the Internal Revenue Service. Both feeds enable government use of sensitive data collected by private corporations not subject to due process restrictions. Critics were particularly alarmed that the Treasury acquired access to location and other data harvested from smartphone apps; users are often unaware of how widely apps share such information.
For years, we’ve been warning that it was only a matter of time — and now, the inevitable has happened.
Somebody strapped an honest-to-god sniper rifle to the back of a quadrupedal robot dog.
An image shared on Twitter by military robot maker Ghost Robotics shows the terrifying contraption in all its dystopian glory.
“Keeping our [special ops] teams armed with the latest lethality innovation,” the caption reads.
It’s a nightmare come to life, a death machine designed to kill with precision on the battlefield.
“This is sad,” one Twitter user commented. “In what world is this a good idea? I bet police is salivating at the chance to use these.”
There’s a lot we don’t know about the machine, but according to an Instagram post by Sword International, a gun manufacturer, the machine is called the SPUR or Special Purpose Unmanned Rifle.
“The [SPUR] was specifically designed to offer precision fire from unmanned platforms such as the Ghost Robotics Vision-60 quadruped,” reads Sword’s website. “Due to its highly capable sensors the SPUR can operate in a magnitude of conditions, both day and night.”
We don’t know what level of autonomy the robot has or if it was designed to be fully remotely operated. We also don’t know who the machine was developed for.
The four-legged robot is lugging a sniper rifle chambered for the 6.5 millimeter Creedmoor cartridge, a round developed with long-range target shooting in mind.
It’s a troubling new development. Any new robot built with the intent to kill should have us worried.
When it has a gun rack plus a place to attach a gun mount……….
Jetpacks might sound fun, but learning how to control a pair of jet engines strapped to your back is no easy feat. Now a British startup wants to simplify things by developing a jetpack with an autopilot system that makes operating it more like controlling a high-end drone than learning how to fly.
Jetpacks made the leap from sci-fi to the real world as far back as the 1960s, but since then they haven’t found much use outside of gimmicky appearances in movies and halftime shows. In recent years, though, the idea has received renewed interest, and its proponents are keen to show that the technology is no longer just for stuntmen and may even have practical applications.
Flying jetpacks can take a lot of training to master, though. That’s what prompted Hollywood animatronics expert Matt Denton and Royal Navy Commander Antony Quinn to found Maverick Aviation and develop one that takes the complexities of flight control out of the pilot’s hands.
The Maverick Jetpack features four miniature jet turbines attached to an aluminum, titanium and carbon fiber frame, and will travel at up to 30 miles per hour. But the secret ingredient is software that automatically controls the engines to maintain a stable hover, and seamlessly convert the pilot’s instructions into precise movements.
“It’s going to be very much like flying a drone,” says Denton. “We wanted to come up with something that anyone could fly. It’s all computer-controlled and you’ll just be using the joystick.”
One of the key challenges, says Denton, was making the engines responsive enough to allow the rapid tweaks required for flight stabilization. This is relatively simple to achieve on a drone, whose electric motors can be adjusted in the blink of an eye, but jet turbines can take several seconds to ramp up and down between zero and full power.
To get around this, the company added servos to each turbine that let them move independently to quickly alter the direction of thrust—a process known as thrust vectoring. By shifting the alignment of the four engines the flight control software can keep the jetpack perfectly positioned using feedback from inertial measurement units, GPS, altimeters and ground distance sensors. Simple directional instructions from the pilot can also be automatically translated into the required low-level tweaks to the turbines.
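The thrust-vectoring idea comes down to trigonometry: tilting the engines trades a small fraction of vertical lift for a fast lateral correction, without having to change turbine power at all. The function and numbers below are purely illustrative, not Maverick’s flight code.

```python
import math

def vector_thrust(total_thrust_n, desired_lateral_n):
    """Tilt angle (degrees) that makes the combined thrust vector supply
    a lateral correction force, plus the resulting force components.

    Assumes all four turbines tilt together and thrust stays constant."""
    tilt = math.asin(desired_lateral_n / total_thrust_n)
    vertical = total_thrust_n * math.cos(tilt)   # lift that remains
    lateral = total_thrust_n * math.sin(tilt)    # sideways correction
    return math.degrees(tilt), vertical, lateral
```

For example, steering 100 N sideways out of 1000 N of total thrust needs under 6 degrees of tilt and costs only about 0.5% of the lift — a tiny, fast servo motion instead of a seconds-long turbine spool change.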
It’s a clever way to improve the mobility of the system, says Ben Akih-Kumgeh, an associate professor of aerospace engineering at Syracuse University. “It’s not only a smart way of overcoming any lag that you may have, but it also helps with the lifespan of the engine,” he adds. “[In] any mechanical system, the durability depends on how often you change the operating conditions.”
The software is fairly similar to a conventional drone flight controller, says Denton, but they have had to accommodate some additional complexities. Thrust magnitude and thrust direction have to be managed by separate control loops due to their very different reaction times, but they still need to sync up seamlessly to coordinate adjustments. The entire control process is also complicated by the fact that the jetpack has a human strapped to it.
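The two-rate structure Denton describes can be sketched as a toy simulation: a fast loop vectors the servos every tick, while the turbine output lags its throttle command with a first-order response and is only re-commanded by a slower loop. Every gain and time constant here is invented for illustration.

```python
def simulate(ticks=200, dt=0.01):
    """Toy two-rate flight controller.

    Fast loop (every tick): servo thrust vectoring nulls a tilt error.
    Slow loop (every 10 ticks): throttle command adjusts turbine thrust.
    Turbine output follows its command with a first-order lag."""
    tilt_error = 5.0        # degrees of unwanted lean to correct
    thrust = 80.0           # current turbine output (% of max)
    thrust_cmd = 80.0       # throttle command
    TURBINE_TAU = 1.5       # turbines take seconds to respond (invented)
    for t in range(ticks):
        # Fast loop: servos respond almost instantly.
        servo_correction = 0.8 * tilt_error
        tilt_error -= servo_correction * dt * 10
        # Slow loop: re-command throttle only every 10th tick.
        if t % 10 == 0:
            thrust_cmd = 80.0 + 2.0 * tilt_error
        # Turbine output lags its command (first-order response).
        thrust += (thrust_cmd - thrust) * dt / TURBINE_TAU
    return tilt_error, thrust

final_tilt, final_thrust = simulate()
print(abs(final_tilt) < 0.1)  # the fast servo loop has nulled the lean
```

Note how the tilt error is gone long before the turbines have finished moving — the separation of fast vectoring from slow throttle is exactly what keeps the hover stable despite spool-up lag.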
“Once you’ve got a shifting payload, like a person who’s wobbling their arms around and moving their legs, then it does become a much more complex problem,” says Denton.
In the long run, says Denton, the company hopes to add higher-level functions that could allow the jetpack to move automatically between points marked on a map. The hope is that by automating as much of the flight control as possible, users will be able to focus on the task at hand, whether that’s fixing a wind turbine or inspecting a construction site.
Surrendering so much control to a computer might give some pause for thought, but Denton says there will be plenty of redundancy built in. “The idea will be that we’ll have plenty of fallback modes where, if part of the system fails, it’ll fall back to a more manual flight mode,” he said. “The user would have training to basically tackle any of those conditions.”
It might be some time before you can start basic training, though, as the company has yet to fly its turbine-powered jetpack. Currently, flight testing is being conducted on a scaled-down model powered by electric ducted fans, says Denton, though their responsiveness has been deliberately dulled so they behave like turbines. The company is hoping to conduct the first human test flights next summer.
Don’t get your hopes up about commuting to work by jetpack any time soon, though, says Akih-Kumgeh. The huge amount of noise these devices produce makes it unlikely that they would be allowed to operate within city limits. The near-term applications are more likely to be search-and-rescue missions, where time and speed trump efficiency, he says.