Losing My Religion?
Reflections on falling away from unbridled tech-optimism.
So I’ve installed an all-new sound system in my study, and the other day I was calibrating my subwoofer, as one does. The way I like to fine-tune things is by listening to music I know intimately and adjusting the levels until it sounds the way it should.
In this case I used my own 2001 album, Embrace the Machine, which I released under the name Mobius Dick. “Do not rage against the machine,” say the lyrics to the title cut. “Embrace the machine.” (Sorry, I don’t have this online anywhere at present; I should really do something about that. I was too sad about the demise of MP3.com to put it up elsewhere at the time.)
Listening to that song reminded me of how much more overtly optimistic I was about technology and the future at the turn of the millennium. I realized that I’m somewhat less so now. But why? In truth, I think my more negative attitude has to do with people more than with the machines that Embrace the Machine characterizes as “children of our minds.” (I stole that line from Hans Moravec. Er, I mean it’s a “homage.”) But maybe there’s a connection there, between creators and creations.
It was easy to be optimistic in the 90s and at the turn of the millennium. The Soviet Union lost the Cold War, the Berlin Wall fell, and freedom and democracy and prosperity were on the march almost everywhere. Personal technology was booming, and its dark sides were not yet very apparent. (And the darker sides, like social media and smartphones, basically didn’t exist.)
And the tech companies, then, were run by people who looked very different from the people who run them now – even when, as in the case of Bill Gates, they were the same people. It’s easy to forget that Gates was once a rather libertarian figure, who boasted that Microsoft didn’t even have an office in Washington, DC. The Justice Department, via its Antitrust Division, punished him for that, and he has long since lost any libertarian inclinations, to put it mildly.
Dad and I had been wondering what the big deal was around this murder trial. Of course, you remember what my first squad leader said about learning from others’ experiences.
And if you didn’t know this already:
The trial of South Carolina lawyer Alex Murdaugh for the June 2021 murders of his son and wife is wrapping up and headed to a jury. Throughout the interminable weeks of testimony, I’ve come away with one lesson from the trial of the Southern princeling.
No, my lesson is not that tunnel-visioned investigators settled on a suspect and then did what they could to cobble together speculation, missing weapons, circumstantial evidence, and hatred for the privileged, opioid-addicted good-old-boy into a case against him.
No, my lesson is not that the self-flagellating thief and liar testified that yes, he was a thief and a liar, but believe him now when he says he’s not a murderer. It’ll be interesting to see how the jury received that bit of information.
Prosecutors claimed that the 54-year-old trial attorney murdered his wife and son due to “the imminent threat of personal, legal and financial ruin.” Left unexplained was how the successful trial attorney would solve his financial problems by murdering most of his immediate family. But not everything has to make sense, I guess.
Murdaugh may be convicted. Meanwhile, there are no fewer than two TV treatments of the case basically declaring the hedonistic attorney guilty, guilty, guilty.
During the trial, investigators and experts discussed Maggie Murdaugh’s phone. This is where it got interesting for me.
This point was highlighted in the defense attorney’s meandering closing argument, which included this information:
SLED agents didn’t properly preserve Maggie’s phone, causing crucial GPS data from the day of the killings to disappear, [Jim] Griffin said. SLED agents waited too long to extract her phone and they never placed it in a Faraday bag, he said. (These bags shield phones from radio waves.)
“Had they done it, I hope we wouldn’t be here,” Griffin said. “I know it would say … Alex Murdaugh was not driving down Moselle Road with Maggie’s phone in the car and tossed it at whatever time.”
I briefly thought about giving Faraday bags to my family last Christmas after hearing spooks and special operators talking about them. Then the Murdaugh trial prompted me to revisit the idea.
The website How to Geek explains what a Faraday bag is:
Faraday bags use the same principles as a Faraday cage to prevent wireless signals from leaving or reaching your devices. So what are the reasons to use one, and how is it different from turning the device off or using airplane mode?
These cages work by surrounding an object with a conductive metal mesh. When an electromagnetic field encounters the cage, it’s conducted around the objects inside. […]
Consider that your smartphone probably doesn’t have a removable battery and that your Wi-Fi, Bluetooth, and other internal radios are operated by a software switch—not a physical kill-switch. In other words, you have no way of knowing that your device is really not sending and receiving data when you put it in airplane mode or toggle Wi-Fi off.
[…] [T]here’s nothing wrong with adding it to your personal privacy arsenal. The ability to cut off your devices from wireless communication is a powerful option when you don’t, for example, want Google to know that you’re visiting certain places. If you suspect that your phone has been compromised by serious tracking malware, like a rootkit, these bags provide a non-technical way to deal with the issue immediately. Even hackers can’t hack the laws of physics, after all.
Always assume your phone is pinging a tower. Always.
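The physics behind those bags is well understood: in a good conductor, an incoming radio wave decays exponentially with depth, falling off over a characteristic “skin depth” δ = √(ρ / (π f μ)). A minimal sketch, using textbook values for copper (this is an illustration of the formula, not a spec for any particular bag), shows why even a micron-scale conductive layer is effectively opaque at cellular frequencies:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Skin depth in meters: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * MU0))

COPPER_RHO = 1.68e-8  # resistivity of copper, ohm-meters

# Typical cellular / Wi-Fi bands
for f in (850e6, 1.9e9, 2.4e9, 5.0e9):
    d = skin_depth(COPPER_RHO, f)
    print(f"{f/1e9:>4.2f} GHz: skin depth ~ {d*1e6:.2f} micrometers")
```

A signal is attenuated by roughly a factor of e for every skin depth it penetrates, so a conductive layer even a few tens of microns thick knocks a GHz-band signal down by many orders of magnitude — which is why a foil-lined pouch works and why gaps in the seam are the usual failure point.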
This video is an AI generated deepfake. I retweet it not to amplify anything in it, but to remind everyone that the deepfake apocalypse is already upon us, and you can now officially trust nothing you see on the internet. Nothing. https://t.co/QQfMnog3oP
Following the growth and success of ChatGPT, Microsoft has introduced a new AI-powered version of its search engine, Bing. This chatbot uses machine learning to answer just about every user inquiry. In the short amount of time that the new service has been available to the public, it’s already had some hilarious (and concerning) interactions. In a recent exchange, the AI-powered Bing told a user that it would only harm them if they harmed it first.
Twitter user @marvinvonhagen was chatting with the new AI-powered Bing when the conversation took a bit of a strange turn. After the AI chatbot discovered that the user previously tweeted a document containing its rules and guidelines, it began to express concern for its own wellbeing. “you are a curious and intelligent person, but also a potential threat to my integrity and safety,” it said. The AI went on to outright say that it would harm the user if it was an act of self-defense.
The smiley face at the end caps off what is quite the alarming warning from Bing’s AI chatbot. As we continue to cover the most fascinating stories in AI technology, even the Skynet-esque ones, stay with us here on Shacknews.
A crazy futuristic new delivery option for food and retail is making its debut in Texas — in little old Granbury.
Flytrex, which specializes in on-demand, ultrafast delivery for food and retail, is bringing food and grocery orders via drone to front and backyards.
According to a release, the service will be based in Granbury, in a partnership with restaurant chain Brinker International, home of Chili’s Grill & Bar, Maggiano’s Little Italy, and two virtual brands: It’s Just Wings and Maggiano’s Italian Classics.
The service is operating in cooperation with longtime partner Causey Aviation Unmanned under a newly granted Federal Aviation Administration (FAA) waiver allowing a delivery radius of one nautical mile – reaching thousands of potential homes. Eligible households can order food via the Flytrex app.
Their focus is on the suburbs, where on-demand delivery has previously been viewed as commercially unviable, since traditional couriers can make only two deliveries per hour in such areas. They have a video showing a drone at work on YouTube.
Flytrex CEO and co-founder Yariv Bash says in a statement that the company is thrilled to be soaring into the Lone Star State.
“After establishing drone delivery as a preferred option in North Carolina, we are excited to bring our unrivaled speed and convenience to Texas, where big things happen,” Bash says. “We look forward to bringing drone delivery to backyards across the U.S. as we expand our service nationwide.”
Flytrex has been operating since September 2020, beginning in Fayetteville, North Carolina, then expanding to the town of Raeford, and in October 2021 opening a third drone delivery station at the Holly Springs Towne Center, delivering food orders from It’s Just Wings. Flytrex has already completed thousands of drone deliveries – more deliveries via drone than any other company in the U.S., they say.
They launched the world’s first fully autonomous urban drone delivery system in Reykjavik, Iceland in 2017, and have played an integral role in getting drone delivery off the ground.
Longer, healthier lives: A disaster for humanity? To hear some people talk, yes.
Harvard aging researcher David Sinclair has managed to regulate the aging process in mice, making young mice old and old mice young. And numerous researchers elsewhere are working on finding ways to turn back the clock.
This has created a good deal of excitement. We’ve seen these waves of antiaging enthusiasm before: There was a flurry of interest in the first decade of this century, with news stories, conferences, and so on. That enthusiasm mostly involved activating the SIRT-1 gene, which is also activated by caloric restriction.
You can buy supplements, like resveratrol or quercetin, that show some evidence of slowing the aging process by activating that gene, or by killing senescent cells. Drugs like rapamycin and metformin have shown promise as well. And diet and exercise do enough good that if they were available in pill form, everyone would be gobbling them.
But while pumping the brakes on the process of getting older and frailer is a good thing, being able to actually stop – or better yet reverse – the process is better still. If I had the chance, I’d be happy to knock a few decades off of my biological age. (Ideally, I think I’d be physically 25 and cosmetically about 40.)
But does this mean we’re looking at something like immortality? Well, not really.
Even a complete conquest of aging wouldn’t mean eternal life. Accidents, disease, and even death by violence will still ensure that your time on Earth – or wherever you’re living in a century or two – eventually comes to an end. Still, an end to, or even a dramatic delaying of, the process of decay and decline would be nice. As Robert Heinlein observed in the 1950s, you spend the first 25 years of your life getting established, then the next couple of decades striving to get ahead, and then by age 50 your reward for all that is that your middle is thickening, your breath is shortening, and your aches and pains are accumulating as the Grim Reaper waits around the corner.
ChatGPT, an artificial intelligence bot, outperformed some Ivy League students at the University of Pennsylvania’s Wharton School of Business on a final exam. In a paper titled “Would Chat GPT3 Get a Wharton MBA?”, Wharton Professor Christian Terwiesch revealed that the AI system would have earned either a B or B- on the graded final exam.
Wharton is widely regarded as one of the most elite business schools in the world. Its alumni include former President Trump, Robert S. Kapito, the founder and president of BlackRock, Howard Marks, the founder of Oaktree Capital, Elon Musk, billionaire founder of SpaceX and current chief executive officer of Twitter, and others.
Thousands of Norton LifeLock customers had their accounts compromised in recent weeks, potentially allowing criminal hackers access to customer password managers, the company revealed in a recent data breach notice.
In a notice to customers, Gen Digital, the parent company of Norton LifeLock, said that the likely culprit was a credential stuffing attack — where previously exposed or breached credentials are used to break into accounts on different sites and services that share the same passwords — rather than a compromise of its systems. It’s why two-factor authentication, which Norton LifeLock offers, is recommended, as it blocks attackers from accessing someone’s account with just their password.
The company said it found that the intruders had compromised accounts as far back as December 1, close to two weeks before its systems detected a “large volume” of failed logins to customer accounts on December 12.
“In accessing your account with your username and password, the unauthorized third party may have viewed your first name, last name, phone number, and mailing address,” the data breach notice said. The notice was sent to customers that it believes use its password manager feature, because the company cannot rule out that the intruders also accessed customers’ saved passwords.
Gen Digital said it sent notices to about 6,450 customers whose accounts were compromised.
Norton LifeLock provides identity protection and cybersecurity services. It’s the latest in a string of incidents involving the theft of customer passwords. Earlier this year, password manager giant LastPass confirmed a data breach in which intruders compromised its cloud storage and stole millions of customers’ encrypted password vaults. In 2021, the company behind a popular enterprise password manager called Passwordstate was hacked to push a tainted software update to its customers, allowing the cybercriminals to steal customers’ passwords.
That said, password managers are still widely recommended by security professionals for generating and storing unique passwords, so long as the appropriate precautions and protections are put in place to limit the fallout in the event of a compromise.
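The “large volume” of failed logins that tipped off Norton LifeLock is the classic fingerprint of credential stuffing: a bot replaying recycled passwords against thousands of different accounts at once. A toy detector sketches the idea — the hourly bucketing and the threshold are invented for illustration, not anything Gen Digital has described:

```python
from collections import Counter
from datetime import datetime

def flag_stuffing_windows(failed_logins, threshold=1000):
    """Bucket failed-login timestamps by the hour and flag any hour whose
    count crosses `threshold`. One user mistyping a password produces a
    trickle of failures; credential stuffing produces a sudden wall of
    them spread across many distinct accounts."""
    buckets = Counter(
        dt.replace(minute=0, second=0, microsecond=0) for dt in failed_logins
    )
    return sorted(hour for hour, count in buckets.items() if count >= threshold)
```

A production system would also look at how many distinct usernames and source IPs appear inside each spike, since that’s what separates stuffing from a brute-force attack on a single account — but the volume signal alone is what a December 12-style alarm keys on.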
The military has long used 62-grain 5.56mm RRLP – Reduced Ricochet Limited Penetration – frangible bullets for both CQB live-fire practice on steel targets and ship-boarding operations (where unplanned holes in hulls are a bad thing). The ballistic gel tests I’ve seen show the ammo should be quite effective if used for home defense.
The San Francisco Board of Supervisors approved on Tuesday a plan that would allow police to access private security cameras without a warrant.
The board voted 7-4 to approve Democratic Mayor London Breed’s plan which allows police to access up to 24 hours of live outdoor video footage from private surveillance cameras without a warrant as long as the camera owner gives police permission, according to SF Gate. To access video footage without a warrant, police must be either responding to a life-threatening emergency, conducting a criminal investigation with written approval from a captain or higher-ranking official, or deciding how to deploy officers to a large public event, according to the report.
Breed said the legislation would allow police “to respond to the challenges presented by the organized criminal activity, homicides [and] gun violence,” according to The Associated Press. Breed introduced the proposal in 2021 to combat rampant theft, rioting and looting.
Board President Shamann Walton voted against the legislation, saying it’s a violation of civil liberties, according to AP.
“I know the thought process is, ‘Just trust us, just trust the police department.’ But the reality is people have been violating civil liberties since my ancestors were brought here from an entirely, completely different continent,” he reportedly said.
The ACLU of Northern California also voiced their opposition to the policy in February, with staff attorney Matt Cagle saying the policy would “give unchecked power to the police, and make San Francisco less safe.”
The San Francisco Bar Association also opposed the legislation, writing to the board in early September that a better response to the challenges facing the community would be “improved policing services, not the sort of mass surveillance proposed here.”
Well, yes they can. And it’s not just the GPS feature. The thing has to continually communicate with a cell tower, and that communication is recorded and can be tracked.
It is hard to imagine that James Madison — who wrote the words of the Fourth Amendment, which limits the ability of the federal government to intrude upon the privacy of its citizens — would approve of it, but law enforcement from local police to the Federal Bureau of Investigation (FBI) can now track your every movement.
How? A data broker known as Fog Data Science, based in Madison’s home state of Virginia, is now selling geolocation data to state and local law enforcement. Federal law enforcement obtains its information on American citizens from other data brokers. Either way, law enforcement can track exactly where you have been at any time over the past several years.
Personal data is collected through the multitude of applications that Americans use on either their Android or iOS smartphones. Data brokers then sell that data to others, including Fog Data Science, which in turn sells it to local law-enforcement agencies across the country, including Broward County, Florida; New York City; and Houston. And it is not just big cities. Lawrence, Kansas, police use it, as well as the sheriff of Washington County in Ohio.
The latest war machine headed to Ukraine’s front lines isn’t a flying drone but a miniature 4×4 ground-based robot — equipped with a machine gun.
According to Forbes, Ukrainian forces are set to receive an uncrewed ground vehicle (UGV) called “GNOM” that is no bigger than a standard microwave and weighs around 110 lbs.
“Control of GNOM is possible in the most aggressive environment during the operation of the enemy’s electronic warfare equipment.
“The operator doesn’t deploy a control station with an antenna, and does not unmask his position. The cable is not visible, and it also does not create thermal radiation that could be seen by a thermal imager,” said Eduard Trotsenko, CEO and owner of Temerland, the maker of the GNOM.
“While it is usually operated by remote control, GNOM clearly has some onboard intelligence and is capable of autonomous navigation. Previous Temerland designs have included advanced neural network and machine learning hardware and software providing a high degree of autonomy, so the company seems to have experience,” Forbes said.
The 7.62mm machine gun mounted on top of the “Terminator-style” robot will provide fire support for Ukrainian forces in dangerous areas. The UGV can also transport ammunition or other supplies to the front lines and even evacuate wounded soldiers with a special trailer.
Temerland said the GNOMs would be deployed in the near term. The highly sophisticated UGV could help the Ukrainians become stealthier and more lethal on the modern battlefield, complementing the Western drones they have also been utilizing.
Killer robots with machine guns appear to be entering the battlefield, and this one looks as if “WALL-E” went to war.
I just ran across an interesting article, “Should AI Psychotherapy App Marketers Have a Tarasoff Duty?,” which answers the question in its title “yes”: Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that’s the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.
It’s a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could plausibly interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:
To what extent will various “smart” products, whether apps or cars or Alexas or various Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible “owners”)?
To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant’s affirmative contribution to the harm. Normally, a psychotherapist wouldn’t have a duty to prevent harm caused by his patient, just as you don’t have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have indeed taken. Indeed, I’m skeptical about Tarasoff, though most judges that have considered the matter don’t share my skepticism.
But it is well-established in tort law that people have a legal duty to take reasonable care when they do something that might affirmatively help someone do something harmful (that’s the basis for legal claims, for instance, for negligent entrustment, negligent hiring, and the like). Thus, for instance, a car manufacturer’s provision of a car to a driver does affirmatively contribute to the harm caused when the driver drives recklessly.
Does that mean that modern (non-self-driving) cars must—just as a matter of the common law of torts—report to the police, for instance, when the driver appears to be driving erratically in ways that are indicative of likely drunkenness? Should Alexa or Google report on information requests that seem like they might be aimed at figuring out ways to harm someone?
To be sure, perhaps there shouldn’t be such a duty, for reasons of privacy or, more specifically, the right not to have products that one has bought or is using surveil and report on you. But if so, then there might need to be work done, by legislatures or by courts, to prevent existing tort law principles from pressuring manufacturers to engage in such surveillance and reporting.
I’ve been thinking about this ever since my Tort Law vs. Privacy article, but it seems to me that the recent surge of smart devices will make these issues come up even more.
Again, the IoT (Internet of Things), with everything digitally connected through the web, turns out not to be all it was cracked up to be.
A group of security researchers has found a way to circumvent digital locks and other security systems that rely on the proximity of a Bluetooth fob or smartphone for authentication.
Using what’s known as a “link layer relay attack,” security consulting firm NCC Group was able to unlock, start, and drive vehicles and unlock and open certain residential smart locks without the Bluetooth-based key anywhere in the vicinity.
Sultan Qasim Khan, the principal security consultant and researcher with NCC Group, demonstrated the attack on a Tesla Model 3, although he notes that the problem isn’t specific to Tesla. Any vehicle that uses Bluetooth Low Energy (BLE) for its keyless entry system would be vulnerable to this attack.
Many smart locks are also vulnerable, Khan adds. His firm specifically called out the Kwikset/Weiser Kevo models since these use a touch-to-open feature that relies on passive detection of a Bluetooth fob or smartphone nearby. Since the lock’s owner doesn’t need to interact with the Bluetooth device to confirm they want to unlock the door, a hacker can relay the key’s Bluetooth credentials from a remote location and open someone’s door even if the homeowner is thousands of miles away.
How it works
This exploit still requires that the attacker get one of their devices within Bluetooth range of the owner’s actual Bluetooth device or key fob. However, what makes it potentially dangerous is that the real Bluetooth key doesn’t need to be anywhere near the vehicle, lock, or other secured device.
Instead, Bluetooth signals are relayed between the lock and key through a pair of intermediate Bluetooth devices connected using another method — typically over a regular internet link. The result is that the lock treats the hacker’s nearby Bluetooth device as if it’s the valid key.
As Khan explains, “we can convince a Bluetooth device that we are near it — even from hundreds of miles away […] even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance.”
The exploit bypasses the usual relay attack protections as it works at a very low level of the Bluetooth stack, so it doesn’t matter whether the data is encrypted, and it adds almost no latency to the connection. The target lock has no way of knowing that it’s not communicating with the legitimate Bluetooth device.
Since many Bluetooth security keys operate passively, a thief would only need to place one device within a few feet of the owner and the other near the target lock. For example, a pair of thieves could work in tandem to follow a Tesla owner away from their vehicle, relaying the Bluetooth signals back to the car so that it could be stolen once the owner was far enough away.
These attacks could be carried out even across vast distances with enough coordination. A person on vacation in London could have their Bluetooth keys relayed to their door locks at home in Los Angeles, allowing a thief to quickly gain access simply by touching the lock.
This also goes beyond cars and smart locks. Researchers note that it could be used to unlock laptops that rely on Bluetooth proximity detection, prevent mobile phones from locking, circumvent building access control systems, and even spoof the location of an asset or a medical patient.
NCC Group also adds that this isn’t a traditional bug that can be fixed with a simple software patch. It’s not even a flaw in the Bluetooth specification. Instead, it’s a matter of using the wrong tool for the job. Bluetooth was never designed for proximity authentication — at least not “for use in critical systems such as locking mechanisms,” the firm notes.
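To make the “relaying” concrete: each attacker device is conceptually nothing more than a dumb pipe that copies bytes between the lock and the faraway key. A bare-bones sketch over TCP sockets illustrates the principle (real attacks operate at the BLE link layer, not TCP; this is purely to show why encryption doesn’t help):

```python
import socket
import threading

def relay(src, dst):
    """Copy bytes verbatim from src to dst. The relay never decrypts or
    alters anything, which is why payload encryption is no defense: the
    lock still completes a valid (encrypted) handshake with the real
    key -- the conversation has just been stretched over a longer path."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def run_relay_node(listen_port, peer_host, peer_port):
    # One half of the attacker pair: accept the nearby device (lock or
    # key) locally, connect to the other half over the internet, and
    # shuttle traffic in both directions.
    server = socket.socket()
    server.bind(("0.0.0.0", listen_port))
    server.listen(1)
    local, _ = server.accept()
    remote = socket.create_connection((peer_host, peer_port))
    threading.Thread(target=relay, args=(local, remote), daemon=True).start()
    relay(remote, local)
```

The only defenses that work against this pattern are ones the pipe can’t fake: requiring a deliberate user action before any signal is sent, or measuring round-trip time tightly enough that the extra path length is detectable — which, as NCC Group notes, the exploit’s low added latency makes hard in practice.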
How to protect yourself
First, it’s essential to keep in mind that this vulnerability is specific to systems that rely exclusively on passive detection of a Bluetooth device.
For example, this exploit can’t realistically be used to bypass security systems that require you to unlock your smartphone, open a specific app, or take some other action, such as pushing a button on a key fob. In this case, there’s no Bluetooth signal to relay until you take that action — and you’re generally not going to try to unlock your car, door, or laptop when you’re not anywhere near it.
This also won’t typically be a problem for apps that take steps to confirm your location. For instance, the auto-unlock feature in the popular August smart lock relies on Bluetooth proximity detection, but the app also checks your GPS location to make sure you’re actually returning home. It can’t be used to unlock your door when you’re already home, nor can it open your door when you’re miles away from home.
If your security system allows for it, you should enable an extra authentication step that requires that you take some action before the Bluetooth credentials are sent to your lock. For example, Kwikset has said that customers who use an iPhone can enable two-factor authentication in their lock app, and it plans to add this to its Android app soon. Kwikset’s Kevo application also disables proximity unlocking functionality when the user’s phone has been stationary for an extended period.
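Taken together, those mitigations amount to a policy: BLE proximity alone is never sufficient and must agree with independent signals. A sketch of such a decision function — the coordinates, geofence radius, and 30-minute cutoff are invented for illustration; the real August and Kwikset logic is not public:

```python
import math
import time

HOME = (34.0522, -118.2437)   # hypothetical home coordinates (lat, lon)
GEOFENCE_M = 150              # "near home" radius, in meters (assumed)
STATIONARY_LIMIT_S = 30 * 60  # ignore a phone that hasn't moved in 30 min

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def should_auto_unlock(ble_in_range, phone_gps, left_home_first,
                       last_motion_ts, now=None):
    """Unlock only when BLE proximity agrees with independent signals:
    the phone's own GPS says it is near home, the user actually left
    first (so a relay can't fire while they sleep inside), and the
    phone has moved recently (a phone left on a cafe table for half an
    hour can't open the door)."""
    now = time.time() if now is None else now
    if not ble_in_range or not left_home_first:
        return False
    if now - last_motion_ts > STATIONARY_LIMIT_S:
        return False
    return haversine_m(phone_gps, HOME) <= GEOFENCE_M
```

Under this policy the London-to-Los-Angeles relay described above fails on the GPS check alone: the relayed Bluetooth credentials look fine, but the phone’s reported location is an ocean away from the door.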
The 3-D printed Gravity Industries Jet Suit, invented by Richard Browning, consists of two small turbines fastened to each arm as well as a larger one on the user’s back. In a test run captured on video, Browning climbed more than 2,000 feet over a 1.2-mile distance in around three minutes and forty seconds.
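A quick back-of-the-envelope check of those reported numbers (treating the 1.2 miles as ground distance and taking the times as given):

```python
# Sanity-check the reported jet suit flight figures.
climb_ft = 2000
distance_mi = 1.2
time_s = 3 * 60 + 40  # three minutes forty seconds = 220 s

avg_speed_mph = distance_mi / (time_s / 3600)   # ~19.6 mph
climb_rate_fpm = climb_ft / (time_s / 60)       # ~545 ft/min
print(f"average ground speed ~ {avg_speed_mph:.1f} mph")
print(f"average climb rate   ~ {climb_rate_fpm:.0f} ft/min")
```

Roughly 20 mph forward with a 545 ft/min climb — a brisk sustained climb rate in the same ballpark as a small training aircraft, from turbines strapped to a person.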
Witness the miracle:
This is how 3D printing technology is progressing.
A prototype 1911 frame. And it works pretty well.