San Francisco’s Newly Passed Surveillance Plan Allows Police to Access Private Cameras Without a Warrant

The San Francisco Board of Supervisors approved on Tuesday a plan that would allow police to access private security cameras without a warrant.

The board voted 7-4 to approve Democratic Mayor London Breed’s plan, which allows police to access up to 24 hours of live outdoor video footage from private surveillance cameras without a warrant as long as the camera owner gives police permission, according to SF Gate. To access video footage without a warrant, police must be either responding to a life-threatening emergency, conducting a criminal investigation with written approval from a captain or higher-ranking official, or deciding how to deploy officers to a large public event, according to the report.

Breed said the legislation would allow police “to respond to the challenges presented by the organized criminal activity, homicides [and] gun violence,” according to The Associated Press. Breed introduced the proposal in 2021 to combat rampant theft, rioting and looting.

Board President Shamann Walton voted against the legislation, saying it’s a violation of civil liberties, according to AP.

“I know the thought process is, ‘Just trust us, just trust the police department.’ But the reality is people have been violating civil liberties since my ancestors were brought here from an entirely, completely different continent,” he reportedly said.

The ACLU of Northern California also voiced its opposition to the policy in February, with staff attorney Matt Cagle saying the policy would “give unchecked power to the police, and make San Francisco less safe.”

Well, yes they can. And it’s not just by the GPS feature. The phone has to continually communicate with a cell tower, and that contact is recorded and can be tracked.

Federal, State, and Local Law Enforcement Can Track You on Your Phone

It is hard to imagine that James Madison — who wrote the words of the Fourth Amendment, which limits the ability of the federal government to intrude upon the privacy of its citizens — would approve of it, but law enforcement from local police to the Federal Bureau of Investigation (FBI) can now track your every movement.

How? A data broker known as Fog Data Science, based in Madison’s home state of Virginia, is now selling geolocation data to state and local law enforcement. Federal law enforcement obtains its information on American citizens from other data brokers. Either way, law enforcement can track exactly where you have been at any time over the past several years.

Personal data is collected through the multitude of applications that Americans use on either their Android or iOS smartphones. Data brokers then sell that data to others, including Fog Data Science, which in turn sells it to local law-enforcement agencies across the country, including Broward County, Florida; New York City; and Houston. And it is not just big cities. Lawrence, Kansas, police use it, as well as the sheriff of Washington County in Ohio.

Continue reading “”

I give this less than a month before it’s either quietly deactivated, or the ATF section involved simply disregards the thousands of daily reports that become an unmanageable heap of meaningless drivel.

ATF launches anonymous gun crime reporting app

On Friday, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) announced a new app that will allow users to make anonymous tips about crimes involving firearms, explosives, arson and more.

The ATF tweeted that the bureau is working with Report It, a mobile app that uses “AI inspired technology” to help “prevent incidents before they occur,” according to the company’s website. The ATF app gives users a simple way to “anonymously and confidentially submit tips about crimes.”

“ATF partners with Report It® to provide a simple to use mobile app allowing users to anonymously and confidentially submit tips about crimes happening in communities involving firearms, explosives, arson, and violent crime. For more, go to https://atf.gov/atf-tips. #ATF50 #ATFtips,” the agency tweeted.

The ATF says on its website that the app is designed to “protect our communities” through a public-private partnership. “We look to you who live in these communities we protect to provide us with information about gun violence,” the ATF website states.

A look inside the ATF’s anonymous tip line app. (Screenshot)

“To make our communities safer, ATF is launching a new way to collect your tips involving firearms or to provide leads to help us prevent crimes from happening,” it continues. “Using your phone, tablet or computer, you will be able to tell us instantly and anonymously about crimes that may be happening in your communities that involve firearms, explosives, violent crime, or arson.”

Continue reading “”

They’ve made several movies on this theme, and none of them were good for humans.


Ukraine Unveils Mini “Terminator” Ground Robot Equipped With Machine Gun.

The latest war machine headed to Ukraine’s front lines isn’t a flying drone but a miniature 4×4 ground-based robot — equipped with a machine gun.

According to Forbes, Ukrainian forces are set to receive an uncrewed ground vehicle (UGV) called “GNOM” that is no bigger than a standard microwave and weighs around 110 lbs.

“Control of GNOM is possible in the most aggressive environment during the operation of the enemy’s electronic warfare equipment.

“The operator doesn’t deploy a control station with an antenna, and does not unmask his position. The cable is not visible, and it also does not create thermal radiation that could be seen by a thermal imager,” said Eduard Trotsenko, CEO and owner of Temerland, the maker of the GNOM.

“While it is usually operated by remote control, GNOM clearly has some onboard intelligence and is capable of autonomous navigation. Previous Temerland designs have included advanced neural network and machine learning hardware and software providing a high degree of autonomy, so the company seems to have experience,” Forbes said.

The 7.62mm machine gun mounted on top of the “Terminator-style” robot will provide fire support for Ukrainian forces in dangerous areas. The UGV can also transport ammunition or other supplies to the front lines and even evacuate wounded soldiers with a special trailer.

Temerland said the GNOMs would be deployed in the near term. The highly sophisticated UGV could help the Ukrainians become stealthier and more lethal on the modern battlefield, as they have also been utilizing Western drones.

Killer robots with machine guns appear to be entering the battlefield, and this one seems as if “WALL-E” went to war.

Will Your “Smart” Devices and AI Apps Have a Legal Duty to Report on You?

I just ran across an interesting article, “Should AI Psychotherapy App Marketers Have a Tarasoff Duty?,” which answers the question in its title “yes”: Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that’s the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.

It’s a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could plausibly interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:

To what extent will various “smart” products, whether apps or cars or Alexas or various Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible “owners”)?

To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant’s affirmative contribution to the harm. Normally, a psychotherapist wouldn’t have a duty to prevent harm caused by his patient, just as you don’t have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have indeed taken. Indeed, I’m skeptical about Tarasoff, though most judges that have considered the matter don’t share my skepticism.

But it is well-established in tort law that people have a legal duty to take reasonable care when they do something that might affirmatively help someone do something harmful (that’s the basis for legal claims, for instance, for negligent entrustment, negligent hiring, and the like). Thus, for instance, a car manufacturer’s provision of a car to a driver does affirmatively contribute to the harm caused when the driver drives recklessly.

Does that mean that modern (non-self-driving) cars must—just as a matter of the common law of torts—report to the police, for instance, when the driver appears to be driving erratically in ways that are indicative of likely drunkenness? Should Alexa or Google report on information requests that seem like they might be aimed at figuring out ways to harm someone?

To be sure, perhaps there shouldn’t be such a duty, for reasons of privacy or, more specifically, the right not to have products that one has bought or is using surveil and report on you. But if so, then there might need to be work done, by legislatures or by courts, to prevent existing tort law principles from pressuring manufacturers to engage in such surveillance and reporting.

I’ve been thinking about this ever since my Tort Law vs. Privacy article, but it seems to me that the recent surge of smart devices will make these issues come up even more.

Google Engineer On Leave After He Claims AI Program Has Gone Sentient.

A Google engineer is speaking out after the company placed him on administrative leave for telling his bosses that an artificial intelligence program he was working with is now sentient.

Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, which he calls part of a “hive mind.” He was supposed to test whether his conversation partner used discriminatory language or hate speech.

As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.

What could go wrong?

Observation O’ The Day

Tyrant tool:
This tool is worse than useless. It will create opportunities for more murders. That is, unless you are a tyrant intent on disarming your subjects.

First off, the mass shooter will start shooting before they pass through the detector, taking out the guards before they even have a clue a threat is present. And, since there is a “funnel” for people going through the detector, there will be a group of people ready for “harvesting” by the perp.

It also will make it difficult or impossible for people to defend themselves where these systems are deployed.

[1) If the system actually works as advertised: as we see in the article, there have been several cases of ‘false positives’, and those are ripe pickings for lawyers and a false arrest/false detention/defamation of character lawsuit by a private citizen. 2) If the cost of the system doesn’t make it more expensive than a retail store can afford, taking into account that most retail, especially grocery stores, actually operates on a razor-thin profit margin; Miles]

Hence, if your threat model is a mass shooter, the device will actually make things worse rather than better. Many other threat models suffer similar degradation of public security.

The threat model that doesn’t degrade is the one where you want your subjects to be more dependent on you for security and to make it difficult for them to threaten your position of power. In that case this system will be a useful asset to disarm your subjects….Joe Huffman


AI may be searching you for guns the next time you go out in public

When Peter George saw news of the racially motivated mass-shooting at the Tops supermarket in Buffalo last weekend, he had a thought he’s often had after such tragedies.

“Could our system have stopped it?” he said. “I don’t know. But I think we could democratize security so that someone planning on hurting people can’t easily go into an unsuspecting place.”

George is chief executive of Evolv Technology, maker of an AI-based system meant to flag weapons, “democratizing security” so that weapons can be kept out of public places without elaborate checkpoints. As U.S. gun violence like the kind seen in Buffalo increases — firearms sales reached record heights in 2020 and 2021 while the Gun Violence Archive reports 198 mass shootings since January — Evolv has become increasingly popular, used at schools, stadiums, stores and other gathering spots.

“The idea of a kinder, gentler metal detector is a nice solution in theory to these terrible shootings,” said Jay Stanley, senior policy analyst for the American Civil Liberties Union’s project on speech, privacy, and technology. “But do we really want to create more ways for security to invade our privacy? Do we want to turn every shopping mall or Little League game into an airport?”

Evolv machines use “active sensing” — a light-emission technique that also underpins radar and lidar — to create images. Then it applies AI to examine them. Data scientists at the Waltham, Mass., company have created “signatures” (basically, visual blueprints) and trained the AI to compare them to the scanner images.

Continue reading “”

Again, this IoT (Internet of Things), with everything digitally connected through the web, turns out not to be all it was cracked up to be.


Bluetooth hack compromises Teslas, digital locks, and more

A group of security researchers has found a way to circumvent digital locks and other security systems that rely on the proximity of a Bluetooth fob or smartphone for authentication.

Sultan Qasim Khan, the principal security consultant and researcher with NCC Group, demonstrated the attack on a Tesla Model 3, although he notes that the problem isn’t specific to Tesla. Any vehicle that uses Bluetooth Low Energy (BLE) for its keyless entry system would be vulnerable to this attack.

Many smart locks are also vulnerable, Khan adds. His firm specifically called out the Kwikset/Weiser Kevo models since these use a touch-to-open feature that relies on passive detection of a Bluetooth fob or smartphone nearby. Since the lock’s owner doesn’t need to interact with the Bluetooth device to confirm they want to unlock the door, a hacker can relay the key’s Bluetooth credentials from a remote location and open someone’s door even if the homeowner is thousands of miles away.

How it works

This exploit still requires that the attacker have access to the owner’s actual Bluetooth device or key fob. However, what makes it potentially dangerous is that the real Bluetooth key doesn’t need to be anywhere near the vehicle, lock, or other secured devices.

Instead, Bluetooth signals are relayed between the lock and key through a pair of intermediate Bluetooth devices connected using another method — typically over a regular internet link. The result is that the lock treats the hacker’s nearby Bluetooth device as if it’s the valid key.

As Khan explains, “we can convince a Bluetooth device that we are near it — even from hundreds of miles away […] even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance.”

The exploit bypasses the usual relay attack protections as it works at a very low level of the Bluetooth stack, so it doesn’t matter whether the data is encrypted, and it adds almost no latency to the connection. The target lock has no way of knowing that it’s not communicating with the legitimate Bluetooth device.
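The core idea can be illustrated with a toy simulation (hypothetical class names; a real BLE link-layer relay is far more involved than this sketch): the lock issues a random challenge, the fob answers it cryptographically, and the attacker’s two nodes simply forward the bytes over the internet. Because the relayed bytes are untouched, the cryptography still verifies and the lock opens:

```python
import hmac
import hashlib
import os

SHARED_KEY = os.urandom(16)  # secret shared by lock and key fob

class KeyFob:
    """Legitimate fob: answers any challenge it hears over BLE."""
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

class Lock:
    """Unlocks if the response to its random challenge verifies."""
    def try_unlock(self, transport) -> bool:
        challenge = os.urandom(16)
        response = transport(challenge)
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

fob, lock = KeyFob(), Lock()

# Normal use: the fob is within BLE range of the lock.
assert lock.try_unlock(fob.respond)

# Relay attack: two attacker devices forward the raw bytes over the
# internet. The path is longer, but the bytes are unmodified, so the
# response still verifies and the lock opens.
def internet_relay(challenge: bytes) -> bytes:
    forwarded = bytes(challenge)        # attacker node A, near the lock
    response = fob.respond(forwarded)   # attacker node B, near the owner
    return bytes(response)              # relayed back to the lock

assert lock.try_unlock(internet_relay)  # unlocks despite the distance
```

Note that the relay never decrypts or forges anything, which is why encryption alone cannot defeat it.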

Since many Bluetooth security keys operate passively, a thief would only need to place one device within a few feet of the owner and the other near the target lock. For example, a pair of thieves could work in tandem to follow a Tesla owner away from their vehicle, relaying the Bluetooth signals back to the car so that it could be stolen once the owner was far enough away.

These attacks could be carried out even across vast distances with enough coordination. A person on vacation in London could have their Bluetooth keys relayed to their door locks at home in Los Angeles, allowing a thief to quickly gain access simply by touching the lock.

This also goes beyond cars and smart locks. Researchers note that it could be used to unlock laptops that rely on Bluetooth proximity detection, prevent mobile phones from locking, circumvent building access control systems, and even spoof the location of an asset or a medical patient.

NCC Group also adds this isn’t a traditional bug that can be fixed with a simple software patch. It’s not even a flaw in the Bluetooth specification. Instead, it’s a matter of using the wrong tool for the job. Bluetooth was never designed for proximity authentication — at least not “for use in critical systems such as locking mechanisms,” the firm notes.

How to protect yourself

First, it’s essential to keep in mind that this vulnerability is specific to systems that rely exclusively on passive detection of a Bluetooth device.

For example, this exploit can’t realistically be used to bypass security systems that require you to unlock your smartphone, open a specific app, or take some other action, such as pushing a button on a key fob. In this case, there’s no Bluetooth signal to relay until you take that action — and you’re generally not going to try and unlock your car, door, or laptop when you’re not anywhere near it.

This also won’t typically be a problem for apps that take steps to confirm your location. For instance, the auto-unlock feature in the popular August smart lock relies on Bluetooth proximity detection, but the app also checks your GPS location to make sure you’re actually returning home. It can’t be used to unlock your door when you’re already home, nor can it open your door when you’re miles away from home.

If your security system allows for it, you should enable an extra authentication step that requires that you take some action before the Bluetooth credentials are sent to your lock. For example, Kwikset has said that customers who use an iPhone can enable two-factor authentication in their lock app, and it plans to add this to its Android app soon. Kwikset’s Kevo application also disables proximity unlocking functionality when the user’s phone has been stationary for an extended period.
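A policy of this kind can be sketched as follows (an illustrative sketch with invented names and thresholds, not Kwikset’s actual code): proximity credentials are released only if the phone has moved recently, meaning the owner is plausibly walking up to the door, or the user explicitly confirms the unlock.

```python
import time

STATIONARY_LIMIT_S = 10 * 60  # invented threshold: 10 minutes

class ProximityUnlockPolicy:
    def __init__(self):
        self.last_motion = time.monotonic()

    def on_motion(self):
        # Called by the phone's motion sensors whenever the phone moves.
        self.last_motion = time.monotonic()

    def may_send_credentials(self, user_confirmed: bool) -> bool:
        """Release BLE credentials only if the phone moved recently
        or the user explicitly confirmed (two-factor style)."""
        idle = time.monotonic() - self.last_motion
        return user_confirmed or idle <= STATIONARY_LIMIT_S

policy = ProximityUnlockPolicy()
assert policy.may_send_credentials(user_confirmed=False)  # phone just moved

policy.last_motion -= STATIONARY_LIMIT_S + 1              # simulate a long-idle phone
assert not policy.may_send_credentials(user_confirmed=False)  # relay gets no signal
assert policy.may_send_credentials(user_confirmed=True)       # explicit action still works
```

The point of the stationary timeout is that a phone sitting on a nightstand stops broadcasting unlockable credentials, so a relay positioned near the sleeping owner has nothing to forward.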

Note that unlocking solutions that use a mix of Bluetooth and other protocols are not vulnerable to this attack. A typical example of this is Apple’s feature that lets folks unlock their Mac with their Apple Watch. Although this does use Bluetooth to detect the Apple Watch nearby initially, it measures the actual proximity over Wi-Fi — a mitigation that Apple’s executives specifically said was added to prevent Bluetooth relay attacks.
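Rough arithmetic shows why a latency-sensitive second channel helps (illustrative numbers, not Apple’s actual thresholds): radio travels about 30 cm per nanosecond, so a genuinely nearby device answers within nanoseconds, while any internet relay adds at least a millisecond or so each way, which a time-of-flight check converts into an absurd apparent distance.

```python
C = 299_792_458.0  # speed of light, m/s

def rtt_to_distance_m(rtt_seconds: float) -> float:
    # Signal travels to the device and back, so halve the round trip.
    return rtt_seconds * C / 2

# A watch genuinely 3 m away: round trip of roughly 20 nanoseconds.
local_rtt = 2 * 3 / C
assert rtt_to_distance_m(local_rtt) < 10  # passes a 10 m proximity bound

# A relay over the internet adds at least ~1 ms in each direction,
# so the computed "distance" balloons to hundreds of kilometers.
relayed_rtt = local_rtt + 2e-3
assert rtt_to_distance_m(relayed_rtt) > 100_000  # rejected as impossibly far
```

BLE connection timing is far too coarse for this kind of bounding, which is why a faster channel such as Wi-Fi ranging has to do the distance measurement.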

NCC Group has published a technical advisory about the Bluetooth Low Energy vulnerability and separate bulletins about how it affects Tesla vehicles and Kwikset/Weiser locks.

They introduced this about a year ago and look to have kept improving it.


Move Over, Iron Man — Real Jet-Suited Heroes May Soon Respond to Our Emergencies.

Compliments of inventor Richard Browning, the 3-D printed Gravity Industries Jet Suit consists of two small turbines fastened to each arm as well as a larger one on the user’s back. In a test run captured on video, the developer climbed more than 2,000 feet over a 1.2-mile distance in around three minutes and forty seconds.

Witness the miracle:

“You can’t stop the signal…”


ATF Tries To Ban Forced Reset Triggers As People Begin To 3D Print At Home

The Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) has been coming after the Orlando-based Rare Breed Triggers (RBT), the manufacturer of a drop-in AR-15 forced reset trigger since last summer.

Everyone in the gun-rights world has been following the ATF’s attack on RBT. The Feds say RBT’s FRT-15 (forced reset trigger for an AR-15 platform) is classified as a “machine gun,” but the company claims otherwise.

Less than two weeks ago, Gun Owners of America, one of the most prominent pro-gun organizations, published an alleged leaked internal ATF email documenting plans to start seizing lawfully-owned FRT-15s from manufacturers and resellers. RBT’s president Lawrence Demonico responded to the leaked memo and said while he couldn’t confirm it, “I can tell you we’ve received word from one dealer in Illinois late yesterday afternoon stating that the ATF visited him and handed him a cease and desist order and seized FRT-15 triggers.”

The ATF under the Biden administration is getting bolder and could rule by executive fiat on new guidelines for gun braces, serialized uppers, and 80% lowers as early as spring. The Feds are also pursuing the ban of forced reset triggers. Many in the gun community have a bad feeling about an overreaching ATF ahead of midterms as President Biden must appease his anti-gun base.

That aside, we enter the world of 3D printing and how the gun community has embraced this technology over recent years to stay one step ahead of the ATF. This brings us to a YouTuber named “Hoffman Tactical,” who released a video days ago explaining how he 3D-printed a forced reset trigger.

Continue reading “”

They made movie(s) about this.


A Black Hawk helicopter flew for the first time without pilots.


February has already been a big month for autonomous flight. For the first time, this past Saturday, and then again on Monday, a specially equipped Black Hawk helicopter flew without a single human on board. The computer-piloted aircraft was being tested as part of a DARPA program called Alias, and the tests took place out of Fort Campbell, Kentucky.

The retrofitted whirlybird was controlled by a Sikorsky-made autonomy system. As part of that system, the helicopter has a switch on board that allows the aviators to indicate whether two pilots, one pilot, or zero pilots will be operating the chopper. This was the first time that a Black Hawk was sent into the air with the no-pilots option, so that the computer system was handling all the controls. While these were just test flights, they hint at a future in which the Army could potentially send an autonomous helicopter on a dangerous rescue mission—and have no one on board it at all.

Continue reading “”

North Korea Hacked Him. So He Took Down Its Internet

For the past two weeks, observers of North Korea’s strange and tightly restricted corner of the internet began to notice that the country seemed to be dealing with some serious connectivity problems. On several different days, practically all of its websites—the notoriously isolated nation only has a few dozen—intermittently dropped offline en masse, from the booking site for its Air Koryo airline to Naenara, a page that serves as the official portal for dictator Kim Jong-un’s government. At least one of the central routers that allow access to the country’s networks appeared at one point to be paralyzed, crippling the Hermit Kingdom’s digital connections to the outside world.

Some North Korea watchers pointed out that the country had just carried out a series of missile tests, implying that a foreign government’s hackers might have launched a cyberattack against the rogue state to tell it to stop saber-rattling.

But responsibility for North Korea’s ongoing internet outages doesn’t lie with US Cyber Command or any other state-sponsored hacking agency. In fact, it was the work of one American man in a T-shirt, pajama pants, and slippers, sitting in his living room night after night, watching Alien movies and eating spicy corn snacks—and periodically walking over to his home office to check on the progress of the programs he was running to disrupt the internet of an entire country.

Just over a year ago, an independent hacker who goes by the handle P4x was himself hacked by North Korean spies. P4x was just one victim of a hacking campaign that targeted Western security researchers with the apparent aim of stealing their hacking tools and details about software vulnerabilities. He says he managed to prevent those hackers from swiping anything of value from him. But he nonetheless felt deeply unnerved by state-sponsored hackers targeting him personally—and by the lack of any visible response from the US government.

So after a year of letting his resentment simmer, P4x has taken matters into his own hands. “It felt like the right thing to do here. If they don’t see we have teeth, it’s just going to keep coming,” says the hacker. (P4x spoke to WIRED and shared screen recordings to verify his responsibility for the attacks but declined to use his real name for fear of prosecution or retaliation.) “I want them to understand that if you come at us, it means some of your infrastructure is going down for a while.”

Continue reading “”

AK and I agreed years ago that the newer cars were actually desktop computers that had wheels and seats, and could be driven around under their own power. The latest vehicle added to the stable, a ’19 Traverse, has more circuitry under the hood than I recall seeing in a small-business PBX phone system.


Please Make A Dumb Car.

Today’s cars are dumb where they should be smart, and smart where they should be dumb. Enough already. Make a car that’s pretty much all dumb and watch it sell — because what automakers are giving people is so bad, they’ll pay more to have less of it.

Cars now are like budget smartphones with wheels: loaded with bloatware, unintuitive and slow to operate. Carmakers have always struggled with user interfaces, but until recently the biggest problem we had was “too many knobs.” How I long for those days!

The proliferation of touchscreens and LCDs has made every car feel like a karaoke booth. Animations show reclaimed energy from braking, the speedometer changes color as you approach the limit, the fan speed and direction are buried under three menus. And besides being non-functional, these interfaces are even ugly! The type, the layouts, and animations scream “designed by committee and approved by someone who doesn’t have to use it.”

Not to mention the privacy and security concerns. I was dubious the first time I saw a GPS in a car, my mom’s old RX300, about 20 years ago. “Yeah… that’s how they get you,” I thought. And now, Teslas with missed payments drive themselves to be impounded. Welcome to the future — your car is a narc now!

The final indignity is that these features are being sold as upscale, not downmarket, options. Screens are so cheap that you can buy a few million and use them everywhere, for everything, and tell buyers “enjoy the next generation of mobility!” But in reality it’s a cost-saving measure that cuts down on part numbers and lets your dashboard team kick the can down the road as often as they want. You know this for sure because high-end models are going back to knobs and dials for that “premium feel.”

So here’s what I would like: a dumb car. This is what I think that looks like.

Continue reading “”

We shall see.
I was the ‘guinea pig’ for one of the LGS here when we ran their first eForm 4 for a suppressor this past week. Everything online seemed to work okay. The major hang-up has been that Martinsburg, West (By God) Virginia, got hit by a major snow storm, and the Postal Service has had my print cards stuck somewhere in the Post Office there for the past 4 days.


ATF Problems with Rollout of New eForms Online System

Continue reading “”

“Skynet smiles”
Now, what other dangerous things are hidden within this program that Amazon ‘isn’t aware of’?


Alexa tells 10-year-old girl to touch live plug with penny

Amazon has updated its Alexa voice assistant after it “challenged” a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a “challenge to do”.

“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” the smart speaker said.

Amazon said it fixed the error as soon as the company became aware of it.

The girl’s mother, Kristin Livdahl, described the incident on Twitter.

She said: “We were doing some physical challenges, like laying down and rolling over holding a shoe on your foot, from a [physical education] teacher on YouTube earlier. Bad weather outside. She just wanted another one.”

That’s when the Echo speaker suggested partaking in the challenge that it had “found on the web”.

The dangerous activity, known as “the penny challenge”, began circulating on TikTok and other social media websites about a year ago.

Metals conduct electricity and inserting them into live electrical sockets can cause electric shocks, fires and other damage.

“I know you can lose fingers, hands, arms,” Michael Clusker, station manager at Carlisle East fire station, told The Press newspaper in Yorkshire in 2020.

“The outcome from this is that someone will get seriously hurt.”

Fire officials in the US have also spoken out against the so-called challenge.

Ms Livdahl tweeted that she intervened, yelling: “No, Alexa, no!”

However, she said her daughter was “too smart to do something like that”.

Amazon told the BBC in a statement that it had updated Alexa to prevent the assistant recommending such activity in the future.

“Customer trust is at the centre of everything we do and Alexa is designed to provide accurate, relevant, and helpful information to customers,” said Amazon in a statement.

“As soon as we became aware of this error, we took swift action to fix it.”

Modern high technology strikes again; with a swift kick to the seat of the pants.


Amazon’s outage just locked people out of their homes, scrambled their refrigerators, and shut off their Christmas lights.

Does aaaaaaanyone else find this to be, I don’t know, the least little teensy weensy bit concerning?


How Amazon Outage Left Smart Homes Not So Smart After All

The outage at Amazon.com Inc.’s cloud-computing arm left thousands of people in the U.S. without working fridges, Roombas and doorbells, highlighting just how reliant people have become on the company as the Internet of Things proliferates across homes.

The disruption, which began at about 10 a.m. Eastern time Tuesday, upended package deliveries, took down major streaming services, and prevented people from getting into Walt Disney Co.’s parks.

Affected Amazon services included the voice assistant Alexa and Ring smart-doorbell unit. Irate device users tweeted their frustrations to Ring’s official account, with many complaining that they spent time rebooting or reinstalling their apps and devices before finding out on Twitter that there was a general Amazon Web Services outage. Multiple Ring users even said they weren’t able to get into their homes without access to the phone app, which was down.

Continue reading “”