You want to stop the source: the government and other agencies can purchase surveillance data that the Fourth Amendment would otherwise disallow them from collecting. We need to end this 'laundering' of information through third parties, and enforce the constitution by its intent.
They're also generally one and the same: at least if the stalker has money or the right friends, most kinds of law enforcement access mean stalker access. It's not unheard of for an officer to be the stalker themselves, and so many people work in law enforcement that bribing, impersonating, or persuading your way to access is not that big a deal. Not to mention that enabled stalkers can just file a federal lawsuit and issue subpoenas for records.
The only safe thing is for the records to never exist in the first place.
> It's not unheard of for an officer themselves to be the stalker
This was one of the motivations for passage of the Driver's Privacy Protection Act of 1994. Nowadays, officers need a legitimate reason to run a plate - unless the patrol car is fitted with automatic cameras[1] that look up every plate of every car they drive past.
> The Virginia state police used license plate readers to track people’s attendance at political events;
> The New York Police Department used license plate readers to keep track of who visited certain places of worship, and how often;
> Despite all this surveillance, ALPR technology has been repeatedly shown to be unreliable; like other police technologies, ALPRs can and do make mistakes.[2]
Generally, court decisions have held that you have zero expectation of privacy in public spaces. Current license plate standards[3] aim for plates that are uncluttered and easily read by the human eye, even when wrapped with license plate frames (which usually make the state name hard or impossible to read, the most common failure mode for ALPR[4]). If the reflective material (traditionally called "ScotchLite"[5]) is worn out (or defaced), most states require the plate to be replaced.
How is that achievable? PIs can legally do it. Random people can keep tabs on you and exchange gossip. It's the sudden scale and low cost that don't sit well with the freedom not to be tracked in public 24/7 that we took for granted.
The core ill is aggregated data, because that's what puts the mass in mass surveillance, data mining, etc.
The collection actions are almost immaterial. Without persistence they must be re-performed for each request, which naturally provides a throughput bottleneck and makes "for everyone" untenable.
If we agree the aggregated data at rest is the problem, then addressing it would look like this:
1. Classify all data holders at scale into a regulated group
2. Apply initial regulations
- To respond to queries for copies of personal data held
- To update data or be liable in court for failing to do so
- To validate counterparties apply basic security due diligence before transferring data (or the transferer also faces liability)
- To maintain a *full* chain of custody of data (from originator through every intermediate party to holder) so that leaks / misuse can be traced
- To file a yearly report with the federal government, made public, on the types and amount of data held and the counterparties it was transferred to
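The chain-of-custody requirement in the list above could be as little as each holder retaining an append-only log of transfer records that can be walked back to the originator. A minimal sketch in Python (all names hypothetical, assuming for simplicity one inbound transfer per holder):

```python
from dataclasses import dataclass

# Hypothetical sketch of the "full chain of custody" requirement above.
# All names (TransferRecord, custody_chain) are illustrative, not taken
# from any actual statute or regulation.
@dataclass
class TransferRecord:
    sender: str        # party transferring the data
    recipient: str     # counterparty receiving it
    data_types: list   # e.g. ["location", "purchase history"]
    timestamp: str     # ISO-8601 time of the transfer

def custody_chain(records, holder):
    """Trace data held by `holder` back to its originator, originator first."""
    by_recipient = {r.recipient: r for r in records}
    chain = []
    current = holder
    while current in by_recipient:
        record = by_recipient[current]
        chain.append(record)
        current = record.sender
    return list(reversed(chain))
```

The point of the sketch is only that tracing a leak back through intermediaries is cheap once every transfer is logged; the hard part is the mandate, not the engineering.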
The initial impediment to regulatory action is Google, Meta, Equifax, etc. saying "This problem is too complex and you don't understand it."
It's not. But the first step is classifying and documenting the problem.
It is not realistic to say that no person is allowed to keep track of another person; watch where they go, when, with who, etc.
It should not be acceptable for a company to gather information on "everyone"; where they have been going, when, with who, how often, etc. And it should not be acceptable for them to sell that information (to government agencies OR private citizens).
It's a matter of scale.
- Making the first one illegal/impossible would be difficult/costly; and not doing so has a limited impact (to society, not to the single person affected).
- Making the second one illegal is much easier, and it's much easier to shut down a large company doing it than it is 1,000 individual stalkers. The impact of making it illegal is much wider and better for society as a whole.
We don't want anyone being stalked. But in a cost/benefit analysis, we can do something about one of them but not the other.
The only way is through - everybody should get into the practice of stalking and gossiping about each other in a Molochian environment, where the people who do not do so suffer from the losing side of an information asymmetry.
Expect AI, especially post-Mythos, to just enable this at even further scale. Consumer grade wireless networking gear as a whole is a very wide attack surface and is basically never updated.
If PIs can "legally" do it then it sounds like there is a law which allows them to do it. That law can be revoked (unless the power comes from Constitution which would make it effectively impossible to revoke).
Note that PIs are effectively illegal under GDPR by default. They would generally need to provide Article 13 notice, i.e. you would become aware of them unless they were just asking around without actually following you. Member states can make them legal though (via Article 23) and likely in many cases they have done so.
In the US, PI licensing is only about PIing for hire. The actual act of going through public records, following cars and whatnot do not require a license, you can spy on anyone without a license as long as you don't get paid for it.
The EU is more complicated, but Article 14(5)(b) allows withholding notice if it would impair or defeat the purpose of processing. The PI must however apply "safeguards", whatever that may mean.
Article 14(5)(b) does, but that only applies to Article 14 notice (personal data not obtained directly from the data subject). Article 13 (personal data obtained directly from the data subject) has no such exception in the GDPR itself.
This becomes extremely relevant when you read it in light of the C-422/24 decision. In that case, personal data collected via body-worn cameras were determined to be "directly obtained". Paragraph 41 of the judgment:
> If it were accepted that Article 14 of the GDPR applies where personal data are collected by means of a body camera, the data subject would not receive any information at the time of collection, even though he or she is the source of those data, which would allow the controller not to provide information to that data subject immediately. Therefore, such an interpretation would carry the risk of the collection of personal data escaping the knowledge of the data subject and giving rise to hidden surveillance practices. Such a consequence would be incompatible with the objective, referred to in the preceding paragraph, of ensuring a high level of protection of the fundamental rights and freedoms of natural persons.
Given this it's very unlikely that PI observing (especially if they record) could be considered to be Article 14 instead of Article 13 type of collection as it's exactly "hidden surveillance practice" that the Court warned about.
Member states do have a right to restrict the Article 13 disclosure obligations via Article 23 restriction, but that requires specific law in the member state & the law itself must fulfill the obligations that Article 23 requires. Article 23(2) essentially forbids leaving everything up to the controller.
And as far as PIs in the US go, the actions of stalking and PI work "for self" tend to be so similar that I wouldn't necessarily recommend that anyone try it.
Honestly it should probably just be illegal for anyone, private or public, to engage in mass surveillance (or "data gathering", whatever) of anybody who didn't expressly consent to it. As long as the data exist, they will be abused.
When I installed the SoundCloud app, it told me that by continuing I agree to them sharing my data with their 954 partners.[1]
1. I'm not even joking. When I most recently installed the SoundCloud app - for the first time on a new device - that's what it said: 954 partners. How can anyone reasonably understand what it is they're agreeing to in that scenario?
This is the important point. You need the right not to be discriminated against when you withhold your consent; otherwise your consent is effectively meaningless, as it is forced on you by your impossible bargaining position. This is one of the central pillars of the GDPR, without which it wouldn't work at all. Be advised to also make it illegal to ask customers for consent that doesn't directly benefit them, lest you risk creating another wave of malicious cookie banners.
> You need the right not to be discriminated against when you withhold your consent; otherwise your consent is effectively meaningless, as it is forced on you by your impossible bargaining position.
Which is why "we don't serve patrons without shoes and pants" policy is unconstitutional, yeah.
If you don't want to agree to a business's demands, you're welcome to not deal with them and look for an alternative. All the alternatives have the same (or even worse) demands? Unless you can prove collusion, that's just how the invisible hand of the market worked its magic out. Go petition your congressman to violate laissez-faire even more than it already is, I guess.
Except there are companies with which you effectively must do business.
Microsoft (or Apple).
Any web host, payment processor, etc. that's contracted to do work for your local government (I suppose you could try driving to the government office and paying by check, but then you need to give consent to Ford or Chevy).
Short of living like a hermit, there's no practical way to avoid all ridiculous T&C.
The trouble with this is that I, at least, am trying to live in a society. And society has both rights and responsibilities. Sometimes you are forced to do things, or don’t do things, contrary to your desires. Every freedom has two sides, you can’t ignore the fact that increasing some freedoms for one decreases other freedoms for others.
The shirt-and-shoes policy is in fact a great illustration of the point. You don't have unlimited freedom to not wear shoes, just as a business does not have unlimited freedom to impose whatever terms it likes just because it put them in its ToS.
> You don’t have unlimited freedom to not wear shoes
Okay, I am gonna be 100% serious here: you absolutely should have such a freedom. Just as loitering or jaywalking being a crime is inherently totalitarian, what the hell.
In this case, unlimited means literally everywhere.
You do have the right to go barefoot in your own home. And in true public spaces.
But, a property owner can require shoes. Do I care if somebody is barefoot in the local grocer? No, not really. But, the proprietor might because they want to limit their liability (should something fall on your foot, a cart run it over, or a loose tack/nail somehow land in an aisle, etc).
Yes please. Your shaming didn't work. The free market's centre of gravity is biased towards capital and land owners. We need people power to balance it back. Something we poor people are all enjoying now (psst: me and you are poor... kings and barons are the few and rich)
I really need to start putting /s at the ends of my comments where I merely restate the currently adopted legal theory/framework in non-sugar-coated terms, don't I? The whole liberal movement has its roots in the merchants' and industrialists' desire of having as little interference from the aristocracy-heavy governments of the yore, and it really shows even to this day.
Not only that, but it should be illegal (e.g. fines for the company and potential jail time for executives) to tie consent to the use or purchase of services or products.
Means of Control by Byron Tau and Surveillance Valley by Yasha Levine. Can’t recommend these books enough for anyone who is skeptical of the above claim.
Even if we somehow, perhaps via magic genie-wish, made the government totally disinterested... these systems would still enable dystopian levels of private surveillance and manipulation.
The Potus is literally a pedophile, criminals are here to stay and winning. Your camera company supports them as long as they have money and/or control of the system.
A significant chunk of the infrastructure that farms data is now from private organizations, who sell that information because it is a source of revenue.
Government is the bogeyman we are afraid of, but ad tech is doing the actual heavy lifting.
Thanks a lot for your comment! We agree that a dataset as small as 5 GB may sound strange, but it was a conscious decision. Check out our blog post to read more about the methodology of the benchmark itself.
TL;DR: it's not our choice, but it's meaningful, because this 5 GB is a single data segment - literally what you will have in Elastic etc. when you have TBs of data overall. See https://www.elastic.co/docs/deploy-manage/production-guidanc... (a single shard is one Lucene index that contains multiple data segments)
So your extension does a bunch of hooks to spoof Edge, but then only works on Edge? And Edge already supports 4K Netflix normally. So this does nothing and does not solve the stated problem of 4K Netflix streaming in Chrome and Firefox.
I would imagine it's less a product to use and more documentation of the various techniques that are involved. It seems pretty reasonable to share that with others.
What I'm advocating is a "downvote (or ignore) and move on" attitude, as opposed to an "I'm going to post about this" stance. Because, similar to "your color scheme is not a11y-friendly" or "you're posting affiliate links" or "this is effectively a paywall", there is zero chance of a productive conversation sprouting from that.
> Because, similar to "your color scheme is not a11y-friendly" or "you're posting affiliate links" or "this is effectively a paywall", there is zero chance of a productive conversation sprouting from that.
Those are all legitimate concerns or even valid complaints, though, and, once raised, those concerns can be addressed by fixing the problem, if the person responsible for the state of affairs chooses to do so.
If someone is accused falsely of using AI or anything else that they genuinely didn’t do, like a paywall, then I can see your “downvote and move on” strategy as being perhaps expedient, but I don’t think your comparison is a helpful framing. Accessibility concerns are valid for the same reason as paywall concerns: it’s a valid position to desire our shared knowledge and culture to be accessible by one and by all without requiring a ticket to ride, entry through a turnstile, or submitting to profiling or tracking. If someone releases their ideas into the world, it’s now part of our shared consciousness and social fabric. Ideas can’t be owned once they’re shared, nor can knowledge be siloed once it’s dispersed.
It seems that you’re saying that simply because there isn’t a good rejoinder to false claims of AI usage that we shouldn’t make such claims at all, even legitimate ones, but this gives cover to bad actors and limits discourse to acceptable approved topics, and perhaps lowers the level of discourse by preventing necessary expectations of disclosure of AI usage from forming. If we throw in the towel on AI usage being expected to be disclosed, then that’s the whole ballgame. Folks will use it and not say so, because it will be considered rude to even suggest that AI was used, which isn’t helpful to the humans who have to live in such a society.
We ought to have good methodological reasons for the things we publish if we believe them to be true, and I’m not trying to be a naysayer or anything, but I respectfully disagree with your statement generally and on the points. All of the things you mentioned should be called out for cause, even if there isn’t much interesting discussion to be had, because the facts of the matters you mention are worth mentioning themselves in their own right. Just like we should let people like things, we should let people dislike things, and saying so adds checks and balances to our producer-consumer dynamic.