Hacker News: progval's comments

Use a package repository that fast-tracks security updates, like Debian Stable.

> I'm tired of having to connect on EDF's shitty website to get a new PDF every three months.

It doesn't look like this app can generate "justificatifs de domicile", only substitutes for an identity card or passport.

> Why do they put three stats about trains on your linked page?!

I was wondering about that too.


> It doesn't look like this app can generate "justificatifs de domicile", only substitutes for an identity card or passport.

You're absolutely right! Damn!

At least it should make it easier to use France Connect with the QR code stuff instead of the credentials from other websites...


It looked great and I wanted to try it, but it doesn't work on the web and my smartphone is rejected with no clear explanation ("missing some security mechanisms"); probably because I'm running LineageOS with MicroG.

Proving* that the KYC implementation is bogus as it relies on GSF. *Probably.

Because no one makes larger LPCAMM2 modules. But Framework's CEO says he expects higher density modules in the future: https://youtu.be/GnOpIQJnYWU?si=UBMfW7SsiNjwJ1fo&t=292

Because a comment that just says it's AI generated provides no value to the readers. They could at least provide an alternative link like you did.

It does provide value in that I know I shouldn't read it. It's clearly LLM written after a few glances.

How does that help Facebook? They already have plenty of signals to guess their users' age, so what would they do with another one? They are not going to ban children anyway.

It helps them by making it somebody else's responsibility to get it right and thus shields them from liability.

The OS should start labeling everybody as a child by default. Forbid Facebook from showing ads and any harmful content by default. The OS has far less to lose with this approach than FB.

So it lets them know for sure who is a child. What liability does that shield them from, and how?

FB etc. may argue "the device says this user is an adult", even though the device may say that only because the parents didn't set up separate user accounts (e.g. a shared family iPad), or because the kids are more tech savvy than their parents, like we all were as kids.

Every one of these age assurance laws basically says:

1. The OS vendor must provide an age bucket using the minimum amount of data necessary

2. App vendors (i.e. Facebook) must use the OS vendor's age buckets to determine age

The idea is that the next time Facebook gets hit with a child endangerment lawsuit, they can say "Well, we used the age buckets the government told us to, and they said the plaintiff was 18+, so we're not liable".

This, of course, assumes that most social media and Internet regulation will continue being targeted at children only, both because courts are reluctant to enforce 1A on laws that censor children[0] and because the current political class actually benefits from the harms Facebook does to adults. Like, a good chunk of government surveillance is just buying data from Google and Facebook.

[0] The root password to the US constitution is "th1nk0fth3cHIldren!!1" after all


The investigation you linked to is entirely hallucinated by LLMs: https://news.ycombinator.com/item?id=47659552 (tboteproject and the "Reddit researcher" are the same person).

They also added this page since I posted that comment: https://web.archive.org/web/20260411112604/https://tboteproj... where they claim their website is "under surveillance" because it got a few thousand requests from Google Cloud et al, most of them to a single page. This really shows how low their standards are.


I share your wariness of the LLM garbage, but I believe the conclusions are correct. This has Facebook's stink all over it. I worked there and know of what I speak.

So we should believe the hallucinations because they sound like something that could be true? Does the LLM in the middle somehow make it more trustworthy than if GP had just shared their own pattern-matching conjecture?

No. I think LLMs are garbage. Separately, and unrelated: I think Facebook is behind these bills. The LLM may be garbage and still sometimes produce a correct result.

Ok, but then we should look for an actual source beyond "Don't worry that it's garbage, it smells ok in this case."

You are arguing with something I did not say.

Yes, it would be nice to know with certainty who is behind these bills. It sucks how much opaque money influences American politics.

Josh Gottheimer's press release[1] on HR8250 mentions the "Meta Parents Network." I don't know what that is, but it does have "Meta" in the name.

Buffy Wick's noise about AB1043 claimed it was passed with the support of tech companies. I have spoken directly to one person close to AB1043 who told me Facebook argued against AB1043. I have doubts. But if true, I suspect they were not arguing in good faith and had ulterior motives.

In the end, no matter who is secretly lobbying for or against age verification bills all over the planet, the bills are terrible, and we should fight them.

[1] https://gottheimer.house.gov/posts/release-gottheimer-announ...


There's an SMBC strip that makes your exact point, except they intended it as satire, whereas you seem to mean it in earnest.

https://www.smbc-comics.com/comic/aaaah


I'm confused by how my point got so lost.

I think Facebook is behind these bills. I think that from personal experience working at Facebook.

That an LLM may have arrived at the same conclusion is unrelated. LLMs are garbage. Don't use them.


We're trying to have a discussion about facts, not opinions.

I typoed the second link and can't edit anymore. Correct one: https://web.archive.org/web/20260411112604/https://tboteproj...

It's still pretty far from physically accurate, because there is infinite acceleration the moment a ship reaches the target orbit.

That sounds like it would be a completely different game and probably not as fun since you'd have to use some very fiddly controls to manually get into orbit. If you eliminate orbit entirely then it's just a slalom race. "Hitting" each star/planet is the immediate feedback that makes it fun.

Yeah, I want to enter weird orbits around the planets.

Yes, give me weird orbits! I want a shot which is just outside the target area to get sucked in by the gravity of the planet, but potentially letting me slingshot around an intermediate planet towards a more distant one. The tap command should still mean "gravity disengaged, momentum still active" to allow shifts from one orbit to another.
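For what it's worth, that tap mechanic is easy to prototype. Here's a minimal sketch (hypothetical names and units, not anything from the actual game): a ship under a single planet's gravity, where disengaging gravity keeps momentum so the ship coasts in a straight line toward the next body.

```python
G_M = 1.0  # gravitational parameter of the planet (assumed units)

def step(pos, vel, gravity_on, dt=0.01):
    """Advance the ship one Euler step; gravity only applies while engaged."""
    x, y = pos
    vx, vy = vel
    if gravity_on:
        r2 = x * x + y * y
        r = r2 ** 0.5
        # Inverse-square acceleration pointing toward the planet at the origin.
        ax, ay = -G_M * x / (r2 * r), -G_M * y / (r2 * r)
        vx += ax * dt
        vy += ay * dt
    # With gravity disengaged, the ship keeps its momentum and flies straight.
    return (x + vx * dt, y + vy * dt), (vx, vy)

# A circular orbit at radius 1 needs speed sqrt(G_M / r) = 1.
pos, vel = (1.0, 0.0), (0.0, 1.0)
for _ in range(1000):
    pos, vel = step(pos, vel, gravity_on=True)
# Euler integration drifts a bit, but the ship stays roughly at radius 1;
# a "tap" would just flip gravity_on to False mid-flight.
```

Slingshots fall out for free once you sum accelerations over several planets instead of one.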

True. Hard to square it being a game, fast-paced, and accurate.

I wish people would stop sharing this website. Their research is mostly written by LLMs and looks good at a glance, but it goes in every direction at once and lacks logical connections. And the claims don't really match their sources.

Their initial publication was backed by a Git repository with hundreds of pages of documents written in just three days (https://web.archive.org/web/20260314224623/https://tboteproj...). It also contained nonsense like an "anomaly report" with recommendations from the LLM agent to itself, which covers an analysis of contributors to Linux's BPF, Android's Gerrit, and parser errors in using legislative databases. https://web.archive.org/web/20260314103202/https://tboteproj... . The repository was rewritten since, though.

This post follows their usual pattern. The second source they link to has been a dead link for 11 months (https://web.archive.org/web/20250501000000*/https://www.pala...). There's a lot about Persona's design, MCPs, vulnerabilities, data leaks, but nothing proving they use it for mass surveillance. The entire case for it being mass surveillance rests on two points: that they interact with AI companies and they offer MCP endpoints (section titled "Persona's Surveillance Architecture")


Thank you. Investigative journalism is so important and I would happily believe some of the claims made here, but when I encounter even just a few sentences that sound LLM-written, suddenly I don't trust any of the statements in the source anymore. This site goes way beyond that, with a vibe-coded UI and generated articles. There might be value in what's reported here, but currently it requires a lot of work from the reader.


Yes, and HN isn't a place to submit things that require work from the reader. Or at least that seems to be the consensus of those flagging it.

Quite disappointing tbh.


You don't trust LLMs, writers with an IQ and knowledge much higher than ours? /s


The earlier you realize how little IQ and "knowing a lot" say about whether a person actually knows what they're talking about, the easier life becomes. "Smart" people are wrong all the time; some would say that's how they became smart in the first place.


I was told LLMs were at least as smart as Ph.D graduates


> There's a lot about Persona's design, MCPs, vulnerabilities, data leaks, but nothing proving they use it for mass surveillance.

And this is where I'd say I disagree. There's nothing about Peter Thiel, and his current business focus, that shows anyone he's not in the business of surveillance. Look at the company he keeps and then align that with many of the things Peter and who he surrounds himself with have said publicly. Thiel is tied to Palantir and Alex Karp. That relationship alone should tell you very clearly that, even if Thiel wasn't actually in the game of surveillance (opinion: he is) he would be very much associated with supporting it.

Karp said: "I love the idea of getting a drone and having light fentanyl-laced urine spraying on analysts that tried to screw us."

Yeah, sure... I mean, given that Thiel is tied at the hip to Palantir, I can't imagine he doesn't have an agenda with it beyond data analytics and, what, ad revenue? Right.

Thiel said, publicly, that everyone should be concerned about surveillance AI [0]. Let's call a spade a spade. Thiel is in the business of surveillance whether or not some poorly LLM-generated site says so; using that site's weakness as the basis to give Thiel a pass ("not enough evidence here") doesn't follow.

Thiel is a big part of what's wrong with his class. He's worried about something that he wants to control. He's not actually worried about you or me, though. He's worried about someone else having the full surveillance view, and so he's aimed to build and be part of that. So maybe we shouldn't give Thiel a pass just because he hasn't fully proven himself to be the person the world paints him as.

[0] https://www.cnbc.com/2021/10/22/palantirs-peter-thiel-survei...


For what it’s worth, Persona claims to not work or interact with Thiel.

https://vmfunc.re/blog/persona-2


That's cute, but they've taken his money. To say they've never interacted with him is disingenuous. And... Are we really going to default to a perspective of trust from Persona? Nobody should trust them by default as they've proven nothing to the public with regard to trustworthiness.


It's written by a bot to avoid fingerprinting.

https://tboteproject.com/git/hekate/surveillancefindings-new...


Stylometry avoidance is not a valid excuse for factual omissions, fabrications, and "DYOR dumping" (bullshit asymmetry).


Thanks for flagging this. I still think the headline is right, so where are the good sources and articles and outcries?


It's currently #1 on the front page too. HN drowning in AI slop, what a sight to behold.


The vast majority of HN commenters react to the headline and don't bother to click through.


I support a rule to ban AI-generated/edited posts.

Initially I thought they'd be fine, because AI-generated isn't intrinsically an issue and the comments can be good. But in practice, the AI posts tend to be slop, and usually there's a better human-written source for the same topic (for example, one of the many other recent "age verification is mass surveillance" posts here).


It is not so easy to distinguish this with 100% accuracy though.

For instance, a recent example from yesterday:

https://bugs.ruby-lang.org/issues/21982

Part of this was written by AI, but with a human in "charge" who explained which parts AI was used for. Would that also be a bannable example for you? I am not so convinced that this is bannable per se. Perhaps it would be different if the AI use were not announced, but here it was announced and explained?

> one of the many other recent "age verification is mass surveillance" posts here

Well, it actually is. It taps very much into other similar laws, e.g. "chat control", aka chat sniffing.


I should've said "guideline". I think posts can include AI if it's reasonable and/or they're good, while the guideline gives a reason to flag AI posts that are generally bad.

> It taps very much into other similar laws e. g. "chat control", aka chat sniffing.

There are many recent Chat Control posts here too. I agree Chat Control is bad, and poorly-implemented age verification is bad (though it can be implemented in privacy-preserving ways, albeit ineffectively; I commented about this 42 days ago at https://news.ycombinator.com/item?id=47123507, and it was stale then). I don't want to hear any more about it. Maybe I need a filter myself, for the lucky 10,000. But the problem, even for them, is that the repeated posts (without links to previous posts) have mostly low-effort comments, because people who made high-effort comments can't/won't keep repeating them.


It seems like there are a few stories HN will really bite on:

- age verification

- chat control

- RTO vs. remote work

- AI bubble

- ditching American tech


It seems a lot of people have already consumed this as truth.

In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.

I hate it here


> In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.

In my experience, when a country like Britain passes a censorship law, people in other countries like America don't enjoy being given the tools to comply with it, even if the tools are entirely optional.


The main thing that caused this ruckus was a law passed in California, not the UK.

Not that it matters, because doxxing and harassing developers is not acceptable.


Can you be more specific?

