Reddit's also famous for NSFW content. There are also stories about people harassing those who post in the "wrong" subreddit (e.g. political subreddits holding the opposing view).
This excerpt from the article describes the risk well.
> In Firefox Private Browsing mode, the identifier can also persist after all private windows are closed, as long as the Firefox process remains running. In Tor Browser, the stable identifier persists even through the "New Identity" feature, which is designed to be a full reset that clears cookies and browser history and uses new Tor circuits.
Seriously. Tor is primarily funded by the US government. Maybe this bug, or others, was deliberately left in for the sake of allowing backdoors, maybe not, but people should not forget this.
Would it though? I'd guess state agencies already know, or could know, all the nodes. When you have a ton of cross-linked meta-information, they can probably identify people quite accurately; they may not even need 100% accuracy at all times and could do with less. I was thinking about this in connection with techniques that use information from the surrounding area, or even sniffing through walls (I think? I don't quite recall, but wasn't there an article like that in the last 3-5 years?). The idea is to amass as much information as possible, even if it doesn't primarily concern the target user alone; I'd call it "identifying via proxy information".
11 people over 4 years doesn't seem like that much. It's not clear to me how big a population that is out of, but if it's government scientists, I assume there are tens of thousands of those, if not hundreds of thousands.
Still, the FBI should be investigating every suspicious death of people with high-level clearance.
Statistically, I would look at the baseline death rate for that age group among spaceflight scientists and compare this "blip" to the p50. I don't think it's easy to say whether 11 deaths/disappearances over 4 years is high without framing the problem this way.
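To make that comparison concrete, here's a minimal sketch of the kind of check I mean: treat deaths as a Poisson process and ask how surprising 11 events are against a baseline. The baseline of ~4 expected deaths is a made-up placeholder, not a real figure.

```python
import math

def poisson_tail(k, lam):
    # P(X >= k) for X ~ Poisson(lam): 1 minus the CDF up to k-1
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# Hypothetical numbers: suppose actuarial tables predict ~4 deaths in this
# population over 4 years, and 11 were observed.
p = poisson_tail(11, 4.0)
print(p)  # small p would make the "blip" look anomalous
```

With a real baseline plugged in, a large tail probability would mean the cluster is unremarkable; a tiny one would justify a closer look.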
I imagine it is difficult to get good work out of scientists at the point of a gun. With physical labour you can tell if someone is doing a good job, but with intellectual labour it's much harder to tell whether someone is intentionally being slow or it's just a hard problem that is difficult to solve.
That may be part of the issue. Perhaps LLMs are just causing people to reveal how much they consider a maintainer as providing a service for them. Maintainers don't work for you; they let you benefit from the service they perform.
That workload of maintaining a fork doesn't come from nowhere; it's just work someone else was doing before the fork occurred.
I think every maintainer should be able to say how they want or don't want others to contribute.
But I feel like it was always true that patches from the internet at large were largely more trouble than they were worth most of the time. The reason people accept them is not for the sake of the patch itself, but because that is how you get new contributors who eventually become useful.
> But I feel like it was always true that patches from the internet at large were largely more trouble than they were worth most of the time.
Oh god, I needed to add a feature to an open source project (kind of a freemium project) about fifteen years ago. I had no experience with professional software development nor did I have any understanding of pull requests. I sent one over after explaining what I was trying to do and that I thought it would be a good feature for the project.
Now they probably shouldn’t have just blindly merged it, but they did, and it really made a mess.
If I recall correctly I sent the PR as just a way to ship a blob of code, intending to use it to demo a specific UI feature that I wanted them to look at.
Meanwhile I was tinkering with the schema in the database and messing about deep in the guts of the software. I didn't really know how to separate the two so I just shipped the whole PR thinking they would just run it on the side to demo the thing I was talking to them about.
Well they just deployed it to their production instance and fucked everything up on their end.
I recall being horrified when I was tinkering with their customer-facing instance and seeing evidence of the other work I was doing. I immediately emailed them and said whoa whoa whoa. xD
Found the email that I sent to the founder, from 2014:
>It looked like <redacted> may have got a few surprises in the code she merged from the 2.1M1br branch yesterday. I've just been committing basically all the tinkering i do to that branch, so it may have a few landmines in it.
>Hope i didn't create any headaches for anyone. Sorry!
I think the author of the article is missing this point.
When you actually work alongside people and everyone builds a similar mental model of the codebase, communication between humans is far more effective than communicating through an LLM.
From what I understand, the factoring of 15 was just a stunt and didn't use the actual error-corrected algorithm that would need to be used in general.
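Part of why it reads as a stunt: 15 is a toy input that any classical method dispatches instantly, so factoring it says nothing about scaling. A trivial illustration (my own example, not from the thread):

```python
# Classical trial division: instant for toy inputs like 15, which is why
# a quantum demo on 15 doesn't demonstrate an advantage by itself.
def trial_division(n):
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime

print(trial_division(15))  # prints (3, 5)
```

The interesting question is whether the error-corrected algorithm scales to cryptographic sizes, not whether a small number can be factored at all.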
I think an analogy would be: imagine you are driving across North America in a car, but your engine is broken. The mechanic is nearby, so you put it in neutral and push it.
If someone said, "well, it took you half an hour to push it to the mechanic, so it will take the rest of your life to get it across North America," that would be the wrong takeaway. If the mechanic actually fixes the engine, you'll go quite fast quite quickly. On the other hand, maybe it's just broken and can't be fixed. Either way, how fast you can push it has no bearing on how fast the mechanic can fix it or how fast it will go after it's fixed.
Maybe people will figure out quantum computers, maybe they won't, but the timeline extrapolated from "factoring" 15 is pretty unrelated.
In the context of cryptography, keep in mind it's hard to change algorithms, and cryptographers have to plan for the future. They are interested in questions like: is there a >1% chance that a quantum computer will break real crypto in the next 15 years? I think the vibe has shifted to that sounding plausible. That doesn't necessarily mean it will happen; it's just become prudent to plan for that eventuality, and now is when you would have to start.
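The "start now" logic is simple arithmetic once you write it down: if migration time plus the required secrecy lifetime of today's traffic reaches past the earliest plausible quantum break, recorded ciphertexts are at risk. All three numbers below are hypothetical placeholders:

```python
# Illustrative "harvest now, decrypt later" arithmetic (all numbers hypothetical)
migration_years = 5    # years to roll out post-quantum algorithms everywhere
secrecy_years = 10     # years today's encrypted data must remain secret
quantum_eta = 15       # earliest plausible cryptographically relevant QC

# If the migration plus the secrecy window reaches the quantum ETA,
# traffic recorded today could be decrypted while it still matters.
must_start_now = migration_years + secrecy_years >= quantum_eta
print(must_start_now)  # prints True
```

Even with a low probability on the quantum ETA, the long tail of secrecy requirements is what drives the planning horizon.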
I think there are non-bot reasons to do that.