The article itself covers the specific reasons that led to this exact problem, and the potential solutions available in the ecosystem along with their various trade-offs.
A big chunk of the problem with this kind of legislation, for me, is that it inherently indicates a failure to govern. I disagree with the premise of the solution, but even more so, this is trying to legislate a specific engineering solution for our current systems rather than providing financial incentives, objective guidance, or reasonably actionable and enforceable consequences.
While laws that target engineering decisions are sometimes reasonable, they are always accompanied by specific guidance from a credible academic institution (e.g. mechanical and civil engineering use private licensing bodies and develop specific curricula and best practices).
The only time this law will ever be enforced is punitively, alongside other crimes, against major actors who are extremely limited in number. It is unenforceable for Linux, and trivial for Apple, Microsoft, and Google to add to their OSes. It's presumably easy to spoof; the law describes it as minimal, but once again there isn't a specification, so who knows. Websites won't be liable; they're getting a sweetheart deal here.
In practice what this law does is absolve abusive platforms from any responsibility. It adds extra meaningless work and overhead for legitimate adult platforms while opening them up to new potential legal challenges, and ultimately doesn't replace the responsibility it's removing.
This doesn't make children safer. This doesn't make the internet safer. This kind of legislation makes it easier to abuse children online by removing responsibility from platforms that are known to be dangerous to them yet profit from their presence the most.
I think this is solving a real operational pain point, definitely one that I've experienced. My biggest hesitation here is the direct exposure of the managing account's identity, not the need to protect the account's key material; I already need to do that.
While "usernames" are not generally protected to the same degree as credentials, they do matter and act as an important gate an attacker has to get past before a real attack can commence. This also provides the ability to associate randomly found credentials back to the sites you can now issue certificates for, if they're using the same account. This is free scope expansion for any breach that occurs.
I guarantee sites like Shodan will start indexing these IDs on all domains they look at to provide those reverse lookup services.
CAA records including an accounturi already expose the account identity in the same manner, so I feel like that ship has already sailed somewhat (and I would prefer that the CAA and persist record formats match).
The accounturi is an optional extension. Email and phone are also optional. This is the first challenge that requires you to specify your account ID publicly. There may be implementations that require those fields, but neither Let's Encrypt nor the protocols do.
I think the difference is that using the existing DNS method listing the account is entirely optional. I have left it out on domains that I don't want correlated for that very reason.
Exactly. They should provide the user with a list of UUIDs (or any other random-ish IDs tied to the actual account) that can be used in the accounturi URL for these operations.
I think the previous post is talking about a search that will find the sibling domain names that have obtained certificates with the same account ID. That is a strong indication that those domains are in the same certificate renewal pipeline, most likely on the same physical/virtual server.
Run ACME inside a Docker container, one instance (and set of credentials) for each domain name. It doesn't consume many resources. The real problem is IP addresses anyway; CT logs "thankfully" feed information to every bad actor in real time, which makes data mining trivially easy.
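A minimal sketch of that setup with the certbot image (domain names, emails, and paths are placeholders), one service per domain so each ends up with its own ACME account and credentials:

```yaml
# docker-compose.yml - one certbot service (and thus one ACME account) per domain.
# Invoke each on demand, e.g.:
#   docker compose run --rm --service-ports certbot-example-com
services:
  certbot-example-com:
    image: certbot/certbot
    command: certonly --standalone -d example.com -m admin@example.com --agree-tos -n
    ports: ["80:80"]
    volumes:
      - ./accounts/example.com:/etc/letsencrypt   # separate account dir per domain

  certbot-example-org:
    image: certbot/certbot
    command: certonly --standalone -d example.org -m admin@example.org --agree-tos -n
    ports: ["80:80"]
    volumes:
      - ./accounts/example.org:/etc/letsencrypt
```

Because each service gets its own `/etc/letsencrypt` volume, certbot registers a distinct account for each domain, which is what prevents the account-ID correlation discussed above.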
This is publicly publishing the account ID. There is an optional extension (RFC 8657) to CAA (RFC 8659) that can carry it, but no implementer is required to use it. This puts that ID into a public, well-known location that is easy to scrape and will be scraped (this is exactly the kind of opsec info that projects like Maltego love to go look up and pull in).
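For reference, this is roughly what the RFC 8657 `accounturi` parameter looks like in a zone file (the account URL is a made-up example); anyone can pull it with a plain `dig example.com CAA`:

```
; CAA record binding issuance to a single ACME account (RFC 8657)
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345678"
```

The account ID sits right there in public DNS, which is the correlation risk being discussed.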
I'm not sure the distinction matters, and attribution is inherently hard and easy to get wrong. I frequently read "Country X is doing Y" less as an indicator of government action and more as a signal that we can't be more specific about who within the country is performing an action, but we know the behavior is occurring there.
In the case of IP address purchases, these are publicly tied to specific public and private entities and can be easily queried through the regional registries. These private entities are frequently the same kind of shell companies you see used to hide shady financial details.
Pretty unlikely in my book. This runs OpenWRT out of the box. Granted, there are still closed-source binary blobs in these things, especially around WiFi 6, and frequently the kernel customizations aren't released, but those tend to be more expensive locations to place backdoors, especially when the system is so open to inspection. These kinds of devices are VERY frequently torn down by security researchers and used in WiFi shoot-outs, leading to much higher odds of detecting anything present.
A lot of these "backdoor" style hypotheses still need a motive to justify the cost. Who would they be targeting? What is the potential value of the backdoor?
Given the visibility and the complex locations required in the firmware, this would be an expensive backdoor to keep in place for any amount of time. The attack is completely untargeted; at best you may be able to say it targets tech enthusiasts who travel. You probably can't count on executive targeting: this device requires a separate battery pack as well as per-site configuration, as opposed to pairing to their iPhone and not carrying all that extra stuff.
What are the chances of an expensive, high-visibility backdoor showing up in a dirt cheap product line for a high-risk untargeted attack? Pretty low in my book but your threat model may vary.
Wow. It's as if you're completely unaware of how lucrative the market for malware in affordable IoT devices is.
It doesn't have to be targeted. The general demographic is a fantastic subject, and cheap, affordable devices are a fantastic method. If one such trojan network device happens to end up in the home of an employee in a valuable position, or better yet in some office, an attacker has a chance to pivot further into a network.
I thought the reminder from GP was fair and I'm disappointed that it's downvoted as of this writing. One thing I've always appreciated about this community is that we can remind each other of the guidelines.
Yes it was just one word, and probably an accident—an accident I've made myself, and felt bad about afterwards—but the guideline is specific about "word or phrase", meaning single words are included. If GGP's single word doesn't apply, what does?
But again, if that is what the guideline is referring to, why does it say "If you want to emphasize a _word or phrase_". By my reading, it is quite explicitly including single words!
I’m saying that being pedantic on HN is a worse sin than capitalizing a single word. Being technically correct isn’t really relevant to how annoying people think you are being.
Imagine I capitalised a whole selection of specific words in this sentence for emphasis, how annoying that would be to read. I'll spare you. That is what the guideline is about, not one single instance.
I’m not the GP, but the reason I capitalize words instead of italicizing them is because the italics don’t look italic enough to convey emphasis. I get the feeling that that may be because HN wants to downplay emphasis in general, which if true is a bad goal that I oppose.
Also, those guidelines were written in the 2000s in a much different context and haven’t really evolved with the times. They seem out of date today, many of us just don’t consider them that relevant.
This is a false equivalence I'm surprised no one else has brought up. An archive of a site inherently preserves attribution; the scraping and training do not.
Is it? I thought it was ridiculous at first, but the more I think of it... both are scenarios where a corporation is scraping billions of webpages. We like the reason archive.is does it, but unless it's some kind of charity, I think it's a reasonable comparison.
archive.is is a charity no? Or at least they take donations, it seems the legal entity behind it is nebulous, but they don't have ads and have no paid product or offering.
They sure as shit do have ads. Have you ever accidentally followed a link using a browser profile that has no ad blocking enabled?
I only rarely browse without some form of content blocking (usually privacy-focused... that takes care of enough ads for me, most of the time). I keep a browser profile that's got no customizations at all, though, for verifying that bugs I see/want to report are not related to one of my extensions.
Every once in a while, I'll accidentally open a link to a news site (or to an archive of such a site) in that vanilla profile. I'm shocked at how many ads you see if you don't take some counter measures.
I just confirmed in that profile: archive.is definitely puts ads around the sites they've archived.
Having a device enrolled in an MDM package does not make it a corporate device. Many corporations require that personal devices be managed to support remote wiping. If I install a productivity or developer tool on my personal phone or laptop for personal, non-corporate use, I would be mistaken for a corporate user by this process.
If you want to collect this information you should be clear about it, and you should know and understand your edge cases before you start attempting enforcement actions based on it, if that is the intent.
In general, in my experience, personal tools are a VERY hard market to sell into corporate environments (I took a peek at what the software on OP's site requires a commercial license for). I would bet most if not all of what you're catching here is unauthorized installs in corporate environments, and you're more likely to lose interested users than sell more commercial licenses.
>Many corporations require personal devices be managed to support remote wiping.
Corporations cannot require you to have your personal devices be managed by them. If you're surrendering your own gear to a company, it stops being your own device.
But they can require things of devices connected to their wifi or being brought to their premises. You are welcome to leave the device at home if you don't want to consent.
Depends on the local laws. Where I live, they can either deal with it, or provide a secured storage space for the duration of the visit.
Either way, if a corporation wants their employees to use a device, they are obliged to make one available. Surrendering your private equipment to their management makes it not yours anymore.
Yeah, you're 100% right that it's optional. It's usually only required to allow company data such as email, Slack, file sharing, etc. on your personal device. If you're on-call, it is VERY rare for an employee to win the fight to make the company provide a dedicated device for that purpose (which can inherently make it a condition of your job, but that's an exception).
Most employees tend not to care about the why and are happy to just do it, making "you" (the one bucking the trend) the oddball. The one not being the team player. It's not legally required, and you won't be fired for it, but it's strongly socially encouraged, and that makes it mandatory for anyone not willing to put up that fight.
On iOS there is the concept of "Managed Apps", which is appropriate for a BYOD scenario. They are sandboxed so they can't share information (in either direction) with unmanaged apps. That would count as an MDM enrollment, if you are looking for it.
I haven't decided my opinion on this specific license, ones like it, or rights around training models on content in general. I think there is a legitimate argument that this could apply to making copies and derivative works of source code and content when training models. It's still an open question legally, as far as I know, whether a model's weights are potentially a derivative work and a model's output potentially a distribution of the original content. I'm not a lawyer, but it definitely seems like one of the open gray areas.