
There are three classes of information:

1. Information where improper disclosure is illegal. For example, health records. Or credit card details, which would presumably be a violation of PCI DSS.

2. Information that can be actively exploited, but can also be fixed so the previous disclosure is harmless. This means passwords, authentication tokens, etc.

3. Information that is merely private in nature.

Cloudflare is focusing on the first two items. The third one is hard to quantify; what one person considers private, another might not care about. And there's not much that can be done about this kind of disclosure (beyond scrubbing caches, which they're doing anyway). It's also difficult to identify this type of content automatically (whereas cookies, passwords, credit card numbers, etc. are pretty easy to detect), and Cloudflare probably doesn't want its employees spending their time reading through all of the cached data they can find, looking for private info. That would be a lot of work with nothing to show for it, and it would itself be a privacy violation: chances are nobody would ever see that data normally, and having employees read your private messages helps no one.



> 3. Information that is merely private in nature.

> Cloudflare is focusing on the first two items. The third one is hard to quantify; ...there's not much that can be done about this kind of disclosure (beyond scrubbing caches which they're doing anyway). Also, it's difficult to automatically identify this type of content...

I have some pretty fundamental moral issues with this position. Cloudflare is measuring the risk that you were affected by something they can fix, not the risk that you were affected by something you consider important. They are absolutely downplaying the importance of all of this private information: saying "this isn't to downplay" is like adding "we have no affiliation with" at the bottom of a massive trademark violation: it means nothing. That something is difficult to measure and might be impossible to fix does not somehow make it unimportant.

Let's translate this: suppose we find that some company has been dumping a bunch of random chemicals into our water supply. They respond with an attempt to calm our concerns, but they concentrate their analysis on a handful of toxins that can be "corrected" with an antidote, a chelation, or some other form of direct mitigation in the water itself. Meanwhile they give lip service to something that merely increases your cancer risk, and carefully avoid mentioning the stuff that will just make you violently ill for a couple of days, because it's difficult to know what will cause that, it's a dosage-mediated effect, and they can't fix it.

Meanwhile, people keep reporting that they are measuring the water coming out of their tap and keep finding junk in it, and some of the stuff they are finding would make people sick if they drank it. Are you seriously telling me that you think this kind of statement is legitimate?

I am going to maintain: these Cloudflare headers are themselves private information. Even if you ignore Authorization headers, just knowing what URLs people are browsing is something that we would normally consider to be a serious problem... I mean, all BEAST leaked was the size of the files you were downloading, and people take that seriously: the fact that there are even worse potential problems shouldn't distract us from the base issue.


> 2. Information that can be actively exploited, but can also be fixed so the previous disclosure is harmless. This means passwords, authentication tokens, etc.

I wouldn't call the disclosure harmless. It's unknown whether anyone made use of the leaked information before Cloudflare knew, so accounts should be treated as compromised unless it's shown otherwise.

Leaking user credentials to any system that handles payments or health info would also breach PCI/HIPAA. This broadens the scope of systems that are effectively in breach of the law.

Another thing to keep in mind is that many (most?) token-based authentication systems don't invalidate tokens. So any tokens captured will remain valid until they expire, and they can't be "changed" without invalidating every outstanding token (e.g., by changing the server key).
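A minimal sketch of why key rotation is all-or-nothing in such systems, assuming HMAC-signed bearer tokens (the function names, key values, and token format here are hypothetical, not any particular vendor's API): every token is validated against a single server key, so rotating that key rejects every outstanding token at once.

```python
import hashlib
import hmac
import time

# Hypothetical server-side signing key; rotating it invalidates ALL tokens.
SERVER_KEY = b"current-server-key"

def issue_token(user_id: str, key: bytes = SERVER_KEY) -> str:
    """Issue a self-contained token: payload plus an HMAC-SHA256 signature."""
    payload = f"{user_id}:{int(time.time()) + 3600}"  # expires in one hour
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, key: bytes = SERVER_KEY) -> bool:
    """A token verifies only if it was signed with the current key and is unexpired.
    There is no per-token revocation list: the key is the only lever."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    expiry = int(payload.rsplit(":", 1)[1])
    return time.time() < expiry

tok = issue_token("alice")
print(verify_token(tok))                         # True: signed with the current key
print(verify_token(tok, b"rotated-server-key"))  # False: rotation kills every token
```

The design choice the comment describes falls out of the last line: because validation is purely cryptographic, a leaked token can't be revoked individually; the only fix is rotating the key, which logs out everyone.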


No, I mean that after it's fixed, the previously-disclosed information becomes harmless. Obviously anyone who exploited it before you reset your password/tokens may have caused you harm.

> Another thing to keep in mind is that many(most?) token based authentication systems don't invalidate tokens.

In my experience, changing your password generally invalidates all outstanding tokens. And yes, this does mean invalidating all of them instead of just the leaked one, but that's not usually a big deal.


Plus a 4th class: public information.

I don't know how they could determine how many health records were leaked without looking at and classifying the data, so they've already gone ahead and done that. They presumably know how many steamy Grindr messages were in there.

Additionally, there are laws about messages as well. Email laws generally aren't limited to SMTP.


Health records probably have some sort of health record ID, and there's only so many formats that can take. Detecting strings that match certain ID patterns is easy.
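The kind of detection described above can be sketched as a regex scan over leaked page fragments. This is an illustrative assumption, not Cloudflare's actual method; the patterns below (a card-number shape and a US SSN shape) are stand-ins for whatever ID formats a real scanner would target.

```python
import re

# Hypothetical ID formats -- a real scanner would use the actual schemas.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digits, optional separators
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # NNN-NN-NNNN
}

def scan_fragment(text: str) -> list:
    """Return which known ID patterns appear in a leaked page fragment."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan_fragment("card: 4111 1111 1111 1111"))  # ['credit_card']
print(scan_fragment("hurts when I pee"))           # []
```

The second example also illustrates the sibling comment's objection: a fragment with no standard-format ID in it scans as clean even when the prose itself is plainly sensitive.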


That's one hell of a busted test. So a page fragment could leak my name and "hurts when I pee," but if the universal standard ID was cut off, it's not a health record?


> 3. Information that is merely private in nature.

Whether information is merely private is a judgment call that we may be ill-equipped to make, since we lack important context.

Being outed as gay may range from merely inconvenient (for a coworker whose friends and family already know, and whom colleagues have much evidence to assume is gay anyway) to life-threatening (for a person living in a state where LGBT* people are actively prosecuted and face the death penalty merely for living their life). Anything in between is possible.



