I agree that many in the financial sector do not know much about quantum computing, but it also seems that some in the quantum computing world do not understand the details of the problems the financial sector has, either.
For instance, those on the capital markets/trading floors in Europe and North America do significant risk calculations with hundreds or thousands of variables over up to 100,000 positions, at a given confidence level. Many of these risk calculations need to be reported nightly to regulators. These financial companies are paying many millions per year to AWS, Azure, or Google to do these calculations on classical computers.
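To give a rough idea of the shape of these calculations, here is a toy one-day Monte Carlo VaR sketch in Rust. Everything in it is invented for illustration (the factor model, the exposures, the scaled-down position and scenario counts); real risk engines revalue far more complex instruments under far richer scenario sets.

    // Toy one-day Monte Carlo VaR sketch; the model and all numbers are made up.
    // Assumes the `rand` and `rand_distr` crates.
    use rand::prelude::*;
    use rand_distr::Normal;

    fn main() {
        let n_positions = 10_000; // scaled down from the ~100k positions mentioned above
        let n_scenarios = 2_000;  // simulated overnight market scenarios
        let confidence = 0.99;    // confidence level reported to the regulator

        let mut rng = thread_rng();
        let std_normal = Normal::new(0.0f64, 1.0).unwrap();

        // Made-up linear sensitivities: each position reacts to one shared
        // market factor plus its own idiosyncratic noise.
        let exposures: Vec<f64> = (0..n_positions)
            .map(|_| 10_000.0 * std_normal.sample(&mut rng))
            .collect();

        let mut losses: Vec<f64> = (0..n_scenarios)
            .map(|_| {
                let market_shock = 0.01 * std_normal.sample(&mut rng);
                let pnl: f64 = exposures
                    .iter()
                    .map(|e| e * (market_shock + 0.002 * std_normal.sample(&mut rng)))
                    .sum();
                -pnl // record losses as positive numbers
            })
            .collect();

        // VaR at the chosen confidence level is simply a quantile of the loss distribution.
        losses.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let idx = ((confidence * n_scenarios as f64) as usize).min(n_scenarios - 1);
        println!("1-day {:.0}% VaR ~ {:.0}", confidence * 100.0, losses[idx]);
    }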
However, many parts of these calculations could be done on quantum computers almost instantly, given enough qubits. I realize that the technology and reliability are not there today, but hopefully they will come soon.
I would not be surprised if, by 2035, using quantum computers for these many-variable risk calculations has become (almost) mandatory for major European and North American financial companies.
What makes you say this? Which particular quantum algorithm do you think gives an exponential speedup for multi-variate risk predictions? And why do you think there is any chance in hell that a quantum computer will exist by 2035 that could even store an input of the size you're talking about, never mind have spare qubits to actually process it?
After working with a few large corporations on their DDoS protection solutions, I have not had a good experience with Verisign; they were not able to handle attacks or get things working.
However, I have had great experiences with Akamai and Cloudflare. I trust the people at Wikimedia will choose wisely.
I have learned that Verisign has one of the worst BGP mitigation/scrubbing solutions out there.
There are a few alternatives that have more experience and provide much better uptime, including solutions from Cloudflare and Akamai.
Any serious mitigation solution must be BGP-based, not proxy-based. Besides its technical merits and convenience, it also minimizes the risk of a benevolent controller (e.g. Matthew Prince of Cloudflare) ruining your company, because the provider becomes your upstream only during attacks. Otherwise the GRE tunnels are not in use, and the IP addresses are always yours.
We used Verisign for mitigation of a 44Gbps volumetric attack and it worked very well. We also evaluated Neustar, but Verisign's infrastructure seemed to be more robust.
That's your requirement, but it might not be Wikipedia's requirement. Ownership of IPs is really a technical detail invisible to most people; ownership of eyeballs by way of the domain name and top Google result is probably more important. Cloudflare doesn't impact that ownership other than being able to temporarily take you offline if they choose to terminate your site.
Still, large proxy-based CDNs do have the ability to completely bypass all the same-origin protections in the browser. Even if they are angels and don't abuse this trust for identity theft and surveillance, it makes them a juicy target for bad actors, state sponsored and otherwise.
A proxy is a perfectly acceptable “serious” solution for this type of problem, as well as nearly all of the rest. Wikipedia is not the kind of website that would warrant being removed from Cloudflare. What’s wrong with having an upstream provider for caching close to the user and other features when you’re not under attack?
That’s not what MITM means. I get that you don’t like Cloudflare but voluntary use of a CDN isn’t a MITM any more than, say, Amazon is a MITM because you host on EC2.
Cloudflare is in between the client and the server, decrypting, rewriting and (if set up right) re-encrypting the request/response. It masquerades as the server by presenting a proper certificate for the domain even though it is not the entity that is actually controlling the domain.
That to me sounds very much like MITM, although it is not a MITM attack since the entity controlling the domain opted into it, so basically it is voluntary MITM.
Using a VPS like EC2 is a different story, since the decryption happens within a layer that you control. Of course you need to choose a vendor you trust for that layer, but on EC2 the traffic Amazon sees is encrypted with keys they don't have and decrypted with keys stored on a layer I control. Amazon could read out the memory of my EC2 instance to get the keys, but their business depends on not doing so. So either I have a vendor that will always decrypt and read traffic (Cloudflare), or a vendor whose business depends on hypothetically being able to but not doing it. There is a clear difference to me.
That is the same for most CDNs (including CloudFront and all the other major offerings), so I'm not trying to single out Cloudflare.
If you don’t trust Cloudflare, don’t use them but there’s no meaningful security distinction between what they do and what AWS does: in both cases you have a vendor with the capability of violating your security and a promise that they won’t abuse that access.
This is why having a threat model is so important: it keeps you from wasting effort on things which sound like security but aren’t actually changing anything meaningful.
There is a security distinction, and this has been shown by, for example, Cloudbleed. Every step that has access to plaintext data is a potential attack vector and might be logging/leaking information.
Cloudflare’s business also depends on not messing with your traffic, right? It would certainly be easier for them to get your users’ content than for Amazon to do the same, but I think you still have to accept that risk with either. “Hypothetically being able to but not doing it” wouldn’t give me a whole lot of confidence if I were hosting some kind of shady website.
Sure, but since Cloudflare’s business is actively "messing" with all your traffic, all the time, it's a smaller technical step to do it some more, and it can also lead to accidents like Cloudbleed. Every step that has access to unencrypted data is a potential attack vector and might be logging/leaking data.
For example, you upload your private SSL key to Cloudflare. And I was talking about hosting on your own hardware/colos, like most large sites do (roughly 7x cheaper than AWS list prices on average).
Please specify in detail how you believe that’s an MITM using the standard industry definition. In particular, consider whether “attack” and “voluntary business agreement” are synonyms.
Breaking open encryption to monitor activity between users and other sites is a completely different thing than having a provider handle hosting for your site.
A better comparison would be CloudFront and Application Load Balancers, since you can expose your own EC2 server or load balancer and be end-to-end encrypted (unless AWS wanted to run commands on your instance, which they could do, but that's a different threat vector entirely).
That was the model I had in mind but it’s not really a meaningful distinction since the host could almost certainly compromise those servers as well. In any case, you’re trusting a third party rather than having their involvement maliciously imposed.
The original content was posted on IG. 8ch took the reposts down when it became known that it was connected to the real shooting. Watch the video with the 8ch founder explaining (unless YouTube took it down too). Matt was preparing for the IPO.
You appear to be extremely mad that anyone questions the power of political pressure and an angry mob.
Look, you can feel however you like about whether the high-profile takedowns are right or wrong, whether the CEO's promises after the Daily Stormer are hypocritical — but let's be clear-eyed about placing a site in a position where one outside person can do it real harm. The question you should look at is whether the risk is actually acceptable for your organization.
By your statement, then, Reddit was complicit with the Russian trolls during election season, because the bitcoin trolls who evolved into Trump trolls were not punished in the slightest (I have a list of 300+ usernames that are still active today).
The point is that Reddit tries to moderate, which is good enough for their providers (AWS/Fastly).
The 8ch takedown wasn't actually due to issues with moderation, since (at least based on the owner's video) 8ch removed the post, actively responds to real law enforcement requests, and the original post was actually posted to IG. The issue was that CF was getting enough bad press, and more importantly enough calls/concerns from real Enterprise clients (this is speculation on my part), to take down the website.
That's a valid stance, but they didn't host the website; they only provided DDoS protection for the actual host (which proceeded to drop 8ch once CF stopped providing the protection).
Might be late, but has anyone in CloudFlare tried to switch away from regex to something more efficient and powerful?
Tools like re2c can convert 100s of regexes and CFGs into a single optimized state machine (which does no backtracking, as far as I remember). It should easily handle 10s of millions of transactions per second per core if the complete state machine fits into the CPU's L3 cache (or lower), with a bit of optimization.
There is also Ragel [0], but I think that in this context deploying regexes as strings is safer than generating code and deploying that code (unless Ragel could generate webassembly).
Ragel has the advantage that CPU blowups happen at compile time rather than at run time. Other risks aside, they would have avoided this problem had they been using Ragel or something similar to pre-compile their patterns into deterministic machines.
The article says they're going to either switch to RE2 or Rust's regex, both of which use a DFA (a state machine) and have no backtracking.
But you do bring up a good point. RE2 and Rust's regex both compile the regex in the same process that executes it. Compiling the regex as part of your build process and then pushing the compiled form could have advantages.
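To make the "compiled once, no backtracking" idea concrete, here is a rough sketch with Rust's regex crate. The rule patterns and structure below are invented by me for illustration; this is not how Cloudflare's WAF actually works. It compiles all the patterns into a single RegexSet up front (at startup rather than at build time, which is the cheaper half of what you're describing), and matching is guaranteed linear-time, so a pathological input can't blow up the CPU the way a backtracking engine can.

    // Sketch: compile a pile of WAF-style rules once, up front, into one RegexSet.
    // Patterns here are invented for illustration only.
    // Assumes the `regex` and `once_cell` crates.
    use once_cell::sync::Lazy;
    use regex::RegexSet;

    static RULES: Lazy<RegexSet> = Lazy::new(|| {
        RegexSet::new([
            r"(?i)union\s+select", // hypothetical SQLi rule
            r"(?i)<script[^>]*>",  // hypothetical XSS rule
            r"\.\./\.\./",         // hypothetical path-traversal rule
        ])
        .expect("rules must compile at startup, not per request")
    });

    /// Returns the indices of every rule that matched, in a single linear-time pass.
    fn matching_rules(request_body: &str) -> Vec<usize> {
        RULES.matches(request_body).into_iter().collect()
    }

    fn main() {
        let body = "id=1 UNION SELECT password FROM users";
        println!("matched rule indices: {:?}", matching_rules(body));
    }

Going the rest of the way and shipping a fully pre-built, serialized state machine (closer to what you're describing) is more involved, and whether it's worth it probably depends on how often the rules change.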
There are a few comments about the fear that many of the top websites of the world get/use this data. I understand that people would be scared about this.
However, fraud is a big problem, and any site that is dealing with anything precious is (hopefully) doing whatever it can to prevent fraud, to protect both its resources/data and yours. From what I can tell from the JavaScript on some of the top 100 sites, it looks like many are using this data, and if the data is not what they expect, the transaction can be rejected.
I do not like it when a company like Facebook uses this data, but it is the tradeoff for allowing other companies to use it.
Not sure if someone from CloudFlare, Akamai, or another company (Coinbase?) can publicly comment on what they do.
It would be nice if browsers would at least notify users of its use.
Fraud can be solved by having proper authentication solutions in place. We shouldn't leave fingerprinting vulnerabilities open just because the banking industry can't be bothered to come up with something better than a static card number & expiry date for authentication.
I agree that just using a credit card number and the expiry date is definitely bad, but I am not aware of any authentication solution that fully solves this problem (but please enlighten me if there is one).
We know passwords have many problems.
Two factor authentication with SMS has problems with SIM hijacks.
Physical tokens (e.g. RSA tokens) have problems when users lose them.
Feel free to enlighten me if someone has a better solution for all of this.
Ironically, both iOS and Android have payment solutions built in, and on Apple devices this uses the secure enclave. Fingerprinting a phone is a vastly inferior solution to that.
Which is useful if you happen to live in an Apple Pay / Android Pay supported country with contactless payment widely rolled out.
However, considering contactless was pretty rare even in the US until recently, it’s wise to have other solutions - and cover other use cases like online banking, loan applications etc etc.
In the real world people seem to hold onto their cards pretty well - why not just use that? The majority of phones nowadays have an NFC chip capable of talking to contactless-enabled cards (most of them), and for those that don't, smartcard readers are pretty cheap (banks give out those calculator-like things for logins, so why not add a USB port or Bluetooth capability to them for mobile devices).
2-factor authentication doesn't necessarily imply SMS. TOTP apps like Google Authenticator are reasonably secure.
Finally, auth doesn't have to be 100% bulletproof (in fact, fingerprinting isn't either); it just has to solve the majority of problems. There's always someone who's going to be stupid enough to get compromised despite all the security solutions, but as long as the majority of users are safe, all is good.
I can. During a short stint at an ad tech company in Shanghai in 2016 (my second time in ad tech, after running an ad farm myself in my teen years), I noticed that Samsung Internet (a browser) does not require permission for sensor data. Then, just a few months later, the Chrome team shipped sensor access without permission prompts too.
I remembered reading about Kalman techniques used in radio navigation back in high school, and it instantly came into my head that you can just as easily reverse the process: subtract the clean, Kalman-filtered signal from the noisy one to get an "anti-pattern."
And with that you can easily do whatever you want, from FFT, to reverse Manchester coding, to more esoteric techniques to quantise it.
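For anyone wondering what that looks like in practice, here is a toy version of the residual idea: run a dead-simple scalar Kalman filter over one accelerometer axis and keep raw minus filtered as the "anti-pattern". Everything here (the noise parameters, the sample values) is invented for illustration; the pipeline we actually built was considerably more involved.

    // Toy sketch of the residual idea; numbers are made up, the real pipeline was fancier.
    struct ScalarKalman {
        x: f64, // current state estimate
        p: f64, // estimate variance
        q: f64, // process noise variance
        r: f64, // measurement noise variance
    }

    impl ScalarKalman {
        fn new(q: f64, r: f64) -> Self {
            ScalarKalman { x: 0.0, p: 1.0, q, r }
        }

        // One predict + update step for a constant-signal model; returns the filtered value.
        fn step(&mut self, z: f64) -> f64 {
            self.p += self.q;                   // predict: variance grows by process noise
            let k = self.p / (self.p + self.r); // Kalman gain
            self.x += k * (z - self.x);         // update with measurement z
            self.p *= 1.0 - k;
            self.x
        }
    }

    fn main() {
        // Pretend 60 Hz accelerometer samples for one axis (invented values).
        let raw = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15];

        let mut kf = ScalarKalman::new(1e-4, 1e-2);
        // The residual stream is what carries the sensor-specific noise signature;
        // from here you feed it to an FFT or whatever feature extraction you like.
        let residuals: Vec<f64> = raw.iter().map(|&z| z - kf.step(z)).collect();
        println!("residuals: {:?}", residuals);
    }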
Everybody in the collective got quite fired up about it, thinking it was a "that's it" moment for us to do some sweet arbitrage on ad exchanges. We were a few weeks from filing a patent, but it was decided to keep it hidden after all, with the logic that: 1. big ad players would shoot us down, 2. botters would get a whiff of it, 3. patents don't work for "small" companies.
I got a symbolic bonus; the arbitrage results were far from as good as originally expected. At that point we found a silly thing: 20 to 30% of MoPub traffic had accelerometers and gyros playing the same data in a 5-second loop!
Later, after I left the company, I learned that our sales people finally managed to sell it under wraps to "somebody big", whose identity I was not told.
I do remember, right around that time, flaming on Bugzilla with either Google or Mozilla employees who claimed that you can't extract a fingerprint from 60 Hz data, with me claiming otherwise to no avail.
My point was that it should require a mandatory permission prompt, and I remember being turned down.
"Fraud" presupposes a surveillance advertising ecosystem. The data is not being used to verify transactions, but to figure out if your ad impression should count as bogus or real. Change the business model and a lot of incentive for this highly invasive tracking goes away.