I always wondered about this. Do companies tie the credit card to an identity to block or do they just block the cc number?
If the latter, seems like a small friction point for a consumer. Given how often cc numbers change and how many an (American) consumer has, this won’t block anything unless you are charging back more than once every few months.
It's up to the company, but since many companies don't want to keep card numbers around (and some processors don't let you see the card number anyway), they're probably more likely to block on identity. Maybe flag the IP address of the transaction for "additional screening" on all future transactions, etc.
IPs are notoriously unreliable for identity pinning, particularly in this age of CGNAT.
If they can’t or don’t want to keep cc numbers (makes sense considering how painful PCI guidelines are anyway), does that mean they need to rely on more tools from the processors, or on user accounts maintained by the merchant themselves?
CC numbers are also bound to get recycled eventually as cards expire and/or get replaced... even if you block a card, it might have a new owner 6 months or so later.
The number space between the first 6 digits (BIN) and the Luhn check digit is 9 digits — that's 1 billion numbers that issuers can give out before a collision happens.
That doesn't seem to be more than an order of magnitude off between available numbers and issued cards - a cursory search says there are over a billion credit cards in circulation in the US alone.
I think you're confusing the available number space per BIN (often used for a single card product) with the number of available numbers per network.
Visa and Mastercard each have 14 digits worth of permutations to play with, excluding the first and last digits. That's one hundred trillion numbers.
Assuming 8 billion people in the world, each person can hold 12,500 of either Visa or Mastercard before a collision happens. (As above, the number space is smaller because of how BINs are issued, but that's still plenty.)
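To make the structure above concrete, here's a rough Python sketch of how a 16-digit card number decomposes: a 6-digit BIN, a 9-digit account space (10^9 numbers per BIN), and a final Luhn check digit. The BIN and account number below are standard test values, not real cards.

```python
def luhn_check_digit(partial: str) -> int:
    """Return the digit that makes partial + digit pass the Luhn check."""
    total = 0
    # Walk right-to-left; once the check digit is appended, every second
    # digit from the right gets doubled (and 9 subtracted if it exceeds 9).
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:  # these positions are doubled after the check digit is added
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

bin_prefix = "411111"   # well-known Visa test BIN (first 6 digits)
account = "111111111"   # one of the 10**9 account numbers available per BIN
card = bin_prefix + account
card += str(luhn_check_digit(card))
print(card)  # 4111111111111111 — the classic Visa test number
```

So the check digit costs nothing in terms of number space per se; it's the 6-digit BIN allocation that carves the 14 usable digits into per-product pools of a billion each.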
Except the banks have "helpfully" provided a service to merchants to tell them, "this card has expired, here is the new number to charge" (or expiry/CVV).
I remember getting into an argument with a bank teller about me wanting to block/dispute transactions and how they kept approving transactions. "But you have an agreement with the gym..." That's between me and the gym, not for you to facilitate on their behalf.
Obnoxiously that doesn't cover all the edge cases for consumers. Payments from my watch recently started failing with a generic "declined" error. After calling my bank I worked out that my credit card had been replaced some months ago in advance of a recent expiry - I updated my phone wallet at the time, but my watch's wallet didn't give any indication that it was trying to use an expired card.
Declaring someone an enemy does not automatically lead to war. America considered the USSR an enemy of democracy for 50 years. They never went directly to blows.
For production Postgres, I would assume it’s close to no effect?
If someone is running postgres in a serious backend environment, I doubt they are using Ubuntu or even touching 7.x for months (or years). It’ll be some flavor of Debian or Red Hat still on 6.x (maybe even 5?). Those same users won’t touch 7.x until there have been months of testing by distros.
Ubuntu is used in many serious backend environments. Heroku runs tens of thousands (if not more) instances of Ubuntu on its fleet. Or at least it did through the teens and early 2020s.
And they are right; this is because a lot of junior sysadmins believe that newer = better.
But the reality:
a) may get irreversible upgrades (e.g. new underlying database structure)
b) permanent worse performance / regression (e.g. iOS 26)
c) added instability
d) new security issues (litellm)
e) time wasted migrating / debugging
f) may need rewrite of consumers / users of APIs / sys calls
g) potential new IP or licensing issues
etc.
A few of the legitimate reasons to upgrade something are:
a) new features provide genuine comfort or performance upgrade (or... some revert)
b) there is an extremely critical security issue
c) you do not care about stability because reverting is uneventful and production impact is nil (e.g. Claude Code)
but 99% of the time, if it ain't broke, don't fix it.
On the other hand, I suspect LLMs will dramatically decrease the window between a vulnerability being discovered and that vulnerability being exploited in the wild, especially for open-source projects.
Even if the vulnerability itself is discovered by means other than an LLM, it's trivial to ask a SOTA model to "monitor all new commits to project X, decide which ones are likely patching an exploitable vulnerability, and then write a PoC." That's a lot easier than finding the vulnerability itself.
I won't be surprised if update windows (for open source networked services) shrink to ~10 minutes within a year or two. It's going to be a brutal world.
Too often I see IT departments use this as an excuse to only upgrade when they absolutely have to, usually with little to no testing in advance, which leaves them constantly being back-footed by incompatibility issues.
The idea of advanced testing of new versions of software (that they’ll be forced to use eventually) never seems to occur, or they spend so much time fighting fires they never get around to it.
I’ve seen more 5k+-core fleets running Ubuntu in prod than not, in my career. Industries include healthcare, US government, US government contractor, marketing, finance.
I'd say about 2/3 of the places I've worked started on Linux without a Windows precedent other than workstations. I can't speak for the experience of the founding staff, though; they might have preferred Ubuntu due to Windows experience--if so, I'm curious as to why/what those have to do with each other.
That said, Ubuntu in large production fleets isn't too bad. Sure, other distros are better, but Ubuntu's perfectly serviceable in that role. It needs talented SRE staff making sure automation, release engineering, monitoring, and de/provisioning behave well, but that's true of any you-run-the-underlying-VM large cloud deployment.
Everyone who has not hit this bug thinks it’s user error… It’s not. It happened to me a few days ago, and the speed at which I tore through my 5 hour usage cap was easily 10x faster than normal.
Also: sub agents do not get you free usage. They just protect your main context window.
Reading through this thread, it seems likely this is a KV cache "bug". They're likely doing too many evictions of the LLM cache, so the context is being reloaded too often.
It's a "bug" in the sense that it's probably an intended effect of capturing the costs of compute, but it surfaces the fact that they oversold compute to the point where they can't keep the KV cache hot, and now it's thrashing.
In the past it had less to do with seizing the vessels and more to do with keeping financial flows between organizations offering shipping services and oil hidden from the banking system. America could have easily seized any ship it wanted to during the sanctions over the past decade. It didn't, because the sanctions are American constructs: they don't apply on the open seas, where UNCLOS matters. America can still seize them, but the legality is murky and comes with a reputational cost.
Now with Hormuz closed, America needs every last oil barrel moving so the economy doesn’t grind to a halt. Remember, it’s a war of choice for the US. We don’t need Iran gone as much as we want low oil prices.
> the sanctions are American constructs: they dont apply on the open seas where UNCLOS matters
Technically correct. But the way these countries evade U.S. sanctions is by flying false or no flag. That, in turn, makes them vulnerable under UNCLOS's anti-piracy rules.
> it isn’t America’s determination that a registration is fraudulent. It is the flag state’s.
Sort of. If there is no flag, it's America's determination. And in many of the seizure cases, the flag state confirmed a fraudulent registration. (I believe there was one around Venezuela falsely registered with Panama.)
I’m not an AF vet so I don’t have firsthand knowledge, but what’s the over-under that the US injuries were the crew trying to get the plane ready to fly after the alert came in? I think the number of injuries lines up closely with the expected crew complement.
Towards the bottom they list some satellite imagery and a statement indicating they are possibly using the taxiways as parking.
Still leaves open the question of who might have been injured and where, but at least answers how the Iranians could have possibly hit a taxiing plane — they didn’t.
I don’t disagree with any of your assessments, but I don’t know if it’s a bigger mistake than Iraq… yet. That war was a 10-year (longer if you count ISIS) debacle that cost trillions.
Let’s wait a few years before declaring this mistake the bigger one.
However, one point I agree with that might make this war worse: the Gulf states are showing some serious buyer's remorse about staying in the US orbit. Both the uselessness of America’s strategy and the almost open prejudice Trump shows towards the Arabs vs Israel in the decision tree of this conflict are unsettling for the Gulf states.