Both you and the Guardian are confused (or perhaps the Guardian is just trying to ride the popular understanding of the "Iron Dome" as a super catch-all missile defense system vs reality). The Iron Dome has nothing to do with shooting down ballistic missiles: it targets short-range rockets and artillery like the ones fired by Hamas and Hezbollah, and has been modified to also target slow-moving drones (although the Iron Beam is intended to be the main drone defense system in the future). The Iranian missiles are targeted by different systems: David's Sling and the Arrow 2 and 3.
The Iron Dome does not depend on the American radar system in Qatar that Iran hit. It would be crazy for it to do so when it only targets short-range attacks. If someone is telling you that the "Iron Dome is blind" because an American radar in Qatar got hit by a missile, you should probably lower your trust in that source, since not only is that not true, it doesn't even pass the sniff test for anyone who knows what the Iron Dome is.
> The Iron Dome has nothing to do with shooting down ballistic missiles
This is not true, Tamir interceptors have been upgraded to target ballistic missiles. It is extremely visible when this happens, as the interceptors fly a very different path than usual.
you are arguing semantics; both I and the Guardian are using the term "iron dome" as a collective term for all air defense systems in Israel (not just that one system built to counter cheap rockets), because all these systems are integrated into one military network, including the GCC/CENTCOM radars that were destroyed.
if you replace "iron dome" with "air defense network" everything else would still be true
The problem is you do not understand how these systems work and are making claims that don't pass the sniff test for anyone who does know how they work. For example, you claim multiple times that Shahed drones have somehow exacerbated these Iron Dome missile interceptor issues, and now claim you're not talking about the literal Iron Dome — you're talking about who knows what (you don't specify any actual, concrete system and instead use a metaphorical understanding from the popular press). The problem is: actually, the literal, real Iron Dome does target Shaheds! So if it's the radar system that was the problem and caused the metaphorical Iron Dome to be "blind" — why did drones matter, if those are targeted by the literal Iron Dome that doesn't use that radar? Do you mean David's Sling, which targets missiles and drones? But David's Sling is a medium-range system that doesn't use the American radar in Qatar either! Arrow 3? Guess what — it has nothing to do with Shaheds, and has nothing to do with the American radar system either — it uses an IAI radar system.
The Iranian hit on the American radar in Qatar hasn't left the "Iron Dome" blind, figuratively or literally, and your proposed mechanisms of action don't make sense.
you have constructed a strawman argument and are arguing with it, mostly semantics and splitting hairs.
Perhaps a problem here is that we are mixing up two theatres: Israel and GCC.
Iron Dome exists in Israel, but the radars and air defense network were degraded in the GCC; it is the Patriots there that are having interceptor issues and Shahed drone issues.
Israel is not being bombed by Shaheds; it is being bombed by ballistic missiles that they are having problems intercepting and alerting the population about in advance.
you can check the source links I provided, which confirm that the radars in the GCC were part of the early warning system for Israel, and that hitting the radars in Qatar has directly impacted the AD network in Israel (reduced alert time significantly)
None of your links support that claim or even try to make it. The Haaretz article is complaining about a day of unusually short missile notifications on March 7, a week after the Iranian strike on the radar (and it is now a month-old claim about an effect that lasted only a day — if it was due to the radar, why did it not start the day the radar was actually hit, and why did it last only a day when the radar remains ruined today?). One of your articles is about drones, which have nothing to do with the radar system, and you are now backpedaling all of your drone-related claims for Israeli air defense despite making many drone claims earlier (why is that?). Another is the Guardian article that doesn't make that claim, and the last is about the American Patriot missile defense system, not Israeli ones.
Recent reporting has indicated that contrary to your claim that the American radar system getting hit has left the Iron Dome "blind," Israeli missile detection has actually improved over the course of the war:
ohh, they use AI... this sounds like a YC startup pitch, I bet they also use AI agents and Claude Code to improve air defense...
then why were all these radars even needed in the first place? why did US taxpayers spend billions procuring, installing, and maintaining these radars if simple fine-tuning with Claude Code would work just as well??
Well, I see you've graduated from wishcasting the Iron Dome being "blinded" by a radar it doesn't use to being confused that shooting down missiles involves AI.
Depending on what you call AI, AI has been used for targeting for a while. It's just usually called 'automated control' or something. This is more a re-categorizing of targeting algorithms and calling it AI.
not sure you are aware, but you come across as someone who is stuck in denial of reality.
you are arguing against official announcements from the IDF explaining why the civilian alert system now only gives short notice and will do so from now on, and you are arguing on the basis of fallacious rhetoric.
"I am morally correct therefore I need not be factually correct".
Stop doing this: it completely undermines the political argument because it makes it clear you are as uninterested in reality as the current administration.
It's rich to declare "they're lying" while happily being uninterested in the truth or clear communication.
Iron Dome is a specific interceptor system, and you can trivially look up what it is on Wikipedia.
Iron Dome is still not a catch-all term for the entire Israeli defense system, and all the other claims the poster has made are not supported by their links or evidence.
As noted: Iron Dome intercepting ballistic missiles is an apparent new capability it was not expected to have, so it's kind of weird to turn up and say "Iron Dome can't intercept ballistic missiles anymore!" when no one except whoever developed the upgrades would have expected it to do that in the first place, and Israel has a number of other ballistic missile interceptor systems that are still unrelated to THAAD.
FWIW, Pierre's "Code Storage" project [1] seems like it simplifies a lot of the operational overhead of running git servers, if what you want is "an API for git push". Not affiliated with the company (and I haven't tried it myself, so I can't vouch for how well it works), I just think it's a neat idea.
I think "Code Storage" (definitely needs a unique name), is less an API for git push (surely git push is that API?), and more an API for "git init"? It seems to be Git as infrastructure, rather than Git as a product. i.e. if you're using it for a single repo it's probably not a good fit, it's for products that themselves provide git repos.
I run a small open source LLM inference company, Synthetic.new. As far as I can tell, CNBC isn't reporting this accurately. The problem isn't that Oracle is building "yesterday's data centers": they're building Blackwell DCs! Those are today's DCs.
The problem appears to be that Oracle is building today's DCs... Tomorrow. And by the time they come online, Vera Rubins will be out, with 5x efficiency gains. And Oracle is unlikely to want to drop the price of Blackwells 5x, despite them being 5x less efficient.
It's a little unclear to me how bad this is. Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.
OTOH it's possible someone at Oracle screwed up and committed to buying Blackwells at today's prices, delivered tomorrow. Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Regardless, CNBC's reporting seems pretty unclear on what actually happened and whether this is actually bad or not.
I really don't want to overrule your expertise in this regard, but a 5x efficiency gain in a single generation feels like it's too much, especially considering how newer process nodes have been yielding smaller and smaller improvements.
Here's a synthetic benchmark page listing every GPU in recent memory. True, it's not AI, but if we look at the 1080 Ti, a 9-year-old card at this point, and compare it with the 5090, we see the gains were 190/74 = 2.56x in a timespan that involved multiple die shrinks and uArch changes.
I think these numbers might not hold up on IRL workloads, and afaict older datacenter cards still hold up well and are being used in production.
Newer process nodes are not the main avenue of improvement. What those transistors are used for is more important and it’s plausible that improvements between generations can increase performance by multiples on a specific task. All of the improvements aren’t necessarily in the chip itself either.
E.g. the next gen might have hardware inference for lower bits, more memory bandwidth, etc.
You could just give the TLDR: by far the biggest improvement across the different generations of nVidia chips is calculating faster at half the precision. For Blackwell vs Hopper it was "double performance", by which they mean Blackwell can calculate with NVFP4 at twice the rate Hopper can calculate at FP8. Then go back generations all the way until you arrive at FP64, where we started. They even made a slight detour to "FP128".
Decide for yourself if this is a real improvement. You should probably consider that nVidia did not just ship the new chips, but also demonstrated training a neural net with NVFP4.
It's not the only improvement, but it is by far the biggest.
As for the future: nobody's gotten FP2 to work satisfactorily yet. But hey, maybe at nVidia's next conference. Even NVFP4 is not actually 4 bits (meaning various parts of the computation don't actually happen at 4 bits), and neither was FP8 (you could use it like that, but people didn't).
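To make that scaling concrete, here's a toy sketch of the precision ladder (the FP64 baseline below is an invented number, and real chips also gain from sparsity, clocks, and memory bandwidth, so this only captures the precision term):

    # Toy model: peak tensor throughput roughly doubles each time the
    # working precision is halved, everything else held constant.
    def peak_ops(base_fp64_tflops, bits):
        return base_fp64_tflops * (64 / bits)

    base = 30  # hypothetical FP64 TFLOPS, not any specific chip
    for bits in (64, 32, 16, 8, 4):
        print(f"FP{bits:>2}: ~{peak_ops(base, bits):.0f} TFLOPS")

Under that toy model the FP4 line ends up 16x the FP64 line, which is roughly the "double performance per generation" story told four times over.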
> but a 5x efficiency gain in a single generation feels like it's too much, especially considering how newer process nodes have been yielding smaller and smaller improvements
The efficiency is in other areas too, e.g. memory, network, etc. It's TOTAL.
> Here's a synthetic benchmark page listing every GPU in recent memory
The lack of GPU gains isn't because of process nodes. Nvidia, and later AMD, stopped investing in that direction; they started optimizing for AI, not graphics.
they are saying what you are saying. At least Deirdre Bosa did. I think there are a lot of folks internally who don't understand the gravity of it and keep questioning it.
You are right about the building of today's DCs. There is a small part of me that feels Oracle might be a bit toxic long term with all this debt he and his kid have taken on. And this could be the first reaction to it.
There are definitely companies out there working towards bots that can't be readily distinguished from human. Either for fake "organic" advertising or for propaganda purposes.
One way to test and refine the bots is to have them post in more discerning forums like HN, tweaking the system prompt until people stop calling them out as fake.
Once nobody can tell any more, then the comments will be subtly altered to deliver the intended message.
Personally, I suspect that the classic pseudonymous forums are cooked. Within five years, they'll all be totally overrun by chatbots and their "value" will tank.
The only recourse will be mobile-only "chat apps" that guarantee 100% human participation through specific hardware device and configuration attestation (TPMs, etc...), and also validating via the gyros that the device is moving appropriately for keypresses, etc...
Everything else will be > 50% bots soon, overrun by propaganda, etc...
Yes, I know, we're most of the way there already. Reddit and Twitter are already sinking into the swamp of sadness.
But trust me, it can get worse than that! Much worse.
I am of the opinion that it will be like this until voice becomes the UI, and that the next interface for this type of thing will be guarded against fraud. Wag the Dog has become the standard, and it's going to be this aspect of personal agency that will prevail and solve the issue you are talking about.
It’s because Oracle Cloud had a lot of unused capacity at the beginning. Because no one wanted to use Oracle Cloud. Cheap compute was hard to say no to.
Likely aimed at classified/defence environments. In those spaces, hardware typically takes 18–36 months after commercial deployment before it’s approved—due to firmware vetting, side-channel analysis, crypto validation, and similar processes.
Meanwhile, commercial operators have already deployed their hardware for public workloads. Existing Blackwell capacity won’t just be shifted into classified environments—governments don’t repurpose hardware from unclassified infrastructure for secret/TS systems. That deployed stock will stay in the private sector for hosted AI workloads.
For many high-security use cases, new Blackwell systems may effectively be the only viable option, especially given the slow review cycles around new firmware and GPU software stacks. Newer chipsets will also be prioritized for training due to performance gains.
Oracle likely recognizes this dynamic and is betting competitors may eventually need to deploy in their data centers. Governments haven't historically deployed GPU capacity at this scale (beyond ASIC/FPGA crypto workloads), and likely don't have large pools of pristine Blackwell hardware available.
They’re also purchasing late in the cycle, which may work in their favour.
A 5x improvement of energy efficiency in just the GPUs translates to more like a 50% reduction in total power usage, which is significant but doesn't warrant an 80% reduction in pricing. Especially since Nvidia will charge more for the same card - they have been pricing things pretty aggressively.
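Back-of-the-envelope for that ~50% figure (the GPU share of total facility power below is an assumed round number, not a measured one):

    # Assume GPUs draw ~60% of total facility power; cooling, networking,
    # storage and conversion losses make up the rest and stay fixed.
    gpu_share = 0.60           # assumption
    gpu_efficiency_gain = 5.0  # the claimed "5x" per-GPU gain

    new_total = gpu_share / gpu_efficiency_gain + (1 - gpu_share)
    print(f"Facility power: {new_total:.0%} of before (~{1 - new_total:.0%} reduction)")

With those assumptions you land at roughly half the facility power, nowhere near the 5x the headline number suggests.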
And on the DC side they will be building to a power and heat budget. If Vera Rubin changes the power-density-per-rack equation, that may have some impact. But thinking rationally, if the flops per kW per sq ft are no worse than Blackwell, no problem. If they are a lot better, then even if the kW per sq ft is higher you can just space the racks out a little.
While we have you here, could you please clarify a point in your privacy policy?
> For data collected from the UI or other usage: We retain the personal information described in this privacy notice for as long as you use our Services
I have two quick questions:
1. Why are UI prompts and responses kept for the entire life of the account?
2. When an account is closed, is the data actually deleted or just de-identified?
> Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs.
This is what I don't understand. Why is the article making the assumption that the DC itself is tied to a particular GPU generation? AWS doesn't knock down a building and start over every time Intel releases a new Xeon.
Xeons have a much longer shelf life and diverse workloads. If you order hardware specifically for LLM inference and then some new hardware/model combination is much better at that (which it will be, because a lot of people are working on that), you might be in trouble.
It's like setting up a warehouse of GPUs to mine bitcoin while others are switching to ASICs.
No I mean inference. The idea is that inference demand will be massive and a race to the bottom with razor thin margins.
Training costs can be amortized over the entire lifetime of the model, but if you lose money on inference or can't offer competitive usage limits for subscribers, there's no amortizing that.
No, it's all about having the top model first, and training time is what's crucial. OpenAI has already shown willingness to bleed money for the sake of brand and we can expect that to continue.
Tensor core performance is inversely proportional to precision across all generations (i.e., reducing precision by a factor of 2 increases OPS by a factor of 2). 8-bit precision will give you the same improvement ratio. A100/H100 didn't support 4-bit if I remember correctly.
So FP4/INT4 will likely improve the same 30% OPS/W. You could get a separate improvement by reducing precision, but going 1-bit for 4x improvement feels unlikely for now.
> Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.
Or we'll get a supply problem and they get nothing, or not enough.
Tomorrow’s DC, never. Tada!
> Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Other reporting says this is very much the case. Stargate barely has some of the land cleared, but the buildings were supposed to be finished and have GPUs installed over the course of 2026.
There's also the indicator of Nvidia giving out billion-dollar deals to other companies such that they could commit to buying even more Blackwells to keep production going. The chips from those new deals don't have anywhere to go; everyone already spent their cash on getting shipped chips that they're still installing today (apparently some are even sitting in warehouses).
Hey Reiss, I just checked Synthetic. So nice to see indie providers for smaller LLMs. I am personally building products to run only with small (actually < 20b) models. My aim is for laptop usage. Would love to know what plans you have for models smaller than you have currently. Industrial use is all about smaller models IMHO
> The problem appears to be that Oracle is building today's DCs... Tomorrow.
By the time Vera Rubins are available at scale, will they immediately be put into DCs, or will tomorrow's chips be running... the day after tomorrow?
I think the difference is that the other hyperscalers are doing this out of the enormous cash rivers produced by their other profitable businesses, at a rate less than that at which profits are flowing in, whereas Oracle is funding it out of debt, with AI capex in 2026 projected to reach levels nearly as high as their expected revenue (not profits) in the same period.
If the hardware refresh rate makes a substantial share of data center cost function more like opex than capex, the companies funding it out of operations (especially operations of what are essentially monopoly businesses, in the sense of pricing power), even if it isn't the operations it powers specifically, are fine in the near-to-intermediate term (barring exogenous shocks to those other businesses), whereas Oracle, funding it with a debt bonanza, is in a different position.
Google, Amazon, Meta, etc don't have to wait 12 or 24 months for their big data center to open. They already have lots of DCs to cram all the NVidia cards into, right now.
> And Starlink / xAI is going to shoot them into space.
I highly doubt that. They claim they want to shoot them into space, but I don’t believe a word of it until I see it happen (and see it work). It’s no more real than hyperloop.
DCs in space are hype and actually make no rational sense when you figure out the size of the radiators you'll need; and while solar cells are more efficient in space, they aren't that much better.
The Google paper (https://arxiv.org/pdf/2511.19468) didn’t seem too concerned with radiator mass/size when I skimmed it, but maybe I just missed it. My understanding is that if you run the chips relatively hot (and maybe boost with heat pumps? But then you’re not quite as solid state, and maintenance is tough up there), the radiation ability increases enough such that you can make the radiators slightly smaller than the solar panels, and they’d sit on the dark side of the panels. Many people like to point to the ISS system and scale that up, but there’s a big difference between a system assembled in space and meant to keep humans at human temps vs mass manufactured on the ground and keeping things around 100C.
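For intuition on the temperature point, here's a hedged Stefan-Boltzmann sketch (the emissivity, two-sided radiator, and temperatures are my assumptions, and it ignores absorbed sunlight and albedo, which would make real radiators bigger):

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def radiator_area_m2(heat_w, temp_c, emissivity=0.9, two_sided=True):
        """Area needed to reject heat_w purely by radiation at temp_c."""
        t_k = temp_c + 273.15
        sides = 2 if two_sided else 1
        return heat_w / (sides * emissivity * SIGMA * t_k**4)

    for t in (40, 70, 100):
        print(f"{t} C: ~{radiator_area_m2(100_000, t):.0f} m^2 to reject 100 kW")

Running the electronics around 100 C roughly halves the radiator area you'd need versus ~40 C, which is the whole argument for keeping the chips hot up there.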
All DCs are big concrete rooms that can supply so much power per sq area and remove so much heat per sq area (the two are related, of course, since the heat comes from dissipating the power). Variation is just in the density of whatever sort of fancy resistor you plan to put in the concrete room.
The Iranians are denying the Azerbaijan airport attack and the attack on the Saudi oil facilities. A Mossad false flag to provoke a war between Azerbaijan and Iran and motivate the Gulf states to attack Iran? It could be that an Iranian drone hit the Saudi oil facility unintentionally. It's like the girls' school that was hit in Iran: it looks like it was a target because the building was once used by the Iranian military. Old/stale database data the AI used to pick its targets?
To be clear, I am no fan of the Iran regime. Just trying to keep my head above the BS/propaganda that gets put out/repeated by the news agencies. There are no "good guys" here.
The solid bad guy(s) here are BB and tRump (for bending over for BB). No US president has ever been so stupid as to let an Israeli leader con them into a war with Iran.
Also, you are forgetting that BB/Israel wants the Gulf States to get into the war against Iran. So it is highly plausible that some of those hits are the Israelis'. The Israelis are masters of subterfuge.
It's really unfortunate the term "free software" took off rather than e.g. "libre software", since it muddies discussions like this. The point of "free software" is not "you don't have to pay," it's that you have freedom in terms of what you do with the code running on your own machine. Selling free software is not incompatible with free software: it's free as in freedom, not as in free beer.
Nobody in this comments thread appears to be confused by or misusing the term "free software". We're talking about free software vs (commercial) proprietary software.
> I am still surprised most Linux Distros haven't changed their package managers to allow for selling of proprietary solutions directly
Free packages remain unaffected, but now there are optional commercial options you can pay for, which fund the free (as in free money) infrastructure you already take advantage of so that these projects are fully sustainable. I imagine some open source projects could even set themselves up to receive donations directly via package managers.
I promise you, everybody understands the general idea, but adding a built-in store to your operating system is far from a neutral action that has no second- or third-order effects. It isn't that it somehow affects "free" packages. Incoming text wall, because I am not very good at being terse.
- It creates perverse incentives for the promotion of free software.
If development of the operating system is now funded by purchases of proprietary commercial software in the app store, it naturally incentivizes them to sell more software via the app store. This naturally gives an incentive to promote commercial software over free software, contrary to the very mission of free software. They can still try to avoid this, but I think the incentive gets worse due to the next part (because running a proper software store is much more expensive.)
Free software can be sold, too, but in most cases it just doesn't make very much sense. If you try to coerce people into paying for free software that can be obtained free of charge, it basically puts it on the same level as any commercial proprietary software. If said commercial software is "freemium", it basically incentivizes you to just go with the freemium proprietary option instead that is not just free software, but also often arguably outright manipulative to the user. I don't really think free software OS vendors want to encourage this kind of thing.
- It might break the balance that makes free software package repositories work.
Software that is free as in beer will naturally compete favorably against software that costs money, as the difference between $0 and $1 is the biggest leap. Instead of selling software you can own, many (most?) commercial software vendors have shifted to "freemium" models where users pay for subscriptions or "upsells" inside of apps.
In commercial app stores, strict rules and even unfair/likely to be outlawed practices are used to force vendors to go through a standardized IAP system. This has many downsides for competition, but it does act as a (weak) balance against abusive vendors who would institute even worse practices if left to their own devices. Worse, though, is that proprietary software is hard to vet; the most scalable way to analyze it is via blackbox analysis, which is easily defeated by a vendor who desires to do so. Android and iOS rely on a combination of OS-level sandboxing and authorization as well as many automated and ostensibly human tests too.
I am not trying to say that what commercial app stores do is actually effective or works well, but actually that only serves to help my point here. Free software app stores are not guaranteed to be free of malware more than anything else is, but they have a pretty decent track record, and part of the reason why is because the packaging is done by people who are essentially volunteers to work on the OS, and very often are third parties to the software itself. The packages themselves are often reviewed by multiple people to uphold standards, and many OSes take the opportunity to limit or disable unwanted anti-features like telemetry. Because the software is free, it is possible to look at the actual changes that go into each release if you so please, and in fact, I often do look at the commit logs and diffs from release to release when reviewing package updates in Nixpkgs, especially since it's a good way to catch new things that might need to be updated in the package that aren't immediately apparent (e.g.: in NixOS, a new dlopen dependency in a new feature wouldn't show up anywhere obvious.)
Proprietary software is a totally different ball game. Maintainers can't see what's going on, and more often than not, it is simply illegal for them to attempt to do so in any comprehensive way, depending on where they live.
If the distributions suddenly become app store vendors, they will wind up needing to employ more people full time to work on security and auditing. Volunteers doing stuff for free won't scale well to a proper, real software store. Which further means that they need to make sure they're actually getting enough revenue for it to be self-sustaining, which again pushes perverse incentives to sell software.
What they wanted to do is build a community-driven OS built on free software by volunteers and possibly non-profit employees, and what they got was a startup business. Does that not make the problem apparent yet?
- It makes the OS no longer neutral to software stores.
Today, Flatpak and Steam are totally neutral and have roughly equal footing with any other software store; they may be installed by default in some cases, but they are strictly vendor neutral (except, obviously, in SteamOS). If the OS itself ships one, it lives in a privileged position that other software stores don't. This winds up with the exact same sorts of problems that occur with Windows, macOS, iOS and Android. You can, of course, try to behave in a benevolent manner, but what's even better than trying to behave in a benevolent manner is putting yourself in as few situations as possible where you need to, in order to maintain the health of an ecosystem. :)
--
I think you could probably find some retorts to this if you wanted. It's not impossible to make this model work, and some distributions do make this model work, at least insofar as they have gotten now. But with that having been said, I will state again my strongly held belief that it isn't that projects like Debian or Arch Linux couldn't figure out how to sell software or don't know that they can.