Hacker News | new | past | comments | ask | show | jobs | submit | dgrin91's comments | login

What's the normal stockpile? Isn't the entire US national strategic oil reserve only enough for like 1 month of US usage? 6 weeks of stockpile does not seem like a crazy number to me.

It fluctuates wildly based on the whims of whoever is in charge, but the last alteration of the rules indicates 90 days of US imports (it doesn't specify usage).

"International obligations

As a member of the International Energy Agency (IEA), the United States must stock an amount of petroleum equivalent to at least 90 days of U.S. imports. The SPR contained an equivalent to 141 days of imports as of September 2016. The United States is also obligated to contribute 43.9% of petroleum in any IEA-coordinated release."

https://en.wikipedia.org/wiki/Strategic_Petroleum_Reserve_(U...


The strategic oil reserve is crude oil. It has to be sent to a refinery before it can be made into things like jet fuel. Some refined products don't have a long shelf-life so it is only manufactured to meet demand.

That "6 weeks" probably reflects oil that is already in the refinery supply chain and is therefore deliverable over the next several weeks. The issue is that the top of that funnel is not being refilled.

The US is an exporter of jet fuel, but places like Europe and Asia are more exposed to bubbles in the supply chain.


United has already cut flights by 5%, and the article says KLM is cutting ~1% of their flights, both citing fuel shortages. If giant companies on opposite sides of the Atlantic are saying this is an issue, it's probably worth taking their word for it.

KLM is citing fuel price, not shortage. They’re cutting under utilized flights which they cannot perform profitably at current prices. They’ve explicitly said it’s not because of a shortage.

https://nieuws.klm.com/statement-situatie-midden-oosten/


Aren't those identical things? Shortage of commodity X, relative to demand, drives up prices for X.

A shortage can also be physical. The fuel you already bought (and possibly paid for) cannot be delivered. Maybe the actual delivery is the issue. Maybe a government confiscated it for other uses. Or maybe the fuel doesn't exist at all, because the refinery didn't have the oil to produce it.

https://news.klm.com/statement-situation-middle-east/

> ... due to rising kerosene costs, are currently no longer financially viable to operate. There is no kerosene shortage.


> Whats the normal stockpile?

For jet fuel? The article does not say, but if they are correct in predicting shortages in six weeks, then the stockpile (if any) is not terribly large.

> Isn't the entire US national strategic oil reserve only enough for like 1 month of US usage?

In any case, whatever it is, crude oil is not yet jet fuel. The crude has to be refined to produce jet fuel (and other oil byproducts), and some amount of Gulf refinery capacity is also offline due to damage, inability to export by sea through the strait, or both.


Electricity I don't know how you could deliver ads through, but if someone could think of a way I bet they would. If everyone knew Morris code I bet they would make the lights flicker in Morris code for a discount.

Modern cars with connected infotainment systems are always trying to upsell you

Washing machines I don't know of anything at the moment, but I wouldn't count it out.

Smartphones/watches? Aren't those just ad delivery mechanisms? Not to mention tracking? It's a core foundation of modern ad technology.

Headphones are not, thank god. I hope it stays that way.


Alright, let me put on my evil corpo hat. Wait, it was already on.

Headphones that inject ads are a great idea, but we need to make that a better proposition. Let's say that these headphones have an AI integration which parses all sound and converts it to text; then we can run it through our AI to give helpful comments. We may even wait until no sound is playing to inject them (for now). We can add ads later once it becomes helpful. Imagine you are listening to a podcast / youtube video and then you get a helpful voice giving additional research and ideas. Like a friendly research agent on your shoulder.


Also more subtly, we can detect what music is playing and “slightly modify” the tunes of bands not part of a label owned by a Trusted Partner to sound worse.

> Electricity I don't know how you could deliver ads through

Even if you could, electricity is a utility with laws against disconnecting it in certain circumstances, even for nonpayment, and the internet isn't. So unless someone is going to make the argument that neural implants are utilities, ads injected into them seem like a pretty fair bet unless there is legislation not only making it illegal to do so, but making it illegal to make an implant even capable of receiving or displaying one. At least with that, even if they repealed the law, you'd be safe if you already had the implant.


That's a great Freudian slip.

Morse code - dots and dashes for characters via light or telegraph or radio

Morris code - Robert Morris wrote the first internet worm https://en.wikipedia.org/wiki/Morris_worm
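The dots-and-dashes scheme is simple enough to sketch in a few lines. A toy encoder for the "flicker the lights" idea above; the table (letters only) and function name are mine, purely illustrative:

```python
# Minimal sketch: encode text as Morse dots/dashes, as in the
# "flicker the lights in Morse code" idea. Letters only, for brevity.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_morse(text: str) -> str:
    """Encode a string as Morse; letters separated by spaces, words by ' / '."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[c] for c in word if c in MORSE) for word in words
    )

print(to_morse("sale now"))  # ... .- .-.. . / -. --- .--
```

A real light-flicker version would just map each dot/dash to a short/long on-pulse with gaps in between.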


I've never seen an ad delivered through any of these things. On smartphones I mean the phone/OS itself

It would be very easy to deliver ads via electricity. The utility could require you watch an ad before using more


Or via your smart thermostat.

https://sense.com/consumer-blog/with-your-permission-utiliti...

(Morse code messages via your flickering lights would be a hilarious app, and I'm somewhat reluctant to mention it here lest someone get VC funding to actually try it.)


> It would be very easy to deliver ads via electricity. The utility could require you watch an ad before using more.

That does not sound very easy to me. That sounds barely possible.


It's trivial

Lots of poor people have in-home electricity meters that require prepayment for usage. In the olden days you put a coin in to turn on the power, but nowadays they have apps and digital payment solutions!

They might already have ads in those apps...


This is all news to me. It seems like it would be tough to prevent people from just using the power that's going to that box.

I guess I'm out of touch, because I've never heard of anything like this. I've had my power turned off for non-payment before, but I had to talk to someone at the utility to get it switched back on.


I don't think I've ever actually seen one. I only know about this style of electricity utility because it was a part of a Mr Bean episode once.

Getting only one 9 from a major tech provider that is supposed to be one of the flagship AI companies. That paints a picture that's hard to ignore.

The future of vibecoded software with code nobody understands is 1 9.

They’ll need Claude to fix it during an outage, but Claude is down too!


A 9 is optimistic, unless you’re talking about 87.9

More like 9%
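For context on what a given number of nines actually permits, a quick back-of-envelope sketch (the function name is mine, not from any SLA spec):

```python
# Rough downtime budget per "nines" of availability, for context on
# the "only one 9" jab above.

def downtime_per_year_hours(availability_pct: float) -> float:
    """Hours of allowed downtime per 365-day year at a given availability."""
    return (1 - availability_pct / 100) * 365 * 24

for nines, pct in [(1, 90.0), (2, 99.0), (3, 99.9), (4, 99.99)]:
    print(f"{nines} nine(s) ({pct}%): {downtime_per_year_hours(pct):.2f} h/yr")
```

One 9 allows roughly 876 hours (over a month) of downtime a year; each extra 9 cuts that by a factor of ten.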

Even AWS's outages were caused by engineers overrelying on AI:

https://arstechnica.com/ai/2026/03/after-outages-amazon-to-m...


Well, it just tells you that Anthropic will experience repeated outages on their managed solutions. How can they even begin to compete against everyone with their products with their atrocious uptime? It's just as bad as GitHub's uptime circus.

There are just some things you should not vibe code your way out of.


Here is a hard question - how could Stack Overflow succeed in a post-ChatGPT era? I mean, obviously the new CEO and leadership have been total trash and have squandered their goodwill and user loyalty, but if I were CEO instead I don't know how I would save the ship.

Doubling down on how it was done in the 'good old days' probably wouldn't work because you would slowly bleed users to AI. Selling data to AI companies might work for a bit, but I would guess that the sales value of SO's data has quickly diminishing returns. So what is their path forward?


That's a hard one. SO's community hostility to newbies, like in any expert community, comes from the longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions genuinely are new, and they don't yet have the routine knowledge of where to look or how to even look for solutions in the first place.

In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder questions that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well though, so I don't know where that leaves us.

SO for discussions of taste? I have these two options to build this, how should I approach this? They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is: user asks question - LLM answers it - user is unsure about the answer - it gets posted as an SO thread and the rest of the userbase can nitpick or correct the LLM response.

Edit: I also seem to remember they had a job portal in the sidebar for a while, what happened to that? Seems like a reasonable revenue stream that is also useful to users.


> In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder questions that are still general enough to be applicable to others.

I think the deeper question is how SO would get paid for that.

Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)

Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.

The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but they are funded by donations, which are driven by ads. If LLMs just answer the questions based on Wikipedia data, the user won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they may not even develop a fondness for Wikipedia that's necessary to get users excited to donate.

This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.


Oh, I was thinking more of user enters question into SO -> LLM answer on SO -> user evaluates whether LLM answer was sufficient (or system itself judges whether answer is also interesting to other users?) -> question + answer combo made public, judged by other users.

There are of course several huge issues with this, but that's why I prefaced it with ideal world hahaha

the biggest of which is why most users would want their questions publicized if the ChatGPT answer, outside the Stack Overflow platform, would be enough or even better

Or how existing users and question-answering volunteers feel about just being cleanup and training data after LLMs


Be chatbot-first, I guess. I had envisioned a portal where you land on the front page and drop your question in the box. It would do some RAG thing over the SO question database, then try to answer your question. You could chat back and forth with it. If you figured out your problem, then you would have the option to turn it into a question-answer pair with help from the AI. If you didn't figure out your problem, then it would turn it into just a question, which would then show up for the experts of SO to answer. Something like that.
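The retrieval step of that idea can be sketched very crudely; a toy version using bag-of-words cosine similarity instead of a real embedding-based RAG pipeline (the corpus and function names here are invented for illustration):

```python
# Toy version of "some RAG thing over the SO question database":
# find the existing question most similar to a user's query using
# bag-of-words cosine similarity. A real system would use embeddings.
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity of two texts' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

questions = [
    "How do I reverse a list in Python?",
    "What is a segmentation fault in C?",
    "How to merge two dicts in Python?",
]

query = "reverse a python list"
best = max(questions, key=lambda q: similarity(query, q))
print(best)  # the list-reversal question should rank highest
```

The retrieved question (plus its accepted answer) would then be fed to the LLM as context before it answers the user.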

Allow AI to ask questions. Since the point of the site is to build a knowledge base, you don't really need humans to be that involved. Humans running into problems and then asking questions was just one way to do this in the pre-AI era. Now with AI we can reevaluate whether we really need humans as much as we did.

Depends what you mean by "succeed". Commercial viability for anything remotely approaching the design intent of SO is probably impossible. But anyone can just start building a useful Q&A database and hope others stumble on it. The point is that experts, who have been through many years of trying to help beginners in the pre-LLM era, also know what questions to ask, and how to phrase them, and how to disentangle the concerns that beginners have.

Or at least they should. I think too many people get into a routine of letting themselves get angry about the repetitiveness of the questions they're answering, and then somehow getting addicted to that.


They should focus on high-quality expert answers.

Now that we have LLMs I don't need basic questions answered. I do still need hard questions answered by experts and AI has normalized paying money for QA.

I would definitely pay for a "human ChatGPT" service where the answers are written by experts who get paid per answer, e.g. grad students. Then they can resell this data to AI companies. Or maybe the economics are such that they can take enough money from AI companies to pay the experts and I don't need to pay anything at all.

This won't bring in as much money as advertising used to, but that business model is dead anyway. There's no future for a QA site at the low end.


ideally, slowly grinding down duplicates into canonicals, keeping the ones whose answers are subject to change (with developments in languages and tools) up-to-date, removing cruft and making it more like a library (à la Rosetta Code) that's easy to find things in

and a change of form from (questions being asked primarily as a means to an end for one person) to (Q&A pairs being written as reference materials)

and requests for comment on which approach would be the most idiomatic or whether one has fallen into an XY trap or other things that rely on human 'taste' rather than LLMs' blithe march of obedience


I’m not aware of SO’s plans to remain profitable and relevant, but I do know they have an enterprise offering. I’ve seen ads on LinkedIn recently for MCP functionality tied to the enterprise SO offering that lets you use it as a knowledge base. I could see that potentially being a path to stay relevant.

The place I work at tried using an SO enterprise instance and it was quite ineffective. We didn't have the toxicity of the public instance, but generally having a Q&A forum double as a knowledge base is an oddball format that doesn't work out. Adding AI integration is not likely to compensate for that.

> How could Stack Overflow succeed in a post-ChatGPT era?

As a data source for LLMs, and by becoming the place someone goes where ChatGPT can't produce a sufficient answer.


It will turn into a meme subreddit and/or die. What else is there?

What I don't get is, how does this economically make sense? Isn't there a $100k fee for H-1Bs now? So 3k H-1Bs would cost $300 million... before you even start paying salary.
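The arithmetic in the comment checks out; a quick sanity check (the $100k-per-visa figure is the commenter's, not verified here):

```python
# Sanity check of the fee arithmetic above; the $100k-per-visa figure
# comes from the comment, not from any official source.
fee_per_visa = 100_000
visas = 3_000
total = fee_per_visa * visas
print(f"${total:,}")  # $300,000,000
```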


These numbers can get wonky fast. E.g. dev salaries in Ukraine are pegged to the dollar, so they get free raises as the exchange rate plummets. At the same time, the cost of living in Ukraine was low before the war and only got lower. Oh, and taxes are pretty low in Ukraine.

So I'm pretty sure Ukrainian devs will end up with one of the top salary rates in all of Ukraine, but the externalities of that are large.


>> free raises as the exchange rates plummet

Also, developers use the official exchange rate while everything is imported using the market rate. A 10% cut there.

>> the cost of living in Ukraine was low before the war and only got lower

This is not true. Some low skilled services are cheap, but high skilled are not.

Or are you talking about cost of housing near the frontline?

Dev salaries in Ukraine are also down since the start of the war, as no new projects are outsourced to the country.


I'd call "getting conscripted into joining a fucking war" a rather large externality!


This is pretty low quality marketing spam.

Offline signing? All signing is offline. It's not even a bitcoin thing... it's just how the math works.


Yup. Is this post from 2019?


Absolutely fantastic. I actually laughed out loud a few times.

My only suggestion is make the shuffle animation shorter. At first I thought you were actually doing some server work when I clicked it and got concerned.

Also if you sell these in real life I would buy them.


Speaking of card purchases: gimme some time, I will do my best.


I had just assumed the slow animation was buying time to call out to an LLM.


Shuffle animation speed was addressed :)


Satellite images are not always real time. Also satellites can be affected by things like cloud cover.


For tracking military ships it's much better to use radar imaging satellites (e.g. see [0]). They can cover a larger area, see ships really well, and are almost unaffected by weather.

I will not be surprised if China has a constellation of such satellites to track US carriers, and it's why the Pentagon keeps them relatively far from Iran, since it's likely that China confidentially shares targeting information with them.

[0]: https://www.esa.int/Applications/Observing_the_Earth/Coperni...


China has Huanjing [0], which is officially for "environmental monitoring", but almost certainly has enough resolution to track large ships (at least the later versions, apparently the early versions had poor resolution)

And even if they didn't, Russia has Kondor [1], which is explicitly military, and we know they have been sharing data with Iran.

[0] https://en.wikipedia.org/wiki/Huanjing_(satellite) [1] https://en.wikipedia.org/wiki/Kondor_(satellite)


Strava tracks can also be spoofed, and you have no guarantee they will appear on a schedule either. I just find this to be on the sensationalist side of "data" journalism, lacking any sort of contextualization or threat-level assessment. Unless there is evidence of some more sensitive locations that have not been published along with this story, it looks like a seriously unserious case of journalism to me.


Heh, establishing an "opsec failure guy" on the boat with software on his Garmin that can be activated on days with special secrecy demands to translate his runs to a plausible fake location? I like that idea. It would actually fit a one-off like the Charles de Gaulle quite nicely!


They are usually called Public Affairs Officers :D


Clouds only affect a narrow range of the electromagnetic spectrum. Plenty of satellite constellations use synthetic aperture radar, for example, which can see ships regardless of cloud cover. There are gaps in revisit rates, especially over the ocean, but even that has come way down.


And it's a $100 minimum... at least in NYC. Right now it's $20-25 a head, and that doesn't include transportation or food.

