
I think you need to give some concrete examples, considering the US happily let its companies offshore a lot of work to China over the years, and Chinese funds own large chunks of American companies.

The United States blocked exports by a Dutch company to China, and somehow got away with it.

Okay!

* The US and UK propped up the Iranian Shah to help western oil interests: https://en.wikipedia.org/wiki/1953_Iranian_coup_d'état

* US Export Controls basically handcuff anyone of import involved in creating anything of value to the state: https://www.investopedia.com/u-s-export-restrictions-6753407

* We continue to embargo Cuba instead of letting it succeed or fail on its own merits, while also occupying Cuban land for a black-ops prison and repeatedly attempting to assassinate its leaders or foment coups: https://en.wikipedia.org/wiki/United_States_embargo_against_...

* Our centralization of global finance and status as a reserve currency let us dictate global policy on everything from intellectual property to national defense, meaning companies generally have to "play ball" or the host country will incur penalties

* That time we overthrew the democratically-elected government of Guatemala because they imposed radical ideas like a minimum wage: https://en.wikipedia.org/wiki/1954_Guatemalan_coup_d%27état

* And that time we overthrew the democratically-elected socialist government in Chile to prop up exploitative labor practices and resource extraction: https://en.wikipedia.org/wiki/1973_Chilean_coup_d%27état

I can go on, but really, Wikipedia is right there. If you're looking for a specific analogue to "we kidnapped CEOs and demanded a foreign company unwind their merger", I don't think I can provide that right away; however, if instead you're looking for examples of "country used threats and force to foment an outcome favorable to its domestic policies", well then, boy howdy are there tons and tons of examples out there just a cursory search away.


You basically just parroted a bunch of Howard Zinn agitprop and didn't cite a single example that was remotely similar to this specific incident, because you literally can't. What exactly is your motivation here, because it's certainly not truth-seeking.

Howard Zinn was a hero.

Also, you can add the Middle East for the last 20-30 years.

Complete disregard for human life for profit.


None of this is similar to what is happening here.

Since you think Cuba and China are such nice places, perhaps try living there. You'll quickly find out about their "merits" (such as the fact that they execute dissidents).

True, America just kills outside its borders (~37 million people since the 50s), so it's a lot safer!

I’ve found they’re quite good when you’re higher in the compiler stack, where it’s essentially a game of translating MLIR dialects.


it'd be nice if one of these environment labs made an environment for cross-architecture porting. it'd be really cool to see some old PPC Mac programs running natively, or compiled to WASM (yes, yes, I know the visual elements would need to be ported as well)


And even then - I still read the code it generates, and if I see a better way of doing something I just step in, write a partial solution, and then sketch out how the complete solution should work.


Unless the solution is going to be more secure, faster, more stable etc, why does it matter?

Will the end user care? “Does it make the beer taste better”?


in a word, maintainability

> maintainability is inversely proportional to the amount of time it takes a developer to make a change and the risk that change will break something

https://softwareengineering.stackexchange.com/a/134863

i could be wrong, but i'm pretty sure that end-users get upset when a change takes a long time or it ends up breaking something for them.

just because people are finding that agents or whatever are speeding changes up now doesn't necessarily mean they won't encounter a slow-down later when the codebase becomes an un-maintainable mess. technical debt is always a thing, even with machines doing the work (the agent/machine still has to parse a codebase to make changes).


What makes you think that AI couldn’t make the same changes without breaking it, whether you modify the code or not? And you do have automated unit tests, don’t you?

Right now I have a 5000-line monolithic vibe-coded internal website that is at most going to be used by 3 people. It mixes Python, inline CSS and JavaScript with the API. I haven’t looked at a line of code. The IAM role for the Lambda runtime has limited permissions (meaning the code can’t do anything the permissions don’t allow). I used AWS Cognito for authorization, and I validated the security of the endpoints and the permissions of the database user.

Neither Claude nor Codex have any issues adding pages, features and API endpoints without breaking changes.

By definition, coding agents are the worst they will ever be right now.


i have a rule of thumb based on past experience: circa 10k lines per developer involved, reducing as the codebase size increases.

> 5000 line

so that's currently half a developer according to my rule of thumb.

what happens when that gets to 20,000 lines...? that's over the line, in my experience, even for the human who wrote it. it takes longer to make changes. changes that are made increasingly go out in a more and more broken state. more and more tests have to be written for each change to try and stop it going out broken. more work needs to be done for a feature of equal complexity compared to when we started, because now the rest of the codebase is what adds complexity to making changes. etc. etc. and that gets worse the more we add.

these agent things have a propensity to add more code, rather than the most maintainable code. it's why people have to review and edit the majority of generated code for features beyond CRUD webapp functionality (or similar boilerplate). so, given time and more features, 5k --> 10k --> 20k --> ... too much for a single human being if the agent tools are no longer available.

so let's take it to a bit of a hyperbolic conclusion ... what about agents and a 5,000,000 line codebase...? do you think these agents will take the same amount of time to make a change in a codebase of that size versus 5,000 lines? how much more expensive do you think it could get to run the agents at that size? how about increases in error rate when making changes? how many extra tests need to be added for each feature to ensure zero breakage?

do you see my point?

(fyi: the 5 million LoC is a thought experiment to get you to think critically about the problem of technical debt related to agents as codebase size increases; i'm not saying your website's code will get that big)

(also, sorry i basically wrote most of this over the 20 minutes or so since i first posted... my adhd is killing me today)


20K lines of code is well within the context window of any modern LLM. But just like no person tries to understand everything and keep the entire context in their brain, neither do modern LLMs.

Also documentation in the form of MD files becomes important to explain the why and the methodology.


Generally speaking, I try to ensure that the LLM is using core abstractions throughout the codebase in a consistent manner. This makes it easier for me to review any changes it makes.


Sort of a devil's advocate question: if you write and review your tests, and the functional and non-functional requirements and the human usability tests all pass, why does the code matter?

Non-functional requirements: performance, security, reliability, logging, etc.?


Because the code is the actual thing. Tests can only show that the code fails in certain cases; they don’t actually prove the code is correct.
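To make this concrete, here is a minimal sketch (my own illustrative example, using the classic leap-year bug): a test suite that faithfully mirrors the stated requirements can be entirely green while the implementation is still wrong, because tests only sample the input space.

```python
def is_leap(year):
    # Buggy implementation: handles only the "divisible by 4" rule
    # and ignores the century exceptions.
    return year % 4 == 0

# A plausible requirements-mirroring suite that nevertheless passes:
assert is_leap(2020)       # divisible by 4 -> leap
assert not is_leap(2021)   # not divisible by 4 -> not leap
assert is_leap(2000)       # divisible by 400 -> leap

# All assertions above pass, yet the code is wrong: 1900 is divisible
# by 100 but not 400, so it was NOT a leap year, while is_leap(1900)
# returns True. The suite simply never probed that case.
```

This is exactly the gap formal methods aim to close: a proof quantifies over all inputs, whereas a test suite only checks the points someone thought to write down.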


If you are writing the correct tests that mirror the requirements, why wouldn’t passing tests mean the code is correct?


Because it doesn’t? That’s why the field of formal methods exists.


Then that sounds like you aren’t writing good tests…


This line of thought is honestly a bit silly - uv is just a package manager that actually does its job for resolving dependencies. You’re talking about a completely orthogonal problem.


> uv is just a package manager that actually does its job for resolving dependencies.

Pip resolves dependencies just fine. It also lets you build the environment incrementally (which is actually useful, especially for people who aren't "developers" on a "project"), and it is slow (for a lot of reasons).


uv is really only something you need if you already aren't managing dependencies responsibly, imo.


I think there are 5-7 thousand deaths confirmed by the UN, and medical reports in Iran estimated there could be 20,000+ casualties.


7 thousand confirmed deaths, 9 thousand unconfirmed. Among those, 1,200 confirmed deaths were from the regime forces, and 400 are believed to be bystanders. The nurse burned to death by protesters is among those 400.


I don't know enough to dispute this, but could you link such a report?


I see the value of the students, it just seems like an odd thing for a government to subsidize via NIH/NSF funding. We don’t really have anything analogous to that in Canada and it just seems awfully weird that it exists in the US without the “it’s older than the country” excuse that Oxford/Cambridge have.


How is any of this subsidized by NIH/NSF funding? Those grants are only spent on the cost of research, either direct or indirect.

Also, a number of the schools we're discussing are older than the US itself; Harvard predates it by almost 150 years.


EVs should do much better on brake dust thanks to regenerative braking, no?


But heavier so worse on the tires.

It isn’t intuitive that they’d be better off, and they might be worse on this particular dimension.


Yes, current EVs are heavy. It's not at all clear that this will prevail as solid-state batteries evolve to become standard. It is highly possible that EVs will soon be lighter than comparable ICE vehicles. [1]

[1] https://news.ycombinator.com/item?id=46505975


No no no. Sure, there might be a future where solid-state batteries become the standard for electric vehicles, but you cannot cite Donut Lab's announcement from this month. There is no credible evidence they've achieved the holy grail of batteries until they actually deliver these motorcycles and people independently verify them.


Time will tell on their battery, especially if the bike they're putting it on delivers. I think the overall point could be that there's active R&D in trying to find geopolitically sustainable materials, and lowering the weight of materials used.


Because text analysis is substantially easier than video analysis?


Amazon has the Fallout scripts, subtitles, internal show bibles, etc. all available to them.


Are you implying that an LLM needs to be trained on a specific piece of text to answer questions about it?


If you want proper answers, yes. If you want to rely on whatever reddit or tiktok says about the book, then I guess at that point you're fine with hallucinations and others doing the thinking for you anyway. Hence the issues brought up in the article.

I wouldn't trust an LLM for anything more than the most basic questions if it didn't actually have text to cite.


Luckily, the LLM has the text to cite, it can be passed in at inference time, which is legally distinct from training on the data.


Having access to the text and being trained on the text are two different things.


You don’t need any rights to execute the feature. The user owns the book. The app lets the user feed the book into an LLM, as is absolutely their right, and asks questions.


1. The user doesn't own the book, the user has a revocable license to the book. Amazon has no qualms about taking away books that people have bought

2. I doubt the Kindle version of the LLM will run locally. Is Amazon repurposing the author-provided files, or will the users' device upload the text of the book?


I am so confused by some of the comments in this thread. All these weird mental gymnastics to argue that users should have less rights.

“Oh, you think you should be able to use an LLM with a book you paid for? Well, you don’t own the book.”

Ok, and you like that? You want even less ownership? Less control?


I don't agree with the way you're interpreting the comment. If anything I think it's BAD that you don't really "own" digital content.

I guess my argument is that Amazon shouldn't be able to have their cake and eat it too


You agree that we should own our digital content but it sounds like you don’t want this particular capability because… fuck Amazon.

I can totally understand that sentiment but I don’t think giving up end user capabilities to spite Amazon is logically aligned with wanting ownership of digital media.


> All these weird mental gymnastics to argue that users should have less rights

We probably agree more than not. But users getting more rights isn’t universally good. To finish an argument, one must consider the externalities involved.


>The app lets the user feed the book into an LLM, as is absolutely their right,

I don't think that's cut and dried yet. Throwing media onto someone else's server may count as distribution.


How likely do you think it is that Amazon doesn’t have a pre-existing contract with these publishers to host these books on Amazon servers?


Sure, in the sense that any belief about the law isn’t cut and dried until a judge has explicitly dismissed it in the court of law.

