Hacker News | Dylan16807's comments

Seeing your other (rightfully flagged) reply, I want to tell you, as a neutral party, that yes, this is missing the point of the analogy. You're basically saying "I would simply hit the brakes on the trolley". It's not that they're so hubristic they think it's impossible to legitimately disagree with their argument; it's that mentioning insurance sidesteps their argument entirely. You're not addressing the general idea of getting hacked and suffering the consequences of the hack.

There are a lot of people making tools for coding with LLMs, and those have a high chance of mentioning OpenClaw somewhere.

About 7000 on average, but let's say 10000 since demand varies. And let's consider doing 10% of them with helicopters. If we average 3 people per helicopter, that's 170 groups in and 170 groups out. If each landing needs 5 minutes of pad time, that's 14 pads. Make it 20 to handle variation.
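The comment's numbers check out as a back-of-envelope, though they only reconcile if the landings are concentrated into a peak window of roughly two hours; that window is my assumption, not something the comment states. A sketch:

```python
# Back-of-envelope check of the comment's helicopter-pad arithmetic.
# The 120-minute peak window is my assumption to reconcile the pad count.
people = 10_000             # rounded up from ~7,000 to absorb demand swings
helicopter_share = 0.10     # 10% of trips by helicopter
people_per_helicopter = 3

loads = people * helicopter_share / people_per_helicopter   # ~333 loads
loads_each_way = loads / 2                                  # ~167, i.e. ~170 in and ~170 out

pad_minutes = loads * 5                                     # 5 minutes of pad time per landing
window_minutes = 120                                        # assumed peak window
pads_needed = pad_minutes / window_minutes                  # ~14 pads
```

Spread evenly over a full day instead, the same traffic would need only one or two pads, so the pad count is dominated by how peaked demand is.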

Wow, that makes it sound significantly more feasible than I would have guessed.


Where do you put those 20 pads in the city?

I'm not sure, but there's a whole lot of city available. That half of the problem is easy on a technical level.

Notably, that's 45 dB when it's 500 meters away.
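For scale: under the standard free-field point-source model, level falls off by 20·log10 of the distance ratio, i.e. about 6 dB per doubling of distance. A rough sketch (idealized; terrain, reflections, and atmospheric absorption will shift real numbers):

```python
import math

def level_db(distance_m, ref_db=45.0, ref_distance_m=500.0):
    """Idealized free-field falloff: L(d) = L_ref - 20*log10(d / d_ref)."""
    return ref_db - 20 * math.log10(distance_m / ref_distance_m)

# Working backward from 45 dB at 500 m, the same source would be
# ~57 dB at 125 m and ~65 dB at 50 m.
```

So a figure quoted at 500 meters understates what anyone living near the pad would actually hear.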

Imagine you cut the sentence "I'm going to kill you, this is an imminent threat." out of a book and hand it to someone.

It would be silly to consider you the author of that sentence in a copyright sense.

It would be equally silly to say you have no liability from that sentence.

Looking back at the boulder example, that LLM output has no consequences to be liable for if you throw it immediately into the trash bin. It's when you take boulder.txt and use it to do things that you incur liability, despite not having copyright.


You asked a machine that makes things up when it doesn't know the answer a question it has no way of knowing the answer to. I don't know why you bothered to relay its response.

Yes, that is their point. Do you have evidence against it?

I'm sure you can find some overlap, but I bet the vast majority is caused by people making a distinction between commercial and noncommercial piracy. I don't think there's a big cohort of piracy hypocrites.


Due to the nature of the argument, of course I do not have evidence for or against it. However, I am willing to leave it at that, because I think any rational observer can look at the general mood toward copyright/piracy online (including using Limewire back in the day, pirating movies, downloading Photoshop, etc.) and come to their own conclusion about whether it's plausible that there isn't a significant overlap between the two.

> Yes LLMs can reproduce passages from copyrighted works verbatim but that's only because it "learned" it and it's just telling you what it "knows".

Are you finding people that actually say this?

When it can quote something like that, it's a training error. A popular enough work gets quoted and copied by people online, and then it's not properly deduplicated. It's a very small fraction of works it can do that with, and the cleaner your data, the less it happens.

I'll once again note that Stable Diffusion launched with fewer weights than training images. It had some accidental memorizations, but there wasn't room for its core functionality to be memorization-based.
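The capacity argument can be made concrete. Using my own rough figures (the comment gives no numbers): Stable Diffusion v1 is on the order of a billion parameters, trained on LAION subsets on the order of two billion image-text pairs.

```python
# Rough capacity arithmetic; the parameter and dataset counts are my
# approximations for Stable Diffusion v1, not figures from the comment.
params = 1.0e9            # ~1B weights across UNet, VAE, and text encoder
training_images = 2.0e9   # order of the LAION subsets used for training
bits_per_param = 16       # fp16 checkpoint

bits_per_image = params * bits_per_param / training_images
print(bits_per_image)     # -> 8.0 bits per training image
```

A handful of bits per image is nowhere near enough to store the images themselves, so under these assumptions memorization can only be an exception, not the core mechanism.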


It's a relevant extension if you think the ability to learn from a work is a right people have that exempts them from the more general lockdown copyright would impose.

If you come at it from the view of copyright as a limited set of controls over some areas but not others, then if copyright doesn't block human learning, it shouldn't affect anything similar either, unless a specific rule is added to handle those situations differently.


"Including all language servers" is a big part of that. I hope.
