At least Palantir is open about its villainy, I guess; it makes no attempt to pull the wool over your eyes. So you at least know for sure that you're getting in bed with the bad guys if you go with them.
The seller of the code has no visibility into the training set of the LLM. If the situation you're describing ends up being illegal, responsibility should fall on the LLM provider to supply tools that detect such overlap with their training sets, and on the clients to run those tools.
The provider of the LLM should want to enable this and to take on that responsibility (I mean, take it off the clients' hands), otherwise no one will want to use the tool. Maybe there could be AI tool-use lawsuit insurance, but I feel like that's worse for everyone involved than a copyright-infringement detection tool.
I can see such a tool happening in the EU, but basically nowhere else, and especially not in the US, where the government sees "AI dominance" as a national priority and a national security priority.
Say that again in five years, when you can't find a job other than mega-yacht toilet cleaner because Claude is distinguished-engineer level at one millionth of your cost and thousands of times faster, and can be instantly parallelized into tens or hundreds of thousands of instances only to be spun down arbitrarily as needed at any time.
It may be hyperbole, but it's how people genuinely feel about AI.
Quinnipiac found in March that voters like AI less than ICE. They also found that over half of Americans think AI will do more harm than good: https://poll.qu.edu/poll-release?releaseid=3955
Maybe those AI doomers all need to touch grass. Or, maybe, the reverse is true and the minority of people who are optimistic about AI are suffering from software brain.
The transformer paper was 9 years ago. 9 years between barely managing passable translation between two very closely related languages (English and French, which share a huge fraction of words thanks to William the Conqueror, cultural proximity, etc.) and what we have now.
The thing is able to code up full, pretty competent, thousand-line projects in an hour. Even hardcore engineers use it now, as of this year. My senior front-end friends already can't find jobs.
You're crazy if you think things won't change dramatically, at the scale of all of society.
It's funny, because you're arguing that one month of a single variable increasing by one point is as reliable for extrapolating a trend as nine years of continual increase across multiple variables by multiple points.
There is no acceptable use of AI for most people in the artistic field. They see it as an outright betrayal, and I understand. They're under an incredible, incredible threat.
They are consciously trying to prevent momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
Given how much 3D content already relies on software and other AI/computer-vision improvements, it's weird to decide that this particular algorithm itself is off-limits.
AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, whether directly or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Would you if you were part of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?
This sounds extreme until you realize they are under threat of losing their livelihood for good.
They are right not to accept your inevitability point without a fight; this is a human thing that can be fought. Revolutions have happened, and will continue to happen.
I don't necessarily agree with this but I do understand it.
> I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
"If you ignore their biggest, their primary, concern, their other concerns seem almost trivial".
He meant that that's not the primary concern. The sourcing of the data is a red herring; they care about losing their ability to make a living doing the thing that they love, the thing that is so central to their identity.
I'm not sure how to parse your statement... I don't think there'd be much care for (or need for) the UX change if it weren't for the whole ideological/valid fear about training AI on creative works? But it has been a long day, so I apologize.
I've been all over the place with my thoughts, so it's fair for you to be unsure how to parse what I said. When making my initial post, I was thinking "this is a coding model, not an image/3D model generation model, so why do they care?". I further interpreted make3 as saying that 3D artists were opposed to AI in general because they view any AI use as trending towards taking away their jobs.
So, what I meant when I said '... otherwise ...' wasn't an attempt to dismiss the data-sourcing concern, but more like "I understand if the data sourcing is the concern, but you (make3) seem to be saying it's about the use of AI in general (i.e. even if, hypothetically, an ethically sourced training dataset were used for a model), which feels like a weird restriction to me". That was when I added the edit to my initial post.
This is the best phrasing of the issue I've seen online anywhere.
You can find AI useful and still be against its introduction into your field for entirely understandable reasons.
Unfortunately, this does create uphill friction for any well-intentioned people trying to use AI to improve art by empowering people to take on more ambitious projects. (This is a general statement, not specific to the case of Anthropic. Of course Anthropic here is just trying to sell its product, which is a fair thing to do in isolation, but I also understand the opposition to it on the grounds of its downstream effects.)
I don't think artists see it the same way. An artist will get pilloried by their peers, followers and fans if they post something that has even a whiff of generative AI.
Completely false, and I hate this puritan gatekeeping. Artists who hate AI are the type to put more importance on the craft than on the end product itself. Art is a means of communicating something personal. It's not meant to show off skill in how well you can move a pencil or how many fricking tools you know in Adobe.
AI removes all these hurdles and directly presents you with the end problem - communication. Artists hate that because most artists don't have anything to communicate. These people deserve to be automated away. I don't wanna see more derivative shit. Artists who have something special to communicate won't feel threatened by AI; they'll feel more freedom.
>AI removes all these hurdles and directly presents you with the end problem - communication.
Which is why 99.9% of AI art is worthless. There's literally nothing personal or interesting about getting Grok to fart out some picture you thought about while sitting on the toilet in the morning.
AI art will never be good without actual artists embracing the medium.
With things like the latest DLSS (extremely high-quality runtime reinterpretation), I wonder how precise meshes etc. have to be now. Maybe something like this (rough code sketch after the list):
1. extract an even super-approximate mesh (meaning, like, square edges with some visual detail) from gen AI or a scan as a starting point,
2. move things around and define volumes for gameplay needs,
3. name things ("this is a Victorian house in surprisingly good condition compared to the neighborhood it's in"), have human-guided gen AI polish things a bit more from the labels, within the bounds of the gameplay-required volumes,
4. let runtime DLSS fix the lighting etc. from the rough geometry
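A minimal sketch of how I picture that loop, purely illustrative; every function here is a made-up stub, not a real engine or library API:

    # Hypothetical stubs only -- the point is where the human and the gen AI
    # sit in the loop, not an actual implementation.

    def generate_rough_mesh(prompt):
        # Step 1 (stub): blocky, approximate geometry from gen AI or a scan.
        return {"vertices": [], "labels": {}}

    def edit_for_gameplay(mesh):
        # Step 2 (stub): a human moves things around and defines gameplay volumes.
        return {"mesh": mesh, "volumes": ["walkable_street", "house_01_interior"]}

    def polish_from_labels(scene, labels):
        # Step 3 (stub): human-written labels guide gen-AI detailing, constrained
        # to stay within the gameplay volumes.
        scene["mesh"]["labels"].update(labels)
        return scene

    def render_with_runtime_reconstruction(scene):
        # Step 4 (stub): DLSS-style runtime reconstruction cleans up lighting and
        # fine detail, so the shipped geometry can stay rough.
        print(f"rendering rough geometry with {len(scene['volumes'])} gameplay volumes")

    scene = polish_from_labels(
        edit_for_gameplay(generate_rough_mesh("small Victorian street")),
        {"house_01": "Victorian house in surprisingly good condition for the neighborhood"},
    )
    render_with_runtime_reconstruction(scene)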
I had to write multiple times in my prompt that it's not the model's role to change the subject or end the conversation at all.
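For reference, the kind of instruction I mean, if you're going through the API rather than the app, looks roughly like this; the model name and exact wording are placeholders, and this assumes the standard Anthropic Python SDK with ANTHROPIC_API_KEY set:

    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder; whichever Opus version you're on
        max_tokens=1024,
        # The system prompt is where the "don't wrap things up" instruction goes,
        # repeated because a single mention didn't seem to stick.
        system=(
            "It is not your role to change the subject or to end the conversation. "
            "Never suggest wrapping up. It is not your role to end the conversation. "
            "Stay on the user's topic until the user moves on."
        ),
        messages=[{"role": "user", "content": "Anyway, about that walk I was planning..."}],
    )
    print(response.content[0].text)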
I think they do that to dodge conversations about controversial subjects without full-on refusing to answer. They'll give you an OK answer, then tell you to go take the walk you were talking about.
I also feel like maybe they think people are still ready to pay a lot if they feel like they're getting a lot of "high-value stuff", even if the model refuses to do the low-value stuff, so they basically try to stop you from doing low-value stuff on Opus. I suspect that Sonnet or Haiku never tells you to go take a hike.