The problem is that it's difficult for streaming platforms to make subjective judgements about where the AI line is drawn in music production.
If you write a song yourself but use AI to produce a synth stem or bass stem, then mix it down and use AI mastering, is that better or worse than using AI to help you write something but recording with human musicians and a bit of AI assistance?
And what if you use AI entirely to write and compose but use human performers to record?
And what if the AI is trained only on licensed content?
> The problem is that it's difficult for streaming platforms to make subjective judgements about where the AI line is drawn in music production.
This may be more of an economic problem. There is a stark difference between a music track with 1% human work/effort and one with 0%. If you only have to do 1% of the work you can make many tracks, but you can't make more than 100x what you made without AI (Amdahl's law). The latter, by contrast, can scale infinitely: you could upload a billion tracks if you wished, limited basically by bandwidth and automation. So a classifier or policy which permitted the 99% AI but banned the 100% AI may be adequate.
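A minimal sketch of what such a policy might look like, assuming a hypothetical classifier that estimates the AI-generated share of a track (the estimate_ai_fraction function and the cutoff value are assumptions, not anything any platform has announced):

    # Hypothetical policy: permit AI-assisted tracks, reject fully
    # AI-generated ones. `estimate_ai_fraction` is an assumed classifier
    # returning the estimated AI share of the work, in [0, 1].
    FULLY_AI_CUTOFF = 0.99  # assumed approximation of "100% AI"

    def is_upload_allowed(track_path: str, estimate_ai_fraction) -> bool:
        ai_fraction = estimate_ai_fraction(track_path)
        # Any meaningful human contribution passes; pure AI output does not.
        return ai_fraction < FULLY_AI_CUTOFF

The point isn't the exact threshold; it's that the policy only has to separate "some human work" from "none at all", which is a much easier call than grading the whole spectrum.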
There's a whole spectrum from SFW to NSFW, but we don't give up and allow porn on every platform just because drawing the line is "difficult". We can use common sense and taste, with all their flaws.
I wouldn't say that porn is not allowed on every platform. Basically every mainstream "content posting" platform (FB, IG, TW, TikTok, etc.) allows softcore porn, and in fact pushes it on users, both in content and in advertising. If the same were true of AI music, I wouldn't bother with the platform.
Honestly, debating these corner cases feels like a distraction tactic. The reality is that the bulk of that 44% is total AI slop: one-sentence prompts entered into Suno to generate 1,000 tracks and extract money from subscribers who stream in the background.
It's the same thing with writing. No one cares that you asked a chatbot to help you reword a paragraph in your essay. The problem is zero-effort slop delivered by the truckload to your social media feed.
But it doesn't. We have a problem. We can focus on addressing the problem without pre-adjudicating every hypothetical corner case.
If your "work" is mostly AI, and if you don't disclose it, it goes to /dev/null. And yeah, you can get into a debate that it's unfair to reject 51% but allow 49%, but that's how the real world works - otherwise, nothing would ever get done. You also get a DUI for BAC of 0.08% but not 0.07%. That's not an argument for putting DUI laws on hold until we can figure out a more perfect approach.
Of course ~nobody wants low-effort "I pasted a one-line prompt into Suno and got this out" in their feed. If they did they'd be listening on Suno and not Spotify. The problem is there's no objective, let alone automated, way to tell the difference between that and the corner cases. Artistic quality is an inherently subjective metric, not something that can be enforced via rules.
The same people who read AI-generated stories about AI. Which is, roughly, most of us. There are AI-generated blog posts on the front page of HN multiple times a day. Right now, I see "I prompted ChatGPT, Claude, Perplexity, and Gemini and watched my Nginx logs", which is AI slop. I'm sure there's more.
Well, even if you're as deliberately anti-AI-slop as I am, you might well fall asleep listening to an ambient album by your top-rated human musician and wake up an hour or two later to AI slop anyway, your subscription money having been paying for those fuckers' instant ramen all the while.
But this is easily fixed by turning off autoplay, the slop's best friend.
Personally, I sniff out AI on Spotify by empty "about" sections. Which is sad, as I've always held dear that the music should speak for the artist, not the other way around.
The solution is easy: don’t use Spotify.
They are money-grubbing vampires leeching off musicians anyway, using your subscription money to fund shady arms companies.
Lots of people are listening to it. There’s an AI brand named “Eddie Dalton” on Spotify right now with 589k monthly listeners and a couple of million streams on its top track. This is one of many.
Lots of people don’t care about whether the music they listen to is human created or not - just as lots of people don’t care about lots of other AI slop so long as they are entertained by it.
The biggest issue for new musicians is getting people’s attention. AI music that people are happy to listen to over human music is absolutely part of the problem.
I agree that this is the biggest problem, but the existing backlog of hits, recorded since way before I was born, is far “worse” in that regard than AI slop.
It’s easy (at least for now) to compete with slop - it’s way harder to compete with e.g. Queen, Eminem and the Beatles.
It’s like saying cancer is a problem when you’re bleeding from a gunshot wound to your chest.
The engagement mechanisms and audiences for heritage artists vs new artists are very different. New artists are not in competition with heritage artists like the ones you mention: those artists are a constant passive consumption baseline against which new active “lean in” consumption needs to fit. A lot of music listening is passive consumption. Start an algorithmically generated “radio” style playlist from one of those big name heritage artists and Spotify will then serve up payola content (baked into the major label deals, “Spotify Discovery Mode” for indies) that positions new music within that playlist for algorithmically receptive listeners. If AI-created music is going head to head in that algorithmic market for listening slots, that puts new human-created music at a significant disadvantage.
Big name heritage artists aren’t the problem - they are the thing that underpins a lot of consumption and keeps people coming back to the platform.
I can assure you it’s not a corner case: this is one of the things that a lot of creators are concerned about. If a major streaming platform decides your music is not acceptable because you used some AI as part of your production process, and blocks your song as a result, that has pretty big consequences.
Spotify, for example, has already said that any track that gets under 1,000 streams will not get any money. What if it says “any track that uses more than a certain proportion of AI will not make any money”, but refuses to say how it makes those decisions so that people can’t game the system?
I use Claude Opus 4.6 as an enterprise user, and have also noticed a lobotomization. In recent weeks it's been much more self-correcting, even within single responses ("This is the problem - no wait, we already proved it can't be this - but actually ..."). I'm wary of 4.7 being yet another change in this pattern; it's frustrating to have such a substantial shift in experience every few months.
Frustrating that the experience changes, and then they retire the older model because it costs more, even though it was better for everyone. The new ones are just geared towards beating the benchmarks at a cheaper cost!
Was talking about this with some colleagues who are from Ukraine, Russia, and other countries.
In the US, it seems corruption is only allowed at the top. If you tried to bribe your way out of a traffic ticket as a regular person, you'd get in big trouble; meanwhile, the president pardons wealthy fraudsters [1].
Meanwhile, in countries like Russia, everyone can get in on the action. A colleague of mine told me that if he were to get drafted into the war, he knew exactly how much to pay and whom to pay off locally to get his name off the list. It's equal opportunity corruption.
I'm Lithuanian, familiar with the Soviet type of corruption and with post-Soviet Lithuania, which did a lot to remove corruption (I also live in Asia right now). Your assessment is somewhat correct, but it's a terrible system.
The availability of corruption is a huge grease for economic activity and, weirdly, for order, but the Soviet type of corruption has a massive flaw: bad corruption bets (big impact, high publicity) would go mostly unpunished. In Asia, however, it's quite interesting how the face-saving and family culture corrects for that a bit, as bad corruption bets will backfire despite the lack of a legal framework for cleanup.
Unfortunately it's _not_ equal opportunity corruption, as the lower economic classes are left out and suffer the most; the cruelty of these systems is really hard to put into the words of a single comment. It also creates a massive overhead of corruption bureaucracy, where entire positions are founded not on actual product or activity but on being corruption "middle managers".
So despite your friend's take, this is not a good system on its own, merely a relief valve for terrible autocratic rule. Autocrats actively allow corruption precisely because this relief is what keeps them in power: people with some power get relief while the poor class bears the slave-worker burden.
I've had Indian coworkers remark similarly. The way they put it was: "In India, corruption is democratized. Everybody gets in on the act, and everybody can profit a little bit. In the U.S., corruption is reserved for the very top; only they can profit, and everybody else just suffers. Personally, I prefer the Indian system."
Was kinda eye-opening as a native-born U.S. citizen. I'd always just assumed things worked according to the rules here, but then after he said it, I started seeing corruption at the top all the time.
On my team I've been adding additional linters and analyzers (some I've written with Claude) to run at CI or locally to prevent codified "bad patterns" from entering our systems. This has been nice as a backstop, as I can't enforce what everyone's Claude prompts and local workflows are, but we can agree what CI checks run before merging. Not a 100% solution, but it has been helpful so far.
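As a rough illustration, one of these custom checks is little more than a script that fails the build when a codified bad pattern shows up (the patterns below are made-up examples for a Python codebase, not anything from my actual setup):

    import re
    import sys
    from pathlib import Path

    # Codified "bad patterns" (hypothetical examples; substitute your team's own).
    BAD_PATTERNS = {
        r"except\s*:\s*pass": "bare except that silently swallows errors",
        r"time\.sleep\(.+\)\s*#\s*retry": "sleep-based retry instead of backoff",
    }

    def main() -> int:
        failures = []
        for path in Path("src").rglob("*.py"):
            text = path.read_text(encoding="utf-8")
            for pattern, reason in BAD_PATTERNS.items():
                if re.search(pattern, text):
                    failures.append(f"{path}: {reason}")
        for failure in failures:
            print(failure)
        return 1 if failures else 0  # nonzero exit code fails the CI job

    if __name__ == "__main__":
        sys.exit(main())

Because it runs in CI, it applies the same way to human-written and Claude-written code, which is the point.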
I added a Claude skill (/gather-history) that consolidates the history of our session(s) specific to the change into a set of artifacts: a decision log and an involvement summary (how much I wrote vs. the AI, how many refinement iterations, reviews, etc.) that I can then include in the PR. So far this has been helpful for my colleagues to understand how I arrived at the change and how thoroughly it's been developed.
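Roughly, the consolidated artifact looks something like this (a simplified sketch; the field names and structure are illustrative, not the skill's actual schema):

    from dataclasses import dataclass, field

    @dataclass
    class SessionSummary:
        """Illustrative shape of a consolidated session history for a PR."""
        decisions: list[str] = field(default_factory=list)  # decision log entries
        human_written_pct: int = 0  # rough share I wrote vs. the AI
        refinement_iterations: int = 0
        reviews: int = 0

        def to_pr_section(self) -> str:
            lines = ["How this change was developed:"]
            lines += [f"- Decision: {d}" for d in self.decisions]
            lines.append(f"- Human-written: ~{self.human_written_pct}%")
            lines.append(f"- Refinement iterations: {self.refinement_iterations}")
            lines.append(f"- Reviews during development: {self.reviews}")
            return "\n".join(lines)

    # Example: render the summary for pasting into a PR description.
    summary = SessionSummary(decisions=["kept the old API surface"],
                             human_written_pct=30,
                             refinement_iterations=5, reviews=2)
    print(summary.to_pr_section())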
There seems to be so much value in planning, but in my organization there is no artifact of the plan aside from the code produced and whatever PR description or change summary exists. It makes it incredibly difficult to assess the change in isolation from its plan/process.
The idea that Claude/Cursor are the new high-level programming language for us to work in introduces the problem that we're not actually committing code in this "natural language"; we're committing the "compiled" output of our prompting. Which leaves us reviewing the "compiled code" without seeing the inputs (e.g. the plan, prompt history, rules, etc.).
I have a design doc subdirectory and instead of "plan mode" I ask the agent to write another design doc, based on a template. It seems to work? I can't say we've looked at completed design docs very often, though.
One challenge with code review as an antidote to poor-quality gen-AI code is that we largely see only the code itself, not the process or inputs.
In the pre-gen-AI days, if an engineer put up a PR, it implied (somewhat) that they wrote their code, reviewed it implicitly as they wrote it, and made choices (i.e. why this is the best approach).
If Claude is just the new high-level programming language, in terms of prompting in natural language, the challenge is that we're not reviewing the natural language; we're reviewing the machine code without knowing what the inputs were. I'm not sure of a solution to this, but something along the lines of knowing the history of the prompting that ultimately led to the PR, the time/tokens involved, etc. (see the sketch below) may inform the "quality" or "effort" spent in producing the PR. A one-shotted feature and a multi-iteration feature may produce the same lines of code and general shape, but one is likely to be higher "quality" in terms of minimal defects.
Along the same lines, when I review a gen-AI produced PR, it feels like I'm reading assembly and having to reverse-engineer how we got here. It may be code that runs and is perfectly fine, but I can't tell what the higher-level inputs were that produced it, or whether they were sufficient.
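A sketch of the kind of provenance record I have in mind, attached to the PR as a file or comment so reviewers see the inputs, not just the output (every field name here is hypothetical):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class PrProvenance:
        """Hypothetical record of the prompting that produced a PR."""
        prompt_count: int  # prompts/iterations behind the final diff
        total_tokens: int  # rough token spend across the session
        plan_reviewed_by_human: bool
        one_shot: bool  # True if the diff came from a single prompt

    # A one-shotted feature and a multi-iteration feature can produce the
    # same diff; this metadata makes the difference in effort visible.
    record = PrProvenance(prompt_count=14, total_tokens=180_000,
                          plan_reviewed_by_human=True, one_shot=False)
    print(json.dumps(asdict(record), indent=2))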
Will there be an interest in vision-based wearables?
Google Glass - dead
Apple Vision Pro - dead
FB/Meta x Ray-Ban - dead soon(?)
It seems they can’t get over the social hurdle of having a camera strapped to your face, and the effects of that on people around you. I think the tech is neat, but it's not socially accepted enough as a concept to be viable. My sister is big into TikTok and films all the time, and it personally makes me hesitant to be nearby, as I’m not comfortable being filmed all the time.
I don't want people with camera glasses around me either. But the stupid thing is: the cameras don't even need to exist. Google Glass could show its notifications just fine without a camera. My Xreal Air works great without one.
It's the big tech companies that are pushing for pervasive cameras. Not consumers saying they can't live without a camera on their face.
It is almost certainly a problem with size, cost, and features.
The wearables are just too big, too expensive, and the feature set too small.
Much like with VR goggles, every problem they solve is solved far better and more cheaply with another device most people already have and use.
I don't think it has anything to do with the moral or social implications of taking pictures of people privately. The second any of the above are resolved, society will willingly give up even more privacy without a hiccup, as we've done every other time the choice was presented.
Agreed. But perhaps that’s the problem? Instead of trying to go instantly mainstream via the consumer market, perhaps the toehold is in niche professional/commercial markets? Or in niche consumer markets provided through a business (e.g., museums)?
It’s not a tech issue, it’s a marketing issue (and lack of imagination).
I think it goes beyond the social hurdle. I have an Oculus, and I just never use it. A phone or laptop screen generally just feels good enough. It's easier to start and stop using, and it doesn't feel like I'm shutting myself off from the world when I do.
I've been in big tech for 12+ years now. The first handful of years are definitely a grind to earn your spot, get a couple promos. After that though, it can become quite a bit easier to coast if that's what you're looking for. People know you, know you're probably valuable cause you're "senior" or "staff" and still here, and likely leave you alone. But yeah, as a newer engineer these days, it still requires the initial commitment to earn the privilege of coasting in a big tech company.
My biggest problem with using an LLM for coding is that it distances engineers from understanding the true implementation of a system.
Over the years, I learned that a lot of one's value as an engineer can come from knowing how things actually work. I've been in many meetings where very senior engineers argue back and forth, postulating how something works, until one engineer quietly taps away on their laptop, then spins it around to say "no, this is the code right here; this is how it actually works".