> Never mind the fact that they are literally able to introspect human cognition and presumably find non-verbal and non-linear cognition modes.
Are they, though? Or are they just predicting their own performance (and an explanation of that performance) on input the same way they predict their response to that input?
Humans say a lot of biologically implausible things when asked why they did something.
Dumb question: do the cybercab thingies not drive themselves? Having a safety driver doesn’t disqualify them if they’re autonomous the vast majority of the time. It just means they’re earlier in chasing 9’s than Waymo.
The characterisation of “level 5” autonomy as the car handling any conceivable circumstance (not that you explicitly made this claim here) is just silly. Humans can’t handle any conceivable circumstance either.
I wonder to what degree it depends on how easy you find coding in general. I find that for the early steps genAI is great for getting the ball rolling, but it rapidly becomes more work to explain what it did wrong and how to fix it (and repeat until it does) than to just fix the code myself.
The not-fun work isn’t on the song, it’s on you. Improving the song is a byproduct. This only really becomes apparent over time, but you’ll realise you were working on yourself all along.