I don't understand. I would assume most people don't think small kids should smoke crack. That doesn't mean they are automatically in favor of creating a 24/7 surveillance state just to prevent that from happening.
Not sure you're aware, but the joke may be on you. It's apparently Putin who convinced Trump and the Mullahs (not the band) to choose civility over barbarity by allowing the superyacht of one of his cronies to pass through the Strait of Hormuz.[0]
Russian trolling at its finest, truly. This timeline keeps raising the bar on the absurdity quotient.
>Part of me wonders if Karp might just be mocking recent rants by the Trump & Tech Bro's gang.
You haven't been following Palantir in the news during Trump's second term, I would wager. This is definitely not the case. And if it is, Karp is engaging in a multi-year performance-art gimmick.
Palantir and other Big Tech execs were given the rank of Lt. Colonel in the Army last year:
If it's interesting enough to note that there are mostly nuanced takes and positive vibes in a thread that an otherwise low-value meta comment is deemed a worthy top comment for saying so, then I suggest an auto-generated AI summary comment pinned to every long HN thread. This will surely save everyone the trouble...
(I am not claiming the top comment is AI-generated, only that an AI-generated summarization of the thread could function just as well in its stead, despite the occasional inaccuracies)
>people also want phones these days that are physically durable,
Anecdotally on this front, I have had to replace the screens of my iPhones at least three times in the past (different models). Incidentally, I have never needed to replace the screen of a phone that had a replaceable battery. YMMV, but this seems needlessly defeatist.
>maximum battery life
One could also claim that bespoke charging cables allow for faster charging or longer battery life, but I don't know any iPhone users who are crying a river over their deprecated non-standard chargers. But again, YMMV I guess.
I don't understand why people don't just say "This is wrong. Try again." or "This is wrong because xyz. Try again." This anthropomorphizing by asking why seems a bit pointless when you know how LLMs work, unless you've empirically had better results from a specific make and version of LLM by asking why in the past. It's theoretically equivalent to handing a brand-new LLM instance your chat history and asking it why the original gave such an answer... Do you want the correct result, or do you actually care about knowing why?
>Introspection mostly amounts to back-rationalisation, just like in humans.
That's the best-case scenario. Again, let's stop anthropomorphizing. The given reasons may be incompatible with the original answer upon closer inspection...
I definitely do this, along with the compulsion sometimes to tell the agent how a problem was fixed in the end, after investigating it myself when the model failed to do so. Just common courtesy after working on something together. Let's rationalize it as giving me an opportunity to reflect and rubber-duck the solution.
Regarding not just saying "try again": of course you are right to suggest that applying human cognitive mechanisms to LLMs is not founded on the same underlying effects.
But given the nature of training and fine-tuning/RL, I don't think it is unreasonable that instructing the model to reflect backwards could have a positive effect. The model might pattern-match on this and then exhibit a few positive behaviors. It could lead it to do more reflection within the reasoning blocks and catch errors before answering, which is what you want. Those blocks will attend to the question "what caused you to make this assumption", further encouraging this behavior. Yes, both mechanisms are exhibited through linear, forward-going statistical interpolation, but the concept of reasoning has proven that this is an effective strategy for arriving at a more grounded result than answering right away.
Lastly, back to anthropomorphism: it shows that you, the user, are encouraging deeper thought and self-correction. The model does not have psychological safety mechanisms that it guards, but again, the way the models are trained causes them to emulate them. The RL primes the model for certain behavior, i.e. arriving at an answer at some point rather than thinking for a long time. I think it's fair to assume that by "setting the stage" it is possible to influence which parts of that training activate.
While role-based prompting is not that important anymore, I think the system prompts of the big coding agents still use it, suggesting some, if slight, advantage to putting the model in the right frame of mind. Again, very sorry for that last part, but anthropomorphism does seem to be a useful analogy for a lot of the concepts we are seeing (the reason for this lying in the farther-off epistemological and philosophical regions, both on the side of the models and on ours).
By what metric? You have to go through Scandinavia, Germany, Spain, and a few Eastern European countries before you get to the US on voter turnout. Not to mention labor unions striking as a form of political protest (e.g. Italian labor unions striking over the Gaza war). And depression prevalence also seems to be higher in the US. Did you mean worse than Europe instead of "almost as bad"?