Hacker News

My intuition is that, since AI assistants are fictional characters in a story being autocompleted by an LLM, prompting mechanisms that read as human interactions with language (the kind that actually appear in the pretraining data) have a surprising advantage over mechanisms modeled on speculation about how the brain works, or on abstract concepts.


This is also why LLMs get 80% of the way there and then crap out on the logic: they were trained on all the open-source abandonware on GitHub.



