
That's not exactly true. Every time you start a new conversation, you get a new LLM for all intents and purposes. Asking an LLM about an unrelated topic toward the end of a ~500-page conversation will get you vastly different results than at the beginning. If we could get to multi-thousand-page contexts, it would probably be less accurate than a human, tbh.


Yes, I should have clarified that I was referring to memory of training data, not of conversations.


Recall of training data also deteriorates quite quickly as the context gets longer.



