Hacker News

Your skepticism is well placed. Every time a new quantization or compression technique drops, the immediate response is to just scale up context length or run a bigger model to fill whatever headroom was freed up. It's Jevons paradox applied to VRAM - efficiency gains get eaten by increased usage almost immediately.
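A back-of-envelope sketch of the claim, using illustrative Llama-2-7B-like shapes (7B params, 32 layers, 32 KV heads, head dim 128 — assumptions, not measurements): quantizing fp16 weights to 4-bit frees ~10 GiB, and a fp16 KV cache at ~0.5 MiB per token soaks that up with roughly 20k extra tokens of context.

```python
# Back-of-envelope: VRAM freed by 4-bit weight quantization vs. KV-cache growth.
# All model shapes below are illustrative assumptions, not measurements.

def weight_bytes(n_params: float, bits: int) -> float:
    """Bytes needed to store the model weights at a given precision."""
    return n_params * bits / 8

def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int, head_dim: int,
                             bytes_per_elem: int = 2) -> float:
    """fp16 KV cache: K and V each store n_layers * n_kv_heads * head_dim per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

GiB = 1024 ** 3

n_params = 7e9
fp16_weights = weight_bytes(n_params, 16)          # ~13.0 GiB
q4_weights = weight_bytes(n_params, 4)             # ~3.3 GiB
freed = fp16_weights - q4_weights                  # headroom from quantization

per_token = kv_cache_bytes_per_token(n_layers=32, n_kv_heads=32, head_dim=128)
extra_tokens = freed / per_token                   # context that refills the headroom

print(f"freed by 4-bit quantization: {freed / GiB:.1f} GiB")
print(f"KV cache per token:          {per_token / 1024:.0f} KiB")
print(f"extra context that fills it: {extra_tokens:,.0f} tokens")
```

So roughly all of the savings disappears into a context window a few tens of thousands of tokens longer, which matches the pattern described above.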


