
Won't the need for training increase as demand for specialized, smaller models grows and we have to train their many variations? Also, what about models that continuously learn/(re)train? Seems to me the need for training will only go up in the future.


That's the thing: nobody knows. LLM architecture is constantly evolving and people are trying all kinds of things.




