
We absolutely do; we know exactly how LLMs work. They generate plausible text from a corpus. They don't accurately reproduce data or text, they don't think, they don't have a world view or a world model, and they sometimes generate plausible yet incorrect output.
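To make "generate plausible text from a corpus" concrete, here is a minimal sketch of autoregressive sampling using a hypothetical bigram table (the table and function names are invented for illustration; a real LLM conditions a neural network on the entire preceding context rather than just the last token):

```python
import random

# Hypothetical next-token probabilities, as if estimated from a corpus.
# "end" marks the end of generation.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "dog": {"ran": 0.7, "end": 0.3},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(start, seed=0):
    """Sample one continuation, one token at a time."""
    rng = random.Random(seed)
    tokens = [start]
    while tokens[-1] != "end":
        dist = bigram_probs[tokens[-1]]
        # Weighted sampling: the output is plausible under the corpus
        # statistics, but nothing checks whether it is *true*.
        next_tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_tok)
    return " ".join(tokens[:-1])

print(generate("the", seed=1))
```

The sketch shows both sides of the argument at once: the mechanism is fully known (sample the next token from a learned distribution), yet that description alone says nothing about whether the process amounts to "thinking" or encodes a world model.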




How do they generate the text, though? Because to me this sounds like saying "we know how humans work: they make sounds with their mouths, they don't think, and they don't have a model of the world..."


