
LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.




They sometimes stumble and hallucinate out of distribution. It's rare, and it's even rarer that the hallucination is actually a good one, but we've figured out how to enrich uranium, after all.


