Additionally, I don't think that these kinds of failures say much about overall intelligence. Humans are largely visual creatures, and we fall prey to innumerable visual illusions where we fail to see what's actually there or imagine something that isn't there under certain visual patterns.

LLMs are largely textual creatures, and they fail to see things that are there or imagine things that aren't there under certain textual patterns.

I don't think you would say a human "isn't really intelligent" because they imagine grey spots at the intersections of black squares on a white background even though the spots aren't there.
