Hacker News
mxkopy on May 9, 2023 | on: LLMs are not greater than the sum of their parts: ...
Transformers were originally made for language translation. So the way I think about it, GPT models translate questions into answers. Hence the hallucinations: some questions can't be answered by associative reasoning and pattern matching alone.
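The failure mode the comment describes can be sketched with a deliberately crude toy: a purely associative question-to-answer store (this is an illustration of "pattern matching always returns *something*", not a claim about how GPT actually works; the questions and answers here are made up for the example).

```python
from difflib import get_close_matches

# Hypothetical toy "answer memory": a purely associative question -> answer store.
MEMORY = {
    "what is the capital of france": "Paris",
    "what is the capital of italy": "Rome",
    "who wrote hamlet": "William Shakespeare",
}

def answer(question: str) -> str:
    """Answer by retrieving the closest memorized question.

    With cutoff=0.0 a match is always found, so the matcher always
    answers confidently -- even for questions outside its memory.
    """
    q = question.lower().strip().rstrip("?")
    match = get_close_matches(q, MEMORY.keys(), n=1, cutoff=0.0)
    return MEMORY[match[0]]

print(answer("What is the capital of France?"))   # Paris
print(answer("What is the capital of Finland?"))  # confidently returns a memorized answer anyway
```

Asked about Finland, the matcher retrieves whichever memorized question looks most similar and returns its answer with the same confidence as a correct lookup: a plausible-sounding wrong answer, which is roughly the shape of a hallucination.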