There are some similarities, but they are absolutely overwhelmed by the differences. A handful of superficial similarities is not enough to draw a meaningful comparison. The act of teaching a human is very different from “training” an LLM because humans have the power of the whole brain and body, not just some information-integration part that the brain and LLMs may (or may not) share. Humans can be creative in ways that LLMs manifestly can’t be. Humans can act like mere token predictors, but we can (and routinely do) also transcend that, question it, play with it. LLMs can’t.
> but we can (and routinely do) also transcend that, question it, play with it. LLMs can’t.
Maybe not in a single inference, but you can have an LLM question itself by running another inference that takes its previous prompt and response as input. You can see this in a deep research agent loop: it finds some data, goes looking for other data to back that up, discovers the original claim was actually incorrect, and then changes its mind.
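Roughly, the loop looks like this. This is a minimal sketch, not tied to any particular provider; `call_llm` is a hypothetical stand-in for whatever model API you use.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: plug in a real model client here.
    raise NotImplementedError("replace with your model call")

def research_loop(question: str, max_rounds: int = 3) -> str:
    # First pass: get a draft answer.
    answer = call_llm(f"Answer the question: {question}")
    for _ in range(max_rounds):
        # Feed the draft back in and ask the model to check it.
        critique = call_llm(
            f"Question: {question}\n"
            f"Draft answer: {answer}\n"
            "List any factual errors or unsupported claims in the draft. "
            "If there are none, say 'no errors'."
        )
        if "no errors" in critique.lower():
            break
        # Revise using the critique; this is where it can "change its mind".
        answer = call_llm(
            f"Question: {question}\n"
            f"Draft answer: {answer}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, correcting the issues above."
        )
    return answer
```

Each step is still just next-token prediction over whatever is in the context, but the outer loop is what produces the "found it was incorrect and changed its mind" behaviour.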
I think this is anthropomorphising far too much. I’ve seen similar patterns, but the end result is still nothing that comes close to what you get from a human.