It takes roughly a week on a single GPU to train AlexNet, which has human-level ImageNet performance. Say 500 W for the GPU versus around 10 W for a human brain: that's about 84 kWh for the model and about 175 kWh for the baby (over 3 years at 16 h/day). And that's before counting the half billion years of architecture and initialization tuning the baby gets for free. I think the model compares very favorably.
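The back-of-envelope figures above are easy to check; a quick sketch, assuming the power and duration numbers stated in the comment (500 W for a week, 10 W for 3 years at 16 h/day):

```python
# Sanity-check of the energy comparison (numbers from the comment above).
GPU_WATTS = 500
BRAIN_WATTS = 10

gpu_kwh = GPU_WATTS * 7 * 24 / 1000            # one week of GPU training
brain_kwh = BRAIN_WATTS * 3 * 365 * 16 / 1000  # 3 years at 16 h/day

print(f"GPU:   {gpu_kwh:.1f} kWh")    # 84.0 kWh
print(f"Brain: {brain_kwh:.1f} kWh")  # 175.2 kWh
```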
I don't. This comparison is obscenely flawed in obvious ways. The energy used to train the model went only into model training, while the energy used by the baby powered a myriad of tasks besides image recognition, and the baby can presumably apply the knowledge gained in novel ways. Not only can a baby tell a cat from a dog, it can also say what the difference is in audible language, fire neurons to operate its musculoskeletal system (albeit poorly), and perhaps even no longer shits its pants. Apples and oranges. Is model performance getting more impressive every day? Definitely. Has anyone actually demonstrated "AI"? Still nope.
The context of this thread is the cost of training brains and models on comparable tasks. Not that the model is comparable to a human in every way.
If you want to be pedantic, then only 6% of the human brain is the visual cortex, but then you also have to grant that AlexNet is horribly inefficient to train. So you cut the brain cost to 6% and the model cost to 1%. They're still within an order of magnitude of each other (favoring the model), which I'd say is pretty close in terms of energy usage.
Sure, but my point is that the energy costs are within the same order of magnitude.
If you want to be pedantic, then only 6% of the human brain is the visual cortex. But AlexNet is also an inefficient model: something like an optimized ResNet is about 100x as efficient to train. So now you're at roughly 10.5 kWh for the baby and 0.84 kWh for the model.
You can argue the details further, but I'd say the energy cost of both is fairly close.
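The scaled numbers in the pedantic version of the argument can be checked the same way; a sketch assuming the ~175 kWh and ~84 kWh baselines from upthread, the 6% visual-cortex share, and the 100x efficiency claim for an optimized model:

```python
# Scaled comparison: visual cortex as ~6% of the brain's energy budget,
# and an optimized model assumed ~100x cheaper to train than AlexNet.
# Baseline figures (from upthread): 175.2 kWh (baby), 84.0 kWh (AlexNet).
baby_kwh = 175.2 * 0.06   # visual-cortex share only
model_kwh = 84.0 / 100    # 100x training-efficiency assumption

print(f"Baby (visual cortex): {baby_kwh:.2f} kWh")  # ~10.51 kWh
print(f"Optimized model:      {model_kwh:.2f} kWh") # 0.84 kWh
```

Note the 100x assumption puts the model at about 0.84 kWh, still within roughly an order of magnitude of the brain's visual-cortex share.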
You’re missing my original point, which is about ongoing robustness: robustness that works in the low-data regime and allows pilots or astronauts to make reasonable decisions in _completely novel_ situations (to take just one example).
The networks we have are trained once and work well on their training distribution; they’re even robust to outliers within that distribution. But they aren’t robust to surprises: unanticipated changes in the assumptions, rules, or patterns underlying the data.
Even reinforcement learning still struggles with this, since self-play effectively requires being able to run your dataset/simulation fast enough to discover new policies that work. Humans don’t have time for that, much less access to a full running simulation of their environment. We do build internal world models, though, and I believe there’s some work in that direction.