Hacker News

All of this AGI risk stuff hinges on the idea of us building an AGI, while nobody has any idea of how to get there. I need to finish my PhD first, but writing a proper takedown of the "arguments" bubbling out of the hype machine is the first thing on my bucket list afterwards, with the TL;DR being "just because you can imagine it doesn't mean you can get there".


Are you rephrasing the arguments against man-made flight machines from the early 20th century on purpose or accidentally?


Google just released a paper showing a language model beating the average human on more than 50% of tasks. I'd say we have a pretty good idea of how to get there.


Okay, so how do we go from "better than the average human on 50% of specific benchmarks" to "AGI that might lead to human extinction", then? Keeping in mind the logarithmic improvement observed with the current approaches.




