
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because, it seems, they are blaming poor Llama performance on the manager.

Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. That isn't going to accelerate AI progress. It just makes ChatGPT more profitable.

Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?



Not the person you're responding to, but I think the salaries paid to researchers / research-engineers at all the major labs very much count as eye-watering.

What happened at Meta is ludicrous, but labs are clearly willing to pay top dollar for actual research talent, presumably because they feel it's still a bottleneck.


Having the experience to build a frontier model is still a scarce commodity, hence the salaries, but to advance AI you need new ideas and architectures, which isn't what you're buying there.

A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (it isn't in the training data)? Maybe a human would train it? Couldn't that human train another developer instead, if people really were the bottleneck?

People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.




