> The question is, to what extent would humans still set goals and priorities, and how.
From what I hear about the US and UK governments, even their elected representatives don't really set goals and priorities, so the answer is surely "humans don't".
I get your point, but I’d say they do set goals; they’re just so bad at achieving them that it’s hard to tell.
Hopefully AI would help us better achieve our goals, but they still need to be our goals. I’m just not sure what that means. I don’t think anybody does.
That’s a major problem here: if we can’t reliably articulate our goals in unambiguous terms, how on earth can we expect AI to help us achieve them? The chances that whatever they end up achieving will match what we actually want after the fact seem near zero.
I'd say Maslow's hierarchy[0] is a great starting point. Program that properly and faithfully (no backdoors, military exceptions, etc., whatsoever), along with Asimov's 3 laws[1], and it should be pretty hard to find fault with the system that would result.
This is the "draw the rest of the owl"* of the alignment problem.
Or possibly the rest-of-owl of AI in general: consider that there are still no level-5 self-driving cars, despite road traffic law existing and the developers knowing about it since before they started trying.
The film version of I, Robot had this right: the three laws are a manifesto for totalitarianism. The AI cannot sit on the sidelines as long as there is anything it can do to prevent crimes or abuse of any kind, no matter how intrusive that intervention may be.