"In the short term, this could mean research into the economic effects of AI to stop smart systems putting millions of people out of work."
This seems unfortunate and rather hard to address. Our current economic model encourages improving the efficiency of systems, which seems like a good thing. It's really too bad that people "need" jobs. Jobs should create value or they shouldn't exist. Artificially "creating" jobs to prop up the system feels like fighting against reality and a bad long-term plan.
We are now entering a new age where our current form of capitalism will not work. You see, when there was enough demand on the workforce (as in, there was as much or more work to do than there were workers), things worked reasonably well. Now we are entering an era where there is much less demand for workers, yet we are not yet productive enough to just sit back, relax, and let machines handle 99.9%+ of the work.
Think about it: 50% unemployment is terrible. 100% unemployment is not unemployment. It means there are enough resources to go around that nobody has to work. There is so much food, shelter, and entertainment that we don't need to get up in the morning and go to work.
Obviously, 100% unemployment cannot happen. In all likelihood, we'll still need doctors, chefs, artists, etc. But that's the direction we are going in, and we'd better figure out how to set up a new economy to adapt to the new realities.
I envision a shift in mindset. Personally, if I have more money than I could spend in a lifetime and choose to work to make myself even richer, I don't care if 10 other families are living off my wealth. People will have a choice: do something productive for society and live a slightly better lifestyle, or pursue personal fulfillment. We'll see a lot more artists and far fewer custodians. In the long run, I believe this shift will be a good thing, but it will require us to stop using terms like "moocher class".
Austrian economists like to point out that human wants and needs are effectively infinite (mostly due to the "wants" part), so there is fundamentally no reason that our economy should cease functioning just because the "needs" portion becomes increasingly automated and efficient.
The problem is getting people to pay for the "wants" part. I'm worried that the internet has accustomed people to treating creative content as a free resource by nature of being on the internet. It's the most efficient distribution mechanism for a lot of creative works, but the economics of an ad-filled internet experience with the user as product is very disturbing on a grand scale.
The whole dream of economic progress since the Enlightenment was that our needs would become increasingly solved so that we could focus on what we really enjoy as human beings. To quote John Adams:
> "I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain."
But we live in a world with an ever expanding supply of people and a rapidly diminishing set of "necessary" jobs. If we want to avoid the end game of that situation (revolution) we need to find ways to get the economics of the unnecessary-but-still-valuable to work.
I agree with you to some extent but I am not sure the professions you mention are actually as safe from automation as they might seem.
For instance, a radiologist with a very long education behind them who makes $300K is at greater risk of being replaced by image-recognition software than a house cleaner is.
With regard to art: music is already being composed in large part by machines, and we are at the point where humans can't tell the difference between a piece of music made by a human and one made by a machine.
What are the better alternatives for allocating resources? Any distribution mechanism will need a standard metric for measuring and dividing piles of resources. Money is that metric.
I do agree with you about work though. Assuming a future government structure similar to our current one, a humane economic policy will probably need to provide some form of universal basic income.
I'm still not convinced that work is obsolete or becoming dangerous... As long as humans have desires (however odd, e.g. wanting the bluest and spikiest hair), and as long as those desires differ from person to person, there will be incentives to exchange value. Money is the unit of measure, and work is the effort directed at getting that value for you. A side effect of all this is that we make things, so is work dangerous? Perhaps /we/ are, since work is a byproduct of our way of being.
If money is three things (unit of measure, holder of value, and method of exchange), perhaps there is a way of dividing these in two or more subsets. Measure and exchange seem difficult to separate; perhaps not being able to "hold" money could be interesting.
I agree that there's a benefit to making processes more efficient by replacing human labor with AI where possible, but I interpreted this to refer to systems that have wide economic effects, like robosigning foreclosures or selling company stocks.
Consider the wide economic effect of self-driving cars (once they are finally highly reliable). This one application of AI could almost single-handedly push us over the edge.
The job decimation will be huge: truck drivers, delivery drivers, taxi drivers, bus drivers, the automotive insurance industry, traffic enforcement, collision repair shops, emergency services, car sales, car manufacturing (fewer accidents, more vehicle sharing).
I'm sure it will create a few new job types, but a couple of orders of magnitude fewer jobs than it destroys. Hopefully the world considers this a good thing.
Honestly, it seems that most people won't see the economic pain this will cause until it has happened. Unfortunately, I'm sure unions and industry groups see this train coming and will put up roadblocks (pun intended) to delay the inevitable rollout of the technology.
I don't understand why AI eliminating jobs is considered a problem. If any job can be done by a computer program, it simply means that humanity has outgrown that kind of work and that nobody should do that mind-numbing crap anymore. Let's not forget that the word "computer" used to mean "guy who sits in an office of an accounting firm and adds numbers all day long".
That's fantastic, as long as there are alternative jobs for those people to go to that they can feasibly perform. This has historically been the case with technology replacing work, but it's not clear whether it's the case this time.
If there are no (or insufficient) jobs for people to go to, in a society that predicates your ability to live a pleasant life on having one, you have a huge social problem. Our current system is not set up to cope with 40% of the population being out of work.
Because there are too many people and people need to feed themselves and their families.
For example, suppose your job were made obsolete in 20-30 years: there was no longer any need to program or manage computers. All of a sudden the knowledge in your head is basically useless, and you have likely not developed any other meaningful skill set.
What do you do and what does society do for you?
THAT is why eliminating jobs is a problem. At some point the jobs simply aren't replaced by other things, or the other things are menial rather than meaningful tasks. If you eliminate enough jobs, there isn't enough money to keep the consumer-based economy afloat, and then the companies that own the robots go out of business, because the people who buy their products no longer have jobs or money to afford them.
There are no clear solutions to this problem either.
But what is "AI"? Once something in AI gains traction, it quickly becomes part of the standard computer-science-toolkit.
For instance, programmatically tuning some variables can be considered a standard task, done either with brute force or with a small search over the possible combinations. This was once considered AI, but is now a common approach. AI now would probably mean tackling harder problems with genetic algorithms or similar. But where do we draw the line?
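The brute-force tuning mentioned above can be sketched in a few lines. This is a minimal illustration with a made-up objective function and candidate ranges, not a real tuning problem:

```python
import itertools

# Toy objective: score a configuration of two tunable variables.
# The function and the candidate ranges are invented for illustration;
# the "best" configuration is x=3, y=1 by construction.
def score(x, y):
    return -((x - 3) ** 2 + (y - 1) ** 2)

# Brute-force search: evaluate every combination of candidate values
# and keep the highest-scoring one.
candidates_x = range(0, 6)
candidates_y = range(0, 6)
best = max(itertools.product(candidates_x, candidates_y),
           key=lambda pair: score(*pair))
print(best)  # (3, 1)
```

Exhaustive search like this was once branded "AI"; today it is a one-liner, which is exactly the moving-goalposts point being made.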
A nice and steep tax on capital gains would do fine. As technology progresses and machines contribute more and more value, the income they generate returns to those who financed them.
I think the perceived danger here is more the lack of social acceptance of unemployed people. I think, however, that if a large part, or even a majority, of the population is unemployable, that will necessarily lead to social acceptance. People will simply have a lot of leisure time.
It's more than that. You hit on it in the last part of your comment: this is about having tons of leisure time. How is it parceled out? How do people stay engaged in life and happy?
Remember, there are already lots of rich people who have as much leisure time as they want, and it's not a good thing for many of them. As a study posted on HN a while back put it: you can have a meaningful life, or you can have a happy life; they are not the same thing. The more people who have meaningful lives, whether sweeping streets, inventing new things, or writing novels, the better off we all are. The more people who choose simply to be happy, the more society will stagnate and die.
So this isn't some case of somebody telling somebody else what to do. Assuming we are entering the age of "robots do everything", our species has a crisis of meaning at hand.
But even in leisure time there are things to achieve, for example learning new skills, exercising and gaining reputation in virtual worlds. I think the concept of "meaning in life" is more flexible than often assumed.
It is not just social acceptance, though. It is also the problem that one's livelihood is closely tied to one's employment.
With both these things, the question is how the social acceptance and the detachment of livelihood from employment comes about. Will it require revolutions and civil war with accompanying bloodshed when the downtrodden unemployed rise up? Or will we be collectively smart enough to reform our economic systems before the rift through society becomes too deep?
Maybe I'm thinking in too simple terms, but I can't imagine this being a major problem, because it will affect people from all classes in the same way, and the change will probably be fairly quick (once the technology is available, everyone will want it immediately to remain competitive). Intelligence is distributed along a Gaussian bell curve, so automation will hit the lower, middle, and upper classes equally, making the necessity of a basic income very quickly obvious. I don't think the lower class will be excluded from such a social system.
I personally think the bigger risks are AIs themselves, used as weapons or when they go out of control.
"it will affect people from all classes in the same way"
No. Given a roughly capitalist system like today's, the vast majority of the economic benefits of AI are captured by "capital"[1]. People from a sufficiently rich and powerful background will not suffer like those who have no capital to speak of. Only those without capital need a basic income.
[1] Which is a problematic term, hence the quotation marks.
I mean that more in personal terms ("look, this machine is more skillful/faster/smarter than you") than in economic terms. There will be a huge impact on the service sector, so the upper and middle classes will be able to personally relate to the lower class and show empathy. I think the middle class will mainly promote the changes.
Or, more people (all those becoming unemployed because of advances in AI) will chase the few jobs remaining (those not taken over by AI), creating three classes:
A) the 1%: capitalists financing (and controlling) AI, who will reap phenomenal rewards once AI has taken over the production of basic utilities, food, and water distribution. Only political control will keep AI monopolies from appearing.
B) the 49.5%: those lucky enough to get a job in one of the areas not taken over by AI. They will have to be extremely competent and accept lower salaries: within a very limited (and not necessarily very complicated) set of skills, namely those not taken over by AI, they are competing with 99% of the human population.
C) the other 49.5%: those who are not making the cut to get a job.
(the sizes of the groups B & C obviously depend on the number of available jobs)
People will be able to move from B to C, but never to A (since building an AI empire from scratch is impossible, especially once capital has become extremely concentrated).
People in B are barely making ends meet, and people in C are in continuous existential danger. Political engagement drops to a minimum. Capitalists collude to create monopolies, raise prices and basically destroy democracy. The human race splits in two distinct classes, and the lower class is basically enslaved.
As I wrote below, I think that increasing machine intelligence will hit all classes, in both personal and economic terms. As soon as robots outperform many people in currently middle- to upper-middle-class service-sector jobs, enough people will be affected to make something like a basic income a necessity.
Money is not the only problem. Unemployment leads to loneliness, demotivation, social stigma. This is not only caused by social pressure to have a job. Even rich people work. We need some form of Minimal Activity.
The letter will have no effect. You could just as well sign a letter to stop drugs.
Why do shareholders of big corporations profit from science in a grossly non-proportional way, while more than 50% of world's population has to live on under $2 a day?
It is time for the world's greatest minds to start thinking about how to fix capitalism, because it seems to be seriously broken.
And we need it fixed more than we need e.g. iPhone 7.0, or Google Adwords 2.0.
Clinical medicine is a better analogy than technical rollouts. But you're right, modern development economics argues that social fixes (policies) should be empirically driven using evidence-based randomized control trials.
Esther Duflo has been pushing this approach at the MIT Poverty Action Lab. The main insight is to reject grand generalizations and broad theorizing. Sitting in an armchair pondering how to "fix capitalism" is unlikely to lead to useful lines of thought. Society is too complex a system; cause-and-effect relationships can be highly localized and context-dependent, and formulaic thinking just leads one down ideological rabbit holes.
Well, in an old-school way, that's already what's happening. We have many countries on this planet (the UN recognizes about 200), and some of the most modern are actually collections of smaller states organized as federations: the USA, Germany, Switzerland, etc.
In fact, you guys had the unfortunate role of finding out how well USSR-style (I assume) communism works.
I don't think I did. What's a proportionate distribution? Proportionate to what and for whom? The whole "market" or "system" of innovation is driven by "disproportionate" distribution. If people don't have the potential to get rich creating a New Thing or Efficient Method, what's to encourage them to even consider Something Better?
> If people don't have the potential to get rich creating a New Thing or Efficient Method, what's to encourage them to even consider Something Better
Let me give an example. In academics, as pointed out here before, smart young people have to survive for a long time on relatively low pay, and without a real prospect of making a lot of money in the long term. In other words, they do this work because they like to do it, and/or for idealistic reasons.
If we can (somehow) change the system such that smart and idealistic people get in charge instead of greedy people, that could be a step forward.
Again, I'm just giving an example, to show you my line of thinking. I don't claim that this is the correct approach.
"In other words, they do this work because they like to do it, and/or for idealistic reasons." I'd like to see a study on this. In my experience, this is not the case.
My peers and I, while in college, did this work because one day it would pay off. If we'd been told that our pay would always be meager, and still continued to do the work, perhaps one could attribute it to idealism or simply liking the work. Most likely, however, we'd have relegated the work we liked to hobby status while we pursued something that would pay better.
I wonder: why should we care what Musk and Hawking think about AI? This article doesn't mention much bad stuff, but they have said earlier that we should be afraid of AI and the singularity.
Having just written my thesis on AI, I probably know far more about this than those two. And we're sooo far away from AI being a superforce that destroys mankind.
You can probably assume they are in contact with the leading experts in the field. From the letter:
The initial version of this document was drafted by
Stuart Russell, Daniel Dewey & Max Tegmark, with major
input from Janos Kramar & Richard Mallah, and reflects
valuable feedback from Anthony Aguirre, Erik
Brynjolfsson, Ryan Calo, Tom Dietterich, Dileep George,
Bill Hibbard, Demis Hassabis, Eric Horvitz, Leslie Pack
Kaelbling, James Manyika, Luke Muehlhauser, Michael
Osborne, David Parkes, Heather Roff Perkins, Francesca
Rossi, Bart Selman, Murray Shanahan, and many others.
They aren't the only ones who signed it, they just get top media billing because they always do. Other people who signed include AI researchers. Peter Norvig, for one.
I haven't read your thesis, but does it take into account things like exponential progress, deep learning, and AI gaining higher and higher privileges and control over infrastructure?
>> And we're sooo far away from this being a problem.
If we're far away from this being a problem, we may be far away from understanding how to solve it. We certainly wouldn't want the former to outpace the latter, given what is at stake.
Properly implemented, AI would, and should, succeed human intelligence. We are nowhere near even understanding it as a problem, much less solving it. I attended an AGI conference a couple of years ago (summer of 2012, IIRC). The general feeling was that we are still a lifetime away from a solution.
I don't know much about the philosophy of AI and I'm only familiar at a basic level with modern AI algorithms. From what I have been exposed to I don't see any reason to think AI is any more than a set of statistical frameworks. Is there any reason to believe that these statistical frameworks are comparable to biological intelligence?
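To make "a set of statistical frameworks" concrete, here is a toy nearest-neighbor classifier (data and labels invented for illustration). It assigns labels purely by geometric proximity, with no notion of meaning, which is the kind of mechanism the question is asking about:

```python
import math

# 1-nearest-neighbor classification: label a point with the label of
# its closest training example. Pure statistics/geometry, no "understanding".
def predict(train, point):
    # train: list of ((x, y), label) pairs; point: (x, y)
    nearest = min(train, key=lambda item: math.dist(item[0], point))
    return nearest[1]

# Made-up training data: two clusters labeled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(predict(train, (1, 1)))  # a
print(predict(train, (5, 4)))  # b
```

Whether stacking and scaling mechanisms like this ever amounts to something comparable to biological intelligence is exactly the open question.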
In this context, the term "Artificial Intelligence" refers to a man-made emulation of the physical process that allows humans to reason about and shape the world around us in pursuit of our goals. This hasn't been achieved yet, but humans are proof that it is physically possible (and they arose through evolution, which is not even a strongly guided process).
Unless you believe there is something inherently special and unique about humans that make it impossible for this physical phenomenon to be replicated artificially, there is absolutely something to worry about here.
It sounds like you're talking about the kinds of AI in use today? That's not what the cautions are about, since current "AI", however good at reading individual words or flying drones, is not yet capable of human-level thought. The cautions are about trans-sapient AI, which doesn't exist yet. Even if it's simply a beefed-up "set of statistical frameworks" linked in the right way to make a computer behave like a human: humans develop and use nuclear weapons, humans go on shooting sprees, humans decide to go to war...
I see, I hadn't thought about that. So you mean AIs that are not necessarily intelligent or conscious but are just more integrated into our lives, for example in the military or law enforcement?
I can see how a bug in those kinds of systems could cause things like that without necessarily being conscious or intelligent.
> Research into AI, using a variety of approaches, had brought about great progress on speech recognition, image analysis, driverless cars, translation and robot motion, it said.
How much of this progress required training data generated by working humans? What would feed future statistical algorithms if this source of training data was greatly reduced?
So long as we can pull the plug or disconnect the interfaces we'll be ok with AI. Once we can't, then we have a problem.
In effect, the scariest AI is distributed, self-propagating, and can't be unpowered. Effectively a virus. I have yet to see a meaningful distributed AI, even in concept.
But it's not inconceivable that in the near future an AI could train itself to mutate (even if that means interfacing with Mechanical Turk or freelance websites and paying humans to do it).
My real concern with all of this is always the uncontrolled ecosystem of steadily evolving viruses and malware. We will never have control of that... and there is no telling what it can become in the future.
I think it will be a simple error induced by some random mutation in one of these malicious programs, not some vast artificial intelligence, that causes us problems in this arena first.
We need more AI working in the domain of computer security: systems that learn, under specific guidelines, to restrict computational behavior given specifications of expected behavior.