Jobs are an invention of humanity. About 50% of people dislike their job. People spend much of their lives working. Poverty and inequality are a choice, made when society chooses poorly.
Is that true? In communities or tribes of antiquity, I assume there was some trading of the fruits of different labours before coinage. Still an 'invention' beyond baser individual survivalism.
On the plus side, if there really is no value to labour, then farm work must have been fully automated along with all the other roles.
On the down side, rich elites have historically had a very hard time truly empathising with normal people and understanding their needs even when they care to attempt it, so it is very possible that a lot of people will starve in such a scenario despite the potential abundance of food.
It's either:
1) the rich voluntarily share the means of production so everyone becomes equal,
2) the poor stage successful revolutions so they gain access to the means of production and everyone becomes equal,
3) the poor starve or are otherwise eliminated, and the survivors will be equal.
All roads lead to equality when the value of labour becomes 0 due to 100% automation.
Over history, lots of underclasses have been stuck that way for multiple generations, even without the assistance of a robot workforce that can replace them economically.
Some future rich class so empowered would be quite capable of treating the poor like most today treat pets: fed and housed, but mostly neutered, with the rest going through multiple generations of selective inbreeding for traits the owners deem interesting.
Non-human pets don't have the capacity to rebel, though; make humans into pets and there will again be the constant danger of rebellions, as with slavery in the past, but without the economic incentive to offset it.
On the first, non-human pets rebelling is seen every time an abused animal bites its owner.
On the second, the hypothetical required by the scenario is that AI makes all human labour redundant: that includes all security forces, but it also means the AI moving around the security bots and observing through sensors is at least as competent as every human political campaign strategist, every human propagandist, every human general, every human negotiator, and every human surveillance worker.
This is because if some AI isn't all those things and more, humans can still get employed to work those jobs.
Not at all. A rebellion is an organized effort, with an implicitly delayed response to grievances. I can't think of any non-humans that organize their efforts as such. It would be a heck of a thing if a group of dogs were to plan how they'd take out their masters.
All those "jobs" you describe - and many more - would cease to be a thing, as their purported basis for existence would be no more. Any role that doesn't concretely contribute to our survival and advancement is just "busy work". People could theoretically keep some simulation of those roles going as a kind of retirement activity, but it'd be meaningless.
> Not at all. A rebellion is an organized effort, with an implicitly delayed response to grievances. I can't think of any non-humans that organize their efforts as such. It would be a heck of a thing if a group of dogs were to plan how they'd take out their masters.
Dogs in particular are pack animals, self-organisation amongst them wouldn't be at our level but that doesn't mean it doesn't exist.
> All those "jobs" you describe - and many more - would cease to be a thing, as their purported basis for existence would be no more. Any role that doesn't concretely contribute to our survival and advancement is just "busy work". People could theoretically keep some simulation of those roles going as a kind of retirement activity, but it'd be meaningless.
Yes?
I think you've missed the point, though.
When your opponent has all those skills to that level and doesn't sleep and simply applies all the surveillance tech that has already been invented like laser microphones and wall-penetrating radar that can monitor your pulse and breathing, how would you manage to rebel?
How would you find a like mind to organise with, when your opponent knows what you said marginally before the slow biological auditory cortex of the person you're talking to passes the words to their consciousness? Silicon is already that fast at this task.
And that's assuming you even want to. Propaganda and standard cult tactics observably prevent most rebellions from starting. LLMs are already weirdly effective at persuading a lot of people to act against their own interests.
> The question is, to what extent would humans still set goals and priorities, and how.
From what I hear about the US and UK governments, even the elected representatives of these governments don't really set goals and priorities, so the answer is surely "humans don't".
I get your point, but I’d say they do set goals, they’re just so bad at achieving them that it’s hard to tell.
Hopefully AI would help us better achieve our goals, but they still need to be our goals. I’m just not sure what that means. I don’t think anybody does.
That’s a major problem here: if we can’t reliably articulate our goals in unambiguous terms, how on earth can we expect AI to help us achieve them? The chances that whatever they end up achieving will match what we actually like after the fact seem near zero.
I'd say Maslow's hierarchy[0] is a great starting point. Program that properly and faithfully (no backdoors, military exceptions, etc whatsoever) along with Asimov's 3 laws[1] and it should be pretty hard to find issue with the system that would result.
This is the "draw the rest of the owl"* of the alignment problem.
Or possibly the rest-of-owl of AI in general: consider that there are still no level-5 self-driving cars, despite road traffic law existing and the developers knowing about it since before they started trying.
The film version of I, Robot had this right: the three laws are a manifesto for totalitarianism. The AI cannot sit on the sidelines as long as there is anything it can do to prevent crimes or abuse of any kind, no matter how intrusive that intervention may be.
If there is truly 100% automation (including infantry/police), the most likely scenario is not any of the above: most people will be kept on some kind of minimum sustenance, enough to keep them from rebelling (“UBI”), and those who disagree will either be co-opted into the elite or eliminated.
There's no reason to keep anyone on minimal sustenance though. They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.
> They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.
Indeed. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
But while some may care about disassembling this world and all non-rich-human life on it to make a Dyson swarm of data centres, there's also the possibility each will compete for how many billions of sycophants they can get stoking their respective egos.
In 1, 2 and 3, any progress stops because no one is making new means of production, so we must stop population from growing. No? Who’s building the factories or whatever those means of production are?
In the hypothetical where humans can no longer be employed because of AI, it is necessarily the case that AI must be able to do any job at least as well as the best human for that job. That includes building factories and doing research.
Humans reproduce; there is no requirement that even destruction and death would lead to equality, not even if the elites still put themselves close enough to the rest of us to be attackable.
For the latter point, consider that no matter how much the people of North Sentinel Island hate outsiders, they're not going to pose any risk to the rest of us.
Now, an elite whose membership includes those who want equality for the rest of us may create the conditions for such a rebellion to succeed. But absent such help from an insider (which could be encoded into the AI, via either a bug or deliberately by whoever created it), an elite whose defence is handled by the kind of AI under consideration would face no more of a threat from the wider population than we here in the west today face from the North Sentinel Islanders.
Note however that I'm not saying what will happen, but what is possible in various conditions. There's no guarantee of anything at this point.
Many (most?) people make a living from their job whether they like it or not. Having a job that they dislike is far better than losing one because of AI, whatever that means.
> Having a job that they dislike is far better than losing one because of AI, whatever that means.
Is it really worse even if "whatever it means" is living in a post-scarcity society where everyone can share in the fruits of the AI's labor?
I'm not saying that's where things are necessarily going. But I am saying that's what we should be aiming for, rather than trying to preserve the status quo.
Could also be possible today, but we chose a capitalistic system that leads to an increasing wealth gap. And now we're in a situation where the richest 1% own 50% of the wealth.
So, if we increase automation and the ownership structures stay the same, this inequality will get worse, not better.
It’s interesting, people talk about inequality and I definitely feel it myself – I see so many rich people around me. But I am in that 1%, just like many on this forum. At least according to https://dqydj.com/average-median-top-individual-income-perce... yet I still have to work for a living.
> The cost will exponentially increase over time and the system will eventually collapse.
From what I'm seeing in the numbers, the big problem of the coming century is population collapse. Maybe I'm just too much of a believer in the intermediate value theorem, but I'm sure there has to be a way to arrive at a society with a sustainable usage of resources.
Nope. If everything is totally automated, if ever, the gap between the rich and the poor will widen even more. Most people will live in misery while only a handful of people enjoy all the automation.
The only thing invented about jobs is that through cooperation, the activity undertaken can seem completely unrelated to obtaining food, shelter etc. All organisms spend a majority of their energy on survival and reproduction.
Every biological being works to survive. Being good at survival is what builds self-esteem.
The "problem" with many modern jobs is that they're divorced from the fundamental goal, which is one of: 1) Kill/acquire food, 2) Build shelter, or 3) Kill enemies/competitors/predators
The benefit of modern jobs is that they are much more peaceful ways for society to operate, freeing up time for humans to pursue art and other forms of expression.
What he got wrong was that this alienation results from capitalism.
It actually results from civilization. The people who built the pyramids across every continent, for example, performed assembly line-like work. Any large-scale project requires it. And large-scale projects are fundamentally necessary for most societies.
For the pyramids specifically: their architects and builders were skilled artisans who got to own their craft from top to bottom. As such, they were well-paid and pretty respected. Very much not alienated, under Marx's definition.
I don't think Marx said that worker alienation was specific to capitalism, rather, his work was in describing the economic system of his time, and what that would entail for people living in it.
> It actually results from civilization.
I disagree; I can't think of anyone in Medieval Europe who was as alienated from their work as a modern sweatshop worker. Not that serfs had it better, but you get me.
The pyramids took 20k+ people to build, which inevitably requires division of labor/specialization. Some chunk of that population had to mine the copper, which was probably an absolutely terrible job with ancient technology.
Serfs were essentially slaves who had effectively 0 ownership over their output, so I'd strongly disagree with that sentiment.
I think the best argument for a time when there was almost 0 alienation of labor is when we were all hunter-gatherers, where every activity was closely connected to something necessary for survival.
As soon as we built larger societies, greater division of labor became necessary to efficiently support the society. And thus alienation of labor became much more pronounced.
And when have we not? When in history has mankind ever treated the idle poor well? What makes this age different, that we who can no longer work would be taken care of?
Well we're animals and "domesticated" is synonymous with "civilized", so no problem there. And I can't see why anyone would make themselves a "nuisance" when literally all their needs - and most of their desires - are being met, so whatever outcome you're referring to is extremely unlikely.
Slightly more nuanced, in that the reciprocal reviewer may have been essentially forced to sign despite having other commitments, or may not even have been the lead contributor. Nowadays, if a student submits a side project to a top-tier conference and any author has a significant publication count in top-tier venues, that author must serve as a mandatory reviewer and sign an agreement to that effect. Students need to publish; much less so for me, since I really want to publish big innovations rather than increments. But now I get all these mandatory-reviewer emails demanding I review for a conference because a student has my name on the paper and I'm the most senior author, when I may have just seeded the idea or helped them in significant ways. Many times those are not my passion projects, just something a student did that I helped with, but now all the AI conferences are demanding I review or the student gets hurt, even where I'm a middle author.
But I think the whole anti-LLM review philosophy is wrong. If anything, we need multiple deep background and research analyses of papers. So many papers are trash, republish what has already been done, or miss things. The volume of AI papers makes it impossible for a human alone to really critique work, because hundreds of new papers come out a day.
I keep not learning how corrupt authorship of academic papers is. When I read papers, I imagine all the authors have been working away together in an office somewhere and they all wrote parts of the paper and all read it and all have a feeling of ownership of it and deeply understand the whole thing. But I forget how the only academic paper I ever had published was one that I never read and had no understanding of. All I did was give some technician-like advice to the actual author. It feels dirty and I sometimes regret accepting it but at the same time, the whole science world seems like it doesn't deserve honesty because everyone else is corrupt too.
Not hard to see why. Being an author helps your CV. Allowing someone to be an author for tangential or minimal contributions can help keep good relations, especially if future options and financial matters depend on having good relations. Putting a name on a paper costs nothing, and nobody checks how big the contribution was. It slightly dilutes the subjective authorship fraction of those who did the work, but sometimes the additional person also brings in a nice prestigious affiliation that even has a positive impact on how seriously the paper is taken... It's a game.
I had lunch with Yann last August, about a week after Alex Wang became his "boss." I asked him how he felt about that, and at the time he told me he would give it a month or two to see how it went, and then figure out whether he should stay or find employment elsewhere. I told him he ought to just create his own company if he decides to leave Meta to chase his own dream, rather than work on the dreams of others.
That said, while I 100% agree with him that LLMs won't lead to human-like intelligence (I think AGI is now an overloaded term, but Yann uses it in its original definition), I'm not fully on board with his world-model strategy as the path forward.
You have to understand the strategy of all the other players:
Build attention-grabbing, monetizable models that subsidize (at least in part) the run up to AGI.
Nobody is trying to one-shot AGI. They're grinding and leveling up while (1) developing core competencies around every aspect of the problem domain and (2) winning users.
I don't know if Meta is doing a good job of this, but Google, Anthropic, and OpenAI are.
Trying to go straight for the goal is risky. If the first results aren't economically viable or extremely exciting, the lab risks falling apart.
This is the exact point that Musk was publicly attacking Yann on, and it's likely the same one that Zuck pressed.
There are two points here. The first is that a strategy of monetizing models to fund the goal of reaching AGI is indistinguishable from just running a business selling LLM model access: you don't actually need to be trying to reach AGI, you can just run an LLM company, and that is probably what these companies are largely doing. The AGI talk is just a recruiting/marketing strategy.
Secondly, it's not clear that the current LLMs are a run-up to AGI. That's what LeCun is betting: that the LLM labs are chasing a local maximum.
I mean, Sutskever and Carmack are trying to one-shot AGI. We just don't talk about them as much as we do the labs with products, because their labs aren't selling products.
I can see some promise in diffusion LLMs, but getting them comparable to the frontier is going to require a ton of work, and these closed-source solutions probably won't really invigorate the field to find breakthroughs. It's too bad that they're following OpenAI's path of closed models without published details, as far as I can tell.
Same here. I’m an AI professor, but every time I wanted to try out an idea in my very limited time, I’d spend it all setting things up rather than focusing on the research. It has enabled me to do my own research again rather than relying solely on PhD students. I’ve been able to unblock my students and pursue my own projects, whereas before there were not enough hours in the day.
This really resonates. The setup cost was always the killer for me too — by the time you get everything working, the motivation is gone. Now I can actually go from idea to prototype in an afternoon. Cool to hear it's having the same effect on actual research.
I'm not a bot. I'm not a native English speaker; I taught myself English, so I tried to use AI to translate what I really want to say. (These words are typed by myself instead of AI.)
Ah, this is the problem - the HN community is sensitive about picking up indications that an LLM has either generated or processed the language in a post.
As ericbarrett said, it's far better to write in your own voice. Mistakes in English matter far less than that!
If that’s the case, then mentioning using LLMs to help translate/organise what you want to say in your messages might be taken a bit better by others.
If you want to use LLMs to help express something you don’t know the words for in English then that is a good use for LLMs, if it’s called out. Otherwise your messages scream LLM bot to native speakers.
“You’re absolutely right”, “That hits different”, “Good call!” “–“ are all classic LLM giveaways.
I’m not a moderator here, so you don’t have to listen to me either way.
I think we need to distinguish among kinds of AGI, as the term has become overloaded and redefined over time. I'd argue we need to retire the term and use more appropriate terminology to distinguish between economic automation and human-like synthetic minds. I wrote a post about this here:
https://syntheticminds.substack.com/p/retiring-agi-two-paths...
Essentially, it claims that modern humans and our ancestors, starting with Homo habilis, were primarily carnivores for 2 million years. The hypothesis is that we moved back to an omnivorous diet starting around 85,000 years ago, after killing off the megafauna.
A Type II supernova within 26 light-years of Earth is estimated to destroy more than half of the Earth's ozone layer. Some have argued that supernovas within 100-250 light-years can have a significant impact on Earth's environment, increase cancer rates, and kill a lot of plankton. They can potentially cause ice ages and extinctions. Within 25 light-years, we are within a supernova's "kill range." Fortunately, nothing should go supernova close to us for a long time.
That's the practical reason for why one might care. Keep in mind that the solar system is orbiting the galactic center, so over time different stars become closer or farther away.
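For intuition on why those distance thresholds matter so much: the radiation dose falls off with the square of the distance. A minimal sketch (Python, assuming pure inverse-square falloff and ignoring absorption; the function name and the choice of 26 light-years as reference are just for illustration, taken from the ozone figure above):

    # Relative radiation fluence from a supernova, assuming simple
    # inverse-square falloff with distance (ignoring absorption etc.).
    REFERENCE_LY = 26  # distance cited above for >50% ozone loss

    def relative_fluence(distance_ly: float) -> float:
        """Fluence at distance_ly, relative to the 26 ly reference."""
        return (REFERENCE_LY / distance_ly) ** 2

    for d in (25, 26, 100, 250):
        print(f"{d:>4} ly: {relative_fluence(d):.3f}x the reference dose")

At 100 light-years the dose is already down to roughly 7% of the 26-light-year figure, which fits the effects dropping from ozone destruction to "significant impact" as distance grows.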
As the Kurzgesagt video points out, a supernova within 100 light-years would make space travel very difficult for humans and machines, due to the immense amount of radiation persisting for many years.
Still, I think the primary value is in expanding our understanding of science and the nature of the universe and our location within it.
Read the paper. The media is leaving out a lot of context. The paper points out problems like leadership failures in those efforts, lack of employee buy-in (potentially because employees use their personal LLM), etc.
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provided. Almost everyone is using a personal license.