AI isn't automation. It's thinking. It automates the brain out of human jobs.
You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're, say, a surgeon or a plumber, you're in a better place.
Why this example? One of the things automation has done is reduce and replace stevedores, the shipping equivalent of shelf stackers.
Amazon warehouses are heavily automated, almost to the point of self-stacking shelves. At least, that's according to the various videos I've seen; I haven't actually worked there myself. Yet. There's time.
> AI isn't automation. It's thinking. It automates the brain out of human jobs.
> You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're, say, a surgeon or a plumber, you're in a better place.
Right up until the AI is good enough to control the robot that can do that job. Which may or may not be humanoid. (Plus side: look how long self-driving cars are taking, and how often people think a personal anecdote of "works for me" is a valid response to "doesn't work for me".)
Even before the AI gets that good, a nice boring remote-controlled android doing whatever manual labour would let the "controller" position be outsourced to a human anywhere on the planet. Mental image: all the unemployed Americans protesting outside Tesla's factories when they realise the Optimus robots inside are remotely controlled by people in 3rd world countries getting paid $5/day.
Yes, AI is automation. It automates the implementation. It doesn't (yet?) automate the hard parts around figuring out what work needs to be done and how to do it.
The sad thing is that for many software devs, the implementation is the fun bit.
Except it isn't thinking. It is applying a model of statistical likelihood. The real issue is that it's been sold as thinking, and laypeople believe that it's thinking, so it is very likely that jobs will be eliminated before it's feasible to replace them.
People who actually care about the quality of their output are a dying breed, and that death is being accelerated by this machine that produces somewhat plausible-looking output, because we're optimizing for "plausible-looking" and not "correct".
That observation is only useful if you can point at a capability that humans have that we haven't automated.
Hunter-gatherers were replaced by the technology of agriculture. Humans were still needed to provide the power to plow the earth and reap the crops.
Human power was then replaced by work animals pulling plows, but only humans could make decisions about when to harvest.
Jump forward a good long time,
Computers can run algorithms to indicate when best to harvest. Humans are still uniquely flexible and creative in their ability to deal with unanticipated issues.
AI is intended to make "flexible and creative" no longer a bastion of human uniqueness. What's left? The only obvious one I can think of is accountability: as long as computers aren't seen as people, you need someone to be responsible for the fully automated farm.
'Because thing X happened in the past, it is guaranteed to happen in the future, and we should bet society on it instead of trying to, you know, plan for the future. Magic jobs will just appear, trust me.'
Stallman’s primary output is neither technical nor legal but moral. He condemns any entity that doesn’t grant its users four “freedoms.” And the primary disease that the GPL spreads is not arbitrary legal restrictions but the moral code (altruism) that sanctions it. The fact that it’s not meant to be forced upon developers is irrelevant: it preserves the philosophy that could be weaponized in the future, possibly in an altered form.
But yeah, altruism is typically shared by both anarchists and communists. The only remaining question seems to be: who better embodies the ideal?
Well, they tout altruism, but they don't actually practice it.
It's all fake; if they were true altruists, they wouldn't need to argue all day about people needing to share their stuff.
In altruism, there is the concept of not personally benefiting or not requiring reciprocity.
But the demagogues endlessly promoting communist ideologies definitely benefit from it by appearing morally superior and getting resources for no valuable work in exchange.
The GPL shows that, actually, they are not really OK with the no-reciprocity part.
There are very few truly altruistic individuals, and their defining characteristic is that they just do the good stuff instead of endlessly talking about it for brownie points.
Basically the complete reverse of communists (and everyone far left in general).
I believe that while centralized computing excels at specific tasks like consumer storage, it cannot compete with the unmatched diversity and unique intrinsic benefits of personal computing. Kindle cannot replace all e-readers. Even within Apple’s closed ecosystem, iPadOS cannot replace macOS. These are not preferences but constraints of reality.
The goal shouldn’t be to eliminate one side or the other, but to bridge the gap separating them. Let vscode.dev handle the most common cases, but preserve vscode.exe for the uncommon yet critical ones.
Unbounded increases in complexity lead to diminishing returns on energy investment and increased system fragility. Both contribute to an increased likelihood of collapse, because solutions to old problems generate new problems faster than new solutions can be created: energy that should be dedicated to new solutions is instead needed to maintain the layers of complexity generated by previous solutions.
I’ve always thought it’s good practice for a system to declare its limits upfront. That feels more honest than promising “infinity” but then failing to scale in practice. Prematurely designing for infinity can also cause over-engineering, like using quicksort on an array of four elements.
Scale isn’t a binary choice between “off” and “infinity.” It’s a continuum we navigate with small, deliberate, and often painful steps—not a single, massive, upfront investment.
That said, I agree the ZOI is a valuable guideline for abstraction, though less so for implementation.
For your "quicksort of 4 elements" example, I would note that the algorithm doesn't care - it still works - and the choice of when to switch to insertion sort is a mere matter of tuning thresholds.
I think the land of “freedom” knows that “Those who would give up Essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”
What’s wrong with zero values? They free the developer from guessing about hidden allocations. IMO this benefit outweighs cast riddles by orders of magnitude.
I'm not nearly as angsty as the parent on this subject, but they don't really free the developer from guessing about hidden allocations--Go's allocations are still very much hidden even if developers can reasonably guess about the behavior of the escape analyzer. I think it would have been better if Go _required_ explicit assignment of all variables. That said, despite not being a big fan of this particular decision, Go is still by far the most productive language I've used--it strikes an excellent balance between type safety and productivity even if I think some things could be improved.
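To make the "hidden allocations" point concrete, here is a small Go sketch (the type and function names are invented for illustration): both functions declare the same zero-valued local, but whether it lands on the stack or the heap is decided by escape analysis, not by anything visible in the source.

```go
package main

import "fmt"

type buffer struct {
	data [64]byte
}

// stackLocal uses a zero-valued buffer only locally; the compiler
// can typically keep it on the stack.
func stackLocal() int {
	var b buffer // zero value, no visible allocation either way
	b.data[0] = 1
	return int(b.data[0])
}

// escapes returns a pointer to its zero-valued buffer, so the value
// must outlive the call and is moved to the heap.
func escapes() *buffer {
	var b buffer // same syntax, different allocation behaviour
	return &b
}

func main() {
	fmt.Println(stackLocal(), escapes() != nil)
	// Running `go build -gcflags=-m` prints the compiler's escape-analysis
	// decisions; nothing in the source above distinguishes the two cases.
}
```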
Zero values prioritize implementation convenience (we always have a zero value, so we don't need to handle any cases where we don't have a value; we just say those are zero) over application convenience (maybe my type should not have a default, and the situation where it has no value is an error).
Take either of Rust's library types named Ordering: core::cmp::Ordering (is five less than eight, or greater?) or core::sync::atomic::Ordering (even if it's on another core, this decrement definitely happens before that check). Neither of these implements Default, because even though internally they both use the zero bit pattern to mean something, that's a specific value, not a default.
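Translated into Go terms, here is a hypothetical sketch (the Ordering type below is invented for illustration; it is not Rust's type and not part of Go's standard library): because every Go type automatically has a usable zero value, an Ordering-like enum silently "defaults" to whichever variant happens to be zero, even though that zero is a specific answer rather than the absence of one.

```go
package main

import "fmt"

// Ordering is a hypothetical Go analogue of Rust's cmp::Ordering.
// Defined the usual iota way, its zero value is not "no answer yet": it is Less.
type Ordering int

const (
	Less    Ordering = iota // 0: also the value of every uninitialized Ordering
	Equal                   // 1
	Greater                 // 2
)

type comparison struct {
	result Ordering // forget to set this and it silently reads as Less
}

func main() {
	var c comparison              // zero value: looks like a completed comparison
	fmt.Println(c.result == Less) // prints true, though nothing was compared
}
```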