If laws were loosened and littering were no longer punishable, would you begin littering? Are hypothetical legal consequences the main thing that would motivate you not to act in a certain way?
From last month, an article in The Atlantic about how candy makers are experimenting with new non-chocolate ingredients and flavors in existing brands:
Unfortunately, the legacy of David Sheff's Game Over is tarnished. He took shortcuts and got some important details wrong. A historian should find additional sources for the statements made in the book.
Nope. I'd go with anything from MIT Press. Racing the Beam and I Am Error were transcendental for me. Some stuff from Boss Fight Books is absurdly good; Final Fantasy V immediately comes to mind.
It's not already on my computer, and changing that will cost me money and time, which is something Microsoft promised we wouldn't have to do anymore when they sold us on Windows 10.
The only problem with IEnumerable's open-endedness is that it's so open-ended. It makes no implicit guarantees about order, finiteness, side effects, speed/efficiency, idempotency, etc. It's easy to assume those things until you accidentally find a situation where one or more of them aren't in your favor.
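For example (a contrived sketch; ReadSensor and its "expensive" work are made up), a lazy iterator re-runs its body on every enumeration, so two innocent-looking LINQ calls double your side effects and your cost:

    // ReadSensor is a made-up lazy sequence: nothing runs until it's
    // enumerated, and everything runs again on each enumeration.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Program
    {
        static IEnumerable<int> ReadSensor()
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine($"expensive read #{i}"); // side effect repeats per pass
                yield return i;
            }
        }

        static void Main()
        {
            IEnumerable<int> data = ReadSensor();

            // Looks like two cheap passes over the same data; it's actually
            // two full executions of ReadSensor.
            Console.WriteLine(data.Count()); // 3
            Console.WriteLine(data.Sum());   // 3

            // Materializing once pins down order, finiteness, and cost.
            var snapshot = data.ToList();
            Console.WriteLine($"{snapshot.Count} {snapshot.Sum()}");
        }
    }

Calling ToList() once is the usual way to buy back those guarantees, at the price of memory and of assuming the sequence is finite in the first place.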
Where did "can't you just turn it off?" in the title come from? It doesn't appear anywhere in the actual title or the article, and I don't think it really aligns with its main assertions.
It shows up at https://boydkane.com under the link "Why your boss isn't worried about advanced AI". Must be some kind of sub-heading, but not part of the actual article / blog post.
Presumably it's a phrase you might hear from a boss who sees AI as similar to (and as benign/known/deterministic as) most other software, per TFA
Yeah, this is my mistake: the phrase "can't you just turn it off" was in several drafts but got edited out, and I missed that when publishing the essay.
In my experience it’s usually the engineers who aren’t worried about AI, because they see the limitations clearly every time they use it. It’s pretty obvious the whole thing is severely overhyped and unreliable.
Your boss (or more likely, your boss’s boss’s boss) is the one deeply worried about it. Though mostly worried about being left behind by their competitors and how their company’s use of AI (or lack thereof) looks to shareholders.
It depends on where you are in the chain, and what kind of engineering you’re doing. I think a lot of engineers are so focused on the logistics, capabilities, and flaws, and so used to being indispensable, that they don’t viscerally get that they’re standing on the wrong side of the tree branch they’re sawing through. AI doesn’t need to replace a single engineer: increased productivity alone means we’ll have way too many engineers, which means jobs are impossible to get and salaries are in the shitter. Middle managers are terrified because they know they’re not long for this (career) world. Upper managers are having three-champagne lunches because they see big bonuses on the far side of skyrocketing profits and cratering payroll costs.
It's a poor choice of phrase if the purpose is to illustrate a false equivalence. It applies to AI both as much (you can kill a process or stop a machine just the same regardless of whether it's running an LLM) and as little (you can't "turn off" Facebook any more than you can "turn off" ChatGPT) as it does to any other kind of software.
This sci-fi thing goes as far back as the 1983 movie WarGames, where they wanted to pull the plug on a rogue computer, but there was a reason you couldn’t do that:
McKittrick: General, the machine has locked us out. It's sending random numbers to the silos.
Pat Healy: Codes. To launch the missiles.
General Beringer: Just unplug the goddamn thing! Jesus Christ!
McKittrick: That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.
Further back than that, even: this trope appears in Colossus: The Forbin Project, released in 1970, where the rogue computer is buried underground with its own nuclear reactor so it can't be powered off.
In real life it won’t be that the computer prevents you from turning it off. It’ll be that the computer is guarded by cultists who think it’s a god, and by unstoppable market forces that require it to keep running.
When AI ends up running everything essential to survival and society, it’ll be preposterous to even suggest pulling the plug just because it does something bad.
Can you imagine the chaos of completely turning off GPS or Gmail today? Now imagine pulling the plug on something in the near future that controls all electric power distribution, banking communications, and Internet routing.
This is the case with capitalism today. I don't like where he took the philosophy, but Nick Land did have an insight: all the worst things we believe about AI (e.g. the paperclip maximizer) are capitalism in a nutshell.
Just listen to what these CEOs say on the topic: they basically admit something terrible is being built, but that the most important thing is that they are the ones to build it first.
It has its own shortcomings, but in my opinion they're all relatively minor and it does the job of warning the driver of potential pressure problems without wireless or in-tire sensors that require replacement.
EDIT: never mind, I wasn't seeing "indirect" in the comments, but now that I look I do see "ABS", which is what iTPMS depends on for determining wheel speed.
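To sketch the idea (a toy example, not any real vendor's algorithm; CheckWheels, the 1.5% threshold, and the numbers are all invented): an underinflated tire has a slightly smaller rolling radius, so its ABS wheel-speed signal runs faster than the other three, and that deviation is what indirect TPMS flags:

    // Toy illustration of the iTPMS principle: compare the four ABS
    // wheel-speed readings and flag a wheel spinning suspiciously fast.
    using System;
    using System.Linq;

    class ItpmsSketch
    {
        // wheelSpeeds: pulses/sec from the four ABS sensors, fixed order.
        // Returns the index of a suspiciously fast wheel, or -1 if none.
        static int CheckWheels(double[] wheelSpeeds, double thresholdPct = 1.5)
        {
            double avg = wheelSpeeds.Average();
            for (int i = 0; i < wheelSpeeds.Length; i++)
            {
                double deviationPct = (wheelSpeeds[i] - avg) / avg * 100.0;
                if (deviationPct > thresholdPct)
                    return i; // smaller radius -> faster rotation -> likely low pressure
            }
            return -1;
        }

        static void Main()
        {
            // Hypothetical straight-line cruise; front-left tire ~2% fast.
            double[] speeds = { 103.0, 100.1, 99.9, 100.0 };
            int flagged = CheckWheels(speeds);
            Console.WriteLine(flagged >= 0
                ? $"Check pressure on wheel {flagged}"
                : "All wheels within tolerance");
        }
    }

Real systems also have to filter out cornering, acceleration, and wheel slip before trusting a deviation like this, which is part of why iTPMS warnings take a while to latch.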