Hacker News | nlawalker's comments

From the cart corrals. Do you leave trash on the ground because they have people paid to empty the trash cans?


I don't leave trash on the ground because littering is against the law.


Is the law the only thing that stops you from littering?


So you only do things if there is a specific law that compels you? There is no moral compass?


If laws were loosened up and littering was no longer punishable, would you begin littering? Is hypothetical legal recourse the main thing that would motivate you not to act in a certain way?


I'd love to see anyone get charged for littering in a fast food establishment.


From last month, an article in the Atlantic about how candy makers are experimenting with new non-chocolate ingredients and flavors in existing brands:

https://www.theatlantic.com/health/2025/10/chocolate-shortag... (gift link)


It all depends on what you do with it - I see the first prompt just as a slightly different starting place than the second one.


If this is interesting to you, I recommend the book Game Over by David Sheff.


Unfortunately, the legacy of David Sheff's Game Over is tarnished. He took shortcuts and got some important details wrong. A historian should find additional sources for the statements made in the book.


Still much more accurate than some other books like Console Wars.


Would you say _Game Over_ is the best example of a book in this space/topic, despite the flaws?


Nope. I'd go with anything from MIT. Racing The Beam and I Am Error were transcendental for me. Some stuff from Boss Fight books is absurdly good. Final Fantasy V immediately comes to mind.


Totally agree - interesting info but nothing of practical use, especially because white spots can be mold.

See https://www.eatortoss.com/how-to-tell-if-white-stuff-on-chee..., https://www.eatortoss.com/aged-cheddar-with-a-crusty-white-s....


To my untrained eye: not sure about the first one, but the second one is obviously good. Correct?


It's not already on my computer, and changing that will cost me money and time and is something that Microsoft promised we wouldn't have to do anymore when they sold us on 10.



The only problem with IEnumerable's open-endedness is that it's so open-ended. It makes no implicit guarantees about order, finiteness, side effects, speed/efficiency, idempotency etc. It's easy to assume those things until you accidentally find a situation where one or more are not in your favor.
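Python generators share the same open-endedness as IEnumerable, so here's a small Python sketch (names hypothetical) of two of those assumptions going wrong: finiteness and repeatability.

```python
import itertools

def sensor_readings():
    """An open-ended iterable: possibly infinite, single-pass,
    and (in a real system) side-effecting on each yield."""
    n = 0
    while True:
        yield n  # imagine each yield hits hardware or a network
        n += 1

readings = sensor_readings()
first_five = list(itertools.islice(readings, 5))  # safe: bounded take
# list(readings) would never return: the sequence is infinite

nums = iter([1, 2, 3])
total = sum(nums)      # consumes the iterator...
leftover = list(nums)  # ...so a second traversal sees nothing
```

The same caller-side discipline applies in C#: materialize once (e.g. `ToList()`) before enumerating twice, and never enumerate unboundedly unless you know the sequence is finite.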


Where did "can't you just turn it off?" in the title come from? It doesn't appear anywhere in the actual title or the article, and I don't think it really aligns with its main assertions.


HN's retitling now appears more accurate.


It shows up at https://boydkane.com under the link "Why your boss isn't worried about advanced AI". Must be some kind of sub-heading, but not part of the actual article / blog post.

Presumably it's a phrase you might hear from a boss who sees AI as similar to (and as benign/known/deterministic as) most other software, per TFA


Ah, thanks for that!

>Presumably it's a phrase you might hear from a boss who sees AI as similar to (and as benign/known/deterministic as) most other software, per TFA

Yeah I get that, but I think that given the content of the article, "can't you just fix the code?" or the like would have been a better fit.


Yeah this is my mistake, the phrase "can't you just turn it off" was in several drafts but got edited out, and I missed that during the publishing of the essay.


In my experience it’s usually the engineers that aren’t worried about AI, because they see the limitations clearly every time they use it. It’s pretty obvious that the whole thing is severely overhyped and unreliable.

Your boss (or more likely, your boss’s boss’s boss) is the one deeply worried about it. Though mostly worried about being left behind by their competitors and how their company’s use of AI (or lack thereof) looks to shareholders.


It depends on where you are in the chain, and what kind of engineering you’re doing. I think a lot of engineers are so focused on the logistics, capabilities, and flaws, and so used to being indispensable, that they don’t viscerally get that they’re standing on the wrong side of the tree branch they’re sawing through. AI does not need to replace a single engineer before increased productivity means we’ll have way too many engineers, which means jobs are impossible to get, and the salaries are in the shitter. Middle managers are terrified because they know they’re not long for this (career) world. Upper managers are having 3 champagne lunches because they see big bonuses on the far side of skyrocketing profits and cratering payroll costs.


It's a poor choice of phrase if the purpose is to illustrate a false equivalence. It applies to AI both as much (you can kill a process or stop a machine just the same regardless of whether it's running an LLM) and as little (you can't "turn off" Facebook any more than you can "turn off" ChatGPT) as it does to any other kind of software.


It's a sci-fi thing, think of it along the lines of "What do you mean Skynet has gone rogue? Can't you just turn it off?"

(I think something along these lines was actually in the Terminator 3 movie, the one where Skynet goes live for the first time).

Agreed though, no relation to the actual post.


This sci-fi thing goes as far back as the 1983 movie WarGames, where they wanted to pull the plug on a rogue computer, but there was a reason you couldn’t do that:

McKittrick: General, the machine has locked us out. It's sending random numbers to the silos.

Pat Healy: Codes. To launch the missiles.

General Beringer: Just unplug the goddamn thing! Jesus Christ!

McKittrick: That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.


Further than that, even - this trope appears in Colossus: The Forbin Project, released in 1970, where the rogue computer is buried underground with its own nuclear reactor, so it can't be powered off.


In real life it won’t be that the computer prevents you from turning it off. It’ll be that the computer is guarded by cultists who think it’s a god, and by unstoppable market forces that require it to keep running.


When AI ends up running everything essential to survival and society, it’ll be preposterous to even suggest pulling the plug just because it does something bad.

Can you imagine the chaos of completely turning off GPS or Gmail today? Now imagine pulling the plug on something in the near future that controls all electric power distribution, banking communications, and Internet routing.


This is the case with capitalism today. I don't like where he took the philosophy, but Nick Land did have an insight that all the worst things we believe about AI (e.g. paperclip optimizing etc) are capitalism in a nutshell.


Just listen to what these CEOs say on the topic: they basically admit something terrible is being built, but that the most important thing is that they are the ones to do it first.


Turning AI off comes up a lot in existential risk discussions so I was surprised the article isn't about that.


I'm surprised to not see any mention of indirect TPMS anywhere in these comments.

https://en.wikipedia.org/wiki/Tire-pressure_monitoring_syste...

It has its own shortcomings, but in my opinion they're all relatively minor and it does the job of warning the driver of potential pressure problems without wireless or in-tire sensors that require replacement.

EDIT: never mind, I wasn't seeing "indirect" in the comments, but now that I look I do see "ABS", which is what iTPMS depends on for determining wheel speed.

