Hacker News | newsoftheday's comments

Wow... that may seriously change my long-standing anti-Mac disdain into pro-Mac advocacy. Very interesting; even Gemini confirmed what you're saying.

My theory is that, since I'm going to do things like banking in my browser, I want one that has a lot of skin in the game. Chrome being backed by Google means there are trillions of dollars on the line should they ever do anything truly evil. Though this sneaky 4 GB download comes close.

Google is not liable for your banking.

There's no skin in the game if they do not think they'll be meaningfully punished by government or consumers for their wrongdoings.

And they have trillions riding on milking you for all your data and ad impressions.

Which they seem to think they'll get, regardless of the quality of their web browser. Most people are entrapped by Android anywho.

Edge and Chrome could both be eliminated tomorrow and those trillions would be safe.

You’re the product, not the browser.


> she dropped out of the primary when her impending loss became obvious

That is how voting and elections are supposed to work, not by saying people of a certain age, color, race, or creed can't hold office because young people today are bigoted and feel they deserve more than every generation that preceded them throughout history.


I'd say party leadership endorsing a 78 year old candidate for US Senate is not how voting and elections ought to work - I'd say it's a pretty big problem. You're welcome to agree or disagree with that, but more relevant to the context of this comment thread, it is not a problem that would be solved by congressional term limits.

> young people today are bigoted and feel they deserve more than all generations that ever preceded them throughout all of history.

Each generation doing materially better than the previous one used to be a widely agreed upon goal in the United States. Perhaps not really relevant to this comment thread.


She was the logical choice and a good candidate. Government is not a job anyone can step into and be successful at. Look at the difference in effectiveness between career politicians and populist outsiders: it's like hundreds of meaningful bills passed compared to single digits.

> old folks have much more money

I retired last year as a lead software engineer; I never got into management as a civilian and did software for over three decades. My salary was lower than it was as a mid-level engineer 20 years ago.

Which old people have "much more money"? Maybe they can send some my way.

I thank God for Social Security and hope and pray it remains well into the future, even for the 99% of the people who are saying bigoted, dumb shit on this page.


Pensioners in most western countries are the second most affluent cohort.

This is what I've done after spending some time looking into it; this is for Linux Desktop:

Delete Chrome's silently downloaded 4 GB AI model file and disable its AI features.

In Chrome, go to: chrome://flags

  Search for and Disable these:

  Enables optimization guide on device

  Prompt API for Gemini Nano

  AI Mode

Open DevTools (F12 or Ctrl+Shift+I).

  Click the Settings (gear icon).

  Go to AI Innovations and uncheck Enable AI assistance.

For Linux, in a bash shell, this should prevent Chrome from trying to download the file again, because the root user, instead of my user, will own the file/directory.

  sudo rm -rf ~/.config/google-chrome/OptGuideOnDeviceModel

  sudo rm -rf ~/.config/google-chrome/Default/OptGuideOnDeviceModel

  sudo touch ~/.config/google-chrome/OptGuideOnDeviceModel

  sudo chmod 400 ~/.config/google-chrome/OptGuideOnDeviceModel

  sudo touch ~/.config/google-chrome/Default/OptGuideOnDeviceModel

  sudo chmod 400 ~/.config/google-chrome/Default/OptGuideOnDeviceModel

In case they already existed from doing the above previously, make sure the root user owns them.

  sudo chown root:root ~/.config/google-chrome/OptGuideOnDeviceModel

  sudo chown root:root ~/.config/google-chrome/Default/OptGuideOnDeviceModel

List them to verify.

  ls -l ~/.config/google-chrome/OptGuideOnDeviceModel

  ls -l ~/.config/google-chrome/Default/OptGuideOnDeviceModel
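To double-check that the lockdown stuck, here's a small sketch (the `is_locked` helper is my own name, nothing from Chrome, and it assumes GNU `stat` on Linux):

```shell
#!/usr/bin/env bash
# Sketch: verify the placeholder files are root-owned and read-only.
# "is_locked" is a made-up helper; assumes GNU stat on Linux.
is_locked() {
  local file="$1" want_owner="$2"
  # %U = owner name, %a = octal mode; expect mode 400
  [ "$(stat -c '%U %a' "$file" 2>/dev/null)" = "$want_owner 400" ]
}

for f in ~/.config/google-chrome/OptGuideOnDeviceModel \
         ~/.config/google-chrome/Default/OptGuideOnDeviceModel; do
  if is_locked "$f" root; then
    echo "OK: $f is locked down"
  else
    echo "WARN: $f is missing or not root-owned/read-only"
  fi
done
```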

That's a lot of steps compared to using a browser that doesn't treat your computer as their property.

I just learned from this Reddit post about the On-Device AI setting:

https://www.reddit.com/r/chrome/comments/1t536x6/psa_chrome_...

"On-device AI" can be disabled. At least, it is in Chrome on Linux Desktop.


DevTools uses a server-side model, and only after you opt in with explicit consent.

Or you accidentally trigger it because a key binding you've used for 15 years, upon hitting an unexpected consent screen, hits the consent button.

FWIW, first two sections worked for Chrome on Windows.

They do that anyway, so it's in addition to that which is the parent's point.

Did you notice when your streaming files went from 1.5 GB for a movie to 6 GB for a movie? I didn't. Almost no one does. And no one writes blog posts like this about the data usage.

The article says to regularly run prune, how regularly? Currently I run the following once per day from cron:

    docker system prune -a -f
    docker volume prune -a -f

This would depend entirely on how much churn your system is doing on containers/volumes/images. Once a day sounds really often for most situations.

"Regularly" = when you're running out of space because of a bunch of built up old stuff.
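That "prune when you're running out of space" idea can be sketched roughly like this (the 85% threshold, the mount point, and the `used_pct`/`should_prune` helper names are my assumptions, not from the thread):

```shell
#!/usr/bin/env bash
# Sketch: only prune when the disk is actually under pressure.
# THRESHOLD, the mount point, and the helper names are assumptions.
THRESHOLD=85  # percent used before we bother pruning

used_pct() {
  # Percent used on the filesystem holding $1 (GNU df), or 0 on error
  local pct
  pct=$(df --output=pcent "$1" 2>/dev/null | tail -1 | tr -dc '0-9')
  echo "${pct:-0}"
}

should_prune() {
  # Pure decision helper so the logic is testable: prints yes or no
  local pct="$1" limit="$2"
  if [ "$pct" -ge "$limit" ]; then echo yes; else echo no; fi
}

if [ "$(should_prune "$(used_pct /var/lib/docker)" "$THRESHOLD")" = yes ] \
   && command -v docker >/dev/null 2>&1; then
  docker system prune -f || true
  docker volume prune -f || true
fi
```

Run from cron as often as you like; it's a no-op until the threshold trips.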


From the docs, you can just run `docker system prune -a --volumes`

Ref: https://docs.docker.com/reference/cli/docker/system/prune/


Personally, I'd recommend the pointed 'docker {container,image,volume} prune' commands for scheduling granularity and control, or at least filtering, as you've also shown.

The 'system' context captures networks; much to my dismay, this has been a problem for no fewer than three employers. It's painfully common for things to expect the networks to persist. They don't really consume resources, so I see no reason to invite the systematic heartburn.

When? When there's disk pressure. Maybe also on some longer cadence (weekly? monthly?) to keep a lid on things. The image cache provides a benefit; no sense fighting it. At our rate, daily pruning would mean losing hours over a week repeatedly pulling the same images.
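A rough crontab sketch of that targeted, network-preserving approach (the weekly schedule and the 168h age filter are assumptions; note 'docker volume prune' does not support an 'until' filter, so it runs unfiltered here):

```
# m h dom mon dow  command
0 3 * * 0   docker container prune -f --filter until=168h
30 3 * * 0  docker image prune -a -f --filter until=168h
0 4 * * 0   docker volume prune -f
```

Networks are never touched, so anything expecting them to persist keeps working.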


Monitor your disks to see if they grow full, and have an idea what your storage baseline should be. Storage in /var/lib/docker/overlay2 can also leak, even if you prune regularly.
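One way to sketch that baseline monitoring (the baseline file path, the 20% growth threshold, and the `growth_pct` helper are my assumptions; `du` over overlay2 generally needs root):

```shell
#!/usr/bin/env bash
# Sketch: watch overlay2 growth against a recorded baseline.
# The baseline path and 20% threshold are assumptions.
BASELINE_FILE=/var/tmp/overlay2.baseline

growth_pct() {
  # Pure helper: percent growth of current over baseline (0 if shrunk)
  local current="$1" baseline="$2"
  if [ "$baseline" -le 0 ] || [ "$current" -le "$baseline" ]; then
    echo 0
  else
    echo $(( (current - baseline) * 100 / baseline ))
  fi
}

current=$(du -s /var/lib/docker/overlay2 2>/dev/null | cut -f1)
current=${current:-0}
if [ ! -f "$BASELINE_FILE" ]; then
  echo "$current" > "$BASELINE_FILE"
fi
baseline=$(cat "$BASELINE_FILE")
if [ "$(growth_pct "$current" "$baseline")" -ge 20 ]; then
  echo "overlay2 grew over 20% past baseline; investigate leaked layers"
fi
```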

I had to work on a Mac M3 for a year and it sucked. It did not feel snappier than any Windows or Linux machine (including this one) that I've ever used, and that goes back to the 1980s.

I suggest you judge based on benchmarks rather than vibes.

If you believe the latest M3 does not perform better than machines you’ve used in the 80s, I have no idea how to even start a reasonable discussion about this.


> If you believe the latest M3 does not perform better than machines you’ve used in the 80s

That wasn't what I was trying to say; I apologize, I should have been clearer. What I intended to say was that I've been using many different computers since the 1980s, so I have a wide and deep sampling of experience with them, and to that end the M3 did NOT feel to me like it performed better. Regardless of the benchmarks, I know how a machine should feel, and the M3 did not feel any better than any other machine I've used (and that is a lot of laptops).


Ok that is a much better point and a fair correction.

Well, no? That's literally saying "trust the synthetic process, we don't care about real-world usage". I don't care if it works better theoretically; if it feels bad in everyday usage, it IS bad.

Well, grab an Apple from the 80s and try running a modern app on it and see whether the M3 performs better or worse.

Or, if the point is that software became very bloated, then sure but they also do a lot more nowadays so then you’re really just comparing apples with oranges.


The article is dumb. "Why do you have an API endpoint that deletes your entire production database?" is irrelevant; the AI did what it did, period.

No, the AI did what you told it to do. The AI didn’t do anything on its own.

> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability


> No, the AI did what you told it to do.

I'd say yes and no. The LLM reacted to the input it was given, but it is not possible for a human (especially without access to the weights) to even guess what will happen after that.

Regardless of that, I agree that it's completely the user's fault to use a tool whose outcome you can't predict, give it such broad permissions, and not have a solid backup strategy.

Either don't use non-deterministic tools or protect yourself from the potential fallout.


Uh?

If someone left a loaded gun in a room and then let a toddler run around in it, we would be questioning why the guy 1) left the gun in the room 2) left the toddler in the room unsupervised. We wouldn't be saying, well no one should have toddlers in rooms.


A PhD-level toddler, mind you.

Lol no. No LLM that exists today can write a legible PhD thesis, nor a master's dissertation. Maybe a first-year college student, if we're being generous, but I wouldn't leave one of those in a room with a loaded gun either.

Does that mean the prompt should include: "...and don't delete my production database."?

If the agent didn't have delete permissions, or was sandboxed in some other way from your production database, that would have handled it. So not running it that way is a decision someone made.

It means people have to read the commands that they are generating before executing them.

Just in case this isn't hyperbole, no. It means an LLM should not be given that much privilege and that you are responsible for reviewing the tool's output and approving its actions.

"But wait, the user probably just meant that I shouldn't delete the database itself. Removing all of the rows in the table is fine"
