My theory is that, since I'm going to do things like banking in my browser, I want one that has a lot of skin in the game. Chrome, being backed by Google, has trillions of dollars on the line should Google ever do anything truly evil. Though this sneaky 4 GB download comes close.
> she dropped out of the primary when her impending loss became obvious
That is how voting and elections are supposed to work, not by saying people of a certain age, color, race, or creed can't hold office because young people today are bigoted and feel they deserve more than all the generations that preceded them throughout history.
I'd say party leadership endorsing a 78-year-old candidate for US Senate is not how voting and elections ought to work; I'd say it's a pretty big problem. You're welcome to agree or disagree with that, but more relevant to the context of this comment thread, it is not a problem that would be solved by congressional term limits.
> young people today are bigoted and feel they deserve more than all generations that ever preceded them throughout all of history.
Each generation doing materially better than the previous one used to be a widely agreed upon goal in the United States. Perhaps not really relevant to this comment thread.
She was the logical choice and a good candidate. Government is not a job anyone can step into and be successful at. Look at the difference in effectiveness between career politicians and populist outsiders: it's like hundreds of meaningful bills passed compared to single digits.
I retired last year as a lead software engineer; I never got into management as a civilian, and did software for over three decades. My salary was lower than it was as a mid-level engineer 20 years ago.
Which old people have "much more money"? Maybe they can send some my way.
I thank God for Social Security and hope and pray it remains well into the future, even for the 99% of the people who are saying bigoted, dumb shit on this page.
This is what I've done, after spending some time looking into it, on the Linux desktop.

Delete Chrome's silently downloaded 4 GB AI model file and disable the AI features:

1. In Chrome, go to chrome://flags
2. Search for and Disable these flags:
   - Enables optimization guide on device
   - Prompt API for Gemini Nano
   - AI Mode
3. Open DevTools (F12 or Ctrl+Shift+I), click the Settings gear icon, go to AI Innovations, and uncheck "Enable AI assistance".
For Linux, in a bash shell, this should prevent Chrome from trying to download the file again, because the root user, rather than my user, will own the file/directory.
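A sketch of what that root-ownership trick looks like. The path is an assumption: recent Chrome builds appear to keep the on-device model under an OptGuideOnDeviceModel directory in the profile, but verify the location on your own machine (it may differ by channel or version) before deleting anything:

```shell
# Assumed location of the on-device model; confirm before running.
MODEL_DIR="$HOME/.config/google-chrome/OptGuideOnDeviceModel"

# Remove the ~4 GB model Chrome already downloaded.
rm -rf "$MODEL_DIR"

# Recreate the directory owned by root with no write access, so Chrome
# (running as your user) cannot re-download the model into it.
sudo mkdir -p "$MODEL_DIR"
sudo chown root:root "$MODEL_DIR"
sudo chmod 555 "$MODEL_DIR"
```

Chrome should then fail silently when it tries to write there; a Chrome update could plausibly change the path, so it's worth re-checking disk usage occasionally.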
Or you accidentally trigger it because a key binding you've used for 15 years happens, on an unexpected consent screen, to activate the consent button.
Did you notice when your streaming files went from 1.5 GB for a movie to 6 GB for a movie? I didn't. Almost no one does. And no one writes blog posts like this about the data usage.
Personally, I'd recommend the pointed 'docker {container,image,volume} prune' commands for scheduling granularity and control. At a minimum, use filtering, as you've also shown.
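For concreteness, a sketch of those pointed commands as you might schedule them; the 'keep' label and the one-week retention window are just example values:

```shell
# Targeted pruning instead of 'docker system prune', so networks are
# left alone and each resource type can run on its own schedule.

# Remove stopped containers.
docker container prune --force

# Remove dangling images (add --all to drop every unused image).
docker image prune --force

# Remove unused anonymous volumes; a label filter protects volumes
# you've deliberately marked (label name is hypothetical).
docker volume prune --force --filter "label!=keep"

# Age-based filtering: only prune images unused for more than a week.
docker image prune --all --force --filter "until=168h"
```

Each of these can go in its own cron/systemd timer, which is the scheduling granularity being argued for above.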
The 'system' context captures networks; much to my dismay, this has been a problem for no fewer than three employers. It's painfully common for things to expect the networks to persist. They don't really consume resources, so I see no reason to invite the systematic heartburn.
When? When there's disk pressure, plus maybe something longer-term (weekly? monthly?) to keep a lid on things. The image cache provides a benefit; no sense fighting it. At our rate, daily pruning would mean losing hours over a week repeatedly pulling the same images.
Monitor your disks to see if they grow full, and have an idea what your storage baseline should be. Storage in /var/lib/docker/overlay2 can also leak, even if you prune regularly.
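A minimal baseline check along those lines, assuming the default overlay2 storage driver and root access:

```shell
# Docker's own accounting of images, containers, volumes, build cache.
docker system df

# Actual on-disk usage of the storage driver's layer directory; a gap
# that grows relative to 'docker system df' suggests leaked layers.
sudo du -sh /var/lib/docker/overlay2

# Count layer directories; track this over time to spot orphans that
# regular pruning never reclaims.
sudo find /var/lib/docker/overlay2 -maxdepth 1 -type d | wc -l
```

Feeding the `du` number into whatever disk alerting you already run gives you the "storage baseline" mentioned above.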
I had to work on a Mac M3 for a year and it sucked; it did not feel snappier than any Windows or Linux machine (including this one) that I've ever used, and that goes back to the 1980s.
I suggest you judge based on benchmarks rather than vibes.
If you believe the latest M3 does not perform better than machines you’ve used in the 80s, I have no idea how to even start a reasonable discussion about this.
> If you believe the latest M3 does not perform better than machines you’ve used in the 80s
That wasn't what I was trying to say; I apologize, I should have been clearer. What I intended to say was that I've been using many different computers since the 1980s, so I have a wide and deep sampling of experience with them, and to that end the M3 did NOT feel to me like it performed better. Regardless of the benchmarks, I know how a machine should feel, and the M3 did not feel any better than any other machine I've used (and that is a lot of laptops).
Well, no? That's literally saying "trust the synthetic process, we don't care about real-world usage"?? I don't care if it works better in theory; if it feels bad in everyday usage, it IS bad.
Well, grab an Apple from the 80s and try running a modern app on it and see whether the M3 performs better or worse.
Or, if the point is that software became very bloated, then sure but they also do a lot more nowadays so then you’re really just comparing apples with oranges.
No, the AI did what you told it to do. The AI didn’t do anything on its own.
> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability
I'd say yes and no. The LLM reacted to the input it was given, but it is not possible for a human (especially without access to the weights) to even guess what will happen after that.
Regardless of that, I agree that it's completely the user's fault to use a tool whose outcome you can't predict, give it such broad permissions, and not have a solid backup strategy.
Either don't use non deterministic tools or protect yourself from the potential fallout.
If someone left a loaded gun in a room and then let a toddler run around in it, we would be questioning why the guy 1) left the gun in the room 2) left the toddler in the room unsupervised. We wouldn't be saying, well no one should have toddlers in rooms.
Lol no. No LLM that exists today can write a legible PhD thesis, nor a master's dissertation. Maybe it writes at the level of a first-year college student, if we're being generous, but I wouldn't leave one of those in a room with a loaded gun either.
If the agent didn't have delete permissions, or was sandboxed away in some other way from your production database, that would have handled it. So not running it that way is a decision someone made.
Just in case this isn't hyperbole, no. It means an LLM should not be given that much privilege and that you are responsible for reviewing the tool's output and approving its actions.