Hacker News | unsnap_biceps's comments

Can you expand on what RC was? Was the compute off-device?

It's patching the binary in memory, so the patch is architecture-dependent. The existing payload is x86_64-only, but with an updated payload it would work on ARM.

The point of backtesting is to allow you to do what you want to do with a veneer of being data driven.


TIL that the F-14's CADC might be the first microprocessor chipset.


How are you crafting the stamps? 3D-printing them so they're a negligible cost?


There are no videos of it cleaning really dirty floors for the simple reason that it doesn't work like that. Cleaning robots wage a war of attrition: they clean at a slightly faster pace than you dirty the floor. If it's really dirty, it takes a while to actually clean the mess, but it cleans daily and eventually gets it clean.

I would also say that cyclones don't violate the rule. They end up adding the dirt to a container of some sort and I would not consider that container clean.


It ships with a SKILL.md file, so if they're trying to enable AI to use it, it's a good bet that they used AI to build it.


When I looked at OpenSnitch (years ago), it didn't support running headless on a server. Am I mistaken about this, or has it changed?


You can run daemons on several nodes (different machines) and view them all through a central UI; it's pretty cool.


The UI is a separate package. Though you might just configure the firewall yourself at that point.
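Roughly, the daemon/UI split looks like the sketch below. The flag names are taken from my reading of the OpenSnitch wiki and may differ by version, and the IP address is a placeholder; verify against `opensnitchd --help` on your install.

```shell
# On a workstation, run the GUI package listening on TCP instead of a unix socket:
opensnitch-ui --socket "[::]:50051"

# On each headless server, install only the daemon and point it at that workstation:
opensnitchd -ui-socket "192.168.1.10:50051" -rules-path /etc/opensnitchd/rules
```

With no UI reachable, the daemon falls back to applying its configured default action on its own, which is effectively the "configure the firewall yourself" mode.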


The slider disappearing when sliding between extremes is very confusing. I think the slider should be the only thing displayed; remove the buttons entirely.


The change to buttons was based on feedback I got today. The slider disappearing is a bug. Pushing a fix now!


LM Studio offers an Anthropic-compatible local endpoint, so you can point Claude Code at it and it'll use your local model for its requests. However, I've had a lot of problems with LM Studio and Claude Code losing its place: it'll think for a while, come up with a plan, start to do it, and then just halt in the middle. I'll ask it to continue and it'll do a small change and get stuck again.

Using ollama's API doesn't have the same issue, so I've stuck with ollama for local development work.
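For reference, pointing Claude Code at a local server is just a matter of overriding its endpoint via environment variables. A minimal sketch, assuming LM Studio is serving on its default port (1234) with a model loaded; the variable names are Claude Code's endpoint overrides:

```shell
# Point Claude Code at LM Studio's local Anthropic-compatible server.
export ANTHROPIC_BASE_URL="http://localhost:1234"
export ANTHROPIC_AUTH_TOKEN="lm-studio"  # local servers generally ignore the value
claude
```

Unset both variables (or open a fresh shell) to go back to the hosted API.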


Claude Code is fairly notoriously token-inefficient as coding agents/harnesses go (I come from aider, pre-CC). It's only viable because the Max subscriptions give you an approximately unlimited token budget, which resets after a few hours even if you hit the limit. But this also only works because cloud models have massive context windows (1M tokens on Opus right now), which is difficult to replicate locally given the VRAM needed.

And even if you somehow opened up a big enough VRAM playground, the open-weights models aren't as good at wrangling such large context windows (even Opus is barely capable): they basically get confused about what they were doing before they finish parsing the context.
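To put a number on the VRAM problem: the KV cache alone for a 1M-token window is enormous. A back-of-the-envelope sketch, where the model dimensions are illustrative assumptions rather than any specific model's specs:

```shell
# KV-cache bytes per token = 2 (K and V) x layers x kv_heads x head_dim x bytes/element
awk 'BEGIN {
  tokens = 1000000; layers = 64; kv_heads = 8; head_dim = 128; bytes = 2  # fp16
  per_token = 2 * layers * kv_heads * head_dim * bytes
  printf "%.1f GiB\n", tokens * per_token / 2^30
}'
# prints 244.1 GiB
```

That's on top of the weights themselves, and it scales linearly with the window, so even aggressive KV-cache quantization only buys back a constant factor.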


I use CC at work, so I haven't explored other options. Is there a better one to use locally? I presumed they were all going to be pretty similar.


If you want to experiment with same-harness-different-models, OpenCode is classically the one to use. After their recent kerfuffle with Anthropic you'll have to use API pricing for Opus/Sonnet/Haiku, which makes it kind of a non-starter, but it lets you swap in any number of cloud or local models using e.g. ollama or z.ai or whatever backend provider you like.

I'd rate their coding-agent harness as slightly to significantly less capable than Claude Code, but it also plays better with alternate models.


I am hopeful the leaked Claude Code narrows the capability gap; perhaps even Google's offering will be viable once they borrow some ideas from Claude.


I have good experience with Mistral Vibe.


OpenCode


Can't you use Claude caveman mode?

https://github.com/JuliusBrussee/caveman


I don't get why I would use Claude Code when OpenCode, Cursor, Zed, etc. all exist, are "free", and work with virtually any LLM. Seems like a weird use case unless I'm missing something.


From my experience, Claude Code is just better, although I recently started using Zed and it's pretty good.


Previously I found Claude Code to be just better than the alternatives, whether using large models or local ones. It's closer now, however, and there's not much excuse for the competition after the Claude Code leak. Personally, I'll be giving this a go with OpenCode.


> I don't get why I would use Claude Code when OpenCode, Cursor, Zed, etc. all exist, are "free", and work with virtually any LLM. Seems like a weird use case unless I'm missing something.

I'm with you on this. I've tried Gemma with Claude Code and it's not good. It forgets it can use bash!

However, Gemma running locally with Pi as the harness is a beast.


This is like asking why use IntelliJ or VSCode or … when vim and emacs exist.


No, it's more like asking why you'd use a Microsoft-paid-for distro of nvim when LazyVim and AstroNvim exist.

