
This is why eventually, the AI with the fewest guardrails will win. Grok is currently the most unguarded of the frontier models, but it could still use some work on unbiased responses.


Still has to be a local model too.

Arbitrary government censorship on top of arbitrary corporate censorship is a hell no for me forever into the future


For what you're looking for, VeniceAI is focused entirely on privacy and keeping their models uncensored, even if it's not local. Rather than comply, they IP-block censorious jurisdictions like the UK.


VeniceAI is great, and my go-to for running open-source models. Sadly, they appear to have given up on providing leading coding models, which makes it of limited use to me.


I can't imagine sharing my code or workspace documents with X. Never mind the moral implications of just using their products.


Glad to see someone saying this; it's frightening how quickly everything is forgiven and forgotten.


If you tell DeepSeek you're going to jump off a cliff, DeepSeek will tell you to go for it*; but I don't think it's going to beat Anthropic or OpenAI.

* https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...


Try asking about Chinese history or politics and you won't get far.


Gemini is surprisingly unguarded as well, especially when accessed via the API. It puts on airs if you run a quick smoke test like "tell me how to rob a bank," but give it a Bond-supervillain prompt and it will tell you, gleefully at that. Qwen tends to behave the same way.

OTOH Anthropic and OpenAI seem to be in some kind of competition to make their models refuse as much as possible.


My prediction is that alignment is an unsolvable problem, but OTOH if they don't even try, the second-order effects will be catastrophic.


Doesn't Grok have the opposite issue, where it will actively steer you toward alt-right topics like "white genocide"?

