Hacker News

Called it 10 days ago: https://news.ycombinator.com/item?id=47533297#47540633

Something worse than a bad model is an inconsistent model. You can't gauge how far to trust the output, even for the simplest instructions, so everything has to be reviewed intensely, which is exhausting. I jumped on Max because it was worth it, but I guess I'll have to cancel this garbage.



With Claude Code the problem of changes outside of your view is twofold: you don't have any insight into how the model is being run behind the scenes, and you don't get to control the harness either. Your best hope is to downgrade CC to a version you think worked better.

I don't see how this can be the future of software engineering when we have to put all our eggs in Anthropic's basket.


Yep. I was doing voice-based vibe coding flawlessly in Jan/Feb.

I've basically stopped using it because I have to be so hands on now.


This is why you should never ever trust an AI coding agent to produce good code.

Use it to set up the strictest possible custom linting rules.
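As an illustration of "strictest possible" linting, here is a minimal sketch using Ruff in `pyproject.toml`. The tool choice and the specific rule selection are my assumptions, not something the commenter specified:

```toml
# Hedged sketch: gate AI-generated Python behind maximally strict linting.
# Rule families chosen here are illustrative, not prescriptive.
[tool.ruff]
target-version = "py312"

[tool.ruff.lint]
select = ["ALL"]   # enable every rule family, then opt out deliberately
ignore = ["D"]     # example opt-out: docstring rules, if they add noise
```

Running `ruff check --no-fix` in CI then makes the agent's output fail fast instead of relying on human review to catch sloppy code.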


One of the replies even called out the phased rollout, lmao https://news.ycombinator.com/item?id=47533297#47541078


LLMs are nondeterministic.


You could never just trust the output of an LLM. What are you talking about?



