Yeah, I saw that one too, which I would think supports my point that distilling down training data would lead to more truth-aligned AI.

I mean, it's also just the classic garbage-in, garbage-out heuristic, right?

The more the training data is filtered and refined, the closer the model will get to approximating truth (at least functional truths).
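To make the "filter and refine" idea concrete, here's a minimal sketch of quality-filtering a training corpus before use. The `quality_score` heuristic here (penalizing very short or highly repetitive text) is purely hypothetical, a stand-in for whatever signal you actually trust: deduplication, a perplexity filter, a learned quality classifier, fact-checking, etc.

```python
# Minimal sketch: drop low-quality examples from a training corpus.
# `quality_score` is a hypothetical heuristic, not a real pipeline.

def quality_score(example: str) -> float:
    # Penalize very short or very repetitive text.
    words = example.split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)
    length_ok = 1.0 if len(words) >= 5 else 0.5
    return unique_ratio * length_ok

def filter_corpus(corpus, threshold=0.6):
    # Keep only examples scoring above the quality threshold.
    return [ex for ex in corpus if quality_score(ex) >= threshold]

corpus = [
    "the cat sat on the mat near the window",
    "spam spam spam spam spam spam",
    "ok",
]
print(filter_corpus(corpus))
# -> ['the cat sat on the mat near the window']
```

The real work, of course, is in choosing a scoring function that correlates with truthfulness rather than just surface fluency.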

It seems we are agreeing and adding to each other's points... Were you one of the people who downvoted my comment?

I'm just curious what I'm missing.
