
> What the study actually showed: people don't have an investment in, or strong memory of, output they didn't produce.

The problem with LLMs is that when you spend hours feeding prompts to solve a problem, you actually did help (a lot!) to produce the output.



I agree; the study didn't address that or offer any thoughts on it.



