Hacker News

I think you'll find OP absolutely did say that.

> Parts of it were 100% LLM written. Like it or not, people can recognize LLM-generated text pretty easily

https://news.ycombinator.com/item?id=45868782



Thanks for adding the quote, that is a different part of the post than I was focusing on.

I still think that's a far cry from deterministically recognizing LLM-generated text. The way I would understand that claim is as an algorithmic test with very low rates of both false positives and false negatives. Instead, I understood the OP to be saying that people have an intuitive sense for LLM-generated text with a relatively low false-negative rate.
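The distinction matters because a verdict's reliability depends on the base rate as well as the error rates. A minimal sketch (all numbers are made-up illustrative assumptions, not measurements of any real detector):

```python
# Hedged sketch: how false-positive/false-negative rates interact with a
# prior ("base rate") when judging whether a text is LLM-generated.
# The specific probabilities below are invented for illustration.

def posterior_llm(prior: float, fnr: float, fpr: float) -> float:
    """P(LLM | judge says 'LLM'), via Bayes' rule.

    prior: P(LLM) before seeing the judge's verdict
    fnr:   P(judge says 'human' | text is LLM), so sensitivity = 1 - fnr
    fpr:   P(judge says 'LLM'   | text is human)
    """
    p_flag = (1 - fnr) * prior + fpr * (1 - prior)
    return (1 - fnr) * prior / p_flag

# A reader with a low false-negative rate but a non-trivial false-positive
# rate is still often wrong when LLM text is rare in the pool being judged:
print(round(posterior_llm(prior=0.1, fnr=0.05, fpr=0.2), 3))  # -> 0.345
```

So "low false-negative rate" alone does not make an accusation trustworthy; the false-positive rate and the prior do most of the work.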

I am certain that the skill varies widely between individuals, but there is no principled reason to suspect that, with training, humans could not become quite good at recognizing low-effort (no attempt at altering style) LLM-generated content from the major models. In principle it is no different from the authorship analysis used in digital forensics, a field that shows fairly high accuracy under similar conditions.
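For a sense of what that kind of analysis rests on: much of stylometry compares surface statistics of texts. A stdlib-only sketch of the shape of the idea, using character-trigram frequency profiles and cosine similarity (real forensic stylometry uses far richer feature sets; the sample strings here are invented):

```python
# Illustrative sketch of surface-statistics authorship comparison:
# character-trigram frequency profiles compared by cosine similarity.
import math
from collections import Counter

def trigram_profile(text: str) -> Counter:
    """Count overlapping lowercase character trigrams."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sample = "the quick brown fox jumps over the lazy dog"
similar = "the quick brown fox leaps over the lazy cat"
unrelated = "import numpy as np; x = np.zeros((3, 3))"

# Stylistically close texts share far more trigrams than unrelated ones:
print(cosine(trigram_profile(sample), trigram_profile(similar)) >
      cosine(trigram_profile(sample), trigram_profile(unrelated)))  # -> True
```

A human judging LLM-isms is plausibly doing an intuitive version of this, keyed on phrasing patterns rather than trigrams.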


I am pretty much certain that parts of it were LLM-written, yes. This doesn't imply that the entire blog post is LLM-generated. If you're a good Bayesian and object to my use of "100%" feel free to pretend that I said something like "95%" instead. I cannot rule out possibilities like, for example, a human deliberately writing in the style of an LLM to trick people, or a human who uses LLMs so frequently that their writing style has become very close to LLM writing (something I mentioned as a possibility in an earlier reply; for various reasons, including the uneven distribution of the LLM-isms, I think that's unlikely here).



