
Interesting litmus test, as the post isn't just green, it's riddled with LLM copyediting. Doesn't read as if originally composed by an LLM, so there's that.

Would seem to require some discernment to classify. Not all assistive use is slop.



Some litmus test. I am sooo tired of statements like "No x. No y. No z." and then optionally "Just Foo.".

Who aside from Fred fucking Durst writes like that?

Ugh... Clearly LLM generated. This is what the internet has become. 90% of posts are variations of tropes like these.


    > I am sooo tired of statements like "No x. No y. No z." and then optionally "Just Foo.".  Who aside from Fred fucking Durst writes like that?
I disagree. This is a classic humor template in popular magazines from the 1990s and 2000s. The New Yorker's "Talk of the Town" probably has/had this style frequently. Also, (Timothy) McSweeney's Quarterly Concern is basically an extended trope of exactly this type of writing from the 1990s and 2000s.


I mean I guess you're right - I didn't notice it, because the community reaction to the project was so positive.

> Not all assistive use is slop.

That's right, and the key is to discern which posts/projects are interesting.


The discussion about the LLM-assisted/written submission at the time, with replies by the author: https://news.ycombinator.com/item?id=47055300 The defence given was essentially "just reformatted it for better grammar".

It obviously said LLM to me at first read-through.

I suspect that:

a) fewer people are willing to expend a bit of energy to notice LLM usage, given how much of it there is. ("we've lost" theory)

b) that people are losing the ability to detect LLM submissions. ("we're cooked" theory)

or c) that people don't care about the use of LLMs. ("who cares" theory)

Personally I've been feeling less invested, because it seems as if most users don't care and even the main users of the site don't notice it.


Do you have any good links to guides on how to spot it? I would like to care, but it's hard to tell. And then what do we do when we spot it?


One guide that I hope is kept up to date: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing . Generally, though, it's a kind of pattern recognition; some patterns are more visible to me than others.

I should clarify and revise my thoughts and initial comment. I do not think that not being able to detect it leads to a lack of care. I actually think that many things have passed me by, and in the future this will happen even more as LLMs improve ("we're cooked").

As to "what do we do when we spot it": you hit the nail on the head; that's exactly the feeling I had as I was writing the comment. What do we actually do, what can we change, and should we attempt futile things?

And even in the example dang gave, the actual submission was very good. Is any amount of LLM use okay, and where's the line? I use LLMs at work, but I don't like writing readmes or blog posts with them. Others might like writing code at work by hand but dislike writing text, so they use LLMs for that. Maybe I should lower my expectations!


Or even train an LLM to catch LLMs. Like that old adage: set a thief to catch a thief.
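You don't even need an LLM for the crudest cases. A toy sketch (mine, purely illustrative, not a real detector): a regex heuristic that flags the "No X. No Y. Just Z." trope complained about upthread. The pattern and examples are assumptions, and a serious classifier would obviously need far more than this.

```python
import re

# Matches two or more "No <something>." clauses followed by "Just <something>".
# This is a toy heuristic for one trope, not a general LLM detector.
TROPE = re.compile(r"(?:\bNo\s+\w+[\w\s]*\.\s*){2,}Just\s+\w+", re.IGNORECASE)

def looks_like_trope(text: str) -> bool:
    """Return True if the text contains the 'No X. No Y. Just Z.' pattern."""
    return bool(TROPE.search(text))

print(looks_like_trope("No servers. No tracking. No ads. Just speed."))  # True
print(looks_like_trope("We added tracking and ads."))                    # False
```

The obvious catch, also raised upthread: humans have written in this style since long before LLMs, so any detector built on surface tropes will misfire on McSweeney's-style prose.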



