An algorithm can't.
Plenty of humans absolutely can. LLM writing is genuinely fucking terrible. It has the slightly stilted over-formality of most non-native speakers, without the intelligence that being fluent in a second language implies.
Flawless grammar paired with a complete absence of any sign of intelligence is not something you regularly get from humans.
The "can" is irrelevant here. A checking tool has to be reliable to be useful. What's the use of a checker that maybe detects something, sometimes, somewhat successfully?
There's a massive gap between "you can't make a tool" and "you can't identify it".
The problem with a tool is the exact same as the issue with LLMs to begin with: it doesn't resemble intelligence or comprehension in any way, so it can't use either as an indicator.
But the use of LLMs is absolutely identifiable to moderately intelligent humans, because LLM output has raw language skills wildly inconsistent with every other skill that is part of writing.
What's even the point of your argument? That a detective can figure out who used AI? Yes, detectives can figure out most stuff. This is completely irrelevant to the topic at hand, my dude.
What are you talking about, "detectives"?
You said "nobody can identify LLM use," when any moderately intelligent human can identify LLM output pretty easily. It explodes off the page.
Whatever, dude, I'm not playing these stupid games. You know exactly what I meant. Go away 👋
It's not a game.
Spreading the lie that LLMs are somehow indistinguishable from humans is incredibly harmful. It's a big part of the reason the obscene waste of energy that is the entire "force chatbots into everything" space exists.