this post was submitted on 21 Oct 2024
217 points (97.4% liked)

Technology

top 14 comments
[–] Grandwolf319@sh.itjust.works 16 points 5 hours ago

Of course not; the whole point of disinformation is that it sounds correct. That's AI's bread and butter!

[–] MajorHavoc@programming.dev 54 points 9 hours ago* (last edited 6 hours ago) (2 children)

I love that someone even bothered with a study.

(Edit: To be clear, I am both amused, and also genuinely appreciate that the science is being done.)

[–] jeansburger@lemmy.world 25 points 8 hours ago (3 children)

Confirmation of anecdotes or gut feelings is still science. At some point you need data rather than experience to help people and organizations change their perception (see: most big tech companies lighting billions of dollars on fire on generative AI).

[–] GoodEye8@lemm.ee 3 points 5 hours ago

Not to mention that, based on the numbers in the article, I imagine the AI might actually do better than the average human would. It wasn't as much of a "duh" as I thought it would be.

[–] MajorHavoc@programming.dev 2 points 6 hours ago

Agreed!

I don't mean that sarcastically, honestly. As you said, it's still valuable science.

[–] homesweethomeMrL@lemmy.world 1 points 7 hours ago

That’s true. But still. Duh.

[–] TheGrandNagus@lemmy.world 2 points 6 hours ago

For many hundreds of years, blood-letting was an obvious thing to do. As was just giving people leeches for medical ailments. And ingesting mercury. We thought having sex with virgins would cure STDs. We thought doses of radiation were good for us. And tobacco. We thought it was obvious that the sun revolved around the Earth.

It is enormously important to scientifically confirm things, even if they do seem obvious.

[–] Zerlyna@lemmy.world 8 points 6 hours ago (1 children)

Supposedly ChatGPT had an update in September, but it still doesn't agree that Trump was found guilty in May on 34 counts. When I give it sources it says OK, but it doesn't retain the corrected information.

[–] Jrockwar@feddit.uk 8 points 4 hours ago (1 children)

That's because it doesn't learn; its knowledge is a snapshot of its training data, frozen in time.

I like Perplexity (a lot) because instead of using its training data to answer your question, it uses your question to craft web searches, gather content, and summarise it into a response. It's like a student who uses their knowledge to look for the answer in the books, instead of trying to answer from memory whether they know the answer or not.
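The search-then-summarise loop described above can be sketched in a few lines. Everything here is a toy stand-in: the function names, the keyword-overlap "search", and the stitched-together "summary" are illustrative assumptions, not Perplexity's actual (proprietary) pipeline.

```python
def craft_query(question):
    """Turn a user question into a web-search query (toy heuristic: drop stopwords)."""
    stopwords = {"how", "does", "the", "a", "an", "of", "in"}
    return " ".join(w for w in question.lower().split() if w not in stopwords)

def retrieve(query, corpus):
    """Stand-in for a web search: return documents sharing at least one query term."""
    terms = set(query.split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def summarise(question, documents):
    """Stand-in for the LLM summarisation step: stitch the retrieved evidence together."""
    if not documents:
        return "No sources found."
    return f"Based on {len(documents)} source(s): " + " ".join(documents)

def answer(question, corpus):
    """Retrieval-augmented answering: search first, then summarise what was found."""
    return summarise(question, retrieve(craft_query(question), corpus))
```

The point of the design is that the model's stale memory only shapes the query; the answer itself is grounded in freshly retrieved text.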

It isn't perfect; it does hallucinate from time to time, but rarely enough that I use it far more than regular web searches at this point. I can throw quite obscure questions at it and it will dig up the answer for me.

As someone with ADHD with a somewhat compulsive need to understand random facts (e.g. "I need to know right now how the motor speed in a coffee grinder affects the taste of the coffee") this is an absolute godsend.

I'm not affiliated or anything, and if anything better comes my way I'll be happy to ditch it. But for now I really enjoy it.

[–] Nougat@fedia.io 1 points 20 minutes ago

... it uses your data to craft web searches, gather content, and summarise it into a response.

GPT-4o does this, too.

[–] BertramDitore@lemm.ee 32 points 9 hours ago

Think about it this way: remember those upside-down answer keys in the back of your grade school math textbook? Now imagine if those answer keys included just as many incorrect answers as correct ones. How would you know whether you were right or wrong without asking your teacher? Until an LLM can guarantee a right answer, and back it up with real citations, it will continue to do more harm than good.

[–] pennomi@lemmy.world 7 points 8 hours ago

I think the next step in AI is learning how to control and direct the speech, rather than just making computers talk.

They are surprisingly good for what is essentially a statistical copycat of words on the internet. Whatever second-tier innovation jumps AI from pattern matching to true reasoning is going to be wild.
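A tiny illustration of the "statistical copycat" idea: a bigram model can only ever emit word transitions it has literally seen in its training text, pattern matching with no reasoning at all. (This is deliberately simplistic; a real LLM predicts tokens from learned weights rather than stored word lists, but the next-word-from-statistics principle is the same.)

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for every word, the words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length, seed=0):
    """Copy the statistics: each next word is sampled from observed followers."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word never had a successor in training
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Every adjacent word pair in the generated text is guaranteed to have occurred in the training text, which is exactly why such a model can sound locally plausible while having no idea whether what it says is true.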

[–] TommySoda@lemmy.world 5 points 7 hours ago
[–] THX1138@lemmy.ml 2 points 7 hours ago

Shocked Pikachu