this post was submitted on 11 Feb 2024

Futurology

[–] kwedd@feddit.nl 4 points 2 years ago (2 children)

Is there no risk of the LLM hallucinating cases or laws that don't exist?

[–] RedditWanderer@lemmy.world 6 points 2 years ago

How to use ChatGPT to ruin your legal career.

AI does help with discovery, so lawyers no longer need to spend eight days scanning emails before trial, but firms will still need lawyers and junior lawyers.
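
The discovery use case mentioned above can be sketched very simply. This is a toy illustration, not any real e-discovery product: score each email by how many search keywords it contains, then rank the pile so reviewers start with the most likely hits instead of reading everything in order.

```python
# Toy e-discovery triage sketch (hypothetical, stdlib only): rank emails
# by keyword hits so the most relevant ones surface first.

def score_email(body: str, keywords: list[str]) -> int:
    """Count how many of the search keywords appear in the email body."""
    text = body.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def triage(emails: list[str], keywords: list[str]) -> list[str]:
    """Return emails sorted from most to fewest keyword hits."""
    return sorted(emails, key=lambda e: score_email(e, keywords), reverse=True)

emails = [
    "Lunch on Friday?",
    "Please shred the Q3 contract drafts before the audit.",
    "Contract audit schedule attached.",
]
ranked = triage(emails, ["contract", "audit", "shred"])
print(ranked[0])  # the email matching all three keywords is ranked first
```

A real system would use embeddings or an LLM classifier rather than substring matching, but the ranking-then-human-review shape is the same: the machine narrows the pile, lawyers still make the calls.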

[–] Bipta@kbin.social 2 points 2 years ago* (last edited 2 years ago)

GPT-4 is dramatically less likely to hallucinate than GPT-3.5, and we're barely at the start of the exponential growth curve.

Is there a risk? Yes. But humans make things up too, if you think about it, and all AI has to do is be better than humans, a milestone it already has within sight.
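
The hallucination risk raised in the question above can also be mitigated mechanically rather than trusted away. A minimal sketch, assuming a verified citation database (here a toy in-memory set; a real system would query a case-law service such as CourtListener): flag every case an LLM cites that the database cannot confirm exists.

```python
# Hedged sketch: catch hallucinated case citations by checking LLM output
# against a verified source. VERIFIED_CASES is a stand-in for a real
# case-law database lookup.

VERIFIED_CASES = {
    "Brown v. Board of Education",
    "Marbury v. Madison",
}

def unverified_citations(cited: list[str]) -> list[str]:
    """Return the citations that do not appear in the verified database."""
    return [c for c in cited if c not in VERIFIED_CASES]

llm_output = ["Marbury v. Madison", "Varghese v. China Southern Airlines"]
print(unverified_citations(llm_output))  # flags the case the database can't confirm
```

("Varghese v. China Southern Airlines" is one of the nonexistent cases ChatGPT famously invented in the 2023 Mata v. Avianca filing.) The point is that the risk doesn't have to ruin careers: a citation-check step turns "the model might invent a case" into "an invented case gets flagged before filing."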