No, that's a bad question. Autocorrect takes your own knowledge and writing as input and makes minor spelling corrections and grammar suggestions. It doesn't come up with legal analysis on its own, and any suggested grammar changes should still be scrutinized by the licensed professional to make sure they don't affect the argument.
And your second statement isn't what happened here. If the lawyer had written an argument and then fed it to the AI to correct and improve, that would at least have started from legal analysis written by a licensed professional. In this case, the lawyer bragged that he spent only seconds on this case instead of hours because the AI did everything. If he only spent seconds, then he very likely didn't start the process by writing his own analysis and feeding it to the AI, and he likely didn't review the analysis the AI spit out.
This is an issue that is happening in the medical world, too. Young doctors and med students are feeding symptoms into AI and asking for a diagnosis. That is a legitimate use of AI, as long as the diagnosis it spits out is heavily scrutinized by a trained doctor. If they just take the AI's output and apply the standard medical treatment without double-checking whether the diagnosis makes sense, then that isn't any better than me typing my symptoms into Google and looking at the results to diagnose myself.
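The safe pattern here is easy to state in code: the model's output is only ever a draft, gated behind a human sign-off. Here's a minimal sketch in Python, with entirely hypothetical function names (a real system would call an actual model and involve a real review process):

```python
# Minimal sketch of the "AI drafts, professional decides" pattern.
# Every name here is a hypothetical placeholder, not a real API.
from typing import Callable, Optional


def suggest_diagnosis(symptoms: list[str]) -> str:
    """Stand-in for the AI call: it returns a DRAFT, never a final answer."""
    return f"draft diagnosis for symptoms: {', '.join(symptoms)}"


def diagnose(
    symptoms: list[str],
    clinician_review: Callable[[str], bool],
) -> Optional[str]:
    """The AI draft only becomes a diagnosis if the human approves it."""
    draft = suggest_diagnosis(symptoms)
    if clinician_review(draft):
        return draft  # used only after professional sign-off
    return None  # rejected: the clinician does their own workup


if __name__ == "__main__":
    # The review step is a human judgment; a lambda stands in for it here.
    result = diagnose(["fever", "dry cough"], clinician_review=lambda d: False)
    print(result or "AI draft rejected; clinician writes their own analysis.")
```

The point is structural: the approval gate, not the model, decides what gets acted on.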
I watched the Legal Eagle video about another case where they submitted documents straight from an LLM, hallucinated cases and all. I agree that's idiotic. But there are a ton of use cases for these things across a lot of professions, and I think incidents like these might leave people assuming that using them at all is idiotic.
My concern is that a lot of people are trying to convince others to be afraid of or suspicious of something that is very useful, perhaps because they feel their career or skills are at risk of being diminished, and so they come up with these crazy stories.