
[–] bamboo@lemm.ee 2 points 5 months ago (1 children)

Is the distrust in the quality of the output? If so, I think the main thing Apple has going for it is that they use many fine-tuned models for context-constrained tasks. ChatGPT can be prompted arbitrarily and is expected to give good, sometimes long, output for everything. Being able to do that is… hard. Most of Apple's applications are much, much narrower. Take the writing assistant, which rephrases at most a few paragraphs: the output is relatively short, and the model has to do exactly one task. Or Siri: the model has to take a command and then select one or more intents to call. It's likely that choosing which intents to call, and what arguments to provide, are handled by separate models optimized for each case. Errors can still occur, but there are far fewer chances for them to occur.

I think part of Apple's motivation for partnering with OpenAI specifically for complex Siri questions is that this is an area they aren't comfortable putting Apple branding on due to output quality concerns; by handing it to a partner, they can pass the blame onto that partner. Someday, if LLMs are better understood and their output can be controlled and verified for open-ended questions, Apple might dump OpenAI and advertise their in-house replacement as accurate and reliable in a way ChatGPT isn't.
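To make the intent-selection point concrete, here is a minimal sketch of the kind of narrow target a Siri model would be choosing between. `AppIntent`, `@Parameter`, and `perform()` are part of Apple's real App Intents framework; the specific timer intent, and the idea that one model picks the intent while another fills in its arguments, are assumptions drawn from the comment above rather than anything Apple has documented.

```swift
import AppIntents

// Hypothetical intent an app could expose to Siri. The framework types are real;
// the intent itself is made up for illustration.
struct SetTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Set a Timer"

    // A single, strongly typed argument. A model handling "set a timer for ten
    // minutes" only has to (a) pick this intent and (b) produce the integer 10,
    // rather than generate free-form text.
    @Parameter(title: "Duration in minutes")
    var minutes: Int

    func perform() async throws -> some IntentResult {
        // The app would actually start the timer here.
        return .result()
    }
}
```

Because the model's job collapses to choosing an intent name and filling a few typed parameters, its output can be checked mechanically (does the intent exist? is the argument an integer in a sane range?) in a way that open-ended ChatGPT answers cannot.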

[–] LostWanderer@lemmynsfw.com 1 points 5 months ago

I think it's due to a combination of the tech still being relatively young (it has made leaps and bounds) and its thoughtless hallucinations that pass as valid answers. If the training data is poisoned by disinformation or misinformation, any output becomes useless at best and harmful at worst. The quality of LLM results depends entirely on the people in charge of creating them and the sources of their data. Having written that out, I realize I mistrust the people in control of LLM development, because it's so easy to implement this tech incorrectly and for the people in charge to be completely irresponsible. Since the tech bros behind this latest push to market LLMs as AI are so gung-ho about it, the guardrails have been pushed aside, which makes it all the easier for my fears to become manifest.

Once again, what Apple is likely trying to do with its LLM implementation sounds all well and good. However, I can't help but wonder how terribly wrong it could all go.