Apple is terrible. The AI is doing what it's supposed to: spying on its users for its real masters.
Ditch your Apple products before they get you sent to El Salvador
Well, for once I have to stand up for Apple. What makes them different in the AI space is that the inference actually happens on-device and is very privacy-focused. Probably why it sucks.
Nailed it. I've tried taking notification contexts and testing how hard the task actually is. Their foundation model is, I think, a 4-bit quantized, 3-billion-parameter model.
So I loaded up Llama, Phi, and picoLLM to run some unscientific tests. Honestly, they had way better results than I expected. Phi and Llama both handled notification summaries great (I modeled the context window myself; nothing official). I have no idea wtf AFM is doing, but it's awful.
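For the curious, here's roughly what one of those tests looked like — a minimal sketch using llama-cpp-python. The model file, prompt format, and notification text are my own stand-ins, not anything official from Apple:

```python
# Rough reproduction of the notification-summary test described above.
# The GGUF filename and the system prompt are assumptions, not Apple's setup.
from llama_cpp import Llama

# Any small 4-bit quantized instruct model works, e.g. a ~3B GGUF file.
llm = Llama(model_path="phi-3-mini-4k-instruct-q4.gguf", n_ctx=2048, verbose=False)

notifications = [
    "Mom: Dinner is at 7, don't be late",
    "Mom: Also can you pick up bread on the way?",
]

resp = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Summarize the following notifications in one short sentence."},
        {"role": "user", "content": "\n".join(notifications)},
    ],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```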
It sucks for a lot of reasons, but mostly because AI is always a "black box" (DeepSeek being the exception) with "magic proprietary code". You think "Tim Apple" isn't working with the Trump admin to ID people for El Salvador?
Being open source doesn’t magically make it good. There’s a ton of open source software that straight up sucks.
Yes, but in this case you can see what the model is doing, and it's running on your actual computer, whereas a lot of LLM providers run their models on their own server farms today, partly because it's prohibitively expensive to run a big model on your own machine (DeepSeek's famous R1 needs hundreds of gigabytes of VRAM, on the order of 20 GPUs) and partly so that they keep more control over the thing.
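Back-of-the-envelope, counting weights only (no KV cache or activations), the scale gap looks like this:

```python
# Weights-only VRAM estimate; real deployments need more for KV cache etc.
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """GB of memory needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Full DeepSeek R1 is a 671B-parameter model.
print(weight_vram_gb(671, 4))  # ~335 GB at 4-bit -> roughly 14x 24GB GPUs
print(weight_vram_gb(671, 8))  # ~671 GB at 8-bit -> roughly 28x 24GB GPUs
print(weight_vram_gb(3, 4))    # ~1.5 GB for a 3B on-device model
```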
AI isn't a black box in the sense that it's a mystery machine that could do anything. It's a black box in the sense that we don't know exactly how it works internally, i.e. which particular probability vector or tensor is responsible for what, even though we have a fairly good general idea of what goes on.
It's like a brain in that sense. We don't know which exact nerve-circuits do what, but we have a fairly good general idea of how brains work. We don't think that if we talk to someone, they're transmitting everything you say to the hivemind, because brains can't do that.
Boiling the planet just for this... thing. Freaking awful.