this post was submitted on 05 May 2025
113 points (91.2% liked)

Technology

2586 readers
729 users here now

Which posts fit here?

Anything that is at least tangentially connected to the technology, social media platforms, informational technologies and tech policy.


Rules

1. English onlyTitle and associated content has to be in English.
2. Use original linkPost URL should be the original link to the article (even if paywalled) and archived copies left in the body. It allows avoiding duplicate posts when cross-posting.
3. Respectful communicationAll communication has to be respectful of differing opinions, viewpoints, and experiences.
4. InclusivityEveryone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacksAny kind of personal attacks are expressly forbidden. If you can't argue your position without attacking a person's character, you already lost the argument.
6. Off-topic tangentsStay on topic. Keep it relevant.
7. Instance rules may applyIf something is not covered by community rules, but are against lemmy.zip instance rules, they will be enforced.


Companion communities

!globalnews@lemmy.zip
!interestingshare@lemmy.zip


Icon attribution | Banner attribution


If someone is interested in moderating this community, message @brikox@lemmy.zip.

founded 1 year ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] lvxferre@mander.xyz 40 points 1 day ago* (last edited 1 day ago)

I'm not opposed to A"I"; far from that, I actually use text generators a fair bit, sometimes image gens. It's simply a technology and I use it as such. And I still bloody hate how corporations handle it:

  • Always double standards. If you violate their IP, you're a filthy criminal; if they violate yours, you're overreacting and a luddite and harming progress. I want to see copyright gone, but if it is not, then apply it consistently to all sides. (By the way, fuck "Open"A"I" and their Bob Dylan defence.)
  • Always nagging you to use it. If you're nagging me to use something, it's because it's in your best interests that I use it, not mine. No means "no" dammit.
  • Always implicitly lying about its abilities. No, I'm not going to ask it anything where a bullshit answer might ruin my day, stop misleading me to do so.
  • Always downplaying issues. Yeah, nah, I'm not blind to the environmental concerns around training those huge models. Or that corporations - that don't understand what "consent" means - basically DDoS sites to train their models.

But of course they won't talk about this, right? This sort of questionnaire is not made to genuinely obtain feedback; it's made to mislead you.

[–] chicken@lemmy.dbzer0.com 7 points 1 day ago (3 children)

I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

I don't know if I'm understanding this argument right, but the idea that integrating locally run AI is inherently privacy-destroying in the same way as live-service AI doesn't make a lot of sense to me.

[–] lime@feddit.nu 5 points 18 hours ago

think of apple's on-device image scanner ai that flagged people as perverts after they had taken photos of sand dunes.

[–] Umbrias@beehaw.org 2 points 16 hours ago (1 children)

building and centralizing pii is indeed a privacy point of failure. what's not to understand?

[–] chicken@lemmy.dbzer0.com 3 points 13 hours ago* (last edited 12 hours ago) (1 children)

The use of local AI does not imply doing that, especially not the centralizing part. Even if some software does collect and store info locally (which is not inherent to the technology; anything with autosave already qualifies), that is nowhere near as bad privacy-wise as filtering everything through a remote server, especially if there is some guarantee, like being open source, that it won't just start exfiltrating your data.
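The distinction being argued here can be made concrete with a small sketch. Everything below is hypothetical (the function names and the endpoint are illustrative, and the "model" is a trivial stand-in): the point is that with a local model the prompt never crosses the network, while a remote service receives it by design.

```python
import json
import urllib.request


def run_local_model(prompt: str) -> str:
    # Inference happens in-process; the prompt never leaves this machine.
    # Any logging or "memory" would be extra code you could audit.
    return prompt  # stand-in for real local inference


def run_remote_model(prompt: str) -> str:
    # The prompt is disclosed to the operator of the endpoint by design.
    # Whatever their retention policy claims, you can no longer verify it.
    req = urllib.request.Request(
        "https://REMOTE_API.example/v1/complete",  # hypothetical endpoint
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read().decode()
```

Whether either variant builds a profile is a separate design decision layered on top; only the remote one necessarily shares your input with a third party.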

[–] Umbrias@beehaw.org 1 points 11 hours ago (1 children)

I don’t care if your language model is “local-only” and runs on the user’s device. If it can build a profile of the user (regardless of accuracy) through their smartphone usage, that can and will be used against people.

emphasis mine from the text you quoted…

[–] chicken@lemmy.dbzer0.com 1 points 10 hours ago* (last edited 10 hours ago) (1 children)

I don't see how the possibility that it's connected to some software system for profile building is a reason not to care whether a language model is local-only. The way things are worded here makes it sound like this is just an intrinsic part of how LLMs work, but it isn't. The model still just does text prediction; any "memory" features are bolted on.
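The "bolted on" point can be sketched in a few lines (all names here are hypothetical, and `predict` is a trivial stand-in for a model): the predictor itself holds no state, and any profile or memory lives in a separate, optional wrapper that rebuilds context on each call.

```python
def predict(prompt: str) -> str:
    """Stateless stand-in for an LLM: same input, same output, nothing stored."""
    return "echo: " + prompt


class MemoryWrapper:
    """Optional layer that accumulates history; this is where any
    'profile building' would live, not in the model itself."""

    def __init__(self):
        self.history = []  # the only state in the system

    def chat(self, prompt: str) -> str:
        self.history.append(prompt)
        # Context is rebuilt from stored history on every call;
        # delete the wrapper and the "memory" is gone.
        context = " | ".join(self.history)
        return predict(context)
```

Calling `predict` directly leaves nothing behind; only the wrapper accumulates data, and it is separate code that a local-only system can simply omit or let the user clear.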

[–] Umbrias@beehaw.org 1 points 10 hours ago (1 children)

Because these are often sold with profile-building features, Recall for example. Recall is sold as "local only" with profile-building features, so it continues to be centralized pii that is a point of failure. As the quote says, and as I said.

[–] chicken@lemmy.dbzer0.com 1 points 8 hours ago (2 children)

Even with Recall, a hypothetical non-local equivalent would be significantly worse. Whether Microsoft actually has your data or not obviously matters. Most conceivable software that uses local AI wouldn't need any kind of profile building anyway, for instance that Firefox translation feature.

The thing that's frustrating to me here is the lack of acknowledgement that the main privacy problem with AI services is sending all queries to some company's server where they can do whatever they want with them.

[–] Umbrias@beehaw.org 1 points 3 hours ago (1 children)

why do you care that someone didn't say it was sufficiently worse? "x is a problem, if y is true then z is a problem" -> "why didn't you talk about x"

silly.

[–] chicken@lemmy.dbzer0.com 1 points 3 hours ago

What's basically being said is that making AI-powered software local-only doesn't make a difference and doesn't matter. But that's not true, and the arguments for it don't seem coherent.

[–] ReversalHatchery@beehaw.org 1 points 8 hours ago (1 children)

the point is that making it local-only is not significantly better. it does not solve a major problem.

[–] chicken@lemmy.dbzer0.com 1 points 7 hours ago (1 children)

So you don't think collection of user data is a meaningful privacy problem here? How does that work?

[–] ReversalHatchery@beehaw.org 1 points 7 hours ago (1 children)

it is, and that is still happening.

[–] chicken@lemmy.dbzer0.com 1 points 7 hours ago (1 children)

Software that is designed not to send your data over the internet doesn't collect your data. That's what local-only means. If it does send your data over the internet, then it isn't local-only. How is it still happening?

[–] ReversalHatchery@beehaw.org 1 points 7 hours ago (1 children)

it does. it locally aggregates and collects data about what you do on your computer across days and weeks.

[–] chicken@lemmy.dbzer0.com 1 points 6 hours ago* (last edited 6 hours ago)

But the company hasn't collected it, because it doesn't have it. Your computer has it. So long as it stays on your computer, it cannot harm your privacy. That's why there is such a big difference here; an actual massive loss of privacy that is guaranteed to be combined with everyone else's data and used against you, vs a potential risk of loss of privacy from someone gaining unauthorized access to your computer.

[–] knightly@pawb.social 2 points 19 hours ago

Microsoft Recall

[–] Goretantath@lemm.ee 12 points 1 day ago

"Original Character plz do not steal" Sonic but purple with glasses

[–] possiblylinux127@lemmy.zip 6 points 1 day ago (1 children)

That image is kind of off-putting

[–] Sendpicsofsandwiches@sh.itjust.works 10 points 1 day ago (2 children)

Took me a minute to understand everything that was going on with the open ai logo in the thumbnail...
