
Privacy Guides


In the digital age, protecting your personal information might seem like an impossible task. We’re here to help.

This is a community for sharing news about privacy, posting information about cool privacy tools and services, and getting advice about your privacy journey.


You can subscribe to this community from any Kbin or Lemmy instance.


Check out our website at privacyguides.org before asking your questions here. We've tried to answer the most common questions and list our recommendations there!

Want to get involved? The website is open-source on GitHub, and your help would be appreciated!


This community is the "official" Privacy Guides community on Lemmy, which can be verified here. Other "Privacy Guides" communities on other Lemmy servers are not moderated by this team or associated with the website.


Moderation Rules:

  1. We prefer posting about open-source software whenever possible.
  2. This is not the place for self-promotion if you are not listed on privacyguides.org. If you want to be listed, make a suggestion on our forum first.
  3. No soliciting engagement: Don't ask for upvotes, follows, etc.
  4. Surveys, Fundraising, and Petitions must be pre-approved by the mod team.
  5. Be civil: no violence or hate speech. Assume people here are posting in good faith.
  6. Don't repost topics which have already been covered here.
  7. News posts must be related to privacy and security, and your post title must match the article headline exactly. Do not editorialize titles; you can post your opinions in the post body or a comment.
  8. Memes/images/video posts that could be summarized as text explanations should not be posted. Infographics and conference talks from reputable sources are acceptable.
  9. No help vampires: This is not a tech support subreddit, don't abuse our community's willingness to help. Questions related to privacy, security or privacy/security related software and their configurations are acceptable.
  10. No misinformation: Extraordinary claims must be matched with evidence.
  11. Do not post about VPNs or cryptocurrencies which are not listed on privacyguides.org. See Rule 2 for info on adding new recommendations to the website.
  12. General guides or software lists are not permitted. Original sources and research about specific topics are allowed as long as they are high quality and factual. We are not providing a platform for poorly-vetted, out-of-date or conflicting recommendations.

[–] RedstoneValley@sh.itjust.works 28 points 1 year ago* (last edited 1 year ago) (2 children)

"public" does not mean you're allowed to steal it and republish it as a work of your own. There are things like copyright and stuff

[–] cooljacob204@kbin.social 15 points 1 year ago (3 children)

“public” does not mean you’re allowed to steal it and republish it as a work of your own

That is not what they or LLMs do. And while the morality of it is questionable, acting as if they are straight-up stealing and republishing work hurts having a serious discussion about it.

[–] Kichae@kbin.social 19 points 1 year ago (1 children)

LLMs build statistical distributions of words and phrases from ingested data, and then sample from those distributions conditioned on the preceding text.
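
As a rough illustration of what that means, here is a minimal sketch: a toy bigram model in Python, nothing like a production LLM and purely hypothetical. The point is that the "model" is nothing more than statistics computed from the ingested text:

```python
# Toy sketch: build conditional word distributions from ingested text
# and sample from them. Not how real LLMs work, but the same idea in
# miniature: the model is just statistics over the training text.
import random
from collections import defaultdict, Counter

def train(corpus: str) -> dict:
    """Count which word follows which (a bigram distribution)."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample(counts: dict, start: str, length: int = 10) -> str:
    """Sample each next word conditioned on the previous one."""
    out = [start]
    for _ in range(length):
        dist = counts.get(out[-1])
        if not dist:
            break
        words, weights = zip(*dist.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

counts = train("the cat sat on the mat and the cat slept on the mat")
print(sample(counts, "the"))  # e.g. "the cat slept on the mat and the cat sat"
```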

Why should for-profit companies have the right to create these statistical distributions based on our written works without consent? They're not publishing these distributions, and the purpose of ingesting these texts is not to report on the distributions.

They're just bottom-trawling the internet and acting as if they have every right to use other people's written works. While people are having "serious discussions" around it, they're moving forward, ignoring the discussions entirely, and trying to force the conclusion of those discussions to be "well, it's too late now, anyway".

[–] Even_Adder@lemmy.dbzer0.com 6 points 1 year ago* (last edited 1 year ago)

Original analysis of public data is not stealing. If it were, it would gut fair use and hand corporations a monopoly on a public technology. They already have their own datasets, and the money to buy licenses for more. Regular consumers, who could have had access to a corporate-independent tool for creativity and social mobility, would instead be left worse off, with fewer rights than where they started.

[–] pjhenry1216@kbin.social 9 points 1 year ago (1 children)

You should have the discussions first, not after you've already profited off someone else's work. If the argument is supposed to be about whether they can use the data or not, then harvesting it first is absolutely harmful to the discussion you claim is important. You can't accuse one side of arguing in bad faith when the other side is already, objectively, acting in bad faith, if we are to assume the discussion is real.

[–] cooljacob204@kbin.social 8 points 1 year ago* (last edited 1 year ago)

You should have the discussions first.

But we are way past that. And legally, while they are walking a thin line, it seems that the LLM companies are going to win the legal challenges.

I don't think stopping or slowing LLM development is going to work, because then more questionable countries who really don't give a fuck about IP will pull ahead.

If you want my honest opinion, I don't think these LLM companies are stealing, and at the same time I do think artists are getting the shit end of the stick. We are heading towards an AI dystopia, and I think the way to address it is through more solid social welfare programs instead of fights about IP. While artists are the focus, this AI revolution is coming for all labor. Artists are unfortunately just the first ones being impacted by it.

I think people should stop fighting about the minor things and instead prep for the inevitable unemployment this will bring. LLMs are really just the tip of the iceberg.

Yeah, you're right that it is different from simply stealing content. However, the LLMs still use protected material as input, and it seems that at least parts of those works can be uniquely identified in the output. That can be considered problematic, even if the data is deconstructed into embeddings between input and output.
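
To make that concern concrete, here is a hypothetical, naive way to check whether generated text reproduces verbatim spans of a source work. The function and the example strings are made up for illustration; real memorization audits are far more involved than exact n-gram matching:

```python
# Naive illustration: find n-word sequences that a model's output shares
# verbatim with a source text. The toy strings below stand in for a
# protected work and for model output; this is not a real audit tool.
def verbatim_spans(source: str, output: str, n: int = 5) -> set:
    """Return every n-word sequence that appears in both texts."""
    def ngrams(text):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

source = "it was the best of times it was the worst of times"
output = "the model wrote that it was the best of times it was not"
print(verbatim_spans(source, output))
# e.g. {'it was the best of', 'was the best of times', ...}
```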

Thank you. I hadn't thought about copyright until now. This is indeed something that needs to be addressed.

Although I personally still don't have much of a problem with that. I think copyright laws are highly debatable.