this post was submitted on 26 Oct 2024
1244 points (99.3% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ

[–] chicken@lemmy.dbzer0.com -5 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you're wondering about something and don't know what you don't know, or have no idea where to start looking to learn what you want to know, an LLM is an incredible resource even with its caveats and limitations.

Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and ever easier to work with, so the legal barriers to such a thing existing might not be able to stop it for long in practice.
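What this describes is essentially retrieval-augmented generation. Here's a toy sketch of just the citation half — a plain grep over a local document store, with hypothetical filenames and no actual model call:

```shell
#!/bin/sh
# Toy sketch (hypothetical filenames, no real model): before answering,
# find which local documents actually mention the query terms and
# surface them as citable sources.

docs=$(mktemp -d)
printf 'mitochondria produce ATP via oxidative phosphorylation\n' > "$docs/cell_bio.txt"
printf 'ribosomes translate mRNA into protein\n' > "$docs/translation.txt"

query="ATP"

# grep -l lists only the files containing a match; these become the citations
sources=$(grep -l -i -- "$query" "$docs"/*.txt)

echo "Sources to cite for '$query':"
for f in $sources; do           # mktemp paths contain no spaces
  basename "$f"
done
```

In a real pipeline the matching passages would also be fed to the model as context along with the prompt, and the filenames returned alongside the answer.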

[–] Excrubulent@slrpnk.net 10 points 3 weeks ago* (last edited 3 weeks ago)

The phrase "synthesised expert knowledge" is the problem here, because apparently you don't understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity.

You're not exposing people to expert knowledge, you're exposing them to expert-sounding words that cannot be made accurate. Sometimes they're right by accident, but that is not the same thing as accuracy.

You've confused what the LLM is doing with synthesis, which is something loads of people will do, and this will just lend more undue credibility to its bullshit.

[–] veniasilente@lemm.ee 5 points 3 weeks ago

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents.

If any of that was actually true, yeah. But it's not, it can't be, and it won't be.

As with all world-changing technology, "the general public" will never truly obtain its power, not until it has been well squeezed by the elites for gains. Not only that, but "the general public" obtaining this power would be devastating on the simple physical principle that this kind of technology depends on ruining the ecology. And this whole "synthesized expert knowledge"... man, those are three words that mean absolutely nothing when chained together, because it's all illusion: it's not actual knowledge, it's not expert, and it's not even synthesized; at best it's emulated. It's all a tangle of lies and make-believe sold in bulk with zero accountability.

But sure, nice dream. I want a Lamborghini, too.

[–] Ashelyn@lemmy.blahaj.zone 3 points 3 weeks ago* (last edited 3 weeks ago)

People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.

Don't get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn't know where to start. My concern is with the cultural issues and expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it's possible to shoehorn through.

Addendum: local models can help with this issue, as they're on one's own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.

[–] Auli@lemmy.ca 2 points 3 weeks ago

Man, the amount of work a bash script needs from an LLM, and that's a pretty basic thing. Did it speed up the process? I think it did, though I'm not actually sure. Did it make things easier? Yes. Did I need some idea of what it was doing? Also yes.
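A hypothetical example of the kind of fix an LLM-drafted bash script typically needs — unquoted variable expansions that break on paths containing spaces:

```shell
#!/bin/sh
# Hypothetical example: word splitting, a classic bug in generated
# shell scripts. Unquoted $1 splits on whitespace; quoted "$1" does not.

count_unquoted() { set -- $1; echo "$#"; }   # buggy: splits its argument
count_quoted()   { set -- "$1"; echo "$#"; } # fixed: one word, as intended

path="my backups"                # a path with a space in it
count_unquoted "$path"           # prints 2: split into 'my' and 'backups'
count_quoted "$path"             # prints 1
```

This is exactly the kind of thing you need "some idea of what it was doing" to catch, since the buggy version works fine until a space shows up in a filename.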