this post was submitted on 26 Jul 2024
230 points (96.7% liked)

science

[–] metaStatic@kbin.earth 33 points 1 month ago (2 children)

we have to be very careful about what ends up in our training data

Don't worry, the big tech companies took a snapshot of the internet before it was poisoned, so they can easily profit from LLMs without allowing competitors into the market. That's who "we" is, right?

[–] WhatAmLemmy@lemmy.world 19 points 1 month ago* (last edited 1 month ago)

It's impossible for any of them to have taken a sufficient snapshot. A snapshot of all unique data on the clearnet would probably have been on the scale of hundreds to thousands of exabytes, which is (apparently) more storage than any single cloud provider has.

And that's before considering the prohibitively expensive cost of processing all that data for any single model.
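A rough back-of-envelope sketch of the scale gap, using purely illustrative numbers (the snapshot bounds follow the "hundreds to thousands of exabytes" guess above; the single-provider capacity of ~10 EB is an assumption, not a measured figure):

```python
# Back-of-envelope: size of a hypothetical full clearnet snapshot
# versus one cloud provider's storage. All figures are assumptions.
EXABYTE = 10**18  # bytes

snapshot_low = 100 * EXABYTE     # assumed lower bound: hundreds of EB
snapshot_high = 5000 * EXABYTE   # assumed upper bound: thousands of EB

# Assumed usable capacity of a single large cloud provider (~10 EB).
provider_capacity = 10 * EXABYTE

print(snapshot_low / provider_capacity)   # 10.0  (times one provider)
print(snapshot_high / provider_capacity)  # 500.0
```

Even at the optimistic lower bound, the snapshot would dwarf what any one provider could plausibly dedicate to it.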

The reality is that, just as with the natural world, they're polluting and corrupting the internet without having taken a sufficient snapshot. And just like the natural world, everything that's lost is lost FOREVER... all in the name of short-term profit!

[–] veganpizza69@lemmy.world 2 points 1 month ago

The retroactive enclosure of the digital commons.