this post was submitted on 11 Jun 2024
94 points (100.0% liked)

technology


The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

[–] lurkerlady@hexbear.net 35 points 5 months ago* (last edited 5 months ago) (2 children)

This is accurate, though I'm actually going to explain why. These big model companies (Google, ClosedAI, etc.) parasitize the open-weights/open-source community that actually produces the good LoRAs, fine-tunes, and research papers. Consumer hardware simply hasn't gotten good and cheap enough for really good fine-tune training, and that's why this is all slowly petering out. In a couple of generations of consumer GPUs, which is when we'll get consumer GPUs geared towards AI (i.e. super high VRAM counts of 70 GB+ at an affordable sub-$700 price), we might see another leap forward in this tech.

Though I will say that this mostly pertains to LLMs; generative AI models like Stable Diffusion have a lot of tricks up their sleeves that can still be explored. Most recent research and tweaking has been about building a structure for the AI to build on, guiding it rather than letting it take random stabs at things, in order to improve outputs. Some people have been doing things like hard-coding color theory or how to frame a photograph, and interpreting human language to trigger that hard code.
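To put some rough numbers on why VRAM is the wall (back-of-the-envelope rules of thumb, not measurements from any specific setup):

```python
# Rough VRAM estimates for fine-tuning a 13B-parameter model.
# Standard rules of thumb only; activations, KV-cache and framework
# overhead are ignored, so real numbers land somewhat higher.

PARAMS = 13e9
GB = 1024**3

# Full fine-tune, mixed precision with Adam:
# fp16 weights (2) + fp16 grads (2) + fp32 master weights (4)
# + Adam moments (4 + 4) = ~16 bytes per parameter.
full_ft = PARAMS * 16 / GB

# LoRA: base weights frozen in fp16 (2 bytes/param); only the small
# adapter matrices get gradients/optimizer state (negligible here).
lora = PARAMS * 2 / GB

# QLoRA-style: base weights quantized to ~4 bits (~0.5 bytes/param).
qlora = PARAMS * 0.5 / GB

print(f"full fine-tune ~{full_ft:.0f} GB, LoRA ~{lora:.0f} GB, QLoRA ~{qlora:.0f} GB")
# -> roughly 194 GB, 24 GB, 6 GB
```

Which is why a hypothetical 70 GB+ consumer card changes what hobbyists can train at home, not just how fast.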

We've had statistical models like these since the 50s. Consumer hardware has always been the big materialist bottleneck; this is all powered by small research teams and hobbyist nerds. You can throw a ton of money at it and have a giant research team, but the performance you squeeze out of adding 400B more parameters to your 13B model, or out of a gigantic locked-down datacenter, is going to be diminishing.
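For a sense of what "diminishing" looks like, here's a quick sketch using the Chinchilla-style scaling law L(N, D) = E + A/N^α + B/D^β; the constants are the commonly cited fits from Hoffmann et al. (2022), and the absolute numbers are only illustrative:

```python
# Scaling-law sketch: loss as a function of parameter count N and
# training tokens D. The point is the shape, not the exact values.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

D = 1e12  # hold the data budget fixed at ~1T tokens
for n in (13e9, 70e9, 400e9):
    print(f"{n/1e9:>5.0f}B params -> predicted loss {loss(n, D):.3f}")
```

Going from 13B to 70B buys you more than going from 70B to 400B, and with the data term fixed, piling on parameters alone runs into a floor.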

Also, synthetic data can be useful. People are hating on it in this thread, but it's a great way to reinforce good habits in the AI and to make sense of garbled code and speech that would otherwise confuse it. I sometimes feel like people just see something that says 'AI bad' and upvote it without trying to understand where it is useful and where it is not.

[–] bazingabrain@hexbear.net 13 points 5 months ago (2 children)

I fail to see how synthetic data is good if it makes the AI that's used to justify job cuts "better".

[–] frauddogg@lemmygrad.ml 13 points 5 months ago* (last edited 5 months ago) (2 children)

That's where I'm at. Sure, there might be moderately-beneficial use-cases, maybe; but it doesn't change the fact that there's no such thing as an ethically-trained model, and there's still no such thing as a model that wasn't created based on rampant theft by capitalists, so I consider anything that comes of it fruit of the poison tree.

AI bad until the base that comprises it radically changes, across the board.

[–] lurkerlady@hexbear.net 11 points 5 months ago* (last edited 5 months ago)

> Sure, there might be moderately-beneficial use-cases, maybe; but it doesn't change the fact that there's no such thing as an ethically-trained model, and there's still no such thing as a model that wasn't created based on rampant theft by capitalists, so I consider anything that comes of it fruit of the poison tree.

I mean, that's just the case with everything, really. There are a lot of very good use cases that mostly have to do with data manipulation, but the coolest one is translation. I think we're approaching a point where small models are providing very accurate translations and are even carrying tone and intent across properly, which is far superior to simple dictionary-lookup translation. I think it's very possible that new phones could be outfitted with tensor cores, and you could have a real-time universal translator in your hand, though it'll likely only add 'subtitles' IRL for you. AI speech recognition has also gotten very good and can be miniaturized. This is the use case I'm most excited for, personally, as a communist. Currently, translating in a foreign country requires a lot of typing (if you don't have a perfect grasp of the language), and I feel it removes a very human element from conversation. If everyone could locally run a subtitle-translation app, it'd be amazing for all of humanity.
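A minimal sketch of what that kind of local subtitle pipeline could look like, using Hugging Face `transformers` pipelines (the specific checkpoints and the audio file are just assumptions for illustration):

```python
# Sketch of a local "live subtitles" pipeline: speech -> text -> translation.
# Both models can run fully offline once the weights are downloaded; swap in
# whatever checkpoints fit your hardware and language pair.
from transformers import pipeline

# Speech-to-text (small multilingual Whisper checkpoint).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Machine translation (Spanish -> English as an example pair).
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

def subtitle(audio_path: str) -> str:
    text = asr(audio_path)["text"]
    return translate(text)[0]["translation_text"]

print(subtitle("clip.wav"))  # hypothetical audio clip
```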

There are of course plenty of manufacturing use cases as well, but China is spearheading those, though there is some work being done in the US too in the few industries that remain.

[–] bazingabrain@hexbear.net 10 points 5 months ago

> AI bad until the base that comprises it radically changes, across the board.

Which won't happen, hence why I and 650k others moved to Cara and gave Meta the finger.

[–] lurkerlady@hexbear.net 9 points 5 months ago* (last edited 5 months ago)

Synthetic data is basically a fancy way of saying 'I'm properly formatting data and reinforcing the AI's good outputs': rearranging words, fixing or adding tags, that sort of thing. It's generated with various tools that usually have an LLM or VLM plugged in, though some are as simple as a regex script.
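For a toy example of the regex-script end of that spectrum (entirely made up, just to show the shape of it):

```python
import re

# Toy "synthetic data" cleanup for image-caption / tag datasets:
# normalize whitespace and case, fix known-bad tags, drop duplicates.
# Real pipelines chain many rules like this, often with an LLM/VLM pass on top.

FIXES = {"photorealisic": "photorealistic"}  # known typos -> corrections

def clean_tags(raw: str) -> str:
    tags = [t.strip().lower() for t in re.split(r"[,\n]+", raw) if t.strip()]
    tags = [FIXES.get(t, t) for t in tags]
    seen, out = set(), []
    for t in tags:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return ", ".join(out)

print(clean_tags("Photorealisic,  portrait,\nportrait, color theory"))
# -> "photorealistic, portrait, color theory"
```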

[–] MacNCheezus 3 points 5 months ago

Better hardware isn't going to change anything except scale if the underlying approach stays the same. LLMs are not intelligent; they're just guessing a bunch of words that are statistically most likely to satisfy the user's request, based on their training data. They don't actually understand what they're saying.
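Stripped down to a toy sketch, the whole inference loop really is just that statistical guess, repeated (the "model" below is a stand-in function, not any particular architecture):

```python
import numpy as np

# Minimal picture of LLM inference: given the tokens so far, the model scores
# every word in the vocabulary; we turn the scores into probabilities and
# sample the next token, then repeat. A real LLM just makes this distribution
# very good; the loop itself stays this simple.

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_model(context):
    # Placeholder for a transformer forward pass: one logit per vocab entry.
    return rng.normal(size=len(VOCAB))

def softmax(logits, temperature=1.0):
    z = np.exp((logits - logits.max()) / temperature)
    return z / z.sum()

tokens = ["the"]
for _ in range(5):
    probs = softmax(fake_model(tokens))
    tokens.append(rng.choice(VOCAB, p=probs))
print(" ".join(tokens))
```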