this post was submitted on 14 Apr 2024
Futurology
OP, you do realize that this paper is about image generation and classification based on related data sets and only relates to the image processing features of multimodal models, right?
How do you see this research as connecting to the future scope of LLMs?
And why do you think that the same leap we've now seen with synthetic data transmitting abstract capabilities in text data won't occur with images (and eventually video)?
Edit: Which LLMs do you see in the models they tested?
I don't see how that paper has anything to do with OP's theory.
I mean, if we're playing devil's advocate to the "WTF is OP talking about" position, I can kind of see the argument: the exponential need for additional training data, combined with the way edge cases are underrepresented in synthetic data (which is what drives model collapse), could be extrapolated into believing we've hit a plateau caused by a training data bottleneck.
In theory there's an inflection point at which models become sophisticated enough to self-sustain by generating their own training data and recursively improving; whether we'll ever hit that point is an open question with arguments on both sides.
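To make that loop concrete, here's a minimal, framework-agnostic sketch of the process being described. Every function name here is a placeholder I'm introducing for illustration, not anyone's actual API:

```python
# Hypothetical sketch of a "self-sustaining" training loop: the current model
# generates candidate data, a filter keeps only high-quality examples, and the
# model is fine-tuned on the survivors. All function names are placeholders.

from typing import Callable, List

def self_improvement_loop(
    model,                                             # whatever object your framework uses
    generate: Callable[[object, int], List[str]],      # model -> synthetic examples
    quality_filter: Callable[[str], bool],             # keeps edge cases, drops junk
    fine_tune: Callable[[object, List[str]], object],  # returns an updated model
    rounds: int = 3,
    samples_per_round: int = 10_000,
):
    for r in range(rounds):
        candidates = generate(model, samples_per_round)
        kept = [ex for ex in candidates if quality_filter(ex)]
        # If the filter can't find enough good examples, the loop is feeding on
        # its own tail -- this is roughly the "model collapse" failure mode.
        if len(kept) < samples_per_round // 10:
            print(f"round {r}: only {len(kept)} usable examples, stopping")
            break
        model = fine_tune(model, kept)
    return model
```

The open question is exactly the one inside the loop: whether the filter keeps surfacing enough novel, high-quality examples each round, or whether the data gets progressively more homogeneous.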
I agree that this paper, relative to the title, isn't exactly the strongest form of the argument, but I can see how someone who only partly understands what it covers could feel it confirms their existing beliefs about where models are now and where they're headed.
The only thing I'll add is that I was just getting a nice laugh out of checking whether Gary Marcus (a common AI skeptic) has ever been right about anything to date, and saw he had a long post about how deep learning was hitting a wall and we were a long way off from LLMs understanding human text...four days before GPT-4 released.
In my experience, while contrarian positions on ongoing research trends can be correct in an "even a broken clock is right twice a day" sense, I personally wouldn't bet on a reversal of a trend whose pacing and replication seem to be accelerating, not decelerating.
In particular regarding OP's claim, the work over the past 18 months on synthetic data sets from GPT-4 giving tiny models significant boosts in critical reasoning skills during fine-tuning should give anyone serious pause about "we're hitting diminishing returns and model collapse."
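For anyone who wants to picture what that looks like mechanically, here's a rough sketch (not any specific paper's code), assuming the Hugging Face transformers/datasets stack; the JSONL file of GPT-4-generated prompt/response pairs and the choice of base model are placeholders:

```python
# Hedged sketch: fine-tune a small causal LM on a synthetic instruction dataset.
# "synthetic_reasoning.jsonl" is a hypothetical local file of {"prompt", "response"}
# pairs produced by a stronger model, not a real published dataset.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-160m"   # any small causal LM would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

data = load_dataset("json", data_files="synthetic_reasoning.jsonl")["train"]

def to_text(example):
    # Concatenate prompt and response into a single training string.
    return {"text": example["prompt"] + "\n" + example["response"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = (data.map(to_text)
                 .map(tokenize, remove_columns=data.column_names + ["text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-model-sft",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The interesting part isn't the training loop itself, which is bog-standard supervised fine-tuning; it's that the data on the left side of it was written by another model rather than scraped from the web.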
That sounds surprising. Do you have a source?