While LLMs have been used for... a lot, this seems like a use where they're not only reliable but appear to outperform existing image compression methods. Being able to cram more data into less space tends to lead to interesting developments, so I'll be keeping my eye on this.
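For anyone wondering how a language model ends up being a compressor at all: the usual trick is to feed the model's next-symbol probabilities into an arithmetic coder, so well-predicted symbols cost almost no bits. Here's a toy sketch of that idea (my own illustration, not code from the paper), with a simple order-1 byte predictor standing in for the LLM:

```python
import math
from collections import defaultdict

def ideal_code_length_bits(data: bytes) -> float:
    """Compressed size (in bits) an arithmetic coder would achieve when
    driven by a simple order-1 byte predictor. A stronger predictor
    (e.g. an LLM over byte tokens) assigns higher probabilities to the
    actual data and so yields a shorter code; that's the whole trick."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> next byte -> count
    total_bits = 0.0
    prev = None
    for b in data:
        ctx = counts[prev]
        seen = sum(ctx.values())
        # Laplace-smoothed probability of this byte given the previous one.
        p = (ctx[b] + 1) / (seen + 256)
        total_bits += -math.log2(p)  # arithmetic coding gets within ~2 bits of this total
        ctx[b] += 1                  # update the model online (a decoder mirrors this)
        prev = b
    return total_bits

sample = b"abracadabra " * 200
print(f"raw: {len(sample) * 8} bits, modelled: {ideal_code_length_bits(sample):.0f} bits")
```

The compressed length is (to within a couple of bits) just the model's log-loss on the data, which is why a better predictor can beat purpose-built codecs.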

What do you guys think? Does it seem like it's deserving of less hype than I'm giving it? What kind of security holes do you think this could open?

[–] astraeus@programming.dev 16 points 1 year ago (2 children)

Seems like another “hey, what if we used LLMs for this” scenario. It might be more effective, but exactly how many more resources are being used to make it do the same work as current compression algorithms? Effective doesn’t mean efficient, and I think for lossless applications efficiency truly matters more.

[–] Butterbee@beehaw.org 11 points 1 year ago (1 children)

A LOT. You can barely run 13B-parameter models on a 24 GB graphics card, and outputs are like a page or so of text. Translate that over to audio and it would have to be broken down into discrete chunks that the model could use as "prompts" to output a section of audio that fit into the model's available output. It might compress better, but it would be exceedingly painful and slow to extract even on AI-focused cards. And it would use OODLES of watts to get just a little bit better than FLAC.
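To make the shape of that concrete, here's a sketch of what extraction would look like, with an invented `model_decode` standing in for a full autoregressive inference pass (purely hypothetical, just to show why it's serial and slow):

```python
def model_decode(prompt: bytes) -> bytes:
    """Stand-in for one full autoregressive model pass (the expensive part)."""
    return prompt * 4  # pretend each compact prompt expands into 4x the audio

def decompress(chunks: list[bytes]) -> bytes:
    audio = bytearray()
    for prompt in chunks:              # strictly one-after-another: every chunk
        audio += model_decode(prompt)  # of audio costs an entire model inference
    return bytes(audio)

print(len(decompress([b"abcd", b"efgh"])))  # 32 bytes out from 8 bytes in
```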

[–] abhibeckert@beehaw.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

13B parameters works out to about 9 GB. You need a bit more than that, since it needs more than just the model in memory, but at 24 GB I'd expect at least half of it to go unused. And memory doesn't use much power at all, by the way: LPDDR4 uses something like 0.3 watts while actively reading from/writing to it.
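That ~9 GB figure lines up with a roughly 5-to-6-bit quantisation of the weights. Quick back-of-the-envelope (assuming the weights dominate the footprint, which ignores activations and cache):

```python
PARAMS = 13e9  # 13B parameters
for name, bits in [("fp16", 16), ("int8", 8), ("~5.5-bit quant", 5.5), ("4-bit quant", 4)]:
    print(f"{name:>15}: ~{PARAMS * bits / 8 / 1e9:.1f} GB")
# fp16 ~26.0 GB, int8 ~13.0 GB, ~5.5-bit ~8.9 GB, 4-bit ~6.5 GB
```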

The actual computations use more, obviously, but graphics cards aren't designed for this task; while they're fast, most of them are also horribly inefficient.

I run 13B-parameter models on my ultra-portable laptop (small battery, fanless, no discrete GPU). It has 16 GB of RAM, not GPU memory, and I'm running a full operating system, web browsers, etc. at the same time. Models like Llama 2, Stable Diffusion, etc. get perfectly usable performance without using much battery at all (at a guess, single-digit watts while performing the calculations).

There is efficient hardware now and there will be even more efficient hardware in the future. My laptop definitely isn't designed to run these models and on top of that the models aren't designed to run on a laptop either. There's plenty more optimisation work to be done in the years to come.

[–] Butterbee@beehaw.org 1 points 1 year ago

Ok, it's been a while since I tried running a language model, so I might have been thinking of the 30B models that were showing up at the time. The point remains, though, that the setup they were running would be well beyond generally available hardware and completely impractical for real-time use. Like... why would you do all that when FLAC and PNG are good enough? It's far cheaper and uses less power to accommodate the slightly less compressed files.

[–] christophski@feddit.uk 10 points 1 year ago (2 children)

Ok but what if we used LLMs AND blockchain for this

[–] ezures@lemmy.wtf 4 points 1 year ago (1 children)

I'm sure we can squeeze an NFT in there somewhere

[–] christophski@feddit.uk 4 points 1 year ago

S m a r t c o n t r a c t s

[–] astraeus@programming.dev 4 points 1 year ago (1 children)

Our company has been looking for a brilliant innovator like you. How would you like to apply for a new position called professional cool-sounding tech peddler, I mean, director of creative technology?

[–] christophski@feddit.uk 3 points 1 year ago (1 children)

I want 200k and 30% of the company

[–] astraeus@programming.dev 2 points 1 year ago

Best I can do is $125k and $300k in company stock over 4 years