this post was submitted on 06 Jan 2025
775 points (99.9% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] self@awful.systems 24 points 3 days ago (3 children)

both those are related to information theory, but there are other things I legally can’t mention. shrug.

hahahaha fuck off with this. no, the horseshit you’re fetishizing doesn’t fix LLMs. here’s what quantization gets you:

  • the LLM runs on shittier hardware
  • the LLM works worse too
  • that last one’s kinda bad when the technology already works like shit
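the tradeoff in that list is easy to see with a toy sketch. this assumes the simplest possible scheme (symmetric round-to-nearest uniform quantization on random "weights") — real LLM quantizers are considerably fancier, but the direction of the error is the same:

```python
import numpy as np

# Toy illustration of uniform weight quantization. Assumed scheme:
# symmetric round-to-nearest onto 2**bits evenly spaced levels.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=1000).astype(np.float32)

def quantize(w, bits):
    # Snap each weight to the nearest representable level, then map back.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

for bits in (8, 4, 3):
    err = np.abs(weights - quantize(weights, bits)).mean()
    print(f"{bits}-bit mean abs error: {err:.6f}")
```

fewer bits, less memory, bigger reconstruction error. that's the whole deal.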

anyway speaking of basic information theory:

but the research showing that 3 bits is as good as 64 is intuitive once you tie the original inspiration for some of the AI designs.

lol

[–] khalid_salad@awful.systems 7 points 2 days ago (1 children)

It's actually super easy to increase the accuracy of LLMs.

import torch  # or ollama or however you fucking dorks use this nonsense
from decimal import Decimal

I left out all the other details because it's pretty intuitive why it works if you understand why floats have precision issues.
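for anyone who missed the setup of the joke: the float behavior being invoked is real stdlib behavior (and has exactly nothing to do with LLM accuracy):

```python
from decimal import Decimal

# Classic binary floating-point artifact: 0.1 and 0.2 have no exact
# base-2 representation, so their sum isn't exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal, constructed from strings, does exact base-10 arithmetic.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```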

[–] froztbyte@awful.systems 5 points 2 days ago

decimal is a severely underappreciated library

[–] eestileib@sh.itjust.works 12 points 3 days ago* (last edited 3 days ago)

Honestly, the research showing that a schlong that's 3mm wide is just as satisfying as one that's 64 is intuitive once you tie the original inspiration for some of the sex positions.

[–] killingspark@feddit.org 11 points 3 days ago (2 children)

I have seen these 3-bit AI papers on Hacker News a few times. And the takeaway apparently is: the current models are pretty shitty at what we want them to do, and we can reach a similar (but slightly worse) level of shittiness with 3 bits.

But that doesn't say anything about how both technologies could progress in the future. I guess you can compensate for having only three bits to pass between nodes by just having more nodes. But that doesn't really seem helpful, neither for storage nor compute.

Anyways, yeah, it strikes me as the kind of trend that maybe has an application in a very specific niche but is likely bullshit when applied to the general case

[–] BlueMonday1984@awful.systems 3 points 2 days ago

Far as I can tell, the only real benefit here is significant energy savings, which would take LLMs from "useless waste of a shitload of power" to "useless waste of power".

[–] V0ldek@awful.systems 12 points 3 days ago (1 children)

If anything that sounds like an indictment? Like, the current models are so incredibly fucking bad that we could achieve the same with three bits and a ham sandwich

[–] killingspark@feddit.org 5 points 3 days ago

Oh it definitely says something about the current models for sure