[–] AtHeartEngineer@lemmy.world 7 points 2 days ago (2 children)

I haven't seen a way to do that without wrecking the model

[–] Speculater@lemmy.world 6 points 2 days ago (1 children)

KoboldCpp, Hugging Face: grab a model that fits your VRAM in GGUF format. I think it's two clicks once it's downloaded.
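
For concreteness, a minimal sketch of that flow in code rather than clicks, assuming the huggingface_hub and llama-cpp-python packages; the repo and file names are placeholders, not real paths:

```python
# Rough sketch: download a quantized GGUF model from Hugging Face and
# run it locally. Repo and file names below are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Grab a quantized GGUF file small enough to fit in your VRAM.
model_path = hf_hub_download(
    repo_id="someuser/some-model-GGUF",   # placeholder repo id
    filename="some-model.Q4_K_M.gguf",    # placeholder quant file
)

# n_gpu_layers=-1 offloads as many layers to the GPU as will fit.
llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)

out = llm("Q: Name the capital of France. A:", max_tokens=32)
print(out["choices"][0]["text"])
```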

[–] AtHeartEngineer@lemmy.world 6 points 2 days ago (1 children)

I know how to download and run models. What I'm saying is that all the "uncensored" DeepSeek models are abliterated and perform worse
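
For background, "abliterated" models have had a refusal direction ablated out of their weights. A toy numpy sketch of the idea, assuming abliteration amounts to projecting an estimated refusal direction out of a weight matrix (all arrays here are random stand-ins, not a real model):

```python
# Toy illustration of the projection behind abliteration, not a working
# pipeline: estimate a "refusal direction" from activation differences,
# then orthogonalize a weight matrix against it.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                           # toy hidden size

# Stand-ins for hidden states collected on two prompt sets.
acts_refusing = rng.normal(size=(100, d)) + 0.5  # prompts that trigger refusals
acts_harmless = rng.normal(size=(100, d))        # harmless prompts

# Refusal direction: normalized difference of mean activations.
r = acts_refusing.mean(axis=0) - acts_harmless.mean(axis=0)
r /= np.linalg.norm(r)

# Project r out of a weight matrix (W @ x convention), so the layer's
# outputs no longer have any component along r.
W = rng.normal(size=(d, d))                      # stand-in layer weights
W_abliterated = W - np.outer(r, r) @ W

# Sanity check: the output component along r is now ~0 for any input x.
x = rng.normal(size=d)
print(r @ (W_abliterated @ x))                   # ~0.0
```

Whether that projection degrades the model's general capability is an empirical question, which is exactly what's being debated here.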

[–] 474D@lemmy.world -1 points 2 days ago (1 children)

You can do it in LM Studio in like 5 clicks; I'm currently using it.

[–] AtHeartEngineer@lemmy.world 4 points 2 days ago (1 children)

Running an uncensored DeepSeek model that doesn't perform significantly worse than the regular DeepSeek models? I know how to download and run models; I just haven't seen an uncensored DeepSeek model that performs as well as the baseline.

[–] 474D@lemmy.world 1 points 2 days ago (1 children)

I mean, obviously you have to run a lower-parameter model locally. That's not a fault of the model; it's just that you don't have the same computational power.
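
For rough numbers on why: weight memory scales with parameter count times bits per weight, so a 32B model only fits in consumer VRAM once it's heavily quantized. A back-of-the-envelope sketch (the 2 GB overhead allowance is a loose assumption):

```python
# Back-of-the-envelope VRAM estimate: weights plus a loose allowance
# for KV cache and runtime overhead. All figures are approximations.
def approx_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead_gb: float = 2.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

for bits in (16, 8, 4.5):                    # fp16, 8-bit, ~Q4 quants
    print(f"32B @ {bits} bits/weight ≈ {approx_vram_gb(32, bits):.0f} GB")
```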

[–] AtHeartEngineer@lemmy.world 2 points 2 days ago

In both cases I was talking about local models: DeepSeek-R1 32B vs. an equivalent uncensored one from Hugging Face.
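
One way to test that comparison directly, sketched with llama-cpp-python; both file paths are placeholders for whatever GGUF builds you actually have:

```python
# Run the same prompt through the baseline and the abliterated build
# and compare the outputs. Paths below are placeholders.
from llama_cpp import Llama

PROMPT = "Explain in two sentences why the sky is blue."

for path in ("deepseek-r1-32b.Q4_K_M.gguf",               # baseline build
             "deepseek-r1-32b-abliterated.Q4_K_M.gguf"):  # uncensored build
    llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=4096, verbose=False)
    out = llm(PROMPT, max_tokens=128)
    print(f"--- {path} ---")
    print(out["choices"][0]["text"].strip())
```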