this post was submitted on 29 May 2025
158 points (90.7% liked)

Technology
top 22 comments
[–] LodeMike 42 points 3 days ago

So can a lot of other models.

"This load can be towed by a single vehicle"

[–] blarth@thelemmy.club 4 points 3 days ago (4 children)
[–] vhstape@lemmy.sdf.org 27 points 3 days ago (2 children)

the Chinese AI lab also released a smaller, “distilled” version of its new R1, DeepSeek-R1-0528-Qwen3-8B, that DeepSeek claims beats comparably sized models on certain benchmarks

Most models come in 1B, 7-8B, 12-14B, and 27+B parameter variants. According to the docs, they benchmarked the 8B model on an NVIDIA H20 (96 GB VRAM) and got between 144 and 1,198 tokens/sec. Most consumer GPUs probably aren't going to be able to keep up with that.
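
For a rough sense of why the parameter count is what matters on consumer cards: the weights alone cost about 2 bytes per parameter at FP16, 1 byte at 8-bit, and half a byte at 4-bit. A back-of-the-envelope sketch (the ~8B figure and the formats are illustrative assumptions, and KV cache plus runtime overhead come on top):

```python
# Rough VRAM estimate for model weights alone (ignores KV cache and runtime overhead).
PARAMS = 8e9  # assume roughly 8 billion parameters for the 8B distill

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "FP8/INT8": 1.0,
    "4-bit (Q4 / AWQ)": 0.5,
}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{fmt:>18}: ~{gib:.1f} GiB of VRAM for weights")
```

That works out to roughly 15 GiB at FP16, 7.5 GiB at 8-bit, and under 4 GiB at 4-bit, which is why quantization decides whether a 24 GB (or 12 GB) card is even in the running.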

[–] brucethemoose@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

Depends on the quantization.

7B is small enough to run in FP8 or a Marlin quant with SGLang/vLLM/TensorRT, so you can probably get very close to the H20 on a 3090 or 4090 (or even a 3060) if you know a little Docker.
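
A minimal sketch of what "FP8 with vLLM" could look like, assuming vLLM is installed and the Hugging Face model ID below is correct; the sampling settings and context length are arbitrary, and on an Ampere card like a 3090 the FP8 path typically falls back to weight-only (Marlin) kernels:

```python
# Minimal vLLM sketch: load the 8B distill with FP8 weight quantization.
# Assumes `pip install vllm` and a GPU with enough VRAM (e.g. a 3090/4090).
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",  # HF model ID (check the exact name)
    quantization="fp8",    # weight quantization; drop this to run in BF16
    max_model_len=8192,    # keep the KV cache small enough for 24 GB cards
)

params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["Prove that sqrt(2) is irrational."], params)
print(outputs[0].outputs[0].text)
```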

[–] avidamoeba@lemmy.ca 7 points 3 days ago (1 children)

It proved sqrt(2) irrational at 40 tps on a 3090 here. The 32B R1 did it at 32 tps, but it thought for a lot longer.
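
For reference, the classic argument these models are being asked to reproduce, sketched in LaTeX:

```latex
\begin{proof}
Suppose $\sqrt{2} = p/q$ with $p, q \in \mathbb{Z}$, $q \neq 0$, and $\gcd(p, q) = 1$.
Then $2q^2 = p^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Substituting gives $2q^2 = 4k^2$, i.e. $q^2 = 2k^2$, so $q$ is even as well.
This contradicts $\gcd(p, q) = 1$, so $\sqrt{2}$ is irrational.
\end{proof}
```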

[–] vhstape@lemmy.sdf.org 2 points 3 days ago* (last edited 3 days ago)

On my Mac mini running LM Studio, it managed 1702 tokens at 17.19 tok/sec and thought for 1 minute. If accurate, high-performance models could run on consumer hardware more easily, I would use my 3060 as a dedicated inference device.
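
As a sanity check on figures like these, wall-clock generation time is just token count divided by throughput; a trivial sketch using the numbers above:

```python
# Quick arithmetic: wall-clock generation time from token count and throughput.
tokens = 1702        # total tokens generated in the run described above
throughput = 17.19   # tokens per second reported by LM Studio

seconds = tokens / throughput
print(f"{tokens} tokens at {throughput} tok/s -> ~{seconds:.0f} s (~{seconds/60:.1f} min)")
```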

[–] LainTrain@lemmy.dbzer0.com 6 points 2 days ago

I'm genuinely curious what you do that makes a 7B model "trash" to you. Sure, GPT now tends to beat out Mistral 7B, but I'm pretty happy with my Mistral most of the time, if I even need AI at all.

[–] knighthawk0811@lemmy.world 8 points 3 days ago

It's distilled, so it's going to be smaller than any non-distilled model of the same quality.

[–] TropicalDingdong@lemmy.world 4 points 3 days ago (1 children)

Yeah, I don't know. I did some work with DeepSeek early on. I wasn't impressed.

HOWEVER...

Some other things they've developed, like DeepSite, holy shit, impressive.

[–] double_quack@lemm.ee 2 points 2 days ago (1 children)

Save me the search, please. What's DeepSite?

[–] TropicalDingdong@lemmy.world 6 points 2 days ago* (last edited 2 days ago) (1 children)

https://tmpweb.net/nmS9uRBAENhQ/

Above is what I can do with DeepSite by pasting in the first page of your Lemmy profile and the prompt:

"This is double_quack, a lemmy user on Lemmy, a new social media platform. Create a cool profile page in a style that they'll like based on the front page of their lemmy account (pasted in a ctrl + a, ctrl + c, ctrl + v of your profile)."

It's not perfect by any stretch of the imagination, but it's not a bad starting point.

If you want to try it: https://huggingface.co/spaces/enzostvs/deepsite

[–] double_quack@lemm.ee 3 points 2 days ago (1 children)

Excuse me... what? Ok, that's something...

[–] TropicalDingdong@lemmy.world 2 points 2 days ago (2 children)

Here, I'm DMing you something. It's very personal, but I want to share it with you, and I made it using DeepSite (in part).