[–] Linktank 13 points 6 days ago

Okay, can somebody who knows about this stuff please explain what the hell a "token per second" means?

[–] IndeterminateName@beehaw.org 29 points 6 days ago

A bit like a syllable when you're talking about text-based responses. 20 tokens a second is faster than most people can read the output, so that's sufficient for a real-time-feeling "chat".
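
For a rough sense of scale, here's the arithmetic. The ~0.75-words-per-token figure and the 200-300 wpm reading speed are common rules of thumb, not numbers from this thread:

```python
# Back-of-the-envelope comparison of generation speed vs. reading speed.
# Assumes the common heuristic of ~0.75 English words per token; the real
# ratio varies by tokenizer and by text.
tokens_per_second = 20
words_per_token = 0.75

words_per_minute = tokens_per_second * words_per_token * 60
print(f"{words_per_minute:.0f} words per minute generated")  # 900 wpm

# Typical adult reading speed is roughly 200-300 wpm, so 20 tokens/s
# outpaces most readers by a factor of 3-4.
print(f"{words_per_minute / 250:.1f}x a 250 wpm reader")     # ~3.6x
```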

[–] SteevyT@beehaw.org 2 points 6 days ago

Huh, yeah, that actually is above my reading speed, assuming 1 token = 1 word. Although I've found that anything above 100 words per minute, while slow to read, feels real-time to me, since that's about the absolute top end of what most people type.

[–] fluffykittycat@slrpnk.net 16 points 6 days ago

It's the generation speed. Internally, LLMs use tokens, which represent words or parts of words, and map them to integer values. The model then makes its prediction about which integer is most likely to come after the input. How the words are split up is an implementation detail that can vary from model to model.
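
To see what that text-to-integer mapping looks like in practice, here's a minimal sketch using OpenAI's tiktoken library. That's just my pick for illustration; the thread doesn't name any particular tokenizer, and other models split text differently:

```python
# Minimal demo of the text <-> integer-token mapping, using tiktoken
# as one concrete example (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization maps text to integers."
token_ids = enc.encode(text)    # text -> list of integer token IDs
print(token_ids)                # short common words are often one token;
                                # rarer words get split into several
print(f"{len(text.split())} words became {len(token_ids)} tokens")
print(enc.decode(token_ids))    # integers -> the original text, losslessly
```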

[–] IrritableOcelot@beehaw.org 11 points 6 days ago

Not somebody who knows a lot about this stuff, as I'm a bit of an AI Luddite, but I know just enough to answer this!

"Tokens" are essentially just a unit of work -- instead of interacting directly with the user's input, the model first "tokenizes" the user's input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.

I think tokens are used because most models use them, and use them in a similar way, so they're the lowest-level common unit of work that lets you compare across devices and models.
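
Putting the comments above together, the whole pipeline can be sketched as a loop. Everything below is a toy stand-in (a character-level tokenizer and a fake "model" that replays a canned reply), not any real LLM API, but the shape of the loop is the same:

```python
# Toy sketch of the tokenize -> predict -> detokenize loop described above.
# The tokenizer and "model" here are fakes for illustration only; a real
# LLM replaces predict_next() with a neural network.

class ToyTokenizer:
    """Character-level: one character per token. Real tokenizers map
    whole words or word pieces to IDs instead."""
    def encode(self, text):
        return [ord(c) for c in text]
    def decode(self, ids):
        return "".join(chr(i) for i in ids)

class ToyModel:
    EOS = -1  # sentinel "end of sequence" token

    def __init__(self, canned_reply, tokenizer):
        self._queue = iter(tokenizer.encode(canned_reply))

    def predict_next(self, context_ids):
        # A real model scores every possible next token given the whole
        # context; this fake just replays a canned reply, then stops.
        return next(self._queue, self.EOS)

def generate(prompt, tokenizer, model, max_new_tokens=100):
    token_ids = tokenizer.encode(prompt)         # text -> token IDs
    for _ in range(max_new_tokens):
        next_id = model.predict_next(token_ids)  # one token per step
        if next_id == model.EOS:                 # model says it's done
            break
        token_ids.append(next_id)                # feed it back in, repeat
    return tokenizer.decode(token_ids)           # token IDs -> text

tok = ToyTokenizer()
model = ToyModel(" Fine, thanks!", tok)
print(generate("How are you?", tok, model))      # How are you? Fine, thanks!
```

"Tokens per second" is just how many times the body of that loop runs per second.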

[–] badcodecat@lemux.minnix.dev 9 points 6 days ago

It's a little like words per second.

[–] Mniot@programming.dev 4 points 6 days ago

Not an answer to your question, but I thought this was a nice article for getting some basic grounding on the new AI stuff: https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

[–] pppirate@lemmy.dbzer0.com 1 points 6 days ago

Basically, it's how fast the model generates text.

[–] Linktank 1 points 6 days ago

Thank you! Seems like they could just put that more plainly in the headline instead of assuming everybody is an AI aficionado.