please explain to us how you think having fewer, or more, subscribers would make this profitable
Yeah, the tweet clearly says that the subscribers they have are using it more than they expected, which is costing them more than $200 per month per subscriber just to run it.
I could see an argument for an economies-of-scale kind of situation where adding more users would offset the cost per user, but here it seems like that would just increase their overhead, making the problem worse.
LLM inference can be batched, reducing the cost per request. If you have too few customers, you can't fill the optimal batch size.
That said, the optimal batch size on today's hardware is not big (<100). I would be very very surprised if they couldn't fill it for any few-seconds window.
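For anyone unfamiliar with what batching means here, this is a rough sketch of the idea using the Hugging Face transformers library, with gpt2 and some made-up prompts purely as stand-ins (not anything any actual provider runs):

```python
# Rough illustration of batched inference: several prompts go through the
# model in a single forward pass instead of one at a time, so the fixed
# per-step costs (weight loads, kernel launches) are shared across them.
# gpt2 and the prompts below are placeholders, not a real production setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
tokenizer.padding_side = "left"            # pad left for decoder-only generation
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = [
    "The cost of serving a language model depends on",
    "Batching several requests together means",
    "GPU utilization improves when",
]

# One generate() call over the whole batch instead of three separate calls.
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,
)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```

Whether a batch like that actually fills up depends on how many requests arrive within the same short window, which is the "too few customers" point above.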
i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.
yep, original is still visible on mastodon
this sounds like an attempt to demand others disprove the assertion that they're losing money, in a discussion of an article about Sam saying they're losing money
What? I'm not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.
oh, so you’re that kind of fygm asshole
good to know
Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.
@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what ~~Open~~AI is operating at.
@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding "I would be very very surprised if they couldn't fill [the optimal batch size] for any few-seconds window" to mean "I would be very very surprised if they are not profitable"?
The tweet I linked shows that good LLMs can be much cheaper. I am saying that ~~Open~~AI is very inefficient and thus economically "cooked", as the post title puts it. How does this make me FYGM? @froztbyte@awful.systems
my god! let me fix that