this post was submitted on 23 Sep 2024
364 points (96.7% liked)

Technology

 
top 50 comments
[–] breadsmasher@lemmy.world 53 points 3 months ago (3 children)

The source didn’t have this detail for Google training Gemini: “cloud” vs “own hardware”. Does Google Cloud not count as “own hardware” for Google?

[–] bjorney@lemmy.ca 20 points 3 months ago

Does Google Cloud not count as “own hardware” for google?

That's why the bars are so different. The "cloud" price is MSRP.

[–] pandapoo@sh.itjust.works 10 points 3 months ago* (last edited 3 months ago)

This is also an accounting trick: a way to shed profit and maximize deductions by having different units within a parent company purchase services from each other.

I realize that my sentence-long explainer doesn't shed any light on how it gets done, but funnily enough, you can ask an LLM for an explainer and I bet it'd give a mostly accurate response.

Edit: Fuck it, I asked an LLM myself by converting my first sentence into a prompt, asking what that's called and how it's done. Here's the reply:

This practice is commonly referred to as "transfer pricing." Transfer pricing involves the pricing of goods, services, and intangible assets that are transferred between related parties, such as a parent company and its subsidiaries.

Transfer pricing can be used to shift profits from one subsidiary to another, often to minimize taxes or maximize deductions. This can be done by setting prices for goods and services that are not at arm's length, meaning they are not the same prices that would be charged to unrelated parties.

For example, a parent company might have a subsidiary in a low-tax country purchase goods from another subsidiary in a high-tax country at an artificially low price. This would reduce the profits of the high-tax subsidiary and increase the profits of the low-tax subsidiary, resulting in lower overall taxes.

However, it's worth noting that transfer pricing must be done in accordance with the arm's length principle, which requires that the prices charged between related parties be the same as those that would be charged to unrelated parties. Many countries have laws and regulations in place to prevent abusive transfer pricing practices and ensure that companies pay their fair share of taxes.
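
To make the mechanics concrete, here is a toy sketch of the profit-shifting described above; the subsidiaries, prices, and tax rates are all invented for illustration and have nothing to do with Google's actual books:

```python
# Toy illustration of transfer pricing. All names, prices, and tax rates are made up.
# A "cloud" subsidiary in a low-tax country bills a "research" subsidiary in a
# high-tax country for compute, shifting taxable profit between the two.

def total_tax(profit_high, profit_low, rate_high=0.30, rate_low=0.10):
    """Corporate tax paid across both subsidiaries."""
    return profit_high * rate_high + profit_low * rate_low

revenue_high = 1000.0  # research unit's revenue in the high-tax country
other_costs = 400.0    # its non-compute costs

for internal_compute_price in (100.0, 300.0):
    profit_high = revenue_high - other_costs - internal_compute_price
    profit_low = internal_compute_price  # the cloud unit books the charge as revenue
    print(f"internal price {internal_compute_price:6.0f} -> "
          f"total tax {total_tax(profit_high, profit_low):6.1f}")

# Raising the internal price moves profit from the 30% jurisdiction to the 10% one,
# which is exactly what the arm's-length rules mentioned above are meant to police.
```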

[–] General_Effort@lemmy.world 4 points 3 months ago

From the source:

Our primary approach calculates training costs based on hardware depreciation and energy consumption over the duration of model training. Hardware costs include AI accelerator chips (GPUs or TPUs), servers, and interconnection hardware. We use either disclosures from the developer or credible third-party reporting to identify or estimate the hardware type and quantity and training run duration for a given model. We also estimate the energy consumption of the hardware during the final training run of each model.

As an alternative approach, we also calculate the cost to train these models in the cloud using rented hardware. This method is very simple to calculate because cloud providers charge a flat rate per chip-hour, and energy and interconnection costs are factored into the prices. However, it overestimates the cost of many frontier models, which are often trained on hardware owned by the developer rather than on rented cloud hardware.

https://epochai.org/blog/how-much-does-it-cost-to-train-frontier-ai-models
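
For intuition, here is a rough sketch of the two costing approaches quoted above; every number in it (chip count, run length, prices, depreciation period) is a placeholder assumption for illustration, not a figure from Epoch AI:

```python
# Back-of-envelope versions of the two approaches described in the quote above.
# All inputs are placeholder assumptions, not Epoch AI's actual estimates.

HOURS = 90 * 24     # assumed training run length: 90 days
NUM_CHIPS = 20_000  # assumed number of accelerators

# Approach 1: amortized hardware depreciation + energy
chip_price = 30_000.0    # assumed $ per accelerator, incl. server/interconnect share
depreciation_years = 4   # assumed useful life of the hardware
power_per_chip_kw = 0.7  # assumed average draw per chip, including overhead
price_per_kwh = 0.10     # assumed electricity price

hardware_cost = NUM_CHIPS * chip_price * HOURS / (depreciation_years * 365 * 24)
energy_cost = NUM_CHIPS * power_per_chip_kw * HOURS * price_per_kwh
print(f"amortized hardware + energy: ${hardware_cost + energy_cost:,.0f}")

# Approach 2: rented cloud hardware (flat rate per chip-hour, energy already included)
cloud_rate_per_chip_hour = 2.50  # assumed $/chip-hour
print(f"cloud rental:                ${NUM_CHIPS * HOURS * cloud_rate_per_chip_hour:,.0f}")
```

With these made-up inputs the cloud figure comes out roughly 2-3x higher, which is the overestimate the source is warning about.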

[–] mox@lemmy.sdf.org 34 points 3 months ago (3 children)

I don't care how they estimate their cost in dollars. I think the cost to all of us in environmental impact would be more interesting.

[–] UnderpantsWeevil@lemmy.world 13 points 2 months ago* (last edited 2 months ago) (5 children)

Unless they're finding exciting new and efficient ways to generate electricity, I imagine it's a linear comparison. Maybe some are worse than others. I know Grok's datacenter in Memphis, Tennessee is relying exclusively on portable gas-powered electric generators that are wrecking havoc on the local environment.

[–] downhomechunk@midwest.social 5 points 2 months ago (1 children)

Gas like natural gas? Or gas like gasoline? I'm sure it's the former, but I take nothing for granted anymore.

[–] mox@lemmy.sdf.org 4 points 2 months ago (1 children)

I didn't know that; thanks for sharing.

(BTW, I think you meant wreaking havoc.)

[–] UnderpantsWeevil@lemmy.world 1 points 2 months ago

All my misspellings are part of my charm.

[–] linearchaos@lemmy.world 1 points 2 months ago (3 children)

Maybe this is the push we need to switch to nuclear. The tech is good, it just needs somebody with deeper pockets than coal/gas to lobby for it.

[–] Sauerkraut@discuss.tchncs.de 4 points 2 months ago (2 children)

I want to see what the long-term economic cost was after they fired tens of thousands of tech workers hoping to replace us with AI. It feels like workers are always the ones who suffer the most under capitalism.

[–] Ilovethebomb@lemm.ee 28 points 3 months ago (3 children)

Considering the hype and publicity GPT-4 produced, I don't think this is actually a crazy amount of money to spend.

[–] oce@jlai.lu 17 points 3 months ago* (last edited 3 months ago) (2 children)

Yeah, I'm surprised at how low that is; a software engineer in a developed country costs about 100k USD per year.
So 40M USD for training GPT-4 is the cost of 400 engineers for one year.
They say salaries could make up as much as 50% of the total, so the total would be roughly 800 engineers for one year.
That doesn't seem extreme.
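
The back-of-envelope above, spelled out (both inputs are the commenter's rough assumptions, not measured figures):

```python
# ~40M USD of training compute for GPT-4, ~100k USD/year fully-loaded cost per engineer.
compute_cost = 40_000_000
engineer_cost_per_year = 100_000

compute_in_engineer_years = compute_cost / engineer_cost_per_year
print(compute_in_engineer_years)      # 400.0 engineer-years of compute

# If compute is only ~50% of the total project cost, the whole effort is roughly:
print(2 * compute_in_engineer_years)  # 800.0 engineer-years
```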

[–] my_hat_stinks@programming.dev 11 points 3 months ago (2 children)

100k USD per engineer assumes they're exclusively hiring from the US and Switzerland; that's not a general "developed country" thing. The US is an outlier.

[–] Tja@programming.dev 8 points 3 months ago (2 children)

The US and Switzerland are way over 100k. For the Netherlands and Germany, 100k is a good approximation of the total cost to the company for a senior SWE.

[–] oce@jlai.lu 6 points 3 months ago (1 children)

I'm talking about the cost of the engineer for the company, not the salary, which is less relevant here. In some EU countries, the salaries may be lower, but the taxes are higher to pay for the social system, so the cost for the company is similar.

[–] General_Effort@lemmy.world 1 points 3 months ago

Yes. Also, Europeans work fewer hours per year. There are big differences between EU countries, though. https://en.wikipedia.org/wiki/List_of_countries_by_average_annual_labor_hours

[–] jacksilver@lemmy.world 9 points 3 months ago

These are just the estimates to train the models, so they don't account for the cost of developing the training systems, collecting the data, etc. This is just pure processing cost, and the numbers are still staggeringly large.

[–] Voroxpete@sh.itjust.works 10 points 3 months ago

Comparatively speaking, a lot less hype than their earlier models produced. Hardcore techies care about incremental improvements, but the average user does not. If you try to describe to the average user what is "new" about GPT-4, other than "It fucks up less", you've basically got nothing.

And it's going to carry on like this. New models are going to get exponentially more expensive to train, while producing less and less consumer interest each time, because "Holy crap look at this brand new technology" will always be more exciting than "In our comparative testing version 7 is 9.6% more accurate than version 6."

And for all the hype, the actual revenue just isn't there. OpenAI are bleeding around $5-10bn (yes, with a b) per year. They're currently trying to raise around $11bn in new funding just to keep the lights on. It costs far more to operate these models (even at the steeply discounted compute costs Microsoft are giving them) than anyone is actually willing to pay to use them. Corporate clients don't find them reliable or adaptable enough to actually replace human employees, and regular consumers think they're cool, but in a "nice to have" kind of way. They're not essential enough a product to pay big money for, but they can only be run profitably by charging big money.

[–] huginn@feddit.it 7 points 3 months ago (1 children)

The latest release, ChatGPT 4o, costs $600/hr per instance to run, based on the discussion I could find about it.

If OpenAI is running 1k of those instances to service the demand (they're certainly running more, since queries can take 30+ seconds), then that's over $5B/yr just keeping the lights on.
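
Annualizing those assumed figures (the $600/hr rate and the 1,000-instance count are the commenter's guesses, not disclosed OpenAI costs):

```python
cost_per_instance_hour = 600  # assumed $/hour to run one instance
instances = 1_000             # assumed number of instances running 24/7
hours_per_year = 24 * 365

annual_cost = cost_per_instance_hour * instances * hours_per_year
print(f"${annual_cost:,}")    # $5,256,000,000 -> roughly $5.3B/yr
```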

[–] Ilovethebomb@lemm.ee 2 points 2 months ago (1 children)

That's a lot, but what's their revenue?

[–] huginn@feddit.it 2 points 2 months ago

$3.4bn is their gross revenue - we have no idea what their operating costs are, since they refuse to share them.

Some estimates say they're burning 8 billion a year.

[–] linearchaos@lemmy.world 23 points 2 months ago (2 children)

How in the hell is Gemini both two and a half times more expensive and vastly inferior to GPT?

[–] ZILtoid1991@lemmy.world 9 points 2 months ago (1 children)

Some claim it's because it was trained on too much data with too little intervention.

[–] postmateDumbass@lemmy.world 2 points 2 months ago

Maybe we don't understand what its objective function actually wants?

Maybe it is impeding its users intentionally.

[–] PixeIOrange@lemmy.world 5 points 2 months ago

Google sucks

[–] KillingTimeItself@lemmy.dbzer0.com 18 points 2 months ago (1 children)

bro who the fuck is Google paying to do cloud compute for them? Google Cloud??

[–] ripcord@lemmy.world 11 points 2 months ago* (last edited 2 months ago) (1 children)

I assume they've come up with some generic cost as if someone were training each model using cloud compute.

Edit: comments below confirm this, from the source.

[–] KillingTimeItself@lemmy.dbzer0.com 6 points 2 months ago (1 children)

god i love accounting, it's so much fun.

[–] ripcord@lemmy.world 2 points 2 months ago (2 children)

But this isn't accounting; this is just the way the study calculated things.

[–] Wispy2891@lemmy.world 16 points 3 months ago

It's obvious that Google didn't pay crazy AWS-level prices to train Gemini, seeing how many servers they have in GCP.

Do they mean that Google used creative accounting to pay itself crazy GCP usage bills to deduct from taxes?

[–] FinishingDutch@lemmy.world 15 points 2 months ago (1 children)

Geez, you’d think Gemini would be better than it is if they spent that much on it…

[–] potentiallynotfelix@lemmy.fish 12 points 2 months ago (2 children)

and Gemini is still hot ass

[–] IndustryStandard@lemmy.world 7 points 2 months ago* (last edited 2 months ago) (1 children)

Only 80 million dollars for GPT-4? Cheaper than expected.

[–] hark@lemmy.world 6 points 2 months ago

Now imagine if they had to pay for the content they're training the models off of.

[–] AbouBenAdhem@lemmy.world 6 points 3 months ago* (last edited 3 months ago) (1 children)

How is Inflection-2 cheaper to train in the cloud than on its own hardware?

[–] General_Effort@lemmy.world 1 points 3 months ago

That probably indicates a problem with the estimates.
