this post was submitted on 12 Jun 2024
1325 points (98.7% liked)

Memes

[–] JPAKx4@lemmy.blahaj.zone 117 points 5 months ago (2 children)

Now where is the shovel head maker, TSMC?

[–] kautau@lemmy.world 43 points 5 months ago (26 children)

And then China popping their head out claiming Taiwan is part of China because they want to seize TSMC

[–] frezik@midwest.social 11 points 5 months ago

Eh, they'll have plenty of demand for their nodes regardless. Non-AI CPUs and GPUs are still going to want them.

[–] umbrella@lemmy.ml 83 points 5 months ago (7 children)

meanwhile i just want cheap gpus for my bideogames again

[–] frezik@midwest.social 35 points 5 months ago* (last edited 5 months ago) (8 children)

You can buy them new for somewhat reasonable prices. What people should really look at is used 1080tis on eBay. They're going for less than $150 and still play plenty of games perfectly fine. It's the budget PC gaming deal of the century.

[–] linkhidalgogato@lemmy.ml 22 points 5 months ago (1 children)

It's probably the best performance per dollar you can get, but a lot of modern games are unplayable on it.

[–] frezik@midwest.social 14 points 5 months ago (1 children)

A lot of those games are also hot garbage. Baldur's Gate 3 may be the only recent standout title where you don't have to qualify what you like about it.

I think the recent layoffs in the industry also portend things hitting a wall; games aren't going to push limits as much as they used to. Combine that with the Steam Deck-likes becoming popular. Those could easily become the new baseline standard performance that games will target. If so, a 1080ti could be a very good card for a long time to come.

[–] micka190@lemmy.world 12 points 5 months ago* (last edited 5 months ago) (10 children)

Edit: Here's another comment I made with links and more information on why this is going to be more common going forward. There's a very real and technical reason for using these new rendering strategies and it's why we'll start seeing more and more games require at least an RTX series card.


You're misunderstanding the issue. As much as "RTX OFF, RTX ON" is a meme, the RTX series of cards genuinely introduced improvements to rendering techniques that were previously impossible to pull-off with acceptable performance, and more and more games are making use of them.

Alan Wake 2 is a great example of this. The game runs like ass on 1080tis even on low, because the 1080ti is physically incapable of performing the kind of rendering instructions the game uses without a massive performance hit. Meanwhile, the RTX 2000 series cards are perfectly capable of it. Digital Foundry's Alan Wake 2 review goes a bit more in depth on it; it's worth a watch.

If you aren't going to play anything that came out after 2023, you're probably going to be fine with a 1080ti, because it was a great card, but we're definitely hitting the point where technology is moving to different rendering standards that it doesn't handle as well.

[–] Meowie_Gamer@lemmy.world 75 points 5 months ago (1 children)

Nvidia's being pretty smart here, ngl

This is the ai gold rush and they sell the tools.

[–] Meltrax@lemmy.world 18 points 5 months ago

Yes that's the meme.

[–] Venator@lemmy.nz 74 points 5 months ago (2 children)

Edited the price to something more nvidiaish: 1000009536

[–] 8osm3rka@lemmy.world 35 points 5 months ago

Gotta add a few more 9s to that. These are enterprise cards we're talking about

[–] xenoclast@lemmy.world 12 points 5 months ago (2 children)

Literally about to do the same.

Jensen also is obsessed with how much stuff weighs. So maybe he'd sell shovels by the ton.

[–] Kalkaline@leminal.space 55 points 5 months ago (2 children)

Don't forget AMD, good potential if they bring out similar technology to compete with NVIDIA. Less so Intel, but they're in the GPU market too.

[–] Thekingoflorda@lemmy.world 13 points 5 months ago (4 children)

Does ARM do anything special with AI? Or is that just the actual chip manufacturers designing that themselves?

[–] SeekPie@lemm.ee 6 points 5 months ago

As I understand it, ARM chips are much more efficient on the same tasks, so they're cheaper to run.

[–] zakobjoa@lemmy.world 54 points 5 months ago (10 children)

They will eat massive shit when that AI bubble bursts.

[–] r00ty@kbin.life 40 points 5 months ago (2 children)

I mean, if LLM/diffusion-type AI is a dead end and the extra investment happening now doesn't lead anywhere beyond that, then yes, the bubble will likely burst.

But, this kind of investment could create something else. We'll see. I'm 50/50 on the potential of it myself. I think it's more likely a lot of loud talking con artists will soak up all the investment and deliver nothing.

[–] linkhidalgogato@lemmy.ml 20 points 5 months ago

Bubbles have nothing to do with technology; the tech is just a tool to build the hype. The bubble will burst regardless of the success of the tech. At most, success will slightly delay the burst, because what's bursting isn't the tech, it's the financial structures around it.

[–] frezik@midwest.social 17 points 5 months ago (2 children)

It's looking like a dead end. The content that can be fed into the big LLMs has already been fed in. New material is a mix of actual human writing and stuff generated by LLMs, so it runs into an ouroboros problem where the model just eats its own output.
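That feedback loop can be sketched as a toy simulation, stdlib only (the distribution and sample sizes are made up for illustration): each "generation" fits a Gaussian using only samples drawn from the previous generation's fit, so estimation error compounds and the fitted parameters drift away from the original distribution.

```python
import random
import statistics

random.seed(0)

# The "real" data distribution the first model was trained on
mean, stdev = 0.0, 1.0

for generation in range(50):
    # Each generation "trains" only on output from the previous model
    samples = [random.gauss(mean, stdev) for _ in range(100)]
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)

# Estimation error compounds, so (mean, stdev) random-walks away
# from the original (0.0, 1.0) instead of staying anchored to it.
print(f"after 50 generations: mean={mean:.3f}, stdev={stdev:.3f}")
```

Real model collapse is more involved than a two-parameter Gaussian, but the mechanism is the same: once the training data is the model's own output, there's nothing pulling the fit back toward the original distribution.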

[–] greenskye@lemm.ee 17 points 5 months ago (1 children)

I mostly agree, with the caveat that 99% of AI usage today is just stupid gimmicks, and very few people or companies are actually using what LLMs offer effectively.

It kind of feels like when schools got sold those Smart Whiteboards that were supposed to revolutionize teaching in the classroom, only to realize the issue wasn't the tech, but the fact that the teachers all refused to learn and adapt and let the things gather dust.

I think modern LLMs should be used almost exclusively as an assistive tool to help empower a human worker further, but everyone seems to want an AI that you can just tell 'do the thing' and have it spit out a finalized output. We are very far from that stage in my opinion, and as you stated LLM tech is unlikely to get us there without some sort of major paradigm shift.

[–] micka190@lemmy.world 7 points 5 months ago

only to realize the issue wasn’t the tech

To be fair, electronic whiteboards are some of the jankiest piles of trash I've ever had to use. I swear to God you need to re-calibrate them every 5 minutes.

[–] TheRealKuni@lemmy.world 9 points 5 months ago (1 children)

I doubt it. Regardless of the current stage of machine learning, everyone is now tuned in and pushing the tech. Even if LLMs turn out to be mostly a dead end, everyone investing in ML means that the ability to do LOTS of floating point math very quickly without the heaviness of CPU operations isn’t going away any time soon. Which means nVidia is sitting pretty.

[–] umbrella@lemmy.ml 10 points 5 months ago

The WWW wasn't a dead end, but the bubble burst anyway. The same will happen to AI, because exponential growth is impossible.

[–] ImplyingImplications@lemmy.ca 30 points 5 months ago (4 children)

Worst one is probably Apple. They just announced "Apple Intelligence", which is just ChatGPT, and OpenAI's largest shareholder is Microsoft. Figure that one out.

[–] dependencyinjection@discuss.tchncs.de 35 points 5 months ago (17 children)

Well, most of the requests are handled on device with their own models. If it’s going to ChatGPT for something it will ask for permission and then use ChatGPT.

So Apple Intelligence isn't all ChatGPT. I think this deserves a mention, as a lot of the processing will be on device.

Also, I believe part of the deal is ChatGPT can save nothing and Apple are anonymising the requests too.

[–] Rai@lemmy.dbzer0.com 19 points 5 months ago (1 children)

If you think that’s the WORST ONE, you have no idea about any of this

[–] frezik@midwest.social 6 points 5 months ago

Yeah, if anything, Apple is behind the curve. Nvidia/AMD/Intel have gone full cocaine nose dive into AI already.

[–] ken27238@lemmy.ml 8 points 5 months ago

Not true. Most if not all requests are handled by Apple's own models, either on device or on their own servers. When it does use OpenAI, you need to give it permission each time.

[–] photonic_sorcerer@lemmy.dbzer0.com 7 points 5 months ago (8 children)

That's just not true. Most requests are handled on-device. If the system decides a request should go to ChatGPT, the user is prompted to agree, and no data is stored on OpenAI's servers. Plus, all of this is opt-in.

[–] CoolerOpposide@hexbear.net 28 points 5 months ago (1 children)

All of this to run a program that is essentially typing a question into Google and adding “Reddit” at the end of it.

They spent so much time disconnected from reality and trying to create artificial intelligence that they forgot regular intelligence exists
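To be fair, that "program" is about three lines. A sketch of the joke (the function name is made up; the URL is just a standard Google search query string):

```python
from urllib.parse import urlencode

def poor_mans_llm(question: str) -> str:
    """The classic trick: ask Google, but append 'reddit' to the query."""
    return "https://www.google.com/search?" + urlencode({"q": f"{question} reddit"})

print(poor_mans_llm("is the 1080ti still worth it"))
```

No GPUs were harmed in the making of this query.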

[–] art@lemmy.world 18 points 5 months ago (1 children)

Admittedly, I bought an Nvidia card for AI. I am part of the problem.

[–] moshtradamus666@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

I don't think it's a problem, more like a situation. You are not doing anything wrong or stupid, just interested in something new and promising and have the resources to pursue it. Good for you, may you find gold.

[–] phoenixz@lemmy.ca 10 points 5 months ago (4 children)

Serious Question:

Why is Nvidia the AI king, while I see nothing of AMD for AI?

[–] Naz@sh.itjust.works 19 points 5 months ago* (last edited 5 months ago) (1 children)

I'm an AI Developer.

TLDR: CUDA.

Getting ROCm to work properly is like herding cats.

You need a custom implementation for the specific operating system, the driver version must be locked and compatible, especially with a Workstation / WRX card, the Pro drivers are especially prone to breaking, you need the specific dependencies to be compiled for your variant of hipBLAS, or ZLUDA, and if that doesn't work, you need ONNX transition graphs, but then you find out PyTorch doesn't support ONNX unless it's 1.2.0, which breaks another dependency of x-transformers, which then breaks because the version of hipBLAS is incompatible with that older version of Python and ..

Inhales

And THEN MAYBE it'll work at 85% of the speed of CUDA, if it doesn't crash first due to an arbitrary error such as CUDA_UNIMPLEMENTED_FUNCTION_HALF.

You get the picture. On Nvidia it's: click, open, CUDA working? Yes? Done. You don't spend 120 hours fucking around and recompiling for your specific use case.
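For comparison, this is roughly all the backend detection you do on the PyTorch side. A minimal sketch, assuming a stock PyTorch install (ROCm builds of PyTorch are exposed through the same `torch.cuda` API, with `torch.version.hip` set instead of `torch.version.cuda`, which is part of why CUDA is the path of least resistance):

```python
try:
    import torch
except ImportError:
    torch = None  # torch not installed on this machine

def gpu_backend() -> str:
    """Report which GPU backend, if any, this PyTorch build can use."""
    if torch is None:
        return "no torch"
    if not torch.cuda.is_available():
        return "cpu"
    # ROCm wheels set torch.version.hip; CUDA wheels set torch.version.cuda
    return "rocm" if getattr(torch.version, "hip", None) else "cuda"

print(gpu_backend())
```

On an Nvidia box that's genuinely the whole story; on AMD, getting to the point where `is_available()` returns True is the 120 hours.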

[–] morrowind@lemmy.ml 17 points 5 months ago

Simple Answer:

Cuda

[–] Black_Mald_Futures@hexbear.net 7 points 5 months ago

I thought this was a strange trolley problem at first
