this post was submitted on 16 Sep 2024
Games community
[–] MudMan@fedia.io 12 points 1 month ago (2 children)

What do you mean "suddenly"? I was running path tracers back in 1994. It's just that they took minutes to hours to generate a 480p image.

The argument is that we've gotten to the point where new rendering features rely on far more path tracing and light simulation than used to be feasible in real time. Pair that with the fact that displays have gone from 1080p60 vsync to 4K at arbitrarily high framerates and... yeah, I don't think you realize how much additional processing power we're requesting.
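The resolution jump alone is easy to underestimate. A rough back-of-the-envelope sketch (the sample and bounce counts here are illustrative assumptions, not figures from the thread) shows how ray counts scale with pixel count for a path tracer:

```python
# Back-of-the-envelope: total rays per frame for a path tracer is roughly
# pixels x samples-per-pixel x bounces. Numbers below are made up for
# illustration; real renderers vary widely.
def rays_per_frame(width, height, spp, bounces):
    return width * height * spp * bounces

rays_480p = rays_per_frame(640, 480, 256, 4)    # 1990s-style offline render
rays_4k = rays_per_frame(3840, 2160, 256, 4)    # same settings at 4K

print(rays_480p)              # 314572800
print(rays_4k / rays_480p)    # 27.0 -- 4K alone is 27x the work
```

And that is before you multiply by framerate: 60 of those frames per second instead of one every few minutes is the actual ask.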

But the good news is that if you were happy with 1080p60, you can absolutely render modern games like that on a modern GPU without needing any upscaling.

[–] Kushan@lemmy.world 13 points 1 month ago (1 children)

I think you just need to look at the PS5 Pro as proof that more GPU power doesn't translate linearly to better picture quality.

The PS5 Pro has a 67% beefier GPU than the standard PS5 - with a price to match - yet can anyone say the end result is 67% better? Is it even 10% better?

We've been hitting diminishing returns on raw rasterising for years now, a different approach is definitely needed.

[–] MudMan@fedia.io 3 points 1 month ago

Yeah, although I am always reluctant to quantify visual quality like that. What is "67% better" in terms of a game playing smoothly or looking good?

The PS5 Pro reveal was a disaster, partially because if you're trying to demonstrate how much nicer a higher resolution, higher framerate experience is, a heavily compressed, low bitrate Youtube video that most people are going to watch at 1080p or lower is not going to do it. I have no doubt that you can tell how much smoother or less aliased an image is on the Pro. But that doesn't mean the returns scale linearly, you're right about that. I can tell a 4K picture from a 1080p one, but I can REALLY tell a 480p image from a 1080p one. And it's one thing to add soft shadows to a picture and another to add textures to a flat polygon.

If anything, gaming as a hobby has been a tech thing for so long that we're not ready for the shift to being limited by money and artistic quality rather than processing power. Arguably this entire conversation is pointless in that the best looking game of 2024 is Thank Goodness You're Here, and it's not even close.

[–] conciselyverbose@sh.itjust.works 5 points 1 month ago (1 children)

Yeah, there's a reason any movie attempting 3D CG with any budget at all has used path tracing for years. It's objectively massively higher quality.

You don't need upscaling or denoising (the "AI" they're talking about) to do raster stuff, but realistic lighting does a hugely better job, regardless of the art style you're talking about. It's not just photorealism, either. Look at all Disney's animated stuff. Stuff like Moana and Elemental aren't photorealistic and aren't trying to be, but they're still massively enhanced visually by improving the realism of the behavior of light, because that's what our eyes understand. It takes a lot of math to handle all those volumetric shots through water and glass in a way that looks good.

[–] MudMan@fedia.io 3 points 1 month ago

Yep. The thing is, even if you're on high end hardware doing offline CGI you're using these techniques for denoising. If you're doing academic research you're probably upscaling with machine learning.

People get stuck on the "AI" nonsense, but ultimately you need upscaling and denoising of some sort to render a certain tier of visuals. You want the highest quality version of that you can fit in your budgeted frame time. If it uses machine learning, great. If it doesn't, great as well. It's all tensor math anyway; it's about using your GPU compute in the most efficient way you can.
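To make the "it's all tensor math" point concrete, here's the crudest possible upscaler, a 2x nearest-neighbor blow-up of a tiny grayscale grid (a toy sketch of mine, not how DLSS/FSR/XeSS actually work). ML upscalers replace this fixed copy rule with learned filters, but the job is the same kind of grid-to-grid operation:

```python
# Nearest-neighbor 2x upscale: every low-res pixel becomes a 2x2 block.
# ML upscalers do the same low-res -> high-res mapping, just with learned
# weights instead of plain duplication.
def upscale_2x(image):
    out = []
    for row in image:
        wide = [px for px in row for _ in range(2)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                       # duplicate vertically
    return out

low = [[1, 2],
       [3, 4]]
print(upscale_2x(low))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The whole upscaling debate is really about how clever that mapping is, not whether one exists.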