this post was submitted on 15 Sep 2024
133 points (100.0% liked)
Steam Deck
Here is my view, along with some quick back-of-the-envelope reasoning:
I am curious why they would offload any AI tasks to another chip. I did a quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models), and they are tiny as far as AI models go.
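As a rough sanity check on "tiny", here is a sketch of the parameter count and fp32 footprint of a waifu2x-style CNN upscaler. The layer layout and channel widths are my own illustrative assumptions, not the exact cl-waifu2x configuration:

```python
# Rough sanity check on "tiny": parameter count and fp32 memory footprint
# of a waifu2x-style CNN upscaler (7 conv layers, 3x3 kernels).
# The channel widths below are illustrative assumptions.

channels = [1, 32, 32, 64, 64, 128, 128, 1]  # per-layer channel counts (assumed)
kernel = 3

params = sum(kernel * kernel * c_in * c_out
             for c_in, c_out in zip(channels[:-1], channels[1:]))

print(f"parameters: {params:,}")                 # ~287,000
print(f"fp32 size:  {params * 4 / 1e6:.1f} MB")  # ~1.1 MB, tiny by model standards
```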
It's the rendering that takes all the complex maths, and if that load is reduced, it would leave plenty of room for running a baby AI. Granted, the implementation I linked only manages about 29k pixels per second, but its author says it isn't GPU-optimized. (FSR4 is going to be fully GPU-optimized, I am sure of it.)
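For scale, a quick back-of-the-envelope on that 29k pixels/second figure; the 4K/60fps target is my assumption, just to show the gap a GPU-optimized version would need to close:

```python
# Gap between the unoptimized ~29k pixels/sec figure above and what
# real-time 4K output would need. Resolution and frame rate are assumptions.

width, height = 3840, 2160
fps = 60
pixels_per_frame = width * height          # 8,294,400

needed = pixels_per_frame * fps            # ~498 million pixels/sec for 4K60
unoptimized = 29_000                       # pixels/sec, the figure quoted above

print(f"pixels per 4K frame: {pixels_per_frame:,}")
print(f"needed for 4K60:     {needed:,} px/s")
print(f"speedup required:    {needed / unoptimized:,.0f}x")
```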
If the rendered image covers only 85% of a 4K frame's pixels, that leaves roughly 1.2 million pixels for the upscaler to compute, and it still seems plausible to keep everything on the GPU.
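To show where the ~1.2 million number comes from, assuming "85%" means 85% of the pixel count of a 3840x2160 frame:

```python
# Where ~1.2 million comes from: if 85% of a 4K frame's pixels are rendered
# natively, the upscaler only has to synthesize the remaining 15%.

total_4k = 3840 * 2160                  # 8,294,400 pixels
rendered_fraction = 0.85                # assumed: 85% of the pixel count

to_synthesize = total_4k * (1 - rendered_fraction)
print(f"pixels left for the upscaler: {to_synthesize:,.0f}")   # ~1,244,000
```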
With all of that blurted out, is FSR4's AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU, while offloading AI compute at speeds that don't risk creating additional lag. (I am just hypothesizing, btw.)
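To illustrate the kind of lag I am hypothesizing about, here is a toy latency estimate for shipping a frame to a separate AI chip and back. The link bandwidth and bytes-per-pixel are made-up assumptions, not real Steam Deck or FSR4 numbers:

```python
# Toy estimate of the round-trip cost of offloading a frame to a separate
# AI chip. Link bandwidth and bytes-per-pixel are made-up assumptions,
# only meant to show the shape of the latency math.

pixels = 3840 * 2160                 # 4K frame
bytes_per_pixel = 8                  # assumed: color plus motion/depth data
link_bytes_per_sec = 16e9            # assumed ~16 GB/s link

payload = pixels * bytes_per_pixel                  # ~66 MB per frame
one_way_ms = payload / link_bytes_per_sec * 1e3     # ~4.1 ms each way

print(f"payload per frame:    {payload / 1e6:.0f} MB")
print(f"one-way transfer:     {one_way_ms:.2f} ms (double it for a round trip)")
print(f"frame budget @ 60fps: {1000 / 60:.2f} ms")
```

Even with generous assumptions, a round trip eats a large slice of a 16.7 ms frame budget, which is why keeping it all on the GPU seems simpler to me.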