I've kept a Windows 10 install on a separate SSD for the programs that stubbornly refuse to run on Linux (games, in my case). However, I won't be upgrading that to Windows 11. I'll just reclaim that SSD for other purposes and use Linux exclusively.
I'm one of those maniacs who went to the trouble of setting up a GPU passthrough VM instead of dual booting, and I have no intention of switching it from Win10 to Win11. If it gets infected, it can't do jack or shit to the important parts of my system, and I can either roll back to a snapshot or nuke it.
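(For the curious, the snapshot-and-rollback workflow looks roughly like this with libvirt. The domain name `win10` is a placeholder, and this assumes a qcow2-backed guest snapshotted while powered off, since live snapshots don't play well with passed-through hardware:)

```
# Take a named snapshot of the guest while it's shut off
virsh snapshot-create-as win10 clean-base "fresh install, pre-infection"

# Later, roll the whole guest back to that point
virsh snapshot-revert win10 clean-base
```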
I swear, I can read the first part of your first sentence just fine, but I don't understand what it means, lol!
I tried to look it up, and as far as I understood it, it's a technique that allows a virtual machine to access a physical GPU directly. I guess that means that even if your VM is elsewhere (a server or wherever) it can still use the GPU you have. But the more relevant part is that since your Win10 install is on a VM, it can't do shit on the rest of your system, and the GPU access is just there so that it won't run as slow as shit when gaming, right?
Pretty much
So, to get more technical, there's a motherboard technology called the IOMMU, which remaps and isolates devices' direct memory access; one of its selling points is containing malware that has infected device firmware. What Linux has is a kernel module (VFIO) that allows an entire IOMMU group to be isolated from the host operating system and connected up to a virtual machine as if it were real hardware. On an expensive motherboard, you get a different IOMMU group for each PCIe slot, each M.2 socket, each cluster of USB ports, etc. On a cheap one, you get one group per type of device, or maybe the PCIe slots divided into two groups.
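Since everything hinges on how your board splits up the groups, it's worth checking before you commit to anything. A minimal Python sketch for the host side, assuming the kernel booted with the IOMMU enabled (`intel_iommu=on` or `amd_iommu=on`), that just walks sysfs and prints each group's devices:

```python
#!/usr/bin/env python3
"""Print PCI devices grouped by IOMMU group (Linux host, IOMMU enabled)."""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        # dev.name is a PCI address like 0000:01:00.0; the vendor and
        # device IDs live alongside it in sysfs (e.g. 0x10de = NVIDIA).
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"IOMMU group {group.name:>3}: {dev.name} [{vendor}:{device}]")
```

Everything in the group your GPU lands in has to be handed to the VM together, which is why the per-slot grouping on pricier boards matters.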
So the fun part, and why we do this, is that when you have two GPUs in different IOMMU groups, one can remain on the host and keep the graphics drivers, desktop environment, etc. loaded, while the other gets connected to the VM and used entirely for gaming (theoretically, you could even game on both systems at once). Thankfully, shit secondary GPUs are cheap (I was once on a 710, ditched that and its many driver issues for a 1050, and my main remains a 980ti), but setting up the main GPU to switch between its proper drivers and "vfio-pci", the stub driver that has to be bound to the card before passthrough can occur, can be a pain.
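That driver switch can be scripted. A hedged sketch of the rebind using the sysfs driver_override mechanism; the PCI address `0000:01:00.0` is a placeholder for your guest GPU (and remember a GPU's HDMI audio function, usually `.1`, sits in the same group and needs the same treatment). Run as root with the vfio-pci module loaded:

```python
#!/usr/bin/env python3
"""Rebind one PCI device to vfio-pci via sysfs (run as root)."""
from pathlib import Path

ADDR = "0000:01:00.0"  # placeholder: your guest GPU's PCI address
dev = Path("/sys/bus/pci/devices") / ADDR

# Tell the PCI core which driver should claim this device from now on.
(dev / "driver_override").write_text("vfio-pci")

# Unbind it from whatever driver currently owns it (nouveau, nvidia, ...).
if (dev / "driver").exists():
    (dev / "driver" / "unbind").write_text(ADDR)

# Re-probe the device; driver_override routes it to vfio-pci.
Path("/sys/bus/pci/drivers_probe").write_text(ADDR)
```

Switching back to the proper driver is the same dance with driver_override cleared first, which is exactly the fiddly part being complained about.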
Thanks for the explanation. Prior to our exchange, I didn't even know such a thing was possible. It's wonderful, though to be honest, being as technologically klutzy as I am, I might find it easier to just buy a separate set of hardware to run Win10 on, if I ever need it, and disable any networking (because once it's no longer supported, it needs to be taken offline).
Again, thanks!
I bought a cheap PCIe card that physically cuts the power to SSDs. I just shut down, hit the button, and power back on to get into my Windows install. I rarely use it, so this makes things easy when I do, without needing a whole second PC or a GRUB menu on EVERY boot.
Huh, that's interesting. I've gotten used to the GRUB menu every time I reboot (which is quite often), but it auto-selects the Linux install after a timeout, so if I want to go to Windows, I just have to make sure I catch the GRUB menu in time.
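(For what it's worth, GRUB can take the timing pressure off. With the saved-default option enabled, `grub-reboot` queues up Windows for the next boot only, no menu-catching required. A sketch, with paths and the menu entry name as placeholders that vary by distro:)

```
# /etc/default/grub  (then regenerate grub.cfg)
GRUB_DEFAULT=saved     # boot whichever entry was last "saved", not entry 0
GRUB_TIMEOUT=5
```

```
sudo grub-mkconfig -o /boot/grub/grub.cfg   # or: sudo update-grub
sudo grub-reboot "Windows Boot Manager"     # placeholder entry name
sudo reboot                                  # boots Windows once, then back to default
```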