Firestarter321

joined 1 year ago
[–] Firestarter321@alien.top 1 points 11 months ago

I don’t know what to say other than I’ve tried it and it doesn’t work. I thought it would but it doesn’t.

[–] Firestarter321@alien.top 1 points 11 months ago (2 children)

Not in my experience over the last couple of winters. The office just stays at 80F or more while the rest of the basement is 70F even with a fan blowing from the office out into the main room in the basement.

[–] Firestarter321@alien.top 1 points 11 months ago

Some of it would leave; however, most of it stayed in the office, which is why it was 80F+ in there.

[–] Firestarter321@alien.top 1 points 11 months ago

I have a constant 1000W load, which is ~3,400 BTU/h of heat.
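As a quick sanity check on that figure: 1 W of continuous electrical load dissipates about 3.412 BTU/h of heat (essentially all power drawn by the servers ends up as heat in the room). A minimal conversion sketch:

```shell
# Convert a continuous electrical load in watts to heat output in BTU/h.
# 1 watt = 3.412 BTU/h.
watts=1000
awk -v w="$watts" 'BEGIN { printf "%d W is about %.0f BTU/h\n", w, w * 3.412 }'
# → 1000 W is about 3412 BTU/h
```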

 

I have my office and rack in the basement, where I work during the week. In the early spring and late fall, when the AC isn't running much, it would get too warm to be comfortable (80F-82F).

A few weeks ago I realized that I have a cold air return duct in the ceiling so I cut an 8"x10" hole in it and left the furnace fan on 24/7 hoping that would help...it didn't really.

Last week I decided to hang an ~8" fan 3" below the hole I cut into the cold air return to see what would happen if I forced air into the duct...it didn't do much.

Last Thursday I remembered something from my volunteer firefighter days about setting up a fan to ventilate a room through a window or door: it's important for the fan's cone of airflow to cover the entire opening. This led me to put a 12" fan in place of the 8" fan at 9PM.

Fast forward about an hour and my office was now 76F. The next morning it was 72F and it has stayed at 72F-73F ever since then.

The side benefit is that I'm able to provide a bunch of supplemental heat to the upstairs. During the <15F weather we've been having, the heat pump had been running 16hr+ per day with the electric strips kicking on periodically overnight; now the heat pump runs about 8hrs per day and the electric backup strips haven't needed to kick on at all.

I'm curious how it works for cooling next summer, when I won't be able to run the furnace fan 24/7 since that would just dump humidity back into the house. We'll see how that goes.

I'm still pretty happy with the results at the moment.

 

I'm running 2 Proxmox nodes in an HA cluster with dual E5-2697A v4's, and while it works great, it's making my office rather warm.

The 9124 isn't cheap, however it has a 200W TDP and a higher benchmark score with just one of them than my dual Xeon setup. At $1150 it stings, but it should kick out much less heat, I'd think?

Is anyone using this CPU or either of the other 2 I listed below? Those are $1600-$2000, though.

Supermicro H13SSL-N Motherboard - https://www.supermicro.com/en/products/motherboard/h13ssl-n

AMD EPYC 9124 - https://www.amd.com/en/products/cpu/amd-epyc-9124

or

AMD EPYC 9224 - https://www.amd.com/en/products/cpu/amd-epyc-9224

or

AMD EPYC 9254 - https://www.amd.com/en/products/cpu/amd-epyc-9254

ETA: The Supermicro H13SSL-NT is the same as the H13SSL-N except that it adds 10GbE, however it's $120 more expensive. I don't need it since I use SFP+ with fiber everywhere at the moment.

[–] Firestarter321@alien.top 1 points 1 year ago

I have 3 of these and like them overall.

https://www.fs.com/products/165979.html

[–] Firestarter321@alien.top 1 points 1 year ago

I can’t say that I’ve noticed anything, however, my containers aren’t IO intensive either.

I agree completely about the LXC being much more flexible when using bind mounts rather than fstab.

 

I've been dreading doing it; however, it wasn't too bad. I had 4 LXCs to convert to VMs in total.

The biggest difference is that the LXC which hosted CodeProject.AI for my Blue Iris server went from using 120GB down to 19GB for the same containers. I'm guessing it's due to being able to switch Docker's storage driver from vfs in the LXC to overlay2 in the VM.
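For anyone making the same switch: Docker's storage driver is set via the `storage-driver` key in daemon.json. A minimal sketch (writing to a local file purely for illustration; on a real host this belongs at /etc/docker/daemon.json, followed by a Docker restart):

```shell
# Sketch: select the overlay2 storage driver via daemon.json.
# Writing to ./daemon.json for illustration; on a real host this file
# lives at /etc/docker/daemon.json and Docker must be restarted after.
cat > ./daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
cat ./daemon.json
```

After restarting, `docker info` should report `Storage Driver: overlay2`. Note that overlay2 needs a backing filesystem with the right features (ext4 or xfs with ftype=1), which is part of why a VM works where an unprivileged LXC falls back to vfs.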

Having docker-compose YML files to recreate all the containers on the VM helped a TON, as did using rsync to move everything the containers needed over to the new VM.

Has anyone else made the move?

I got the kick in the pants to do it after trying to restore the 120GB LXC from PBS, giving up after 2 hours, and restoring it in its entirety from a standard Proxmox backup instead, which only took 15 minutes.

 
