Shelves use almost no power; everything is in the drives themselves. I have a 60-bay JBOD that uses ~40W empty, and that's with massive 1400W 208V-only PSUs, backplanes, etc. That said, the ~15 drives I have installed use quite a bit more, so I leave it off 95% of the time: just fire it up, run a backup to it, shut it down.
Well, that's at minimum 70-100W. Insane how cheap larger drives are getting.
I have done both. In your case there's no point in a shelf; it just takes up extra space and power. That said, there are more than a few shelves that only use about 25 watts empty. The LSI cards use around 10 watts, and then spinning rust uses, well, 5 to 10 watts per drive. For all the SAS-card haters: motherboards with lots of SATA ports need extra controllers too, and the crappy ones draw even more power. If you build a large drive setup, disk shelves let you expand easily without adding more motherboards, which also take, guess what... power! As far as noise goes, most shelves suck, but some can be made to work without wrecking your ears. But more power means more heat means more cooling.
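The power math above is easy to sanity-check. A minimal sketch, using the ballpark figures from that comment (assumed round numbers, not measurements from any specific shelf):

```python
# Rough power budget for a populated disk shelf.
# All figures are assumptions taken from the discussion above:
SHELF_IDLE_W = 25.0      # an efficient shelf, empty
HBA_W = 10.0             # LSI SAS card in the host
WATTS_PER_DRIVE = 7.5    # midpoint of the 5-10W range for spinning rust

def shelf_power(num_drives: int) -> float:
    """Estimate total draw in watts for a shelf with num_drives installed."""
    return SHELF_IDLE_W + HBA_W + num_drives * WATTS_PER_DRIVE

for n in (0, 15, 60):
    print(f"{n:2d} drives: ~{shelf_power(n):.0f} W")
```

With 15 drives that lands around 150W, which lines up with the "leave it off 95% of the time" strategy upthread.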
Another reason not mentioned for having more disks is IOPS: for when you need performance along with the space.
If you need IOPS, you need SSDs. The days of getting IOPS from multiple hard disks have been over for a decade.
Ha, no.
SSD arrays absolutely have their place, and I have deployed many for clients. But they're not the only performance solution. Like others have said, capacity/performance planning is a must to know what you need now and what you'll need later.
Hard drives are for capacity; SSDs are for performance. This has been settled for a number of years, and it's why you see multiple levels of caching in front of any modern enterprise storage system.
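A back-of-the-envelope comparison shows why spindle counts stopped being a viable IOPS strategy. The per-device figures below are assumed ballpark numbers (a 7200rpm disk on the order of ~150 random IOPS, a modest SATA SSD in the tens of thousands), not benchmarks of any particular product, and the sketch assumes ideal striped scaling:

```python
import math

# Assumed ballpark random-IOPS figures per device (not benchmarks):
HDD_IOPS = 150      # typical 7200rpm spinning disk
SSD_IOPS = 50_000   # modest SATA SSD

def drives_needed(target_iops: int, per_drive_iops: int) -> int:
    """Drives required to hit a random-IOPS target, assuming ideal striping."""
    return math.ceil(target_iops / per_drive_iops)

target = 20_000
print(drives_needed(target, HDD_IOPS))  # a whole rack of spindles
print(drives_needed(target, SSD_IOPS))  # a single SSD
```

Even granting perfect scaling, hitting a modest 20k-IOPS target takes well over a hundred spindles versus one SSD, which is the point being made above.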