Unraid: Unleash Your Hardware

The unRAID community on Reddit.
1
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/unraid by /u/dauser2222 on 2023-10-06 19:12:35.


This is a how-to, rather than an argument, for using an Arc A380 with Unraid, Plex and Tdarr. You will need a second computer to update the files on your unRAID flash/USB drive. You will also likely need the Intel GPU TOP plugin. Based on the guide by u/o_Zion_o and the kernel releases of thor2002ro.


Steps I took:

  • Go to the MAIN tab in unRAID, find the Boot Device, click on the link to Flash, and use the FLASH BACKUP option. This will be your fallback should you find issues and wish to revert to previous settings.

Backup your FLASH

  • Go to the TOOLS tab in unRAID, find the About section, and choose Update OS. I updated to 6.12.4.

Update OS to 6.12.4

  • Follow the restart steps per normal.
  • REPEAT the Flash Backup step now that you are running 6.12.4.
  • Shut down the Array/unRAID and then remove the USB Flash drive.
  • Download and extract the archive:

Example of an archive's contents. Extras are optional.

  • REPLACE/OVERWRITE the 4 'bz' files on the USB drive with the ones from the archive. Adding the Extras won't hurt.
  • Plug the USB drive back into your server and power it on.
  • If everything boots ok, proceed. If not, start back at the first step and continue up to the previous point, but use the files you backed up earlier to revert the changes and get unRAID up and running again.
  • Add the following to the PLEX docker's Extra Parameters field: --device=/dev/dri:/dev/dri

--device=/dev/dri:/dev/dri

  • Add a new device to the PLEX docker. Value is /dev/dri/renderD128

/dev/dri/renderD128

  • Save the changes and PLEX will restart.

After this, go to the PLEX Settings page > Transcoding and change the Hardware transcoding device to DG2 [Arc A380].

DG2 [Arc A380]

Plex should now use the A380 for Transcodes when required.

Transcode Load

Forced Transcode by using Edge.

Tdarr: Add the Extra Parameters: --device=/dev/dri:/dev/dri

--device=/dev/dri:/dev/dri

Tdarr should now be able to use your A380.
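The two container changes above boil down to the same two strings. A small sketch that gathers them and shows an equivalent manual docker run fragment (the image name "plex" is a placeholder, not the real template name):

```python
# The two values from the steps above. On Unraid you paste these into the
# container template (Extra Parameters / new Device); the docker_args
# helper below only illustrates the equivalent docker run invocation.
PLEX_EXTRA_PARAMS = "--device=/dev/dri:/dev/dri"   # Extra Parameters field
PLEX_DEVICE = "/dev/dri/renderD128"                # transcode device node

def docker_args(image: str) -> list[str]:
    # Assemble the flags as they would appear on a manual docker run line.
    return ["docker", "run", PLEX_EXTRA_PARAMS, image]

print(docker_args("plex"))
# → ['docker', 'run', '--device=/dev/dri:/dev/dri', 'plex']
```

Tdarr takes the same Extra Parameters value; only Plex additionally needs the renderD128 device selected in its transcoder settings.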

2
 
 

The original was posted on /r/unraid by /u/DayDreamEnjoyer on 2023-10-06 16:45:54.


For your server of course, not total.

I believe that some of you track it with a special plug, and I wonder how much the average rig costs.

Assuming it's running 24/7.

Edit: Thank you all for your answers, which helped me get a rough estimate!

3
 
 

The original was posted on /r/unraid by /u/sugadugaduga on 2023-10-05 17:00:39.


Went a little crazy building my unraid server a year or so ago. I put a 5900X in it thinking this would allow me multiple vms and other tasks with no need to worry about how to divide up resources.

I guess I didn't consider how much power this draws at idle compared to other CPUs. I watched Wolfgang's video on YouTube about building a low power server, and although he had some good suggestions, the video is a few years out of date.

What's out there that is high-performance, but low idle draw?

Ideally I don't want to take a huge performance hit, or lose more than a few cores vs the 5900X (12 cores / 24 threads).

What are the best options out there? Budget is around the same price as a 5900X.

4
 
 

The original was posted on /r/unraid by /u/skinna555 on 2023-10-05 11:11:33.


Hi all,

A few weeks back I had posted some questions regarding an unRAID server I was about to build. I would like to thank everyone who helped me out, but in particular: u/Lonely-Fun8074 and u/soxekaj. I went into this just wanting to have a plex server - and have come out of it having so much more.

I thought I would share the details of my setup and some things I really like and (not that I have many) dislike about unRAID. I have had it for a week now, and honestly - I can't think of any reason to ever go back to having a windows based server. The only time I would consider building one would be for someone else.

My Setup:

The PC: Core i5 12400 with quicksync, 32GB RAM, mini ITX case (Node 304) and motherboard. No graphics card.

Parity: 10TB WD Red

Array: 8TB WD Red x 3

Cache Pool: 2x500GB nvme SSD's

Downloads Pool: 2 x 2TB WD External Drives (once a week, I unplug a drive and take it to my parents' house for offloading to their plex system). Why do I have 2 plugged in? Because I have heaps of older externals sitting around, and this allows me to keep downloading and seeding while the other drive isn't plugged in.

Things I really like:

  • Power usage is amazing.

  • The UI.

  • Parity was dead simple to setup. I have, believe it or not, never had redundancy before now. I always assumed you needed DOUBLE THE SPACE. What unRAID does is one step off a magic trick for me.

  • The ease of making shares available on windows.

  • The array itself, cache, shares and pools are SUPER GODDAMN EASY to set up.

  • The ability to use the arrayed hard drives as though they are one, no matter the share you use. Not worrying about which drive to drag and drop things etc.

  • The ability to add more drives on the fly.

  • Bloat while using unRAID is zero. You really do get to maximise your hardware and use all of it, all of the time. For instance, I have qBittorrent, Sonarr, Radarr, Prowlarr, Plex, and Tautulli all start at system boot on unRAID. On Windows I would idle using double the RAM that unRAID uses while running all of my docker containers at once and also watching a movie. This goes for disk space too: Windows makes all of these funky partitions, while on unRAID all but the tiniest amount of a hard drive can be used.

  • While the initial learning curve is quite steep, once you get the hang of it - unRAID is a really intuitive and impressive piece of software. The biggest learning curve for me was the fact that each container has its corresponding paths. Once it clicks for you though, you're set. Very foolproof.

  • Docker containers being *pun intended* contained from one another is great. If something absolutely fucks up - just delete the container. Start over. No other containers will be affected.

  • Cache performance is incredible - using Plex on my devices and seeing all the metadata come up for movies is so much more responsive than on Windows 11 - my god.

Things I don't like after 1 week:

  • The initial learning curve (not sure how you would change this - but I didn't like it!).

  • Split level not being explained very well either within the software, or online. I figured it out before the install based on some videos and forum posts of course. But it was still a headache at the time.

  • For whatever reason, if I have my empty hard drive dock plugged in at boot, the server does not boot and hangs. Unplugging the dock eliminates this issue.

Questions I have

  • I am currently transcoding in RAM (/tmp). Would it be better to transcode in cache (appdata)?

  • I have installed the Tips and Tweaks utility, which by default had my turboboost set to on, however - on the dashboard - the base clock still appears. How do I confirm the CPU is being boosted when it needs to be?

  • If my fiancée would like to access her share remotely (via Windows network SMB) - is it simply a matter of installing Tailscale on unRAID and on her computer (both registered on the website, of course)?

  • Can I make a share have a limit? My son will be making some videos and art. I would like to make him a share capped out at 500GB. Is this do-able?

  • I have a share for family pictures and videos. This is the only share I would like to have a cloud backup of. I'm sure there is a simple answer. But how would I go about having unRAID automatically upload to cloud storage (such as google drive - but I'm open to any provider).

Thanks guys! Glad to be a part of this community.

5
 
 

The original was posted on /r/unraid by /u/WonderingWhenSayHi on 2023-10-04 12:48:38.


Recently discovered AzerothCore, a docker container that allows you to spin up your own Solo World of Warcraft private server.

Looks great and would be really fun to play World of Warcraft with my family.

My question is, has anyone here done it? If so, any tips I need to know before doing it?

Secondly, if I wanted to give my brother access to play with me (he lives in a different city at the minute) what is the best and most secure method of doing this?

6
 
 

The original was posted on /r/unraid by /u/MarkPugnerIII on 2023-10-03 20:08:21.


TLDR: I think 6.12 is not properly handling OOM errors, causing hard locks

I've had zero luck with 6.12. It would always just hang and require a hard reboot. Then I'd downgrade back to 6.11.5, wait for the next 6.12.x, and try again - same result. The longest I'd make it is maybe an hour or two before it would lock up.

I also have been having issues with OOM errors on 6.11.5 that I could never fix. I finally figured out it was the Unassigned Devices plugin causing that. I removed and reinstalled that and the OOM went away in 6.11.5. After a day of no OOM errors (I always got them within 30 minutes of a reboot and then occasionally after that) I decided to try 6.12.4 again.

I've been up and running with no issues or lockups on 6.12.4 for a few hours now. That's the longest I've got on 6.12 with no lockups.

It seems like the lockups on 6.12 occurred about the same times the OOM errors occurred in 6.11.5. Fixing the OOM errors I had in 6.11.5 seems to have fixed the lockups in 6.12.

If I can make it a day without 6.12 locking up, I'll consider that the likely fix. It could be the cause of other people's issues with 6.12 locking up on them too.

7
 
 

The original was posted on /r/unraid by /u/TMWNN on 2023-10-01 16:56:48.


BACKGROUND

For some years my home environment has been VMs on VirtualBox on Linux on HP MicroServer Gen7. I've run a very large software RAID array in the past, but it's been 12TB in JBODs on Gen7, mostly because I would have to back up the data, create a RAID array (and lose 25% of capacity), then restore.

I'd risked my data long enough. After obtaining a large-enough external drive I began the long process (whether the drive is plugged into USB 2 on Gen7 or USB 3 on Raspberry Pi via wireless) of backing up the JBODs. I needed to decide what OS to run. My choices:

  • Stay with the existing Linux setup, but just create an mdadm software RAID as in the past. Pro: Minimal work needed. Con: I wanted ZFS if possible, but wanted it built into the OS, as opposed to ZoL. Further, I wanted the overall environment to be as self-contained as possible.
  • Install VMware ESXi or Proxmox. Pro: Super-sophisticated mature VM setup. Con: Gen7 doesn't have hardware passthrough, and I'd heard horror stories about attempting to run ZFS (or software RAID in general) from a VM via RDM.
  • Install TrueNAS (Scale, because I have no experience with BSD). Pro: Self-contained, sophisticated, mature, and with native ZFS support the lack of hardware passthrough would not be a problem. Con: Not much, except the inflexibility of ZFS (or standard mdadm RAID) regarding future expansion. Did I really want to again lock myself into a single-sized array until and unless I build an entirely new, separate one?

UnRAID emerged late in the process. I'd heard of it going back 15 years but had never seriously looked at it. I knew of the flexible RAID expansion ability, but think that said flexibility contributed to my dismissing it in the past because it sounded like a gimmick; after all, if it were so easy to do reliably, surely ZFS and mdadm would have added the ability a long time ago? But the uniformly positive reviews persuaded me to purchase a Basic license.

THOUGHTS

Overall, UnRAID has been excellent.

Pro:

  • Ease of use. The UI is simple and straightforward. Having help blurbs available almost everywhere is great.
  • The documentation, while not complete, is very detailed yet friendly and approachable.
  • The enormous amount of community discussion, both here and at the UnRAID forums. I was impressed by how said forums a) go back almost all the way to UnRAID's debut, and b) someone took the time to make sure lime-technology.com/forum/ links still work despite the migration to new forum software. Thoughtful touches like that are evident in many places in UnRAID.
  • Being able to search for and install Docker containers and plugins from the UI is great.
  • Updates with significant new functionality, such as ZFS. I subscribe to commercial software that hasn't received new functionality (or real bug fixes) in years on anywhere near the scale that UnRAID regularly gains.

Con:

  • The biggest disappointment is the VM support. I had been happy with VirtualBox, but KVM in UnRAID is a "real Type 1 hypervisor" so had to be better, amirite fellas? Boy, was I shocked when I saw how poor the VM functionality is compared to VirtualBox.

(Yes, I know that UnRAID's GUI does not contain all the functionality virsh has (I've heard that the next major release will improve on this). None of that changes the fact that KVM snapshots don't survive beyond reboots, making it impossible to do what I took for granted when running VMware or VirtualBox: Rebooting the server and expecting that my VMs will gracefully and automatically save, restart, and continue on.)

  • As I wrote elsewhere:

Unraids "Cache" has always felt like a joke to me, or a lie. The Unraid "cache" drives are only caches in the most indirect sense. They don't speed up reads, they're not read-ahead or read-behind, or write-through, or like, any of the types of systems that we commonly associate with caches.

Yes. I'm new to UnRAID, but almost immediately noticed its idiosyncratic use of "cache". It should be called "fast storage", "high-priority storage", or something else - not a word that has a specific, well-established definition in computing of short-term, transient storage.

As I understand it TrueNAS uses SSD in the traditional cache sense.

I realize that for the typical UnRAID use case as a media server the TrueNAS cache approach isn't necessarily very helpful. But it would still be preferable in terms of architectural hygiene, if that makes sense, especially given ...

  • mover's general ricketiness. Such functionality should in no way be dependent on a userspace shell script, of all things. I keep backups going back years in a homegrown variant of rsnapshot, which means many, many hard links, and mover just gets overwhelmed, running forever and ever. mover also fails to move socket files, causing the directory tree they are in to stay on the cache even if everything else is moved.
  • The built-in support for Apple Time Machine, as appearing in the official documentation, is flat-out broken. I did multiple full backups of my MacBook, each time finding that subsequent backups would not work, thinking that I'd done something wrong, erasing the backup, and starting over again. A search for discussions on the topic revealed that it wasn't just me. Thank goodness for mbentley's TimeMachine container, which actually works; as another said, it really, really needs to be adopted as the official method, whether by actual incorporation into UnRAID or at least in the official documentation the way other user-contributed utilities are cited.

CONCLUSION

The above criticisms come in a spirit of wanting a good thing to be even better. I meant what I said about UnRAID's overall high quality. I didn't list the most important pro, the flexible expansion, above because it's the whole raison d'être for UnRAID in the first place. But to paraphrase Seinfeld, "it's real, and it's spectacular". I finished moving the data back onto the new UnRAID array last week and almost immediately needed more space. I am in the process of swapping in larger drives, one at a time; knowing in advance that it works in theory is not the same as actually experiencing the seemingly miraculous ability to do so. Still time-consuming, but not as much time (or money for the resulting capacity) as the alternative of again migrating off the array, rebuilding it completely with new disks, then migrating back on. Thank you, UnRAID.

8
 
 

The original was posted on /r/unraid by /u/yooames on 2023-09-30 14:36:33.

9
 
 

The original was posted on /r/unraid by /u/dcoulson on 2023-09-29 22:20:26.

10
 
 

The original was posted on /r/unraid by /u/JaquelynElliot on 2023-09-27 20:44:30.

11
 
 

The original was posted on /r/unraid by /u/moarmagic on 2023-09-27 14:30:26.


I have a 12-bay Dell R510 that's served as my Unraid box for a good 6+ years, but I've hit the point where I'm running low on space, and further hard drive upgrades don't feel economically viable (i.e., my smallest disk is a 10TB, so I'd be paying for a 12+ TB drive and only gaining the difference). I am stuck on my upgrade path from here, so I'm looking for feedback/thoughts.

Options I see:

  1. Buy a bigger case - there are 24- and 36-bay Supermicros out there, but these seem to run around 300-500+ (everything under 500 being older 6Gb/s hardware, 500+ having the 12Gb/s backplane) for bare metal, maybe including PSU, so I'm estimating I may have to put an additional 300-ish into new hardware for it. This option would allow me to move everything and double or triple my current space. I could also keep the existing R510 around and throw some hard drives onto it - make it a personal file server on TrueNAS or something, taking some of the pressure off my Unraid server. Rough cost estimate: 600-800 depending on the hardware I settle on, 2-3x my existing storage, and easier replacement options for the mobo etc.
  2. Leaving the server cases out of it, there are some other options out there - the Fractal Define, Thermaltake's overkill modular 200 series. I'm mostly writing these off because I'd still be looking at 200+ for a case with zero extra hardware. While the Define can take 18 drives, that's 1.5x my current storage at roughly the same price as getting the Supermicro and kitting it out, I think. Might as well stay rackmount and get more space for my money, barring some insane deal popping up somewhere.
  3. I could pick up another 12-bay rackmount. R720xds are hitting dirt cheap, and I've seen some cheaper-priced R730xds on occasion. With some luck this could get me a system on newer hardware with better CPU/RAM than where I sit now, for around 500 total. However, I'm only doubling my storage (assuming I also keep the R510 around). I'm really not sure how old is too old for home server gear; most wear and tear is on the spinning rust, which is the part I've kept upgrading.
  4. Related to that last point, I could look at throwing together some sort of DAS. My Unraid server's hardware is mostly sitting at 50% utilization, so if I could keep trusting the R510 to tick along without dying on me, I'd just need a power supply, external SAS ports and some sort of case. Cost here is kind of unknown. I've seen a project with 3D-printed brackets designed to mount inside a standard ATX tower, adding 18 drive bays. This probably sits at the cheapest, but would also leave all my eggs in one older basket - if I picked up a second 12-bay device, I'd at least have two machines in case the R510 abruptly gave up the ghost. So maybe $200 USD, but it feels like a bandaid fix; I'll still need to spend more money somewhere down the line.

I think that's about it. One moment I'm thinking that a DAS would be okay; next I'm worried that should my server die, I'm going to need to spend the money on options 1-3 anyway. Then I'm thinking that 24+ bays is kind of ridiculous and I should save the money and just get another 12-bay, but that only really addresses my storage problem if I keep both the new and old NAS running - otherwise I'm just upgrading the mobo/RAM, which aren't actually the pain point right now. So I'd like to get some insight from other power users out here in Unraid land: what do you do when it's less viable to keep replacing disks with larger disks?
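The replacement-disk math in the first paragraph ("paying for a 12+ TB drive and only getting the difference") can be made concrete with a quick sketch; the prices below are made up for illustration:

```python
# Quick sketch of the upgrade math described above: swapping a drive in a
# full array only gains you (new size - old size), so that difference is
# what the new drive's price really buys. Prices are illustrative.
def cost_per_added_tb(new_tb: float, old_tb: float, price: float) -> float:
    added = new_tb - old_tb
    if added <= 0:
        raise ValueError("replacement must be larger than the old disk")
    return price / added

# Replacing a 10TB disk with a hypothetical $220 16TB drive:
print(round(cost_per_added_tb(16, 10, 220), 2))  # → 36.67 dollars per net TB
```

Compared against an empty bay, where the same drive would add all 16TB at under $14/TB, this is why replacing the smallest disk in a full chassis feels so expensive.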

12
 
 

The original was posted on /r/unraid by /u/The--Marf on 2023-09-27 03:14:14.


Hi All,

Still relatively new to Unraid/Linux/Docker etc. I've had my server up and running for a few months and I'm looking to add a new project to it. The only experience I have has been setting up a few of the basic arrs, but I managed to make my way through it with the help of some friendly folk on Reddit.

I've started using IPTV and I'm really enjoying the Tivimate app and experience. It can record straight to an SMB share, which is awesome.

Problem to solve: I need an android device that is always on with Tivimate running in order to catch recordings etc. I'd like to have this running on my Unraid server using as few resources as possible.

Is there a good way to setup a VM or some sort of docker that would allow me to run the app 24/7?

Thanks in advance and let me know if there is additional information that would be helpful.

13
 
 

The original was posted on /r/unraid by /u/19wolf on 2023-09-27 03:09:41.

14
 
 

The original was posted on /r/unraid by /u/bitzie_ow on 2023-09-27 03:01:56.


So for several years I've used an HP 290 with a few external drives for my Plex server. It's actually worked great. The obvious downside is having absolutely no data protection. After seeing a post about serverpartdeals and seeing how much cheaper drives are through them, I bit the bullet and put together a new server using my old pc that my girlfriend was using. (She is now using a much newer laptop.)

System wise it's nothing special. Intel i5-2500K, 8GB ram, Quadro P620, 650W Corsair psu, with a Phanteks Enthoo Pro II case. Currently I have 8 14TB WD DC HC530 drives with two being parity and one 2TB ssd for cache. Eventually I will probably shuck the externals from the old system and add them in the lower drive bays and might also grab another SSD for the cache. Further down the line I'll upgrade the mobo, cpu, ram, psu.

For now it's only running Plex, but I will look into what else I could need/want to run with it.

Very happy with UnRaid so far! It was a bit annoying to get Plex set up the way I like it and to use the P620 for transcoding, but it's all working amazingly well now.

15
 
 

The original was posted on /r/unraid by /u/Other_Animal on 2023-09-26 16:39:20.


I set up binhex-deluge a long time ago and accidentally uninstalled all of my docker apps, though I had a backup of the appdata folder. If I reinstall binhex-deluge are all my settings the same or do I have to remember how to set up a VPN again?

16
 
 

The original was posted on /r/unraid by /u/danimal1986 on 2023-09-26 16:10:49.


I'm using the official Plex Inc docker and never had a problem with remote streaming until the last few weeks with recent updates.

I was noticing that some of my remote streams were at 2mbps when I have the quality set to 8mbps. It looks like it was doing an indirect play (through Relay, I'm assuming).

When I went into the Plex settings, sure enough there's a big red ! next to remote access. When I click the Retry button next to the Manually Specified Port (I don't have that checked), everything goes back to normal and I see all green.

I haven't changed any network settings in a few years; it's just worked. Any ideas?

Edit: May have been solved. I run a VPN setup through the WireGuard that's baked into Unraid. I do not have Plex set up to use the VPN tunnel (only SAB and qBittorrent), but I disabled the VPN and BAM - everything's green and back to normal. No idea why that would have changed anything. I have the Plex docker set to Host.

17
 
 

The original was posted on /r/unraid by /u/blue2020xx on 2023-09-26 14:54:01.

18
 
 

The original was posted on /r/unraid by /u/dcchillin46 on 2023-09-26 14:15:05.


I'm new. I've managed to get most things up and running but for the life of me I cannot get a vm to work. I have bought new hardware, tried every setting, posted multiple times on the sub and forums. I've run out of ideas and google things to try.

I built this PC just a few weeks ago: 12700K, Z690, 32GB RAM, RX 580, 750W PSU. Updated Unraid from 6.12.2 to 6.12.4. IOMMU and VT-d enabled, updated BIOS, tried the newest GPU drivers, multiple vBIOSes. In VM creation I have tried every setting: Q35 and i440fx machine types, VNC, SPICE, GPU as primary and secondary. I even borrowed a 1070 from a buddy and got the same results. Tried both W10 and W11. I can run a 5-minute stress test through Radeon software, but as soon as I start a 3DMark run it locks up the system. I haven't been able to get a game to launch either.

Every time I add the GPU I've passed to VFIO, I get a repeating error on startup that continues until shutdown:

2023-09-14T23:41:25.917890Z qemu-system-x86_64: vfio_dma_map(0x14e477e5e600, 0x381000000000, 0x200000000, 0x14e26ec00000) = -22 (Invalid argument)

2023-09-14T23:41:25.918364Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument

2023-09-14T23:41:25.918371Z qemu-system-x86_64: vfio_dma_map(0x14e477e5e600, 0x381200000000, 0x200000, 0x14e26d800000) = -22 (Invalid argument)

2023-09-14T23:41:25.982628Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument

From what I've read, I'm pretty sure it's unable to map the GPU memory. I've read 6.11.5 fixes it for some, but I started on 6.12 so I hesitate to downgrade. Also, I know people are running Windows VMs on 6.12, so I don't know why mine refuses to work. Others have suggested changing QEMU, but I have no idea how to do that.
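For what it's worth, the two arguments after the handle in those vfio_dma_map lines are the guest IOVA and the mapping size, so the failures can be decoded with a few lines of Python. This is a sketch over the log quoted above; reading the 8 GiB region as GPU memory being mapped above 4G is an assumption consistent with the "can't map the GPU memory" theory:

```python
import re

# Decode the failing ranges from the vfio_dma_map lines quoted above.
# The second captured field is the mapping size, which tells us how much
# memory each failed map covers.
LOG = """\
vfio_dma_map(0x14e477e5e600, 0x381000000000, 0x200000000, 0x14e26ec00000) = -22 (Invalid argument)
vfio_dma_map(0x14e477e5e600, 0x381200000000, 0x200000, 0x14e26d800000) = -22 (Invalid argument)
"""

pat = re.compile(r"vfio_dma_map\(0x[0-9a-f]+, (0x[0-9a-f]+), (0x[0-9a-f]+),")
for iova, size in pat.findall(LOG):
    print(f"iova {iova}: {int(size, 16) // (1 << 20)} MiB failed to map")
# → iova 0x381000000000: 8192 MiB failed to map
# → iova 0x381200000000: 2 MiB failed to map
```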

Is it possible to do a clean unraid install? Maybe save my jellyfin settings, dump the rest, and start fresh?

It's gotten to the point where I'm willing to pay someone to fix this. It is driving me crazy, I've put significant money and time (hours a night for weeks) into an unraid server and can't get one of the main features, a vm, to function. I feel like I'm going crazy.

If you need any more information please let me know and I'll provide what I can.

Edit:

Most recent xml:

Unraid config:

my most recent log:

Things I've tried since making this post:

  • Remove iso, ensure vdisk as boot 1
  • multifunction, gpu/audio same device (xml edit)
  • remove gpu from vfio, reboot, add to vm, run msi tool
19
 
 

The original was posted on /r/unraid by /u/cvandyke01 on 2023-09-26 06:12:44.


I have my box running with multiple VMs all sharing the same video card. Only one VM can run at a time, so I have to shut down one VM and then start another, and I have to do it from a different box.

I was thinking it would be cool to have an Elgato Stream Deck mini attached to the Unraid box and passed through to a Windows VM. From that VM I could have scripts that would shut off and restart VMs in Unraid.

I was wondering if anyone had done something like this and is there a cookbook of Python scripts for Unraid somewhere?
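On the scripting side: Unraid's VM manager sits on top of libvirt, so plain virsh commands cover the stop-one/start-another pattern. A hedged sketch follows - the VM names are made up, and the runner is injectable so the example can dry-run on a machine without libvirt:

```python
import subprocess

# Only one VM can own the GPU, so "switching" means: shut one down, start
# the other. A real script should poll `virsh domstate <vm>` until the
# first VM is actually off before starting the second; omitted for brevity.
def switch_vm(stop: str, start: str, run=subprocess.run) -> None:
    for cmd in (["virsh", "shutdown", stop],   # graceful ACPI shutdown
                ["virsh", "start", start]):
        run(cmd, check=True)

# Dry run: capture the commands instead of executing them.
issued = []
switch_vm("win10-gaming", "ubuntu-lab", run=lambda cmd, check: issued.append(cmd))
print(issued)
# → [['virsh', 'shutdown', 'win10-gaming'], ['virsh', 'start', 'ubuntu-lab']]
```

A Stream Deck button could trigger something like this over SSH from the Windows VM back to the Unraid host, rather than needing a Python cookbook specific to Unraid.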

20
 
 

The original was posted on /r/unraid by /u/mdaname on 2023-09-25 22:11:53.

21
 
 

The original was posted on /r/unraid by /u/mdaname on 2023-09-25 19:39:13.

22
 
 

The original was posted on /r/unraid by /u/mcking230 on 2023-09-25 20:12:40.

23
 
 

The original was posted on /r/unraid by /u/minektur on 2023-09-25 18:43:58.


Recently I posted about my rebuilt unraid array that idles at 35W compared to the power guzzler that I had before.

MANY people recommended to me to not use my 4 SSDs as the array, but to make a zfs-cache pool with it instead. The issues are with eventual poor performance due to lack of TRIM at best, or data-loss at worst, depending on the specifics of my SSDs.

I backed up all the data, nuked the config, and made a new pool. I have two identical NVMe drives and two identical 2.5" SATA drives, which are now in a 2x2 mirrored ZFS pool. I copied all the data back, but of course you can't operate with just a pool - you MUST have an array drive. With a USB motherboard header adapter I put an ancient 2GB USB stick inside (along with my Unraid USB stick) - no more sticking out the back of the system. I'm happy to have all that stuff internal.

not this exact one, but one very like it:

It's kind of annoying that I have to have a usb "drive" in my array, that the array has to exist - it's not a big deal, but just a minor annoyance.

It also took a little while of fiddling to get ALL the data on the cache rather than the fake-noparity-usb-array. (notes for my future self if I need to repeat this)

  1. disable VMs and Docker in config
  2. Set all the shares to be "array->cache"
  3. run mover
  4. double check that the array is really empty
  5. re-enable VMs/Docker in config
  6. Set all shares to be "cache only"

I might set it up so that the boot/license usb stick gets tar-gz'd onto the actual array usb stick weekly or something - but otherwise, no data on a 15+ year old USB2 stick as my array.

Anyway, my data is all moved back and things are humming along. I also replaced my demo USB stick with my old licensed one (SanDisk Cruzer). I backed up the old stick and the new demo stick, then basically copied the entire contents of the new stick onto/over the files of the old, made sure my license file was still there, and booted up - it worked first try.

The only real issue I have left is that I have a DMZ on my firewall that was plugged into a second NIC in the old system, configured as a bridge for my VMs. Since part of this downgrade was to make it into a 1U shallow-depth case, I ended up in the short run with a USB network adapter zip-tied to the back of the case. Theoretically it will change to one port of an 82576-based 2-port gig card once my PCIe flexi-riser arrives. Cabling won't be perfect, but it'll all fit in the little telecom rack I've got, and once mounted I hope to just not think about it :) I don't need amazing network speeds for these VMs - they're mostly for me playing around, doing research, etc. Not a lot of data in and out - mostly just me via SSH sitting on an open terminal...

Lastly, I am still considering whether undervolt-underclock is worth pursuing for more power reduction.

Anyway, thanks for the feedback and suggestions everyone.

24
 
 

The original was posted on /r/unraid by /u/valain on 2023-09-25 13:18:33.


~~[SOLVED, see below]~~

[Alas not solved]

Hello!

I moved my array to a new server (new mobo, CPU, RAM, etc). Since the move I first got on average 1 SMART error of type "udma CRC" on the parity drive per day; no other issues, parity builds just fine, etc.

Today I swapped the SATA cables of the parity drive with one of the other drives in the array to find out whether it's disk, or cable/controller related. I started a parity rebuild to check.

Now I got TWO of the above errors, one still on the parity drive and the other on the drive I swapped the cables with initially.

So, I'm almost 100% confident that it's actually not the disks, but rather an issue with the cables or the controller... and cables would be weird because before the cable swap, it was only one drive exhibiting the issue. So, controller?

What would be my next troubleshooting step... replace the 2 cables on the now affected drives, or switch these two cables with the other two drives I have in the array (total of 4 drives) ?

I've read somewhere else that some people never got rid of these errors and just de-activated Unraid reporting them.... that doesn't really sound like an option because if errors get reported (even not frequently), there still seems to be something wrong.

Thanks!

UPDATE 2 : Alas, not solved... 30% into the parity check, I now got a 3rd drive report a CRC udma error.... So Now I have had CRC errors popping up on 3 different drives, not sharing any cables. This seems to point towards a problem with the built-in SATA controller on the mainboard (Asus Rog Strix Gaming Z690-G Wifi)?

~~UPDATE: SOLVED!~~ [No, it's not]

Typical case of error in front of the keyboard, or rather in front of the power cables. Initially I had 2 Y cables coming off a single SATA power cable, thus powering 4 drives on a single cable 🤯. I really didn't notice this mistake when I built the machine. I have now added an additional power cable, so that each power cable feeds two drives each, and now the CRC errors are gone. Feeling stupid, but at the same time smarter now 🥳

25
 
 

The original was posted on /r/unraid by /u/SlowThePath on 2023-09-25 06:48:58.
