I’m just getting started on my first setup. I’ve got Radarr, Sonarr, Prowlarr, Jellyfin, etc. running in Docker, reading/writing their configs to a 4TB external drive.
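Roughly, each container looks like this (a simplified sketch; the image name and the /mnt/external paths are stand-ins for my actual config):

```
# Simplified sketch of one service; /mnt/external is the 4TB drive's mount point.
# Config and data both live on the external drive.
docker run -d --name radarr \
  -v /mnt/external/config/radarr:/config \
  -v /mnt/external/data:/data \
  lscr.io/linuxserver/radarr:latest
```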
I followed a guide to ensure that hardlinks would be used to save disk space.
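The key detail from that guide is that hardlinks only work within a single filesystem, which is why downloads and the library share one mount. You can check that a link actually happened by comparing inodes (paths are examples):

```
# A hardlink is the same inode under two names; it works only within one
# filesystem and costs no extra space. Compare inode (%i) and link count (%h):
stat -c '%i %h %n' /mnt/external/data/torrents/Movie.mkv \
                   /mnt/external/data/media/movies/Movie.mkv
# Matching inodes with a link count of 2 confirm the hardlink.
```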
But what happens when the current drive fills up? What is the process to scale and add more storage?
My current thought process is (rough commands sketched after the list):
- Mount a new drive
- Recreate the data folder structure on the new drive
- Add the path to the new drive to the jellyfin container
- Update existing collections to look at the new location too
- Switch (not add) the volume for the *arrs data folder to the new drive
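In rough commands (new drive at /mnt/disk2, image names as examples, old containers removed before recreating), that would look like:

```
# New drive mounted at /mnt/disk2 (placeholder); recreate the folder structure:
mkdir -p /mnt/disk2/data/torrents /mnt/disk2/data/media/movies /mnt/disk2/data/media/tv

# Jellyfin gets the new library path *in addition to* the old one:
docker run -d --name jellyfin \
  -v /mnt/external/data/media:/media \
  -v /mnt/disk2/data/media:/media2 \
  jellyfin/jellyfin:latest

# ...while the *arrs get the new path *instead of* the old one:
docker run -d --name radarr \
  -v /mnt/external/config/radarr:/config \
  -v /mnt/disk2/data:/data \
  lscr.io/linuxserver/radarr:latest
```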
Would that work? It would mean the *arrs no longer have access to the actual downloaded files. But does that matter?
Is there an easier, better way? Some way to abstract away the fact that there will eventually be multiple drives? So I could just add on a new drive and have the setup recognize there is more space for storage without messing with volumes or app configs?
Add another vdev to my ZFS zpool. No changes to the filesystem or Jellyfin.
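Sketch, assuming a pool named "tank" and a new pair of disks:

```
# Add a new mirror vdev and the pool grows in place; every dataset
# (and thus the library path Jellyfin sees) gets the extra space immediately.
zpool add tank mirror /dev/sdc /dev/sdd
zpool list tank    # capacity reflects the new vdev right away
```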
You can’t remove drives from a zpool though. So if you start with a small drive and keep adding drives as you fill them up, you’ll eventually run out of SATA ports and want to replace the smallest drive. The only way to do that is to create a new zpool and copy all of your data to it, which means you need a second set of drives that’s at least as big as the first.

Or you could add a PCIe SATA/SAS card, if you have a spare PCIe slot. Used cards like the Dell PERC H310 are cheap and reliable and support 8 drives on their own, or >256 with cheap expander cards that can be daisy-chained (and which only need power, so they don’t use up PCIe slots).
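The copy itself is usually done with a recursive snapshot and zfs send/receive (pool names here are placeholders):

```
# Replicate an old pool ("tank") to a new one ("newtank").
# -R sends all child datasets and snapshots; -F rolls the target back if needed.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank
```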
Edit: looks like they added support for removing drives about 5 years ago.
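That’s top-level vdev removal, which landed in OpenZFS 0.8 (2019). Something like this, with pool and vdev names as examples:

```
# Evacuate a top-level vdev onto the remaining ones, then detach it.
# Caveat: not supported in pools that contain raidz vdevs.
zpool remove tank mirror-1
zpool status tank    # shows the evacuation progress
```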
I prefer M.2 PCIe cards, but same deal, expansion go brrr
Can also increase the size of a redundant vdev (e.g. raidz2) by replacing the drives one by one with larger ones. I recently used this approach to grow my vdev from 4TB to 72TB.
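The one-by-one swap looks like this (pool and device names are examples):

```
# Replace each disk in the raidz2 vdev in turn, letting the resilver
# finish before swapping the next one.
zpool set autoexpand=on tank
zpool replace tank /dev/sdb /dev/sdg   # old disk -> larger disk
zpool status tank                      # wait for resilver, then do the next disk
# The vdev only grows once every member disk has been replaced.
```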