I’m still running a 6th-generation Intel CPU (i5-6600K) in my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still the OS from when this machine was my gaming PC, and I want to switch to Linux. I’m a casual Linux user on my personal machine, and I also run OpenWRT on my network hardware.
Here are the few features I need:
- MergerFS with some form of RAID-like redundancy. I use multiple 12TB drives right now, with my media types split between them. I’d like one pool so I can be flexible with space between each share (rough sketch of what I have in mind after this list).
- Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
- I’d like to start working with Home Assistant. Installing with WSL hasn’t worked for me, so switching to Linux seems like the best option for this.
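Roughly what I have in mind for the pool, pairing MergerFS with SnapRAID for the redundancy (a sketch with hypothetical mount points, not a tested config):

```
# /etc/fstab — pool the existing 12TB data drives into one mount (example paths)
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  cache.files=partial,dropcacheonclose=true,category.create=mfs,fsname=mergerfs  0 0

# /etc/snapraid.conf — one dedicated parity drive provides the redundancy
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```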
Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I’m concerned about performance on my 6600k. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I do standard Debian or even OpenMediaVault?
I’m comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.
Yeah, and QEMU and LXC are very much legacy at this point. Stick with Docker/Podman/Kubernetes for containers.
QEMU is legacy? Pray tell, how are you running VMs on architectures other than x86 on modern computers without QEMU?
Not QEMU in particular, poor phrasing on my part. I just mean setting up new environments that run applications on VMs.
I prefer some of my applications to be on VMs. For example, my observability stack (ELK + Grafana), which I like to keep separate from other environments. I suppose the argument could be made that I should spin up a separate k8s cluster if I want to do that, but it’s faster to deploy directly on VMs, and there are also fewer moving parts (I run two 50-node k8s clusters, so I’m not averse to containers, just saying). Easier and relatively secure tool for the right job. Sure, I could mess with cgroups and play with kernel parameters and all of that jazz to secure k8s more, but why bother when I can make my life easier by trusting Red Hat? Also, I’m not yet running a k8s version that supports SELinux, and I tend to keep it enabled.
Yeah I’m not saying everybody has to go and delete their infra, I just think that all new production environments should be k8s by default.
The production-scale Grafana LGTM stack only runs on Kubernetes fwiw. Docker and VMs are not supported. I’m a bit surprised that Kubernetes wouldn’t have enough availability to be able to co-locate your general workloads and your observability stack, but that’s totally fair to segment those workloads.
I’ve heard the argument that “kubernetes has more moving parts” a lot, and I think that is a misunderstanding. At a base level, all computers have infinite moving parts. QEMU has a lot of moving parts, containerd has a lot of moving parts. The reason why people use kubernetes is that all of those moving parts are automated and abstracted away to reduce the daily cognitive load for us operations folk. As an example, I don’t run manual updates for minor versions in my homelab. I have a k8s CronJob that runs renovate, which goes and updates my Deployments in git, and ArgoCD automatically deploys the changes. Technically that’s a lot of moving parts to use, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.
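As a rough sketch (repo URL and names are hypothetical), the ArgoCD half of that loop is just an Application with automated sync, so whatever Renovate commits to git gets rolled out:

```
# ArgoCD Application: watch a git path and automatically sync changes to the cluster
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: media-stack                 # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/homelab/manifests.git   # hypothetical repo
    targetRevision: main
    path: apps/media-stack
  destination:
    server: https://kubernetes.default.svc
    namespace: media
  syncPolicy:
    automated:
      prune: true        # delete resources removed from git
      selfHeal: true     # revert manual drift back to the git state
```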
I’m not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.
Yeah, AppArmor/SELinux isn’t very popular in the k8s space. I think it’s easy enough to use them, and there’s plenty of documentation out there, but OpenShift/OKD is the only distribution that runs them out of the box.
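For reference, it’s mostly just a securityContext on the pod; a minimal sketch, assuming the nodes run SELinux in enforcing mode (name and image are examples):

```
# Pod-level securityContext: run confined under SELinux plus the default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: demo                         # hypothetical
spec:
  securityContext:
    seLinuxOptions:
      type: container_t              # standard container domain on SELinux hosts
      level: "s0:c123,c456"          # example MCS categories
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.27              # example image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```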
By more moving parts I mean:
Running ElasticSearch on RHEL:
In k8s:
Maybe it’s just me but I find option 1 easier. Maybe I’m just lazy. That’s probably the overarching reason lol
You’re not using a reverse proxy on RHEL, so you’ll also need to make sure the ports you want are available, set up a DNS record for it, and set up certbot.
On k8s, I believe Istio gateways are meant to be reused across services. You’re using a reverse proxy, so the ports will already be open and there’s no need for firewall-cmd. What would be wrong with the Service included in the elasticsearch chart?
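For what it’s worth, reusing a shared gateway is just a VirtualService pointing at the chart’s Service; a sketch, assuming Istio and the default Service name from the elasticsearch chart (which may differ in your setup):

```
# VirtualService: attach elasticsearch to an existing, shared Istio gateway
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: elasticsearch
  namespace: elastic                  # hypothetical namespace
spec:
  hosts:
    - es.example.com                  # hypothetical hostname
  gateways:
    - istio-system/shared-gateway     # the gateway already serving other apps
  http:
    - route:
        - destination:
            host: elasticsearch-master   # Service created by the chart (name may vary)
            port:
              number: 9200
```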
It’s also worth looking at the day 2 implications.
For backups you’re looking at bespoke cronjobs to either rsync your database or clone your entire 100GB disk image, compared to either using velero (sketched below) or backing up your underlying storage.
For updates, you need to run system updates manually on RHEL, likely requiring a full reboot of the node, while in Kubernetes, renovate can handle rolling updates in the background with minimal downtime. Not to mention the process required to find a new repo when RHEL 11 comes out.
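For the velero option, a nightly backup is a single object; a rough sketch (hypothetical namespace, and it assumes a BackupStorageLocation is already configured):

```
# Velero Schedule: nightly backup of the elasticsearch namespace, kept for 30 days
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: elasticsearch-nightly
  namespace: velero
spec:
  schedule: "0 3 * * *"              # cron syntax, 03:00 every night
  template:
    includedNamespaces:
      - elastic                      # hypothetical namespace
    ttl: 720h0m0s                    # 30-day retention
```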
I am using a reverse proxy in production. I just didn’t mention it here.
I’d have to set up a DNS record for both. I’d also have to create and rotate certs for both.
We use LVM; I simply mounted a volume at /usr/share/elasticsearch. The VMware team will handle the underlying storage.
I agree about manually dealing with the repo. I don’t think I’d set up unattended upgrades for my k8s cluster either, so that’s moot. Downtime is not a big deal: this is not external and I’ve got 5 nodes. I guess if I didn’t use Ansible it would be a bit more legwork, but that’s about it.
Overall I think we missed each other here.
Well, my point was to explain how Kubernetes simplifies devops to the point of being simpler than most proxmox or Ansible setups. That’s especially true if you have a platform/operations team managing the cluster for you.
Some more details missed here would be that external-dns and cert-manager operators usually handle the DNS records and certs for you in k8s, you just have to specify the hostname in the HTTPRoute/VirtualService and in the Certificate. For storage, ansible probably simplifies some of this away, but LVM is likely more manual to set up and manage than pointing a PVC at a storageclass and saying “100Gi”.
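To make that concrete, here’s a hedged sketch of two of those pieces, a cert-manager Certificate and a 100Gi PVC (hostname, issuer, and storage class are hypothetical); external-dns then creates the DNS record from the host on the HTTPRoute/VirtualService:

```
# cert-manager Certificate: issues and renews a TLS secret for the hostname
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: es-cert                      # hypothetical
  namespace: elastic
spec:
  secretName: es-tls                 # where the issued cert/key are stored
  dnsNames:
    - es.example.com                 # hypothetical hostname
  issuerRef:
    name: letsencrypt-prod           # hypothetical ClusterIssuer
    kind: ClusterIssuer
---
# PVC: ask the storage class for 100Gi; the CSI driver provisions the rest
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
  namespace: elastic
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard         # hypothetical storage class
  resources:
    requests:
      storage: 100Gi
```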
Either way, I appreciate the discussion, it’s always good to compare notes on production setups. No hard feelings even in the case that we disagree on things. I’m a Red Hat Openshift consultant myself these days, working on my RHCE, so maybe we’ll cross paths some day in a Red Hat environment!
Right tool for the job, mate; not everything works great in a container.
Also, Proxmox is not legacy; it’s used a lot in homelabs and at some companies too.
I use Proxmox to carve up my dedicated host with OVH; 3 of the VMs run Docker anyway.
I’m not saying it’s bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.
All new build-outs are gitops and containerd-based containers now.
For the legacy VM appliances, Proxmox works well, but there’s also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.
If you need bare-metal, then usually that gets provisioned with something like packer/nixos-generators or cloud-init.
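For example, a minimal cloud-init user-data that takes a bare machine to “can run containers” might look like this (user, key, and package choice are illustrative):

```
#cloud-config
# Minimal user-data: create an admin user, install a container runtime, enable it
users:
  - name: ops                              # hypothetical user
    groups: [sudo]
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... ops@example    # placeholder key
packages:
  - podman
runcmd:
  - systemctl enable --now podman.socket
```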
Sometimes, VMs are simply the better solution.
I run a semi-production DB cluster at work. We have 17 VMs running and it’s resilient (a different team handles VMware and hardware).
I have 33 database servers in my homelab across 11 postgres clusters, all with automated barman backups to S3.
Here is the entire config for the db cluster that runs my Lemmy instance
This stuff is all automated these days.
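Roughly, such a CloudNativePG cluster with barman backups to S3 looks like the sketch below (placeholder names and bucket, not necessarily the exact config mentioned above):

```
# CloudNativePG Cluster: 3-instance postgres cluster with barman backups to S3
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: lemmy-db                     # hypothetical name
spec:
  instances: 3
  storage:
    size: 20Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://homelab-backups/lemmy   # placeholder bucket
      s3Credentials:
        accessKeyId:
          name: backup-s3-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-s3-creds
          key: SECRET_ACCESS_KEY
    retentionPolicy: "30d"
```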
Ah thanks, I’ll go through it!
Yes, but no. There are still a lot of places using old-fashioned VMs; my company is still building VMs from an AWS AMI and running Ansible to install all the stuff we need. Some places will move to containers and that’s great, but containers won’t solve every problem.
Yes, it’s fine to still have VMs, but you shouldn’t be building out new applications and new environments on VMs or LXC.
The only VMs I’ve seen in production at my customers recently are application test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like Openshift Virtualization so that the entire VM runs inside a container.
That’s a bold statement, VMs might be just fine for some.
Use what ever is best for you, if thats containers great. If that’s a VM, sure. Just make sure you keep it secure.
Some of us don’t build applications, we use them as built by other companies. If we’re really unlucky they refuse to support running on a VM.
Yeah, that’s fair. I have set up Openshift Virtualization for customers using 3rd party appliances. I’ve even worked on some projects where a 3rd party appliance is part of the original spec for the cluster, so installing Openshift Virtualization to run VMs is part of the day 1 installation of the Kubernetes cluster.
Why would you install a GUI on a VM designated to run a Docker instance?
You should take a serious look at what actual companies run. It’s typically nested VMs running k8s or similar. I run three nodes, with several VMs (each running Docker, or other services that require a VM) that I can migrate between nodes depending on my needs.
For example: One of my nodes needed a fan replaced. I migrated the VM and LXC containers it hosted to another node, then pulled it from the cluster to do the job. The service saw minimal downtime, kids/wife didn’t complain at all, and I could test it to make sure it was functioning properly before reinstalling it into the cluster and migrating things back at a more convenient time.
I’m a DevOps/Platform Engineering consultant, so I’ve worked with about a dozen different customers on all different sorts of environments.
I have seen some of my customers use nested VMs, but that was because they were still using VMware or similar for all of their compute. My coworkers say they’re working on shutting down their VMware environments now.
Otherwise, most of my customers are running Kubernetes directly on bare metal or directly on cloud instances. Typically the distributions they’re using are Openshift, AKS, or EKS.
My homelab is all bare metal. If a node goes down, all the containers get restarted on a different node.
My homelab is fully gitops, you can see all of my kubernetes manifests and nixos configs here:
https://codeberg.org/jlh/h5b
You are going to what, install Kubernetes on every node?
It is far easier and more flexible to use VMs and maybe some VM templates and Ansible.
Yes.
It is not easier to use Ansible. My customers are trying to get rid of Ansible.
Agreed.
I run podman w/ rootless containers, and it works pretty well. Podman is extra nice in that it has decent support for Kubernetes, so there’s a smooth transition path from podman -> kubernetes if you ever want/need it. Docker works well too, and docker compose is pretty simple to get into. Yeah, Kubernetes is more automated and expandable, but docker compose has a ton of good examples and it’s really easy to get into as a beginner.
Kubernetes is also designed for clustered workloads, so if you are mostly hosting on one or two machines, YAGNI applies.
I recommend people start w/ docker compose due to documentation, but I personally am switching to podman quadlets w/ rootless containers.
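For reference, a quadlet is just a small unit file; roughly what one looks like (image and port are examples):

```
# ~/.config/containers/systemd/whoami.container — rootless quadlet, runs as a user service
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest   # example image
PublishPort=8080:80
AutoUpdate=registry                     # let podman-auto-update pull new tags

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` it shows up as a normal user service (whoami.service), and it starts at boot if lingering is enabled for the user.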
Yeah, definitely true.
I’m a big fan of single-node kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it’s super helpful to use Kubernetes in a homelab even if you only have one node.
What’s so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?
Sure!
I haven’t used quadlets yet, but I did set up a few systemd services for containers back in the day before quadlets came out. I also used to use docker compose back in 2017/2018.
Docker Compose and Kubernetes are very similar as a homelab admin. Docker Compose syntax is a little less verbose, and it has some shortcuts for storage and networking. But that also means it’s less flexible if you are doing more complex things. Docker Compose doesn’t start containers on boot by default I think(?), which is pretty bad for application hosting. Docker Compose has no way of automatically deploying from git like ArgoCD does.
Kubernetes also has a lot of self-healing automation, like health checks that can either disable the load balancer and/or restart the container if an app is failing, automatic killing of containers when resources are low, preventing the scheduling of new containers when resources are low, gradual roll-out of containers so that the old version of a container doesn’t get killed until the new version is up and healthy (helpful in case the new config is broken), mounting secrets as files in a container, and automatic retry on failed containers.
There are also a lot of ubiquitous automation tools in the Kubernetes space, like cert-manager for setting up certificates (both ACME and local CA), Ingress for setting up a reverse proxy, CNPG for setting up postgres clusters with automated backups, and first-class instrumentation/integration with Prometheus and Loki (both were designed for Kubernetes first).
The main downsides of Kubernetes in a homelab are that there is about a 1-2GiB RAM overhead for small clusters, and most documentation and examples are written for docker-compose, so you have to convert apps into a Deployment (you get used to writing Deployments for new apps though). I would say installing things like Ingress or CNPG is probably easier than installing similar reverse-proxy automations on docker-compose, though.
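To illustrate the conversion, a single compose service roughly becomes a Deployment plus a Service like this (names and image are examples; the probes are the self-healing bits mentioned above):

```
# Deployment: the rough equivalent of a one-service docker-compose file,
# plus the health checks Kubernetes uses for self-healing and rollouts
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # hypothetical app name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27          # example image
          ports:
            - containerPort: 80
          readinessProbe:            # taken out of the load balancer while failing
            httpGet:
              path: /
              port: 80
          livenessProbe:             # restarted if it stays broken
            httpGet:
              path: /
              port: 80
          resources:
            requests:
              memory: 64Mi
              cpu: 50m
            limits:
              memory: 128Mi
---
# Service: a stable cluster-internal address, the part compose networking gives you implicitly
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```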
What are you going to run containers on? You need VMs to power everything.
I don’t have any VMs running in my homelab.
https://codeberg.org/jlh/h5b
Most of my customers run their Kubernetes nodes either on bare metal or on cloud-provisioned VMs from AWS/GCP/Azure, etc.