
  • Mostly just as a wrapper for Docker. The main issue I’ve run into is that Docker’s union filesystem (the default overlay2 storage driver) doesn’t work when backed by ZFS, so disk usage can balloon out of control. I wouldn’t use this in production, but don’t tell me how to live my life, mom.
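
    For what it’s worth, Docker also ships a native zfs storage driver that sidesteps the overlay-on-ZFS problem. A minimal sketch - assuming /var/lib/docker already sits on a ZFS dataset, and noting that existing images have to be re-pulled after switching:

      # /etc/docker/daemon.json - use Docker's native ZFS storage driver
      {
        "storage-driver": "zfs"
      }
      # then: systemctl restart docker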

    Beyond various Docker stacks I also have a Certbot container that uses Snap (sigh), and a HashiCorp Vault container where Vault runs as a vanilla SystemD service. I run Wireguard as part of my OPNSense VM - that’s something I’d keep in a VM anyway, since it’s exposed to the internet. I have an older MinIO and Concourse CI Docker Compose config that I’d love to run in LXC, but I suspect that isn’t realistic.

    A note on Vault: I haven’t been able to get mlock to work (it prevents sensitive memory from being swapped to disk). By all accounts it should just work in LXC, but since it doesn’t, and there’s no swap on the host anyway, I just turned it off. I may migrate Vault to a VM at some point.
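
    For reference, turning it off is one line in the server config - a minimal sketch, assuming an HCL config file like the one Vault reads at startup:

      # vault.hcl - skip the mlock syscall entirely
      # (only reasonable because the host has no swap to leak secrets into)
      disable_mlock = true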

    I’m personally just interested in lightweight environments with good-enough isolation that don’t break all the time over nothing. Docker mostly accomplishes that for me. LXC + Docker also mostly accomplishes that.

    (My heart yearns for FreeBSD Jails but with decent tooling)


  • I was originally excited by Podman, but ultimately migrated away from it. Friendship ended with Ubuntu and Docker -> CentOS and Podman -> Proxmox + Debian LXC (which has its own irritations, but anyway). Off the top of my head:

    • You can’t attach a container to multiple networks. Most of my Docker Compose stacks have an Nginx reverse proxy plus a separate network for each service.
    • You can use pods instead, but containers in a pod share one network interface, so two legacy services that both insist on, say, port 80 can’t live in the same pod. Pods also don’t isolate services from each other, nor can you assert that a specific pod is the one listening on a forwarded port.
    • Pods also have DNS issues with Nginx. Mine kept crashing because it couldn’t resolve the hostnames of the other containers in the pod, even when they were already running. Launch a shell inside the Nginx container and those same hostnames resolve fine; I suspect Nginx starts before the pod’s behind-the-scenes DNS infrastructure is ready.
    • Podman lets you use secrets on normal containers (yay), but if a secret changes you have to recreate the container. Amazing synergy with rotating TLS certificates.
    • Endless issues with SELinux and bind mounts. My Nginx container kept crashing because SELinux didn’t like the TLS certificate bind mount (the standard workaround is sketched after this list). This is where I reflected on the endless parade of random issues I had no interest in solving, and finally threw in the towel.
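
    For anyone hitting the same SELinux wall: the standard workaround is Podman’s volume relabel flags. A sketch, with illustrative paths and image:

      # ':Z' relabels the bind mount for exclusive use by this container
      # (':z' would share the label across containers instead)
      podman run -d --name nginx \
        -v /srv/certs:/etc/nginx/certs:Z \
        docker.io/library/nginx

    The flag makes Podman relabel the files with a container-accessible SELinux context, which is exactly what it was complaining about - though I can’t promise it covers every bind-mount gripe.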

    I brought all this up in another community and was told the problem was [paraphrased] “people keep trying to use Podman like they use Docker” - whatever that means. I do like a number of its design choices, like storing the command used to create a container in its metadata, and how easily it integrates with SystemD for things like scheduled updates.
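
    That SystemD integration is basically a one-liner (podman generate systemd was the mechanism when I used it; newer releases push Quadlet instead - container name illustrative):

      # emit a unit that recreates the container from its stored metadata
      podman generate systemd --new --name mycontainer \
        > ~/.config/systemd/user/mycontainer.service
      systemctl --user daemon-reload
      systemctl --user enable --now mycontainer.service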

    Cockpit is pretty slick though; I need to install it on my bare-metal Debian host.


  • More topical references would help if there were a strong commentary aspect to Futurama, but it’s never been that kind of show.

    The simplest explanation is that jokes are the bread and butter of a comedy, and they just aren’t that great in Hulurama. Having rewatched it recently, Foxurama also leaned heavily on the plots of individual episodes, but so far Hulurama’s plots feel like retreads, or just aren’t particularly interesting.

    Which, now that I think about it, all of this could be said about The Simpsons too.


  • It’s easy* to set up HashiCorp Vault with your own CA and do automated cert generation and rotation, if you’re willing to integrate everything with Vault and install your root CA everywhere. (*not really harder than any other Vault setup, but yaknow). I may go down this route eventually, since I don’t think a device I don’t control has ever accessed anything I selfhost, or ever will.
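
    The rough shape of that setup, in case anyone’s curious - mount path, role name, and domain are all illustrative:

      # enable the PKI engine and mint the root CA (the cert you install everywhere)
      vault secrets enable pki
      vault secrets tune -max-lease-ttl=87600h pki
      vault write pki/root/generate/internal common_name="Homelab Root CA" ttl=87600h
      # define a role, then issue short-lived certs against it
      vault write pki/roles/internal-services allowed_domains="home.arpa" allow_subdomains=true max_ttl=720h
      vault write pki/issue/internal-services common_name="vault.home.arpa" ttl=168h

    Rotation then boils down to re-running the issue call (or letting an agent do it) before the TTL expires.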

    I have a wildcard subdomain pointing to my public IP and forward port 80 to an LXC container running certbot. From the outside, port 80 appears closed except for the brief window while certbot is renewing certs. Inside my network, my PiHole is configured to return the local IP for each service.
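
    The renewal side is just certbot’s standalone mode, which spins up its own temporary web server for the HTTP-01 challenge - hence port 80 only answering during renewal. Domain is a placeholder:

      # binds port 80 only for the duration of the challenge
      certbot certonly --standalone -d jellyfin.example.com
      # afterwards, 'certbot renew' via the packaged timer/cron handles the rest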

    Nothing is exposed to the internet at all. There’s a public record of my hostnames in Let’s Encrypt’s certificate transparency logs, but I’m not concerned that someone might, say, deduce that apollo-idrac is the iDRAC for a Dell rackmount server called apollo, and that the other Greek/Roman gods are VMs on it. The whole thing seemed like a house of cards that would never work reliably, but three-odd years later the only issues I have are when a DNS resolver insists on bypassing my PiHole. And that DNS resolver is SystemD-ResolveD, which should crawl back into whatever hellhole it came out of.
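
    If anyone else is fighting the same resolver, the two knobs that matter live in /etc/systemd/resolved.conf (PiHole IP illustrative):

      # /etc/systemd/resolved.conf - send queries to the PiHole and stop the
      # 127.0.0.53 stub listener from intercepting them
      [Resolve]
      DNS=192.168.1.2
      DNSStubListener=no
      # then: systemctl restart systemd-resolved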


  • They could hijack your site at any time regardless, but with a copy of your live private keys they (or, more likely, whatever third party invariably breaches your domain provider) can decrypt your otherwise secure traffic.

    I don’t think there’s significant tangible risk, since who really cares about your private selfhosted services? I’d be more worried about the domain itself being hijacked, and any network breach is probably after delicious credit card numbers, passwords, and crypto private keys to munch on. If someone got into my network, spying on my Jellyfin streaming isn’t what I’d be worried about.

    But it is why CSRs are used.
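
    That is, the CA only ever sees a signing request; the private key never leaves your box. The classic invocation, with placeholder filenames and hostname:

      # generates a fresh key plus a CSR for it; only server.csr goes to the CA
      openssl req -new -newkey rsa:2048 -nodes \
        -keyout server.key -out server.csr \
        -subj "/CN=service.example.com"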


  • I’ve found the idea of LXC containers to be better than the practice. I’ve migrated all of my servers to Proxmox and have been trying to move various services from VMs to LXC containers, and it’s been such a hassle. You should be able to forward disk block devices directly, but I just could not get them to mount for a MinIO array - I ended up chowning their entire contents to 100000:100000, mounting them on the host, and forwarding the mount points instead. I never managed to get CAP_IPC_LOCK to work correctly for a HashiCorp Vault install. Docker in LXC has some serious pain points and feels very fragile.
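
    The mount-point workaround, for anyone else stuck there (container ID and paths illustrative):

      # shift ownership into the unprivileged container's UID range
      # (host UID 100000 maps to UID 0 inside the container)
      chown -R 100000:100000 /mnt/minio-disk1
      # then bind-mount the host path instead of passing the block device through
      pct set 101 -mp0 /mnt/minio-disk1,mp=/data/disk1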

    It’s damning that every time I have a problem with LXC, the first search result is a Proxmox forum topic where a Proxmox employee replies to the effect of “we recommend VMs over LXC for this use case” - Proxmox doesn’t seem to recommend LXC for anything. Proxmox + LXC is definitely better than CentOS + Podman, but my heart longs for the sheer competence of FreeBSD Jails.