I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • lefaucet@slrpnk.net · 7 months ago

    I use it for Gitea, Nextcloud, Redis, Postgres, and a few REST servers, and I love it! Super easy.

    It can suck for things like homelab Stable Diffusion and other things that require a GPU or direct access to hardware.
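    For the easy cases, here’s a minimal sketch of a Compose file for one of those services (the image tag, paths, and ports are illustrative, not my actual setup):

    ```yaml
    # docker-compose.yml: a minimal Gitea, with data kept on the host
    services:
      gitea:
        image: gitea/gitea:1.21      # illustrative tag
        restart: unless-stopped
        volumes:
          - ./gitea-data:/data       # repos and config live outside the container
        ports:
          - "3000:3000"              # web UI
          - "2222:22"                # SSH for git pushes
    ```

    Bring it up with `docker compose up -d`.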

    • Aiyub@feddit.de · 7 months ago

      As someone who does AI for a living: GPU + Docker is easy and reliable, especially compared to VMs.
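      For what it’s worth, with the NVIDIA Container Toolkit installed on the host, GPU access is only a few lines of Compose. A sketch, assuming an NVIDIA card (the image tag is illustrative):

      ```yaml
      services:
        cuda-test:
          image: nvidia/cuda:12.2.0-base-ubuntu22.04   # illustrative tag
          command: nvidia-smi            # prints the GPUs visible inside the container
          deploy:
            resources:
              reservations:
                devices:
                  - driver: nvidia
                    count: 1             # reserve one GPU
                    capabilities: [gpu]
      ```

      `docker compose up` then runs nvidia-smi against the host’s GPU, no VM passthrough required.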

    • DefederateLemmyMl@feddit.nl · 7 months ago

      > postgres

      I never use it for databases. I find I don’t gain much from containerizing them, because the interesting and difficult bits of customizing and tailoring a database to your needs live on the data filesystem or in kernel parameters, not in the database binaries themselves. On most distributions it’s trivial to install the binaries for postgres/mariadb or whatnot.

      Databases are usually fairly resource intensive too, so you’d want a separate VM for them anyway.
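      To illustrate: even when the database does run in a container, those interesting bits still live on the host. A sketch (paths and values are illustrative only):

      ```yaml
      services:
        db:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: example-only           # illustrative only
          volumes:
            - /srv/pgdata:/var/lib/postgresql/data    # the data filesystem stays on the host
          shm_size: 1g                                # shared memory for Postgres
          sysctls:
            - net.core.somaxconn=1024                 # kernel tuning, not baked into the image
      ```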

      • lefaucet@slrpnk.net · 7 months ago

        Very good points.

        In my case I just need it for a couple of users with maybe a few dozen transactions a day; it’s far from being a bottleneck, and there’s little point in optimizing it further.

        Containerizing it also has the benefit of boiling all installation and configuration down into one very convenient docker-compose file… Actually two: I use one with all the config stuff, which is published to Gitea, and one that holds the sensitive data.
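        A minimal sketch of that split, relying on Compose’s multi-file merging (file names and values are illustrative, not my actual setup):

        ```yaml
        # docker-compose.yml: published to Gitea
        services:
          db:
            image: postgres:16
            volumes:
              - ./pgdata:/var/lib/postgresql/data
        ```

        ```yaml
        # docker-compose.secrets.yml: kept out of the public repo
        services:
          db:
            environment:
              POSTGRES_PASSWORD: example-only   # never commit real credentials
        ```

        Compose merges the two when both files are passed: `docker compose -f docker-compose.yml -f docker-compose.secrets.yml up -d`.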