Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 180 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.

    The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.

    In Docker, the auto updater should be completely neutered; it’s the wrong way to update the app there, you update by pulling a newer image instead.
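
    A minimal sketch of what the Docker-style update looks like, assuming a compose file with a hypothetical nextcloud service and the data kept in a volume:

      # Pull the newer image and recreate the container; the data lives in the
      # volume, so only the application code gets swapped out.
      docker compose pull nextcloud
      docker compose up -d nextcloud

      # To roll back, pin the previous image tag in the compose file and re-run up.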

    The packages in the Arch repo are legit saner than the Docker version.


  • I’ve heard very good things about resold HGST Helium enterprise drives, and they can be found fairly cheap for what they are on eBay.

    > I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days. There’s like 32TB HDDs out there, just avoid the shingled (SMR) archival drives. I think the belief about higher capacity drives failing sooner is more about the maturity of the technology than the capacity itself: 4TB drives made today are much better than the very first 4TB drives from back when they were pushing the limits of the technology.

    Backblaze has pretty good drive reviews as well, with real world failure rate data and all.


  • If downgrading the kernel fixes it, then it sounds a lot like a kernel bug. Still worth reporting to libinput I guess; they’ll probably be better placed to help report it to the kernel properly, with details of what broke, if it ends up being a kernel issue.

    If you really want to get involved you can also bisect the exact commit that caused it in the kernel, but that’s a lot of kernel compiling and rebooting ahead of you.
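
    Roughly what that looks like, assuming the regression appeared somewhere between two kernel versions (the versions here are just placeholders):

      git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
      cd linux
      git bisect start
      git bisect bad v6.7     # first version where the touchpad broke (example)
      git bisect good v6.6    # last known good version (example)
      # Build and boot the suggested commit, test the touchpad, then mark it:
      git bisect good         # or: git bisect bad
      # Repeat until git prints the first bad commit, and include that in the report.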


  • That shouldn’t be a problem. You can even install them on the same btrfs partition if you wanted to as long as each distro gets its own set of subvolumes for stuff. Separate partitions and even separate physical disks? No issues there, that’s even less weird.

    Ideally what I’d do in this scenario is at least make a subvolume for the Steam library, so you don’t have to mount the actual home folder, just the Steam library subvolume. I also have a separate subvolume for movies and TV shows, and a few other things. It’s just very convenient for organization purposes, and also for technical purposes: my home snapshots don’t take up much space now, since all the big data stuff is separate. No point backing up a Steam library.

    But in the end, none of this really matters, you can mount anything just about anywhere. We all already mount a FAT32 partition at /boot or somewhere similar because UEFI requires it. The filesystems all have UUIDs, which are usually used for configuring fstab and GRUB and whatnot, precisely so that even if you physically swap the disks or put them into another computer, it still works.
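
    As a rough sketch of that kind of layout (the device UUID, mount point and subvolume name are just examples):

      # Create a dedicated subvolume for the Steam library on the data filesystem
      btrfs subvolume create /mnt/data/@steam

      # /etc/fstab: mount just that subvolume into the home directory, by UUID
      UUID=1111aaaa-2222-bbbb-3333-ccccddddeeee  /home/user/Games/steam  btrfs  subvol=@steam,noatime  0 0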


  • That sounds great and all on paper, but it also requires a ton of moderation overhead: every small instance now has to have enough mods to deal with everything being posted, since moderation would be local only. All the spam and CSAM would have to be taken down by each individual instance. You’d also somehow have to find a way for instances to pull the hashtags out of every federated instance. The way it works on Mastodon is that someone follows an account, and that causes the data to get pulled in. On Lemmy you don’t follow users, so you need another way to pull the data in.

    The end result would be a mess of instances not even agreeing on vote counts, with vastly different comments, and even different posts.

    Lemmy doesn’t aim to be an uncensorable platform. I join communities for the content, the users, and for better or for worse, the mods too.

    The problems of having to deal with duplicate communities will get worked on eventually.





  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on readonly diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.



  • Neither does Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare and we have Fortune 100 companies served with LetsEncrypt certs.
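
    Domain validation over HTTP is basically a one-liner with certbot, for example (domain and webroot path are placeholders):

      # HTTP-01 validation: certbot drops a token under .well-known/acme-challenge/
      # and the CA fetches it over port 80 to prove you control the domain.
      certbot certonly --webroot -w /var/www/example -d example.com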

    I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs which IP requested a certificate, that’s a lot more than what regular CAs do.

    I guess one more to the pile of why everyone hates Zscaler.


  • That’s more the general DevOps/server admin learning curve than anything specific to Vaultwarden, to be fair.

    It looks a bit complicated at first as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components to install and make work, some Java stuff and all. But with Docker? Just docker compose up -d, wait a minute or two and it’s good to go, you just need to point your reverse proxy at it.
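
    As a sketch, a minimal compose file for Vaultwarden could look something like this (image tag, paths and network name are assumptions, not a recommendation):

      services:
        vaultwarden:
          image: vaultwarden/server:latest
          restart: unless-stopped
          volumes:
            - ./vw-data:/data    # all persistent state lives here
          networks:
            - proxy              # shared network the reverse proxy also joins

      networks:
        proxy:
          external: true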

    Why do you need a reverse proxy? Because it’s a centralized location where everything comes in: instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine, you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy too. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and sends it to the right container as plain HTTP. Well, it doesn’t have to work that way specifically, but that’s the most common use case in self-hosting.
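
    With Caddy, the reverse proxy side of each app ends up being a couple of lines, something like this (hostname and container name are assumed):

      # Caddyfile: Caddy obtains and renews the certificate for the hostname itself,
      # then forwards the decrypted traffic to the container as plain HTTP.
      vault.example.com {
          reverse_proxy vaultwarden:80
      }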

    As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But it’s probably using some sort of database, so you might want to look into doing periodic data exports instead: databases don’t like to be backed up live, since the file is constantly being updated and you can’t really get a consistent snapshot of it in one go.
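
    For example, with Vaultwarden’s default SQLite backend a periodic export could look roughly like this (paths are assumptions, and it assumes the sqlite3 CLI is available on the host next to the bind-mounted volume):

      # Ask SQLite for a consistent copy of the live database, then archive the
      # attachments and config alongside it.
      mkdir -p ./backups
      sqlite3 ./vw-data/db.sqlite3 ".backup ./backups/db-$(date +%F).sqlite3"
      tar czf ./backups/vw-files-$(date +%F).tar.gz ./vw-data/attachments ./vw-data/config.json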

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!

    Moving services to a new server is pretty easy as well. Copy over your configs and composes, and volumes if applicable. Start them all, and they should come back in exactly the same state as they were on the other box. No services to install and configure, no repos to add, no distro to maintain. It’s all built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS, with the right packages in the right versions.
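
    The move itself is mostly just copying directories around, along the lines of (paths and hostname are placeholders):

      # Copy compose files and volume data to the new box, then bring everything up.
      rsync -a /srv/docker/ newserver:/srv/docker/
      ssh newserver 'cd /srv/docker/vaultwarden && docker compose up -d'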

    As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack and be like “here’s the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.




  • IMO the biggest attack vector there would be a Minecraft exploit like Log4j, so the most important part to me would be making sure the game server is properly sandboxed just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user. What can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.

    With systemd, I’d use the various Protect* directives like ProtectHome, ProtectSystem=full, or failing that, a container (Docker, Podman, LXC, manually, there’s options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or some other SUID binaries if they don’t exist in the first place.
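
    A rough sketch of what those directives look like in a unit file (user, paths and memory settings are just examples):

      # /etc/systemd/system/minecraft.service
      [Unit]
      Description=Minecraft server

      [Service]
      User=minecraft
      WorkingDirectory=/srv/minecraft
      ExecStart=/usr/bin/java -Xmx4G -jar server.jar nogui
      # Sandboxing: no access to /home, read-only OS, no privilege escalation
      ProtectHome=true
      ProtectSystem=full
      NoNewPrivileges=true
      PrivateTmp=true

      [Install]
      WantedBy=multi-user.target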

    That said, the WireGuard solution is ideal because it limits potential attackers to people you handed a key, so at least you’d know who breached you.
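
    The WireGuard side is also only a handful of lines per player, something like this (keys and addresses are placeholders):

      # /etc/wireguard/wg0.conf on the server
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <server-private-key>

      # One [Peer] block per person you hand a key to
      [Peer]
      PublicKey = <friend-public-key>
      AllowedIPs = 10.8.0.2/32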

    I’ve left Minecraft servers online, forgotten about them, and really nothing happened whatsoever.



  • Manipulating the game can be a lot of fun, more than the game itself. In a way, it kind of becomes a higher-level game of its own. When done appropriately and without ruining other people’s fun, that is. I’ve had good fun on friends’ private servers, giving their shit code a good stress test.

    I have zero respect for those that just download cheats and use them to pass off as skilled and ruin the fun for others. It’s like ethical hacking: do it with permission or at least be transparent about it.

    There’s game servers out there to play against other cheaters, and it can be truly hilariously broken and entertaining. I’ve also been quite fascinated by Minecraft servers like 2b2t, where cheating is basically necessary to survive at all. The exploit content and drama that have come out of that server are bonkers. But everyone knows they’re playing against cheaters; the fun is seeing how you can out-cheat your opponents.

    There’s also the whole speedrunning community and the ways people have broken games wide open. Fascinating and very entertaining stuff. The skills you need to perform a lot of those glitches are insane and extremely challenging: hours of grinding to get frame-perfect glitches to work, several times during a run. It’s a whole new puzzle, with so many more variables.

    Why someone would cheat in games like CS2, Apex, Valorant and the like, that I don’t know. Some people are really just kind of losers, I guess. I personally don’t see the appeal; I’d want to be famous for the cheats, and I wouldn’t even want to compete with non-cheaters because that’s just plain unethical and unfun. There’s also a big difference between finding dupes in Minecraft and running an aimbot in a competitive shooter.


  • If your stuff is all Docker then yeah, immutable makes sense as it makes the entire box declarative and immutable: you can get back the exact same operating Docker environment on the server, and then you can get back the exact same Docker workloads going with the Docker compose configurations.

    If you ever need to run stuff you’d run on Debian, you can just shove it in a Debian container.
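
    E.g. something like this gives you a throwaway Debian userland on top of the immutable host (image tag and mount path are assumptions):

      # Interactive Debian shell with a host directory mounted in for the odd
      # Debian-only tool, without touching the base system.
      docker run -it --rm -v /srv/data:/data debian:bookworm bash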

    That said, if most of the stuff is containers, the risk of just the core Debian breaking is fairly low. Pick whatever is easiest for you to deal with based on your needs. Immutable distros have a bit of a learning curve.