• 0 Posts
  • 63 Comments
Joined 2 years ago
Cake day: June 20th, 2023



  • Generally, power supplies are most electrically efficient at 20-60% utilization, so there’s no issue with over-provisioning power other than the (generally minor) extra upfront cost, which the efficiency gains may well pay back within the first months or years of use. I’ll take a look and see what I can find on those sites.

    Edit: okay, trying to shop through Google Translate and a currency calculator is genuinely painful, so I’m going to teach a man to fish instead. This is what I should have done from the start anyway.

    Power supply: anything from a decent brand rated at 450W or more. A 650W or 850W unit is totally fine if it’s at a decent price; supplies only draw the power the downstream components call for, they don’t just constantly pull 850W.

    CPU: the 12400 is a fine CPU for what you’re doing. You’ll transcode 720p no problem, and maybe a single 1080p stream in real time; I wouldn’t bank on more than that. The only real downside is the relatively low core/thread count if you ever expand into other workloads. Without access to used Xeon boards/CPUs it’s a reasonable choice, but look for something older with more cores/threads if you can; a 10900 or even a 10700K would probably make a better server CPU than a 12400.

    Memory: DDR4 platforms are a great way to save money, as long as you aren’t planning to expand into CPU inference. Get as much as you can; 32-64GB of DDR4 should be dirt cheap, especially if you find a cheap motherboard with four memory slots.

    Motherboard: if you want this thing to be versatile, you want two PCIe slots. Old full-sized ATX gaming boards are the way to go here: one slot for an HBA, one slot for a GPU, and that should be all you need. Bonus points for as many open SATA ports as possible; 6-8 is pretty typical on 10th-12th gen gaming ATX boards.

    GPU: a discrete GPU will be much more efficient at transcoding than an iGPU, especially the iGPU in older Intel CPUs. A 1050, 2060, 3050, basically anything from the 10-series onward has a decent NVENC encoder that works well with Plex/Jellyfin (there’s a container sketch at the end of this comment for passing one through to Jellyfin). My go-to is generally old workstation cards; I use a P620 myself and it handles a single 4K encode job no problem. I’m not sure if they’re viably purchasable anywhere in your area, but I’d definitely look out for a P620, P1000, or T400. Great value in those cards.

    Drives/HBA: there are inexpensive LSI HBA cards to expand how many drives you can attach to a system if you need them; all you need is a spare PCIe slot and a place to physically mount the drives. The cheapest way to start is to look for a motherboard with 4-6 SATA ports and use those. Hardware RAID is functionally dead in the real world these days; just use ZFS or mdadm under Linux to create an array with your desired level of resiliency/capacity (quick sketch below).
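    A minimal sketch of both options, assuming four data disks at /dev/sda through /dev/sdd (device names are placeholders; verify with lsblk before doing anything destructive):

    ```bash
    # ZFS: single-parity raidz1 pool across four disks (ashift=12 for 4K-sector drives)
    zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # mdadm alternative: RAID 5 array, then put a filesystem on it
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 /dev/md0
    ```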

    Once you’ve priced out what it would cost to buy all of this new, look for prebuilt gaming PCs and office PCs that could be expanded to meet these requirements. Prices look kind of steep on those marketplaces you listed, but I’m sure something exists if you look hard enough.
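    For the GPU/NVENC note above, a hedged example of handing an NVIDIA card to Jellyfin under Docker; it assumes the NVIDIA driver and nvidia-container-toolkit are already installed on the host, and the host paths are placeholders:

    ```bash
    # Run the official Jellyfin image with the NVIDIA GPU exposed to the container.
    docker run -d \
      --name jellyfin \
      --gpus all \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/jellyfin/cache:/cache \
      -v /srv/media:/media:ro \
      jellyfin/jellyfin
    # Then enable NVENC under Dashboard -> Playback -> Transcoding in the Jellyfin UI.
    ```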






  • +1 to all of this.

    For ~3 years I ran a Debian system off a RAID 1 of two USB drives. I didn’t have spare drive bays in my CS24-TY and I didn’t have room for an expander.

    SanDisk apparently didn’t consider my use case “warranty-voiding” and were content to replace the drives whenever they failed (I was honest in the first warranty inquiry about how they were used; I doubt you could get away with that with modern SanDisk). I had a 3-year warranty on the drives and, checking my email, I replaced a total of 11 over that period. The first 7-8 failures all came in year one, before I moved logging to a ZFS dataset on the spinners; the constant journaling, writing, and syncing of mostly logs was the culprit, and relocating it helped a lot.
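    For anyone wanting to do the same log relocation, a rough sketch, assuming the spinners are already in a pool called tank (names are placeholders):

    ```bash
    # Move the write-heavy logging off the USB boot drives and onto the pool.
    systemctl stop rsyslog
    mv /var/log /var/log.old
    zfs create -o mountpoint=/var/log tank/logs
    cp -a /var/log.old/. /var/log/
    systemctl start rsyslog
    ```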

    TL;DR: great for testing whether drivers and hardware work; don’t do this in production.


  • When I started learning Linux at work, the game I played with myself was: I’d install a minimal Debian stable on my primary workstation and never reinstall it. No matter what happened, I would always fix it.

    I learned to install the basic subsystems to get a GUI and audio, and learned the “fun” of Nvidia drivers while getting Xinerama and hardware decoding working. In retrospect it seems trivial, but as a new learner it was challenging and rewarding.

    At one point I was trying to do something, and a guide online suggested adding some third-party repo and installing newer libraries. I did, and a week later I ran a dist-upgrade (because I didn’t know any better); when I rebooted, I was presented with a splash screen for CrunchBang Linux.

    Figuring out how to get back to Debian without breaking everything probably taught me more about packages, package managers, filesystems, system config files, and init (systemd wasn’t really a thing yet) than everything else I had done combined.

    For anyone wondering: 12 years into the project, a drive in the mdadm mirror died, and while mdadm was rebuilding onto a replacement, the other drive died too. I consider that a win, but y’all can be the judge (no files were lost; 12 years into my Linux journey I had long since automated backups with NFS and rsync).


  • Dran@lemmy.world to 3DPrinting@lemmy.world · Cable self clipper (edited, 5 months ago)
    I don’t know of any off the top of my head, but with a cheap digital caliper and Tinkercad you could probably model one fairly trivially. Friction-fit two halves around the cable and secure them with simple adhesive, or with some kind of bolt/nut fastener if you wanted to get clever.

    Never not learn a new skill!


  • The canvas API exposes details about hardware that aren’t usually available through other browser APIs. It’s normally hard to get specific capability information about a user’s GPU, for example, but canvas needs that information to decide how to draw objects consistently across differently capable hardware, and those extra data points make it that much easier to uniquely identify a user. The more data points you can collect, the more unique each visitor becomes.

    Here’s a good utility from the EFF to demonstrate the concept if you or anyone else is curious.

    https://coveryourtracks.eff.org/


  • I use ansible on one of my side projects; I use puppet at work. It’s for the same reason I use raw docker and not rancher+rke2: it’s not about learning the abstractions, it’s about learning the fundamentals. If I wanted a simple abstraction I’d have deployed TrueNAS and LinuxServer containers instead of Taco Bell programming everything myself.


  • Sure. I have an R630 configured as an NFS server and a docker host called vacuum. There is a script called install_vacuum.sh that, with a single command, can build the server to my spec from a base install of Ubuntu 24.04. It has functions to install base packages, add new repositories, set up users, and create config files for NFS, smb, fstab, crontab, etc. Once an NFS server exists on my network, any other server can become my docker host.

    The docker host is set up from a script called install_containers.sh. As before, it does everything needed to get me a basic docker host, firewalled and configured for persistence via my NFS server. It also has functions to create and start docker containers for all of my workflows (Plex, webserver, CA, etc.), and if an image for a workflow doesn’t exist, it builds one from a standardized format: (you guessed it) a bash build script for that container. Cron on whatever host runs docker rebuilds and updates the containers once a week; bare-metal servers update themselves nightly via unattended-upgrades, rebooting when necessary.

    Basically, you break everything down into the simplest functions possible, define everything via variables in shared configuration that every script sources before running, and have higher- and higher-level functions call lower-level ones until a single function cascades into a fully functioning system. Does that make sense?
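    A heavily trimmed, illustrative skeleton of that structure (not the actual scripts; names, paths, and hostnames are made up):

    ```bash
    #!/usr/bin/env bash
    # Illustrative skeleton: shared variables up top, small functions below,
    # one top-level call at the end that cascades into a working host.
    NFS_SERVER="vacuum.lan"        # placeholder hostname
    NFS_EXPORT="/srv/export"
    DOCKER_DATA="/mnt/persist"

    install_base_packages() {
        apt-get update && apt-get install -y docker.io nfs-common unattended-upgrades
    }

    configure_nfs_persistence() {
        # mount the NFS export that holds persistent container data
        mkdir -p "${DOCKER_DATA}"
        echo "${NFS_SERVER}:${NFS_EXPORT} ${DOCKER_DATA} nfs defaults 0 0" >> /etc/fstab
        mount "${DOCKER_DATA}"
    }

    deploy_plex() {
        docker run -d --name plex -v "${DOCKER_DATA}/plex:/config" plexinc/pms-docker
    }

    # the single top-level function
    build_docker_host() {
        install_base_packages
        configure_nfs_persistence
        deploy_plex
    }

    build_docker_host
    ```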



  • Not sure if many people do what I do, but instead of taking notes I write commented functions in bash. My philosophy is: if I can’t automate it, I don’t understand it. After a while you build up enough automation to rebuild your workstations, your servers, all of your VMs and containers, and your workflows, and you can duplicate or redeploy them whenever required. One tarball and about six commands and I can rebuild my entire home + homelab.
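    For example, instead of a wiki page titled “how I export media over NFS”, the “note” is a commented, runnable function (hypothetical paths and subnet):

    ```bash
    # Note-to-self, kept as a function instead of prose: export /srv/media read-only to the LAN.
    setup_nfs_export() {
        apt-get install -y nfs-kernel-server
        mkdir -p /srv/media
        # read-only export to the local subnet; adjust the CIDR to your network
        echo "/srv/media 192.168.1.0/24(ro,sync,no_subtree_check)" >> /etc/exports
        exportfs -ra
    }
    ```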






  • vyatta and vyatta-based routers (EdgeRouter, etc.) are, I’d say, good enough for the average consumer. But if we’re deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs Tailscale, we’re certainly past accepting a budget consumer router as acceptably meeting these and other needs.

    Also, you don’t need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place to switch WireGuard profiles based on the network SSID: at home, all traffic is routed locally; outside my network, everything goes through DDNS/port forwarding.
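    On a Linux laptop, one way to do that switching is a NetworkManager dispatcher script; this is a sketch assuming NetworkManager handles Wi-Fi and the tunnel is a wg-quick profile named wg-roam (SSID and profile names are placeholders):

    ```bash
    #!/usr/bin/env bash
    # /etc/NetworkManager/dispatcher.d/50-wg-roam (make it executable)
    # Bring the tunnel up when away from the home SSID, tear it down when home.
    HOME_SSID="MyHomeNetwork"   # placeholder

    action="$2"
    [ "$action" = "up" ] || exit 0

    ssid="$(nmcli -t -f active,ssid dev wifi | awk -F: '$1 == "yes" {print $2}')"

    if [ "$ssid" = "$HOME_SSID" ]; then
        wg-quick down wg-roam 2>/dev/null || true
    else
        wg-quick up wg-roam 2>/dev/null || true
    fi
    ```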

    If you’re really paranoid about it, you can always skip the port-forward route and set up a WireGuard-based mesh yourself using an external VPS as a relay. That way you don’t have to open anything on your home connection directly, and traffic between devices on your LAN still routes locally even when your home internet connection is down. It’s basically what Tailscale is, except you control the keys, you have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
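    A minimal sketch of the relay layout, assuming a VPS reachable at vps.example.com:51820; keys, addresses, and the home subnet are all placeholders:

    ```bash
    # Client side: every device is a spoke that only ever talks to the VPS hub.
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.10.10.2/24

    [Peer]
    # the VPS relay, the only publicly reachable endpoint
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.10.10.0/24, 192.168.1.0/24   # tunnel subnet + home LAN
    PersistentKeepalive = 25
    EOF
    wg-quick up wg0
    # The home server keeps a similar always-on peer to the VPS, and the VPS
    # enables net.ipv4.ip_forward=1 so it can shuttle traffic between the two.
    ```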