

It’s just a YAML thing: if you do FILEBROWSER_CONFIG: "/config/config.yaml" instead (note the quotes, and the space after the colon), it might work.
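For example, in a docker-compose file (the service name here is just an example), the quoted form looks like:

```yaml
services:
  filebrowser:
    environment:
      # quoting the value keeps YAML from misparsing the path
      FILEBROWSER_CONFIG: "/config/config.yaml"
```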
It’s interesting, because you’re not the first person to complain about getting ISOs into Proxmox. On my instance, if I click on my local storage there’s an upload ISO button and a download ISO from URL button right there, so it’s really simple.
It can also mount network storage with existing ISOs and just pull from that.
I don’t use ISOs very often though; it’s either a Debian 12 container template, or a custom Debian 12 cloud-init VM I made and backed up, so I can just hit restore and it gives me a fresh VM with new networking config and everything through cloud-init automatically.
Is it all automated with versioning intervals and stuff? Or is restic required as a third-party step, with a duplicate of the data maintained on the server for it to grab?
Overall it sounds like a decent VM manager, but one meant for enterprise use where they’ll be building their own backup systems.
That’s what proxmox has too, but snapshots aren’t backups and aren’t being sent to a remote backup server… You’re also not supposed to keep snapshots around for very long, whereas I have backups going back several months.
Or are you sending snapshots to a remote server? I think ZFS can do that, so maybe that’s an option I can look at.
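ZFS can do that with snapshot send/receive; a rough sketch (pool, dataset, and host names are made up):

```shell
# take a snapshot of the dataset
zfs snapshot tank/vmdata@nightly-1

# full replication to a remote backup host over ssh
zfs send tank/vmdata@nightly-1 | ssh backup-host zfs receive backuppool/vmdata

# later runs only send the changes between two snapshots
zfs send -i tank/vmdata@nightly-1 tank/vmdata@nightly-2 \
  | ssh backup-host zfs receive backuppool/vmdata
```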
I’d say about 2-3 minutes all in total.
This looks interesting, how do you handle automated backups of all the VMs/Containers? Their docs kind of seem to say “stop everything and figure it out”, but with Proxmox I’m used to it handling everything automatically to my PBS server every night.
It’s definitely not an easy migration in my experience. They run rootless and can’t auto-start without making a systemd service for every stack, and there’s a lot that needs to change in a compose stack, especially around file permissions for shared mounts.
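Assuming this is about Podman: newer versions (4.4+) ship Quadlet, which generates the systemd service from a small unit file per container, so you don’t have to hand-write one. A sketch (image, name, and ports are just examples):

```ini
# ~/.config/containers/systemd/web.container (rootless per-user location)
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
# default.target so the rootless user service starts automatically
WantedBy=default.target
```

After a systemctl --user daemon-reload it shows up as web.service; combine it with loginctl enable-linger so it starts at boot without a login session.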
Doesn’t Windows have Storage Spaces or something like that? I’m not familiar with Windows for any kind of server stuff, but I remember reading some things.
It’s a drive-pooling application for Windows; it lets you merge multiple drives into a single mount.
Since you’re running it in Docker, all you need to do is change the mount locations in your docker-compose file, then copy the existing data to the new location.
If you’re currently using a named volume instead of a bind mount, the existing data will be under /var/lib/docker/volumes
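As a sketch, assuming a named volume is being swapped for a bind mount (the service, volume, and path names are hypothetical):

```yaml
services:
  app:
    volumes:
      # before: a named volume managed by Docker
      # - appdata:/data
      # after: a bind mount to the new location
      - /mnt/pool/appdata:/data
```

Stop the container first, then copy the contents of the old volume (typically /var/lib/docker/volumes/<project>_appdata/_data) into the new path before bringing it back up.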
Try to nail down a 30-second pull at a 2:1 ratio, so about 20g of liquid out from 10g of beans. And give the hand grinder a try.
If the puck afterwards is really wet and soupy, try more coffee; if it’s a brick and water doesn’t want to flow, try a bit less.
Something that could be happening with the cheap grinder is that it’s producing a lot of random grind sizes, so you end up with super fine grinds, the grinds you want, and coarse grinds all mixed together. That would make it hard to get a good tasting shot no matter what you end up with for timing and ratios.
Yeah it’d just be nice if they weren’t trying so hard to emulate all the bad parts of paper printer companies!
Do you need a TPU? If you have a 7th-gen or later Intel iGPU you can use OpenVINO in Frigate, and it works just as well if not better.
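In Frigate that’s mostly a detector config change; roughly like this (the model paths match what recent Frigate images bundle, but double-check the OpenVINO detector docs for your version):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```

You also need to pass /dev/dri into the container so the iGPU is visible.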
I’m happy I built a Voron Trident a few years ago instead of going for the Bambu Lab printer. It’s more work of course, but I also know what parts are in it and how to fix it or upgrade it.
Opnsense kinda has a webUI for HAProxy, but it’s also not very good.
I recommend learning the config files, since HAProxy is probably the best option for a HA load balancer.
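A minimal haproxy.cfg sketch for plain HTTP balancing (addresses and names are made up):

```
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    # 'check' enables active health checks on each node
    server app1 192.168.1.11:8080 check
    server app2 192.168.1.12:8080 check
```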
I remember people warning this would happen years ago on various online threads, and getting ridiculed for it.
Yes, some, but the power consumption is extremely high. A cheap $40 PC with an i5-6500 CPU would outperform it at about 1/15th the power draw.
This thing is mostly just interesting to play with.
when selfhosters can just help each other by storing parts of each other’s backups.
That’s essentially what Storj, Sia, etc. are for; they’re decentralized storage systems where users contribute storage to the network, which automatically distributes data over all the ‘hosters’.
LXCs are more an alternative to VMs if your use-case supports it.
Docker is its own thing with pre-made application images.
VMs barely use more resources than LXCs; a minimal Debian install probably needs another 50MB of RAM in a VM vs an LXC, and that’s about the only difference. It matters at scale, but for home use it really doesn’t IMO.
That said, LXC has some benefits over a VM: you can pass through mounts and parts of devices. That’s useful for Frigate, where you want to use Intel QuickSync or OpenVINO while still sharing the GPU with the host and other containers; you can’t do that with a VM unless you have a device you can dedicate to the VM alone. You can also bind mount a directory on the host to a directory inside the container, which is useful for sharing files between multiple containers.
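On Proxmox a bind mount like that is a one-liner per container (the VMID and paths are just examples):

```shell
# bind-mount a host directory into LXC container 101 as /mnt/shared
pct set 101 -mp0 /tank/shared,mp=/mnt/shared
```

Device passthrough (e.g. /dev/dri for the iGPU) works similarly via the container’s config, though the exact lines depend on your Proxmox version.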
You can either:
A) Use a different port: just set up the new service to run on a port that’s not used by the other service.
B) If it’s an HTTP(S) service, use a reverse proxy and a subdomain (for raw TCP this only works if the proxy can route on something like TLS SNI).
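For option A with Docker, that’s just the host side of the port mapping (service name and ports are examples):

```yaml
services:
  second-app:
    ports:
      # host port 8081 instead of the already-taken 80;
      # the container still listens on 80 internally
      - "8081:80"
```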