

most fediverse software does not collect precise geolocation though, which is why this point was brought up


unless other federated instances synced them
I don’t think the homeserver tries to fetch media from remote servers if it was local but has since been deleted
we often talk about how discord is a black hole of information, but this is worse than that


Eh, if it’s an open-source application where you can review the code to confirm that the software isn’t tracking you, then it’s not an issue.
you can’t review what’s actually running on the server, or what your local admin may have added to it.


are the media files redownloaded from other servers when someone tries to load them? I guess all local media is lost forever, but maybe not remote ones


Make regular backups of the DB.
tbh that should apply to any kind of selfhosted service, especially when it’s not only for you
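for a dockerized postgres, a regular backup can be as little as one crontab line. a rough sketch, assuming a compose service named `db`, a database called `mydb`, and paths like `/srv/compose.yml` and `/backups` (all of those are placeholders for your own setup):

```
# crontab entry: dump the database every night at 03:00 into a date-stamped,
# compressed file (note: % must be escaped as \% inside crontab)
0 3 * * * docker compose -f /srv/compose.yml exec -T db pg_dump -U postgres mydb | gzip > /backups/mydb-$(date +\%F).sql.gz
```

and every now and then, actually try restoring one of those dumps. a backup you’ve never restored is a hope, not a backup.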


I don’t know, I would not recommend kubernetes to most people who aren’t already familiar with it, and especially not to beginners. It’s too many moving parts, and for most selfhosted setups, its capabilities are not needed I think.


ok, but who is the target audience for that? I am interested now


it was just a joke about the “one more dashboard” part :D it’s fine


soo… servers your router doesn’t like for whatever reason get blocked for everyone else? with gov ID checks? why would we want that?
and how is this a dashboard idea?


wonderful! now somebody needs to rewrite this in Rust and we are done!


A time limit after disasters would be necessary. It’s difficult to think of a proper time limit though, as even a month might not be enough time if your entire house burns down.
and also accounting for low bandwidth connections… what’s more, some shitty providers even have monthly data caps
Maybe a payment system could be set up to where, if your server doesn’t ping for a week, your credit card is automatically charged (after pinging you with many emails).
yeah, that would be almost a necessary feature. being able to hold on to the backup when you really can’t restore.


such a system would need a strict time limit for restoration after the catastrophe. Otherwise leeching would be too easy.


better would be something that can just eat a zfs send stream, but I guess for an emergency it’s fine. I would still want to encrypt everything somehow, though.
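one way to get both in a single pipeline, sketched under the assumption that the receiving side only ever stores an opaque blob (the pool, snapshot, and host names are all made up):

```
# send a snapshot, encrypt it symmetrically with gpg,
# and ship the opaque blob to the backup host
zfs send tank/data@monday \
  | gpg --symmetric --cipher-algo AES256 \
  | ssh backup-host 'cat > /backups/tank-data-monday.zfs.gpg'
```

and if the dataset uses native zfs encryption, `zfs send --raw` keeps the stream encrypted end to end without needing gpg at all.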


a firewall can be used to filter incoming traffic by its properties. most consumer home routers don’t expose the firewall settings
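to illustrate, on a Linux box acting as the router, a minimal nftables ruleset that drops unsolicited incoming traffic while allowing established connections and ssh might look like this (the ssh port and the overall shape are just an example, not a complete config):

```
# minimal nftables sketch: allow loopback, established/related, and ssh; drop the rest
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iifname "lo" accept
    ct state established,related accept
    tcp dport 22 accept   # ssh; adjust or remove as needed
  }
}
```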


oh! I don’t know how nix containers work, but I would look into creating a shared network between the containers that is separate from the normal network.


oh, I see what you mean!
they do that for the sake of providing an example that works instantly. but in the long term it’s not a good idea. if you intend to keep using a service, you are better off connecting it to a postgres db that’s shared across all services. once you get used to it, you’ll do that even for services that you are just quickly trying out.
how I do this: I have a separate docker compose that runs a postgres and a mariadb, and these are attached to a docker network which is created once with a command, rather than in a compose file. in every compose file where the databases are needed, this network is specified as an “external” network. this way containers across separate compose files can communicate.
my advice is to also make this network “internal”, which is a weird name, but the gist is that this network in itself won’t provide access to your LAN or the internet, while other networks attached to the container may still do that if you want.
basically the setup is a simple command like “docker network create something”, and then like 3 lines in each compose file. you would also need to transfer the data from the separate postgreses to a central one, but that’s a one-time process.
let me know if you are interested, and I’ll help with the commands and what you need. I don’t mind if you only get around to this months later, it’s fine! just reply or send a message
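a rough sketch of the pieces described above (the network name `dbnet`, images, and file layout are just placeholders):

```
# one-time setup:
#   docker network create --internal dbnet

# databases/compose.yml
services:
  postgres:
    image: postgres:16
    networks: [dbnet]
networks:
  dbnet:
    external: true

# some-app/compose.yml
services:
  app:
    image: example/app          # placeholder image
    networks: [dbnet, default]  # default still gives the app LAN/internet access
networks:
  dbnet:
    external: true
```

because `dbnet` is marked `external`, compose attaches to the pre-created network instead of making a per-project one, so containers from both files can reach each other; `--internal` keeps that network from routing anywhere else.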


just to be clear, are you saying that most beginners just copy paste the example docker compose from the project documentation, and leave it that way?
I guess that’s understandable. we should have more starter resources that explain things like this. how would they know otherwise? not everyone goes in with the curiosity to look up how certain components are supposed to be run


almost every self hosted service needs a database. and what “another” database? are you keeping a separate postgres for each service that wants one? one of the most important features of postgres is that a single database server can hold multiple databases, with permissions and whatnot
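for reference, carving out a per-service database on one shared postgres is just a few statements. a sketch, using “immich” purely as a placeholder service name:

```
-- run as the postgres superuser
CREATE ROLE immich LOGIN PASSWORD 'change-me';
CREATE DATABASE immich OWNER immich;
-- by default any role may connect to a new database; lock that down
REVOKE CONNECT ON DATABASE immich FROM PUBLIC;
GRANT CONNECT ON DATABASE immich TO immich;
```

then the service just gets its own connection string, and everything lives on the one server.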


I think it depends. when you run many things for yourself and most services are idle most of the time, you need more RAM, and CPU performance is not that important. a slower CPU might make the services slower, but RAM is a hard limit on what you can run at all. 8 GB is indeed a comfortable amount when you don’t need to run a desktop environment and a browser on it besides the services, but with things like Jellyfin and maybe even Immich, which hoard memory for cache, it’s not that comfortable anymore.
how did you migrate your existing accounts to this system? or did you just make a new account from scratch?