Obviously. I’m not an Artist.
My Blåhajar: Definitely not Hitlers, and also of course alive.
I’m still occasionally setting up new services, trying to get a shared storage for all devices on my network working; once we get fiber and I have my second residence in Frankfurt, I’ll have to redesign all the networks anyway:
Well, my Arch just works. After a bit of configuration it’s even user-friendly. And that doesn’t even take longer than setting up Windows and throwing out all the unnecessary junk. Other esteemed users like @Peter_Arbeitslos@feddit.org will help you with that.
The local backups are done hourly, and incrementally. They hold 2+ weeks of backups, which means I can roll back versions of packages easily, as the normal package cache is cleaned regularly. They also prevent accidentally losing individual files to weird behaviour from apps, or from me.
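Roughly, each hourly run looks like this; a minimal sketch using rsync hardlink snapshots, where the paths and retention window are placeholders rather than my exact setup:

```bash
#!/bin/bash
# Hourly incremental snapshot (sketch; paths and retention are placeholders).
SRC=/                          # what to back up
DST=/backups                   # the local backup array
NEW="$DST/$(date +%F_%H%M)"    # one directory per hourly run
LAST="$DST/latest"             # symlink to the previous snapshot

# Unchanged files become hardlinks into the previous snapshot,
# so each hour only costs the space of what actually changed.
rsync -aAX --delete \
  --exclude={/backups,/proc,/sys,/dev,/run,/tmp} \
  --link-dest="$LAST" "$SRC" "$NEW"

# rsync preserves source mtimes, so stamp the snapshot dir with its
# creation time; the retention step below relies on it.
touch "$NEW"
ln -sfn "$NEW" "$LAST"

# Retention: drop snapshots older than ~2 weeks.
find "$DST" -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
```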
The backups to my workstation are also done hourly, shifted by 15 minutes for every device, and also incremental. They protect against the device itself breaking, ransomware, or some rogue program rm -rf’ing /, which would affect the local backups too (as they’re mounted in /backups; those are mainly for providing a file history, as I said).
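The scheduling is plain systemd timers; the 15-minute shift per device is just a different OnCalendar line in each one. A sketch with placeholder unit names:

```ini
# /etc/systemd/system/backup-remote.timer (sketch; names are placeholders)
[Timer]
# This device fires at :15 past every hour; the next device gets :30, and so on.
OnCalendar=*-*-* *:15:00
Persistent=true

[Install]
WantedBy=timers.target
```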
As most drives are slower than the 1 Gbps Ethernet, the local backups are just more convenient to access and use than the ones on my workstation, but otherwise exactly the same.
The .tar.xz’d backups are actual backups, considering they’re not easily accessible, need to be unpacked, and are stored externally.
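Producing one of those is just tar with xz compression; a sketch with placeholder paths:

```bash
# Pack the latest snapshot into one self-contained archive (sketch; paths are placeholders).
# -h dereferences the 'latest' symlink so the real files get archived.
tar -chJf "backup-$(date +%F).tar.xz" -C /backups latest
# Then ship the archive off-machine, e.g. to the NAS or a cloud bucket.
```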
I didn’t measure the speeds of a normal SSD vs. the RAID, but it feels faster. Not a valid argument, of course. Either way, I want to use it as RAID 0/unraided for more storage space, so I can have 2 weeks of backups instead of 5 days (considering it always keeps space for 2 backups, I’d have under 200 GB of usable space instead of 700+; a single 256 GB drive vs. 3 × 256 GB striped).
The latest hourly backup is 1.3 GB in size, but if an application with a single big DB is in use, that can quickly shoot up to dozens of GB, relatively big for a homeserver hosting primarily my own stuff plus a few things for my father. Synapse’s DB alone is 20 GB. On an uneventful day that adds up to about 31 GB (24 hourly backups × 1.3 GB). With several updates done, meaning dozens of new packages in the cache, that could grow to 70+ GB.
If it fails, I will just throw in a new SSD and redo the backup. I sometimes delete everything and redo it anyway, for various reasons. In any case, I usually have copies of all files in three places: on the original drive, in the local backup on the device, and in the backup on the workstation. And even if those three should fail, which I will immediately know about due to monitoring the systemd job, I still have daily backups on two different global hosters as well as the separate NAS.

The only case in which all full backups would be affected is a global destruction of all electronics by solar storms, or a general destruction of Earth, in which case that’s the least of my problems. And if the house burns down and I only have the daily backups, potentially losing 24 hours of data, that’s also the least of my problems.

Yes, generally using RAID 5 for backups is better, but in my case I have multiple copies of the same data at all times, surpassing the 3-2-1 rule by far (6-2-2, and soon 6-2-3). As all of my devices are connected via Gigabit, getting backups from e.g. the workstation after the PC (with backups) died is just as fast as getting backups from the local PC backup RAID itself. And RAID 0 is better (in speed) than just slapping the drives together in series.
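The monitoring of the systemd job is just its OnFailure= hook; a sketch where the notify unit and script path are placeholders you’d fill with mail, ntfy, or whatever:

```ini
# /etc/systemd/system/backup-remote.service (sketch; names/paths are placeholders)
[Unit]
Description=Hourly incremental backup to the workstation
# Fires a notification unit if a run fails.
OnFailure=backup-failed-notify.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-remote.sh
```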
Because that’s what RAID 0 is for: basically adding together storage space, with faster reads and writes. The local backups are basically just there to have earlier versions of (system) files, incrementally every hour, for reference or restoring. In case something goes wrong with the main root NVMe and a backup SSD at the same time (e.g. a trojan wiping everything), I still have exactly the same backups on my “workstation” (a beefier server), also on a RAID 0, of three 1 TB HDDs. And in case the house burns down or something, there are still daily full backups on Google Cloud and Hetzner.
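Assembling such an array is a one-liner with mdadm; a sketch where the device names are examples, not my actual disks:

```bash
# Stripe three disks into one big, fast RAID 0 array (sketch; device names are examples).
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /backups
```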
256 GB root NVMe, 1 TB games HDD, 3 × 256 GB SSDs as RAID 0 for local backups, 256 GB HDD for data, 256 GB SSD for VM images.
Then, he inserted a trojan in multiple steps until he gained RCE as root.
I understand. But I also just wouldn’t use Windows lol
RX 7900 XTX + HL 1
Not really Mac, but I had more issues doing normal, very light work on my iPad than on Arch testing, even with Nvidia.
Everyone who needs to use Mac or Windows for work will very likely not have permissions to install anything anyway. And the lost souls using those “Operating Systems” of their own free will … well, “some just need to be left behind. The family doesn’t need to care for them anymore.”
The index is technically a bit better, but it’s not only just as swamped with the same SEO-optimized shit, it’s also packed with irrelevant bullshit.
Be depressed
Want to commit suicide
Google it
Get this result
Remember comment
Sue
Get thousands of dollars
Depression cured (maybe)
Every political position should just be filled by a Blåhaj.