

Not to worry, they’ll ship 'em via snap.
It’s okay, all they have to do is get a bid from another Big Tech company that doesn’t have home stuff like Amazon. Failing that, there’s always private equity. 🥲
Don’t know. I remember seeing it happen on my laptop, but I don’t recall the context. It’s possible that I’m hallucinating and it was the previous Ubuntu install. But I think it was Debian 12.
Debian 12 also does this. 😂
I’m writing here to give my sincere applause to this effort.
An open-source CPU, somewhat competitive with good ARM and x86 cores, would be a groundbreaking achievement.
I didn’t assume that. While that’s one interpretation of my comment, there are others.
Well, in capitalist countries there’s also the problem of distributing the value created by automation that displaces workers. So workers have an incentive not to automate, since they’re often left out of the value the automation produces.
Is it an official Chinese policy to pursue automation as a means of dealing with population decline, or is it just the obvious solution?
This is another reason why income and wealth inequality are bad for us. We have a natural level of psychopathy in the population. Some are bound to stumble into the kind of power given by obscene inequality.
Right, so I guess the question about the 3 is whether it means 3 backups or 3 copies. If we take it literally (3 copies), then it protects against user error only. If it means 3 backups, it protects against hardware failure too.
E: Seagate calls them copies and explicitly says the implementer can choose how the copies are distributed across the 2 media. The woodchipper scenario would be handled by the 2 media requirement.
Hm, I wonder why snapshots wouldn’t satisfy the 3. Copies on the same disk, like /file, /backup1/file, /backup2/file, should satisfy it. Why wouldn’t snapshots be equivalent, if the 3 doesn’t guard against filesystem or hardware failure? Just thinking out loud and curious to hear opinions.
Does this make sense?
If Raid is backup, then Unraid is?
Try ZFS send if you have ZFS on the other side. It’s insane. No file I/O, just a snapshot and then the time for the network transfer of the delta.
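Roughly, the manual version is a snapshot followed by an incremental send piped into receive on the other end. A rough sketch of the flow, wrapped in Python just for illustration; the pool, dataset, and host names are made up:

```python
# Rough sketch: incremental ZFS replication over SSH.
# Assumptions: ZFS on both ends, SSH key auth, and made-up names
# ("tank/immich", "backup/immich", "backuphost").
import subprocess

src = "tank/immich"
dst = "backup/immich"
host = "backuphost"
prev_snap = f"{src}@hourly-0100"  # last snapshot the receiver already has
new_snap = f"{src}@hourly-0200"

# Taking the snapshot is atomic and near-instant, so the app keeps running.
subprocess.run(["zfs", "snapshot", new_snap], check=True)

# Incremental send: only the blocks changed between the two snapshots go
# over the wire, piped straight into `zfs recv` on the remote side.
send = subprocess.Popen(["zfs", "send", "-i", prev_snap, new_snap],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", host, "zfs", "recv", "-F", dst],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```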
Every hour. Could do it more frequently if needed.
It depends on how resource intensive the backup process is.
Consider an 800GB Immich instance.
Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the difference to the backup destination. Any backup system that operates on top of the file system will take about as long. In addition, unless you’re using something that can take snapshots of the filesystem, you have to stop Immich during the backup in order to avoid backing up an invalid app state.
ZFS send (via syncoid), on the other hand, takes less than 5 seconds to discover the differences, and the rest of the time goes to the data transfer, at 100MB/s in my case. Since ZFS send is based on snapshots, I don’t have to stop the service either.
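For anyone curious, the syncoid invocation is basically a one-liner; here it is wrapped in Python so it can be run from a timer. The dataset and host names are hypothetical:

```python
# Rough sketch: hourly replication via syncoid (part of sanoid).
# Assumptions: syncoid installed on the source, SSH key auth to the
# receiver; "tank/immich" and "backupuser@backuphost:backup/immich"
# are made-up names.
import subprocess

# syncoid creates its own snapshots and works out the incremental
# `zfs send | zfs recv` on its own; run this from cron or a systemd timer.
subprocess.run(
    ["syncoid", "tank/immich", "backupuser@backuphost:backup/immich"],
    check=True,
)
```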
When I used Duplicity, I would back up once a week because the backup process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there’s almost no visible impact.
I’m now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I’ll move all my machines to ZFS on root.
What’s the second B stand for?
Or drink non-decaf and be up at 2AM. 😆
Why did you do that to yourself… 🥹
I like snap, send me to camp.