by curvilinear_m on 5/2/22, 7:30 PM with 9 comments
I am going through the pain of restoring my files after a bad error on my part: I used Déjà Dup (the default with Ubuntu 20.04.4) to back up the default home folder.
Unfortunately, my update broke everything and I lost access to my session. I hoped that Déjà Dup would let me restore from the live CD, but found out that it needs the same computer name and username, so I had to reinstall everything before being able to restore.
This seems inconvenient, and I don't understand why it is the default behavior.
What are you using to do your backups? Have you felt the same frustration I'm facing?
by smoldesu on 5/2/22, 7:38 PM
Nobody made the conscious decision to kneecap you here: it's simply dangerous to treat one computer's hardware configurations the same as another. Backing up your home folder is your best bet, as it's relatively safe to "restore" after an install. It's up to you to have a bootstrapping script ready to get your system to the same place it was before it was borked.
This is one of the things NixOS would solve quite well, but unfortunately the software just isn't quite there yet, in my opinion. Your Nix install has two configuration files: a software config and a hardware one. Your software configuration is portable: you can use it to define things like users, user configurations, permission structures, desired global applications, themes, groups, system config and more. Your hardware configuration file is specific to the machine you're running, and is auto-generated when you install a system.
This separation makes it really easy to "carry around your system" as a ~10 kB text file. The tough part is that it doesn't work very well with most desktop environments, and it's something of a pain in the ass to set up. One day I hope we get there though, because it really would be a best-in-class solution.
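For the curious, a rough sketch of what carrying that file around looks like in practice; the ~/nix-config repo path is made up, and the commands are the stock NixOS tooling:

    # On the new machine, regenerate the machine-specific half...
    sudo nixos-generate-config      # writes /etc/nixos/hardware-configuration.nix
    # ...drop in the portable half you carried over...
    sudo cp ~/nix-config/configuration.nix /etc/nixos/configuration.nix
    # ...then build and activate the system it describes.
    sudo nixos-rebuild switch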
by LinuxBender on 5/2/22, 7:40 PM
Before planning a backup I first ensure all the files I care about are isolated into unique directories not shared with anything I don't care about, e.g. /data/something_unique, /opt/something_unique, /home/username/something_unique, and so on; something_unique is just a unique directory that contains anything and everything I care about. One could also define other shared directories in rsnapshot, like /home/username/.config, or the entire home directory if you have the disk space for it.
I then have rsnapshot installed to keep a local snapshot of the files I consider important enough to want a few versions of. rsnapshot is just a Perl script that uses hard links to reduce the disk space used by duplicate files; it is executed from crond. I then rsync that snapshot over to a NAS, with the rsync job also called by crond. If files are very important I will also copy them to a portable NAS and put it in my vehicle.
Anything not defined in /etc/rsnapshot.conf is a file I do not care about, treated as ephemeral just like the rest of the OS.
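A minimal /etc/rsnapshot.conf along those lines might look like the following; the directory names are placeholders, and note that rsnapshot insists on tabs (not spaces) between fields:

    # fields below MUST be separated by tabs
    config_version	1.2
    snapshot_root	/var/cache/rsnapshot/

    # keep 6 "hourly" and 7 "daily" snapshots; crond runs
    # "rsnapshot hourly" and "rsnapshot daily" to rotate them
    retain	hourly	6
    retain	daily	7

    # only what is listed here gets backed up; everything else is ephemeral
    backup	/data/something_unique/	localhost/
    backup	/home/username/.config/	localhost/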
If one does not have their OS set up to be ephemeral (cattle vs. kittens), then another, less secure option might be to keep your important files in a dedicated LVM volume and use lvchange to set that volume read-only, reducing the risk of an OS upgrade touching it.
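For the LVM variant, the toggle is a one-liner each way; the volume group, volume name, and mount point here are made up:

    # before the upgrade: unmount, flip to read-only, remount read-only
    sudo umount /data
    sudo lvchange --permission r vg0/important_data
    sudo mount -o ro /dev/vg0/important_data /data

    # after the upgrade: flip back to read-write
    sudo lvchange --permission rw vg0/important_data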
[1] - https://rsnapshot.org/
by ohiovr on 5/2/22, 9:11 PM
Here is an example of a backup script I use: https://pastebin.com/16MPxby0
This script shows me what files have changed, so if a huge number of files have changed I can read that as a potential ransomware attack, or some other bonehead thing I did.
This script will DELETE files on the backup that no longer exist in my home directory. I do that so I don't have a bunch of zombie files lying around that are no longer needed.
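The linked script aside, the heart of that behaviour fits in a couple of rsync calls; the destination path is a placeholder:

    # -a archive mode, -i itemize each change so a suspiciously long
    # changelist stands out; --delete removes files from the backup
    # that no longer exist in the source
    rsync -ai --delete --dry-run ~/ /mnt/backup/home/   # preview the damage first
    rsync -ai --delete ~/ /mnt/backup/home/             # then run it for real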
by themodelplumber on 5/2/22, 7:37 PM
This is run periodically via your favorite method. You can then boot off the backup destination drive later if needed (GRUB picks it up when plugged in). This seems to work fine for me.
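For the "favorite method" part, a plain cron entry is the usual minimum; the script path here is hypothetical, standing in for whatever does the copy:

    # m h dom mon dow  command -- run the backup nightly at 02:30
    30 2 * * * /usr/local/bin/backup-to-external.sh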
BTW, generally "broke everything" ought to be listed out, clarified, etc. so you can understand and document your system better. The notes you keep about it can help future upgrades complete with fewer issues.
by synicalx on 5/3/22, 4:57 AM
- Nothing important is stored "on" the machine itself; I put it in one or more cloud storage options, git, a password manager, etc. Important documents like photos and legal stuff I keep in two cloud storage services, and REALLY important stuff gets burned onto a DVD or Blu-ray that I lock up somewhere.
- I have a... rather long "sudo apt install blah" script that has every package I've needed or wanted to install. Every time I install something new I just add another entry to that script. Although I have to say, it's looking a bit ridiculous now at 160+ lines, most of which are just individual packages (a condensed sketch follows this list).
- I have another script to clone my dotfile/config repo and move all of those files into position.
- I don't have anything to set up my user account, but that's only a 5-second job so it's probably not worth the time.
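A condensed sketch of what those two scripts boil down to; the repo URL and package list are obviously placeholders:

    #!/bin/sh
    set -e

    # 1. reinstall every package I've ever decided I need
    sudo apt update
    sudo apt install -y git vim tmux htop    # ...plus ~160 more lines of this

    # 2. clone the dotfile repo and move configs into position
    git clone https://example.com/me/dotfiles.git ~/dotfiles
    cp -r ~/dotfiles/.config ~/              # or symlink, per taste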
I generally find actual backups are more hassle than they're worth for this kind of thing. The old-fashioned sysadmin in me knows that no backup actually exists until you test it - and I really don't feel like testing my own backups all the time.
by doubled112 on 5/3/22, 1:16 AM
I have a git repo with some dotfiles and setup scripts - dotbot configures my user, Ansible configures my system.
Most files go in a self hosted Nextcloud sync folder. If it is larger it goes on a file share.
Anything else is simply forgotten. I can rebuild in ~5 minutes after a clean install, and I can keep multiple machines in sync like this as well.
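Concretely, that five-minute rebuild is roughly the following; the repo URL and playbook name are assumed, and ./install is dotbot's conventional bootstrap wrapper:

    # clone once, let dotbot lay out the user environment,
    # then let Ansible converge the system itself
    git clone https://example.com/me/dotfiles.git ~/dotfiles
    cd ~/dotfiles
    ./install                                  # dotbot symlinks dotfiles into $HOME
    ansible-playbook -i localhost, -c local system.yml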
by croh on 5/3/22, 10:49 AM
- config using dotfiles (maybe in some remote git repo)
- projects in git repo
- media and other stuff in different drive
Every time you make changes to config or projects, push them to the remote repo immediately. This way, you will always be ready to upgrade.
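A tiny shell helper makes that push-immediately habit nearly free; the repo path and default commit message are assumptions:

    # stage, commit, and push the whole config repo in one go
    dotsave() {
        git -C ~/dotfiles add -A &&
        git -C ~/dotfiles commit -m "${1:-config update}" &&
        git -C ~/dotfiles push
    }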