I just had a long day migrating a user to a new server. The trouble was that the user had about 70 GB of mail, six web domains of around 10 GB, and a couple of 2 GB databases. A long time ago we’d abandoned the Hestia backup system and were backing up to S3-type storage. But now we had to move to a new server, and there wasn’t enough room to do a backup.
So my first approach was to break the backup into several stages. On the source server I used Backup Excludes to exclude everything but the databases, made a backup, and copied it to the target server. Then I restored it with: v-restore-user user userfile.tar "no" "no" "no" "yes" "no" "no" "no"
I repeated this process for user dirs + cron, and for the websites. For the websites it was necessary to restore each domain individually, e.g.: v-restore-user user userfile.tar "domain1.com" "no" "no" "no" "no" "no" "no"
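For anyone following along, the staged restore looked roughly like this. A sketch, not a verbatim transcript: the v-restore-user argument order here is assumed to be web, dns, mail, db, cron, user dirs, notify, so check the usage line on your own Hestia version, and the backup filename is just a placeholder.

```shell
# On the source server: set Backup Excludes (in the panel) so that
# everything except the databases is skipped, then make the backup
v-backup-user user

# Copy the archive across (backups normally live in /backup/)
scp /backup/userfile.tar target:/backup/

# On the target server: restore databases only
v-restore-user user userfile.tar "no" "no" "no" "yes" "no" "no" "no"
```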
So far so good. But then I ran into a problem with the mail. There was only 30 GB of space on the root filesystem, and because v-restore-user untars the files before copying them into place, you need enough space for the original tar file plus its expanded contents. That wasn’t going to work, and there was no way to split the mail up.
I’d added an additional drive and mounted it as /home/, so I followed the instructions for doing a bind mount to put the /backup/ directory under there. That didn’t work: v-restore-user was still putting its temp files in /backup/ and maxing out the root drive.
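For reference, the bind mount attempt was essentially the following (paths are examples from my setup; I'm including it as a sketch of what didn't help, not as a fix):

```shell
# Extra drive is mounted at /home; expose space from it at /backup
mkdir -p /home/backup
mount --bind /home/backup /backup

# Confirm what is actually mounted where
findmnt /backup
```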
Finally I found a solution by looking in the code. There seems to be an undocumented feature for changing the location where v-restore-user puts its temp files. You set it first with: export BACKUP_TEMP=/path/to/drive/with/ample/space
then run the command as normal to extract the mail: v-restore-user user userfile.tar "no" "no" "maildomain.com" "no" "no" "no" "no"
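Putting the two pieces together, the working sequence was essentially this. Since BACKUP_TEMP appears to be undocumented, check it still exists in the source of your Hestia version before relying on it; the temp path here is just an example:

```shell
# Point the restore's temporary extraction area at the big drive
export BACKUP_TEMP=/home/restore-tmp
mkdir -p "$BACKUP_TEMP"

# Restore mail for one domain at a time (args: web dns mail db cron udir notify)
v-restore-user user userfile.tar "no" "no" "maildomain.com" "no" "no" "no" "no"
```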
The temp file was created on my spacious extra drive, the root file system didn’t max out, and the user was restored.
A couple of thoughts: it might be worth adding BACKUP_TEMP to the documentation!
Also, it would be handy if the v-backup-user command could be asked to create a backup containing just the skeleton of the user’s data, without the big files. E.g. v-backup-user username --minimal would create the structure of the web dirs and mail dirs, plus all the templates, SSL certs, users etc., but omit the files in /web//public_html/ and /mail//, which could be rsynced over afterwards.
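The --minimal flag is hypothetical, it doesn't exist today, but the "rsync the big files over afterwards" half is already possible with plain rsync, something like the following (host name and paths are examples):

```shell
# After restoring the skeleton on the new server, copy the bulky
# payloads directly, preserving ownership, permissions and hard links
rsync -aH --numeric-ids /home/username/web/  newserver:/home/username/web/
rsync -aH --numeric-ids /home/username/mail/ newserver:/home/username/mail/
```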
For problems like yours, where you need more space than you have, there are several options.
The first one: use a cloud provider with hourly billing, like Vultr for example.
They have VPSes with a lot of storage, billed hourly, and the whole operation will cost you maybe a few cents.
In my opinion this is the optimal option.
The second option is to use pipes in combination with rsync/scp/zstd/zip, whatever you use.
There are endless ways to do that, and what exactly to pick depends on the case and the problem. But for example you can grab the data via rsync, via scp, via pipes with scp/rsync, by modifying the backup script so it saves the data on your new server instead of the old one, or by mounting a remote disk as a local one (some kind of Ceph setup).
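As a concrete example of the pipe idea, you can stream a directory through tar and a compressor without ever writing an intermediate archive to disk. This sketch runs the pipe locally so you can see it work; on a real migration the middle of the pipe would be ssh (e.g. `tar -C /home -cf - user | gzip | ssh target 'gzip -d | tar -xf - -C /home'`, with host and paths being your own). I'm using gzip here; zstd works the same way.

```shell
#!/bin/sh
set -e

# Two scratch directories standing in for the old and new server
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# Pack, compress, decompress, unpack -- all in one pipeline,
# so no archive ever touches the disk in between
tar -C "$src" -cf - . | gzip | gzip -d | tar -xf - -C "$dst"

cat "$dst/file.txt"   # the file arrived intact
```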
I mean, there really are a lot of options.
All of them are fairly involved, though. The backup system in HestiaCP is very good in my opinion, but bad for big archives of data.