Issue with Backup

Hello guys,

I am facing an issue with backups:

Filesystem Size Used Avail Use% Mounted on
/dev/ploop32399p1 30G 13G 16G 45% /
none 3.0G 0 3.0G 0% /sys/fs/cgroup
none 3.0G 0 3.0G 0% /dev
tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs 615M 1.3M 614M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
none 3.0G 0 3.0G 0% /run/shm
tmpfs 615M 0 615M 0% /run/user/1000

When I try to do the backup I get an error that I have no space: “not enough diskspace available to perform the backup.”
Also, the backup is sent to an FTP server outside of this machine.
The web size is 8.35 GB.

Can you tell me what is going on?

Best Regards

You will need 8.35 GB for the tar file plus the space for the compressed file. That's why you don't have enough space.

And why isn't it saved compressed from the start, or the tar file sent without compressing it?

Because the source code says so.

It’s stupid!

Well. It is definitely not useful for you or for me.

Feel free to change the behaviour and provide a Pull Request on our github project.

Tell me where and I will.


You should find all the information here: hestiacp/CONTRIBUTING.md at main · hestiacp/hestiacp · GitHub

Currently we create a “temp” folder, from that folder we create a .tar file, and that tar file we quickly upload to a server.
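A rough sketch of that flow (illustrative only; the paths and names are hypothetical, not the actual v-backup-user code) shows why roughly twice the site's size is needed locally, since the staged copy and the archive exist at the same time:

```shell
#!/bin/sh
# Illustrative sketch of the temp-folder-then-tar flow (NOT the real HestiaCP code).
set -e
SRC=/tmp/demo-site                           # hypothetical site data
mkdir -p "$SRC" && echo "hello" > "$SRC/index.html"

TMP=$(mktemp -d)                             # 1. create a "temp" folder
cp -a "$SRC" "$TMP/site"                     # 2. stage a copy of the files
tar -czf /tmp/site.tar.gz -C "$TMP" site     # 3. archive the staged copy
                                             #    (copy AND archive now coexist on disk)
rm -rf "$TMP"                                # 4. only now is the staged copy freed
# 5. upload the archive, e.g.:
#    curl -T /tmp/site.tar.gz ftp://backup.example.com/ --user name:pass
ls -l /tmp/site.tar.gz
```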

Yes, it is a known issue and as far as I know there is no simple improvement for it…

I have been thinking about backups recently.

I think the current system needs a rewrite, and that's a big problem since the development team is already spending a lot of time on it and I don't know whether all that effort would be wasted.

I love the functionality to copy a backup over FTP or Amazon S3.
Yesterday I was thinking about mounting /backups directly on an S3 bucket.
I think incremental backups are a must and very efficient.

With all that in mind, I don't know where we are heading. I see lots of options that are incompatible with one another.

But as it is, with just a tmp folder and one file, we need to check whether the option for local save is on or off, and then apply the ×2 check!

Yes, the current backup system works great for a few smaller sites, but if you have larger sites or a lot of sites it tanks performance…

So it needs a rewrite, but so do IPv6 support, the web UI itself, and so on…

FTP, SFTP and Backblaze (B2) are already supported; if somebody wants to invest an hour, S3 can also be added.

Yesterday I was thinking about mounting /backups directly on an S3 bucket

Already supported, however last time I tried it I had issues with restoring…

I think incremental backups are a must and very efficient

Borg, Duplicati (https://www.duplicati.com), or any other third-party incremental backup tool is probably the route to go…

The temp file is still created on the local disk even for remote backups.

Currently, if you have multiple large sites, I would suggest creating multiple users…

You can remove the check for free space or change the formula for it, as I did.
By default it needs 2× the space. In v-backup-user, line 181:
let u_disk=$(($(get_user_disk_usage) * 2))

I changed the 2 to 1.7 and it fits perfectly.
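One caveat, assuming v-backup-user uses plain bash `$(( ))` arithmetic (which is integer-only, so a literal 1.7 would be a syntax error there): a factor of 1.7 can be expressed with integers by multiplying by 17 and dividing by 10. The stub helper below is hypothetical, standing in for the real `get_user_disk_usage`:

```shell
#!/bin/bash
# Stub standing in for the real get_user_disk_usage helper (hypothetical value in KB).
get_user_disk_usage() { echo 1000000; }

# original line: let u_disk=$(($(get_user_disk_usage) * 2))
# 1.7x scaling in integer-only bash arithmetic: *17 then /10
u_disk=$(( $(get_user_disk_usage) * 17 / 10 ))
echo "$u_disk"   # prints: 1700000
```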

Be careful: there is a possibility that the free space on your root partition will drop to 0 during the backup.
It's better to attach an additional disk for backups and mount it at /backup.

It depends a lot on your website's content: lots of video or images might need a higher number…


Maybe we should put that number in a variable and make it configurable via the UI.

It would be nice if we had a history of the compression ratio.

For example:

Yesterday: User 1: usage 500 MB, backup size 200 MB, so ratio 0.4.

If the new user usage is now 550 MB, you can assume more or less that the next backup will be around 550 × 0.4 = 220 MB; with a safety factor that's roughly 250 MB, and × 2 = 500 MB disk space needed.

User 2: usage 1 GB, backup 0.8 GB, so ratio 0.8.
New usage = 1.05 GB: 1.05 × 0.8 = 0.84 GB, plus a 10% safety factor ≈ 0.92 GB, and × 2 ≈ 1.85 GB disk space needed.
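A minimal sketch of that estimate in shell, assuming a stored ratio history is available and working in KB so that bash's integer-only arithmetic suffices (all the numbers below are hypothetical):

```shell
#!/bin/bash
# Sketch: predict the disk space a backup needs from last run's compression ratio.
prev_usage_kb=512000    # yesterday's user usage:  500 MB
prev_backup_kb=204800   # yesterday's backup size: 200 MB
new_usage_kb=563200     # today's usage:           550 MB

# compression ratio as a percentage, to stay in integer arithmetic: 200/500 = 40%
ratio_pct=$(( prev_backup_kb * 100 / prev_usage_kb ))

# predicted backup size, plus 10% safety factor, x2 for staged copy + archive
est_kb=$(( new_usage_kb * ratio_pct / 100 ))
est_safe_kb=$(( est_kb * 110 / 100 ))
needed_kb=$(( est_safe_kb * 2 ))
echo "ratio=${ratio_pct}% est=${est_kb}KB needed=${needed_kb}KB"
# prints: ratio=40% est=225280KB needed=495616KB
```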

Another idea is to compress each backup straight into the final “backup” archive and delete the temp files immediately. I have no idea if that is faster, and it's probably a lot of work.