I was wondering if HestiaCP has a built-in feature that allows setting different backup locations for individual users. For example, I’d like to configure each user’s backups to go to a separate remote destination (e.g., different FTP servers, S3 buckets, or rclone remotes).
As far as I can see, it’s not possible to set different backup locations for individual users directly through the HestiaCP interface. However, I’d like to discuss possible ways to achieve this, perhaps through configuration changes or custom scripts.
Has anyone tried something similar or found a good workaround?
It’s a shame I can’t distribute backups between local and remote…
I tried sending old backups to a shared folder and leaving a symbolic link in /backup, but Hestia is laughing at me.
I’ve thought of an approach that could be useful when the problem is physical space (as in my case):
delete /backup, mount a remote directory at /backup with sshfs,
and perform backups as if they were all local (see the sketch at the end of this post).
I have no idea if Hestia has any system to detect this. I’m tired of trying things with backups.
If I ever find the time, I’ll restart my battle.
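For anyone who wants to try that sshfs route before I get back to it, here is a rough sketch (backup-host, hestia-backup, and /srv/hestia-backups are placeholder names, and as I said, I don’t know whether Hestia’s backup scripts will tolerate a FUSE mount):

# Replace /backup with an sshfs mount; host, user, and remote path are placeholders
apt install sshfs                  # Debian/Ubuntu
mv /backup /backup.local           # keep the old contents, just in case
mkdir /backup
sshfs hestia-backup@backup-host:/srv/hestia-backups /backup \
    -o allow_other,reconnect,ServerAliveInterval=15
# To make it survive reboots, use an fstab entry instead:
# hestia-backup@backup-host:/srv/hestia-backups /backup fuse.sshfs allow_other,reconnect,_netdev 0 0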
It’s been quite a long time since this question was asked, but here is my workaround solution:
Since HestiaCP stores local backups in /backup with a consistent naming convention (username.date.time.tar), we can use a custom bash script to “watch” this folder. Once a backup file is generated and complete (checked via its modification time), we can use rclone to move each user’s backups to a specific remote destination.
This sidesteps the global Hestia backup configuration and gives you granular per-user control.
The Bash Script
Create a script at /usr/local/bin/custom_backup_router.sh. It checks for files in /backup that haven’t been modified in the last 5 minutes (to ensure the dump is finished) and routes them according to the filename prefix.
#!/bin/bash
# HestiaCP default backup location
WATCH_DIR="/backup"
LOG_FILE="/var/log/hestia_custom_backup.log"
LOCK_FILE="/tmp/backup_router.lock"

# Avoid concurrent execution
if [ -e "$LOCK_FILE" ]; then
    # Optional: check if the lock is stale (e.g., > 1 hour) and remove it
    exit 0
fi
touch "$LOCK_FILE"
# Remove the lock even if the script is interrupted
trap 'rm -f "$LOCK_FILE"' EXIT

# Logging helper
log_action() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}

# Find regular files (-type f) in /backup that haven't been modified
# for at least 5 minutes (-mmin +5), so the dump is finished.
find "$WATCH_DIR" -maxdepth 1 -type f -mmin +5 -name "*.*" | while read -r file; do
    filename=$(basename "$file")

    # Extract the username (Hestia format is username.YYYY-MM-DD...):
    # take the first part before the first dot
    user_prefix=$(echo "$filename" | cut -d'.' -f1)

    # Logic router
    case "$user_prefix" in
        "admin")
            # Example: move admin backups to a secure S3 bucket
            log_action "Processing admin backup: $filename"
            rclone move "$file" secure_s3:admin-backups/ --log-file="$LOG_FILE"
            ;;
        "client_one")
            # Example: move a specific client to their own Dropbox/FTP
            log_action "Processing client_one backup: $filename"
            rclone move "$file" client_dropbox:backups/ --log-file="$LOG_FILE"
            ;;
        "client_two")
            # Example: move another client to Google Drive
            log_action "Processing client_two backup: $filename"
            rclone move "$file" gdrive_remote:backups/ --log-file="$LOG_FILE"
            ;;
        *)
            # Unmatched users stay in the local /backup directory
            ;;
    esac
done
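Make the script executable, then run it once by hand and watch the log to check the routing before automating anything:

chmod +x /usr/local/bin/custom_backup_router.sh
/usr/local/bin/custom_backup_router.sh
tail -n 20 /var/log/hestia_custom_backup.log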
Automation
You need to run this script frequently so it catches backups shortly after Hestia creates them. Add this line to your root crontab (sudo crontab -e) or via the HestiaCP cron section:
# Run every 10 minutes
*/10 * * * * /usr/local/bin/custom_backup_router.sh
Visual Workflow
Here is a diagram illustrating how this logic sits on top of the standard HestiaCP flow:
+---------------------+
|   HestiaCP Server   |
|     (Cron Job)      |
+----------+----------+
           |
           | 1. Generates local backup
           v
+---------------------+
|   Local Directory   |
|       /backup       | <--- [ file: userX.2025.tar ]
+----------+----------+
           |
           | 2. Custom script detects completed file
           |    (Logic: if filename starts with...)
           v
+----------+------------------+----------------------+
|                             |                      |
| if admin                    | if client_one        | if client_two
v                             v                      v
[ Rclone MOVE ]          [ Rclone MOVE ]        [ Rclone MOVE ]
(Uploads & Deletes)      (Uploads & Deletes)    (Uploads & Deletes)
|                             |                      |
v                             v                      v
( AWS S3 Bucket )        ( Dropbox / FTP )      ( Google Drive )
Symbolic links or drives mounted directly to /backup often fail in Hestia because of permission strictness and how v-backup-user calculates disk space. Using rclone move is cleaner because Hestia creates the file locally (fast), while this script offloads it to the cloud (slow) and cleans up the local disk automatically, keeping your server storage usage low.
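If you want to sanity-check a single transfer before trusting the cron job, rclone’s --dry-run flag reports what would be uploaded and deleted without touching anything (remote name as configured above; the backup filename is just an example):

rclone move /backup/admin.2025-11-20_05-00-00.tar secure_s3:admin-backups/ \
    --dry-run -v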