yunli
February 13, 2026, 4:58pm
root@debian:~# systemctl status hestia
× hestia.service - LSB: starts the hestia control panel
Loaded: loaded (/etc/init.d/hestia; generated)
Active: failed (Result: exit-code) since Sat 2026-02-14 00:54:08 CST; 45s ago
Docs: man:systemd-sysv-generator(8)
Process: 628 ExecStart=/etc/init.d/hestia start (code=exited, status=1/FAILURE)
CPU: 142ms
Feb 14 00:54:07 debian.xxx systemd[1]: Starting hestia.service - LSB: starts the hestia control panel...
Feb 14 00:54:08 debian.xxx hestia[628]: Starting hestia-nginx: hestia-nginx
Feb 14 00:54:08 debian.xxx hestia[672]: nginx: [alert] could not open error log file: open() "/usr/local/hestia/nginx/logs/error.log" failed (30: Read-onl>
Feb 14 00:54:08 debian.xxx hestia[672]: 2026/02/14 00:54:08 [warn] 672#0: "ssl_stapling" ignored, issuer certificate not found for certificate "/usr/local>
Feb 14 00:54:08 debian.xxx hestia[672]: 2026/02/14 00:54:08 [emerg] 672#0: open() "/var/log/hestia/nginx-error.log" failed (30: Read-only file system)
Feb 14 00:54:08 debian.xxx systemd[1]: hestia.service: Control process exited, code=exited, status=1/FAILURE
Feb 14 00:54:08 debian.xxx systemd[1]: hestia.service: Failed with result 'exit-code'.
Feb 14 00:54:08 debian.xxx systemd[1]: Failed to start hestia.service - LSB: starts the hestia control panel.
root@debian:~# cat /usr/local/hestia/nginx/logs/error.log
2026/02/14 00:17:36 [warn] 57569#0: "ssl_stapling" ignored, issuer certificate not found for certificate "/usr/local/hestia/ssl/certificate.crt"
2026/02/14 00:52:33 [warn] 1601#0: "ssl_stapling" ignored, issuer certificate not found for certificate "/usr/local/hestia/ssl/certificate.crt"
root@debian:~# cat /proc/mounts | grep " / "
/dev/vda1 / xfs ro,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
root@debian:~# systemctl start hestia
Job for hestia.service failed because the control process exited with error code.
See "systemctl status hestia.service" and "journalctl -xeu hestia.service" for details.
The server is Debian 12
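For anyone hitting the same symptom, a quick sketch to check whether / is currently mounted read-only, using findmnt from util-linux (installed by default on Debian):

```shell
#!/bin/sh
# Print the first mount option of the root filesystem: "rw" or "ro".
# This is the same information as grepping /proc/mounts, but findmnt
# parses the mount table for you.
state=$(findmnt -no OPTIONS / | cut -d, -f1)
echo "root filesystem is mounted $state"
```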
sahsanu
February 13, 2026, 5:32pm
yunli:
root@debian:~# cat /proc/mounts | grep " / "
/dev/vda1 / xfs ro,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
Your root file system is mounted read-only, so nothing can write to it. Maybe the FS is corrupted and the system mounted it read-only…
yunli
March 1, 2026, 10:35am
I found the solution to the error. It’s caused by the issue addressed in this pull request:
main ← sahsanu:add-support-for-ext4-native-quotas
opened 03:45AM - 30 Dec 25 UTC
Refactor system quota management and add explicit support for ext4 native quotas… while improving installation behavior.
- Source hestia.conf explicitly and quote $HESTIA/$BIN paths consistently
- Rework **v-add-sys-quota**:
- Detect /home mount device, fstab line and filesystem type more robustly
- Add support for ext4 native quotas
- Cleanly migrate from external quotas to ext4 native quotas when available
- Normalize and update fstab mount options for ext4 and xfs
- Improve xfs handling for both root and separate /home partitions
- Add safer remount logic and clearer logging for quota configuration
- Make reboot requirements explicit and adjust user-facing messages
- Rewrite **v-delete-sys-quota**:
- Add helpers to remove quota options from fstab and quota files safely
- Properly disable quotas before cleanup for ext4 and xfs
- Remove xfs root quota flags from GRUB when disabling quotas
- Improve detection of /home mount and fstab entries
- Clean up cron jobs, forcequotacheck, and only purge quota package if installed
- Enhance installer UX by printing a message before configuring quotas
- Additionally, fix SSL key parsing for Debian/Ubuntu installers (unrelated to quotas):
- Use RSA key markers only for Debian 11
- Use BEGIN/END PRIVATE KEY markers for Debian 12+ and Ubuntu
Tested on:
- Ubuntu 22.04 and 24.04, Debian 11 and 12
- Each on six VM setups:
- xfs with /home on /
- xfs with /home on a separate /home partition
- ext4 (external quotas) with /home on /
- ext4 (external quotas) with /home on a separate /home partition
- ext4 (native quotas) with /home on /
- ext4 (native quotas) with /home on a separate /home partition
**IMPORTANT**: these changes do not enable native quotas on ext4 partitions by themselves. The user must enable them manually, because it requires the filesystem to be unmounted and running `tune2fs -O quota /device`. For the root (/) partition, this must be done by booting the server using a rescue/live ISO (or similar) to perform the changes while the filesystem is not mounted.
Follow-up to #5048
Replaces #5123
Fixes #5145
The root cause is an error in HestiaCP’s disk quota script: v-add-sys-quota writes the following configuration to /etc/fstab. XFS does not accept those journaled-quota mount options, so the root filesystem ends up read-only at boot. This issue has already been fixed in the patch you provided.
This is the line in the original /etc/fstab file:
UUID=ef94bc71-6cfe-4f7b-a849-fd4b6cac584b / xfs defaults,usrquota,grpquota,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 0
This is the adjusted line in /etc/fstab:
UUID=ef94bc71-6cfe-4f7b-a849-fd4b6cac584b / xfs defaults,usrquota,grpquota 0 0
After editing, remount the root filesystem so the system goes back to read-write mode:
sudo mount -o remount,rw /dev/vda1 /
Then just restart the system and it will be fine:
reboot
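The manual edit above can also be scripted. A minimal sketch (the `fix_fstab` helper is my own naming, not part of HestiaCP) that strips only the journaled-quota options XFS rejects, leaving the rest of the line intact:

```shell
#!/bin/sh
# Remove the journaled-quota options (usrjquota=, grpjquota=, jqfmt=vfsv0)
# from an fstab file in place; plain usrquota,grpquota are left untouched.
fix_fstab() {
    # $1: path to the fstab file to edit
    sed -i -E 's/,(usrjquota=aquota\.user|grpjquota=aquota\.group|jqfmt=vfsv0)//g' "$1"
}

# Typical use on the live system, followed by the remount and reboot:
#   fix_fstab /etc/fstab
#   mount -o remount,rw /
#   reboot
```

Run it against a copy of /etc/fstab first if you want to verify the result before touching the real file.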
Could you provide more details?
yunli
March 1, 2026, 12:23pm
I added detailed instructions. Is this suitable?
In addition, I submitted a fix request for the v-list-sys-services script, but it seems it hasn’t been merged into the main branch. Can you help me get it approved?
main ← hestiacn:patch-4
opened 08:48PM - 13 Feb 26 UTC
## Description
The `v-list-sys-services` script currently uses `ps -eo pid,pcpu,size` to collect memory statistics, which reports the virtual memory size (VSZ) rather than physical memory usage (RSS). This leads to significantly inflated memory values in the service list output.
## Root Cause
The `size` field in ps output represents the total virtual memory allocated to a process, which includes shared libraries, mapped files, and swapped-out pages. This value is typically much larger than the actual physical RAM consumption.
## Changes
Replace the ps column selector from `size` to `rss` to report Resident Set Size (physical memory):
```diff
- ps -eo pid,pcpu,size > $tmp_file
+ ps -eo pid,pcpu,rss > $tmp_file
```
## Impact
Before this change, service memory usage was reported as:
- PostgreSQL: 1056 MB (actual: ~130 MB)
- PHP-FPM: 456 MB (actual: ~50 MB)
- MariaDB: 663 MB (actual: ~180 MB)
After the fix, values now accurately reflect physical memory consumption:
| Service | Before (VSZ) | After (RSS) | Actual |
|---------|--------------|-------------|--------|
| postgresql | 1056 MB | 132 MB | ✓ |
| php8.3-fpm | 456 MB | 51 MB | ✓ |
| mariadb | 663 MB | 177 MB | ✓ |
| nginx | 29 MB | 57 MB | ✓ |
| clamav-daemon | 991 MB | 980 MB | ✓ |
## Testing
- ✅ Verified on Debian 12
- ✅ Verified on Ubuntu 22.04
- ✅ Cross-referenced with `ps aux` and `htop` output
- ✅ Confirmed all services show correct physical memory values
## Backward Compatibility
This change is fully backward compatible as it only affects displayed values. No configuration changes required.
## Additional Context
The RSS field represents the portion of a process's memory held in physical RAM, which is what administrators expect when monitoring system resources. This aligns with how standard monitoring tools (htop, free, top) report memory usage.
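The difference the patch addresses is easy to see directly with ps. A quick sketch using the procps `size` and `rss` output keywords (both reported in kilobytes):

```shell
#!/bin/sh
# Show the five largest processes by resident memory, with both the
# 'size' (virtual/swap-oriented) and 'rss' (physical) columns side by
# side, so the inflation described in the PR is visible directly.
ps -eo pid,comm,size,rss --sort=-rss | head -n 6
```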
Ok, that makes sense and that’s the reason I’ve modified the quota scripts.
Actually it is not fine:
1.- usrquota and grpquota are not the right mount options for an XFS file system.
2.- Keep in mind that right now you are not using quotas. If you execute repquota -a you shouldn’t get any output, which means quota is not enabled. To enable XFS quota on / you must add the rootflags to GRUB.
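For reference, the GRUB change looks roughly like the fragment below. The flag names `uquota,gquota` are my assumption based on xfs(5), not something confirmed in this thread, so check them against your setup (the updated quota scripts take care of this for you):

```shell
# /etc/default/grub -- illustrative fragment only.
# uquota/gquota are the XFS native user/group quota rootflags per xfs(5).
GRUB_CMDLINE_LINUX="rootflags=uquota,gquota"
# Apply with: update-grub && reboot
```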
To fix it use the new quota scripts:
sudo -i
cd /usr/local/hestia/bin/
mv v-add-sys-quota v-add-sys-quota.ori
wget https://raw.githubusercontent.com/sahsanu/hestiacp/34282438f64147962c37b0dcaeabde1b6b619111/bin/v-add-sys-quota
chmod +x v-add-sys-quota
mv v-delete-sys-quota v-delete-sys-quota.ori
wget https://raw.githubusercontent.com/sahsanu/hestiacp/34282438f64147962c37b0dcaeabde1b6b619111/bin/v-delete-sys-quota
chmod +x v-delete-sys-quota
Once done:
v-delete-sys-quota
v-add-sys-quota
And after reboot you should have quota enabled:
repquota -a
I can’t help here; take a look at all my open PRs.
yunli
March 1, 2026, 3:04pm
After I restarted and went into the web panel to restart all the services, everything was back to normal. Thank you for the thorough follow-up.