Project to run Hestia in Docker

Hello desp,

I disagree about the complexity part. Docker can be a little difficult to understand at first, but once you get it, it is very easy to use. Docker gives you the security of knowing exactly what you have running, with minimal surprises when putting it into production. If the initial problem is knowing what is happening, you can try software like Portainer, which helps you configure, edit, update, and view logs… As for backups, if the container you are running needs to keep its data, you can back up only the data in the volume and ignore everything else in the container; if your container just processes something and sends it somewhere, you can simply discard the container after it finishes the task.
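As a concrete sketch of that backup pattern: because the data lives in a named volume, a throwaway helper container can archive just the volume and ignore the rest of the container. The volume name and paths below are placeholders, not from any actual Hestia setup:

```shell
# Back up only the volume contents using a disposable helper container.
# "hestia_data" and the backup location are hypothetical examples.
docker run --rm \
  -v hestia_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/hestia_data.tar.gz -C /data .

# Restoring works the same way in reverse:
docker run --rm \
  -v hestia_data:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/hestia_data.tar.gz -C /data
```

The container itself is never part of the backup; it can always be recreated from the image.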

In the case of Hestia, imagine that you are going to run an update routine on your server. The update may require you to stop some services, and if it fails, it may take a while to solve the problem. With the services running in containers, all of those steps have already been done in the image: in a few seconds you can swap the production image and update the entire server, and if any problems went unnoticed, you just recreate the container from the previous image and the problems go away with it. You can prepare the image with everything you need (applications, settings, routines…) and when you deploy that image, your machines will all run exactly the same, whether you have 1 or 100 servers. If you use a tool to orchestrate the containers, you won't even need to run any commands for that.
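That update-and-rollback flow looks roughly like this on the CLI; the image name, tags, and container name here are made up for illustration:

```shell
# Update: pull the new image and recreate the container on top of it.
# "myrepo/hestia" and the tags are hypothetical.
docker pull myrepo/hestia:1.6
docker stop hestia && docker rm hestia
docker run -d --name hestia -v hestia_data:/hestia/data myrepo/hestia:1.6

# Rollback: recreate the container from the previous tag.
# The data in the volume is untouched either way.
docker stop hestia && docker rm hestia
docker run -d --name hestia -v hestia_data:/hestia/data myrepo/hestia:1.5
```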

There are further advantages. For example, if your machine has a dedicated disk to store the container data and you need to move your server to another machine, you can simply detach the volume from one machine and attach it to the other; when you run the container, everything will be there just as before. This also makes snapshots more practical: you don't have to worry so much about the machine running the container, as it can be discarded.
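A sketch of that detach/reattach idea, assuming the data sits on a dedicated disk that is bind-mounted into the container (all device names and paths are hypothetical):

```shell
# On the old machine: stop the container and release the data disk.
docker stop hestia
umount /mnt/hestia-data

# On the new machine: attach the disk, mount it, and start an identical
# container pointing at the same path; everything is there as before.
mount /dev/sdb1 /mnt/hestia-data
docker run -d --name hestia \
  -v /mnt/hestia-data:/hestia/data \
  myrepo/hestia:1.6
```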

Initially, what led me to put Hestia in a container was the ease of replicating the same configuration on several servers, the reduced update time, and the security of knowing exactly what is running. And that was just a very superficial explanation; there are many other advantages to running a service in a container.


If you run Docker directly on a bare-metal server, I would prefer to use an LXC container with Proxmox instead. You get the same advantages.

Since you don't install updates via apt, it might cause issues: user configs and user settings are not rebuilt over a longer period of time, which could cause problems in the future.

Also, by disabling apt updates you are disabling all possible updates from Hestia, including any security patches, which you will miss. Unless you follow HestiaCP actively, there is no way to get notified when there are updates.

Of course Docker has advantages, even for us, as it would allow us to “start” a new Docker “instance” → install the Hestia update → run function tests, then trash the server and start over again on the next image.


I think often the motivation is that docker plays quite nice with things like kubernetes to reach a different kind of scalability for instance. also in terms of CI/CD it’s a whole different story to automate deployment of updates and such.

so docker for sure has its place and meaning. however, I agree, that does not mean everything has to go in it. and probably a lot of people trying to put things into docker do not exactly understand the concept itself or why they are doing it after all :wink:

that said, I also agree that I don’t see a big benefit of putting something like Hestia into docker. while the arguments about updating and similarity that @jhmaverick already wrote for sure come into play, I heavily doubt that there are many people managing hundreds of hestia servers :wink:

you need to attach/mount external folders for data storage and handle port forwarding, or let a proxy live outside, which again adds unneeded complexity and probably quickly outweighs any benefits on the deployment part.
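To illustrate the forwarding surface being referred to: a full hosting panel needs a lot of host ports published. The list below is an assumption for illustration, not Hestia's documented requirements (though 8083 is Hestia's default panel port):

```shell
# Sketch of the port/volume plumbing a panel container would need.
# Image name, volume, and the exact port list are hypothetical.
docker run -d --name hestia \
  -p 80:80 -p 443:443 \
  -p 25:25 -p 587:587 -p 993:993 \
  -p 53:53/tcp -p 53:53/udp \
  -p 8083:8083 \
  -v hestia_data:/hestia/data \
  myrepo/hestia:1.6
```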

apart from that it's probably still just another container, so there is no difference to lxc or the likes unless you start ripping it apart a bit more, so you can use separate containers for instance for databases, redis caching and the likes.

PS: interesting project nevertheless.


If HestiaCP allowed managing services in a distributed environment, with one panel to control:

  • DNS on any machine
  • nginx on any machine
  • email on any machine

then it would make a lot more sense to dockerize.

User settings depend a lot on how they are handled in the image update. The system only applies the essential volume locations, like “hestia/data”, “hestia/conf” and “hestia/ssl”, plus “/home”, “/etc/exim4/domains”, “/etc/nginx/conf.d/domains”, the “pool.d” directories of the PHP versions, and a few other places; all the rest of the Hestia files and applications are in the image and will vary according to the installed version. Whenever I change the version of Hestia in the image, I check the “upgrades” files and take only the parts that affect the data in the volumes, so that they run when the container image is changed.
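For illustration, the volume layout described above would translate to something like the following. The container-side paths are guesses based on the post (Hestia normally installs under /usr/local/hestia); this is not a published compose file:

```shell
# Sketch: only the state that varies per server lives in volumes;
# everything else comes from the image. Names and paths are assumptions.
docker run -d --name hestia \
  -v hestia_data:/usr/local/hestia/data \
  -v hestia_conf:/usr/local/hestia/conf \
  -v hestia_ssl:/usr/local/hestia/ssl \
  -v hestia_home:/home \
  -v hestia_exim:/etc/exim4/domains \
  -v hestia_nginx:/etc/nginx/conf.d/domains \
  myrepo/hestia:1.6
```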

The most extreme use I experienced with data updates in volumes was a few months ago, when I migrated from Vesta to Hestia 1.5.x. To update the servers I just had to write a script to change some of the data in the volumes and run a rebuild of the configurations; the container did the rest of the work, and even with all the differences that exist between Vesta and Hestia nowadays, it didn't present problems.

The Hestia update via APT had to be disabled to prevent the update from overwriting the changes made for Docker; in addition, the update could modify the volume files, which, when recreating the container, would then be at a different version than the rest of the image. The issue about the update notice can be resolved in a similar way to the Docker versions of Nextcloud and others, which display a notification telling you about the update.
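A minimal sketch of such an update-notification check. Only the version-comparison helper is shown, since where the “latest” string comes from (a release feed, a registry tag, etc.) depends on the setup; the function name and version values are hypothetical:

```shell
# Return success (0) when $2 is strictly newer than $1, using sort -V
# for semantic-version ordering. Hypothetical helper for an update check.
is_newer() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$2" ]
}

# Example: compare the installed version against a fetched "latest".
installed="1.5.12"
latest="1.6.0"
if is_newer "$installed" "$latest"; then
  echo "Update available: $installed -> $latest"
fi
```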


This is just the first step. Hestia already runs in Docker and keeps all the configurations that vary from one server to another. MariaDB was the first service to be moved out of the container, and the next steps are to do the same for other services like PostgreSQL, PHP, Bind… while keeping the main container controlling all of them. For example, once PHP is moved out, it will be possible to use load-balanced templates, among other things.
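A rough sketch of that split, with the database running outside the panel container on a shared Docker network. The image/network names and the DB_HOST variable are assumptions for illustration, not options the project actually exposes:

```shell
# Put both containers on one user-defined network so they resolve
# each other by name.
docker network create hestia-net

# Database lives in its own container with its own volume.
docker run -d --name mariadb --network hestia-net \
  -v mariadb_data:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=change-me \
  mariadb:10.11

# The panel container points at the external DB by hostname.
# DB_HOST is a hypothetical setting, not a real Hestia variable.
docker run -d --name hestia --network hestia-net \
  -e DB_HOST=mariadb \
  myrepo/hestia:1.6
```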


I have been thinking about this feature for a while and I think that the idea is to dockerize the services in the panel so we can distribute loads among machines with one control panel.


Wouldn't it be more complete to use a system container?
I recommend using LXD.

An application container is usually used for a single application, not as a complete system; for a web hosting system, for example, you need a system container.

Systems container vs Application container


Is this project no longer alive? I see that @jhmaverick has archived it and there is no update for Hestia 1.6.X :cry:

It’s a fantastic idea for localhost developers to utilize, especially with the multi-PHP options.

What would be needed to get it working with Hestia 1.6.X?

For development I suggest using Multipass:

Unless you want to test on Debian…

I've been able to create a simplified Docker image, simulating systemd with a Python script a fellow wrote; that seems to satisfy it, and I'm now working through some other bugs. Fingers crossed; we may have a solution (I'll naturally contribute). :wink:

Update: the install crashed very close to the end. But I've been able to log in and create users; right now it's erroring with Error: ERROR: Restart of php8.2-fpm failed. (probably systemctl related). Trying to figure out which script is having trouble; possibly a false error.

I still want to have a Docker image for automated tests, so please share…


Here it is! Works well for a good, basic, local web development environment. It's sacrilegious in that it's been hacked to use systemctl and runs multiple services under one container, as opposed to the “Docker way” of making each service its own container; but the target for this is testing/development services on your local PC, not production. You can find the repo for the Dockerfile at:

This is a ‘lite’ build, sans fail2ban, iptables, ClamAV, etc., as a lot of those things don't quite make sense for a localhost dev server. There is an option for the full-bore build, but I'm not sure what all works and what doesn't. You can find the ‘lite’ Docker image at:

This is one annoying issue I can’t figure out: phpPgAdmin is broken. It displays:

Configuration error: Copy conf/ to conf/ and edit appropriately.

If anyone has any ideas where to look or what might be the cause, that'd be great. phpMyAdmin seems to be working well; both the MariaDB and PostgreSQL services appear to be in memory. I made this the Docker repo's first issue, “issue #1”.


phpPgAdmin is broken anyway; we probably need to look into replacing it…

We only support 5.6 and higher :slight_smile: Submitted a small PR

I noticed:
debconf: delaying package configuration, since apt-utils is not installed

[ * ] Configuring MariaDB database server…

ERROR 1045 (28000): Access denied for user ‘root’@‘localhost’ (using password: NO)


Despite the error message, MariaDB appears to install fine and I can create databases for accounts without issue. Accessing them in phpMyAdmin works beautifully. Although I suspect this issue is legit in that the installer could not set a root pw.

:rofl: When a bug becomes a feature:

Turns out that for localhost development, no root pw on MySQL is the default for most devs; MAMP, XAMPP, etc. are all like this. I can login to MySQL’s CLI (only when root inside Docker). At the same time, this doesn’t allow root access via phpMyAdmin or from the CLI if one isn’t the Linux root user, which is important if one decides to proxy their personal computer for a public preview.

As for PostgreSQL, I'm not sure what's going on there. I do know that phpPgAdmin works in my VM and on Oracle-based hosting providers. I'll have to investigate further why Docker is a problem; any hints greatly appreciated. I'm debating dropping it; I know Gitea and other popular projects use it, but they also have MySQL listed as an alternative.

I was already afraid of that :cry:

It would be nice if it worked…

Well, the Docker image is a kludge, an ill-assorted collection of parts :rofl: Without running the patch (see the Dockerfile), an upgrade has no chance of succeeding. This is because Hestia's install script causes /usr/bin/systemctl to be overwritten (via apt-get upgrade, etc.) with the real binary from the base image, and Docker is not a fan of systemctl. The “patch” I implemented overwrites systemctl again with the hack I sorted out to force it to work.
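For what it's worth, Debian has a standard mechanism for exactly this “apt keeps restoring the file” problem. This is offered as a possible alternative to re-applying the patch, not what the Dockerfile currently does, and the shim path below is hypothetical:

```shell
# Register a local diversion so package upgrades write the real binary
# to /usr/bin/systemctl.distrib instead of clobbering the shim.
dpkg-divert --local --rename --add /usr/bin/systemctl

# Install the Python systemctl shim in its place (path is hypothetical).
cp /opt/systemctl-shim.py /usr/bin/systemctl
chmod +x /usr/bin/systemctl
```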

Speaking of which, because this is for a local/personal-computer PHP developer stack, is there a way to inhibit upgrades? Perhaps via an exit call or something in /etc/hestiacp/hooks/?
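One possible approach, offered as an assumption rather than a verified answer for Hestia's own updater: hold the packages at the apt level so apt-get upgrade skips them entirely:

```shell
# Pin the Hestia packages so apt-get upgrade leaves them alone.
# Package names are assumptions; check `dpkg -l | grep hestia` first.
apt-mark hold hestia hestia-nginx hestia-php

# To resume updates later:
apt-mark unhold hestia hestia-nginx hestia-php
```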