Project to run Hestia in Docker

I have a version of Hestia running in Docker and was wondering if anyone would be interested in helping with the project. It works fairly well in the container, but Hestia version updates sometimes break the build. In addition, there are a few more things to be done to let the services scale out.
I think it would be a very good thing for the community if anyone else were willing to help.

The image installs most services, with the exception of MariaDB, which runs in a separate container to speed up startup; the Hestia container forwards connections to it through a proxy.
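As a rough sketch, that two-container layout could be expressed in a Compose file like the one below. The image names, password, and ports are illustrative placeholders, not necessarily what the repository actually uses:

```yaml
# Sketch of the two-container layout: Hestia in one container, MariaDB in
# another, with the Hestia container proxying database connections to it.
# Image names, the password, and ports below are placeholders.
services:
  mariadb:
    image: mariadb:10.6
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - mariadb_data:/var/lib/mysql

  hestia:
    image: example/hestiacp:latest   # placeholder; see the repository for the real image
    depends_on:
      - mariadb
    ports:
      - "80:80"
      - "443:443"
      - "8083:8083"   # Hestia panel
    volumes:
      - hestia_conf:/usr/local/hestia/conf
      - hestia_data:/usr/local/hestia/data

volumes:
  mariadb_data:
  hestia_conf:
  hestia_data:
```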

Some advantages of using docker:

  • You can test the entire server before putting it into production;
  • Only the essential parts are kept in volume (the fewer configuration files in volume, the lower the chances of breaking the container in an update);
  • Downtime of about 15 seconds on updates;


Things still to be done:

  • Make the services scale out.
  • Create an installer for Docker so that no more rewrites are required. Currently, a script builds the installer from the Debian version as a base, applying `sed` edits to add or remove parts of the code so it can run in a container.
  • Improve the way communication with MariaDB is handled, so that an external service such as RDS can be used in place of the container.
  • Add settings to run Postgres externally or in a separate container, just like MariaDB.
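One possible shape for that external-database setting is sketched below. This is purely hypothetical: the variable names are invented for illustration, and the item above describes this as future work, not an existing feature of the project.

```yaml
# Hypothetical sketch of pointing the panel at an external MariaDB
# endpoint such as RDS instead of the bundled container.
# These environment variable names are NOT real project settings.
services:
  hestia:
    image: example/hestiacp:latest   # placeholder image name
    environment:
      DB_HOST: mydb.example.us-east-1.rds.amazonaws.com   # external endpoint
      DB_PORT: "3306"
      DB_ROOT_USER: admin
```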

Here is the project repository: GitHub - jhmaverick/hestiacp-docker: Dockerized Hestia Control Panel


The beta branch is temporary, so it will be removed after 1.6.0 has been released… Be careful with it…


The version that is using it is also temporary; after it is finished, I intend to use main as the beta, or some other tag, as a preview. The image also removes the parts that download updates, so an image built with the current beta version stays as it is until the container is recreated from another image.


Can I ask a provocative question? Don’t be offended by my question. I do NOT mean to offend you.
But why wrap everything in Docker? For what? Why add that extra layer? Why make things extremely complex? Why does everyone try to wrap every piece of software in Docker, with the result that working with the software becomes extremely hard?

All my colleagues, for example, are trying to wrap an emulator of some server software that has been working perfectly fine on a VPS for decades, without any Docker. They split the emulator logic, the MySQL server, and the other required components into different containers.

Result: an emulator that is impossible to trace, debug, back up, move, configure, edit, or fix in production without consequences.

Why? Because of the extreme level of difficulty, the layer that Docker adds.
And for the last year, instead of working on the emulator code, we have been working on bugs that are hard to trace and fix because of Docker.

Why is everyone trying to wrap things in Docker? Just for what? No, seriously.
For what? HestiaCP, in my opinion, is NOT a microservice kind of thing…

Please don't take me wrong. I just want to understand. I am really trying my best to see the reason: "for what do we need to wrap everything behind Docker?"


Hello desp,

I disagree about the complexity part. Docker can be a little difficult to understand at first, but once you do, it is very easy to use. Docker gives you the security of knowing exactly what you have running, with minimal surprises when you put it into production. If the initial problem is knowing what is happening, you can try software like Portainer, which helps you configure, edit, update, and view logs. As for backups: if the container needs to keep data, you can back up only the data in the volume and ignore everything else in the container; if the container just processes something and sends it somewhere, you can simply discard it after it finishes the task.

In the case of Hestia, imagine that you are going to run an update routine on your server. The update may require stopping some service, and if it fails, it may take a while to solve the problem. With the services running in containers, all of that work has already been done: in a few seconds you can swap the production image and update the entire server, and if any problem slipped through unnoticed, you just recreate the container from the previous image and the problems go away with it. You can prepare the image with everything you need (applications, settings, routines…), and when you install that image on your machines, they will all run exactly the same, whether you have 1 or 100 servers. If you use a tool to orchestrate the containers, you won't even need to run any commands for that.
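That update-and-rollback flow can be sketched by pinning the image tag in a Compose file. The image name and tags here are placeholders:

```yaml
# Pinning the image tag turns an update into a one-line change followed by
# `docker compose up -d`; rolling back means restoring the previous tag and
# recreating the container, since the data lives in volumes either way.
# Image name and tags are placeholders.
services:
  hestia:
    image: example/hestiacp:1.6.0   # roll back by changing this to the previous tag
    volumes:
      - hestia_data:/usr/local/hestia/data   # data survives container recreation

volumes:
  hestia_data:
```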

There are some more advantages. For example, if your machine has a dedicated disk to store the container data and you need to move your server to another machine, you can simply detach the volume from one machine and attach it to the other; when you run the container, everything will be there the same way. This also makes snapshots more practical: you don't have to worry so much about the machine that is running the container, since it can be discarded.

Initially, what led me to put Hestia in a container was the ease of replicating the same configuration on several servers, the update time, and the security of knowing everything that is running. And that was only a very superficial explanation; there are many other advantages to running a service in a container.


If you run Docker directly on a bare-metal server, I would prefer to use an LXC container with Proxmox instead; you get the same advantages.

Since you don't install updates over APT, user configs and user settings are not rebuilt over a longer period of time. In the future this may cause issues.

Also, by disabling APT updates you are disabling all possible updates from Hestia, and you will miss any security patches. Unless you follow HestiaCP actively, there is no way to get notified when there are updates.

Of course Docker has advantages, even for us, as it would allow us to "start" a new Docker "instance" → install a Hestia update → run function tests, then trash the server and start over again on the next image.


I think the motivation is often that Docker plays quite nicely with things like Kubernetes, to reach a different kind of scalability, for instance. Also, in terms of CI/CD, it's a whole different story to automate deployment of updates and the like.

So Docker certainly has its place and meaning. However, I agree that this does not mean everything has to go into it, and probably a lot of people trying to put things into Docker do not exactly understand the concept itself, or why they are doing it after all :wink:

That said, I also agree that I don't see a big benefit in putting something like Hestia into Docker. While the arguments about updating and uniformity that @jhmaverick already wrote certainly come into play, I heavily doubt there are many people managing hundreds of Hestia servers :wink:

You need to attach/mount external folders for data storage and handle port forwarding, or let a proxy live outside, which again adds unneeded complexity and probably quickly outweighs any benefits on the deployment side.

Apart from that, it's probably still just another container, so there is no difference from LXC or the like, unless you start ripping it apart a bit more so that you can use separate containers, for instance for databases, Redis caching, and so on.

PS: interesting project nevertheless.


If HestiaCP allowed managing services in a distributed environment, with one panel to control:

  • DNS on any machine
  • nginx on any machine
  • email on any machine

then it would make a lot more sense to dockerize.

How user settings fare depends a lot on how they are handled in the image update. The system only keeps the essential locations in volumes: "hestia/data", "hestia/conf", and "hestia/ssl", plus "/home", "/etc/exim4/domains", "/etc/nginx/conf.d/domains", the "pool.d" directories of the PHP versions, and a few other places. All the rest of the Hestia files and applications are in the image and will vary according to the installed version. Whenever I change the version of Hestia in the image, I check the "upgrades" files and take only the parts that affect the data in the volumes, so that they run when the container image is changed.
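As a sketch, those volume locations could appear in a Compose file like this. The host-side paths (left of each colon) are illustrative; the container-side paths follow the list above, except the PHP "pool.d" paths, which vary per installed version:

```yaml
# The volume locations described above, as Compose bind mounts.
# Host-side paths are placeholders chosen for illustration only.
services:
  hestia:
    volumes:
      - ./volumes/hestia/data:/usr/local/hestia/data
      - ./volumes/hestia/conf:/usr/local/hestia/conf
      - ./volumes/hestia/ssl:/usr/local/hestia/ssl
      - ./volumes/home:/home
      - ./volumes/exim4-domains:/etc/exim4/domains
      - ./volumes/nginx-domains:/etc/nginx/conf.d/domains
      # plus one mount per PHP version's pool.d, e.g. (version-dependent):
      - ./volumes/php8.1-pool.d:/etc/php/8.1/fpm/pool.d
```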

The most extreme case of data updates in volumes that I experienced was a few months ago, when I migrated from Vesta to Hestia 1.5.x. To update the servers, I only had to write a script to change some of the data in the volumes and run a rebuild of the configurations; the container did the rest of the work, and even with all the differences between Vesta and today's Hestia, it didn't present problems.

The Hestia update via APT had to be disabled to prevent an update from overwriting the changes made for Docker; in addition, an update could modify the volume files so that, when recreating the container, they would be on a different version from the rest of the image. The issue of the update notice can be resolved in a way similar to the Docker versions of Nextcloud, which show a notification telling you about the update.


This is just the first step. Hestia already runs in Docker and keeps all the configurations that vary from one server to another. MariaDB was the first service moved out of the container, and the next steps are to do the same for other services like Postgres, PHP, and BIND, keeping the main container controlling all of them; for example, once PHP is moved out, it will be possible to use load-balanced templates, among other things.


I have been thinking about this feature for a while, and I think the idea is to dockerize the services in the panel so we can distribute load among machines with one control panel.


Wouldn't it be more complete to use a system container?
I recommend using LXD.

Application containers are usually meant for single applications, not a complete system; for a web system, for example, you need a system container.

Systems container vs Application container
