Project to run Hestia in Docker

I have a version of Hestia running in Docker and was wondering if anyone would be interested in helping with the project. It works relatively well in the container, but Hestia version updates sometimes break the build. In addition, there are a few more things to be done to enable services to scale out.
I think if anyone else were willing to help, it would be a very good thing for the community.

The image installs most services, with the exception of MariaDB, which runs in a separate container to speed up startup; the Hestia container forwards connections to it through a proxy.
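In rough terms, that two-container layout could be sketched with Compose like this. This is only an illustration, not the project's actual configuration: the image name, port, and volume paths are assumptions (the volume locations follow the essential paths mentioned later in the thread, and 8083 is Hestia's default panel port).

```yaml
# Hypothetical sketch of the layout: Hestia in one container, MariaDB in another.
services:
  hestia:
    image: jhmaverick/hestiacp   # assumed image name
    ports:
      - "8083:8083"              # Hestia's default panel port
    volumes:
      # only the essential per-server state is kept in volumes
      - hestia_data:/usr/local/hestia/data
      - hestia_conf:/usr/local/hestia/conf
      - home:/home
    depends_on:
      - mariadb
  mariadb:
    image: mariadb:10.6
    volumes:
      - mysql_data:/var/lib/mysql

volumes:
  hestia_data:
  hestia_conf:
  home:
  mysql_data:
```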

Some advantages of using docker:

  • You can test the entire server before putting it into production;
  • Only the essential parts are kept in volume (the fewer configuration files in volume, the lower the chances of breaking the container in an update);
  • Downtime of about 15 seconds on updates;

Future plans:

  • Make the service scale out.
  • Create an installer for Docker so that no more rewrites are required. Currently the script builds the installer using the Debian version as a base and applying seds to add or remove parts of the code so it can run in a container.
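The sed-based approach described above can be sketched roughly like this. The file names and marker comments here are hypothetical, not the project's actual ones; it only illustrates the idea of deriving a container-friendly installer from the stock Debian one.

```shell
#!/bin/sh
# Sketch: derive a Docker-friendly installer from the stock Debian installer
# by deleting marked blocks with sed. Markers and file names are hypothetical.
set -e

# Stand-in for the upstream Debian installer
cat > hst-install-debian.sh <<'EOF'
echo "configuring services"
# BEGIN host-only
systemctl enable nginx
# END host-only
echo "done"
EOF

# Delete everything between the markers to produce the container variant
sed '/^# BEGIN host-only$/,/^# END host-only$/d' \
    hst-install-debian.sh > hst-install-docker.sh

cat hst-install-docker.sh
```

In the real script the seds also add code, not just remove it; the marker-range delete is just the simplest case to show.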
  • Improve the way communication with MariaDB is done to allow the use of an external service like RDS in place of the container.
  • Add settings to run postgres externally or in a separate container just like MariaDB.

Here is the project repository: GitHub - jhmaverick/hestiacp-docker: Dockerized Hestia Control Panel

5 Likes

Beta branch is temporary, so it will get removed after 1.6.0 has been released… Be careful with it…

2 Likes

The version that uses it is also temporary; after it is finished, I intend to use main as the beta, or some other tag, as a preview. The image also removes the parts that download updates, so an image built with the current beta version stays that way until the container is recreated with another image.

1 Like

Can I ask a provocative question? Don't be offended by it; I do NOT mean to offend you.
But why does everything need to be wrapped in Docker? For what? Why that extra layer? Why make things extremely complex? Why is everyone trying to wrap every piece of software in Docker, making working with that software extremely hard as a result?

All my colleagues, for example, are trying to wrap an emulator of some server software that has worked just fine on a VPS for decades, without any Docker. They split the emulator logic, the MySQL server, and the other required components into different containers.

The result: a working emulator that is impossible to trace, debug, back up, move, configure, edit, or fix in production without consequences.

Why? Because of the extreme level of difficulty in the layer that Docker adds.
And for the last year, instead of working on the emulator code, we have been working on bugs that are hard to trace and fix because of Docker.

Why is everyone trying to wrap things in Docker? Just for what? No, seriously:
for what? HestiaCP, in my opinion, is NOT a microservice kind of thing…

Please try to take me correctly. I just want to understand. I am really trying my best to see the reason: "for what do we need to wrap everything behind Docker?"

5 Likes

Hello desp,

I disagree about the complexity part. Docker can be a little difficult to understand at first, but once you understand it, it is very easy to use. Docker gives you the security of knowing exactly what you have running, with minimal surprises when putting it into production. If the problem at first is knowing what is happening, you can try software like Portainer, which helps you configure, edit, update, view logs…

About backups: if the container you are running needs to keep data, you can back up only the data in the volume and ignore everything else in the container. If your container just processes something and sends it somewhere, you can simply discard the container after it finishes the task.

In the case of Hestia, imagine that you are going to run an update routine on your server. The update may require stopping some services, and if it fails, it may take a while to solve the problem. With the services running in containers, all of that work has already been done in the image: in a few seconds you can switch the production image and update the entire server, and if any problem went unnoticed, you just recreate the container from the previous image and the problems go away with it. You can prepare the image with everything you need (applications, settings, routines…), and when you install that image on your machines, they will all run exactly the same, no matter whether you have 1 or 100 servers. If you use a tool to orchestrate the containers, you won't even need to run any commands for that.

There are more advantages. For example, if your machine has a dedicated disk to store the container data and you need to move your server to another machine, you can simply detach the volume from one machine and attach it to another; when you run the container, everything will be there exactly as before. This also makes snapshots more practical: you don't have to worry so much about the machine running the container, since it can be discarded.

Initially, what led me to put Hestia in a container was the ease of replicating the same configuration on several servers, the short update time, and the security of knowing exactly what is running. And that is just a very superficial explanation; there are many other advantages to running a service in a container.

2 Likes

If you run Docker directly on a bare metal server, I would prefer to use an LXC container with Proxmox instead, and you get the same advantages.

Since you don't install updates over apt, user configs and settings are not rebuilt over a longer period of time, which might cause issues in the future.

Also, by disabling apt updates you are disabling any possible updates from Hestia, so you will miss security patches. Unless you follow HestiaCP actively, there is no way to get notified when there are updates.

Of course, Docker has advantages even for us, as it would allow us to start a new Docker instance → install a Hestia update → run function tests, then trash the server and start over again with the next image.

3 Likes

I think often the motivation is that Docker plays quite nicely with things like Kubernetes, to reach a different kind of scalability, for instance. Also, in terms of CI/CD, it's a whole different story to automate deployment of updates and the like.

So Docker certainly has its place and meaning. However, I agree that this does not mean everything has to go into it, and probably a lot of people trying to put things into Docker do not exactly understand the concept itself, or why they are doing it at all :wink:

That said, I also agree that I don't see a big benefit in putting something like Hestia into Docker. While the arguments about updating and uniformity that @jhmaverick already wrote certainly come into play, I heavily doubt that there are many people managing hundreds of Hestia servers :wink:

You need to attach/mount external folders for data storage and handle port forwarding, or let a proxy live outside the container, which again adds unneeded complexity and probably quickly outweighs any benefits on the deployment side.

Apart from that, it's probably still just another container, so there is no difference from LXC or the like, unless you start splitting it apart a bit more so you can use separate containers for databases, Redis caching, and so on.

PS: interesting project nevertheless.

1 Like

If HestiaCP allowed managing services in a distributed environment, with one panel to control:

  • DNS on any machine
  • nginx on any machine
  • email on any machine
  • …

then it would make a lot more sense to dockerize.
2 Likes

User settings depend a lot on how they are handled in the image update. The system only keeps the essential locations in volumes, like "hestia/data", "hestia/conf" and "hestia/ssl", plus "/home", "/etc/exim4/domains", "/etc/nginx/conf.d/domains", the "pool.d" directories of the PHP versions, and a few other places; all the rest of the Hestia files and applications are in the image and vary according to the installed version. Whenever I change the version of Hestia in the image, I check the "upgrades" files and take only the parts that affect the data in the volumes, so that they run when the container image is changed.

The most extreme case of data updates in volumes that I experienced was a few months ago, when I migrated from Vesta to Hestia 1.5.x. To update the servers, I just had to write a script to change some of the data in the volumes and run a rebuild of the configurations; the container did the rest of the work, and even with all the differences between Vesta and today's Hestia, it didn't present problems.

Hestia updates via APT had to be disabled to prevent an update from overwriting the changes made for Docker; in addition, an update could modify the volume files so that, when recreating the container, they would be at a different version than the rest of the image. The issue of update notices can be resolved in a way similar to the Docker versions of Nextcloud or Rocket.Chat, which show a notification informing you about the update.
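A notice along those lines could be as simple as comparing version strings. A minimal sketch, with both versions hard-coded for illustration (a real check would read the installed version from the image and query the package feed for the latest one):

```shell
#!/bin/sh
# Sketch: print an update notice by comparing version strings.
# Both values are hard-coded here; a real implementation would fetch them.
installed="1.5.8"
latest="1.6.0"

# sort -V orders version strings numerically; if the newest of the two
# is not the installed one, an update is available.
newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)

if [ "$newest" != "$installed" ]; then
    echo "Update available: $installed -> $latest"
else
    echo "Hestia is up to date ($installed)"
fi
```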

1 Like

This is just the first step. Hestia already runs in Docker and keeps all the configurations that vary from one server to another. MariaDB was the first service to be moved out of the container, and the next steps are to do the same for other services like PostgreSQL, PHP, BIND… while keeping the main container controlling all of that. For example, once PHP is moved out, it will be possible to use load-balanced templates, among other things.

1 Like

I have been thinking about this feature for a while, and I think the idea is to dockerize the services in the panel so we can distribute load among machines with one control panel.

2 Likes

Wouldn't it be more appropriate to use a system container?
I recommend using LXD.

Application containers are usually meant for single applications, not for a complete system; a web system, for example, needs a system container.

Systems container vs Application container
https://linuxcontainers.org/lxd/introduction/

1 Like

Is this project no longer alive? I see that @jhmaverick has archived it and there is no update for Hestia 1.6.X :cry:

It's a fantastic idea for localhost developers to utilize, especially with the multi-PHP options.

What would be needed to get it working with Hestia 1.6.X?

For development, I suggest using Multipass:

Unless you want to test on Debian…

I've been able to create a simplified Docker setup, simulating systemd with a Python script a fellow wrote; that seems to satisfy it, and I am now working through some other bugs. Fingers crossed; we may have a solution (I'll naturally contribute). :wink:

Update: the install crashed very close to the end, but I've been able to log in and create users. Right now it is erroring with "Error: ERROR: Restart of php8.2-fpm failed." (probably systemctl related). Trying to figure out which script is having trouble; possibly a false error.

I still want to have a Docker image for automated tests, so please share…

1 Like

Here it is! It works well as a good, basic, local web development environment. It's sacrilegious in that it has been hacked to use systemctl and runs multiple services in one container, as opposed to the "Docker way" of making each service its own composition; but the target for this is testing/development on your local PC, not production. You can find the repo for the Dockerfile at:

This is a 'lite' build, sans fail2ban, iptables, ClamAV, etc., as a lot of those things don't quite make sense for a localhost dev server. There is an option for the full build, but I'm not sure what does and doesn't work. You can find the 'lite' Docker image at:

https://hub.docker.com/repository/docker/steveorevo/hestiacp_dockered

This is one annoying issue I can't figure out: phpPgAdmin is broken. It displays:

Configuration error: Copy conf/config.inc.php-dist to conf/config.inc.php and edit appropriately.

If anyone has any ideas where to look or what might be the cause, that'd be great. phpMyAdmin seems to be working well, and both the MariaDB and PostgreSQL services appear to be running. I made this the Docker repo's issue #1.

2 Likes

phpPgAdmin is broken anyway; we probably need to look into replacing it…

We only support 5.6 and higher :slight_smile: Submitted a small PR

I noticed:
debconf: delaying package configuration, since apt-utils is not installed

And
[ * ] Configuring MariaDB database server…

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

1 Like

Despite the error message, MariaDB appears to install fine and I can create databases for accounts without issue. Accessing them in phpMyAdmin works beautifully. Although I suspect this issue is legit in that the installer could not set a root pw.
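If the missing root password does turn out to be a real installer bug rather than a feature, it can presumably be corrected afterwards from the MariaDB CLI inside the container. A sketch using MariaDB 10.4+ syntax, where the password is a placeholder:

```sql
-- Run from the MariaDB CLI inside the container (mysql -u root).
-- 'change-me' is a placeholder; use a real password.
ALTER USER 'root'@'localhost' IDENTIFIED BY 'change-me';
FLUSH PRIVILEGES;
```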

:rofl: When a bug becomes a feature:

Turns out that for localhost development, no root password on MySQL is the default for most dev stacks; MAMP, XAMPP, etc. are all like this. I can log in to MySQL's CLI (only when root inside Docker). At the same time, this doesn't allow root access via phpMyAdmin, or from the CLI if one isn't the Linux root user, which is important if one decides to proxy their personal computer for a public preview.