Project to run Hestia in Docker

As for PostgreSQL, I’m not sure what’s going on there. I do know that phpPgAdmin works in my VM and on Oracle-based hosting providers. I’ll have to investigate further why Docker is a problem; any hints greatly appreciated. I’m debating dropping it; I know Gitea and other popular projects use it, but they also list MySQL as an alternative.

I was already afraid of that :cry:

It would be nice if it worked…

Well, the Docker image is a kludge, an ill-assorted collection of parts :rofl: Without running the patch (see the Dockerfile), an upgrade has no chance of succeeding. That’s because Hestia’s install script triggers apt-get upgrade (etc.), which restores the stock /usr/bin/systemctl from the base image, and Docker is not a fan of systemctl. The “patch” I implemented overwrites systemctl again with the hack I sorted out to force it to work.

Speaking of which: since this is a PHP developer stack for a local/personal machine, is there a way to inhibit upgrades? Perhaps via an exit call or something in /etc/hestiacp/hooks/ ?
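For illustration, a guard like the following could sit in a script under /etc/hestiacp/hooks/. Whether Hestia’s upgrade path actually honours a non-zero exit from a hook is an open question in this thread, and the marker-file convention is purely my own invention:

```shell
# Hypothetical upgrade guard for a script under /etc/hestiacp/hooks/.
# Assumption: the hook runner aborts the upgrade on a non-zero exit --
# the replies below note this is not yet verified.
upgrade_allowed() {
    marker="$1"              # e.g. a version-pin marker file (made-up convention)
    if [ -f "$marker" ]; then
        return 1             # a real hook would `exit 1` here to abort
    fi
    return 0
}

# Demo of both branches with a throwaway marker file:
m=$(mktemp)
if upgrade_allowed "$m"; then echo "allowed"; else echo "blocked"; fi
rm -f "$m"
if upgrade_allowed "$m"; then echo "allowed"; else echo "blocked"; fi
```

Dropping the marker file on the dev box would then pin the install; deleting it re-enables upgrades.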

I am sure the Hestia package didn’t overwrite the system /usr/bin/systemctl

And the `docker build` command went fine. To be fair, I didn’t try it without building the new package first :slight_smile:

It didn’t take me long to set it up and try it out… It would be nice if it worked better :). It would allow us to go back to a stable state instead of patching over patch over patch.

I think we still call the preinstall hook in post

Mainly for us to disable debug mode on our demo server …

Ideally we should move that to the preinstall hook… And I don’t know if calling exit would work.

If Hestia invokes apt at any time, chances are systemctl is restored. Check /usr/bin/systemctl: it should be a symbolic link into /usr/bin/, which in turn calls over 6000 lines of Python code in /usr/bin/! :crazy: :grimacing:
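A quick way to check which one you have after an apt run (stock binary vs. replacement symlink) is a sketch like this; the helper name is mine, and the path matches a typical Debian/Ubuntu base image:

```shell
# Report whether a path is the real file or a symlink (and its target).
describe() {
    if [ -L "$1" ]; then
        echo "symlink -> $(readlink "$1")"
    else
        echo "regular file"
    fi
}

# On a patched container this should print the symlink target;
# "regular file" means apt has restored the stock binary.
describe /usr/bin/systemctl
```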

Yup, you are correct; phpPgAdmin is broken on my Oracle and VM boxes too. I guess I never noticed that it doesn’t work with Postgres 15. After investigating, I see that the phppgadmin/phppgadmin repo simply forgot to include a default database connection for unknown version numbers (they only check up through Postgres 14). I went ahead and made a pull request to include a default; that seems to fix it on real and VM boxes running full-fledged operating systems.

As for my Dockerfile, that alone doesn’t quite cut it. It looks like there is some sort of weird permission/symbolic-link-honoring issue. I see the symbolic link for it:

/usr/share/phppgadmin/conf/ -> /etc/phppgadmin/

But for whatever reason, the Docker instance doesn’t like it. Simply copying the files over makes it work :man_shrugging: Not sure why. I’ll update the Dockerfile to kludge that too :-/

rm /usr/share/phppgadmin/conf
cp -r /etc/phppgadmin /usr/share/phppgadmin/conf

I agree, too many patches going on here. But for localhost dev, it’s decent.

Hey, is there any progress towards a production-ready Docker container?
Is anybody here experienced with Helm charts? If so, we would happily co-sponsor an effort to develop a production-ready Helm chart this month.

It is far from production-ready… Last time I had issues with iptables / fail2ban

What about the first post, wasn’t that used in production for some time?

Maybe go straight to nftables rather than iptables?

The project is archived, with no code changes for nearly a year - I don’t think it’s a good idea to use it in production…

Not in this state, of course, but maybe somebody can pick it back up?
Maybe it is already more advanced than the other Dockerized project?

I think a couple people might have fluttered around this idea, but a very direct answer to your question is the “cattle, not pets” philosophy.

Ask yourself, if you lost one of your servers, would you be distressed (because it was set up so lovingly)? If so, your servers are pets. If not (because you can easily redeploy your server in minimal steps), your servers are cattle.

Docker containers provide a solution to this issue by making the (re)deployment of servers extremely simple (not necessarily easy, but none of this was easy in the first place). The idea being that you isolate the logic from the data, and if the logic dies, just redeploy.

Speaking to this concept in a personal manner, I set up a Hestia server a few years ago and rely heavily on it now. I recently realized it’s running Ubuntu 18. Now I have to figure out how to safely migrate it and all my data to an upgraded OS that will still receive updates (and most importantly, security patches).

Were this a docker setup, my data would already be isolated from my server, allowing me to migrate the data and logic separately, making the entire situation moot. I’d then be able to very simply update to a new docker image that’s already running the latest configuration.

I wholeheartedly support the idea of dockerizing Hestia. I specifically see the benefit via using a compose file to separate all the server components from the Hestia logic into separate images.

Plus, as a benefit, with Docker support, deployment can literally be as simple as setting your options in a docker compose file and hitting start. (And you can store that compose file in a git repo to track history, back up the data separately, and you have a VERY portable and resilient setup.)
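To make that concrete, here is a rough compose sketch. There is no official Hestia image (that’s the whole point of this thread), so the image name, ports, and volume layout below are all illustrative assumptions:

```yaml
# Hypothetical docker-compose.yml -- image name, ports, and volumes are
# illustrative assumptions, not an official Hestia release.
services:
  hestia:
    image: example/hestiacp:latest    # assumed image name
    restart: unless-stopped
    ports:
      - "8083:8083"                   # Hestia panel (default port)
      - "80:80"
      - "443:443"
    volumes:
      - hestia-conf:/usr/local/hestia/conf   # panel config split out from the image
      - user-data:/home                      # site and mail data survive redeploys

volumes:
  hestia-conf:
  user-data:
```

`docker compose up -d` would then (re)create the container while the named volumes keep the data: exactly the pets-to-cattle split described above.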