Need to change Hestia configurations

Vultr - Ubuntu - Hestia - Cloudflare - 1GB RAM - 1vCore - 15GB available

Web Template APACHE2 = default
Backend Template PHP-FPM = Default
Operating System: Ubuntu 20.04 (x86_64)
Load Average: 0.4

Google Search Console has been showing me this error for some time now: “Host had problems last week”. This server is built for automated sites. There are 2 more sites on it, which also publish content automatically; about 50 posts are published per day from one site. All sites are WordPress.

I also use “NextScripts: Social Networks Auto-Poster”. This plugin auto-shares my old posts.

I have heard about people who run a lot of sites on servers like this, so I don’t think the load itself is the problem. I think my configurations need to change.

Here are the configurations of Hestia.
Configure Server: PHP

  • max_execution_time: 30
  • max_input_time: 60
  • memory_limit: 250M

Configure Server: MARIADB

  • max_connections: 30
  • max_user_connections: 20
  • wait_timeout: 10

Here are the error pages that Google Search Console shows me.

I can’t really find a question in your thread? Setting the values above should not be a big problem.

On small instances, the database config will probably need to be adjusted to be lighter. There isn’t a single answer to your problem, but the max_connections parameter is the one that has the most effect on the memory usage of MySQL/MariaDB.
I’d look at that, with the aid of a tuning script, which will give you many suggestions.
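As a rough back-of-envelope sketch of why max_connections dominates: worst-case memory is roughly the shared buffer pool plus per-connection buffers times max_connections. The 128 MB pool and ~3 MB per connection below are assumed illustrative numbers, not measured values; check your own my.cnf.

```shell
# Back-of-envelope estimate of worst-case MariaDB memory use.
# buffer_pool_mb and per_conn_mb are ASSUMED numbers for illustration.
buffer_pool_mb=128     # assumed innodb_buffer_pool_size
per_conn_mb=3          # assumed sort/join/read buffers per connection
max_connections=30     # the value from the config above
total_mb=$(( buffer_pool_mb + per_conn_mb * max_connections ))
echo "worst-case MariaDB memory: ~${total_mb} MB"
```

On a 1 GB box that already leaves little headroom for PHP-FPM and nginx, which is why lowering max_connections is usually the first lever.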


With those specs for the VPS, Debian may have been a better OS choice than Ubuntu; it’s a lot lighter, and Ubuntu has a lot of bloat.

Here I would like to see some facts and proof: compare Debian 11 and Ubuntu Server 20.04 LTS and tell me if there is really such a difference…


Hestia says my load average is 0.2 - 0.6

but Google Search Console says “Host had problems last week”, and under Server connectivity it shows a high fail rate last week (25% - 35%).

[To explain further: these sites publish content automatically, and I have a WordPress plugin that shares my old posts on 5 different social media sites. Usually I have 5,000-10,000 posts on one site.]

What should I do to resolve this issue?

Thank you,
I hope your answer solves my problem. I will try.

If the sites serve mainly static files, enable the fastcgi cache / caching template; it will save a lot of resources. WordPress is known not to be the best-performing software…
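For anyone wondering what that looks like under the hood, here is a minimal sketch of an nginx fastcgi cache setup. The cache path, zone name WPCACHE, and PHP-FPM socket path are assumptions for illustration, not Hestia’s actual template values; Hestia ships its own caching template that handles this for you.

```nginx
# http { } context: declare a small cache (path and zone name are assumptions)
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=WPCACHE:32m
                   max_size=256m inactive=60m;

# inside the server { } block handling PHP:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
    fastcgi_cache WPCACHE;
    fastcgi_cache_valid 200 301 10m;           # serve cached hits for 10 min
    # don't serve cached pages to logged-in WordPress users
    fastcgi_cache_bypass $cookie_wordpress_logged_in;
    fastcgi_no_cache $cookie_wordpress_logged_in;
}
```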

I agree with eris’s suggestion; many plugins use up resources when you load a PHP page. You can use Uptime Robot to detect downtime; it alerts you of any downtime you might be facing.

First, check the error logs. Hosting multiple WordPress sites and adding 50 posts per day on a small VPS with only 1 core is not the best choice.
Since the robots.txt fetch (a static file on the default Hestia config) looks good but server connectivity has huge timeouts, the issue looks like it’s on the PHP + MySQL side.
Running normal WordPress sites with few posts would fit this config, but adding 50 per day grows wp_postmeta quickly and cripples the database, so you will need more cores and lots of RAM for the database.
Setups running lots of sites on Hestia, say ~100 sites per host, don’t look like this.

  1. What happens if I use only nginx, without Apache?
  2. I use Vultr servers. If I move to Amazon servers, will it help?
  3. In your opinion, how many cores do I need for my requirements?

I used to have an nginx + apache configuration when I was running VestaCP, and my RAM and CPU usage was very high; I even thought I might have misconfigured the values. When I moved to HestiaCP I chose to go with only nginx + php-fpm instead and ditched Apache.

I also realised one thing that really messes with CPU usage: leaving WordPress’s XML-RPC endpoint open. I saw in my logs that random IPs were hammering xmlrpc.php on my WordPress sites.

I have a 4-core Linode, and when bots were trying to brute-force the xmlrpc.php API I was seeing about 20% usage across all cores while that site was getting 0 real visitors. After I blocked it, CPU utilization only went up when I or a visitor loaded the site.

I also used the fastcgi cache, and it really helps. When I tried k6 performance testing, I ran into nginx setting limitations before maxing out my CPU, whereas without caching even a load of 50 virtual users would max out my CPU. With caching I think I reached a few thousand VUs (it was a long time ago; I don’t remember exactly). Bottom line: my recommendation would be to use caching where possible and block access to xmlrpc.php.

The way I block xmlrpc.php is in my nginx template (if you like I can include the whole thing, but below is really the only bit you need). Add it inside the server { } block of the WordPress template, and save a copy under a different name or it will be overwritten during updates. Remember to add your own server’s IP addresses so it can be accessed internally:

location = /xmlrpc.php {
    # replace these placeholders with your server's actual addresses
    allow <public IPv4>;
    allow <private IPv4>;
    allow <public IPv6>;
    allow <private IPv6>;
    deny all;
}

As for how many cores: I would say at least 2 cores and 4 GB of RAM, but if you can swing it, 4 cores and 8 GB would be better. I would not go to Amazon; Linode (or DigitalOcean) would be a much cheaper option.


4 cores + 4 GB of memory is more than enough…

Enable the fastcgi cache. However, accept that there is a delay of about 10 to 15 minutes before changes become visible on the main index page.

I am currently working on a module that integrates with nginx to allow purging; however, it is still in early alpha.


I created a test server, and I tried to block xmlrpc.php there.
Given my test server’s IP, are the lines below correct? I don’t have IPv6.

location = /xmlrpc.php {
    deny all;
}

I’m a beginner at this kind of server work; sorry if I have done anything silly :expressionless: Actually, it’s good to have an error like this. I learned a lot. :yum: :yum:

And you said “the whole thing”; can I ask what that is?

Yeah, that should be fine; the allow rules are only in case your WordPress plugins require the XML-RPC API, and I haven’t run into a plugin that needs it. If you have a private IP, I would add that as well, but other than that you should be good to go. I would check your logs to see if people have tried to brute-force the API.
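Checking the access log for xmlrpc.php abuse is a one-liner. The sketch below runs the pipeline against a tiny inline sample so it is self-contained; point the grep at your real nginx/Apache access log instead (the path depends on your setup).

```shell
# Sketch: count xmlrpc.php hits per client IP from an access log.
# An inline sample log is used here; substitute your real log path.
log=$(mktemp)
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2024:00:00:01] "POST /xmlrpc.php HTTP/1.1" 200' \
  '203.0.113.5 - - [01/Jan/2024:00:00:02] "POST /xmlrpc.php HTTP/1.1" 200' \
  '198.51.100.9 - - [01/Jan/2024:00:00:03] "GET /index.php HTTP/1.1" 200' \
  > "$log"
hits=$(grep -c 'xmlrpc.php' "$log")
# requests per IP, busiest first
grep 'xmlrpc.php' "$log" | awk '{print $1}' | sort | uniq -c | sort -rn
echo "total xmlrpc.php requests: $hits"
rm -f "$log"
```

If one or two IPs dominate the output, that is usually bot traffic worth blocking.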

By “the whole thing” I meant the entire nginx template, in case you don’t know how to edit the template file and add the location directive to block xmlrpc.php. If you can do it yourself, great. It’s good to learn; when I first started a few years back I was lost, but over time we learn.

And @eris is also right: 4 cores with 4 GB of RAM is more than enough. If you can get a VPS with that config, it will do nicely. Keep up the good work.

Would it make sense to have a preconfigured fail2ban rule for this in Hestia?

I read a lot about optimizations and found that adding swap can help with RAM. But the internet also says not to use it on SSDs. (DigitalOcean’s servers have swap by default, right?) Should I really not use it? Is there any alternative to swap?

resources - How To Add Swap Space on Ubuntu 20.04 | DigitalOcean

Swap will help somewhat if a service uses a lot of memory for a short period. However, for continuous use, adding more memory is faster and performs better.

Swap can also wear out disks faster if you are using an SSD…

The alternative is to add more memory.


It would depend on individual use cases. If your WordPress installation uses it, then yes. If not, best to just block it completely.

Yeah, swap is kind of like emergency RAM. If you don’t have swap and you run out of memory, processes will start to be killed. Personally I have an overkill setup for my use case: 8 GB RAM + 5 GB swap; I always like to have double what I use. I’m not sure about DigitalOcean. Like @eris said, the alternative is to add more RAM. Unless I had something like 24 GB of RAM, I would still keep a few GB of swap for emergencies. You can always set the swappiness very low to discourage using swap.
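For reference, the usual swap-file steps (plus the swappiness tweak mentioned above) look roughly like this. The /swapfile path and 2 GB size are assumptions; the commands are echoed as a dry run here since they need root, so review and run them manually once you’ve picked a size.

```shell
# Dry-run sketch of adding a swap file on Ubuntu/Debian.
# Path and size are ASSUMED examples; adjust before actually running.
swapfile=/swapfile
size_gb=2
n=0
for cmd in \
  "fallocate -l ${size_gb}G $swapfile" \
  "chmod 600 $swapfile" \
  "mkswap $swapfile" \
  "swapon $swapfile" \
  "echo '$swapfile none swap sw 0 0' | tee -a /etc/fstab" \
  "sysctl vm.swappiness=10"
do
  n=$((n + 1))
  echo "sudo $cmd"    # echoed only; run these yourself as root
done
```

The low vm.swappiness value keeps the kernel from touching swap until memory is genuinely scarce, which also limits SSD wear.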
