Google Search Console has been showing me the error “Host had problems last week” for some time now. This is a server built for automated sites. There are 2 more sites on this server; they also publish content automatically, and all of them are WordPress. About 50 posts are published per day from one site.
I also use “NextScripts: Social Networks Auto-Poster”. This plugin auto-shares my old posts.
I have heard of people running a lot of sites on servers like this, so I don’t think the load itself is the problem. I think my configuration needs to change.
Here are the Hestia configurations.

Configure Server: PHP
max_execution_time: 30
max_input_time: 60
memory_limit: 250m

Configure Server: MariaDB
max_connections: 30
max_user_connections: 20
wait_timeout: 10
Here are the error pages that Google Search Console shows me.
On small instances, the database config will probably need to be tuned to be lighter. There isn’t a single answer to your problem, but max_connections is the parameter with the biggest effect on MySQL/MariaDB memory usage.
I’d look at that first, with the aid of mysqltuner.pl, which will give you many suggestions.
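To illustrate what a “lighter” override might look like, here is a hypothetical MariaDB drop-in (a sketch only: every value should be checked against mysqltuner.pl’s report for your workload, and the file path is an assumption that varies by distro):

```ini
# Hypothetical example: /etc/mysql/mariadb.conf.d/99-tuning.cnf
# Values are illustrative, not recommendations; verify against
# the output of mysqltuner.pl before applying.
[mysqld]
max_connections         = 30     # the biggest lever on memory usage
max_user_connections    = 20
wait_timeout            = 10
innodb_buffer_pool_size = 256M   # keep modest on low-RAM hosts
performance_schema      = OFF    # trims memory overhead on small instances
```

After editing, restart MariaDB and re-run mysqltuner.pl after a day or so of normal traffic, since its suggestions are based on accumulated statistics.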
But Google Search Console (“Google webmaster”) says “Host had problems last week”, and under Server connectivity it shows “High fail rate last week (25% - 35%)”.
[To explain further: these sites publish content automatically, and I have a WordPress plugin that shares my old posts to 5 different social media sites. I usually have 5000-10000 posts on one site.]
If the pages are mainly static, enable the fastcgi cache / caching template; it will save a lot of resources. WordPress is known to be not the best-performing software …
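For reference, a minimal fastcgi cache setup in nginx looks roughly like this. This is a sketch, not Hestia’s actual template: the zone name, cache path, and PHP-FPM socket path are all assumptions, and HestiaCP ships its own caching template, which is the supported way to enable this.

```nginx
# Illustrative only -- prefer Hestia's bundled caching template.
# Goes in the http { } context:
fastcgi_cache_path /var/cache/nginx levels=1:2
                   keys_zone=WPCACHE:32m inactive=60m max_size=512m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# Inside the server { } block:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;  # socket path is an assumption

    fastcgi_cache WPCACHE;
    fastcgi_cache_valid 200 301 10m;

    # Don't serve cached pages to logged-in users or for POSTs.
    # WordPress login cookies carry a hash suffix, so match by prefix.
    set $skip_cache 0;
    if ($request_method = POST) { set $skip_cache 1; }
    if ($http_cookie ~* "wordpress_logged_in|wp-postpass|comment_author") {
        set $skip_cache 1;
    }
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache    $skip_cache;
}
```

The cache-bypass conditions matter for WordPress specifically: without them, logged-in admin pages and comment previews can be cached and served to anonymous visitors.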
I agree with eris’s suggestion; many plugins use up resources when you load a PHP page. You can use Uptime Robot to detect downtime. It alerts you to any downtime you might be facing.
First, check the error logs. Hosting multiple WordPress sites and adding 50 posts per day on a small VPS with only 1 core is not the best choice.
Since the robots.txt fetch (a static file in the default Hestia config) looks good but server connectivity has huge timeouts, the issue is likely on the PHP + MySQL side.
This config would fit normal WordPress sites with few posts. Adding 50 per day grows wp_postmeta rapidly and cripples the database, so you will need more cores and lots of RAM for the database.
Running lots of sites on Hestia (like ~100 sites per host) looks like this:
I used to have an nginx + apache configuration when I was running VestaCP, and my RAM and CPU usage was very high; I even thought I might have misconfigured the values. When I moved to HestiaCP I chose to go with only nginx + php-fpm and ditched apache.
I also realised that one thing that really messes with CPU usage is leaving WordPress XML-RPC accessible. I saw in my logs that random IPs were hammering xmlrpc.php on my WordPress sites.
I have a 4-core Linode, and when the bots were trying to brute-force the xmlrpc.php API I was getting about 20% usage across all cores while that site was getting 0 real visitors. After I blocked it, CPU utilization only went up when I or a visitor loaded the site.
I also used the fastcgi cache and it really helps. When I tried load testing with k6, I ran into nginx setting limitations before maxing out my CPU, whereas without caching even a load of 50 would max out my CPU. With caching I think I reached a few thousand VUs (it was a long time ago, I don’t remember exactly). Bottom line, my recommendation would be to use caching where possible and block access to xmlrpc.php.
The way I block xmlrpc.php is in my nginx template. (If you like I can include the whole thing, but below is really the only bit you would need. Add it inside the server { } block of the WordPress template, and make a copy with a different name or it will be overwritten during updates. Remember to allow your own server’s IP addresses so it can be accessed internally.)
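The snippet itself wasn’t quoted in the thread; a common pattern for the block described above looks like this (the allow addresses are placeholders, substitute your own server IPs, and the PHP-FPM socket path is an assumption):

```nginx
# Deny public access to xmlrpc.php, allowing only the server itself.
# The allow entries are placeholders; use your own server/private IPs.
location = /xmlrpc.php {
    allow 127.0.0.1;
    allow 203.0.113.10;   # example public IP -- replace with yours
    deny  all;

    # Allowed requests are still handed to PHP-FPM as usual.
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;  # socket path is an assumption
}
```

Because `location = /xmlrpc.php` is an exact match, nginx rejects the denied requests itself and PHP never runs, which is what saves the CPU.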
As for how many cores, I would say at least 2 cores and 4 GB of RAM. But if you can swing it, 4 cores and 8 GB of RAM would be better. I would not go with Amazon; Linode (or DigitalOcean) would be a much cheaper option.
Yeah, it should be fine; the allow is only in case your WordPress plugins require the XML-RPC API. I haven’t run into a plugin that needs it. If you have a private IP I would add that as well, but other than that you should be good to go. I would check your PHP logs to see if people have tried to brute-force the API.
By “the whole thing” I meant the entire nginx template. That’s in case you don’t know how to edit the template file and add the location directive to block xmlrpc.php. If you can do it, then great. It’s good to learn; when I first started a few years back I was lost, but over time we learn.
And @eris is also right: 4 cores with 4 GB of RAM is more than enough. If you can get a VPS with that config, it will do nicely. Keep up the good work.
I read a lot about optimizations. Then I found that swap can help with RAM. But the internet also says not to use it on SSDs. (DigitalOcean’s servers have swap by default, right?) Should I really not use it? Is there any alternative to swap?
Swap will help somewhat if a service uses a lot of memory for a short period. However, for sustained usage, adding more memory is faster and performs better.
Swap can also wear out the disks faster if you are using SSDs…
Yeah, swap is kind of like emergency RAM. If you don’t have swap and you run out of memory, processes will start to be killed. For me personally, I have an overkill setup for my use case: 8 GB RAM + 5 GB swap. I always like to have double what I use. I’m not sure about DigitalOcean. Like @eris said, the alternative is to add more RAM. Unless I had something like 24 GB of RAM, I would still keep a few GB of swap for emergencies. You can always set the swappiness very low to discourage using the swap.
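For reference, the swappiness knob mentioned above is a standard Linux sysctl; a minimal sketch of lowering it persistently (the value 10 is only an example, the kernel default is usually 60):

```ini
# /etc/sysctl.d/99-swappiness.conf
# Example value -- lower means the kernel prefers reclaiming page
# cache over swapping out process memory. Default is typically 60.
vm.swappiness = 10
```

You can apply it without rebooting via `sysctl -w vm.swappiness=10` (or `sysctl --system` to reload all drop-ins), and check the current value with `cat /proc/sys/vm/swappiness`.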