HestiaCP Load test

Hey guys, I recently saw a post on the Hestia forums where nobody seemed to have a clear answer on whether OpenLiteSpeed actually helps handle load better. So I decided to run some tests myself.

Server Specs:

I ran the test setup on Oracle Always Free tier with max configurations so it’s easy to replicate.
Ampere 4vCPU core
24GB RAM
200GB Disk Space
Ubuntu 22.04
The tests were run on a clean install of WordPress, on the default Twenty Twenty-Four homepage.

Cyberpanel Test

For the first test, I installed CyberPanel with OpenLiteSpeed, created a website, installed WordPress, and ran three load tests with 4,000-8,000 clients per second.



As you can see, the server handled the load pretty well; after the initial spike, latency stayed in the 250-500 ms range.

Hestia with Apache + php-fpm test

For the second test, I installed Hestia with Apache and PHP-FPM using the installer arguments; Nginx was also present as a reverse proxy. I ran one test with 2,000 clients per second.

Apache was already doing worse: the response time went above 2 seconds at some points.

Hestia with Nginx + php-fpm test

For the last test, I ran Nginx as the web server with PHP-FPM and no Apache. I also enabled the FastCGI cache from the settings and ran a test with 2,000 clients per second.
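
For context, Hestia's FastCGI cache toggle boils down to standard Nginx directives along these lines. This is a simplified sketch, not Hestia's exact template; the cache path, zone name, and PHP-FPM socket path are assumptions:

```nginx
# Illustrative sketch of an Nginx FastCGI cache setup (not Hestia's exact template).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m max_size=512m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;  # socket path is an assumption
        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 302 10m;          # cache successful responses for 10 minutes
        add_header X-FastCGI-Cache $upstream_cache_status;  # HIT/MISS header for debugging
    }
}
```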

Nginx held up better, with response times around 1 second at 2,000 clients.

Conclusion

I am really torn after seeing the results; the gaps are much wider than I ever expected, and not in a good way. On closer inspection, while fiddling with ApacheBench, I noticed that during the load test with Hestia and Nginx, CPU usage hits a full 100%, while memory usage increases by a percentage that depends on the clients visiting per second. To my surprise, although the memory increase was similar with OpenLiteSpeed, the CPU didn't even budge during the load.

I tried all of this with vanilla settings. Was I doing anything wrong? Did I miss any obvious optimisation? Are my results expected? I want your opinions.
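
For anyone who wants to watch CPU saturation during a run rather than eyeballing htop, here is a minimal Linux-only sketch that samples the aggregate `cpu` line of `/proc/stat` over a one-second window (field layout assumed from the standard format; iowait/irq/steal are ignored for brevity):

```shell
# Hedged sketch: approximate overall CPU utilisation over a 1-second window
# by sampling the aggregate "cpu" line of /proc/stat twice (Linux only).
# Counts user+nice+system as "busy"; iowait/irq/steal are ignored for brevity.
read -r _ u1 n1 s1 i1 rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 rest < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
idle=$(( i2 - i1 ))
echo "CPU busy: $(( 100 * busy / (busy + idle) ))%"
```

Run it in a second SSH session while the load test is going to see whether the CPU really pins at 100%.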

My production servers run CyberPanel, but it is utterly broken. I would like to move to Hestia; however, if the load capacity drops by this kind of magnitude, it could be an issue.

Use Nginx and PHP-FPM only and tune the Nginx config - Apache is known to not be that “fast” at all.

Even with Nginx and PHP-FPM, the server struggles to go beyond 2k clients/second, while OLS can handle 8k clients/second on the same server. Is that normal? That's a significant difference of 4 times.

Nginx and PHP-FPM can handle the same number of connections; it all depends on configuration. Maybe there are other users who want to share some samples.
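
Since samples were requested, the usual starting points for high-concurrency Nginx tuning look something like this. Every number here is an illustrative assumption to benchmark on your own hardware, not a drop-in value:

```nginx
# Illustrative tuning sketch for nginx.conf; numbers are assumptions to benchmark, not drop-in values.
worker_processes auto;             # one worker per CPU core
worker_rlimit_nofile 65535;        # raise the open-file limit for the workers

events {
    worker_connections 8192;       # connections per worker
    multi_accept on;               # accept as many new connections as possible at once
}

http {
    keepalive_timeout 15;
    keepalive_requests 1000;       # reuse connections under load
}
```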

What tool/service did you use to perform the load tests?

The screenshots provided are from Loader.io.
I've also used ApacheBench to test it.
I was also looking into Apache JMeter, but I don't have the setup to run that software right now.

However, the results were pretty similar between Loader.io and ApacheBench.

Does anyone have any insights?

Do you really have a lot of traffic, or are you checking just for fun? Nginx handles traffic great; the question is how many people visit your website daily.

I have been using Hestia + Nginx with 2-3k users at the same time…

So it should work fine …

That's why I'm asking. Is it a real problem or just a statistic?

I believe both of you are missing the actual point. If OLS can handle more than 2x the users, as per my results, I can scale down and save money, or put more load on the same server, depending on the use case. I am wondering whether my results are accurate or I am missing something crucial.

My current production server uses CyberPanel and handles a load of 3-5k concurrent users with OLS without any issue. However, I would prefer to switch to Hestia due to issues with CyberPanel. If the server capacity almost halves with Nginx compared to OLS, I might have to reconsider or scale up the hardware.

Also, this isn't just about me. If OLS really is far more efficient than Nginx + PHP-FPM, knowing that could help a lot of users manage load better.

@eris your data says nothing on its own. How does your server perform with OLS compared to Nginx + Hestia? Does it also peak at 2-3k users, or can it handle more/less?

Which One Is Better?
In our two tests, OpenLiteSpeed lost one round by a margin of 0.49% and won another round by a margin of 0.17%. The difference in measurements is insignificant and the fluctuations can be attributed to randomness in the environment. After analyzing the results, there is no clear winner. Enabling the FastCGI Caching on Nginx makes it just as performant as OpenLiteSpeed.

I've run this test: HestiaCP + Nginx standalone + HTTPS + WordPress + 3875 clients/sec. I activated the FastCGI cache and I'm using the default theme Twenty Twenty-Four.

These are my server specs:

Hestia     : 1.8.11
Processor  : AMD EPYC Processor
CPU cores  : 3 @ 2445.406 MHz
RAM        : 3.7 GiB
Swap       : 0.0 KiB
Disk       : 75.0 GiB
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.1.0-14-amd64
VM Type    : KVM

3875 clients per second over 1 minute, with an average response time of 411 ms, 0 errors, and 232147 successful responses, with a maximum load average of 1.75 during the load test (the CPU has 3 processors).

Cheers,
sahsanu

This benchmark is excellent @sahsanu. The problem is the FastCGI cache: if it is deactivated, processor consumption increases dramatically and PHP-FPM goes down.

PHP is the most memory-intensive part…

That is what hits the server the most. By default, LiteSpeed already has caching enabled; Nginx doesn't. However, for websites that get hit a lot, it is almost mandatory…

However, enabling it by default also brings a lot of problems, mainly with web shops or sites where users need to log in a lot…
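
For what it's worth, the shop/login caveat is usually handled by bypassing the cache when session cookies are present. A common Nginx pattern, sketched here with WordPress/WooCommerce cookie names as the assumed case:

```nginx
# Sketch: skip the FastCGI cache for logged-in users and carts
# (cookie names assume WordPress/WooCommerce).
set $skip_cache 0;
if ($http_cookie ~* "wordpress_logged_in|wp-postpass|woocommerce_cart_hash") {
    set $skip_cache 1;
}
fastcgi_cache_bypass $skip_cache;  # don't serve these requests from cache
fastcgi_no_cache $skip_cache;      # don't store their responses either
```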

Yes, I am aware of these tests. However:

  • A lot of people (including some mods of Hestia) say these tests are sponsored by OLS or some other entity.
  • These results vary a lot compared to mine, so I believe they were performed under a completely different scenario.

Thanks for providing the test results; however, this only paints one side of the picture, and the data isn't useful until we get comparable results with OLS. How does this same server perform with OLS? Can it still handle a similar amount of load? Less? More? Unless we get some data to compare against, it's not useful. Also, some htop data during the load might be helpful.

I'm comparing it to your tests, which were performed on a server with better specs.

In my previous post I already said the peak load average was 1.75 with 3 processors.

Anyway, here is a new test performed 20 minutes ago.

$ ./loadavg
0.03  1% :: 2023-12-12 15:26:26
0.03  1% :: 2023-12-12 15:26:31
0.02  0% :: 2023-12-12 15:26:36
0.50 16% :: 2023-12-12 15:26:41 <-- here the load test begins
0.62 20% :: 2023-12-12 15:26:46
0.73 24% :: 2023-12-12 15:26:51
0.83 27% :: 2023-12-12 15:26:56
0.85 28% :: 2023-12-12 15:27:01
0.94 31% :: 2023-12-12 15:27:06
0.87 29% :: 2023-12-12 15:27:11
0.88 29% :: 2023-12-12 15:27:16
0.97 32% :: 2023-12-12 15:27:21
1.13 37% :: 2023-12-12 15:27:26
1.28 42% :: 2023-12-12 15:27:31
1.34 44% :: 2023-12-12 15:27:36
1.23 41% :: 2023-12-12 15:27:41 <-- the load test ends here
1.13 37% :: 2023-12-12 15:27:46
1.04 34% :: 2023-12-12 15:27:51
Peak Load Average with 3 processors: 1.34 44% :: 2023-12-12 15:27:36

loadavg is a script I’ve made to check load average every 5 seconds:

#!/usr/bin/env bash

# Variables
timesecs="${1:-90}"   # total sampling time in seconds (default 90)
pausesecs="5"         # pause between samples
ADDLAVG=""
SECONDS=0             # reset bash's built-in elapsed-seconds counter
FINISH=0
NCPUS="$(getconf _NPROCESSORS_ONLN)"

# Trap signals
trap "FINISH=1" INT TERM HUP

# Action
while [[ "$SECONDS" -lt "$timesecs" ]]; do
        LAVG="$(awk '{print $1}' </proc/loadavg)"
        DATE="$(date +'%Y-%m-%d %H:%M:%S')"
        PERCENT="$(echo "scale=0; $LAVG * 100 / $NCPUS" | bc -l)"
        ADDLAVG+="$LAVG $PERCENT% $DATE\n"
        echo "$LAVG $PERCENT% :: $DATE" | sed -E 's/(\s[0-9]%\s::)/ \1/'
        if [[ "$FINISH" -eq 1 ]]; then
                break
        fi
        sleep "$pausesecs"
done

# Result
PEAKAVG="$(echo -e "$ADDLAVG" | sort -n | tail -n1)"
echo -n "Peak Load Average with $NCPUS processors: "
awk '{print $1,$2" :: "$3,$4}' <<<"$PEAKAVG"

Oracle Cloud ARM 4-core instances are limited to 0.25 of a real core each. Also, ARM performance doesn't really compare with x86 cores…