Optimize MariaDB's speed

I would appreciate your help.
I need to optimize MariaDB's speed.
I have a task that performs about 10,000 registrations. Under VestaCP that task took about 10 to 15 seconds on a dedicated Hetzner server (AMD Ryzen 5 3600 Hexa-Core, 64 GB DDR4, 2x 512 GB NVMe SSD).

However, with HestiaCP the same task takes about 3 minutes on the same server and ends up hitting a timeout, although the task always finishes without problems.

I believe the problem can be solved through configuration by someone who understands the subject better than I do.

Below are the settings from /etc/mysql/my.cnf:

symbolic-links=0
local-infile=0

skip-external-locking
key_buffer_size = 32G
max_allowed_packet = 512M
table_open_cache = 4096
sort_buffer_size = 16M
read_buffer_size = 16M
read_rnd_buffer_size = 64M
myisam_sort_buffer_size = 1024M
thread_cache_size = 128
query_cache_size= 256M
thread_concurrency = 8

#innodb_use_native_aio = 0
innodb_file_per_table

max_connections=10000
max_user_connections=10000
wait_timeout=300
interactive_timeout=300
long_query_time=50
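
A quick way to confirm which of these values the running server actually applied (some of them, e.g. thread_concurrency, are deprecated or removed in newer MariaDB/MySQL releases) is to query SHOW GLOBAL VARIABLES. A minimal sketch, assuming the PyMySQL package and substituting your own credentials:

# check_applied_settings.py - minimal sketch, hypothetical credentials
import pymysql

# Variables taken from the my.cnf posted above
VARS = [
    "key_buffer_size", "max_allowed_packet", "table_open_cache",
    "sort_buffer_size", "read_buffer_size", "read_rnd_buffer_size",
    "myisam_sort_buffer_size", "thread_cache_size", "query_cache_size",
    "max_connections", "wait_timeout", "interactive_timeout", "long_query_time",
]

conn = pymysql.connect(host="localhost", user="root", password="secret")
try:
    with conn.cursor() as cur:
        for name in VARS:
            cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
            row = cur.fetchone()
            # row is None when the variable does not exist in this server version
            print(f"{name} = {row[1]}" if row else f"{name}: not present")
finally:
    conn.close()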

Thank you

UPDATE

I made a clone of the system on the same server, under a different account, and it worked fine.
I believe that if I delete the account and create it again, the problem will be solved.
However, I am always open to new ways to increase performance. :grin:

This does not sound like an issue with MySQL after all; it is more likely a limitation in PHP or the webserver config. This can be concluded because the MySQL config does not change for different users, while the nginx/apache/PHP settings could, depending on which package/template you set in the panel.

However, without any information about your system's OS and configuration, it's going to be wild guessing anyway :wink:
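
One way to spot such per-account differences is to compare the timeout- and limit-related directives in the PHP-FPM pool files of the two accounts. A rough sketch in Python, assuming the standard Debian PHP-FPM layout (pool paths and directive names may vary with the panel template):

# list_fpm_limits.py - rough sketch, paths assume standard Debian PHP-FPM packages
import glob
import re

POOL_GLOB = "/etc/php/*/fpm/pool.d/*.conf"
# Directives that commonly explain "the task hits a timeout" symptoms.
PATTERN = re.compile(
    r"^\s*(request_terminate_timeout|pm\.max_children"
    r"|php_admin_value\[(max_execution_time|memory_limit)\])\s*=\s*(.+)$"
)

for path in sorted(glob.glob(POOL_GLOB)):
    with open(path) as fh:
        hits = [line.strip() for line in fh if PATTERN.match(line)]
    if hits:
        print(path)
        for hit in hits:
            print("    " + hit)

Webserver proxy timeouts set per template (e.g. nginx proxy_read_timeout) can matter for the same symptom as well.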


I am using Debian 9

But it's true: I had already thought about that, but I never paid attention to it until I read your comment.
I'm actually using one account for the files and another account for the database.
I do this so that I can back up the databases every 2 minutes without spending resources on backups of the files, which really isn't necessary because the files never change and I have them saved in my office.
The database, on the other hand, is receiving customer data every second.

Anyway, in the early hours between next Saturday and Sunday, when users are asleep, I'm going to delete the account and install everything again.
If that doesn’t work, we’ll have to investigate further.

Depending on how you do that, you might run into table locking issues, and with such high timeouts it might be even worse.

Maybe you'd rather look into true MySQL replication, i.e. run a second server as a slave and use the built-in functions of mysql-server to get your safety net? Backups taken on the near-live slave server then won't affect the production/master at all.

Before you ask :rofl:: no, as far as I am aware HestiaCP so far does not offer a function to set up a MySQL slave for you.
So I'm afraid you would need to set that up on your own (but AFAIK it shouldn't be too hard; you just need a second server where you can run MySQL).
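
As an illustration of what that manual setup roughly involves (hostnames, credentials and binlog coordinates below are placeholders): the master needs log_bin and a unique server_id in its my.cnf plus a user granted REPLICATION SLAVE, and the freshly installed replica is then pointed at it, for example with PyMySQL:

# point_replica_at_master.py - sketch only; placeholder hosts and credentials.
# Prerequisites: log_bin and a unique server_id on the master, a replication user,
# a different server_id on the replica, and coordinates from SHOW MASTER STATUS.
import pymysql

MASTER_LOG_FILE = "mysql-bin.000001"   # taken from SHOW MASTER STATUS on the master
MASTER_LOG_POS = 4                     # taken from SHOW MASTER STATUS on the master

replica = pymysql.connect(host="replica.example.com", user="root", password="secret")
with replica.cursor() as cur:
    cur.execute(
        "CHANGE MASTER TO MASTER_HOST=%s, MASTER_USER=%s, MASTER_PASSWORD=%s, "
        "MASTER_LOG_FILE=%s, MASTER_LOG_POS=%s",
        ("master.example.com", "repl", "repl-password", MASTER_LOG_FILE, MASTER_LOG_POS),
    )
    cur.execute("START SLAVE")
replica.close()

Backups (dumps or file-level) can then be taken on the replica without touching the master.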

You are right, but I solved this problem as follows:

As I have several servers with very different contents to back up, I purchased Kimsufi servers with 2 TB of space each and installed Hestia on them so they can receive the backups from the production servers (Hetzner) in an organized way.
So the server that is having this problem sends its backups remotely to a Kimsufi server, which in turn also receives backups from other servers.
It may not be a perfect solution, but it works.
Changing this organization would probably be an odyssey, and I would have to deal with things that I don't really want to or have time for.

You know how it is: there are things that are born a certain way and then grow over time; later on it is difficult to change them, even though we know there are better ways.


Very true indeed.

And nothing against your backup solution; splitting accounts to be able to do a DB-only backup is certainly a feasible workaround, though you'd probably be better off with a small shell script that dumps your DB and sends it somewhere (on a cronjob).

This is because the backup routine dumps your DB, archives it temporarily and then tars it again into the final archive. Obviously, as your database grows, the time a backup takes will grow as well, multiplied by the extra work the routine does.
On top of that, Kimsufi servers (which tend to be great value) are still limited in network speed, so that may be another (future) culprit.
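
The dump-and-ship idea above, sketched in Python instead of shell (database name, paths and destination are placeholders; mysqldump and rsync are assumed to be installed and the MySQL credentials to live in ~/.my.cnf), to be run from a cronjob:

# dump_and_ship.py - sketch of a cron-driven DB backup; placeholder names/paths.
import datetime
import pathlib
import subprocess

DB_NAME = "customers"                          # placeholder database name
DEST = "backup@kimsufi.example.com:backups/"   # placeholder rsync destination
DUMP_DIR = pathlib.Path("/var/backups/db")

DUMP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
dump_file = DUMP_DIR / f"{DB_NAME}-{stamp}.sql.gz"

# --single-transaction takes a consistent InnoDB snapshot without locking tables.
dump = subprocess.Popen(
    ["mysqldump", "--single-transaction", DB_NAME], stdout=subprocess.PIPE
)
with dump_file.open("wb") as out:
    subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
if dump.wait() != 0:
    raise RuntimeError("mysqldump failed")

# Ship only the compressed dump, leaving the unchanging site files out as described above.
subprocess.run(["rsync", "-a", str(dump_file), DEST], check=True)

This skips the panel routine's archive-then-tar-again step, so the backup time scales only with the dump itself.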

Again, there's nothing wrong with your way of doing it, but the sooner you start to optimize, rethink or even completely change that process, the easier it will be, and the knock-on effects might already be noticeable.
