Fail2ban to stop DDoS

We had a site DDoSed this morning – an attack slamming it with requests that all returned 404 errors.

So, I created this file: /etc/fail2ban/filter.d/nginx-4xx.conf
and wrote this to the file:

[Definition]
failregex = ^<HOST>.*"(GET|POST).*" (404|444|403|400|500) .*$
ignoreregex =
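Before reloading, the pattern can be sanity-checked. fail2ban ships a dedicated tool for exactly this: fail2ban-regex /var/log/nginx/domains/example.log /etc/fail2ban/filter.d/nginx-4xx.conf (the log path here is just an example). As a rough, portable approximation, the same pattern – with <HOST> swapped for an address pattern – can be tried with grep against a made-up log line:

```shell
# Sanity-check the filter pattern against a sample nginx access-log line.
# The log line and IP are made up; <HOST> is replaced by an address pattern here.
line='203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /missing HTTP/1.1" 404 162 "-" "curl/8.0"'
if echo "$line" | grep -qE '^[0-9.]+ .*"(GET|POST).*" (404|444|403|400|500) '; then
  echo "matched"
else
  echo "no match"
fi
```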

then edited the /etc/fail2ban/jail.conf file
and added to the end of the file:

enabled = true
port = http,https
logpath = /var/log/nginx/domains/
maxretry = 3

This seemed to work, as the attack slowed. However, the banned IPs weren’t reported in the panel firewall log, although dozens of ‘HESTIA’-labeled blocked IPs appeared after I wrote the files above and restarted Fail2ban. Also, I didn’t edit the /etc/fail2ban/jail.local file – I wasn’t sure whether that was required.

Any insight on the above and/or if anyone else has faced this, would be appreciated.

I also don’t understand how to ban with my own rule and make the bans show up in the HestiaCP fail2ban list.
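One thing worth checking (a sketch, not verified against this setup): a jail appended to jail.conf – or better, jail.local, which survives package upgrades – normally needs its own section header naming the jail, and a filter line matching the filter file name, or fail2ban won’t tie the two together. Assuming the names from the post above:

```ini
[nginx-4xx]
enabled  = true
port     = http,https
filter   = nginx-4xx
logpath  = /var/log/nginx/domains/*.log
maxretry = 3
```

The logpath is written as a glob because fail2ban expects log files rather than a bare directory; the exact *.log pattern is an assumption about how the per-domain logs are named.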

You can always ask fail2ban to produce that list for you on the CLI.
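For example, with the standard fail2ban-client commands (the jail name nginx-4xx is assumed from the post above; the IP is an example):

```shell
# List all configured jails
fail2ban-client status

# Show the currently banned IPs for one jail
fail2ban-client status nginx-4xx

# Ban or unban an address by hand
fail2ban-client set nginx-4xx banip 203.0.113.5
fail2ban-client set nginx-4xx unbanip 203.0.113.5
```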

I’m no expert, but I have a lot of experience with DDoS attacks (because of the nature of the game-server scene).

So, what are the basic principles of protecting against DDoS?
In short: use a third-party service to filter the DDoS traffic.
You’re lucky – unlike me in 2010 and 2013, when there were no options on the scene at all for a reasonable price. Right now there are dozens of services that filter DDoS attacks at an acceptable price.

Examples? Easy:

  2. Authentication required - flowProxy
  3. (not sure about this option)

All of them except hostslick have good Layer 7 (application layer, i.e. your web server) DDoS protection at an acceptable price tag of around 10–15 USD/mo.

That’s what you need to know about DDoS attacks.

Let’s talk about DoS, or a DoS launched manually from several VPSes/servers.

The main goal is to minimize CPU and traffic usage per visitor, per connection.
For that, you need as lightweight a software stack as possible.

The classic one: LEMP (Linux, nginx, MariaDB, PHP)

MariaDB usually doesn’t require optimizations.
PHP requires fixed limits: a max number of child processes, worker restarts after X requests, and php-fpm settings to cap the max execution time per PHP request.
It’s also important to enable OPcache.
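As a sketch, those PHP limits map onto a php-fpm pool config roughly like this (the directive names are real php-fpm settings; the values are illustrative, not tuned recommendations):

```ini
; php-fpm pool config (e.g. www.conf)
pm = static
pm.max_children = 10               ; hard cap on concurrent PHP workers
pm.max_requests = 500              ; recycle each worker after 500 requests
request_terminate_timeout = 30s    ; kill PHP requests that run too long

; and in php.ini:
; opcache.enable = 1
```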

The second most important thing is nginx itself.
If your website is not really dynamic (i.e. not updated frequently, or without user authorization), use something like “How to Use the Nginx FastCGI Page Cache With WordPress” from the Linode Docs.

I.e., the main idea is to cache pages, to reduce the number of visitor connections nginx forwards to PHP or to your app.

Instead of burning a lot of CPU generating the same page over and over again in a PHP process, you just deliver the page from the cache.
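A minimal sketch of that idea with nginx’s FastCGI cache (the zone name, cache path, socket path, and times are illustrative assumptions, not tuned values):

```nginx
# http {} level: where the cache lives and how entries are keyed
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGECACHE:100m
                   max_size=1g inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# in the PHP location of the server {} block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # socket path is an assumption
    fastcgi_cache PAGECACHE;
    fastcgi_cache_valid 200 301 60m;           # serve cached copies for up to 1h
    fastcgi_cache_use_stale error timeout updating;
    add_header X-Cache $upstream_cache_status; # handy for checking HIT/MISS
}
```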

Another thing is the disk. If you have NVMe or an in-memory ramdisk, your cache will be extremely fast (because of IOPS).

Another option is limiting connections per client with nginx’s limit_conn, or rate limiting requests – see “NGINX Rate Limiting” in the nginx docs.

That will dramatically drop the load.
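A sketch of both nginx modules together (zone names, sizes, and rates are illustrative, not recommendations):

```nginx
# http {} level: shared zones keyed by client address
limit_conn_zone $binary_remote_addr zone=peraddr:10m;
limit_req_zone  $binary_remote_addr zone=perip:10m rate=10r/s;

# server {} or location {} level
limit_conn peraddr 20;                   # max 20 open connections per IP
limit_req  zone=perip burst=20 nodelay;  # 10 req/s sustained, bursts of 20
limit_req_status 429;                    # tell clients to back off, not 503
```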

Another solution, even for dynamically active content, is a micro-cache: it can be used under heavy load with almost no noticeable issues for users, and it is very effective with minimal effort.
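A micro-cache is the same FastCGI cache machinery with a very short validity – even one second of caching collapses thousands of identical requests into a single PHP execution. A sketch (assumes a keys_zone named MICROCACHE was already declared with fastcgi_cache_path at the http {} level; the name is illustrative):

```nginx
# in the PHP location of the server {} block
fastcgi_cache MICROCACHE;
fastcgi_cache_valid 200 1s;        # cache successful pages for just 1 second
fastcgi_cache_use_stale updating;  # serve the stale copy while it refreshes
fastcgi_cache_lock on;             # only one request repopulates an entry
```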

And only as the last layer: the netfilter firewall, i.e. iptables/ufw connection limits per client per time window.

That makes it possible to mitigate a weak DDoS (i.e. multiple instances of a DoS attack).
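A sketch with iptables’ connlimit match (the threshold is illustrative, the rule needs root, and firewall rules are easy to get wrong – test from a console you can’t lock yourself out of):

```shell
# Drop new HTTP(S) connections from any single address that already
# holds more than 30 open connections
iptables -A INPUT -p tcp --syn -m multiport --dports 80,443 \
         -m connlimit --connlimit-above 30 -j DROP
```

ufw has a coarser built-in for the same idea: “ufw limit” rate-limits new connections per source address on a given port.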

Things that make you an easy target:

  • you run Apache
  • you run PHP < 8
  • you have unindexed tables in your database, so your search / sign-up / password-reset forms take a lot of resources to execute
  • you serve big images, JS, etc. on your website

Overall → everything DoS-related above can be handled by the providers mentioned earlier. You can then run whatever server you want behind them and not give a heck about this not-really-critical problem in 2023.

The best way to stop DDoS is Cloudflare (free) plus the CSF firewall.

I can stop a DDoS of 10k IPs and more than 10 million requests on my VPS with 4 CPUs and 8 GB RAM (I haven’t had a chance to test higher request volumes). But trust me, Cloudflare is the best way to stop DDoS.

The best way to stop a DDoS is Cloudflare…

Hestia has built-in support for FastCGI and/or proxy cache.

For nginx-only setups, enable it in the advanced settings.

For nginx + Apache2, use the proxy cache settings…

Thanks for the comments. Due to hole-punching, we’re unable to use the NGINX FastCGI page cache with e-commerce; however, we’re using Varnish, which seems to have helped significantly with the attack – there was no noticeable slowing of the site at any point during it.

In past testing, adding Cloudflare for CSS and JS added too much TTFB and resolving time to our site, but it may become a forced option, as well as adding CSF, which I like a lot and have used in the past.

Back in the days of using Apache, adding a user-agent block to the .htaccess file was easy, and it killed off any attack almost immediately. I’ve never had to do that with NGINX, so it may also be something to look into – of course while we’re monitoring the server, so we can add it live.
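For reference, the rough nginx equivalent of an .htaccess user-agent block is a conditional in the server {} block (the pattern here is a placeholder – match it to whatever agent string the live attack is actually sending):

```nginx
# return 444: nginx closes the connection without sending any response,
# which is very cheap for a server under attack
if ($http_user_agent ~* "badbot|evilscanner") {
    return 444;
}
```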