Hello everyone, I’m having serious issues on my server. I’m dealing with a persistent denial-of-service attack where the IP keeps changing every time. Is there any solution that could help me without using Cloudflare?
Thank you.
Hi,
It’s really hard to stop this kind of DDoS attack if you don’t want to use (or can’t use) a proxy with anti-DDoS services such as Cloudflare.
Is it an HTTP flood attack or a SYN flood attack?
Are you using Nginx, or Nginx + Apache2?
Do the attacking IPs come from specific countries?
Is there any pattern in the logs (same user-agent, same requested files, etc.)?
You could add these directives to /etc/nginx/nginx.conf so they will be used by all your sites:
limit_req_zone $binary_remote_addr zone=req_limit:20m rate=5r/s;
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
limit_req zone=req_limit burst=30 nodelay;
limit_conn addr_limit 15;

map $http_user_agent $bad_ua {
    default 0;
    "" 1;
    "~*(curl|python|wget|Go-http-client|aiohttp|Java)" 1;
}

map $bad_ua $force_block {
    1 403;
    0 0;
}
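As written, the two map blocks define $bad_ua and $force_block but nothing acts on them; here is a minimal sketch of one way to apply the zones and the user-agent check inside a server{} block (the server_name and the 403/429 choices are placeholders, not part of the original snippet):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    # Throttle per-IP request rate and cap concurrent connections,
    # using the zones defined in http{} above.
    limit_req zone=req_limit burst=30 nodelay;
    limit_req_status 429;
    limit_conn addr_limit 15;

    # Reject requests whose user agent matched the map above.
    if ($bad_ua) {
        return 403;
    }
}
```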
You should also be sure that your sites are using cache.
If it’s a SYN flood, these iptables rules could help:
iptables -N syn_flood
iptables -I INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j syn_flood
iptables -A syn_flood -m hashlimit --hashlimit-upto 4/sec --hashlimit-burst 20 --hashlimit-mode srcip --hashlimit-name syn_flood -j RETURN
iptables -A syn_flood -j LOG --log-prefix "iptables:drop:syn_flood "
iptables -A syn_flood -j DROP
And you could also enable SYN Cookies:
sudo sysctl -w net.ipv4.tcp_syncookies=1
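That sysctl only lasts until reboot; a sketch of persisting it (the drop-in file name here is just a conventional choice, adjust for your distro):

```shell
# Persist SYN cookies across reboots via a sysctl drop-in file.
echo 'net.ipv4.tcp_syncookies = 1' | sudo tee /etc/sysctl.d/99-syncookies.conf
sudo sysctl --system   # reload all sysctl drop-ins now
```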
Hello @Sahsanu, thank you very much for your reply. Today has been a really difficult and chaotic day for me, so I appreciate your help even more.
I’m currently using Nginx (as reverse proxy) + Apache2, and I’m being hit by a flood of requests mainly from random IP ranges in Asia. Here is a sample from my access logs:
grep "-------" /var/log/nginx/domains/-----------.gov.br.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -30
6101236 193.95.53.131
2898030 103.55.22.225
2516985 103.179.154.42
1939675 58.69.182.53
1077436 120.28.218.15
453736 103.214.102.154
…
The attack is focused on the home page ( / ) and all requests emulate mobile browsers using very similar user-agents like:
Mozilla/5.0 (Linux; Android 8.0.0; LDN-L01) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.101 Mobile Safari/537.36
Example requests:
103.55.22.225 - - [05/Dec/2025:17:10:18 -0300] "GET / HTTP/2.0" 200 1024 "-" "Mozilla/5.0 (Linux; Android 8.0.0; LDN-L01)…"
103.179.154.42 - - [05/Dec/2025:17:10:18 -0300] "GET / HTTP/2.0" 200 1024 "-" "Mozilla/5.0 (Linux; Android 8.0.0; LDN-L01)…"
103.55.22.225 - - [05/Dec/2025:17:10:18 -0300] "GET / HTTP/2.0" 200 1024 "-" "Mozilla/5.0 (Linux; Android 7.0; BQ-5591)…"
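To quantify how similar those user agents really are, a quick pipeline can rank them the same way the IPs were ranked above (the inline three-line sample log is just for illustration; point the awk at your real access log instead — field 6 of a `"`-split combined-format line is the user agent):

```shell
# Build a tiny sample log inline so the pipeline is self-contained.
printf '%s\n' \
  '1.2.3.4 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "-" "AgentA"' \
  '5.6.7.8 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "-" "AgentA"' \
  '9.9.9.9 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "-" "AgentB"' \
  > /tmp/sample_access.log

# Rank user agents by request count, most frequent first.
awk -F'"' '{print $6}' /tmp/sample_access.log | sort | uniq -c | sort -rn | head
```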
I’ve already tried blocking entire IP families, netblocks and countries, but nothing seems to work. Even custom firewall rules haven’t helped enough.
At this point, I’m going to try adjusting the mitigation strategy you suggested. One question:
Regarding caching: should I switch from the default configuration to Nginx’s proxy cache, and enable caching directly on the / route?
Thank you again for your support.
Hey, me again lol
I’m still fighting against massive bot abuse on my server (mostly curl, wget, python-requests, Go-http-client, aiohttp, etc.). Even with the config below, this simple test is still getting 200s instead of being limited or blocked:
My current config:
# Server globals
user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /etc/nginx/conf.d/main/*.conf;
include /etc/nginx/modules-enabled/*.conf;
# Worker config
events {
worker_connections 9000;
use epoll;
multi_accept on;
}
http {
limit_req_zone $binary_remote_addr zone=req_limit:20m rate=5r/s;
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
limit_req zone=req_limit burst=5 nodelay;
limit_conn addr_limit 15;
map $http_user_agent $bad_ua {
default 0;
"" 1;
"~*(curl|python|wget|Go-http-client|aiohttp|Java)" 1;
}
map $bad_ua $force_block {
1 403;
0 0;
}
# Main settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_header_timeout 180s;
client_body_timeout 180s;
client_header_buffer_size 2k;
client_body_buffer_size 256k;
client_max_body_size 1024m;
large_client_header_buffers 4 8k;
send_timeout 600s;
keepalive_timeout 30s;
keepalive_requests 1000;
reset_timedout_connection on;
server_tokens off;
server_name_in_redirect off;
server_names_hash_max_size 512;
server_names_hash_bucket_size 512;
charset utf-8;
# FastCGI settings
fastcgi_buffers 512 4k;
fastcgi_buffer_size 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_connect_timeout 30s;
fastcgi_read_timeout 300s;
fastcgi_send_timeout 180s;
fastcgi_cache_lock on;
fastcgi_cache_lock_timeout 5s;
fastcgi_cache_background_update on;
fastcgi_cache_revalidate on;
# Proxy settings
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header Early-Data $rfc_early_data;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_buffers 256 4k;
proxy_buffer_size 32k;
proxy_busy_buffers_size 32k;
proxy_temp_file_write_size 256k;
proxy_connect_timeout 600s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
# Log format
log_format main '$remote_addr - $remote_user [$time_local] $request "$status" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
log_format bytes '$body_bytes_sent';
log_not_found off;
access_log on;
# Mime settings
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Compression
gzip on;
gzip_vary on;
gzip_static on;
gzip_comp_level 6;
gzip_min_length 1024;
gzip_buffers 128 4k;
gzip_http_version 1.1;
gzip_types text/css text/javascript text/js text/plain text/richtext text/shtml text/x-component text/x-java-source text/x-markdown text/x-script text/xml image/bmp image/svg+xml image/vnd.microsoft.icon image/x-icon font/otf font/ttf font/x-woff multipart/bag multipart/mixed application/eot application/font application/font-sfnt application/font-woff application/javascript application/javascript-binast application/json application/ld+json application/manifest+json application/opentype application/otf application/rss+xml application/ttf application/truetype application/vnd.api+json application/vnd.ms-fontobject application/wasm application/xhtml+xml application/xml application/xml+rss application/x-httpd-cgi application/x-javascript application/x-opentype application/x-otf application/x-perl application/x-protobuf application/x-ttf;
gzip_proxied any;
# Cloudflare IPs
include /etc/nginx/conf.d/cloudflare.inc;
# SSL PCI compliance
ssl_buffer_size 1369;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256";
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_early_data on;
ssl_ecdh_curve auto;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets on;
ssl_session_timeout 7d;
resolver 10.100.16.161 10.100.16.101 8.8.8.8 valid=300s ipv6=off;
resolver_timeout 5s;
# Error pages
error_page 403 /error/404.html;
error_page 404 /error/404.html;
error_page 410 /error/410.html;
error_page 500 501 502 503 504 505 /error/50x.html;
# Proxy cache
proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_temp_path /var/cache/nginx/temp;
proxy_ignore_headers Cache-Control Expires;
proxy_cache_use_stale error timeout invalid_header updating http_502;
proxy_cache_valid any 1d;
# FastCGI cache
fastcgi_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=microcache:10m inactive=30m max_size=1024m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
add_header X-FastCGI-Cache $upstream_cache_status;
# Cache bypass
map $http_cookie $no_cache {
default 0;
~SESS 1;
~wordpress_logged_in 1;
}
# File cache (static assets)
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors off;
# Wildcard include
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/conf.d/domains/*.conf;
}
Test I’m running:
for i in {1..20}; do curl -s -o /dev/null -w "%{http_code} " https://------------gov.br/; done
Expected behavior: 429 Too Many Requests after X requests.
Actual behavior: still returning 200 OK every time.
I was getting this too. I didn’t have any other option but to block the bot IPs via a firewall IP ban list. If you like, you can use my list. It might have a few false positives, which can be removed from the list if I’m told about them.
https://git.flossboxin.org.in/vdbhb59/hosts/raw/branch/main/bots.txt
Sorry, my bad, some directives must be in http{} and others in the server{} block of your site.
In http{}:
limit_req_zone $binary_remote_addr zone=req_limit:20m rate=5r/s;
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
map $http_user_agent $bad_ua {
default 0;
"" 1;
"~*(curl|python|wget|Go-http-client|aiohttp|Java)" 1;
}
In the server{} block of your site (nginx.conf and nginx.ssl.conf):
limit_req zone=req_limit burst=5 nodelay;
limit_req_status 429;
limit_conn addr_limit 15;
if ($bad_ua = 1) {
return 403;
}
After restarting Nginx:
Your for loop should now hit the 403 error. To test the connection limit you can’t use that loop (you must run curl in parallel).
Use this to test the User Agent block:
seq 1 20 | xargs -P 20 -I {} curl -s -o /dev/null -w "%{http_code}\n" https://yoursite
And this (changing the user agent) to test the connection limit:
seq 1 20 | xargs -P 20 -I {} curl -s -o /dev/null -w "%{http_code}\n" --user-agent 'TestAgent' https://yoursite
Unfortunately, nothing is working. I set the limitation, but it’s not respecting it. I tried blocking it with a firewall, but it doesn’t work. Any ideas?
Check the log.
14.130.175.18 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36/8mqY1SuL-72"
114.130.175.18 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36/8mqY1SuL-72"
185.82.238.42 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 12037 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
185.82.238.42 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 12037 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
91.200.161.250 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
91.200.161.250 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
91.200.161.250 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
103.39.51.156 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 8.1.0; Lenovo TB-7104I) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36"
103.147.246.184 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
103.39.51.156 - - [06/Dec/2025:17:20:51 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 8.1.0; Lenovo TB-7104I) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36"
103.147.246.184 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
103.147.246.184 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
101.255.166.185 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 6.0.1; Redmi 4 Build/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.99 Mobile Safari/537.36 YaApp_Android/9.00 YaSearchBrowser/9.00"
45.189.253.164 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
69.75.140.157 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Android 11; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0"
200.49.99.78 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 8.0.0; XT1635-02) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.181 Mobile Safari/537.36"
121.100.19.82 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 10; SM-A705MN) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.116 Mobile Safari/537.36"
103.147.246.184 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
36.93.214.253 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 8.0.0; XT1635-02) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.181 Mobile Safari/537.36"
103.146.185.140 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 9; LM-Q610.FG) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Mobile Safari/537.36"
5.28.35.226 - - [06/Dec/2025:17:20:52 -0300] "GET / HTTP/1.0" 200 8522 "https://www.-------.gov.br/" "Mozilla/5.0 (Linux; Android 10; SM-A705MN) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.116 Mobile Safari/537.36"
The limitation works, but those IPs are only connecting once each, so Nginx is not limiting them.
How are you trying to block them?
Using the log you posted, this is the ranking by country:
10 Indonesia
3 Russian Federation
2 Czechia
1 United States of America
1 Mexico
1 China
1 Cambodia
1 Bangladesh
1 Argentina
If you send me a complete log, I can parse it and show a ranking of the countries accessing your site.
It’s not ideal, but you can block the offending countries. Or better: if your site is only accessed from one or just a few countries, you can allow access only from those countries.
The key to defending yourself without help is ensuring that no requests reach PHP-FPM.
The first line of defense should be swift: sending everything through Cloudflare blocks the attack immediately.
After that, you should test all the options described here and run tests outside the Cloudflare proxy.
It’s not an easy battle, but if you want to test it, it’s perfect.
The first thing I would do, if you don’t want to use CF, would be to add the Indonesian IP address list that Hestia provides to ipset and block ALL traffic from it.
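For what it’s worth, a rough sketch of loading a per-country list into ipset and dropping it (the ipdeny.com zone file is one public source of aggregated CIDRs — verify whatever list you use; Hestia’s own list would load the same way):

```shell
# Create a set of networks and drop all inbound traffic from it.
ipset create country_block hash:net
wget -qO- https://www.ipdeny.com/ipblocks/data/aggregated/id-aggregated.zone \
  | while read -r net; do ipset add country_block "$net"; done
iptables -I INPUT -m set --match-set country_block src -j DROP
```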
Hello everyone, I need another bit of help. I’m almost solving the issue, but I have a major limitation: I can’t switch my nameservers to fully use Cloudflare. To do that, I would need to gather documentation from all my clients, and unfortunately many of them passed away during the pandemic. It would be an almost impossible task.
The attacks I’m facing are focused on port 443 on the domain with WWW
. So I’ve done the following:
https://api.bunny.net/system/edgeserverlist/plain
The problem is that the domain without WWW remains unprotected, since it doesn’t go through the CDN. My idea was to block direct access to the non-WWW domain, allowing only Bunny’s IPs for origin validation. That part is easy with iptables, but then another issue appears: Let’s Encrypt wouldn’t be able to validate the certificate, since access would be blocked.
Question: is there a way to adjust this setup so that Let’s Encrypt can still validate and issue a certificate for the non-WWW domain, even with the block in place? Bunny already generates it for the WWW
.
Any suggestions are really appreciated. If everything works out, I’ll write a full tutorial to help the community, because this has been a huge headache!
This same idea you had with ‘Bunny’ also works with Cloudflare. And it’s exactly the same, except you have to add the CNAME for the name servers. ns1.your-domain.com CNAME ‘you.domain-in-CF.com’ But I’d bet I just gave you the solution to your Bunny problem too.
To do it right, I suggest you consult your preferred AI, it will guide you much more easily.
And I ask myself:
Wouldn’t a simple redirect in nginx from no www to www work?
Thanks for the reply!
About the nginx redirect: yes, I already have the redirect from non-www to www configured. The problem is that the redirect only works after the TCP/TLS connection is already established. So even with the redirect in place, the server still has to: (1) accept the TCP connection, and (2) complete the TLS handshake, before it can even send the redirect.
During a DDoS attack, the server gets overwhelmed at steps 1-2 before nginx even has a chance to redirect. The attacker doesn’t care about the HTTP response - they just want to exhaust server resources with connection floods.
That’s why I need to block port 443 entirely for the non-www domain at the firewall level (before it reaches nginx), allowing only Bunny’s IPs for origin validation. The challenge is: how do I let Let’s Encrypt validate the certificate if everything is blocked?
Even though you’ll get errors on my client’s site if you access it without www, since I won’t be able to redirect. lol, man, it’s tough.
Now I understand!
I don’t know how ‘Bunny’ works, but it must be very similar to CF.
In CF, once you have the proxy activated, you use CF’s certificates, but what’s more, the existing certificates on the server continue to be updated.
I repeat, I don’t know ‘Bunny’, but I think you’re thinking ahead without really understanding how the ‘Bunny’ proxy works.
I suggest you defend yourself against the attack now, and you’ll solve the next problem if it arises.
The attack won’t last forever; if you block it completely, they’ll forget about you.
Try to solve the www problem with the idea I gave you of using the CNAME on your name servers and not on the clients’ domains.
You’re right, and that’s exactly what I’m already doing with Bunny - using CNAME for www pointing to their CDN, without changing the client’s nameservers.
The www is now protected. My remaining concern was the apex domain (non-www), but you make a good point: I should focus on blocking the current attack first and deal with edge cases later.
Since the attack is on www:443 and that’s now behind Bunny’s CDN, I’ll monitor if the attack shifts to the apex. If it does, I’ll just block port 443 on the apex and keep only port 80 open for the redirect.
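A sketch of that firewall rule, assuming the edge list from Bunny’s URL above has been saved to a local file named edgeserverlist.txt (note iptables can’t see hostnames, so this gates all of port 443 — which is fine once www traffic arrives via the CDN anyway):

```shell
# Allow only CDN edge IPs on 443; drop everything else on that port.
ipset create cdn_edges hash:ip
while read -r ip; do ipset add cdn_edges "$ip"; done < edgeserverlist.txt
iptables -A INPUT -p tcp --dport 443 -m set --match-set cdn_edges src -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```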
Thanks for the perspective - sometimes we overcomplicate things trying to solve problems that haven’t happened yet! ![]()
I can’t see more options than issuing the certificate using DNS validation. Does your DNS server allow the use of a DNS API to manipulate records? Even if it doesn’t provide an API, you could delegate the authorization to another domain and DNS server that does provide one. Of course, I’m saying this just in case you want to automate the renewal, you can always issue a certificate using DNS validation and add the required records manually.
I always use acme.sh client to issue my certificates.
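For example (dns_cf is just one of acme.sh’s many DNS API plugins, and the token and domain below are placeholders; for a DNS server without an API there is a manual mode that prints the TXT records for you to add by hand):

```shell
# DNS-01 validation via a DNS provider API (Cloudflare plugin shown).
export CF_Token="your-api-token"        # placeholder credential
acme.sh --issue --dns dns_cf -d example.com -d www.example.com

# Manual mode: acme.sh prints the required TXT records; you create
# them yourself, then re-run with --renew to finish validation.
# acme.sh --issue --dns -d example.com \
#   --yes-I-know-dns-manual-mode-enough-go-ahead-please
```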
Hi, I have protection against HTTP/1.0 and HTTP/1.1 requests configured in fail2ban; this really helps. There are also exceptions in the configuration for the TLS certificates (Let’s Encrypt), the IP of the panel, and my PS. You can try it:
jail.local
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 213.168.6.236 185.26.120.177 acme-v02.api.letsencrypt.org acme-staging-v02.api.letsencrypt.org cert.int-x3.letsencrypt.org oak.ct.letsencrypt.org sapling.ct.letsencrypt.org stg-e1.o.lencr.org stg-e2.o.lencr.org stg-e5.o.lencr.org stg-e6.o.lencr.org stg-e7.o.lencr.org stg-e8.o.lencr.org stg-e9.o.lencr.org stg-r3.o.lencr.org stg-r4.o.lencr.org stg-r10.o.lencr.org stg-r11.o.lencr.org stg-r12.o.lencr.org stg-r13.o.lencr.org stg-r14.o.lencr.org e1.o.lencr.org e2.o.lencr.org e5.o.lencr.org e6.o.lencr.org e7.o.lencr.org e8.o.lencr.org e9.o.lencr.org e1.o.lencr.org e2.o.lencr.org e5.o.lencr.org e6.o.lencr.org e7.o.lencr.org e8.o.lencr.org e9.o.lencr.org r3.o.lencr.org r4.o.lencr.org r10.o.lencr.org r11.o.lencr.org r12.o.lencr.org r13.o.lencr.org r14.o.lencr.org 54.236.1.13 www.pinterest.com/bot.html
[http1]
enabled = true
port = http,https
filter = http1
logpath = /var/log/hestia/nginx-access.log
/var/log/hestia/nginx-error.log
/var/log/nginx/domains/webmail.example.com.log
/var/log/nginx/domains/quantumtransition.example.com.log
/var/log/nginx/domains/opensource.example.com.log
/var/log/nginx/nextcloud.example.com.log
maxretry = 3
#findtime = 23h
bantime = 6h
http1.conf
[Definition]
failregex = ^<HOST> -.*HTTP/1\.0.*$
^<HOST> -.*HTTP/1\.1.*$
ignoreregex =
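Before enabling the jail, it may be worth dry-running the filter against one of the real logs from the jail above to confirm the failregex actually matches:

```shell
# Dry-run the filter against a log; prints matched/missed line counts.
fail2ban-regex /var/log/hestia/nginx-access.log /etc/fail2ban/filter.d/http1.conf
```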
You may want to reduce the bantime to 15-30 minutes so that the processor doesn’t overheat :)
Also try this