RAM issues - since upgrade?

Hi,

I'm not sure if this is the cause, but since the 24th of March my backup software has stopped working on my server. The server is pretty simple:

Ubuntu 20.04
nginx + apache
mariadb

It was working fine, and the backup process (using UrBackup, server edition) worked perfectly. Now I get an OOM error when the backup process runs. It processes about 150 GB of data from the host (client) server and then copies over changed files. This keeps falling over at 50-60%, and the kernel kills off the process:

20210328T132040: [105427.526305] Out of memory: Kill process 11062 (urbackupsrv) score 742 or sacrifice child
20210328T132040: [105427.528213] Killed process 11062 (urbackupsrv) total-vm:2922292kB, anon-rss:1511560kB, file-rss:0kB, shmem-rss:0kB
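For reference, those lines come from the kernel log. If anyone wants to check for similar OOM kills on their own box, something like this should surface them (the grep patterns match typical kernel messages, but the exact wording varies by kernel version):

```shell
# Show OOM-killer activity from the kernel ring buffer, with human-readable timestamps
dmesg -T | grep -i "out of memory"

# Or query the systemd journal for kernel messages about killed processes
# (survives reboots if persistent journaling is enabled)
journalctl -k | grep -i "killed process"
```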

The only thing I can see that changed is the update to Hestia v1.3.5. Could there be something in there that is now causing more RAM usage?

Here is htop output from the server while it's running:

I’m running out of ideas as to what else to try :confused:

TIA

Andy

The last few updates were security only, no big changes.

You need to check what urbackupsrv is - it isn't Hestia related at all.

Hi,

Hmm, ok. It was working fine (and had been for months). The only thing that updated was Hestia. I know what urbackupsrv is ( https://www.urbackup.org/ , server edition). But it was running fine, and suddenly it's giving OOM errors and being killed off :frowning:

My only other option, I guess, is to go to the next server level up, which has 4 GB RAM (instead of 2), but I have a feeling even that will hit this memory issue after a bit :confused:

Cheers

Andy

As written: there were no changes which could be related to the higher RAM usage. I just see urbackup a lot of times in your list - so probably it's the source of your issue.


Ok, thanks. I'll dig a bit more. fail2ban also seems to be using a lot of resources too (I'm seeing about 15 processes, using a fair bit of RAM).

Fail2ban does use up a lot of memory; try running a backup with fail2ban stopped.
Also, you might want to add a 2G swapfile.
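In case it helps, the usual steps to add a 2G swapfile on Ubuntu look roughly like this (run as root; `/swapfile` is just the conventional path):

```shell
# Allocate a 2 GiB file (fallocate works on ext4; use dd if your filesystem doesn't support it)
fallocate -l 2G /swapfile
chmod 600 /swapfile            # swap files must not be readable by other users
mkswap /swapfile               # format the file as swap space
swapon /swapfile               # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # make it permanent across reboots
free -h                        # verify the new swap shows up
```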


urbackup is not necessarily easy on RAM usage, as it hashes a lot to deduplicate and such. The usage can simply increase over time, as your backup storage and hash store grow.

fail2ban can also become a memory hog. While it uses caching and a database, you still might want to clear those from time to time to slim it down a bit.

however and much more important:

this. no swap = no breathing room → OOM killer. easy as that.


Thanks. I've added a 2G swap file now. Let's see if that helps :slight_smile: It got to 95% and then died - how frustrating :confused:

Now the annoying part is that I need to run a full cleanup of urbackup: since it died, the backup is incomplete (so there are 140 GB worth of "dud" files that are using up space but not linked).

Will post on here again once I’ve run it :sunglasses:

I had issues with fail2ban eating my memory. The solution was that I recreated the jail.local file, then added the rules back one by one to find the one causing the mess.


Yup, I expect that wasn't helping. Adding a 2 GB swap file has helped, and it runs through now. The only annoying part is that it doesn't clear the swap once the urbackup process has finished. I'm having to set a cron job to clear it manually, which isn't ideal:

swapoff -a && swapon -a
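The cron entry is just something like this (the path and timing are arbitrary examples; note that `swapoff` has to pull all swapped pages back into RAM, so it can fail with "Cannot allocate memory" on a tight box):

```shell
# /etc/cron.d/clear-swap (example path) - disable and re-enable swap nightly at 04:00
0 4 * * * root swapoff -a && swapon -a
```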

I thought the whole point of a swap file is that it clears once the process has finished / isn't needed any more?

Swap will only get overwritten when it is not needed any more… but until then, the pages will remain there…


if and how to use swap is a pretty religious topic :wink:

on a high level: IF something that got swapped out is really NEEDED again, the system will swap it back in. if it does not get moved back to the memory after urbackup has finished, that simply means the pages in swap are not really important. you might want to leave them there to be able to use your memory for filesystem caching which might be more beneficial…

people tend to think having stuff in swap is a bad thing. it is not :man_shrugging:t2:

if you want to reclaim memory which has been used heavily for cache while urbackup ran, you might want to set vm.zone_reclaim_mode to 1 via sysctl. eventually the system then will swap back in a bit more.
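For reference, setting that temporarily and persistently would look something like this (the drop-in filename is just an example; also worth noting that zone_reclaim_mode mainly affects NUMA systems, so on a small single-node VPS the effect may be limited):

```shell
# Apply immediately (lost on reboot)
sysctl -w vm.zone_reclaim_mode=1

# Check the current value
cat /proc/sys/vm/zone_reclaim_mode

# Persist across reboots via a sysctl drop-in file
echo 'vm.zone_reclaim_mode = 1' > /etc/sysctl.d/99-reclaim.conf
```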


Ah interesting. I always thought that once the swap was full, it was full until you rebooted (or it got cleared).

I have tweaked zone_reclaim_mode, so we'll see how that does :slight_smile: Ideally, once urbackup's process has finished, I'd like it to clean up the swap file :slight_smile:

just don't expect too much, as that is essentially not how swap works :wink:

while you 'like to have a clean swap', your system might see this differently and be happy to have old dirty memory pages, with a high likelihood of never being needed again, out of the way for better performance.

think of a small apartment as your memory and the things in it as the stored pages. wouldn't you say it sometimes could help if you had a basement as swap storage, where you can put old things that you are not likely to need anytime soon?
and if you decide to put stuff there to make room for other things - how likely is it that you pull it all out again the exact moment you get some free space? just for the sake of having a cleaner basement… :wink:

of course that analogy misses a lot of the logic around swap, but I think it explains neatly why 'wanting to have no swap usage' does not make sense at all. it's a cosmetic preference that just makes you think all is clean.

for zone_reclaim, let me remark that this does not influence swap directly. zone_reclaim only makes the system reclaim memory that has been used for file caching faster. if that happens while urbackup is running, it might help avoid swapping a bit, and it might also free up memory faster after the whole thing is done. however, especially the latter does not automatically mean the system decides to get old sh*t back from the basement…

as said earlier - that's probably a kind of religious topic, and this is all just my view on it; anyone be my guest to see it differently. however, I strongly suggest letting the system have its way and not worrying about swap being used. if at all, try to carefully tune it a bit, depending on your underlying storage (hdd/ssd/etc), to help with the balance.


Thanks. What I have also done is tell urbackup to ignore a certain folder that has a lot of (unimportant) files in it, so that should also bring the time and resources down. It's still running the latest full backup though (from yesterday), so I'm not sure why it's taking so long now :frowning:

Anyway - let's see how it goes. Otherwise we may just have to bite the bullet and upgrade the server to the next level up, which has double the RAM.

I think throwing more RAM at it is the reasonable way to go. I use urbackup myself (but with only one client/system to be backed up) and agree that handling millions of files can be a bit tricky, or at least resource hungry.

That it now takes longer might be related to the (now available) use of swap. It does not die anymore because swap is available, but if RAM is needed for other tasks as well, it will be swapping in and out.

Or using reclaim mode could now have a negative impact: urbackup might rely on the caching, while the system tries to reclaim the memory used there too early, and this then slows urbackup down remarkably.

As said above, tuning swap is about balance and is by no means an easy task - if you have the chance to go for real RAM, I can only say: do it! :wink:
