Deployment: everything is launched via Docker Compose.
The ui and scheduler containers log via syslog; the bunkerweb container uses the Docker json-file driver with rotation.
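For reference, json-file rotation for the bunkerweb service can be set in the compose file roughly like this (the size and file-count values are illustrative, not taken from my actual compose file):

```yaml
services:
  bunkerweb:
    logging:
      driver: json-file
      options:
        max-size: "100m"   # rotate each log file at ~100 MB
        max-file: "5"      # keep at most 5 rotated files
```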
The Redis config looks like this:
maxmemory 4gb
maxmemory-policy allkeys-lru
save ""
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
auto-aof-rewrite-percentage 50
auto-aof-rewrite-min-size 2gb
timeout 0
tcp-keepalive 60
maxclients 10000
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes
no-appendfsync-on-rewrite yes
logfile "/data/redis.log"
dir /data
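One thing that stands out to me in my own config: with `auto-aof-rewrite-min-size 2gb`, the AOF is allowed to grow to 2 GB before the first rewrite, so any replay of it is expensive. A sketch of tighter (untested, illustrative) thresholds that would keep the file small:

```conf
# Hypothetical alternative AOF settings, not verified on this deployment:
# trigger a rewrite once the AOF doubles past a much smaller floor.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```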
It is also used by Redis and Vector (for generating metrics). My workload is fairly average for these projects: 24,568 incident reports, 5.7M total requests, and 10.8k blocked requests have accumulated in 2 days. If I do not open the UI, the state of the containers is as follows:
As soon as I try to log into the web UI, the ui container starts eating up memory, and the login itself takes a very long time. (I also had to disable syslog logging from the bunkerweb container; otherwise it consumed 40 GB in 2 days!)
The same thing happens when I page through the reports: each page change takes up to a minute, and memory usage jumps the whole time.
If I understand the logic correctly: all reports are written only to Redis, which persists them to the AOF just in case; when you log into the UI, that AOF is first replayed back into Redis, and then everything is loaded into the UI. That would explain why the ui container consumes around 8 GB of memory, and only once it has digested everything does it release the memory and let the login complete. Is there any way to fix this? Why, for example, can't Redis move reports older than 24 hours into a shared database? That should not affect logging efficiency at all.
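To illustrate the 24-hour idea, here is a minimal sketch of the partitioning logic: keep recent reports hot in Redis, move older ones to a database. The report representation (key plus Unix timestamp) is hypothetical; BunkerWeb's actual Redis schema may differ, and the real Redis/DB calls are left out.

```python
import time

# Reports older than this would be moved out of Redis (sketch only).
ARCHIVE_AFTER_SECONDS = 24 * 60 * 60


def split_reports(reports, now=None):
    """Partition reports into (keep_in_redis, move_to_archive).

    `reports` is an iterable of (key, unix_timestamp) pairs; the shape
    is an assumption for this example, not BunkerWeb's real schema.
    """
    now = time.time() if now is None else now
    cutoff = now - ARCHIVE_AFTER_SECONDS
    keep, archive = [], []
    for key, ts in reports:
        (keep if ts >= cutoff else archive).append(key)
    return keep, archive
```

A periodic job built on this split would keep the AOF small, so logging into the UI would only replay the last day of reports instead of everything.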
I wrote the text through a translator, I’m sorry)


