Consumes more and more memory (and almost never releases it when idle) #7

yaroslaff opened this issue Mar 17, 2023

Hello!

First of all, thanks for the nice project. I like it, especially the Docker version, and I'm using it in my Nude Crawler project.

Problem: when using the Docker image, it almost never releases memory and consumes more and more (until it runs out of memory).

How to reproduce the problem:

# start the container and see how much memory it uses (a tiny 11 MiB)
$ sudo docker run -d --rm -p 9191:9191 --name aid --memory=1G opendating/adult-image-detector
$ sudo docker stats --no-stream
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O       BLOCK I/O     PIDS
6b22f14d33a3   aid       0.00%     11.27MiB / 1GiB     1.10%     5.11kB / 0B   11.6MB / 0B   6

# Analyse the first file
$ curl -s -i -X POST -F "image=@/tmp/eropicture.jpg" http://localhost:9191/api/v1/detect > /dev/null
$ sudo docker stats --no-stream
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O         BLOCK I/O     PIDS
6b22f14d33a3   aid       0.00%     170MiB / 1GiB       16.60%    315kB / 1.9kB   35.9MB / 0B   10

# and two more
$ curl -s -i -X POST -F "image=@/tmp/eropicture.jpg" http://localhost:9191/api/v1/detect > /dev/null
$ curl -s -i -X POST -F "image=@/tmp/eropicture.jpg" http://localhost:9191/api/v1/detect > /dev/null

# Why does it need an extra 50 MiB after analysing two more images? Ok...
$ sudo docker stats --no-stream
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O     PIDS
6b22f14d33a3   aid       0.00%     221.7MiB / 1GiB     21.65%    932kB / 5.48kB   35.9MB / 0B   10

# Let's analyse 100 more images
$ for i in `seq 1 100`; do curl -s -i -X POST -F "image=@/tmp/eropicture.jpg" http://localhost:9191/api/v1/detect > /dev/null ; done

# now ~545 MiB is used for something...
$ sudo docker stats --no-stream
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O      PIDS
6b22f14d33a3   aid       0.00%     545.4MiB / 1GiB     53.26%    31.7MB / 174kB   36MB / 614kB   11

# Now it uses almost all of the available limit, sometimes 1022 MiB, sometimes dropping back to 976 MiB,
# but at around the 700th-800th curl request the container crashes.
# If run in the foreground, no error messages are displayed; the last messages are:
2023/03/17 18:34:21 For file 2023-03-17T18:34:21Z_fabede53-3817-47f8-a077-86d712d51602.jpg, openNsfwScore=0.834231
2023/03/17 18:34:21 For file 2023-03-17T18:34:21Z_fabede53-3817-47f8-a077-86d712d51602.jpg, anAlgorithmForNudityDetection=true
2023/03/17 18:34:21 Uploaded file eropicture.jpg, saved as 2023-03-17T18:34:21Z_3c3370b3-ede2-4847-ac04-9ba5957765b0.jpg

The test image is 299 KiB. I ran all tests against a Docker image pulled today.

If I set the RAM limit to 300 MiB, it reaches almost-full memory much sooner, at around the 20th-30th request, but keeps working at that memory usage up to roughly the 100th request (then crashes). When I start it without a memory limit, it runs much longer (I have 16 GB of RAM), but in the end the machine hits OOM anyway.

If I stop sending new requests, it keeps almost-full memory for a long time (I waited a few hours; memory usage stayed nearly the same after hours of idle time).
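For what it's worth: if the server is a Go binary (the log timestamps below look like Go's standard log package, but that is my assumption), freed heap is returned to the OS only gradually by the background scavenger, so some of the idle retention may be runtime behaviour rather than a leak; the unbounded growth across requests still looks like a leak, though. Assuming the binary honors the standard Go runtime environment variables, the GC can at least be made more aggressive from outside:

# Assumption: a Go binary. GOGC=20 triggers GC more often than the
# default GOGC=100, and GOMEMLIMIT (Go >= 1.19) sets a soft memory limit
# that the runtime tries to stay under.
$ sudo docker run -d --rm -p 9191:9191 --name aid --memory=1G \
    -e GOGC=20 -e GOMEMLIMIT=750MiB \
    opendating/adult-image-detector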

In theory, I could restart the Docker container after every N requests, but that looks ugly. Also, my script runs under an ordinary user account, and I would not like to give it root access, even to analyze nude women.

Maybe it's possible to tune the garbage collector or restart the process? Or maybe add an API call (which I would trigger after every N requests) that releases memory or restarts the program?
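If it helps, such a call could be a thin wrapper around runtime/debug.FreeOSMemory(), which forces a garbage collection and returns as much memory as possible to the OS. A minimal sketch, assuming the server is a plain Go net/http service; the /api/v1/free-memory route is hypothetical and not part of the current API:

// Hypothetical sketch only; the route name is made up.
package main

import (
	"log"
	"net/http"
	"runtime/debug"
)

func main() {
	// Force a GC cycle and ask the runtime to return freed memory
	// to the operating system immediately.
	http.HandleFunc("/api/v1/free-memory", func(w http.ResponseWriter, r *http.Request) {
		debug.FreeOSMemory()
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":9191", nil))
}

Then my script could hit it after every N requests, e.g. curl -X POST http://localhost:9191/api/v1/free-memory.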
