
Unraid + Transcoding/Hardware acceleration #28

Open
sigh-straits opened this issue Jan 4, 2024 · 5 comments

Comments

@sigh-straits

First of all, thanks for the dockerized version of the Stremio Service.
I was able to follow the documentation and run it on Unraid (if you're not familiar, it's a Linux-based OS for self-hosting servers and NAS). I managed to install some addons and can currently proxy streams using Stremio Web, so the main functionality is working.

I saw that jellyfin-ffmpeg is being used, and on Unraid I also have a Jellyfin server docker running with GPU transcoding capabilities (in my specific case, an NVIDIA GPU with nvenc). I tried the same docker configuration as described there for stremio-server, but to no avail. I guess some additional configuration is necessary, even though a similar ffmpeg version is available.

Since I'm not that knowledgeable about ffmpeg and video decoding/transcoding, or about how the Stremio Service works, I'm reaching out for extra help. When testing, only the CPU is being used; at least, I see persistent spikes in CPU usage while watching content. Is there a way to add GPU hardware acceleration/transcoding, or is this already available and I missed it?

Even if not specifically for Unraid, has anyone been able to get this working on Linux? Any help or feedback is welcome.
Thanks.

@jaruba
Member

jaruba commented Jan 4, 2024

I'm uncertain how much I'll be able to help, as we never tested the case of nvenc in a Docker container.

The server itself does profiling to figure out which hwaccel methods are supported on the system.

As far as I understand, the issue with using hwaccel in Docker is that the container does not normally have access to the GPU, so I expect the problem is that our Dockerfile does not grant the required GPU access.

There is a community PR with this minor change: https://github.com/Stremio/server-docker/pull/24/files

I believe it might be specifically about granting the access required for the nvenc hwaccel method, but we haven't tested it yet. If you could test it and confirm that the change (including the steps you followed from the Jellyfin docs) works in your case, then we could potentially merge it.
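For anyone wanting to try this, here is a minimal sketch of how the container could be granted GPU access on an NVIDIA system. It assumes the NVIDIA Container Toolkit is installed on the host; the flag values are illustrative, not taken from this repo's documentation:

```shell
# Run the server image with GPU access (--gpus all requires the
# NVIDIA Container Toolkit on the host).
docker run -d --name stremio-server \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -p 11470:11470 \
  stremio/server:latest

# Verify the GPU is visible inside the container:
docker exec stremio-server nvidia-smi
```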

@sigh-straits
Author

sigh-straits commented Jan 8, 2024

@jaruba Thanks for the input

I took a look at the community PR you mentioned, as well as the NVIDIA documentation for containers and at other containers in which I have successfully enabled hardware transcoding.

So, the change in the PR enables GPU capabilities for video purposes. I tried both NVIDIA_DRIVER_CAPABILITIES=video and NVIDIA_DRIVER_CAPABILITIES=all (the latter works in other containers), and I also set my GPU id as a visible device (NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx, or NVIDIA_VISIBLE_DEVICES=all to accept any).
With this, the NVIDIA runtime is available in the container: running nvidia-smi shows my GPU information.

When I tried to play some content (the Sintel magnet URL + Torrentio), the movie played without issues, but without hardware acceleration, still relying on the CPU.


As a different test, I accessed the container via bash and ran this ffmpeg command (taken from a transcoding tool I use that also does NVENC GPU transcoding through ffmpeg):
/usr/lib/jellyfin-ffmpeg/ffmpeg -c:v h264_cuvid -i "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" -map 0 -c:v hevc_nvenc -cq:v 19 -b:v 1491k -minrate 1043k -maxrate 1938k -bufsize 2983k -spatial_aq:v 1 -rc-lookahead:v 32 -c:a copy -c:s copy -max_muxing_queue_size 9999 -map -0:d output.mp4

Using this command inside the docker container, I was successfully able to use hardware transcoding with my NVIDIA GPU.
[Screenshot from 2024-01-08 14:11 of nvidia-smi during the transcode]
The process list appears empty, but GPU utilization was consistently at 33% and the GPU process was visible in my Unraid dashboard.


Therefore, giving the dockerized container GPU capabilities is possible, and jellyfin-ffmpeg can use nvenc as expected. Could the problem then be in the Stremio Service implementation? Is there something that could be configured to force ffmpeg to use the nvenc profile, or something similar?

Hope this helps, if more information needed, please let me know.
Thanks.
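One quick hedged check that may narrow this down: confirm that the jellyfin-ffmpeg build inside the container actually exposes the NVENC encoders and hardware acceleration methods (binary path from this thread; exact output varies by build):

```shell
# List encoders and filter for NVENC (a CUDA-enabled build should show
# h264_nvenc, hevc_nvenc, ...)
/usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc

# List the hardware acceleration methods compiled into this build
/usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -hwaccels
```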

@Owen000

Owen000 commented Jul 15, 2024

The Stremio server needs to be open source. I'm working on fixing this, but it would be so much easier if it were open source.

@Owen000

Owen000 commented Aug 4, 2024

After some debugging I was able to track down what's causing the issue in my setup:

Impossible to convert between the formats supported by the filter 'graph -1 input from stream 0:0' and the filter 'auto_scale_0'
[vf#0:0 @ 0x55a3482e2140] Error reinitializing filters!
[vf#0:0 @ 0x55a3482e2140] Task finished with error code: -38 (Function not implemented)
[vf#0:0 @ 0x55a3482e2140] Terminating thread with return code -38 (Function not implemented)
[vost#0:0/hevc_nvenc @ 0x55a3482d3b00] Could not open encoder before EOF
[vost#0:0/hevc_nvenc @ 0x55a3482d3b00] Task finished with error code: -22 (Invalid argument)
[vost#0:0/hevc_nvenc @ 0x55a3482d3b00] Terminating thread with return code -22 (Invalid argument)
[out#0/matroska @ 0x55a3482d4e40] Nothing was written into output file, because at least one of its streams received no packets.

(This is using the same ffmpeg command the server runs to verify that hw accel is working.)
I've looked at fixes and none seem to work for me.
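For context, this class of "Impossible to convert between the formats" error usually means decoded frames sit in GPU memory (e.g. CUDA surfaces) while an auto-inserted software filter such as the scaler expects system-memory frames. A hedged sketch of a chain that keeps decode, scale, and encode on the GPU (the scale_cuda filter and the 1280 width are illustrative, not taken from the server's actual command):

```shell
# Keep frames on the GPU end-to-end: CUDA decode, CUDA scale, NVENC encode.
# If a software filter lands between CUDA frames and the encoder, ffmpeg
# fails with the format-conversion error quoted above.
/usr/lib/jellyfin-ffmpeg/ffmpeg \
  -hwaccel cuda -hwaccel_output_format cuda \
  -i input.mkv \
  -vf scale_cuda=1280:-2 \
  -c:v hevc_nvenc -c:a copy \
  output.mkv
```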

@xeroxmalf

I got it working by using the newer NVIDIA Container Toolkit and this compose config:

stremio:
    container_name: stremio
    image: stremio/server:latest
    restart: unless-stopped
    pull_policy: always
    runtime: nvidia
    network_mode: host
    volumes:
      - /docker-configs/stremio:/root/.stremio-server
    environment:
      - NO_CORS=1
      - CASTING_DISABLED=1
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
      - FFMPEG_BIN=/usr/lib/jellyfin-ffmpeg/ffmpeg
      - FFPROBE_BIN=/usr/lib/jellyfin-ffmpeg/ffprobe
      - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jellyfin-ffmpeg
The server log then shows the hw accel profiling tests passing:

hls-converter - Testing video hw accel for profile: nvenc-linux
-> GET /hlsv2/11470-nvenc-linux-video-hevc.mkv/video0.m3u8?mediaURL=http%3A%2F%2F192.168.250.125%3A11470%2Fsamples%2Fhevc.mkv&profile=nvenc-linux&maxWidth=1200
-> GET /samples/hevc.mkv
-> GET /samples/hevc.mkv bytes=0-
-> OPTIONS /get-https?authKey=snipped&ipAddress=192.168.250.125
-> GET /get-https?authKey=snipped&ipAddress=192.168.250.125
-> GET /hlsv2/11470-nvenc-linux-video-hevc.mkv/destroy
hls-converter 11470-nvenc-linux-video-hevc.mkv has been requested to be destroyed
hls-converter 11470-nvenc-linux-video-hevc.mkv destoyed
hls-converter - Tests passed for [video] hw accel profile: nvenc-linux
hls-converter - All tests passed for hw accel profile: nvenc-linux
And nvidia-smi inside the container confirms the runtime is wired up:

docker exec -it stremio nvidia-smi
Fri Dec 13 05:21:57 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1060 6GB    On  |   00000000:08:00.0 Off |                  N/A |
|  0%   32C    P8              9W /  150W |       3MiB /   6144MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
