
How to use multiple GPUs #48

Open
x2584179909 opened this issue Mar 14, 2025 · 2 comments

Comments

@x2584179909

Error:
srts = asr_task(wavs, asr_type=model)
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.85 GiB. GPU 0 has a total capacity of 11.90 GiB of which 989.88 MiB is free. Process 1535834 has 3.21 GiB memory in use. Process 1536740 has 1.72 GiB memory in use. Including non-PyTorch memory, this process has 5.99 GiB memory in use. Of the allocated memory 5.56 GiB is allocated by PyTorch, and 272.01 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

My machine has 10 GPUs. How should I configure things so that inference uses multiple GPUs?

Fri Mar 14 10:19:23 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120 Driver Version: 550.120 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA TITAN Xp Off | 00000000:04:00.0 Off | N/A |
| 28% 53C P2 233W / 250W | 11619MiB / 12288MiB | 91% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA TITAN Xp Off | 00000000:05:00.0 Off | N/A |
| 23% 19C P8 8W / 250W | 4909MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA TITAN Xp Off | 00000000:06:00.0 Off | N/A |
| 23% 19C P8 8W / 250W | 4873MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA TITAN Xp Off | 00000000:07:00.0 Off | N/A |
| 23% 21C P8 8W / 250W | 4873MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA TITAN Xp Off | 00000000:08:00.0 Off | N/A |
| 23% 20C P8 8W / 250W | 4515MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA TITAN Xp Off | 00000000:0B:00.0 Off | N/A |
| 23% 17C P8 8W / 250W | 4307MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA TITAN Xp Off | 00000000:0C:00.0 Off | N/A |
| 23% 19C P8 8W / 250W | 4579MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA TITAN Xp Off | 00000000:0D:00.0 Off | N/A |
| 23% 19C P8 8W / 250W | 4579MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 8 NVIDIA TITAN Xp Off | 00000000:0E:00.0 Off | N/A |
| 23% 19C P8 7W / 250W | 4015MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 9 NVIDIA TITAN Xp Off | 00000000:0F:00.0 Off | N/A |
| 23% 18C P8 8W / 250W | 5003MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
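A common workaround, independent of this repo, is to run one inference process per GPU and shard the wav list across those processes (plain data parallelism: each GPU holds a full copy of the model, so this only helps if one copy fits on a single 12 GiB card). The sketch below is a minimal example of that pattern; load_model, asr_task, and my_asr_repo are hypothetical stand-ins for the actual inference API seen in the traceback and would need to be adapted to the real code.

# Sketch: one worker process per GPU, each seeing exactly one card.
# `my_asr_repo`, `load_model`, and `asr_task` are placeholders for this
# repo's real inference API; adapt them to the actual code.
import os
import multiprocessing as mp


def worker(gpu_id, wav_chunk, results):
    # Pin this process to a single GPU *before* importing torch, so that
    # "cuda:0" inside this process maps to physical GPU `gpu_id`.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    import torch  # noqa: F401  (imported only after the env var is set)

    from my_asr_repo import load_model, asr_task  # hypothetical imports

    model = load_model(device="cuda:0")
    results[gpu_id] = asr_task(wav_chunk, asr_type=model)


if __name__ == "__main__":
    wavs = [...]  # the full list of wav files
    num_gpus = 10
    chunks = [wavs[i::num_gpus] for i in range(num_gpus)]  # round-robin split

    mp.set_start_method("spawn")  # required for CUDA in child processes
    with mp.Manager() as manager:
        results = manager.dict()
        procs = [mp.Process(target=worker, args=(i, chunks[i], results))
                 for i in range(num_gpus)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        srts = [r for i in range(num_gpus) for r in results[i]]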

@BrianApple

Same question here. A single card with 16 GB of VRAM is not enough, but even after setting CUDA_VISIBLE_DEVICES=0,1 only one card is used and it still runs out of memory.
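Note that CUDA_VISIBLE_DEVICES=0,1 only controls which GPUs the process can see; a single model still lives entirely on one card unless its submodules are explicitly placed on different devices (model parallelism). A minimal PyTorch sketch of that idea follows, using a toy two-layer module rather than the real ASR model, which would need its own submodules placed in the same way:

# Minimal sketch of splitting one model across two visible GPUs.
# The two-part module here is a toy stand-in, not the repo's ASR model.
import torch
import torch.nn as nn


class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(1024, 1024).to("cuda:0")  # first half on GPU 0
        self.part2 = nn.Linear(1024, 1024).to("cuda:1")  # second half on GPU 1

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        x = self.part2(x.to("cuda:1"))  # move activations between cards
        return x


model = TwoGPUModel()
out = model(torch.randn(8, 1024))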

@csukuangfj

Is it the case that the released inference code, without modification, only supports a single GPU and not multiple GPUs?
