Migrate slow self-hosted jobs to Amazon2023 AMI (pytorch#131771)
A continuation of the migration started in
- pytorch#131250

(for tracking: signal on Aug 6: https://hud.pytorch.org/pytorch/pytorch/pull/131771?sha=38bc4755567527fad5279203ddef534ac132ea94)
Pull Request resolved: pytorch#131771
Approved by: https://github.com/seemethere
ZainRizvi authored and pytorchmergebot committed Aug 8, 2024
1 parent 75eb66a · commit ac95b2a
Showing 1 changed file with 14 additions and 10 deletions.
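
The change is mechanical: every self-hosted runner label, both the build job's `runner` input and the per-shard `runner` entries inside `test-matrix`, gains an `amz2023.` prefix, which routes the job onto the Amazon Linux 2023 AMI runner pools. A minimal before/after sketch of the pattern (job shape trimmed for illustration; that the unprefixed labels map to the older Amazon Linux 2 fleet is an assumption, not stated in the commit):

    # Before: unprefixed labels (presumably the older Amazon Linux 2 pools)
    runner: "linux.2xlarge"
    test-matrix: |
      { include: [
        { config: "slow", shard: 1, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
      ]}

    # After: the amz2023. prefix selects the Amazon Linux 2023 AMI pools
    runner: "amz2023.linux.2xlarge"
    test-matrix: |
      { include: [
        { config: "slow", shard: 1, num_shards: 2, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
      ]}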
.github/workflows/slow.yml: 14 additions, 10 deletions
@@ -49,17 +49,18 @@ jobs:
     name: linux-focal-cuda12.1-py3-gcc9-slow-gradcheck
     uses: ./.github/workflows/_linux-build.yml
     with:
+      runner: "amz2023.linux.2xlarge"
       build-environment: linux-focal-cuda12.1-py3-gcc9-slow-gradcheck
       docker-image-name: pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9
       cuda-arch-list: 8.6
       test-matrix: |
         { include: [
-          { config: "default", shard: 1, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "default", shard: 2, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "default", shard: 3, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "default", shard: 4, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "default", shard: 5, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "default", shard: 6, num_shards: 6, runner: "linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 1, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 2, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 3, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 4, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 5, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "default", shard: 6, num_shards: 6, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
         ]}
 
   linux-focal-cuda12_1-py3-gcc9-slow-gradcheck-test:
@@ -78,13 +79,14 @@ jobs:
     name: linux-focal-cuda12.1-py3.10-gcc9-sm86
     uses: ./.github/workflows/_linux-build.yml
     with:
+      runner: "amz2023.linux.2xlarge"
       build-environment: linux-focal-cuda12.1-py3.10-gcc9-sm86
       docker-image-name: pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9
       cuda-arch-list: 8.6
       test-matrix: |
         { include: [
-          { config: "slow", shard: 1, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
-          { config: "slow", shard: 2, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
+          { config: "slow", shard: 1, num_shards: 2, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
+          { config: "slow", shard: 2, num_shards: 2, runner: "amz2023.linux.g5.4xlarge.nvidia.gpu" },
         ]}
 
   linux-focal-cuda12_1-py3_10-gcc9-sm86-test:
@@ -102,12 +104,13 @@ jobs:
     name: linux-focal-py3.8-clang10
     uses: ./.github/workflows/_linux-build.yml
     with:
+      runner: "amz2023.linux.2xlarge"
       build-environment: linux-focal-py3.8-clang10
       docker-image-name: pytorch-linux-focal-py3.8-clang10
       test-matrix: |
         { include: [
-          { config: "slow", shard: 1, num_shards: 2, runner: "linux.2xlarge" },
-          { config: "slow", shard: 2, num_shards: 2, runner: "linux.2xlarge" },
+          { config: "slow", shard: 1, num_shards: 2, runner: "amz2023.linux.2xlarge" },
+          { config: "slow", shard: 2, num_shards: 2, runner: "amz2023.linux.2xlarge" },
         ]}
 
   linux-focal-py3_8-clang10-test:
@@ -125,6 +128,7 @@ jobs:
     name: linux-focal-rocm6.1-py3.8
     uses: ./.github/workflows/_linux-build.yml
     with:
+      runner: "amz2023.linux.2xlarge"
       build-environment: linux-focal-rocm6.1-py3.8
       docker-image-name: pytorch-linux-focal-rocm-n-py3
       test-matrix: |
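
For context on the receiving side: each of these jobs calls the reusable `_linux-build.yml` workflow, which must declare a `runner` input for the new `with:` key to take effect. A hypothetical sketch of that declaration (the input name comes from this diff; the default value and job body are assumptions, not the actual PyTorch workflow):

    # Hypothetical sketch of the callee, not the actual _linux-build.yml
    on:
      workflow_call:
        inputs:
          runner:
            type: string
            required: false
            default: "linux.2xlarge"  # assumed fallback for callers not yet migrated

    jobs:
      build:
        # The caller-supplied label (e.g. "amz2023.linux.2xlarge") selects the runner pool
        runs-on: ${{ inputs.runner }}
        steps:
          - uses: actions/checkout@v4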
