GHA: Remove caffe2 check in Windows shard 1 smoke tests (pytorch#70010)
Summary:
Windows shard 1 hasn't actually been running any tests: the script that runs them exited before reaching the Python tests but did not report an error. This has been happening to all Windows test jobs across the board, for example https://github.com/pytorch/pytorch/runs/4526170542?check_suite_focus=true

Removing the caffe2.python check lets the smoke tests pass. You can observe that run_test.py is now invoked in the Windows CPU job: https://github.com/pytorch/pytorch/runs/4541331717?check_suite_focus=true

Pull Request resolved: pytorch#70010

Reviewed By: malfet, seemethere

Differential Revision: D33161291

Pulled By: janeyx99

fbshipit-source-id: 85024b0ebb3ac42297684467ee4d0898ecf394de
janeyx99 authored and facebook-github-bot committed Dec 21, 2021
1 parent e6d9bb8 commit c555b7b
Showing 3 changed files with 4 additions and 6 deletions.
4 changes: 0 additions & 4 deletions .jenkins/pytorch/win-test-helpers/run_python_nn_smoketests.py
@@ -8,10 +8,6 @@
         "Checking that torch is available",
         "import torch",
     ),
-    (
-        "Checking that caffe2.python is available",
-        "from caffe2.python import core",
-    ),
     (
         "Checking that MKL is available",
         "import torch; exit(0 if torch.backends.mkl.is_available() else 1)",
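The removed entry is one of a list of (description, snippet) pairs in run_python_nn_smoketests.py. A minimal harness in that style might run each snippet in a fresh interpreter and collect failures; this is a sketch under assumptions (the function name `run_smoke_tests` and the exact control flow are illustrative, not the actual script's logic):

```python
import subprocess
import sys

# Hypothetical (description, snippet) pairs mirroring the smoke-test list;
# the caffe2.python entry is the one this commit removes.
SMOKE_TESTS = [
    ("Checking that torch is available", "import torch"),
    ("Checking that MKL is available",
     "import torch; exit(0 if torch.backends.mkl.is_available() else 1)"),
]

def run_smoke_tests(tests):
    """Run each snippet in a fresh interpreter; return descriptions of failures."""
    failures = []
    for description, snippet in tests:
        print(description)
        result = subprocess.run([sys.executable, "-c", snippet])
        if result.returncode != 0:
            failures.append(description)
    return failures
```

The key property (and the one that was broken before this fix) is that a failing snippet must surface as a nonzero exit rather than being silently swallowed.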
3 changes: 2 additions & 1 deletion test/test_cuda.py
@@ -1639,7 +1639,8 @@ def test_norm_type_conversion(self):
     def test_mem_get_info(self):
         def _test(idx):
             before_free_bytes, before_available_bytes = torch.cuda.mem_get_info(idx)
-            t = torch.randn(1024 * 1024, device='cuda:' + str(idx))
+            # increasing to 8MB to force acquiring a new block and overcome blocksize differences across platforms
+            t = torch.randn(1024 * 1024 * 8, device='cuda:' + str(idx))
             after_free_bytes, after_available_bytes = torch.cuda.mem_get_info(idx)
 
             self.assertTrue(after_free_bytes < before_free_bytes)
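For context on the size bump: the diff comment says "8MB", meaning 8M elements, but torch.randn produces float32 by default, so the tensor actually occupies 32 MiB — comfortably larger than typical caching-allocator block sizes, which is the point of the change (the allocator must request new device memory, so mem_get_info visibly changes). The arithmetic, as an illustrative check rather than part of the test:

```python
# Size of the tensor the updated test allocates:
# 1024 * 1024 * 8 elements, 4 bytes each (torch.randn defaults to float32).
num_elements = 1024 * 1024 * 8
bytes_allocated = num_elements * 4
mib = bytes_allocated / (1024 * 1024)
print(num_elements, bytes_allocated, mib)
```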
3 changes: 2 additions & 1 deletion test/test_linalg.py
@@ -3357,7 +3357,7 @@ def run_test_singular_input(batch_dim, n):
     @skipCPUIfNoLapack
     @onlyNativeDeviceTypes  # TODO: XLA doesn't raise exception
     @skipCUDAIfRocm
-    @skipCUDAVersionIn([(11, 3)])  # https://github.com/pytorch/pytorch/issues/57482
+    @skipCUDAVersionIn([(11, 3), (11, 5)])  # https://github.com/pytorch/pytorch/issues/57482
     @dtypes(*floating_and_complex_types())
     def test_inverse_errors_large(self, device, dtype):
         # Test batched inverse of singular matrices reports errors without crashing (gh-51930)
@@ -4947,6 +4947,7 @@ def test_linalg_solve_triangular(self, device, dtype):
     @onlyCUDA
     @skipCUDAIfNoMagma  # Magma needed for the PLU decomposition
     @skipCUDAIfRocm  # There is a memory access bug in rocBLAS in the (non-batched) solve_triangular
+    @skipCUDAVersionIn([(11, 3), (11, 5)])  # Tracked in https://github.com/pytorch/pytorch/issues/70111
     @dtypes(*floating_and_complex_types())
     @precisionOverride({torch.float32: 1e-2, torch.complex64: 1e-2,
                         torch.float64: 1e-8, torch.complex128: 1e-8})
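skipCUDAVersionIn takes a list of (major, minor) CUDA versions on which the decorated test should be skipped instead of run into a known failure. A simplified stand-alone sketch of the idea — the name `skip_if_version_in` and the logic here are illustrative, not PyTorch's actual implementation:

```python
import functools
import unittest

def skip_if_version_in(current_version, blocked_versions):
    """Skip the decorated test when current_version, a (major, minor)
    tuple, appears in blocked_versions (cf. PyTorch's skipCUDAVersionIn,
    which reads the detected CUDA version itself)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if current_version in blocked_versions:
                raise unittest.SkipTest(
                    f"skipped on version {current_version}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

With CUDA 11.3 or 11.5 detected, a test decorated with skipCUDAVersionIn([(11, 3), (11, 5)]) reports as skipped rather than failing on the toolkit bugs tracked in the linked issues.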
