Tags: AlbertDachiChen/FBGEMM
Clean up redundant AWS credentials (pytorch#2230) Summary: After pytorch/test-infra#4839, these credentials are no longer needed. Pull Request resolved: pytorch#2230 Reviewed By: spcyppt Differential Revision: D52297582 Pulled By: huydhn fbshipit-source-id: 3077901d9b23800c1c36d4fb28dfe0ad07955f73
Revert D51999938: Multisect successfully blamed "D51999938: Add early exit in sparse_async_cumsum ops" for one test failure (pytorch#2208) Summary: Pull Request resolved: pytorch#2208 This diff reverts D51999938 (Add early exit in sparse_async_cumsum ops, by meremeev), which has been identified as causing the following test failure: Tests affected: - [deeplearning/fbgemm/fbgemm_gpu:sparse_ops_test - test_schema__test_asynchronous_complete_cumsum_2d (deeplearning.fbgemm.fbgemm_gpu.test.sparse_ops_test.SparseOpsTest)](https://www.internalfb.com/intern/test/562950068047718/) Multisect link: https://www.internalfb.com/multisect/3757962 We're generating a revert to back out the changes in this diff; note that the backout may land if someone accepts it. If you believe this diff has been generated in error, you may Commandeer and Abandon it. Reviewed By: jasonjk-park Differential Revision: D52099677 fbshipit-source-id: 4e36745864148e5bb337465b0a9afcfe80846389
Fix jagged_test_index_select_2d that hangs in OSS and revert skip tests (pytorch#2036) Summary: Pull Request resolved: pytorch#2036 Before C++20, `std::atomic_flag` is initialized to an unspecified state, hence the loop `while (lock.test_and_set(std::memory_order_acquire))` may never be broken, causing the test to hang in OSS. This diff properly initializes the `std::atomic_flag`. Reviewed By: q10, sryap Differential Revision: D49528661 fbshipit-source-id: ba2213cb9bf8c0abbd1e169db03f0e32dd2a7ebb
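The initialization issue described in the commit above can be illustrated with a short, self-contained sketch. This is not the FBGEMM source: the spin lock, `critical_section`, and the thread count are hypothetical. It only assumes a pre-C++20 toolchain, where a default-constructed `std::atomic_flag` has an unspecified initial state, so the flag must be cleared explicitly with `ATOMIC_FLAG_INIT`.

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Pre-C++20, a default-constructed std::atomic_flag has an unspecified
// state, so a spin lock built on it must be explicitly initialized with
// ATOMIC_FLAG_INIT to guarantee it starts in the cleared state.
std::atomic_flag lock = ATOMIC_FLAG_INIT;

void critical_section(int& counter) {
  // Spin until the flag is acquired. If the flag happened to start "set",
  // no thread would ever get past this loop, which is the hang the diff fixes.
  while (lock.test_and_set(std::memory_order_acquire)) {
  }
  ++counter;                              // protected update
  lock.clear(std::memory_order_release);  // release the spin lock
}

int main() {
  int counter = 0;
  std::vector<std::thread> threads;
  for (int i = 0; i < 4; ++i) {
    threads.emplace_back(critical_section, std::ref(counter));
  }
  for (auto& t : threads) {
    t.join();
  }
  return counter == 4 ? 0 : 1;
}
```

Since C++20, the default constructor initializes the flag to the clear state, so the explicit `ATOMIC_FLAG_INIT` is only needed when building against older standards.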
[FBGEMM][v0.5.0-rc3] Fix using package version and missing bash path
Skip PooledEmbeddingModulesTest until FailedHealthCheck is fixed (pytorch#1999) Summary: Pull Request resolved: pytorch#1999 Hypothesis 6.83.2 and later introduce `HealthCheck.differing_executors`, which causes tests in `permute_pooled_embedding_test.py` to fail with the error: `The method PooledEmbeddingModulesTest.setUp was called from multiple different executors. This may lead to flaky tests and nonreproducible errors when replaying from database`. Currently, we use the latest version of Hypothesis on CI: https://github.com/pytorch/FBGEMM/actions/runs/6084855480/job/16515052387 The current Hypothesis version in FBCode is 6.70.1, which does not have `HealthCheck.differing_executors`. Reviewed By: shintaro-iwasaki Differential Revision: D49020046 fbshipit-source-id: 8ab1350411260c771baf05efe607f91c12df2385