Tags: pauperonway/pytorch
Fix version handler in 1.1.0 docs. (pytorch#19977) Update the find & replace to be less restrictive. Will port this change to master to avoid problems in the future.
Remove unnecessary typing dependency. (pytorch#16776) Signed-off-by: Edward Z. Yang <[email protected]>
Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_i… …d"" (pytorch#12232) Summary: Pull Request resolved: pytorch#12232 Original commit changeset: fca91fea58b7 This adds proper modifications to the DeviceType <->DeviceOption conversion code added in D10033396 Reviewed By: jerryzh168 Differential Revision: D10132473 fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b
Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_i… …d"" (pytorch#12232) Summary: Pull Request resolved: pytorch#12232 Original commit changeset: fca91fea58b7 This adds proper modifications to the DeviceType <->DeviceOption conversion code added in D10033396 Reviewed By: jerryzh168 Differential Revision: D10132473 fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b
Scopes 0.3.1 backport (pytorch#5153)
* Introduce scopes during tracing (pytorch#3016)
* Fix segfault during ONNX export
* Further fix to tracing scope (pytorch#4558)
* Set missing temporary scope in callPySymbolicMethod
* Use expected traces in all scope tests
* Fix tracking of tracing scopes during ONNX pass (pytorch#4524)
  * Fix tracking of tracing scopes during ONNX pass
  * Use ResourceGuard to manage setting a temporary current scope in Graph
  * Add tests for ONNX pass scopes
  * Remove unused num_classes argument
* Expose node scopeName to python (pytorch#4200)
* Inherit JIT scopes when cloning only when it's correct
  It's correct only when the new graph owns the same scope tree as the original one. We can end up with dangling pointers otherwise.
* Fixes after cherry-picking, still one test to go
* Fix for last failing test after scope cherry-pick
* Fix linting issue
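The pytorch#4524 item above mentions using a ResourceGuard to set a temporary current scope on the graph and restore the outer scope when the pass finishes. Below is a minimal Python sketch of that guard idea only; the `Graph` class and `scope_guard` helper are hypothetical stand-ins for illustration, not the actual torch.jit C++ ResourceGuard or Graph API.

```python
# Sketch (assumed names, not PyTorch's API): temporarily push a scope onto a
# graph during tracing and restore the previous scope on exit, even on errors.
from contextlib import contextmanager

class Graph:
    """Hypothetical stand-in for the JIT graph; it only tracks a current scope."""
    def __init__(self):
        self.current_scope = ""  # root scope

@contextmanager
def scope_guard(graph, scope_name):
    previous = graph.current_scope
    graph.current_scope = scope_name
    try:
        yield graph
    finally:
        # restore the outer scope no matter what happened inside the block
        graph.current_scope = previous

g = Graph()
with scope_guard(g, "MyModule/Linear[fc1]"):
    assert g.current_scope == "MyModule/Linear[fc1]"
assert g.current_scope == ""  # back to the root scope after the block
```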
Backport transposes optimization to v0.3.0 (pytorch#3994)
* Optimizer: optimize transposes in a variety of circumstances (pytorch#3509)
  * Optimizer: Optimize transposes in a variety of circumstances
    - No-op transposes
    - Consecutive transposes (fuse them)
    - Transposes into Gemm (fuse them into the transA/transB parameters)
  * Touch up an out-of-date comment
* Backporting optimizer changes
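The core of the "consecutive transposes" and "no-op transposes" cases above is that two back-to-back Transpose nodes compose into a single permutation, which can then be dropped if it is the identity. Below is a minimal sketch of that permutation arithmetic only; `compose_perms` and `fuse_transposes` are hypothetical helpers for illustration, not the ONNX optimizer's actual pass or API.

```python
# Sketch: fuse a chain of transpose permutations into at most one, and report
# an identity result as a no-op (the transposes can be removed entirely).

def compose_perms(first, second):
    # Applying `first` then `second` is one transpose whose output axis i
    # reads input axis first[second[i]].
    return [first[axis] for axis in second]

def fuse_transposes(perms):
    fused = list(range(len(perms[0])))  # identity permutation for this rank
    for perm in perms:
        fused = compose_perms(fused, perm)
    return None if fused == list(range(len(fused))) else fused

# Transpose(0,2,1) followed by Transpose(0,2,1) cancels out: a no-op.
print(fuse_transposes([[0, 2, 1], [0, 2, 1]]))  # None -> remove both nodes
# Transpose(1,0,2) then Transpose(0,2,1) fuses into one Transpose(1,2,0).
print(fuse_transposes([[1, 0, 2], [0, 2, 1]]))  # [1, 2, 0]
```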