
Tags: pauperonway/pytorch

v1.2.0

delete C_CONTIGUOUS assertions to be compatible with particular builds of numpy
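
To illustrate the issue this change works around: some numpy builds hand back arrays that are not C-contiguous (e.g. transposed or sliced views), so a hard assertion on the C_CONTIGUOUS flag fails where an explicit normalization would succeed. Below is a minimal sketch of that alternative using only public numpy APIs; the function name is hypothetical, and this is not PyTorch's actual binding code.

```python
import numpy as np

def to_c_contiguous(arr: np.ndarray) -> np.ndarray:
    """Return a C-contiguous version of `arr` (hypothetical helper).

    Rather than asserting arr.flags['C_CONTIGUOUS'] -- which breaks on
    builds of numpy that produce non-contiguous arrays -- normalize the
    memory layout explicitly.
    """
    # np.ascontiguousarray is a no-op for arrays that are already
    # C-contiguous and makes a contiguous copy otherwise.
    return np.ascontiguousarray(arr)

# A transposed array is not C-contiguous; the helper fixes that.
a = np.arange(6).reshape(2, 3).T
assert not a.flags['C_CONTIGUOUS']
assert to_c_contiguous(a).flags['C_CONTIGUOUS']
```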

v1.1.0

Fix version handler in 1.1.0 docs. (pytorch#19977)

Update the find & replace to be less restrictive. Will port this change
to master to avoid problems in the future.

v1.0.1

Remove unnecessary typing dependency. (pytorch#16776)

Signed-off-by: Edward Z. Yang <[email protected]>

v1.0.0

add fix for CUDA 10

v1.0rc1

Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_i…

…d"" (pytorch#12232)

Summary:
Pull Request resolved: pytorch#12232

Original commit changeset: fca91fea58b7

This adds proper modifications to the DeviceType <-> DeviceOption conversion code added in D10033396.

Reviewed By: jerryzh168

Differential Revision: D10132473

fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b

v1.0rc0

Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_i…

…d"" (pytorch#12232)

Summary:
Pull Request resolved: pytorch#12232

Original commit changeset: fca91fea58b7

This adds proper modifications to the DeviceType <-> DeviceOption conversion code added in D10033396.

Reviewed By: jerryzh168

Differential Revision: D10132473

fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b

v0.4.1

fix lint

v0.4.0

move to eigenteam github for eigen submodule

v0.3.1

Scopes 0.3.1 backport (pytorch#5153)

* Introduce scopes during tracing (pytorch#3016)

* Fix segfault during ONNX export

* Further fix to tracing scope (pytorch#4558)

* Set missing temporary scope in callPySymbolicMethod

* Use expected traces in all scope tests

* Fix tracking of tracing scopes during ONNX pass (pytorch#4524)

* Fix tracking of tracing scopes during ONNX pass

* Use ResourceGuard to manage setting a temporary current scope in Graph (an illustrative guard sketch follows this list)

* Add tests for ONNX pass scopes

* Remove unused num_classes argument

* Expose node scopeName to python (pytorch#4200)

* Inherit JIT scopes when cloning only when it's correct

It's correct only when the new graph owns the same scope tree
as the original one. We can end up with dangling pointers otherwise.

* Fixes after cherry-picking, still one test to go

* Fix for last failing test after scope cherry-pick

* Fix linting issue
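
To make the ResourceGuard item above concrete, here is an illustrative Python sketch of the guard pattern: push a temporary current scope on entry and restore the previous one on exit, even when the traced code raises. The Graph class and its methods are hypothetical stand-ins, not PyTorch's actual C++ ResourceGuard API.

```python
from contextlib import contextmanager

class Graph:
    """Toy graph tracking a stack of tracing scopes (hypothetical)."""

    def __init__(self):
        self._scopes = []

    @property
    def current_scope(self) -> str:
        return "/".join(self._scopes) or "<root>"

    @contextmanager
    def scope(self, name: str):
        # Push on entry; the finally block plays the role of a
        # ResourceGuard, restoring the previous scope even on error.
        self._scopes.append(name)
        try:
            yield self
        finally:
            self._scopes.pop()

g = Graph()
with g.scope("Sequential"), g.scope("Linear0"):
    print(g.current_scope)  # Sequential/Linear0
print(g.current_scope)      # <root>
```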

v0.3.0

Backport transposes optimization to v0.3.0 (pytorch#3994)

* Optimizer: optimize transposes in a variety of circumstances (pytorch#3509)

* Optimizer: Optimize transposes in a variety of circumstances (a fusion sketch follows this list)

- No-op transposes
- Consecutive transposes (fuse them)
- Transposes into Gemm (fuse them into the transA/transB parameters)

* touch up out-of-date comment

* Backporting optimizer changes
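
As a rough sketch of the fusion rules listed above (assuming numpy-style permutation semantics; the real pass operates on ONNX graph nodes, which this does not model): two consecutive transposes compose into a single permutation, and a composed permutation equal to the identity is a no-op that can be dropped.

```python
def compose(perm_first, perm_second):
    """Permutation equivalent to applying perm_first, then perm_second.

    transpose(transpose(x, perm_first), perm_second) equals
    transpose(x, composed) where composed[i] = perm_first[perm_second[i]].
    """
    return [perm_first[i] for i in perm_second]

def fuse_transposes(perm_first, perm_second):
    """Fuse two consecutive transposes; None means the pair is a no-op."""
    composed = compose(perm_first, perm_second)
    if composed == list(range(len(composed))):
        return None  # identity permutation: drop both transposes
    return composed

# Two swaps of the same pair of axes cancel out (no-op transpose).
assert fuse_transposes([1, 0, 2], [1, 0, 2]) is None
# Otherwise the pair fuses into a single transpose.
assert fuse_transposes([0, 2, 1], [2, 0, 1]) == [1, 0, 2]
```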