remove fp16 flag (pytorch#581)
Summary:
Pull Request resolved: pytorch#581

The fp16 flag is no longer used in the table batched embedding op.

Reviewed By: jspark1105

Differential Revision: D27380547

fbshipit-source-id: 944fa2eb8b492a2e162eb2b19fd6aa5e5fc95e2d
jianyuh authored and facebook-github-bot committed Mar 30, 2021
1 parent 6c051e1 commit 2c2ae1f
Showing 1 changed file with 0 additions and 1 deletion.
@@ -165,7 +165,6 @@ def __init__( # noqa C901
         cache_sets: int = 0,
         cache_reserved_memory: float = 0.0,
         cache_precision: SparseType = SparseType.FP32,
-        fp16: bool = False,
         weights_precision: SparseType = SparseType.FP32,
         enforce_hbm: bool = False,  # place all weights/momentums in HBM when using cache
         optimizer: OptimType = OptimType.EXACT_SGD,
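
For callers migrating past this change, half precision is now selected through the SparseType-valued arguments visible in the diff (weights_precision, cache_precision) rather than the removed boolean flag. A minimal before/after sketch follows; it assumes the op is fbgemm_gpu's SplitTableBatchedEmbeddingBagsCodegen and that SparseType defines an FP16 member alongside the FP32 shown above. The class name, import paths, and embedding spec are assumptions for illustration, not part of this diff.

# Migration sketch (hypothetical usage; only the weights_precision argument
# and the removed fp16 flag are confirmed by this diff).
from fbgemm_gpu.split_table_batched_embeddings_ops import (  # assumed import path
    ComputeDevice,
    EmbeddingLocation,
    SplitTableBatchedEmbeddingBagsCodegen,
)
from fbgemm_gpu.split_embedding_configs import SparseType  # assumed import path

# Before pytorch#581 (no longer accepted):
#   emb = SplitTableBatchedEmbeddingBagsCodegen(..., fp16=True)

# After pytorch#581: request half-precision weights via weights_precision.
emb = SplitTableBatchedEmbeddingBagsCodegen(
    embedding_specs=[  # hypothetical single-table spec: (rows, dim, location, device)
        (1000, 64, EmbeddingLocation.DEVICE, ComputeDevice.CUDA),
    ],
    weights_precision=SparseType.FP16,  # assumes SparseType has an FP16 member
)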
