Create separate targets for training and inference (pytorch#1757)
Summary:
Pull Request resolved: pytorch#1757

- Create separate targets for training and inference

- Redefine the old `embedding_ops` and `embedding_ops_cpu` as empty
targets with `exported_defs` pointing to the new split targets
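The backward-compatible loading flow this PR introduces (try the new split training targets first, fall back to the old combined ones) can be sketched generically. The helper `load_with_fallback`, the fake loader, and the shortened target names below are illustrative only and not part of this PR:

```python
from typing import Callable, List


def load_with_fallback(
    load: Callable[[str], None],
    new_targets: List[str],
    old_targets: List[str],
) -> str:
    """Try to load the new split targets; on any failure, fall back
    to loading the old combined targets instead."""
    try:
        for target in new_targets:
            load(target)
        return "new"
    except Exception:
        for target in old_targets:
            load(target)
        return "old"


# Hypothetical loader that only recognizes the old targets, mirroring
# a pre-migration environment where the split targets don't exist yet.
available = {"//codegen:embedding_ops", "//codegen:embedding_ops_cpu"}


def fake_load(target: str) -> None:
    if target not in available:
        raise OSError(f"cannot load {target}")


which = load_with_fallback(
    fake_load,
    ["//codegen:embedding_ops_cuda_training",
     "//codegen:embedding_ops_cpu_training"],
    ["//codegen:embedding_ops", "//codegen:embedding_ops_cpu"],
)
print(which)  # old
```

This mirrors the `try`/`except` added to the lookup code below: downstream callers keep working whether or not the split targets are present.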

Reviewed By: sryap

Differential Revision: D45687293

fbshipit-source-id: 2adfaee5d0bdd075164749db9ebe7fc032c626ee
q10 authored and facebook-github-bot committed May 18, 2023
1 parent 0a3380f commit 2ae07d0
Showing 3 changed files with 30 additions and 2 deletions.
15 changes: 15 additions & 0 deletions fbgemm_gpu/codegen/embedding_ops_placeholder.cpp
@@ -0,0 +1,15 @@
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree.
*/

/*
This is placeholder code to force compilation and generation of an
`libdeeplearning_fbgemm_fbgemm_gpu_codegen_embedding_ops.so` file, which
allows downstream PyTorch code to continue loading the `embedding_ops`
and `embedding_ops_cpu` (now-)shim targets correctly.
*/
namespace fbgemm_gpu {}
@@ -10,8 +10,15 @@ from .lookup_args import *


{% if is_fbcode %}
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")

# Provide compatibility to downstream packages for eventual migration to the split training / inference packages
try:
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cuda_training")
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu_training")
except Exception:
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")

torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:cumem_utils")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops_cpu")
6 changes: 6 additions & 0 deletions fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py
@@ -19,6 +19,12 @@
from fbgemm_gpu.split_embedding_configs import EmbOptimType as OptimType, SparseType
from torch import nn, Tensor # usort:skip

try:
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops")
    torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu/codegen:embedding_ops_cpu")
except Exception:
    pass

DEFAULT_ASSOC = 32 if torch.version.hip is None else 64
# Maximum number of times prefetch() can be called without
# a corresponding forward() call
