add annotation in load_inference_model
seiriosPlus committed Aug 21, 2018
1 parent f6b06bd commit 0815291
Showing 1 changed file with 8 additions and 0 deletions.
8 changes: 8 additions & 0 deletions python/paddle/fluid/io.py
@@ -691,6 +691,10 @@ def load_inference_model(dirname,
parameters were saved in a single binary
file. If parameters were saved in separate
files, set it as 'None'.
pserver_endpoints(list|None): This is only needed by distributed
                              inference. When a distributed lookup
                              table is used during training, it is
                              also needed during inference. The
                              parameter is a list of pserver endpoints.
Returns:
tuple: The return of this function is a tuple with three elements:
@@ -709,12 +713,16 @@ def load_inference_model(dirname,
exe = fluid.Executor(fluid.CPUPlace())
path = "./infer_model"
endpoints = ["127.0.0.1:2023","127.0.0.1:2024"]
[inference_program, feed_target_names, fetch_targets] = \
    fluid.io.load_inference_model(dirname=path, executor=exe)
results = exe.run(inference_program,
feed={feed_target_names[0]: tensor_img},
fetch_list=fetch_targets)
# If the model uses a distributed lookup table, pass the endpoints:
fluid.io.load_inference_model(dirname=path, executor=exe,
                              pserver_endpoints=endpoints)
# In this example, the inference program was saved in
# "./infer_model/__model__" and the parameters were saved in
# separate files under "./infer_model".
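For reference, here is a minimal end-to-end sketch combining the two calls from the docstring example into one runnable script. The input shape and the presence of a distributed lookup table are illustrative assumptions, not part of this commit:

import numpy as np
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
path = "./infer_model"
endpoints = ["127.0.0.1:2023", "127.0.0.1:2024"]

# Hypothetical input; the shape must match what the saved model expects.
tensor_img = np.random.rand(1, 3, 224, 224).astype("float32")

# pserver_endpoints is passed only because we assume the model was
# trained with a distributed lookup table; omit it otherwise.
[inference_program, feed_target_names, fetch_targets] = \
    fluid.io.load_inference_model(dirname=path,
                                  executor=exe,
                                  pserver_endpoints=endpoints)

results = exe.run(inference_program,
                  feed={feed_target_names[0]: tensor_img},
                  fetch_list=fetch_targets)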
