From ebbd444013151232fb8f04c78333c060b4e5bcf7 Mon Sep 17 00:00:00 2001
From: harshbafna
Date: Fri, 10 Jan 2020 16:42:47 +0530
Subject: [PATCH] updated handler definition

---
 examples/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 443607553e..514ca737a7 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -10,7 +10,7 @@ Following are the steps to create a torch-model-archive (.mar) to execute an eag
     * serialized-file (.pt) : This file represents the state_dict in case of eager mode model.
     * model-file (.py) : This file contains model class extended from torch nn.modules representing the model architecture. This parameter is mandatory for eager mode models. This file must contain only one class definition extended from torch.nn.modules
     * index_to_name.json : This file contains the mapping of predicted index to class. The default TS handles returns the predicted index and probability. This file can be passed to model archiver using --extra-files parameter.
-    * handler : This file contains the mapping of predicted index to class. The default TS handles returns the predicted index and probability. This file can be passed to model archiver using --extra-files parameter.
+    * handler : TorchServe's default handler's name or path to custom inference handler(.py)

 * Syntax

@@ -25,7 +25,7 @@ Following are the steps to create a torch-model-archive (.mar) to execute an eag
     * serialized-file (.pt) : This file represents the state_dict in case of eager mode model or an executable ScriptModule in case of TorchScript.
     * index_to_name.json : This file contains the mapping of predicted index to class. The default TS handles returns the predicted index and probability. This file can be passed to model archiver using --extra-files parameter.
-    * handler : This file contains the mapping of predicted index to class. The default TS handles returns the predicted index and probability. This file can be passed to model archiver using --extra-files parameter.
+    * handler : TorchServe's default handler's name or path to custom inference handler(.py)

 * Syntax