
Commit

[ImageClassifier] Add command line option for the model's expected input image name

*Description*: Models may expect different names for the input image in ImageClassifier. We previously had a hack that allowed for two specific names. This change properly allows the model's input image name to be specified on the command line.

*Testing*: updated `run.sh` and verified it works

*Documentation*: Added.
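For example, with this change a Caffe2 Resnet50 run names its input variable explicitly; the command below mirrors the updated `run.sh`:

```
# The input variable name is passed explicitly instead of being guessed from a
# hard-coded list of candidate names.
./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data
```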
jfix71 committed Sep 5, 2018
1 parent 4e401ae commit 07723af
Showing 4 changed files with 38 additions and 37 deletions.
9 changes: 4 additions & 5 deletions docs/AOT.md
@@ -38,7 +38,7 @@ This document demonstrates how to produce a bundle for the host CPU using the
directory.

```
-$image-classifier image.png -image_mode=0to1 -m resnet50 -cpu -emit-bundle build/
+$image-classifier image.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -cpu -emit-bundle build/
```

The command above would compile the neural network model described by the files
@@ -163,7 +163,7 @@ The makefile provides the following targets:
* `download_weights`: it downloads the Resnet50 network model in the Caffe2 format.
* `build/resnet50.o`: it generates the bundle files using the Glow image-classifier as described above.
The concrete command line looks like this:
-`image-classifier tests/images/imagenet/cat_285.png -image_mode=0to1 -m resnet50 -cpu -emit-bundle build`
+`image-classifier tests/images/imagenet/cat_285.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -cpu -emit-bundle build`
It reads the network model from `resnet50` and generates the `resnet50.o`
and `resnet50.weights` files into the `build` directory.
* `build/main.o`: it compiles the `resnet50_standalone.cpp` file, which is the main file of the project.
@@ -186,9 +186,8 @@ To build and run the example, you just need to execute:
This run performs almost the same steps as the non-quantized Resnet50 version, except that it emits a bundle based on the quantization profile:
-`image-classifier tests/images/imagenet/cat_285.png -image_mode=0to1 -m resnet50
--load_profile=profile.yml -cpu -emit-bundle build`
+`image-classifier tests/images/imagenet/cat_285.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -load_profile=profile.yml -cpu -emit-bundle build`
The `profile.yml` itself is captured at a prior step by executing image-classifier with the `dump_profile` option:
-`image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m resnet50 -dump_profile=profile.yml`.
+`image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -dump_profile=profile.yml`.
See the makefile for details.
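For orientation, here is a rough sketch of the sequence the makefile automates for the non-quantized build. The bundle-generation command is the one quoted above; the compile and link invocations (compiler, flags, output names) are assumptions for illustration only, and the real makefile should be consulted:

```
# 1. Download the Caffe2 Resnet50 model (the `download_weights` target).
# 2. Generate the bundle with the image-classifier (the `build/resnet50.o` target).
image-classifier tests/images/imagenet/cat_285.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -cpu -emit-bundle build
# 3. Compile the project's main file and link it against the bundle
#    (the `build/main.o` and final link steps); this compiler invocation is
#    only illustrative, not the makefile's exact command.
g++ -c resnet50_standalone.cpp -o build/main.o
g++ build/main.o build/resnet50.o -o build/resnet50_standalone
```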
4 changes: 2 additions & 2 deletions docs/Quantization.md
@@ -71,7 +71,7 @@ into the ```profile.yaml``` file.
This information can be used in the process of quantized conversion.
For example, you can run the following command to capture a profile for Resnet50.
```
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -dump_profile="profile.yaml"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -dump_profile="profile.yaml"
```
By default, the loader will produce quantized results using asymmetric ranges.
That is, ranges are not necessarily centered on 0. The loader supports three modes
@@ -99,7 +99,7 @@ the graph.
For example, you can run the following command to load the profile and quantize
the graph.
```
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -load_profile="profile.yaml"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data -load_profile="profile.yaml"
```

## Compiler Optimizations
36 changes: 18 additions & 18 deletions tests/images/run.sh
@@ -1,36 +1,36 @@
#!/usr/bin/env bash

-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=neg128to127 -m=vgg19 "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=neg128to127 -m=squeezenet "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=zfnet512 "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=densenet121 "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=shufflenet "$@"
-./bin/image-classifier tests/images/mnist/*.png -image_mode=0to1 -m=lenet_mnist "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=inception_v1 "$@"
-./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=bvlc_alexnet "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=resnet50 -model_input_name=gpu_0/data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=neg128to127 -m=vgg19 -model_input_name=data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=neg128to127 -m=squeezenet -model_input_name=data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=zfnet512 -model_input_name=gpu_0/data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=densenet121 -model_input_name=data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to1 -m=shufflenet -model_input_name=gpu_0/data "$@"
+./bin/image-classifier tests/images/mnist/*.png -image_mode=0to1 -m=lenet_mnist -model_input_name=data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=inception_v1 -model_input_name=data "$@"
+./bin/image-classifier tests/images/imagenet/*.png -image_mode=0to256 -m=bvlc_alexnet -model_input_name=data "$@"
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=resnet50/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=resnet50/model.onnx -model_input_name=gpu_0/data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=neg128to127 -m=vgg19/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=neg128to127 -m=vgg19/model.onnx -model_input_name=data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=neg128to127 -m=squeezenet/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=neg128to127 -m=squeezenet/model.onnx -model_input_name=data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=zfnet512/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=zfnet512/model.onnx -model_input_name=gpu_0/data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=densenet121/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=densenet121/model.onnx -model_input_name=data_0 "$@"
done
-./bin/image-classifier tests/images/imagenet/zebra_340.png -image_mode=0to1 -m=shufflenet/model.onnx "$@"
+./bin/image-classifier tests/images/imagenet/zebra_340.png -image_mode=0to1 -m=shufflenet/model.onnx -model_input_name=gpu_0/data_0 "$@"
for png_filename in tests/images/mnist/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=mnist.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to1 -m=mnist.onnx -model_input_name=data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=inception_v1/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=inception_v1/model.onnx -model_input_name=data_0 "$@"
done
for png_filename in tests/images/imagenet/*.png; do
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=bvlc_alexnet/model.onnx "$@"
./bin/image-classifier "$png_filename" -image_mode=0to256 -m=bvlc_alexnet/model.onnx -model_input_name=data_0 "$@"
done
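Since every command in `run.sh` forwards `"$@"`, any extra image-classifier flags passed to the script are appended to each invocation. A minimal usage example, run from the build directory (an assumption, since the script uses relative paths like `./bin/image-classifier`); the `-cpu` flag is just an illustrative extra argument:

```
# Run the whole test sweep, forwarding an extra backend flag to every
# invocation through "$@".
bash tests/images/run.sh -cpu
```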
26 changes: 14 additions & 12 deletions tools/loader/ImageClassifier.cpp
@@ -83,6 +83,12 @@ llvm::cl::opt<ImageNormalizationMode> imageMode(
llvm::cl::alias imageModeA("i", llvm::cl::desc("Alias for -image_mode"),
llvm::cl::aliasopt(imageMode),
llvm::cl::cat(imageLoaderCat));

+llvm::cl::opt<std::string> modelInputName(
+"model_input_name",
+llvm::cl::desc("The name of the variable for the model's input image."),
+llvm::cl::value_desc("string_name"), llvm::cl::Required,
+llvm::cl::cat(imageLoaderCat));
} // namespace

/// Loads and normalizes all PNGs into a tensor in the NCHW 3x224x224 format.
@@ -137,20 +143,19 @@ int main(int argc, char **argv) {
Tensor data;
loadImagesAndPreprocess(inputImageFilenames, &data, imageMode);

-// All of our ONNX and Caffe2 models may use one of two names for input.
-constexpr char const *c2Inputs[2] = {"gpu_0/data", "data"};
-constexpr char const *onnxInputs[2] = {"gpu_0/data_0", "data_0"};
+// The image name that the model expects must be passed on the command line.
+const char *inputName = modelInputName.c_str();

// Create the model based on the input model format.
std::unique_ptr<ProtobufLoader> LD;
bool c2Model = !loader.getCaffe2NetDescFilename().empty();
if (c2Model) {
LD.reset(new caffe2ModelLoader(
loader.getCaffe2NetDescFilename(), loader.getCaffe2NetWeightFilename(),
-c2Inputs, {&data, &data}, *loader.getFunction()));
+{inputName}, {&data}, *loader.getFunction()));
} else {
-LD.reset(new ONNXModelLoader(loader.getOnnxModelFilename(), onnxInputs,
-{&data, &data}, *loader.getFunction()));
+LD.reset(new ONNXModelLoader(loader.getOnnxModelFilename(), {inputName},
+{&data}, *loader.getFunction()));
}
// Get the Variable that the final expected Softmax writes into at the end of
// image inference.
Expand All @@ -159,19 +164,16 @@ int main(int argc, char **argv) {
// Create Variables for both possible input names for flexibility for the
// input model. The input data is mapped to both names. Whichever Variable is
// unused will be removed in compile().
-Variable *i0 = LD->getVariableByName(c2Model ? c2Inputs[0] : onnxInputs[0]);
-Variable *i1 = LD->getVariableByName(c2Model ? c2Inputs[1] : onnxInputs[1]);
-
-assert(i0->getVisibilityKind() == VisibilityKind::Public);
-assert(i1->getVisibilityKind() == VisibilityKind::Public);
+Variable *inputImage = LD->getVariableByName(inputName);
+assert(inputImage->getVisibilityKind() == VisibilityKind::Public);

// Compile the model, and perform quantization/emit a bundle/dump debug info
// if requested from command line.
loader.compile();

// If in bundle mode, do not run inference.
if (!emittingBundle()) {
-loader.runInference({i0, i1}, {&data, &data});
+loader.runInference({inputImage}, {&data});

// Print out the inferred image classification.
Tensor &res = SMVar->getPayload();
