A convolutional neural network framework for the Android platform, implemented with RenderScript and Java, supporting MobileNet-SSD and Faster R-CNN, and integrated to accept models from Watson Visual Recognition. No JNI/C++ dependencies.

Watson Visual Recognition on Android

This repo demonstrates how to deploy custom object detection models from Watson Visual Recognition to an Android device. Models run on the rscnn RenderScript CNN framework with a MobileNetV1 + SSD implementation.

Note that the IBM Watson Visual Recognition cloud API will be discontinued. You can still use this repo to deploy object detection models on Android, but you will no longer be able to use Watson to create new models.

Before you begin

Make sure that you have an up-to-date version of Android Studio and the Android SDK (API level 28 or later).

Create a custom object detection model

You can create a custom object detection model by using the API or Watson Studio, or you can use the pretrained model included in this repo (see Trained model below).

Tip: Use the Lego Detection with Watson Visual Recognition project to create an example model in less than 10 minutes.

Download the model

If you created a model, use the API to download it.

  1. Create a directory in demo/src/main/assets/ for your model, for example demo/src/main/assets/mymodel. If you have multiple models, create a separate directory for each.
  2. Use the Get a model method in the API to download the model; a scripted sketch follows this list. You might need to use the List collections method to find the collection ID.
  3. Extract the downloaded .zip model file to your new directory (mymodel).
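
If you want to script step 2, here is a minimal Java sketch using only the standard library. The host, endpoint path, query parameters, and the "apikey" basic-auth user name are assumptions based on the Watson Visual Recognition v4 API, not verified against this repo; check the API reference for your service instance's region before using it.

```java
// Hypothetical sketch: download a trained model as a .zip with plain java.net.
// The URL, query parameters, and auth scheme below are assumptions; consult the
// Watson Visual Recognition v4 API reference for your instance.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Base64;

public class ModelDownload {
    public static void main(String[] args) throws Exception {
        String apiKey = "YOUR_APIKEY";               // assumed: your IAM API key
        String collectionId = "YOUR_COLLECTION_ID";  // from the List collections method
        String url = "https://api.us-south.visual-recognition.watson.cloud.ibm.com"
                + "/v4/collections/" + collectionId
                + "/model?version=2019-02-11&feature=objects&model_format=rscnn";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        // IBM Cloud services accept HTTP basic auth with the literal user name "apikey".
        String token = Base64.getEncoder()
                .encodeToString(("apikey:" + apiKey).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);

        try (InputStream in = conn.getInputStream()) {
            // Save the archive locally, then extract it into demo/src/main/assets/mymodel.
            Files.copy(in, Paths.get("mymodel.zip"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```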

Deploy the app

To use the app on the Android Emulator, copy the images that you want to analyze to the virtual device (for example, by dragging them onto the emulator window). If you're using the included LegoPersonModel, copy the images in the images/testimages directory.

Run the app

When you run the app, you select the object detection model that you downloaded.

  1. On your device or the emulator, select your object detection model. It is identified by the directory name that you created for the model and the collection ID.

    Tip: To use the included model, select LegoPersonModel.

  2. Click Select Image and choose an image from the device.

  3. About the controls:

    • Set the Confidence threshold, which is the minimum score that a feature must have to be returned.
    • Set the Soft NMS Sigma, which controls how much the score of a bounding box is penalized by an overlapping box with the same label and a higher score. A higher setting might display extra boxes around an object; use a lower setting when you don't expect much overlap between objects of the same label. (A sketch of the underlying rescoring rule follows this list.)
    • After changing a control value, re-apply the model via the Select Image button to see updated results.
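
For intuition about these two controls, here is a minimal sketch of the Gaussian Soft-NMS rescoring rule (Bodla et al., 2017), which a Soft NMS Sigma control of this kind typically tunes. The class and method names are illustrative, not this app's actual API.

```java
// Illustrative sketch of Gaussian Soft-NMS rescoring (Bodla et al., 2017);
// names are hypothetical, not this app's actual code.
public class SoftNmsSketch {
    // A higher-scoring overlapping box with the same label decays this box's score:
    //   newScore = score * exp(-iou^2 / sigma)
    // Larger sigma -> weaker penalty -> more overlapping boxes survive.
    static float rescore(float score, float iou, float sigma) {
        return score * (float) Math.exp(-(iou * iou) / sigma);
    }

    // The Confidence threshold is then a plain cutoff on the (rescored) score.
    static boolean keep(float rescoredScore, float confidenceThreshold) {
        return rescoredScore >= confidenceThreshold;
    }

    public static void main(String[] args) {
        // Two boxes with IoU 0.6: sigma 0.5 keeps the weaker box above a 0.4
        // threshold (0.9 * exp(-0.72) ~= 0.44), while sigma 0.1 suppresses it
        // (0.9 * exp(-3.6) ~= 0.02).
        System.out.println(keep(rescore(0.9f, 0.6f, 0.5f), 0.4f)); // true
        System.out.println(keep(rescore(0.9f, 0.6f, 0.1f), 0.4f)); // false
    }
}
```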

Trained model

You can use the pretrained model in this project by itself or alongside your own models. It was trained with the Lego Detection with Watson Visual Recognition project.

The model is in the demo/src/main/assets/LegoPersonModel directory and is deployed with your app. If you don't want this model in your app, delete that directory before you deploy it.

You can test the model with the images in the images/testimages directory. Make sure to copy them to your device or virtual device.
