Organ Classification on Abdominal Ultrasound using JavaScript
SIIM 2019 Innovation Challenge Winner
Uses ConvNetJS.
Training File: usanotai.html
- Training images are in the \train folder (named {organ}-00##.png)
- Test images are in the \test folder (named {organ}-005#.png)
- Once the images are loaded, start training by typing this in the console (see the sketch after this list):
doTrain()  // trains one epoch by default
doTrain(5) // trains 5 epochs
- The script automatically runs the test images through the trained model at the end of doTrain
- To output the trained model as a JSON string (via JSON.stringify):
doNet()
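
A minimal sketch of how doTrain() and doNet() could be implemented with the ConvNetJS API; net, trainer, trainVols, trainLabels, and doTest are illustrative names standing in for whatever usanotai.html actually defines:

// Sketch only: assumes `net` (convnetjs.Net), `trainer` (convnetjs.SGDTrainer),
// and parallel arrays `trainVols` (convnetjs.Vol inputs) and `trainLabels`
// (integer class indices) were built when the images were loaded.
function doTrain(reps) {
  reps = reps || 1; // one repetition (epoch) by default
  for (var r = 0; r < reps; r++) {
    for (var i = 0; i < trainVols.length; i++) {
      trainer.train(trainVols[i], trainLabels[i]); // one SGD step per image
    }
  }
  doTest(); // hypothetical helper that runs the \test images through the model
}

// Serializes the trained network so the string can be pasted into live.html.
function doNet() {
  return JSON.stringify(net.toJSON());
}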
Live Classification: live.html
- Accepts the video feed from the ultrasound machine through a video-capture device attached to the computer
- Crops the video feed to the dimensions the model expects
- Runs a frame through the model every 100 ms and outputs the prediction
- The model is hard-coded as a string (the output of doNet() above); a sketch of the whole loop follows below
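
A minimal sketch of what the live.html loop could look like; the element ids, crop region, 1-channel input, and CLASSES list below are illustrative assumptions, not the repo's exact values:

var net = new convnetjs.Net();
net.fromJSON(JSON.parse(MODEL_JSON)); // MODEL_JSON: the hard-coded doNet() string

var CROP_X = 0, CROP_Y = 0, CROP_W = 480, CROP_H = 480; // illustrative crop region
var CLASSES = ['liver', 'kidney', 'spleen']; // hypothetical label list

var video = document.getElementById('video');  // receives the capture feed
var canvas = document.getElementById('crop');  // sized to the model's input
var ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  video.srcObject = stream;
  video.play();
});

setInterval(function () {
  // crop the capture feed down to the region the model was trained on
  ctx.drawImage(video, CROP_X, CROP_Y, CROP_W, CROP_H, 0, 0, canvas.width, canvas.height);
  var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

  // pack the pixels into a convnetjs.Vol (assumes a 1-channel grayscale input)
  var vol = new convnetjs.Vol(canvas.width, canvas.height, 1, 0.0);
  for (var i = 0; i < canvas.width * canvas.height; i++) {
    vol.w[i] = data[4 * i] / 255.0; // red channel as pixel intensity
  }

  net.forward(vol);
  console.log('prediction:', CLASSES[net.getPrediction()]);
}, 100); // one frame every 100 ms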
I wrote this without knowing anything about machine learning, so:
- Epochs are called "repetitions".
- My "Test Images" is essentially the validation set. And the live video feed is the actual test set.
- No evaluation of training and validation losses.
- Model was defined with ONLY Conv/Pool layers WITHOUT a Fully-Connected layer before the Softmax layer. It just happens to work because ConvNetJS (apparently) automatically adds an FC layer just before the Softmax layer, even though I didn't define it.
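
To illustrate the last point, a sketch of a Conv/Pool-only definition of that shape (the layer sizes and class count here are made up, not the repo's actual values):

var layer_defs = [
  { type: 'input', out_sx: 64, out_sy: 64, out_depth: 1 },
  { type: 'conv', sx: 5, filters: 8, stride: 1, pad: 2, activation: 'relu' },
  { type: 'pool', sx: 2, stride: 2 },
  { type: 'conv', sx: 5, filters: 16, stride: 1, pad: 2, activation: 'relu' },
  { type: 'pool', sx: 2, stride: 2 },
  { type: 'softmax', num_classes: 3 } // no explicit FC layer here...
];

var net = new convnetjs.Net();
// ...but makeLayers() desugars the definition and silently inserts an FC
// layer (num_neurons = num_classes) in front of every softmax layer.
net.makeLayers(layer_defs);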
Pitch video (live.html can be seen in action)