ftsvd/USAnotAI


USAnotAI

Organ Classification on Abdominal Ultrasound using Javascript

SIIM 2019 Innovation Challenge Winner

Uses ConvNetJS.

Training File: usanotai.html

  1. Training images are in the \train folder (named {organ}-00##.png)
  2. Test images are in the \test folder (named {organ}-005#.png)
  3. Once the images are loaded, start training by typing this in the console:
    doTrain()  // trains one epoch by default
    doTrain(5) // trains 5 epochs
  4. The script automatically runs the test images through the trained model at the end of doTrain
  5. To output the trained model as a JSON string (via JSON.stringify):
    doNet()
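
The two console commands above can be sketched roughly as follows. The trainer/net interfaces mirror ConvNetJS's SGDTrainer.train(vol, label), Net.forward(vol) and Net.toJSON(); the parameter names and structure here are assumptions for illustration, not the actual code in usanotai.html.

```javascript
// Hedged sketch of the pattern behind doTrain()/doNet().
function doTrain(epochs = 1, trainSet, testSet, trainer, net) {
  for (let e = 0; e < epochs; e++) {
    for (const { vol, label } of trainSet) {
      trainer.train(vol, label); // one SGD step per training image
    }
  }
  // step 4: run every test image through the freshly trained model
  return testSet.map(({ vol }) => net.forward(vol));
}

// step 5: dump the trained model as a JSON string for embedding in live.html
function doNet(net) {
  return JSON.stringify(net.toJSON());
}
```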

Live Classification: live.html

  1. Accepts a video feed from the ultrasound machine via a video capture device attached to the computer
  2. Crops the video feed to the dimensions the model expects
  3. Runs a frame through the model every 100 ms and outputs predictions
  4. The model is hard-coded as a string (exported with doNet() above)
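
The polling loop above can be sketched as below. The helper names (grabFrame, cropToModelSize, classify) are assumptions; in the real page the frame would come from the video element drawn onto a canvas, and the net would be restored from the hard-coded string with ConvNetJS's net.fromJSON(JSON.parse(modelString)).

```javascript
// Minimal sketch of the live.html classification loop (assumed helper names).
function makeTick(grabFrame, cropToModelSize, classify, onPrediction) {
  return function tick() {
    const frame = grabFrame();              // latest frame from the capture device
    const cropped = cropToModelSize(frame); // crop to the model's input dimensions
    onPrediction(classify(cropped));        // e.g. an organ label with a probability
  };
}
// live.html would drive this with: setInterval(tick, 100)
```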

I wrote this without knowing anything about machine learning, therefore:

  1. Epochs are called "repetitions".
  2. My "test images" are essentially a validation set; the live video feed is the actual test set.
  3. Training and validation losses are not evaluated.
  4. The model was defined with ONLY conv/pool layers and WITHOUT a fully-connected layer before the softmax layer. It just happens to work because ConvNetJS (apparently) automatically inserts an FC layer just before the softmax layer, even though I didn't define one.
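
Point 4 can be illustrated with a layer definition in the shape described: only conv/pool layers before the softmax, no explicit FC layer. The input size, filter counts, and class count below are made-up examples, not the actual values in usanotai.html. When net.makeLayers(layer_defs) runs, ConvNetJS inserts a fully-connected layer before the softmax automatically, which is why a definition like this still trains.

```javascript
// Illustrative ConvNetJS layer_defs (example values, not the real model).
const layer_defs = [
  { type: 'input', out_sx: 64, out_sy: 64, out_depth: 1 },
  { type: 'conv', sx: 5, filters: 8, stride: 1, activation: 'relu' },
  { type: 'pool', sx: 2, stride: 2 },
  { type: 'conv', sx: 5, filters: 16, stride: 1, activation: 'relu' },
  { type: 'pool', sx: 2, stride: 2 },
  { type: 'softmax', num_classes: 4 } // FC layer is added implicitly before this
];
```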

Pitch video (live.html can be seen)
