This is an experimental repository to see how to go about training models to recognise gestures from data collected over BLE from the BangleJS watch.
Visit https://paulcockrell.github.io/banglejs-tensorflow-example/ to begin, or, to run it locally, clone the repository and serve the project with a local web server, e.g. https://www.npmjs.com/package/http-server. There are no server-side processes to be concerned with; this is all done purely in the browser (Google Chrome).
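For context, talking to the watch from the page relies on the Web Bluetooth API (which is why Chrome is required). The sketch below is illustrative rather than this project's actual connection code; the UUIDs are the standard Nordic UART service that Bangle.js exposes over BLE, but how the incoming data is parsed here is an assumption.

```js
// Illustrative only - the project's own connection/parsing code may differ.
// Standard Nordic UART service UUIDs, which Bangle.js exposes over BLE.
const UART_SERVICE = "6e400001-b5a3-f393-e0a9-e50e24dcca9e";
const UART_TX = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"; // notifications from the watch

async function connectToBangle(onData) {
  // Ask the user to pick their Bangle.js from the browser's device chooser
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: "Bangle.js" }],
    optionalServices: [UART_SERVICE],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(UART_SERVICE);
  const tx = await service.getCharacteristic(UART_TX);
  await tx.startNotifications();
  tx.addEventListener("characteristicvaluechanged", (event) => {
    // Each notification carries a chunk of text sent by the watch
    onData(new TextDecoder().decode(event.target.value));
  });
}
```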
You must record a minimum of two gestures for this to work.

Data is stored in the browser's Local Storage, and this is cleared whenever you visit the Record gestures page!
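If you want to poke at what has been recorded (or confirm it has been cleared), the samples sit in Local Storage alongside the site's other data. The snippet below is hypothetical; the actual keys this project uses may be named differently.

```js
// Hypothetical inspection snippet - run it in the DevTools console.
// The actual storage keys used by this project may be named differently.
Object.keys(localStorage).forEach((key) => {
  console.log(key, localStorage.getItem(key));
});
```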
- Visit the `Index` page, and click `Begin`.
- You will be taken to the `Recording` page.
- In the form enter a `Gesture name` (e.g. `Punch`) and select a `Sample size` (5 is fine if the gesture movements are very different - but the more you record, the better the predictions).
- Click `Record gesture`. The page will automatically scroll down to the feedback section; wait for the feedback bar to turn red (recording mode) and its message to say it's `Ready`.
- Perform your gestures (the first always fails as it needs the raw data to be cleaned). Make them clear actions lasting between 1-2 seconds.
- Once the feedback bar goes `Green`, scroll back up to the top of the page, and enter a NEW gesture name (e.g. `Slap`), keeping the sample size the same as before.
- Repeat the `Record gesture` and perform-gesture steps for the new gesture.
- You will now have recorded TWO gestures (you can confirm this in the Bangle gesture readings section at the bottom of the page).
- Click `Start Training` in the form at the top of the page; you will be taken to the `Training` page.
- Click the `Train` button.
- A side-bar will appear showing you details on the prediction quality as the model trains (it's multi-tabbed, so look around).
- Once the training is completed, the `Next` button will become active.
- Click `Next` to be taken to the `Predictor` page.
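If you are curious what the `Train` button is doing, the model is built and fitted entirely in the browser with TensorFlow.js. The sketch below is a minimal example of the kind of dense classifier that could be trained on flattened accelerometer samples; the layer sizes, tensor shapes, and training options are illustrative assumptions, not the exact values this project uses.

```js
// Assumes TensorFlow.js is available as the global `tf` (e.g. via a <script> tag).
// Shapes are assumptions: each sample is a flattened window of accelerometer
// readings, and each label is a one-hot vector, one entry per recorded gesture.
function buildModel(inputLength, numGestures) {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [inputLength], units: 32, activation: "relu" }));
  model.add(tf.layers.dense({ units: numGestures, activation: "softmax" }));
  model.compile({ optimizer: "adam", loss: "categoricalCrossentropy", metrics: ["accuracy"] });
  return model;
}

async function trainModel(samples, labels) {
  const xs = tf.tensor2d(samples); // [numSamples, inputLength]
  const ys = tf.tensor2d(labels);  // [numSamples, numGestures]
  const model = buildModel(samples[0].length, labels[0].length);
  await model.fit(xs, ys, { epochs: 40, shuffle: true });
  xs.dispose();
  ys.dispose();
  return model;
}
```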
This is where the fun begins: simply click the `Start` button, wait for the feedback bar to say it's ready, then perform your gestures.

The predicted gesture label is rendered in the center of the screen, changing colour with each prediction (so you can tell a new prediction has been made even if you perform the same gesture repeatedly!).
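Under the hood, a prediction is just the freshly captured sample run through the trained model, taking the most probable class. A rough sketch (the gesture names and shapes are placeholders, not the project's actual code):

```js
// Placeholder names/shapes - illustrative of the prediction step only.
async function predictGesture(model, sample, gestureNames) {
  const input = tf.tensor2d([sample]);               // shape: [1, inputLength]
  const scores = model.predict(input);
  const index = (await scores.argMax(-1).data())[0]; // class with the highest score
  input.dispose();
  scores.dispose();
  return gestureNames[index];
}
```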
- This is done in vanilla JS. I didn't want to bog the Bangle community down with frameworks.
- This is a spike; the code quality reflects this.
- The UI could do with a rethink, but is presentable for this spike
- We want to take the generated model and convert it to a TFLite model so that it can be loaded onto the Bangle watch (a rough sketch of a possible first step is shown below).
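Nothing in the project does this yet, but one possible first step would be exporting the trained model from the browser; the downloaded `model.json` and weight files could then be converted to a TensorFlow Lite model offline with TensorFlow's converter tooling (not shown here).

```js
// Hypothetical export step - triggers a browser download of model.json plus weights.
// Converting those files to a .tflite model would happen outside the browser.
await model.save("downloads://banglejs-gesture-model");
```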