A demo application that makes your micro:bit (2020 model) respond to your voice, built with Edge Impulse. This demo uses Machine Learning to analyze the audio feed coming from the microphone, and shows a smiley on the screen when it hears "microbit".
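Under the hood, the firmware runs an Edge Impulse classifier over a window of microphone samples. Below is a minimal sketch of that loop, assuming the Edge Impulse C++ SDK's `run_classifier()` API and CODAL's `MicroBit` class; `fill_audio_buffer()` is a hypothetical stand-in for the repo's actual microphone capture code, and the 0.7 threshold is illustrative:

```cpp
#include <string.h>
#include "MicroBit.h"
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

MicroBit uBit;

// One window of audio samples (size comes from the generated model metadata)
static float features[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Hypothetical stand-in for the repo's microphone capture code
extern void fill_audio_buffer(float *buf, size_t len);

// Callback the SDK uses to read slices of the signal
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    uBit.init();
    MicroBitImage smiley("0,255,0,255,0\n0,255,0,255,0\n0,0,0,0,0\n255,0,0,0,255\n0,255,255,255,0\n");

    while (true) {
        fill_audio_buffer(features, EI_CLASSIFIER_RAW_SAMPLE_COUNT);

        // Wrap the buffer in a signal_t and run the impulse
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
        signal.get_data = &get_feature_data;

        ei_impulse_result_t result = { 0 };
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            continue;
        }

        // Show a smiley when the keyword scores above the (illustrative) threshold
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            if (strcmp(result.classification[ix].label, "microbit") == 0 &&
                result.classification[ix].value > 0.7f) {
                uBit.display.print(smiley);
            }
        }
    }
}
```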
Video tutorial:
- Install CMake, Python 2.7 and the GNU ARM Embedded Toolchain. Make sure `arm-none-eabi-gcc` is in your PATH.
- Clone this repository:

    ```
    $ git clone https://github.com/edgeimpulse/voice-activated-microbit
    ```

- Build the project:

    ```
    $ python build.py
    ```

- Flash the binary to your micro:bit by dragging `MICROBIT.hex` onto the `MICROBIT` disk drive.
You can build new models using Edge Impulse.
- Sign up for an account and open your project.
- Download the base dataset - this contains both 'noise' and 'unknown' data that you can use.
- Go to Data acquisition and click the 'Upload' icon.
- Choose all the WAV items in the dataset and leave all other settings as-is.
- Go to Devices and add your mobile phone.
- Go back to Data acquisition and record your new keyword many times using your phone at a frequency of 11000 Hz.
- After uploading, click the three dots, select Split sample, and click Split to slice your data into 1-second chunks.
- Follow these steps to train your model. Note: use a window length of 999 ms instead of 1000! (See the illustrative constants below.)
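The window length matters because the generated model parameters size the audio buffer on the device. As a rough illustration (the exact file is generated per project, so your values will differ), an 11000 Hz sampling frequency and a 999 ms window produce constants in `source/model-parameters/model_metadata.h` along these lines:

```cpp
// Illustrative values only; your generated model_metadata.h will differ.
#define EI_CLASSIFIER_FREQUENCY        11000   // sampling frequency (Hz)
#define EI_CLASSIFIER_RAW_SAMPLE_COUNT 10989   // 999 ms * 11000 Hz / 1000
#define EI_CLASSIFIER_LABEL_COUNT      3       // e.g. your keyword, 'noise', 'unknown'
```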
Once you've trained a model, go to Deployment and select C++ Library. Then:

- Remove `source/edge-impulse-sdk`, `source/model-parameters` and `source/tflite-model`.
- Drag the content of the ZIP file into the `source` folder.
- If you've picked a different keyword, change it in `source/MicrophoneInferenceTest.cpp` (see the sketch after this list).
- Rebuild your application.
- Your micro:bit now responds to your own keyword 🚀.
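For the keyword change, look for the place in `source/MicrophoneInferenceTest.cpp` where the classifier's output labels are compared against a string. A hypothetical sketch of the kind of edit involved (the exact variable name in the repo may differ):

```cpp
// Hypothetical: whichever string the file compares prediction labels
// against must match the label you trained in Edge Impulse, e.g.:
static const char *keyword = "your-keyword";
```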