Note that Filip Jerga is listed as an author because we used his boilerplate code (https://github.com/Jerga99/electron-react-boilerplate).
A desktop application that lets users execute computer commands with air gestures, using movement-recognition models. Hack the North 2020++! The current iteration is not perfect, so use at your own risk.
- Set up Python (see below)
- (Optional) If you want the voice recognition feature, follow the instructions below.
- Install UI dependencies:
npm install
- In one terminal:
npm run watch
to compile the React code.
- In another terminal:
npm start
to start the Electron app.
- src\assets: Change the file paths for thepath1, ..., thepath6 (gesture demonstrations)
- src\components\GestureMatch.js: Change the file path for the logo
Python libraries to install (if you don't already have them):
- mediapipe
- opencv-python
- scikit-learn
- keyboard
- pyautogui
- azure-cognitiveservices-speech
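If any of these are missing, they can typically be installed with pip (the names above are the pip package names):
pip install mediapipe opencv-python scikit-learn keyboard pyautogui azure-cognitiveservices-speech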
To run just the Python portion of the computer control, go to the Server folder and run the following: python hello.py
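The project's actual gesture-to-command mapping lives in Server/hello.py and is not reproduced here; the following is a minimal hypothetical sketch of how recognized gesture labels could be bound to computer commands with pyautogui (the gesture names and actions are assumptions for illustration):

```python
import pyautogui

# Hypothetical gesture-to-command mapping; the project's real mapping
# in Server/hello.py may differ.
GESTURE_ACTIONS = {
    "swipe_left": lambda: pyautogui.hotkey("alt", "tab"),  # switch window
    "swipe_up": lambda: pyautogui.press("volumeup"),       # raise volume
    "fist": lambda: pyautogui.press("playpause"),          # play/pause media
}

def execute_gesture(label):
    """Run the command bound to a recognized gesture label, if any."""
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        action()
```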
To run a demo of the gesture recognition without computer control, go to the Server folder and run the following: python gesture_detector.py
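gesture_detector.py itself is not shown here; as a rough idea of what a MediaPipe-based demo involves, below is a minimal standalone sketch that opens the webcam with OpenCV and draws detected hand landmarks (this is an assumption about the general approach, not the project's exact code):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks,
                                          mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Gesture demo", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```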
- Create an Azure Speech Service resource (Speech Service setup example).
- In the Server folder, create a copy of settings_template.json. Rename it to settings.json.
- Add your key to the JSON file (see the sketch below for how it might be read).
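As a sketch of how the settings file could be consumed, the snippet below loads the key and builds a recognizer with the Azure Speech SDK. The "key" and "region" field names are assumptions; check settings_template.json for the fields the project actually uses:

```python
import json
import azure.cognitiveservices.speech as speechsdk

with open("settings.json") as f:
    settings = json.load(f)

# "key" and "region" are assumed field names, not confirmed by the repo
speech_config = speechsdk.SpeechConfig(subscription=settings["key"],
                                       region=settings["region"])
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

print("Say something...")
result = recognizer.recognize_once()
print("Recognized:", result.text)
```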
- Improve robustness and reliability of gesture detection
- Connect config settings from UI to Python
- Add an exit program feature
- Add post-hackathon comments/documentation