Releases: isir/greta
FlipperDemo and GPT3
Please download the precompiled GRETA project using the following link:
https://drive.google.com/file/d/14k66BNWRrdRABc5RbFxpRTg8VdogV3iI/view?usp=share_link
Then simply launch run.bat; it will start MaryTTS and GRETA.
You need to install:
- Java 8
- Python 3
- Python's openai module
For GPT-3 you will need to edit the bin/gpt3.py file to insert your own API key (you can create a key from your OpenAI account). Once done, use the gpt3 configuration: write your questions in the text field on the left and send them with the "Send" button.
The text field on the right shows GPT-3's answer, which GRETA uses either to generate gestures (via NVBG) or simply as speech.
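For orientation, the key-insertion step can be sketched as below. This is a hypothetical illustration of the kind of request bin/gpt3.py sends, not the file's actual contents: the function name, the legacy `text-davinci-003` model, and the use of plain HTTP (instead of the openai module) are all assumptions made to keep the sketch self-contained.

```python
# Hypothetical sketch (not the real bin/gpt3.py): how a GPT-3 Completions
# request could be built once your own key is pasted in.
import json
import urllib.request

OPENAI_API_KEY = "sk-..."  # paste the key created from your OpenAI account


def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a request for the legacy GPT-3 Completions endpoint."""
    payload = {
        "model": "text-davinci-003",  # assumed legacy GPT-3 model
        "prompt": prompt,
        "max_tokens": 150,
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {OPENAI_API_KEY}",
        },
    )
```

The response text would then be forwarded to GRETA for gesture generation or speech.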
Precompiled version with NVBG, MaryTTS and OpenFace
Fixed voices.
New release (too big for GitHub) with OpenFace, available at: https://drive.google.com/file/d/1xsD3WyNaKUdYPDxloOSquip6BVHHXN23/view?usp=sharing
To add audio to video captures, install ffmpeg (https://drive.google.com/file/d/1AftSLewXyv4jGgF1GmK_fYVxPSBRf1ML/view?usp=sharing) and make sure ffmpeg can be run from cmd.
An ffmpeg process will be called to merge audio and video. If ffmpeg fails, another process will merge them using Xuggle (ffmpeg gives slightly better quality).
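The merge step is roughly equivalent to the following sketch (this is not GRETA's actual code; the file names and the AAC audio codec are placeholder assumptions):

```python
# Hedged sketch of the audio/video merge that GRETA delegates to ffmpeg.
import subprocess


def build_merge_command(video_path: str, audio_path: str, out_path: str) -> list:
    """Assemble an ffmpeg command that muxes an audio track into a video."""
    return [
        "ffmpeg", "-y",       # overwrite the output file if it exists
        "-i", video_path,     # captured video (no sound)
        "-i", audio_path,     # audio track (e.g. MaryTTS output)
        "-c:v", "copy",       # keep the video stream unchanged
        "-c:a", "aac",        # re-encode the audio to AAC (assumed codec)
        out_path,
    ]


def merge(video_path: str, audio_path: str, out_path: str) -> bool:
    """Run ffmpeg; return True on success (a Xuggle fallback would go here)."""
    cmd = build_merge_command(video_path, audio_path, out_path)
    return subprocess.run(cmd).returncode == 0
```

If `merge` returns False (ffmpeg missing or failing), GRETA falls back to Xuggle as described above.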
The videos will be created in the bin/video/ folder (if you use AUParserFileReader, the video is no longer created in the same folder as the input CSV file).