Effortlessly add openai/whisper AI-generated transcription subtitles to a given video.
You can choose whether to perform X->X speech recognition or X->English translation.
What is whisper?
Whisper is a state-of-the-art automatic speech recognition system from OpenAI, trained on 680,000 hours
of multilingual and multitask supervised data collected from the web. This large and diverse dataset leads to improved
robustness to accents, background noise and technical language.
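As a quick illustration outside of decipher, the openai-whisper package also installs a standalone whisper command you can use to try the model directly; a minimal sketch, assuming whisper is installed and speech.mp3 is a placeholder file name:
whisper speech.mp3 --model base --task transcribe
whisper speech.mp3 --model medium --task translate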
Requirements:
- Python 3.9+
- ffmpeg (see the install commands below)
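ffmpeg must be available on your PATH; common ways to install it, depending on your platform:
On Debian/Ubuntu:
sudo apt update && sudo apt install ffmpeg
On macOS with Homebrew:
brew install ffmpeg
On Windows with Chocolatey:
choco install ffmpeg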
Install decipher with pip:
pip install git+https://github.com/dsymbol/decipher
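To verify the install, the top-level help should list the available subcommands (assuming the decipher entry point ended up on your PATH):
$ decipher --help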
General command line usage help:
$ decipher transcribe --help
usage: decipher transcribe [-h] -i [-o] [--model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large}] [--task {transcribe,translate}] [--subs {add,burn}]
optional arguments:
  -h, --help            show this help message and exit
  -i , --input          input video file path e.g. video.mp4
  -o , --output         output directory path
  --model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large}
                        name of the whisper model to use
  --task {transcribe,translate}
                        whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')
  --subs {add,burn}, -s {add,burn}
                        whether to perform subtitle add or burn action
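For example, to produce English subtitles from non-English speech, pass --task translate (video.mp4 here is just a placeholder file name):
decipher transcribe -i video.mp4 --model medium --task translate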
$ decipher subtitle --help
usage: decipher subtitle [-h] -i [-o] --subs [--task {add,burn}]
optional arguments:
  -h, --help         show this help message and exit
  -i , --input       input video file path e.g. video.mp4
  -o , --output      output directory path
  --subs , -s        input subtitles path e.g. subtitle.srt
  --task {add,burn}  whether to perform subtitle add or burn action
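For reference, the add and burn actions correspond to the usual ffmpeg operations of muxing a soft subtitle track versus hardcoding the subtitles into the video stream. Roughly equivalent standalone commands (a sketch with placeholder file names, not necessarily decipher's exact invocation) would be:
Soft subtitle track (add):
ffmpeg -i video.mp4 -i subtitle.srt -c copy -c:s mov_text video_subbed.mp4
Hardcoded subtitles (burn):
ffmpeg -i video.mp4 -vf subtitles=subtitle.srt video_burned.mp4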
Examples:
Generate SRT subtitles for video.mp4:
decipher transcribe -i video.mp4 --model small
Burn the generated subtitles into video.mp4:
decipher subtitle -i video.mp4 -s video.srt --task burn
Transcribe and burn in one step, without reviewing the generated subtitles first:
decipher transcribe -i video.mp4 --model small --subs burn
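The same applies to translation: combining --task translate with --subs burn should translate the speech to English and burn the resulting subtitles in a single pass (a sketch based on the flags listed in the help above):
decipher transcribe -i video.mp4 --model medium --task translate --subs burn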