If you are interested in contributing to ESPnet, your contributions will fall into three categories:
- You want to propose a new feature and implement it: post about your intended feature in the issues, or contact [email protected]. We will discuss the design and implementation. Once we agree that the plan looks good, go ahead and implement it. You can find ongoing major development plans at https://github.com/espnet/espnet/milestones
- You want to implement a minor feature or bug fix for an issue: please first take a look at the existing issues (https://github.com/espnet/espnet/issues) and/or pull requests (https://github.com/espnet/espnet/pulls). Pick an issue and comment that you want to work on it. If you need more context on a particular issue, please ask us and we will provide more information. We also welcome PRs that fix bugs you have found.
- ESPnet provides and maintains a lot of reproducible examples similar to Kaldi (called `recipes`). Recipe creation/updates/bug fixes are one of our major development items, and we really encourage you to work on them.
  - When you port a Kaldi recipe to ESPnet, see https://github.com/espnet/espnet/wiki/How-to-port-the-Kaldi-recipe-to-the-ESPnet-recipe%3F
  - We also encourage you to report your results with detailed environment information and upload the model for reproducibility (e.g., see https://github.com/espnet/espnet/blob/master/egs/tedlium2/asr1/RESULTS.md).
To make a report for `RESULTS.md`:
- Execute `get_sys_info.sh` in a recipe main directory (where `run.sh` is located), as follows. You'll get environment information in markdown format.
  $ get_sys_info.sh
- Execute `pack_model.sh` in a recipe main directory as follows. You'll get model information and results in markdown format.
  $ pack_model.sh --lm <language model> --results "<dev_result.txt> <test_result.txt>" <tr_conf> <dec_conf> <cmvn> <e2e>
- Please update your results in `RESULTS.md` based on the markdown documents generated by the above scripts. `pack_model.sh` also produces a packed ESPnet model (`model.tar.gz`). If you upload this model somewhere with a download link, please put the link in `RESULTS.md`.
  - Please contact Shinji Watanabe ([email protected]) if you need web storage for your model files.
Once you finish implementing a feature or bug fix, please send a Pull Request to https://github.com/espnet/espnet.
If you are not familiar with creating a Pull Request, here are some guides:
- http://stackoverflow.com/questions/14680711/how-to-do-a-github-pull-request
- https://help.github.com/articles/creating-a-pull-request/
We basically maintain the `master` and `v.0.X.0` branches for our major developments.
- We will keep the first version digit `0` until we have some super major changes at the project organization level.
- The second version digit will be updated when we have major updates, including new functions, refactoring, and their related bug fixes and recipe changes. This version update has been done roughly every half year so far (but it depends on the development plan). It is developed on the `v.0.X.0` branch to avoid confusion in the `master` branch.
- The third version digit will be updated when we fix serious bugs or periodically accumulate minor changes, including recipe-related changes (every two months or so). These are developed on the `master` branch, and the changes are also reflected in the `v.0.X.0` branch frequently.
TBD
ESPnet's tests are located under `test/`. You can install additional packages for testing as follows:
$ cd <espnet_root>
$ pip install -e ".[test]"
Then you can run the entire test suite with
$ pytest
To create a new test file, write functions named like `def test_yyy(...)` in files like `test_xxx.py` under `test/`.
Pytest will automatically discover and run them.
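For example, a minimal test file might look like this. The filename, function, and logic are hypothetical, shown only to illustrate the naming convention pytest relies on:

```python
# test/test_example.py -- illustrative only; this file is not part of ESPnet.

def add_one(x):
    # Toy function standing in for the real code under test.
    return x + 1


def test_add_one():
    # Pytest collects this automatically: the filename matches
    # test_*.py and the function name starts with test_.
    assert add_one(1) == 2
    assert add_one(-1) == 0
```

Running `pytest test/test_example.py` would then execute `test_add_one` on its own.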
We also recommend you follow our coding style, which can be checked as:
$ flake8 espnet test
$ autopep8 -r espnet test --global-config .pep8 --diff --max-line-length 120 | tee check_autopep8
$ test ! -s check_autopep8
You can find pytest fixtures in `test/conftest.py`. They finalize unit tests.
You can also test the scripts in `utils/` with bats-core and shellcheck.
To test:
./ci/test_bash.sh
- `setup.cfg` configures pytest and flake8.
- `.travis.yml` configures Travis CI.
See doc.
Pack your trained models using `utils/pack_model.sh` and upload them here (you need permission).
Add the shared link to `utils/recog_wav.sh` as follows:
"tedlium.demo") share_url="https://drive.google.com/open?id=1UqIY6WJMZ4sxNxSugUqp3mrGb3j6h7xe" ;;
The model name is arbitrary for now.
See matplotlib's guideline: https://matplotlib.org/devel/portable_code.html. We do not block your PR even if it is not portable.
- read log from PR checks > details
- turn on Rerun workflow > Rerun job with SSH
- open your local terminal and run `ssh -p xxx xxx` (check the CircleCI log for the exact address)
- try anything you can to pass the CI