This folder holds example Jupyter notebooks for the Encoder-LSTM-Decoder network. You can also use the Python scripts to train your own LSTM model or to improve the existing approach.
Predicter_VAE_LSTM_Many2One: predicts an 8th sequence based on 7 input sequences of a MIDI file.
Predicter_VAE_LSTM_Many2Many: predicts the next 4 sequences based on 4 input sequences of a MIDI file.
To use the examples, activate the virtual environment and start Jupyter Notebook from the root of this project:
jupyter notebook
Then navigate to this folder and open the Predicter_VAE_LSTM_Many2One or Predicter_VAE_LSTM_Many2Many notebook for the corresponding model. Note that you need a song in MIDI format to feed to the network.
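As a rough illustration, the sketch below shows one way a MIDI song could be turned into fixed-length note sequences before being fed to the network. The pretty_midi library, the file name my_song.mid, and the sequence length of 16 notes per sequence are assumptions for illustration only; the actual preprocessing used in the notebooks may differ.

```python
# Minimal sketch (not the project's actual pipeline) of preparing a MIDI file
# as model input. pretty_midi, "my_song.mid", and SEQUENCE_LENGTH = 16 are
# illustrative assumptions.
import numpy as np
import pretty_midi

SEQUENCE_LENGTH = 16  # assumed number of notes per input sequence

# Load a song in MIDI format and flatten it to a list of note pitches
midi = pretty_midi.PrettyMIDI("my_song.mid")
notes = [note.pitch
         for instrument in midi.instruments
         for note in instrument.notes]

# Split the pitch list into fixed-length sequences for the encoder
n_sequences = len(notes) // SEQUENCE_LENGTH
sequences = np.array(notes[:n_sequences * SEQUENCE_LENGTH]).reshape(
    n_sequences, SEQUENCE_LENGTH)

print(f"Prepared {n_sequences} sequences of {SEQUENCE_LENGTH} notes each")
```

For the many-to-one notebook, 7 such sequences would form one input window and the model would predict the 8th; for the many-to-many notebook, 4 sequences in would yield 4 predicted sequences out.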