- About The Project
- Features
- Project Structure
- Getting Started
- Training
- Fine-Tuning
- Some Examples
- License
- About Me
MGTLR offers a streamlined yet comprehensive implementation of music generation using Transformer, LSTM, and RNN architectures. The project is designed to provide a clear, structured approach to neural-network development for music generation, making it suitable for both educational and practical applications.
The same methodology can be applied to other kinds of audio and songs by adapting the model's input pipeline (specifically, the tokenizer) to the new format.
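The tokenizer is the main extension point: any music format that can be mapped to and from a sequence of integer token ids can feed the same models. Below is a minimal, hypothetical sketch of that contract; the names are illustrative, not the repository's API, and the real implementations live under `tokenizing/tokenizer/` (`KerasTokenizer.py`, `MGTokenizer.py`, `NaiveTokenizer.py`).

```python
# Hypothetical sketch of the tokenizer contract described above; not the repository's API.
from typing import List, Protocol

class SongTokenizer(Protocol):
    def encode(self, song: str) -> List[int]:
        """Map a symbolic song (e.g. ABC-style text) to integer token ids."""
        ...

    def decode(self, token_ids: List[int]) -> str:
        """Map generated token ids back to the symbolic format."""
        ...

class CharTokenizer:
    """Minimal character-level example that satisfies the contract."""
    def __init__(self, corpus: str):
        self.vocab = sorted(set(corpus))
        self.stoi = {ch: i for i, ch in enumerate(self.vocab)}

    def encode(self, song: str) -> List[int]:
        return [self.stoi[ch] for ch in song]

    def decode(self, token_ids: List[int]) -> str:
        return "".join(self.vocab[i] for i in token_ids)

tok = CharTokenizer("ABCDEFG|:")
ids = tok.encode("C|D")
assert tok.decode(ids) == "C|D"
```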
- Modular Design: Clear separation of components such as data processing, model architecture, and training scripts.
- Visualization of Annotations: Enables checking and testing annotations.
- Download Capability: Allows users to download their favorite generated songs.
- Customizable: Easily adapt the architecture and data pipelines for different datasets and applications.
- Poetry for Dependency Management: Utilizes Poetry for straightforward and dependable package management.
MGTLR
│
├── generated_songs
├── music_generator
│   ├── app
│   ├── core
│   ├── data
│   ├── model
│   │   ├── LSTM
│   │   ├── TRF
│   │   └── RNN
│   └── src
│       ├── checkpoints
│       ├── dataset
│       └── tokenizer
├── tests
│   ├── model
│   └── tokenizer
├── tokenizing
│   └── tokenizer
│       ├── __init__.py
│       ├── KerasTokenizer.py
│       ├── MGTokenizer.py
│       ├── NaiveTokenizer.py
│       ├── train_keras_tokenizer.py
│       └── train_mg_tokenizer.py
├── finetune.py
├── train.py
├── .gitignore
└── README.md
Follow these simple steps to get a local copy up and running.
- Clone the repository
git clone https://github.com/benisalla/mg-transformer.git
- Install dependencies using Poetry
poetry install
- Activate the Poetry shell to set up your virtual environment
poetry shell
To launch the Streamlit application, execute the following command:
poetry run streamlit run music_generator/app/main.py
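The app under `music_generator/app/` exposes generation, playback, and download controls. The sketch below shows what such a minimal Streamlit entry point can look like; the widget layout and the `generate` helper are assumptions for illustration, not the repository's actual interface.

```python
# Hypothetical minimal Streamlit front end illustrating the flow of
# music_generator/app/main.py; generate() is a placeholder, not the real API.
import streamlit as st

st.title("MGTLR - Music Generator")

arch = st.selectbox("Architecture", ["Transformer", "LSTM", "RNN"])
length = st.slider("Length (tokens)", min_value=64, max_value=1024, value=256)

if st.button("Generate"):
    # The real app would call the selected model here, e.g.:
    # audio_bytes = generate(arch, length)   # hypothetical helper
    audio_bytes = b"RIFF...placeholder..."   # stand-in so the sketch stays self-contained
    st.audio(audio_bytes, format="audio/wav")
    st.download_button("Download song", data=audio_bytes, file_name="song.wav")
```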
To train the model using the default configuration, execute the following command:
poetry run python train.py
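The default configuration itself is defined inside `train.py`; the dataclass below is only a hypothetical illustration of the kind of hyperparameters involved, with field names and values that are assumptions rather than the repository's defaults.

```python
# Illustrative only: a hypothetical view of typical training hyperparameters.
# The actual names and defaults are defined by train.py and the model configs.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    arch: str = "TRF"            # "TRF", "LSTM", or "RNN", mirroring music_generator/model
    vocab_size: int = 512        # determined by the chosen tokenizer
    context_len: int = 256       # tokens of musical context per training sample
    batch_size: int = 64
    lr: float = 3e-4
    max_steps: int = 10_000
    checkpoint_dir: str = "music_generator/src/checkpoints"

print(TrainConfig())
```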
To fine-tune a pre-trained model:
poetry run python finetune.py
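Conceptually, fine-tuning restores a pre-trained checkpoint, lowers the learning rate, and continues next-token training on new material. The sketch below illustrates that idea assuming a PyTorch backend, which is purely an assumption made here; the model, paths, and checkpoint format are stand-ins, and `finetune.py` defines the real procedure.

```python
# Hypothetical fine-tuning sketch (assumes PyTorch; not the repository's code).
import torch
import torch.nn as nn

class TinyMusicLM(nn.Module):
    """Stand-in model; the real architectures live under music_generator/model."""
    def __init__(self, vocab_size: int = 512, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        x, _ = self.rnn(x)
        return self.head(x)

model = TinyMusicLM()
# Restoring pre-trained weights would look roughly like this (path is illustrative):
# state = torch.load("music_generator/src/checkpoints/pretrained.pt")
# model.load_state_dict(state)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # smaller LR than pre-training
criterion = nn.CrossEntropyLoss()

tokens = torch.randint(0, 512, (4, 33))          # dummy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # next-token prediction setup

optimizer.zero_grad()
logits = model(inputs)
loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
loss.backward()
optimizer.step()
```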
Here are some examples of songs generated by the MGTLR model:
(Video clips embedded in the repository README: abs_song_2024-07-09_11-05-19_2.mp4, abs_song_2024-07-09_11-05-20_5.mp4, nice-nice.mp4, and 1.mp4 through 7.mp4.)
This project is made available under fair use guidelines. While there is no formal license associated with the repository, users are encouraged to credit the source if they utilize or adapt the code in their work. This approach promotes ethical practices and contributions to the open-source community. For citation purposes, please use the following:
@misc{mg_transformer_2024,
  title={MGTLR: Music Generator using Transformer, LSTM, and RNN},
  author={Ben Alla Ismail},
  year={2024},
  url={https://github.com/benisalla/mg-transformer}
}
🎓 Ismail Ben Alla - Neural Network Enthusiast
As a dedicated advocate for artificial intelligence, I am deeply committed to exploring its potential to address complex challenges and to further our understanding of the universe. My academic and professional pursuits reflect a relentless dedication to advancing knowledge in AI, deep learning, and machine learning technologies.
- Innovation in AI: Driven to expand the frontiers of technology and unlock novel insights.
- Lifelong Learning: Actively engaged in mastering the latest technological developments.
- Future-Oriented Vision: Fueled by the transformative potential and future prospects of AI.
I am profoundly passionate about my work and optimistic about the future contributions of AI and machine learning.
Let's connect and explore the vast potential of artificial intelligence together!