
In this repository, I've crafted an advanced music generator using RNNs, LSTMs, and Transformers. The aim was to explore and understand the intricacies and efficacy of these neural network architectures, providing insights into their potential applications in the realm of creative AI.




MGTLR: Music Generator using Transformer, LSTM, and RNN

Implementing a music generation model using Transformer, LSTM, and RNN architectures from scratch.


About The Project

Interface of our app

MGTLR offers a streamlined, yet comprehensive, implementation of music generation using Transformer, LSTM, and RNN architectures. This project is designed to provide a clear, structured approach to neural network development for music generation, making it suitable for both educational and practical applications.

The same methodology can be extended to other kinds of audio and songs by adapting the model's input stage (specifically, the tokenizer) to the new format.
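To make the tokenizer's role concrete, here is a minimal character-level tokenizer sketch. It is illustrative only, written in the spirit of the repository's NaiveTokenizer; the actual class in `tokenizing/tokenizer/NaiveTokenizer.py` may differ in API and behavior.

```python
class NaiveCharTokenizer:
    """Toy character-level tokenizer: one integer id per unique character.

    Illustrative sketch only -- not the repository's actual NaiveTokenizer.
    """

    def __init__(self, corpus: str):
        # Build a deterministic vocabulary from the training corpus.
        vocab = sorted(set(corpus))
        self.stoi = {ch: i for i, ch in enumerate(vocab)}  # char -> id
        self.itos = {i: ch for ch, i in self.stoi.items()}  # id -> char

    def encode(self, text: str) -> list[int]:
        return [self.stoi[ch] for ch in text]

    def decode(self, ids: list[int]) -> str:
        return "".join(self.itos[i] for i in ids)
```

Swapping in a different musical notation (e.g. ABC text vs. MIDI events) mainly means replacing this encode/decode pair while keeping the model unchanged.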


Features

  • Modular Design: Clear separation of components such as data processing, model architecture, and training scripts.
  • Visualization of Annotations: inspect and verify annotations interactively.
  • Download Capability: Allows users to download their favorite generated songs.
  • Customizable: Easily adapt the architecture and data pipelines for different datasets and applications.
  • Poetry for Dependency Management: Utilizes Poetry for straightforward and dependable package management.

Project Structure

MGTLR
│
├── generated_songs          
├── music_generator           
│   ├── app              
│   ├── core                
│   ├── data                
│   ├── model                 
│   │   ├── LSTM
│   │   ├── TRF
│   │   └── RNN
│   └── src                
│       ├── checkpoints     
│       ├── dataset           
│       └── tokenizer         
├── tests                    
│   ├── model                 
│   └── tokenizer             
├── tokenizing               
│   └── tokenizer           
│       ├── __init__.py       
│       ├── KerasTokenizer.py 
│       ├── MGTokenizer.py    
│       ├── NaiveTokenizer.py 
│       ├── train_keras_tokenizer.py 
│       └── train_mg_tokenizer.py    
├── finetune.py              
├── train.py                  
├── .gitignore                
└── README.md                 

Getting Started

Follow these simple steps to get a local copy up and running.

Installation

  1. Clone the repository
    git clone https://github.com/benisalla/mg-transformer.git
  2. Install dependencies using Poetry
    poetry install
  3. Activate the Poetry shell to set up your virtual environment
    poetry shell

Running the Application

To launch the Streamlit application, execute the following command:

poetry run streamlit run music_generator/app/main.py

Training

How to Run Training

To train the model using the default configuration, execute the following command:

poetry run python train.py
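All three architectures (RNN, LSTM, Transformer) are trained on the same objective: given a window of tokens, predict the next token at every position. A minimal sketch of how a token stream is sliced into (input, target) pairs, with hypothetical names not taken from the repo's code:

```python
def make_training_pairs(token_ids: list[int], context_len: int):
    """Slice a token stream into (input, target) windows for next-token prediction.

    The target window is the input window shifted one position to the right,
    so the model learns to predict token i+1 from tokens up to i.
    """
    pairs = []
    for i in range(len(token_ids) - context_len):
        x = token_ids[i : i + context_len]
        y = token_ids[i + 1 : i + context_len + 1]  # shifted by one token
        pairs.append((x, y))
    return pairs
```

The actual `train.py` will batch and tensorize these windows, but the shift-by-one pairing is the core of the objective.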

Results of Training Different Models

  1. Transformer (TRF)

    Transformer Training Results

  2. Recurrent Neural Network (RNN)

    RNN Training Results

  3. Long Short-Term Memory (LSTM)

    LSTM Training Results

Fine-Tuning

To fine-tune a pre-trained model:

poetry run python finetune.py

Examples of Songs Generated

Here are some examples of songs generated by the MGTLR model:

abs_song_2024-07-09_11-05-19_2.mp4
abs_song_2024-07-09_11-05-20_5.mp4
nice-nice.mp4
1.mp4
2.mp4
3.mp4
4.mp4
5.mp4
6.mp4
7.mp4
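Clips like those above come from autoregressive sampling: at each step the trained model outputs scores (logits) over the vocabulary, and the next token is drawn from the resulting distribution, typically with a temperature knob controlling randomness. A self-contained sketch of that sampling step, assuming a plain list of logits (hypothetical helper, not the repo's API):

```python
import math
import random


def sample_next(logits: list[float], temperature: float = 1.0, rng=random) -> int:
    """Draw a token index from a logits vector after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more surprising music).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1
```

Generation then loops: feed the context to the model, call `sample_next` on its logits, append the token, and repeat until the piece reaches the desired length.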

License

This project is made available under fair use guidelines. While there is no formal license associated with the repository, users are encouraged to credit the source if they utilize or adapt the code in their work. This approach promotes ethical practices and contributions to the open-source community. For citation purposes, please use the following:

@misc{mg_transformer_2024,
  title={MGTLR: Music Generator using Transformer, LSTM, and RNN},
  author={Ben Alla Ismail},
  year={2024},
  url={https://github.com/benisalla/mg-transformer}
}

About Me

🎓 Ismail Ben Alla - Neural Network Enthusiast

As a dedicated advocate for artificial intelligence, I am deeply committed to exploring its potential to address complex challenges and to further our understanding of the universe. My academic and professional pursuits reflect a relentless dedication to advancing knowledge in AI, deep learning, and machine learning technologies.

Core Motivations

  • Innovation in AI: Driven to expand the frontiers of technology and unlock novel insights.
  • Lifelong Learning: Actively engaged in mastering the latest technological developments.
  • Future-Oriented Vision: Fueled by the transformative potential and future prospects of AI.

I am profoundly passionate about my work and optimistic about the future contributions of AI and machine learning.

Let's connect and explore the vast potential of artificial intelligence together!


🎵✨🎶 Hit play and let the magic begin—watch notes turn into symphonies! 🎵✨🎶
