Hello many worlds!
This repo will serve as a handy reference throughout my capstone project, in which I aim to implement novel sound generation using Quantum Generative Adversarial Networks (QGANs). As a starting point, I will be looking at Qiskit's existing QGAN implementation, which has so far been applied to problems in finance.
I will update the literature list below periodically and add developments to the project whenever there is progress. More information can be found in the project proposals.
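
As a rough orientation, the sketch below shows how that Qiskit qGAN is typically driven. The data, bounds, qubit count, and hyperparameters are placeholders, and the module paths and signatures are assumptions tied to the (now-deprecated) qiskit-aqua API used in the finance tutorial, not a definitive recipe.

```python
# Minimal sketch of driving Qiskit's qGAN (the implementation behind the
# qiskit-aqua finance tutorial on loading random distributions).
# NOTE: module paths, constructor arguments, and attribute names are
# assumptions based on qiskit-aqua ~0.7/0.8; later Qiskit releases deprecate
# and relocate this API, so treat this as a sketch, not a drop-in script.
import numpy as np
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import QGAN
from qiskit.aqua.components.neural_networks import NumPyDiscriminator

# Placeholder training data: log-normal samples, the same toy target the
# finance tutorial uses. For this project it would eventually be replaced by
# audio-derived features (e.g. a distribution over amplitudes or spectral bins).
real_data = np.random.lognormal(mean=1.0, sigma=1.0, size=1000)
bounds = np.array([0.0, 3.0])   # range of the discretized sample space
num_qubits = [2]                # 2 qubits -> 4 discrete values

# A parameterized quantum circuit acts as the generator; a classical network
# acts as the discriminator. The two are trained adversarially for a few epochs.
qgan = QGAN(real_data,
            bounds,
            num_qubits,
            batch_size=100,
            num_epochs=10,
            discriminator=NumPyDiscriminator(len(num_qubits)),
            snapshot_dir=None)

quantum_instance = QuantumInstance(BasicAer.get_backend("statevector_simulator"))
result = qgan.run(quantum_instance)

# Relative entropy between the generated and target distributions per epoch.
print(qgan.rel_entr)
```

What matters here is the shape of the training loop rather than the exact API: a quantum generator learns to reproduce a target distribution against a classical discriminator, and that is the structure I intend to adapt from distributions over asset prices to distributions over audio data.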
- Van den Oord, Aaron, and Sander Dieleman. “WaveNet: A generative model for raw audio.” DeepMind.com, 8 Sep. 2016, https://deepmind.com/blog/article/wavenet-generative-model-raw-audio.
- Putz, Volkmar, and Karl Svozil. “Quantum Music.” Soft Computing, vol. 21, no. 6, 2015, pp. 1467–1471.
- Engel, Jesse, et al. “Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders.” 2017.
- Engel, Jesse, et al. “GANSynth: Adversarial Neural Audio Synthesis.” ArXiv.org, 15 Apr. 2019, arxiv.org/abs/1902.08710.
- Miranda, Eduardo Reck, editor. Guide to Unconventional Computing for Music. Springer, 2017.
- Roberts, Adam, et al. “A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music.” ArXiv.org, 11 Nov. 2019, arxiv.org/abs/1803.05428.
- Dhariwal, Prafulla, et al. “Jukebox: A Generative Model for Music.” ArXiv.org, 30 Apr. 2020, https://arxiv.org/abs/2005.00341.
- Velardo, Valerio. “Deep Learning (For Audio) With Python.” GitHub repository, 5 Feb. 2020, https://github.com/musikalkemist/DeepLearningForAudioWithPython.
- Dong, Hao-Wen, et al. “MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment.” ArXiv.org, 19 Sep. 2017, https://arxiv.org/abs/1709.06298.