Abstract
Despite their impressive offline results, deep learning models for symbolic music generation are not widely used in live performances due to a deficit of musically meaningful control parameters and a lack of structured musical form in their outputs. To address these issues, we introduce LooperGP, a method for steering a Transformer-XL model towards generating loopable musical phrases of a specified number of bars and time signature, enabling a tool for live coding performances. We show that by training LooperGP on a dataset of 93,681 musical loops extracted from the DadaGP dataset [22], we can steer its output to produce three times as many loopable phrases as our baseline. In a subjective listening test conducted with 31 participants, LooperGP loops achieved positive median ratings in originality, musical coherence and loop smoothness, demonstrating its potential as a performance tool.
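As a rough illustration of the kind of conditioning the abstract describes, control information such as the desired bar count and time signature can be prepended to the token sequence fed to a sequence model. This is a hypothetical sketch only: the token names (`bars:`, `timesig:`, `note:`) and the `build_prompt` helper are invented for illustration and are not the paper's actual DadaGP token vocabulary.

```python
def build_prompt(n_bars, time_sig, prime_tokens):
    """Prepend hypothetical control tokens for the requested number of
    bars and time signature, so a sequence model conditioned on them
    can steer its continuation toward that structure."""
    control = [f"bars:{n_bars}", f"timesig:{time_sig}"]
    return control + list(prime_tokens)

# Request a 4-bar phrase in 4/4, primed with two (made-up) note tokens.
prompt = build_prompt(4, "4/4", ["note:E3", "note:G3"])
print(prompt)  # ['bars:4', 'timesig:4/4', 'note:E3', 'note:G3']
```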
Notes
1. We focus here on loops where the exact same content is repeated, but it is worth noting that a more general definition could encompass loops in which certain types of musical variations occur across repetitions (e.g. modulation).
2. Link to listening test excerpts: https://drive.google.com/drive/folders/1I0MCPYjj8nXqKkmDN-d-C2ETOHJpCZyn?usp=share_link.
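The "exact same content" definition of a loop in Note 1 can be sketched as a simple check for a run of bars that is immediately repeated verbatim. This is an illustrative toy (the function name and bar representation are assumptions, not the paper's extraction algorithm):

```python
def find_exact_loops(bars, min_len=2):
    """Return (start, length) pairs where `length` consecutive bars are
    immediately followed by a verbatim repetition of themselves — the
    exact-content loop of Note 1; variations across repeats are ignored."""
    loops = []
    n = len(bars)
    for length in range(min_len, n // 2 + 1):
        for start in range(n - 2 * length + 1):
            if bars[start:start + length] == bars[start + length:start + 2 * length]:
                loops.append((start, length))
    return loops

# Toy example: each string stands in for the token content of one bar.
song = ["A", "B", "A", "B", "C"]
print(find_exact_loops(song))  # [(0, 2)] — bars 0-1 repeat as bars 2-3
```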
References
Statsmodels (2022). https://github.com/statsmodels/statsmodels. Accessed 14 Aug 2022
Ackley, D.H., Hinton, G.E., Sejnowski, T.J.: A learning algorithm for Boltzmann machines. Cognit. Sci. 9, 147–169 (1985)
Ariza, C.: The interrogator as critic: the turing test and the evaluation of generative music. Comput. Music. J. 33, 48–70 (2009)
Bishop, P.A., Herron, R.L.: Use and misuse of the Likert item responses and other ordinal measures. Int. J. Exerc. Sci. 8, 297–302 (2015)
Briot, J.P., Hadjeres, G., Pachet, F.D.: Deep Learning Techniques for Music Generation, vol. 1. Springer, Cham (2020). doi: https://doi.org/10.1007/978-3-319-70163-9
Brown, A.R., Sorensen, A.: Interacting with generative music through live coding. Contemp. Music. Rev. 28, 17–29 (2009)
Chandna, P., Ramires, A., Serra, X., Gómez, E.: LoopNet: musical loop synthesis conditioned on intuitive musical parameters. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3395–3399. IEEE (2021)
Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019)
Hadjeres, G., Pachet, F., Nielsen, F.: DeepBach: a steerable model for Bach chorales generation. In: International Conference on Machine Learning, pp. 1362–1371 (2017)
Hsu, J.L., Liu, C.C., Chen, A.L.: Discovering nontrivial repeating patterns in music data. IEEE Trans. Multimedia 3, 311–325 (2001)
Huang, Y.S., Yang, Y.H.: Pop music transformer: beat-based modeling and generation of expressive pop piano compositions. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 1180–1188 (2020)
Ji, S., Luo, J., Yang, X.: A comprehensive survey on deep music generation: Multi-level representations, algorithms, evaluations, and future directions. arXiv preprint arXiv:2011.06801 (2020)
Lan, Q., Tørresen, J., Jensenius, A.R.: Raveforce: a deep reinforcement learning environment for music. In: Proceedings of the SMC conferences, pp. 217–222. Society for Sound and Music Computing (2019)
Magnusson, T.: Sonic Writing: Technologies of Material, Symbolic, and Signal Inscriptions. Bloomsbury Publishing USA (2019)
McCartney, J.: SuperCollider: a new real-time sound synthesis language. In: Proceedings of the International Computer Music Conference, pp. 257–258 (1996)
McLean, A., Wiggins, G.: Tidal-pattern language for the live coding of music. In: Proceedings of the 7th Sound and Music Computing Conference, pp. 331–334 (2010)
Meteyard, L., Davies, R.A.: Best practice guidance for linear mixed-effects models in psychological science. J. Mem. Lang. 112, 104092 (2020)
Mueller, A.: Word cloud (2022). https://github.com/amueller/word_cloud. Accessed 14 Aug 2022
Müllensiefen, D., Gingras, B., Musil, J., Stewart, L.: The musicality of non-musicians: an index for assessing musical sophistication in the general population. PLoS ONE 9(2), e89642 (2014)
Nilson, C.: Live coding practice. In: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, pp. 112–117 (2007)
Ramires, A., et al.: The freesound loop dataset and annotation tool. arXiv preprint arXiv:2008.11507 (2020)
Sarmento, P., Kumar, A., Carr, C., Zukowski, Z., Barthet, M., Yang, Y.H.: DadaGP: a dataset of tokenized GuitarPro songs for sequence models. In: Proceedings of the 22nd International Society for Music Information Retrieval Conference (2021)
Shih, Y.J., Wu, S.L., Zalkow, F., Müller, M., Yang, Y.H.: Theme Transformer: symbolic music generation with theme-conditioned transformer. IEEE Trans. Multimedia (2022)
Stewart, J., Lawson, S.: CIBO: an autonomous TidalCycles performer. In: Proceedings of the Fourth International Conference on Live Coding, p. 353 (2019)
Sullivan, G.M., Artino, A.R.: Analyzing and interpreting data from Likert-type scales. J. Grad. Med. Educ. 5, 541–542 (2013)
Wu, C.H.: An empirical study on the transformation of Likert-scale data to numerical scores. Appl. Math. Sci. 1, 2851–2862 (2007)
Acknowledgements
This work has been partly supported by the EPSRC UKRI Centre for Doctoral Training in Artificial Intelligence and Music (Grant no. EP/S022694/1).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Adkins, S., Sarmento, P., Barthet, M. (2023). LooperGP: A Loopable Sequence Model for Live Coding Performance Using GuitarPro Tablature. In: Johnson, C., Rodríguez-Fernández, N., Rebelo, S.M. (eds) Artificial Intelligence in Music, Sound, Art and Design. EvoMUSART 2023. Lecture Notes in Computer Science, vol 13988. Springer, Cham. https://doi.org/10.1007/978-3-031-29956-8_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-29955-1
Online ISBN: 978-3-031-29956-8