Abstract
Loopable music generation systems enable diverse applications, but they often lack controllability and customization capabilities. We argue that enhancing controllability can enrich these models, and that emotional expression is a crucial aspect for both creators and listeners. Building upon LooperGP, a loopable tablature generation model, this paper therefore explores endowing such systems with control over the emotions they convey. To enable this conditional generation, we propose integrating musical knowledge by utilizing multi-granular semantic and musical features during model training and inference. Specifically, we combine song-level features (Emotion Labels, Tempo, and Mode) with bar-level features (Tonal Tension) to guide emotional expression. Through algorithmic and human evaluations, we demonstrate the approach’s effectiveness in producing music that conveys two contrasting target emotions, happiness and sadness. An ablation study is also conducted to identify the factors contributing to these results.
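As a rough illustration of the song-level conditioning described above, the sketch below prepends control tokens (emotion label, tempo class, mode) to a DadaGP-style token sequence, in the spirit of CTRL-style control codes. All token names and the tempo threshold are illustrative assumptions, not the paper's actual vocabulary or scheme.

```python
# Minimal sketch of song-level conditioning, assuming a DadaGP-style
# token vocabulary. Token names (emotion:happy, tempo:fast, mode:major)
# and the 120 BPM threshold are illustrative, not the paper's actual scheme.

def add_song_level_conditions(tokens, emotion, tempo_bpm, mode):
    """Prepend song-level control tokens to a tokenized song."""
    tempo_class = "tempo:fast" if tempo_bpm >= 120 else "tempo:slow"
    header = [f"emotion:{emotion}", tempo_class, f"mode:{mode}"]
    return header + tokens

song = ["new_measure", "note:e4", "note:g4", "new_measure", "note:a4"]
print(add_song_level_conditions(song, "happy", 132, "major"))
# ['emotion:happy', 'tempo:fast', 'mode:major', 'new_measure', 'note:e4', ...]
```

At inference time, the same header tokens would serve as the prompt prefix, so the model continues the sequence in a manner consistent with the requested emotion.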
Notes
1. Only major and minor modes were considered in this study.
2. new_measure is the token representing the start of a new bar; a sketch of how bar boundaries can carry conditioning tokens follows these notes.
3. There is also a score for low valence/arousal in the final layer.
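Complementing note 2, the following hypothetical sketch shows how bar-level tonal-tension values (computed per bar, e.g., with a spiral-array-based tool such as midi-miner) might be quantized and interleaved as tokens after each new_measure token; the four-bucket scheme and token names are assumptions for illustration, not the paper's actual design.

```python
# Hypothetical sketch: interleave a quantized tension token after each
# new_measure token. Per-bar tension values in [0, 1] are assumed to come
# from an external tool (e.g., midi-miner); the bucketing is illustrative.

def insert_tension_tokens(tokens, bar_tensions):
    """Insert one quantized tension token after each bar-start token."""
    out, bar = [], 0
    for tok in tokens:
        out.append(tok)
        if tok == "new_measure" and bar < len(bar_tensions):
            level = min(int(bar_tensions[bar] * 4), 3)  # 4 coarse buckets
            out.append(f"tension:{level}")
            bar += 1
    return out

print(insert_tension_tokens(
    ["new_measure", "note:e4", "new_measure", "note:a4"], [0.2, 0.9]))
# ['new_measure', 'tension:0', 'note:e4', 'new_measure', 'tension:3', 'note:a4']
```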
References
Adkins, S., Sarmento, P., Barthet, M.: LooperGP: a loopable sequence model for live coding performance using GuitarPro tablature. In: Johnson, C., Rodríguez-Fernández, N., Rebelo, S.M. (eds.) EvoMUSART 2023. LNCS, vol. 13988, pp. 3–19. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-29956-8_1
Alain, G., Chevalier-Boisvert, M., Osterrath, F., Piche-Taillefer, R.: DeepDrummer: generating drum loops using deep learning and a human in the loop. In: The 2020 Joint Conference on AI Music Creativity (2020)
Blood, A.J., Zatorre, R.J., Bermudez, P., Evans, A.C.: Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat. Neurosci. 2(4), 382–387 (1999)
Chew, E.: Mathematical and Computational Modeling of Tonality: Theory and Applications. Springer, New York (2014)
Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988. Association for Computational Linguistics, Florence, Italy (2019). https://doi.org/10.18653/v1/P19-1285, https://aclanthology.org/P19-1285
Dalla Bella, S., Peretz, I., Rousseau, L., Gosselin, N.: A developmental study of the affective value of tempo and mode in music. Cognition 80(3), B1–B10 (2001)
Daynes, H.: Listeners’ perceptual and emotional responses to tonal and atonal music. Psychol. Music 39(4), 468–502 (2011)
Fernández-Sotos, A., Fernández-Caballero, A., Latorre, J.M.: Influence of tempo and rhythmic unit in musical emotion regulation. Front. Comput. Neurosci. 10, 80 (2016)
Ferreira, L.N., Whitehead, J.: Learning to generate music with sentiment. In: Proceedings of the 20th International Society for Music Information Retrieval Conference, pp. 384–390 (2019)
Grekow, J., Dimitrova-Grekow, T.: Monophonic music generation with a given emotion using conditional variational autoencoder. IEEE Access 9, 129088–129101 (2021)
Han, S., Ihm, H., Lee, M., Lim, W.: Symbolic music loop generation with neural discrete representations. In: Proceedings of the 23rd International Society for Music Information Retrieval Conference (2022)
Han, S., Ihm, H., Lim, W.: Symbolic music loop generation with VQ-VAE. arXiv preprint arXiv:2111.07657 (2021)
Herremans, D., Chew, E.: Tension ribbons: quantifying and visualising tonal tension. In: Proceedings of the International Conference on Technologies for Music Notation and Representation (TENOR) (2016)
Hsu, J.L., Liu, C.C., Chen, A.L.: Discovering nontrivial repeating patterns in music data. IEEE Trans. Multimedia 3(3), 311–325 (2001)
Huang, C.F., Huang, C.Y.: Emotion-based AI music generation system with CVAE-GAN. In: 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), pp. 220–222. IEEE (2020)
Hung, T.M., Chen, B.Y., Yeh, Y.T., Yang, Y.H.: A benchmarking initiative for audio-domain music generation using the Freesound Loop Dataset. In: Proceedings of the 22nd International Society for Music Information Retrieval Conference (2021)
Hutchings, P.E., McCormack, J.: Adaptive music composition for games. IEEE Trans. Games 12(3), 270–280 (2019)
Juslin, P.N.: Cue utilization in communication of emotion in music performance: relating performance to perception. J. Exp. Psychol. Hum. Percept. Perform. 26(6), 1797 (2000)
Kalansooriya, P., Ganepola, G.D., Thalagala, T.: Affective gaming in real-time emotion detection and smart computing music emotion recognition: implementation approach with electroencephalogram. In: 2020 International Research Conference on Smart Computing and Systems Engineering (SCSE), pp. 111–116. IEEE (2020)
Keskar, N.S., McCann, B., Varshney, L.R., Xiong, C., Socher, R.: CTRL: a conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858 (2019)
Loth, J., Sarmento, P., Carr, C., Zukowski, Z., Barthet, M.: ProgGP: from GuitarPro tablature neural generation to progressive metal production. In: Proceedings of the 16th International Symposium on Computer Music Multidisciplinary Research (CMMR) (2023)
Madhok, R., Goel, S., Garg, S.: SentiMozart: music generation based on emotions. In: ICAART (2), pp. 501–506 (2018)
McVicar, M., Fukayama, S., Goto, M.: AutoLeadGuitar: automatic generation of guitar solo phrases in the tablature space. In: 2014 12th International Conference on Signal Processing (ICSP), pp. 599–604. IEEE (2014)
Panda, R., Redinho, H., Gonçalves, C., Malheiro, R., Paiva, R.P.: How does the Spotify API compare to the music emotion recognition state-of-the-art? In: 18th Sound and Music Computing Conference (SMC 2021), pp. 238–245. Axea sas/SMC Network (2021)
ruiguo-bio: midi-miner: Python MIDI track classifier and tonal tension calculation based on spiral array theory (2023). https://github.com/ruiguo-bio/midi-miner
Russell, J.A.: A circumplex model of affect. J. Pers. Soc. Psychol. 39(6), 1161 (1980)
Sarmento, P., Holmqvist, O., Barthet, M., et al.: Ubiquitous music in smart city: musification of air pollution and user context (2022)
Sarmento, P., Kumar, A., Carr, C., Zukowski, Z., Barthet, M., Yang, Y.H.: DadaGP: a dataset of tokenized GuitarPro songs for sequence models. In: Proceedings of the 22nd International Society for Music Information Retrieval Conference, pp. 610–618 (2021)
Sarmento, P., Kumar, A., Chen, Y.H., Carr, C., Zukowski, Z., Barthet, M.: GTR-CTRL: instrument and genre conditioning for guitar-focused music generation with transformers. In: Johnson, C., Rodríguez-Fernández, N., Rebelo, S.M. (eds.) EvoMUSART 2023. LNCS, vol. 13988, pp. 260–275. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-29956-8_17
Sarmento, P., Kumar, A., Xie, D., Carr, C., Zukowski, Z., Barthet, M.: ShredGP: guitarist style-conditioned tablature generation. In: Proceedings of the 16th International Symposium on Computer Music Multidisciplinary Research (CMMR) (2023)
Sulun, S., Davies, M.E., Viana, P.: Symbolic music generation conditioned on continuous-valued emotions. IEEE Access 10, 44617–44626 (2022)
Takahashi, T., Barthet, M.: Emotion-driven harmonisation and tempo arrangement of melodies using transfer learning
Tan, H.H., Herremans, D.: Music FaderNets: controllable music generation based on high-level features via low-level feature modelling. In: Proceedings of the 21st International Society for Music Information Retrieval Conference (2020)
Tan, X., Antony, M., Kong, H.: Automated music generation for visual art through emotion. In: ICCC, pp. 247–250 (2020)
Tripodi, I.J.: Setting the rhythm scene: deep learning-based drum loop generation from arbitrary language cues. arXiv preprint arXiv:2209.10016 (2022)
Webster, G.D., Weir, C.G.: Emotional responses to music: interactive effects of mode, texture, and tempo. Motiv. Emot. 29, 19–39 (2005)
Williams, D., Kirke, A., Miranda, E.R., Roesch, E., Daly, I., Nasuto, S.: Investigating affect in algorithmic composition systems. Psychol. Music 43(6), 831–854 (2015)
Yang, S., Reed, C.N., Chew, E., Barthet, M.: Examining emotion perception agreement in live music performance. IEEE Trans. Affect. Comput. 14(02), 1442–1460 (2023). https://doi.org/10.1109/TAFFC.2021.3093787
Yeh, Y.T., Chen, B.Y., Yang, Y.H.: Exploiting pre-trained feature networks for generative adversarial networks in audio-domain loop generation. In: Proceedings of the 23rd International Society for Music Information Retrieval Conference (2022)
Acknowledgement
This work is supported by the EPSRC UKRI Centre for Doctoral Training in Artificial Intelligence and Music (Grant no. EP/S022694/1).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cui, W., Sarmento, P., Barthet, M. (2024). MoodLoopGP: Generating Emotion-Conditioned Loop Tablature Music with Multi-granular Features. In: Johnson, C., Rebelo, S.M., Santos, I. (eds) Artificial Intelligence in Music, Sound, Art and Design. EvoMUSART 2024. Lecture Notes in Computer Science, vol 14633. Springer, Cham. https://doi.org/10.1007/978-3-031-56992-0_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-56991-3
Online ISBN: 978-3-031-56992-0