About

Cutting-edge embedded systems have always been a part of NIME's practices. Low-resource computing hardware, such as microcontrollers or single-board computers, can be embedded into digital musical instruments or interfaces to perform specific functions such as real-time digital signal processing of sensor data and sound [1][2][3][4][5]. Simultaneously, interest in exploiting the creative potential of Artificial Intelligence for instrument design and musical expression has been growing within the NIME community in recent years [6][7][8][9][10][11].

Recent advancements in embedded computing have enabled faster and more computationally intensive applications on low-power devices [12]. However, deploying machine learning or symbolic AI techniques still presents several technical challenges (e.g., data bandwidth, memory handling) and higher-level design constraints [13][14][15][16]. Some of these challenges are general to embedded systems, while others are specific to musical interaction, particularly questions regarding real-time performance and latency [17].
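To make the latency constraint concrete: an embedded instrument processing audio in fixed-size blocks must finish all sensor processing, inference, and synthesis for one block before the next one arrives. A minimal sketch of that per-block deadline (illustrative figures only; the specific block sizes and sample rates below are assumptions, not values from the workshop text):

```python
def block_deadline_ms(block_size_frames: int, sample_rate_hz: int) -> float:
    """Time available to process one audio block before the next is due."""
    return 1000.0 * block_size_frames / sample_rate_hz

# A 16-frame block at 44.1 kHz leaves well under half a millisecond
# for sensor processing, model inference, and synthesis combined.
print(round(block_deadline_ms(16, 44100), 3))   # ≈ 0.363 ms
print(round(block_deadline_ms(128, 48000), 3))  # ≈ 2.667 ms
```

Budgets on this order explain why model size and inference time dominate design discussions for embedded AI instruments: a network that is trivial to run on a workstation may still miss a sub-millisecond audio deadline on a microcontroller.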

With this workshop we aim to:

  1. Bring together researchers and practitioners who face such challenges in the context of NIME.
  2. Articulate these challenges and identify the tools and technologies currently being used to overcome them.
  3. Forge a community using embedded AI for NIME.
  4. Discuss critical approaches to the use of embedded AI for musical expression.

References

  1. Sullivan, J., Vanasse, J., Guastavino, C., & Wanderley, M. (2020). Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians. In R. Michon & F. Schroeder (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 5–10). Birmingham, UK: Birmingham City University. https://doi.org/10.5281/zenodo.4813166
  2. Meneses, E., Wang, J., Freire, S., & Wanderley, M. (2019). A Comparison of OpenSource Linux Frameworks for an Augmented Musical Instrument Implementation. In M. Queiroz & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 222–227). Porto Alegre, Brazil: UFRGS. https://doi.org/10.5281/zenodo.3672934
  3. Momeni, A., McNamara, D., & Stiles, J. (2018). MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments. In L. Dahl, D. Bowman, & T. Martin (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 65–71). Blacksburg, Virginia, USA: Virginia Tech. https://doi.org/10.5281/zenodo.1302681
  4. Berdahl, E., Salazar, S., & Borins, M. (2013). Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 325–330). Daejeon, Republic of Korea: Graduate School of Culture Technology, KAIST. https://doi.org/10.5281/zenodo.1178476
  5. Schmeder, A., & Freed, A. (2008). uOSC: The Open Sound Control Reference Platform for Embedded Devices. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 175–180). Genoa, Italy. https://doi.org/10.5281/zenodo.1179627
  6. Fiebrink, R., & Sonami, L. (2020). Reflections on Eight Years of Instrument Creation with Machine Learning. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 237–242). Birmingham, UK. https://doi.org/10.5281/zenodo.4813334
  7. Martin, C., Morreale, F., Wallace, B., & Scurto, H. (2021). Workshop in Critical Perspectives on AI/ML in Musical Interfaces. In Proceedings of the International Conference on New Interfaces for Musical Expression. Shanghai, China.
  8. Tahiroğlu, K., Kastemaa, M., & Koli, O. (2020). Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 337–342). Birmingham, UK. https://doi.org/10.5281/zenodo.4813402
  9. Yaremchuk, V., Medeiros, C. B., & Wanderley, M. (2019). Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument). In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 150–155). Porto Alegre, Brazil. https://doi.org/10.5281/zenodo.3672904
  10. Hantrakul, L. (2018). GestureRNN: A neural gesture system for the Roli Lightpad Block. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 132–137). Blacksburg, Virginia, USA. https://doi.org/10.5281/zenodo.1302703
  11. Caramiaux, B., & Tanaka, A. (2013). Machine Learning of Musical Gestures. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 513–518). Daejeon, Republic of Korea. https://doi.org/10.5281/zenodo.1178490
  12. Wang, F., Zhang, M., Wang, X., Ma, X., & Liu, J. (2020). Deep learning for edge computing applications: A state-of-the-art survey. IEEE Access, 8, 58322–58336.
  13. Situnayake, D., & Plunkett, J. (2022, December). AI at the Edge: Solving Real World Problems with Embedded Machine Learning. O’Reilly Media, Inc. Retrieved from https://learning.oreilly.com/library/view/ai-at-the/9781098120191/
  14. Blasch, E., Pham, T., Chong, C.-Y., Koch, W., Leung, H., Braines, D., & Abdelzaher, T. (2021). Machine learning/artificial intelligence for sensor data fusion– opportunities and challenges. IEEE Aerospace and Electronic Systems Magazine, 36(7), 80–93.
  15. Shafique, M., Theocharides, T., Reddy, V. J., & Murmann, B. (2021). TinyML: Current Progress, Research Challenges, and Future Roadmap. In 2021 58th ACM/IEEE Design Automation Conference (DAC) (pp. 1303–1306).
  16. David, R., Duke, J., Jain, A., Reddi, V. J., Jeffries, N., Li, J., … Warden, P. (2020). TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems. CoRR, abs/2010.08678. Retrieved from https://arxiv.org/abs/2010.08678
  17. McPherson, A., Jack, R., & Moro, G. (2016). Action-Sound Latency: Are Our Tools Fast Enough? In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 20–25). Brisbane, Australia: Queensland Conservatorium Griffith University. https://doi.org/10.5281/zenodo.3964611