Organisers

Teresa Pelinski

Centre for Digital Music, Queen Mary University of London, t.pelinskiramos@qmul.ac.uk

I am a PhD researcher at the Artificial Intelligence and Music CDT and a member of the Augmented Instruments Lab at the Centre for Digital Music, QMUL. I hold a BSc in Physics from Universidad Autónoma de Madrid and an MSc in Sound and Music Computing from Universitat Pompeu Fabra in Barcelona. My PhD project, in collaboration with Bela, deals with capturing nuanced, high-bandwidth interaction using deep learning techniques on embedded platforms in the context of digital musical instruments.

Victor Shepardson

Intelligent Instruments Lab, Iceland University of the Arts, victor@lhi.is

I am a doctoral student in the Intelligent Instruments Lab at LHI. Previously I worked on neural models of speech as a machine learning engineer and data scientist. Before that I was an MA student in Digital Musics at Dartmouth College and a BA student in Computer Science at the University of Virginia. My interests include machine learning, artificial intelligence, generative art, audiovisual music and improvisation. My current project involves building an AI-augmented looping instrument and asking what AI means to people, anyway.

Steve Symons

University of Sussex, Brighton, s.symons@sussex.ac.uk

I am a PhD researcher at the Leverhulme Trust-funded be.AI Centre at the University of Sussex, where my research is hosted by the School of Media, Arts and Humanities. I have spent many years making embedded locative audio systems and making and improvising music with NIMEs. I am interested in enactive interfaces, woodwork and finding new metaphors for collaborative instruments.

Franco S. Caspe

Centre for Digital Music, Queen Mary University of London, f.s.caspe@qmul.ac.uk

I am a PhD researcher at the Artificial Intelligence and Music CDT and a member of the Augmented Instruments Laboratory at the Centre for Digital Music, QMUL. I hold a degree in Electronic Engineering and an MSc in Image Processing and Computer Vision. I have worked on R&D of real-time systems for audio, communications and image classification on platforms ranging from microcontrollers to FPGAs. My PhD project concerns modelling musical instrument expression using AI for informed timbre transfer and instrument retargeting.

Adan L. Benito

Centre for Digital Music, Queen Mary University of London, a.benitotemprano@qmul.ac.uk

I am a PhD researcher in the Augmented Instruments Laboratory at the Centre for Digital Music, QMUL, and a member of the Artificial Intelligence and Music CDT. I am also part of the active development team of the Bela platform and have a keen interest in developing new hardware tools for music-making. I graduated as a Telecommunications Engineer from the University of Cantabria with an MSc in Radio Communications, and hold an MSc in Sound and Music Computing from QMUL. My current research focuses on the creation of gestural models that fuse representations from the sensor and audio domains, and their application to instrument augmentation. I also have an interest in all guitar-related technologies and the cultures that surround them.

Jack Armitage

Intelligent Instruments Lab, Iceland University of the Arts, jack@lhi.is

I am a postdoctoral research fellow in the Intelligent Instruments Lab. I have a doctorate in Media and Arts Technologies from Queen Mary University of London, where I studied in Prof. Andrew McPherson's Augmented Instruments Lab. During my PhD I was a Visiting Scholar at Georgia Tech under Prof. Jason Freeman. Before then, I was a Research Engineer at ROLI after graduating with a BSc in Music, Multimedia & Electronics from the University of Leeds. My research interests include embodied interaction, craft practice and design cognition. I also produce, perform and live code music as Lil Data, as part of the PC Music record label.

Chris Kiefer

Experimental Music Technologies Lab, Department of Music, University of Sussex, c.kiefer@sussex.ac.uk

I am a computer musician, musical instrument designer and Senior Lecturer in Music Technology at the University of Sussex. As a live coder I perform under the name Luuma. Recently I have been playing an augmented self-resonating cello as half of the improv duo Feedback Cell, and with the feedback-drone quartet Brain Dead Ensemble. I co-run the AHRC Feedback Musicianship Network. My research specialises in musician-computer interaction, physical computing, machine learning and complex systems.

Rebecca Fiebrink

Creative Computing Institute, University of the Arts London, r.fiebrink@arts.ac.uk

I am a Professor of Creative Computing at UAL. My research focuses largely on exploring how machine learning can be used to augment human creative practices in music and beyond. My students, collaborators, and I have developed a number of widely used tools for creative end-user machine learning, including Wekinator and InteractML.

Thor Magnusson

Intelligent Instruments Lab, Iceland University of the Arts, thor.magnusson@lhi.is

I am a Professor of Future Music in the Music Department at the University of Sussex and a research professor at the Iceland University of the Arts. I recently served as Edgard Varèse Guest Professor at the Technische Universität Berlin. My research interests include musical performance, improvisation, new technologies for musical expression, live coding, musical notation, artificial intelligence and computational creativity.

Andrew McPherson

Centre for Digital Music, Queen Mary University of London, a.mcpherson@qmul.ac.uk

I am a Professor of Musical Interaction in QMUL’s Centre for Digital Music, where I lead the Augmented Instruments Laboratory. I am also a founder of Bela, an embedded platform for rich, low-latency interaction with audio and sensors. With a background in music composition and electronic engineering, my interests include augmented instruments, performer-instrument interaction and foundational technologies for creating new digital musical instruments.