PS4 Dataset Released

I’m pleased to share PS4, the largest open-source dataset for protein secondary structure prediction. Alongside the dataset, I’m also releasing PS4-Mega and PS4-Conv, two new models that achieve state-of-the-art results in protein secondary structure prediction.

If you've ever worked with protein secondary structure and machine learning, you know that the available datasets are fragmented and redundant with one another, making it hard to judge how reliable your evaluation results are. In many cases the included proteins aren't even identified, which makes the problem very difficult for researchers to resolve. In PS4, every protein is identified by its PDB code and non-redundancy is guaranteed, including against CB513, another major benchmark in secondary structure prediction.
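
To make the non-redundancy point concrete, here is a minimal sketch of a pairwise identity filter, assuming toy sequences and a made-up 25% threshold; the function names and use of difflib are illustrative only, so see the repository for the pipeline actually used to build PS4.

```python
from difflib import SequenceMatcher

def sequence_identity(a: str, b: str) -> float:
    # Crude pairwise identity between two amino-acid sequences;
    # real pipelines use dedicated alignment tools for this.
    return SequenceMatcher(None, a, b).ratio()

def filter_redundant(candidates, references, threshold=0.25):
    # Keep only candidates whose identity against every reference
    # sequence (e.g. the CB513 entries) stays below the threshold.
    return [seq for seq in candidates
            if all(sequence_identity(seq, ref) < threshold for ref in references)]

# Toy usage with made-up sequences:
print(filter_redundant(["MKTAYIAKQR", "GSHMTTPSHL"], ["MKTAYIAKQR"]))
```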

All code and data are fully open-sourced, to facilitate reproducibility and empower the bioinformatics community to develop these ideas further.

Presenting the JS Fake Chorales dataset at SMC 2022

My paper, JS Fake Chorales: a Synthetic Dataset of Polyphonic Music with Human Annotation, has been accepted for publication in this year’s Sound and Music Computing (SMC 2022) conference proceedings. SMC will take place in Saint-Étienne this year and I’ll be presenting a short poster introducing the paper.

To celebrate, I’ve also added a new feature to the JS Fake Chorale Generator web app: you can now mint your generated music as NFTs on the Polygon blockchain as a nice keepsake! It’s been great to see the dataset grow by 150 new community additions, and this new Web3 integration could open more doors for your favourite generated pieces down the line.

GANkyoku I, II & III made into NFTs

‘GANkyoku’, the first ever pieces of contemporary concert music to be entirely generated by a deep neural network, have just been minted as NFTs on catalog.

The three original recordings, performed by Shawn Head, have now taken their place as unique tokens on the blockchain, creating a permanent record of an exciting moment in the evolving relationship between artists and artificial intelligence systems.

Check them out on catalog!

GANkyoku I
GANkyoku II
GANkyoku III

Can you tell which music was written by AI?

Do you think you can distinguish between music written by history’s greatest composers and music written by AI? Whether you’re an experienced musician or consider yourself completely tone deaf, I’d like you to have a try!

Over much of 2020, I developed a neural-network-based algorithm (nicknamed KS_Chorus) to generate the notes for pieces of polyphonic music. I generated several hundred consecutive pieces with KS_Chorus and built a website where you can try to tell them apart from real, human-composed pieces of music.

These pieces were all added to the test without curation; the goal is to assess the algorithm’s output without any cherry-picking. You can play as many times as you want, and the pieces you hear are chosen at random in each round. How long will it take you to score 5 correct answers?
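
For the curious, each round boils down to something like the sketch below; the 50/50 draw, function names and console prompt are my assumptions for illustration, not the site’s actual code.

```python
import random

def ask_listener(piece: str) -> str:
    # Stand-in for the website UI: play the piece, then collect a guess.
    return input(f"Listen to {piece}. Guess 'ai' or 'human': ").strip().lower()

def play(human_pieces, generated_pieces, target=5):
    # Draw an uncurated piece at random each round until the listener
    # reaches `target` correct answers; return how many rounds it took.
    correct = rounds = 0
    while correct < target:
        is_ai = random.random() < 0.5
        piece = random.choice(generated_pieces if is_ai else human_pieces)
        rounds += 1
        if ask_listener(piece) == ("ai" if is_ai else "human"):
            correct += 1
    return rounds
```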

Interested in what you might come across? Click here to begin and find out.

Presenting TonicNet at ISMIR 2020

My paper, Improving Polyphonic Music Models with Feature-Rich Encoding, has been accepted for publication in this year’s International Society for Music Information Retrieval Conference (ISMIR 2020) proceedings. The conference will take place virtually this year due to COVID-19, and I’ll be giving two remote presentations on 12 and 13 October. If you’re attending ISMIR, come visit my paper’s Slack channel. If not, check out this page with the paper, a poster and a short video presentation about the research and findings.
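
To give a flavour of what feature-rich encoding means here: rather than modelling the voices’ notes alone, the sequence is enriched with extra features such as harmonic information. The sketch below prefixes each timestep’s voice notes with a chord token; the token names and exact interleaving are simplified assumptions, so see the paper for the real scheme.

```python
def feature_rich_encode(timesteps):
    # Flatten a chorale into one token stream, placing a chord token
    # before each timestep's voice notes (simplified illustration).
    tokens = []
    for step in timesteps:
        tokens.append(f"CHORD_{step['chord']}")
        tokens.extend(f"NOTE_{pitch}" for pitch in step["notes"])
    return tokens

# Two toy timesteps, with MIDI pitches for the four voices (SATB):
example = [{"chord": "C", "notes": [72, 67, 64, 48]},
           {"chord": "G7", "notes": [71, 67, 62, 43]}]
print(feature_rich_encode(example))
# ['CHORD_C', 'NOTE_72', 'NOTE_67', 'NOTE_64', 'NOTE_48',
#  'CHORD_G7', 'NOTE_71', 'NOTE_67', 'NOTE_62', 'NOTE_43']
```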

Vox I to be performed at the Roundhouse

‘Vox I: Echo Chamber’ will be performed by the Roundhouse Choir in this year’s ‘In the Round’ series. The piece will open the set for Reykjavik-based singer-songwriter John Grant’s sold-out performance on 29 January. Three members of the Roundhouse Choir will feature as soloists, and I will also be making a cameo in the accompanying chorus for the evening! Places on the waiting list for ticket returns are available.

Vox II to premiere in Chicago

My first performance of the year is an exciting premiere by Chicago-based trombonist Riley Leitch. ‘Vox II: Ad Homines’ is an interactive, audience-participatory piece combining AI, AR and the Internet of Things. A custom AI model will listen to Riley’s solo playing in real time and respond to changes in his material by sending a signal to the cloud. This signal will trigger the audience members’ phones to load a new sound, which they can perform with using a purpose-built AR interface, all neatly contained in the free Vox II iOS application. The result will be a flexible, free-flowing jam session between the entire audience, with Riley at the helm as soloist/conductor, empowered by the AI system.
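
In outline, the moving part is a single cue pushed to the cloud whenever the model hears new material; here is a minimal sketch of that publish side, where the endpoint, payload and function name are all hypothetical stand-ins for the app’s private internals.

```python
import requests

# Hypothetical endpoint; the real Vox II transport is not public.
SIGNAL_URL = "https://example.com/vox2/cue"

def on_material_change(section: int) -> None:
    # Called when the listening model detects the soloist moving to new
    # material. Subscribed audience devices would respond by loading the
    # sound bank for this section into the AR interface.
    requests.post(SIGNAL_URL, json={"section": section}, timeout=2)
```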

The premiere will take place on Sunday 12 Jan at Constellation, Chicago. For more information and tickets, please see the link here.

State-of-the-art AI Model for polyphonic music

My latest open-source research project on modelling polyphonic music with artificial intelligence has achieved state-of-the-art results on the popular Bach chorales dataset, surpassing the previous best reported performance while using a much smaller neural network. A paper detailing my research is available here, and all Python code is here, so results can easily be recreated and the model/methods adapted for further projects. I hope my findings can help improve results across a wide range of approaches to computational modelling of music.

Presenting Humtap at MediMex festival

I will be co-leading a technology workshop on Music and Artificial Intelligence at this year’s MEDIMEX festival in Taranto, Italy. Alongside me will be Cliff Fluet, partner at Lewis Silkin LLP and a leading figure in the AI music legal space. The workshop will take place on 8 June, and my segment will focus on Humtap’s product and technology.

Presenting GANkyoku at ICMC-NYCEMF 2019

My paper, GANkyoku: a Generative Adversarial Network for Shakuhachi Music, co-authored with shakuhachi master and composer Shawn Head, has been accepted for publication in this year’s ICMC-NYCEMF proceedings. The paper discusses the process of training the deep neural network, GANkyoku, which I used to generate the three pieces of the same name. I will give a presentation about the network and the pieces, which we believe are the first contemporary classical pieces ever to be entirely generated by a deep neural network, at New York University during the ICMC-NYCEMF conference in June.

Interviews with UnTwelve and Shawn Head

After placing second in UnTwelve’s 2018 composition competition with ‘Colour Etude I’, I caught up with them about microtones, AI and Arabic classical music. Read the full interview here.

I also had the chance to speak at length with shakuhachi shihan Shawn Renzoh Head, following on from our recordings of the AI solo works he commissioned from me, ‘GANkyoku’, in which we discuss the process of writing contemporary music with artificial intelligence. Watch the full video interview below:

Performing 'Fractured' at SMC2018

Looking forward to making my first trip to Cyprus this summer for 2018's Sound & Music Computing Conference. I'll be giving a concert rendition of my iOS work, Fractured, as a piece of performative electroacoustic music. SMC takes place at Cyprus University of Technology, Limassol, from July 4-7. More details of my performance to follow.

Search for 'Fractured' on the App Store to experience the piece for yourself from the comfort of your own home (or anywhere else, for that matter)!

IMPULS 2019

Excited to have just confirmed I will be at impuls next February in Graz, Austria. I'll be doing collaborative work with other composers and performers, under the guidance of some fantastic musicians and mentors. In particular, I will be exploring the possibilities of new spaces for distributing and interacting with classical music, in a project led by Jorge Sánchez-Chiong.

Ambient iOS performance at Daylight Music

Daylight Music, Union Chapel's series of Saturday concerts, will hold its Piano Day on 31 March, curated by Xenia Pestova. I'll be performing an ambient set using some of my iOS apps that have a keyboard flavour to them, including the free Colour Etude II: Live Electronics app. We're looking to get the audience involved too, so if you're interested in the event and own an iOS device, why not prepare by downloading the app for free?

The event will also feature a performance of Colour Etude II by Késia Decoté and myself. More details will be announced as they arise.