Brainharmonic - Generating music from brain signal data using deep learning
Mariano Cabezas
Aria Nguyen, Brendan Harris
Brainhack Australasia
This project aims to develop a tool that generates music from brain signals using deep learning models. There has been a lot of work on generating new music from large collections of existing music with deep learning models, as well as some work on generating music from brain EEG/fMRI signals through algorithmic or rule-based approaches. However, deep learning models have not yet been used to generate music directly from EEG/fMRI signals.
In this project, we will use deep generative models that take EEG/fMRI data as input and generate music from it. More details of the design can be found on our GitHub page.
https://github.com/marianocabezas/brainharmonic
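To make the input-to-motif idea concrete, here is a minimal, hypothetical sketch (not the project's actual code) that bins a single EEG channel's amplitude into a small pitch set to form a short motif. The function name `eeg_to_motif`, the pentatonic scale, and all parameters are assumptions made for illustration only.

```python
import numpy as np

# C major pentatonic MIDI pitches; the scale choice is an assumption for this example
PENTATONIC = [60, 62, 64, 67, 69]

def eeg_to_motif(signal: np.ndarray, n_notes: int = 16) -> list:
    """Split the signal into n_notes segments and map each segment's mean
    amplitude to a pitch in the chosen scale."""
    segments = np.array_split(signal, n_notes)
    means = np.array([seg.mean() for seg in segments])
    # Normalise to [0, 1] and quantise into the pitch set
    norm = (means - means.min()) / (means.max() - means.min() + 1e-8)
    idx = np.minimum((norm * len(PENTATONIC)).astype(int), len(PENTATONIC) - 1)
    return [PENTATONIC[i] for i in idx]

# Toy usage: a synthetic 1 s "EEG" trace sampled at 256 Hz
rng = np.random.default_rng(0)
print(eeg_to_motif(rng.standard_normal(256)))
```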
Develop a tool to generate songs from EEG/fMRI brain signals using deep learning
Currently, we have a pipeline that generates a musical motif from a time series and a deep learning model that outputs a song from the generated motif. This was tested with EEG signals from multiple subjects. We can build upon this existing work in three aspects.
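For the motif-to-song step, the sketch below shows one possible form such a deep generative model could take: an autoregressive LSTM over MIDI pitch tokens that extends a seed motif into a longer sequence. The class `MotifContinuation`, the architecture, and the hyperparameters are illustrative assumptions; the actual model in the repository may differ (for example, an existing music-to-music framework).

```python
import torch
import torch.nn as nn

class MotifContinuation(nn.Module):
    """Autoregressive LSTM over pitch tokens (a sketch, not the project's model)."""
    def __init__(self, n_pitches: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pitches)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, time) integer pitch indices
        x = self.embed(tokens)
        out, _ = self.lstm(x)
        return self.head(out)  # logits for the next pitch at each step

    @torch.no_grad()
    def generate(self, motif: torch.Tensor, n_steps: int = 64) -> torch.Tensor:
        # Extend the seed motif one sampled pitch at a time
        seq = motif.clone()
        for _ in range(n_steps):
            logits = self.forward(seq)[:, -1]
            next_tok = torch.distributions.Categorical(logits=logits).sample()
            seq = torch.cat([seq, next_tok.unsqueeze(1)], dim=1)
        return seq

# Toy usage: continue a 16-note seed motif (e.g., from an EEG-derived motif) into a longer line
model = MotifContinuation()
seed = torch.randint(0, 128, (1, 16))
song = model.generate(seed, n_steps=64)  # shape (1, 80)
```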
https://mattermost.brainhack.org/brainhack/channels/brainharmonic
Python: intermediate; PyTorch: beginner; EEG/fMRI data processing: beginner
https://github.com/marianocabezas/brainharmonic
EEG/fMRI data preprocessing, deep learning generative models, existing music-to-music deep AI frameworks
No response
3
Project contributors are credited in the README file on the project's GitHub page
coding_methods, method_development, pipeline_development
1_basic structure
deep_learning
fMRIPrep, Freesurfer, Jupyter
Julia, Python
EEG, fMRI
1_commit_push
No response
Hi @brainhackorg/project-monitors, my project is ready!