Using Explainable Artificial Intelligence (XAI) to create a real-time closed-loop stimulation
Syed Hussain Ather (Twitter: @SHussainAther)
Like similar work in other fields (e.g., computer vision and machine learning on established datasets), we propose a project to build an explainable artificial intelligence (XAI) pipeline around a neurostimulation procedure, both experimental and theoretical. Given input recordings of brain signals from some source (most likely EEG data), we aim to apply existing or novel XAI techniques (e.g., counterfactual probes) to a known neurostimulation paradigm, providing explanatory power for closed-loop neurobehavioral modulation. We hope this is a step toward more innovative future work on real-time, closed-loop stimulation for deep brain stimulation (DBS), and that the pipeline will improve research on modulating neural activity in real time. Frameworks of this kind can advance intelligent computational approaches able to sense, interpret, and modulate large amounts of data from behaviorally relevant neural circuits at the speed of thought.
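As a concrete illustration of the counterfactual probes mentioned above, here is a minimal sketch in Python. The band-power features, the binary "stimulate" label, and the dependence on alpha power are all illustrative assumptions, not part of any specific dataset or paradigm:

```python
# Minimal counterfactual probe on synthetic "EEG-like" band-power features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic band-power features per trial: [delta, theta, alpha, beta].
X = rng.normal(size=(500, 4))
# Hypothetical label: "stimulate" when alpha power (column 2) is high.
y = (X[:, 2] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

trial = X[0].copy()
p_orig = clf.predict_proba(trial.reshape(1, -1))[0, 1]

# Counterfactual probe: what would the model decide if alpha power
# had been one unit lower, all else held fixed?
trial_cf = trial.copy()
trial_cf[2] -= 1.0
p_cf = clf.predict_proba(trial_cf.reshape(1, -1))[0, 1]

print(f"P(stimulate) original: {p_orig:.2f}, counterfactual: {p_cf:.2f}")
```

Comparing the two probabilities shows how much the model's decision hinges on the perturbed feature, which is the kind of explanation a closed-loop system could surface in real time.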
The use of artificial intelligence and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets, providing unbiased insights into the fundamental principles of brain function and potentially paving the way for earlier, more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to produce accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable artificial intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some practical approaches.
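One simple, model-agnostic example of such a technique is permutation feature importance: shuffle each input feature in turn and measure how much the model's accuracy drops. The sketch below uses synthetic data standing in for neural recordings; the features and labels are assumptions made for illustration:

```python
# Permutation feature importance: a model-agnostic XAI technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))            # 5 hypothetical features
y = (X[:, 0] - X[:, 3] > 0).astype(int)  # only features 0 and 3 matter

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Shuffling an informative feature causes a large accuracy drop;
# shuffling an irrelevant one causes almost none.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Unlike inspecting model weights, this works with any black-box predictor, which is why it is a common first step toward the input-output understanding described above.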
Using one of the GUI tools listed at https://github.com/anguyen8/XAI-papers (most likely DeepVis), we hope to create a functioning pipeline that provides this explanatory power, modeled on Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg).
Install the dependencies required by the Deep Visualization Toolbox (https://github.com/yosinski/deep-visualization-toolbox), as described in its README.
Goal: create a functional, working pipeline that follows the three steps (pre-modelling, modelling, and post-modelling) from Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg).
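The three stages could be organized as a skeleton like the one below. The function names, the synthetic data, and the choice of a logistic-regression classifier are placeholders assumed for illustration, not a fixed API from the paper:

```python
# Skeleton of a three-stage XAI pipeline: pre-modelling, modelling,
# and post-modelling, applied to synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def pre_modelling(raw):
    """Pre-modelling: clean and standardize features (stand-in for real EEG preprocessing)."""
    return StandardScaler().fit_transform(raw)

def modelling(X, y):
    """Modelling: fit an interpretable-by-design classifier."""
    return LogisticRegression().fit(X, y)

def post_modelling(model, feature_names):
    """Post-modelling: report per-feature weights as a simple explanation."""
    return dict(zip(feature_names, model.coef_[0]))

rng = np.random.default_rng(2)
raw = rng.normal(size=(300, 3))          # 3 hypothetical band-power features
labels = (raw[:, 1] > 0).astype(int)     # label driven by the "theta" column

X = pre_modelling(raw)
model = modelling(X, labels)
explanation = post_modelling(model, ["delta", "theta", "alpha"])
print(explanation)
```

Keeping the three stages as separate functions makes it straightforward to swap in real EEG preprocessing, a deep model, or a richer post-hoc explainer (e.g., DeepVis visualizations) without restructuring the pipeline.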
I imagine that, at Brainhack 2021, as at previous Brainhacks, we will all learn skills in collaboration, organization, communication, and team and project management, skills that can benefit any researcher interested in AI or related fields involving programming and data.
Project contributors are listed in the project README using the all-contributors GitHub bot.
Hi @brainhackorg/project-monitors, my project is ready!