Using Explainable Artificial Intelligence (XAI) to create a real-time closed-loop stimulation pipeline

Title

Using Explainable Artificial Intelligence (XAI) to create a real-time closed-loop stimulation pipeline

Leaders

Syed Hussain Ather (Twitter: @SHussainAther)

Collaborators

No response

Brainhack Global 2021 Event

BrainHack Toronto

Project Description

Like similar work on established datasets in other fields (e.g., computer vision and machine learning), we propose a project to build an explainable artificial intelligence (XAI) pipeline around a neurostimulation experimental and theoretical procedure. Given input recordings of brain signals from some source (most likely EEG data), existing research applies established or novel XAI techniques to a known neurostimulation paradigm to provide explanatory power for closed-loop neurobehavioral modulation (e.g., counterfactual probes). We hope this can be a step toward more innovative future work on real-time, closed-loop stimulation for deep brain stimulation (DBS), and that the pipeline will improve research aimed at modulating neural activity in real time. Frameworks of this kind can advance intelligent computational approaches able to sense, interpret, and modulate large amounts of data from behaviorally relevant neural circuits at the speed of thought.
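As a concrete illustration, the sketch below outlines what one iteration of such a closed-loop decision step might look like in Python. It is a minimal, hypothetical example: the sampling rate, frequency bands, threshold, and the stimulation decision are all assumptions for illustration, not part of any existing neurostimulation API or of this project's final design.

# Minimal sketch of one iteration of a hypothetical closed-loop pipeline:
# EEG window -> band-power features -> state classifier -> stimulation decision.
# All parameters (FS, BANDS, threshold) are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(window, fs=FS):
    """Mean power in each frequency band for one EEG window (channels x samples)."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean())
    return np.array(feats)

def closed_loop_step(window, clf, threshold=0.5):
    """Decide whether to trigger stimulation for the current window."""
    x = band_power_features(window).reshape(1, -1)
    p_target_state = clf.predict_proba(x)[0, 1]
    return p_target_state > threshold  # True = send a stimulation command

# Toy usage with simulated data standing in for real EEG and labels.
rng = np.random.default_rng(0)
train_X = np.vstack([band_power_features(rng.standard_normal((8, FS))) for _ in range(50)])
train_y = rng.integers(0, 2, size=50)
clf = LogisticRegression().fit(train_X, train_y)
print(closed_loop_step(rng.standard_normal((8, FS)), clf))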

The use of artificial intelligence and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets that can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to produce accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches.
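To make the idea of explanatory power concrete, the sketch below applies one simple, model-agnostic XAI technique, permutation feature importance from scikit-learn, to a toy classifier. The feature names and synthetic data are assumptions for illustration; any of the techniques surveyed in the XAI literature could be substituted.

# Sketch: model-agnostic explanation of a classifier via permutation importance.
# Synthetic data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
feature_names = ["theta_power", "alpha_power", "beta_power", "gamma_power"]

# Toy dataset: the label depends mostly on "alpha_power".
X = rng.standard_normal((300, 4))
y = (X[:, 1] + 0.1 * rng.standard_normal(300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")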

Using one of these GUI tools (https://github.com/anguyen8/XAI-papers, most likely DeepVis), we hope to create a functioning pipeline that provides this explanatory power, with a working model like Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg).

Install the dependencies required by the Deep Visualization Toolbox (https://github.com/yosinski/deep-visualization-toolbox). Project repository:

https://github.com/HussainAther/XAI

Goals for Brainhack Global

Goal: Create a functional, working pipeline that follows the three steps (pre-modelling, modelling, and post-modelling) from Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg), as sketched below.
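A minimal sketch of how those three stages might be organized in code follows; the function names and their contents are assumptions about how Figure 2 of Fellous et al. could map onto a Python pipeline, not an existing implementation.

# Skeleton of the three-stage XAI pipeline (pre-modelling, modelling, post-modelling).
# Stage contents are placeholders; real data loading, models, and explanation
# methods would be plugged in here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

def pre_modelling(raw):
    """Pre-modelling: cleaning, feature engineering, exploratory summaries."""
    return (raw - raw.mean(axis=0)) / raw.std(axis=0)  # placeholder: z-score features

def modelling(X, y):
    """Modelling: fit an interpretable (or subsequently explained) predictive model."""
    return LogisticRegression().fit(X, y)

def post_modelling(model, X, y):
    """Post-modelling: explain the fitted model's behavior."""
    return permutation_importance(model, X, y, n_repeats=10, random_state=0).importances_mean

# Toy end-to-end run with synthetic data standing in for recorded signals.
rng = np.random.default_rng(1)
raw = rng.standard_normal((200, 5))
labels = (raw[:, 0] > 0).astype(int)
X = pre_modelling(raw)
model = modelling(X, labels)
print(post_modelling(model, X, labels))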

Good first issues

  1. Issue one:

  2. Issue two:

Communication channels

https://mattermost.brainhack.org/brainhack/channels/brainhack-toronto

Skills

Onboarding documentation

No response

What will participants learn?

At Brainhack 2021, as at previous Brainhacks, participants will learn skills in collaboration, organization, communication, and team and project management, along with other skills that benefit any researcher interested in AI or related fields involving programming and data.

Data to use

No response

Number of collaborators

3

Credit to collaborators

Project contributors are listed in the project README using the all-contributors GitHub bot.

Image

Leave this text if you don’t have an image yet.

Type

pipeline_development

Development status

0_concept_no_content

Topic

data_visualisation

Tools

other

Programming language

Python

Modalities

fMRI

Git skills

0_no_git_skills

Anything else?

No response

Things to do after the project is submitted and ready to review.

