Data-Driven Spiking Neural Network Optimization in Julia



Seeking collaborators

Brainhack Global 2021 Event

Brainhack Australasia

Project Description

What are you doing, for whom, and why?

What makes your project special and exciting?

The project is based in Julia, which may be novel and exciting for many people; at the same time, its scope is that of a meta-package, which means that many of the project's goals are beginner-friendly.

How to get started?

git clone

Fit a spiking neural network to data by exploring the effect of the parameter that controls connectome graph structure:

cd examples
julia sdo_network.jl
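The idea behind this example, a single parameter shaping connectome graph structure, can be sketched in Python (a hypothetical illustration only; the actual script uses a Julia SNN backend and a real fitting loop):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_connectome(n, p, rng):
    """Erdos-Renyi-style binary connectivity matrix, no self-connections.

    `p` is the structural parameter being explored: the probability
    that any directed connection between two cells exists.
    """
    w = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(w, 0.0)
    return w

# Sweep the structural parameter and record the realized connection
# density; in the real example, a fit-to-data error would be
# re-evaluated at each setting of p instead.
densities = []
for p in np.linspace(0.05, 0.5, 10):
    w = random_connectome(100, p, rng)
    densities.append(w.sum() / (100 * 99))
```

Denser settings of `p` yield proportionally denser connectomes, which is the kind of structural effect the optimizer explores.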

Single cell data fitting against spike times:

cd test
julia single_cell_opt_adexp.jl 
julia single_cell_opt_izhi.jl
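These single-cell examples score candidate models against observed spike times. As a rough illustration only (the repository's actual objective uses spike-distance tools such as SpikeSynchrony.jl), a minimal spike-time error might look like:

```python
import numpy as np

def spike_time_error(observed, simulated):
    """Mean absolute difference between matched spike times.

    A deliberately simple stand-in for proper spike-distance
    measures; spike trains are sorted and matched index-by-index.
    """
    n = min(len(observed), len(simulated))
    if n == 0:
        return np.inf
    matched = np.abs(np.sort(observed)[:n] - np.sort(simulated)[:n]).mean()
    # Penalize a mismatch in total spike count as well.
    count_penalty = abs(len(observed) - len(simulated))
    return matched + count_penalty

err = spike_time_error([10.0, 20.0, 30.0], [11.0, 19.0, 30.0])
```

An optimizer then adjusts model parameters (e.g. of the AdExp or Izhikevich cell) to drive this error toward zero.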

Where to find key resources?

Goals for Brainhack Global

Good first issues

  1. Most fun issue: benchmark execution speed and memory consumption for similar-sized networks on the same network models, across the different Julia SNN backends. These three approaches need to be openly benchmarked against each other for execution speed and memory efficiency.

  2. Make the package installable via Project.toml, etc.

  3. Refactor the optimizer design from bespoke, example-specific code to a general user interface.

  4. Make a scatter-plot animation of the optimizer succeeding.

  5. Use existing Python/BluePyOpt code to draw the GA-evaluated error surface.

Properly cite this code and borrow from the BPO notebook here: Python code in Cell 26 draws the error surface.

  6. Convert the Python SciUnit relative difference score to a Julia relative difference score.

SciUnit scoring has tools for scaling and normalizing feature measurements; some of these are trivial and some are elaborate.

Implement SciUnit's RelativeDifferenceScore in Julia, following its naming convention and implementation. Note that Julia is not object-oriented, so skip over Python's inheritance; if a container seems necessary, use a Julia struct. It might be helpful to re-implement multiple SciUnit scores in Julia, but the most immediately useful one is RelativeDifferenceScore.
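As a starting point for the port, here is a minimal Python sketch of the computation, assuming the relative difference is |prediction − observation| / |observation| (check SciUnit's source for its exact normalization and edge cases; the Julia version would wrap this in a struct rather than a class hierarchy):

```python
def relative_difference(observation, prediction):
    """Relative difference between a prediction and an observation.

    Sketch of the logic behind SciUnit's RelativeDifferenceScore:
    0.0 means a perfect match, and larger values mean a worse fit.
    """
    if observation == 0:
        raise ValueError("observation must be nonzero")
    return abs(prediction - observation) / abs(observation)

score = relative_difference(10.0, 12.0)  # 0.2
```

Keeping the score as a plain function plus a small struct holding the value would stay idiomatic in Julia while preserving SciUnit's naming convention.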

Communication channels


Onboarding documentation

I will create a in the interim.

What will participants learn?

Data to use

At the moment, single-cell model fitting uses Allen Brain Observatory data (cached in JLD files). I have written a Python API, included in the Julia code repository, so that Python is called from Julia.

This project would really benefit from experimental multicellular spike train data.

Number of collaborators


Credit to collaborators

I have started using the All Contributors tool, which makes all contributions to the repository visible (including raising issues via GitHub, or even ideas conveyed outside of GitHub). Also, I can write a reference letter for substantive contributions.



Why Julia? Python package management is already complicated, and reproducible model optimization is made harder still by combining Python with external simulators.

Image source:


coding_methods, documentation, method_development, visualization

Development status

1_basic_structure


data_visualisation, neural_networks, reproducible_scientific_methods, single_neuron_models, other


Python space: NetworkUnit, NeuronDataWithoutBorders, Neo AnalogSignal. (Note: to be pragmatic, this project still uses some Python.)

Julia space: ClearStacktrace.jl (makes error messages much easier to read), SignalAnalysis.jl, Evolutionary.jl, SpikingNeuralNetworks.jl, SpikeSynchrony.jl, PyCall.jl.

Programming language

Julia, Python.



Git skills

0_no_git_skills, 1_commit_push, 2_branches_PRs, 3_continuous_integration

Anything else?

Although some Python is used to corroborate/validate optimized models, crucially, no Python is used in the optimization loops, since calling Python via PyCall is not fast.

Things to do after the project is submitted

Twitter sized summary

Julia has enough tools to support fitting spiking neural network models to data. Python's speed necessitates external simulators for network simulation. It would be more developer-friendly to do fast, efficient data fitting of spike trains to network models in one language; let's try to do that here.
