Niviz: Configurable quality control image generation and rating


Jerrold Jeyachandra @jerdra



Brainhack Global 2021 Event

BrainHack Toronto

Project Description

The process of quality control (QC) is universally boring, tedious, and inefficient.

The Problem with QC

  1. Most pipelines that people write and use don’t generate QC images at all, let alone images as user-friendly as those produced by widely established pipelines such as fMRIPrep.

  2. Even when QC images are generated, they do not necessarily match how users actually QC and rate their data.

  3. Most of the time, users must devise their own way of recording and organizing QC results, and this varies enormously across individuals. Your collaborator might use different definitions, organizational principles, and file formats for storing their QC results than you do.

  4. Comparing rated images is often slow, manual, and therefore painful. Users frequently have doubts about a rating and want to compare the image against others given the same rating. Doing so is typically a very manual process (e.g., look up similar QC ratings in your spreadsheet, find the files, then open both images side by side).

The Solution - Niviz

Niviz is a simple, configurable Python-based tool that:

  1. Enables researchers to generate QC images for any pipeline that outputs NIFTI, GIFTI, or CIFTI images, driven by a simple YAML file
  2. Provides a small web application, niviz-rater, that collects generated QC images (or any set of images organized in a BIDS-style dataset!) into an interactive QC interface. The rating workflow can also be configured to suit the user’s needs using (yet another) simple YAML file.

QC image generation
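As a sketch of how YAML-driven image generation might look, the fragment below illustrates the general idea. The key names here are hypothetical, invented for illustration only; consult the niviz documentation for the actual schema.

```yaml
# Hypothetical niviz-style spec (illustrative key names, NOT the real schema):
# declare which output files to pick up and which plotting routine to apply.
filetypes:
  - name: anat
    images:
      - name: t1_registration
        method: registration            # which image-generation routine to use
        args:
          bg_nii: "*_desc-preproc_T1w.nii.gz"     # glob for the background image
          fg_nii: "*_space-MNI152_T1w.nii.gz"     # glob for the overlay
```

The point of the YAML layer is that adding QC coverage for a new pipeline output means editing a config file, not writing new plotting code.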

QC web application
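Since the rater can ingest any set of images organized in a BIDS-style dataset, the filenames themselves carry the metadata (entities like `sub-01_ses-02_...`). A minimal sketch of pulling those entities out of a filename, assuming a hypothetical helper that is not part of niviz-rater’s actual API:

```python
import re

# Hypothetical helper: extract BIDS-style key-value entities from a filename
# such as "sub-01_ses-02_task-rest_bold.svg". The sub-/ses-/task- naming
# scheme follows the BIDS specification; the function is illustrative only.
def parse_bids_entities(filename: str) -> dict:
    stem = filename.rsplit(".", 1)[0]                 # drop the extension
    # entities are hyphen-separated key-value pairs joined by underscores
    entities = dict(re.findall(r"([a-zA-Z0-9]+)-([a-zA-Z0-9]+)", stem))
    last = stem.split("_")[-1]
    if "-" not in last:                               # trailing chunk = suffix
        entities["suffix"] = last
    return entities

print(parse_bids_entities("sub-01_ses-02_task-rest_bold.svg"))
# → {'sub': '01', 'ses': '02', 'task': 'rest', 'suffix': 'bold'}
```

Grouping images by these entities is what lets a rater present, say, all of one subject’s QC images together.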

Goals for Brainhack Global

Both niviz and niviz-rater are relatively new projects and therefore need some maintenance and organizational effort. The primary goals for each are as follows:

Niviz

  1. Bug squashing
  2. Implementing a YAML schema validation step
  3. Documentation: Getting started, tutorials (including OSF dataset)
  4. Writing unit-tests
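For the schema-validation goal, a minimal sketch of what a hand-rolled check might look like. The required keys below are made up for illustration and are not niviz’s real schema; a real implementation would likely hand this job to a library such as jsonschema after `yaml.safe_load()`.

```python
# Sketch of a YAML schema validation step. Key names ("package",
# "filetypes", "name") are assumptions for illustration only.
def validate_spec(spec) -> list:
    """Return a list of human-readable error messages (empty list = valid)."""
    if not isinstance(spec, dict):
        return ["top level of the YAML spec must be a mapping"]
    errors = []
    for key in ("package", "filetypes"):          # assumed required keys
        if key not in spec:
            errors.append(f"missing required key: '{key}'")
    for i, ft in enumerate(spec.get("filetypes", [])):
        if "name" not in ft:
            errors.append(f"filetypes[{i}] is missing a 'name'")
    return errors

print(validate_spec({"package": "demo"}))
# → ["missing required key: 'filetypes'"]
```

Collecting all errors into a list (rather than raising on the first one) lets the user fix an entire spec file in one pass.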

Niviz Rater

  1. Unit tests
  2. Several UX feature additions
  3. Basic feature additions to one day support collaborative QC rating and analysis (e.g., inter-rater reliability)
  4. Maintenance (packaging)
  5. Documentation: Tutorials, getting started, API
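On the inter-rater reliability goal: Cohen’s kappa is one common agreement metric such a feature could report. A pure-Python sketch follows; this is not an existing niviz-rater feature, just an illustration of the kind of analysis collaborative ratings would enable.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed proportion of items the raters agree on
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected chance agreement from each rater's label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass"]
print(round(cohens_kappa(a, b), 3))  # → 0.5
```

(The sketch ignores the degenerate case where expected agreement is 1; a production version would need to handle that division by zero.)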

Good first issues

Issues can be found under:

Look for the good first issue label for easy topics!

Communication channels

We’ll probably create our own channel if this picks up interest :)


The repositories are primarily written in Python and JavaScript. These components are mostly independent from one another, so you don’t need to know both!






Familiarity with the Svelte framework is preferred for front-end work. I’m still learning it myself!

Onboarding documentation


What will participants learn?

Depending on which repository you contribute to:

Niviz:

  1. Python unit testing
  2. nipype interface development
  3. niworkflows image generation

Niviz Rater:

  1. The Svelte JavaScript framework
  2. Bottle for building Python web applications
  3. Light database work with peewee
  4. Python packaging
  5. Unit testing

Data to use

As part of contributing to the documentation efforts of this project, we’d like to host some OSF sample data:

  1. Some image data from a pipeline like fMRIPrep
  2. Some QC image data so that users can play around with writing a YAML specification file and using the QC interface

Number of collaborators


Credit to collaborators

Project contributors will be credited using the GitHub all-contributors bot. I’m still setting this up 🙈




coding_methods, documentation, visualization

Development status

1_basic structure


data_visualisation, other


Nipype, other

Programming language

documentation, Python, html_css, javascript



Git skills

1_commit_push, 2_branches_PRs

Anything else?

No response
