BIOMAG 2016

Data Analysis Competition

The schedule outline for the challenge is as follows:
• Kickoff: 9th May
• Submission deadline: 20th September (extended from 31st August)
• Evaluation of results: 25th September

Competition 1: Does Pre-Stimulus Brain Activity Predict Conscious Awareness?

Organizers: Mike X Cohen, Karim Jerbi, Matias Palva

Brain activity before a sensory stimulus was traditionally considered "noise" to be averaged out. It has become clear, however, that this view is not tenable: there are meaningful pre-stimulus dynamics that predict behavioral and neural responses to upcoming stimuli. Posterior alpha oscillations are one prominent example of how pre-stimulus activity predicts stimulus-related responses; are there other features of pre-stimulus brain dynamics that we don't (but should) know about?

The goal of this data analysis competition is to identify features of pre-stimulus MEG data that are relevant for predicting conscious awareness of sensory stimulation. We welcome improvements over existing methods and/or novel methods.

Data from a recently published MEG study are provided. In the study, human participants received weak somatosensory stimulation and reported when they became aware of it. Both epoched and continuous data are available, at the sensor and source levels. Submissions must include analyses of at least one dataset as a proof of principle, and may optionally include analyses of all 12 datasets as a demonstration of reproducibility. Each of the 12 datasets comprises 2 x 30 minutes of continuous data (test + retest).
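To make the task concrete, below is a minimal Matlab sketch of the classic baseline mentioned above: computing single-trial pre-stimulus alpha power and measuring how well it separates aware from unaware trials. The variable names, sampling rate, and data layout are assumptions for illustration only, not the actual file format of the provided data.

% Minimal sketch, assuming (hypothetically) that each epoched dataset gives:
%   epochs - channels x time x trials array of pre-stimulus sensor data
%   aware  - trials x 1 vector, 1 = stimulus reported, 0 = not reported
fs   = 1000;                              % assumed sampling rate (Hz)
band = [8 12];                            % posterior alpha band
[b,a] = butter(4, band/(fs/2), 'bandpass');

alphaPow = zeros(size(epochs,3), 1);
for k = 1:size(epochs,3)
    x = filtfilt(b, a, double(epochs(:,:,k))');  % filter along time
    alphaPow(k) = mean(x(:).^2);                 % mean alpha power on this trial
end

% How well does pre-stimulus alpha power predict awareness?
% (perfcurve requires the Statistics and Machine Learning Toolbox)
[~,~,~,auc] = perfcurve(aware, alphaPow, 1);

Submissions would of course go beyond this baseline, which is included only to illustrate the prediction problem being posed.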

Competition 2: A simulation benchmark of EEG- and MEG-based brain connectivity estimation pipelines

Organizers: Stefan Haufe, Arne Ewald

Brain connectivity estimation using electro-/magnetoencephalography (EEG/MEG) is an emerging field with important potential applications in basic neuroscience and clinical research. However, there is currently a mismatch between the vast number of studies conducted and the degree to which the employed analyses are theoretically understood and empirically validated. We have developed a Matlab-based framework for simulating realistic pseudo-EEG/MEG data with known ground truth. This framework makes it possible to validate connectivity estimation pipelines as a whole, where a pipeline may be composed of separate pre-processing, inverse source reconstruction, and connectivity estimation steps.

For BIOMAG 2016, we propose a data analysis challenge in which the goal is to detect the simplest non-trivial case of brain interaction from data simulated under a realistic physical model. We restrict ourselves to two alpha-band sources that are randomly placed in two different brain octants. Sources vary in their location on the cortical manifold, their spatial extent, and their depth. With equal probability, the sources are either non-interacting or exhibit unidirectional linear information flow. Realistic biological noise with a 1/f spectrum is added.
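To illustrate the generative setup, here is a toy Matlab sketch of two alpha-band source time series with optional unidirectional coupling and 1/f background noise. The sampling rate and coupling delay are invented for illustration, and the actual framework additionally projects the sources to the sensors through a realistic volume-conduction (leadfield) model.

% Illustrative toy version of the simulation (not the organizers' code):
fs = 200; n = 60*fs;                       % assumed: 60 s at 200 Hz
[b,a] = butter(2, [8 12]/(fs/2), 'bandpass');
s1 = filtfilt(b, a, randn(n,1));           % alpha-band "sending" source

interacting = rand < 0.5;                  % interaction with equal probability
lagged = [zeros(5,1); s1(1:end-5)];        % unidirectional, 5-sample delay
s2 = filtfilt(b, a, randn(n,1)) + interacting * 0.5 * lagged;  % "receiver"

% Pink (1/f) background noise via spectral amplitude shaping
freq  = [0:floor(n/2), -(ceil(n/2)-1):-1]';   % FFT bin frequencies
w     = 1 ./ sqrt(max(abs(freq), 1));         % 1/f power = 1/sqrt(f) amplitude
noise = real(ifft(fft(randn(n,1)) .* w));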

We have generated 100 instances of a combined EEG/MEG dataset and pose the following questions:

1. Localization: In which two brain octants are the alpha sources located?
2. Interaction: Does the dataset contain interaction between the two alpha sources at all?
3. Directionality: If so, which estimated source octant contains the sending source, and which the receiving source?

Participants of the challenge need to provide their answers as a Matlab structure.
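For concreteness, a hypothetical sketch of such a structure is shown below; the required field names and format will be specified by the organizers and may differ.

% Hypothetical answer structure (field names invented for illustration)
answers = struct([]);
for i = 1:100
    answers(i).octants     = [3 7];   % the two estimated source octants
    answers(i).interaction = true;    % is there any interaction?
    answers(i).direction   = [3 7];   % [sender octant, receiver octant]
end
save('answers.mat', 'answers');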

The performance will be measured and announced at the challenge.

Participants submitting their executable Matlab code and a description of their approach alongside the results are eligible for small prizes, which will be awarded in the categories EEG-only, MEG-only, and combined EEG/MEG. With this challenge we hope to contribute to establishing a culture of validation in EEG/MEG-based brain connectivity estimation, and to help elucidate the potential and limitations of EEG/MEG-based connectivity analysis in general and of specific approaches in particular.

Note that for this benchmark we aimed for a trade-off between a model that is simple enough to be easily validated, yet complex enough to include a non-trivial case of brain interaction and to comply with what we know about the physics of volume conduction. Alternative ways to model EEG/MEG data, noise, and interactions can, however, easily be included in our simulation framework for future studies.

Competition 3: Single-trial classification of event-related fields: detection of happy faces

Organizers: Hubert Cecotti, Jose Sanchez Bornot, and Girijesh Prasad

In this data analysis competition, we consider an MEG experiment in which subjects paid attention to a stream of images presented at a rate of 1 Hz (stimulus onset asynchrony = 1000 ms, stimulus duration = 333 ms). Each block of 12 images contains faces of the same person with different facial expressions, corresponding to 6 classes: anger, disgust, fear, neutrality, sadness, and happiness. The subjects' task was to detect the presence of happy faces by pressing a button. The goal of the data analysis is to detect the presence of a happy face using only the MEG signal and the stimulus onsets.

The data from four healthy adult subjects (mean age = 33.8, 3 males) are provided, pre-processed using band-pass filtering and MaxFilter.
The original data were acquired at 1 kHz with a 306-channel MEG system (Elekta Neuromag) comprising 204 planar gradiometers and 102 magnetometers. The signal was band-pass filtered between 0.1 and 41 Hz and downsampled to 125 Hz.
The data are processed and available directly in Matlab format, containing:

• the signal (planardat)
• the stimulus onsets from the images and the behavioral responses (triggers):
  - t1: anger (non-target)
  - t2: disgust (non-target)
  - t3: fear (non-target)
  - t4: happiness (target)
  - t5: neutrality (non-target)
  - t6: sadness (non-target)
  - behavioral responses
• test: triggers relative to the test data (unknown labels)
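As a minimal sketch of how such data might be handled, assuming one .mat file per subject and that triggers is a structure with fields t1 to t6 holding stimulus-onset sample indices (the file name and exact layout here are assumptions):

S   = load('subject01.mat');          % assumed name; provides planardat, triggers, test
fs  = 125;                            % sampling rate after downsampling (Hz)
win = 0:round(0.8*fs);                % 0-800 ms post-stimulus window (samples)

onsets = S.triggers.t4;               % e.g. onsets of happy faces (targets)
epochs = zeros(size(S.planardat,1), numel(win), numel(onsets));
for k = 1:numel(onsets)
    epochs(:,:,k) = S.planardat(:, onsets(k) + win);  % cut one epoch
end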

For the first part, only half of the labels for each subject will be available for training a classifier; the second half of the labels will be used to assess the methods. The evaluation will be performed using the area under the ROC curve (AUC).

Participants will have to provide a complete description of their methods.
The output should be a vector of real numbers corresponding to the classifier outputs for the test triggers (in the same order).
Participants are therefore not required to provide their program or code, but they are required to provide a detailed description of the method used for both training and testing.
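For illustration, here is a minimal sketch of producing such an output vector, assuming feature matrices trainX/testX and a label vector trainY have already been extracted (for example from epochs as in the sketch above); all names are hypothetical.

mdl    = fitcdiscr(trainX, trainY);   % e.g. a simple LDA classifier
[~, p] = predict(mdl, testX);         % class posterior probabilities
scores = p(:, 2);                     % one real number per test trigger, in the
                                      % same order (assumes the target class is
                                      % second in mdl.ClassNames)
% The organizers can then score the submission against the held-out labels:
% [~,~,~,auc] = perfcurve(testY, scores, 1);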