Extended Data Fig. 3: Motion-correction procedure. | Nature

From: Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons

We developed a custom motion-correction procedure to compensate for both non-rigid slow drift of the field of view (timescale of tens of minutes) and non-rigid fast motion (timescale of tens of milliseconds). Importantly, the procedure avoids any use of interpolation, which can produce artefacts. The procedure consists of the following main steps. (1) Blue box. The entire movie is divided into non-overlapping 50-s clips; within each clip we perform rigid motion correction using standard cross-correlation methods (on the red channel). The template for each clip is calculated by dividing the clip into non-overlapping sections of 100 frames, computing the mean image of each section, and taking the median of the mean images. (2) Red box. We use a non-rigid image-registration algorithm to align all the templates; the algorithm outputs shift parameters for every pixel of every template. Separately, we manually draw patches that include neurons of interest in the first template. For each template, we use the shift parameters of all the pixels in each patch to estimate the average motion of the patch, and use that estimate to crop the patch from each 50-s clip of the movie. (3) Orange box. We perform rigid motion correction (as above) on the concatenated patch movies, down-sample by a factor of two (to increase the signal strength) and then perform rigid motion correction again. (4) Green box. We extract the patch templates as mean projections and hand-draw ROIs of the neurons. See Methods for a detailed explanation of the motion-correction algorithm, and Supplementary Video 2 for an example video before and after correction. Code is available at https://github.com/benengx/Deep-Brain-Motion-Corr.
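Step 1 (blue box) can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' released code at the GitHub link above; the function names (`clip_template`, `rigid_shift`, `correct_clip`) are our own. Consistent with the stated avoidance of interpolation, all shifts are whole-pixel `np.roll`s, and the rigid shift is read off the peak of an FFT-based cross-correlation:

```python
import numpy as np

def clip_template(clip, section=100):
    """Template for one clip: median of the mean images of
    non-overlapping `section`-frame sections (step 1 of the caption)."""
    n = (len(clip) // section) * section
    sections = clip[:n].reshape(-1, section, *clip.shape[1:])
    return np.median(sections.mean(axis=1), axis=0)

def rigid_shift(frame, template):
    """Integer (dy, dx) that aligns `frame` to `template`, taken from the
    peak of the FFT-based circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    h, w = frame.shape
    # map wrapped peak positions to signed shifts (e.g. 29 on a 32-px axis -> -3)
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def correct_clip(clip, section=100):
    """Interpolation-free rigid correction: whole-pixel shifts via np.roll."""
    template = clip_template(clip, section)
    return np.stack([np.roll(f, rigid_shift(f, template), axis=(0, 1))
                     for f in clip])
```

Because the shifts are integer rolls rather than sub-pixel resampling, pixel values pass through unchanged, which is the property the caption emphasizes.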
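Step 2's patch cropping, averaging the non-rigid shift field over a hand-drawn patch and cropping by that whole-pixel offset, might look like the sketch below. The `(y0, y1, x0, x1)` box format and the `(H, W, 2)` shift-field layout are assumptions for illustration, not the paper's actual data structures:

```python
import numpy as np

def crop_patch(clip, patch_box, shift_field):
    """Crop one patch from a 50-s clip, offset by the patch's average motion.

    patch_box: (y0, y1, x0, x1) drawn in the first template (assumed format).
    shift_field: (H, W, 2) per-pixel (dy, dx) shifts mapping this clip's
    template onto the first template (assumed output of the non-rigid step).
    """
    y0, y1, x0, x1 = patch_box
    # mean shift over the patch, rounded to whole pixels -> no interpolation
    dy, dx = np.round(shift_field[y0:y1, x0:x1].mean(axis=(0, 1))).astype(int)
    return clip[:, y0 + dy:y1 + dy, x0 + dx:x1 + dx]
```

Rounding the average shift to whole pixels keeps this step interpolation-free as well; the residual sub-pixel and fast motion is handled by the rigid passes in step 3.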
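The caption does not specify how the factor-of-two down-sampling in step 3 is performed; one plausible reading is 2x2 spatial binning, which boosts per-pixel signal before the second rigid pass. A minimal sketch under that assumption:

```python
import numpy as np

def downsample2(movie):
    """2x2 spatial binning by averaging (one plausible reading of
    'down-sample by a factor of two'; temporal binning would be analogous)."""
    t, h, w = movie.shape
    m = movie[:, :h // 2 * 2, :w // 2 * 2]  # trim odd edges
    return m.reshape(t, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
```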
