
Real-time 3D movement correction for two-photon imaging in behaving animals

Abstract

Two-photon microscopy is widely used to investigate brain function across multiple spatial scales. However, measurements of neural activity are compromised by brain movement in behaving animals. Brain motion-induced artifacts are typically corrected using post hoc processing of two-dimensional images, but this approach is slow and does not correct for axial movements. Moreover, the deleterious effects of brain movement on high-speed imaging of small regions of interest and photostimulation cannot be corrected post hoc. To address this problem, we combined random-access three-dimensional (3D) laser scanning using an acousto-optic lens and rapid closed-loop field programmable gate array processing to track 3D brain movement and correct motion artifacts in real time at up to 1 kHz. Our recordings from synapses, dendrites and large neuronal populations in behaving mice and zebrafish demonstrate real-time movement-corrected 3D two-photon imaging with submicrometer precision.


Fig. 1: Design and performance of RT-3DMC system.
Fig. 2: Monitoring and compensating for brain movement in behaving mice.
Fig. 3: Longitudinal imaging using RT-3DMC.
Fig. 4: High-speed recordings of somatic, dendritic and spine activity during locomotion.
Fig. 5: Monitoring and compensating for brain movement in partially tethered larval zebrafish.
Fig. 6: Functional imaging in behaving zebrafish larvae.

Data availability

Source data for Figs. 1–6 are available online. Processed data and the code required to generate Figs. 1–6 and Extended Data Figs. 4–7 and 9 are available on FigShare49. Raw unprocessed data have a complex structure (>20,000 files, >1,000 folders, >100 GB), but the original data can be made available upon request.

Code availability

The SilverLab Imaging Software together with an application programming interface (API) is available on GitHub at https://github.com/SilverLabUCL/SilverLab-Microscope.

References

  1. Helmchen, F. & Denk, W. Deep tissue two-photon microscopy. Nat. Methods 2, 932–940 (2005).

  2. Svoboda, K. & Yasuda, R. Principles of two-photon excitation microscopy and its applications to neuroscience. Neuron 50, 823–839 (2006).

  3. Froudarakis, E. et al. Population code in mouse V1 facilitates readout of natural scenes through increased sparseness. Nat. Neurosci. 17, 851–857 (2014).

  4. Nadella, K. M. N. S. et al. Random-access scanning microscopy for 3D imaging in awake behaving animals. Nat. Methods 13, 1001–1004 (2016).

  5. Iyer, V., Hoogland, T. M. & Saggau, P. Fast functional imaging of single neurons using random-access multiphoton (RAMP) microscopy. J. Neurophysiol. 95, 535–545 (2006).

  6. Katona, G. et al. Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes. Nat. Methods 9, 201–208 (2012).

  7. Szalay, G. et al. Fast 3D imaging of spine, dendritic, and neuronal assemblies in behaving animals. Neuron 92, 723–738 (2016).

  8. Cotton, R. J., Froudarakis, E., Storer, P., Saggau, P. & Tolias, A. S. Three-dimensional mapping of microcircuit correlation structure. Front. Neural Circuits 7, 151 (2013).

  9. Yang, W. et al. Simultaneous multi-plane imaging of neural circuits. Neuron 89, 269–284 (2016).

  10. Ji, N., Freeman, J. & Smith, S. L. Technologies for imaging neural activity in large volumes. Nat. Neurosci. 19, 1154–1164 (2016).

  11. Kong, L. et al. Continuous volumetric imaging via an optical phase-locked ultrasound lens. Nat. Methods 12, 759–762 (2015).

  12. Fernández-Alfonso, T. et al. Monitoring synaptic and neuronal activity in 3D with synthetic and genetic indicators using a compact acousto-optic lens two-photon microscope. J. Neurosci. Methods 222, 69–81 (2014).

  13. Dombeck, D. A., Khabbaz, A. N., Collman, F., Adelman, T. L. & Tank, D. W. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron 56, 43–57 (2007).

  14. Greenberg, D. S. & Kerr, J. N. D. Automated correction of fast motion artifacts for two-photon imaging of awake animals. J. Neurosci. Methods 176, 1–15 (2009).

  15. Chen, J. L., Pfäffli, O. A., Voigt, F. F., Margolis, D. J. & Helmchen, F. Online correction of licking-induced brain motion during two-photon imaging with a tunable lens. J. Physiol. 591, 4689–4698 (2013).

  16. Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).

  17. Paukert, M. & Bergles, D. E. Reduction of motion artifacts during in vivo two-photon imaging of brain through heartbeat triggered scanning. J. Physiol. 590, 2955–2963 (2012).

  18. Laffray, S. et al. Adaptive movement compensation for in vivo imaging of fast cellular dynamics within a moving tissue. PLoS ONE 6, e19928 (2011).

  19. Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. Efficient subpixel image registration algorithms. Opt. Lett. 33, 156–158 (2008).

  20. Yang, W. & Yuste, R. Holographic imaging and photostimulation of neural activity. Curr. Opin. Neurobiol. 50, 211–221 (2018).

  21. Packer, A. M., Russell, L. E., Dalgleish, H. W. P. & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12, 140–146 (2015).

  22. Rickgauer, J. P., Deisseroth, K. & Tank, D. W. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nat. Neurosci. 17, 1816–1824 (2014).

  23. Mardinly, A. R. et al. Precise multimodal optical control of neural ensemble activity. Nat. Neurosci. 21, 881–893 (2018).

  24. Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).

  25. Mitani, A. & Komiyama, T. Real-time processing of two-photon calcium imaging data including lateral motion artifact correction. Front. Neuroinform. 12, 1–13 (2018).

  26. Haesemeyer, M., Robson, D. N., Li, J. M., Schier, A. F. & Engert, F. A brain-wide circuit model of heat-evoked swimming behavior in larval zebrafish. Neuron 98, 817–831 (2018).

  27. Karagyozov, D., Mihovilovic Skanata, M., Lesar, A. & Gershow, M. Recording neural activity in unrestrained animals with three-dimensional tracking two-photon microscopy. Cell Rep. 25, 1371–1383 (2018).

  28. Kirkby, P. A., Srinivas Nadella, K. M. N. & Silver, R. A. A compact acousto-optic lens for 2D and 3D femtosecond-based 2-photon microscopy. Opt. Express 18, 13720–13745 (2010).

  29. Konstantinou, G. et al. Dynamic wavefront shaping with an acousto-optic lens for laser scanning microscopy. Opt. Express 24, 6283–6299 (2016).

  30. Chen, T. W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).

  31. Kim, D. H. et al. Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish. Nat. Methods 14, 1107–1114 (2017).

  32. Nguyen, J. P. et al. Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans. Proc. Natl Acad. Sci. USA 113, E1074–E1081 (2016).

  33. Venkatachalam, V. et al. Pan-neuronal imaging in roaming Caenorhabditis elegans. Proc. Natl Acad. Sci. USA 113, E1082–E1088 (2016).

  34. Yang, H. H. H. et al. Subcellular imaging of voltage and calcium signals reveals neural processing in vivo. Cell 166, 245–257 (2016).

  35. Badura, A., Sun, X. R., Giovannucci, A., Lynch, L. A. & Wang, S. S.-H. Fast calcium sensor proteins for monitoring neural activity. Neurophotonics 1, 025008 (2014).

  36. Helassa, N. et al. Ultrafast glutamate sensors resolve high-frequency release at Schaffer collateral synapses. Proc. Natl Acad. Sci. USA 115, 5594–5599 (2018).

  37. Marvin, J. S. et al. Stability, affinity, and chromatic variants of the glutamate sensor iGluSnFR. Nat. Methods 15, 936–939 (2018).

  38. Noguchi, J. et al. In vivo two-photon uncaging of glutamate revealing the structure–function relationships of dendritic spines in the neocortex of adult mice. J. Physiol. 589, 2447–2457 (2011).

  39. Hernandez, O. et al. Three-dimensional spatiotemporal focusing of holographic patterns. Nat. Commun. 7, 11928 (2016).

  40. Zhang, Z., Russell, L. E., Packer, A. M., Gauld, O. M. & Häusser, M. Closed-loop all-optical interrogation of neural circuits in vivo. Nat. Methods 15, 1037–1040 (2018).

  41. Vladimirov, N. et al. Light-sheet functional imaging in fictively behaving zebrafish. Nat. Methods 11, 883–884 (2014).

  42. Lister, J. A., Robertson, C. P., Lepage, T., Johnson, S. L. & Raible, D. W. Nacre encodes a zebrafish microphthalmia-related protein that regulates neural-crest-derived pigment cell fate. Development 126, 3757–3767 (1999).

  43. Bianco, I. H. & Engert, F. Visuomotor transformations underlying hunting behavior in zebrafish. Curr. Biol. 25, 831–846 (2015).

  44. Brainard, D. H. The Psychophysics Toolbox. Spat. Vis. 10, 433–436 (1997).

  45. Bianco, I. H., Kampff, A. R. & Engert, F. Prey capture behavior evoked by simple visual stimuli in larval zebrafish. Front. Syst. Neurosci. 5, 1–13 (2011).

  46. Ziegler, J. G. & Nichols, N. B. Optimum settings for automatic controllers. Trans. ASME 64, 759–768 (1942).

  47. Deneux, T. et al. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo. Nat. Commun. 7, 12190 (2016).

  48. Friedrich, J., Zhou, P. & Paninski, L. Fast online deconvolution of calcium imaging data. PLoS Comput. Biol. 13, e1005423 (2017).

  49. Valera, A., Griffiths, V. & Silver, A. Real-time 3D movement correction for two-photon imaging in behaving animals. https://doi.org/10.5522/04/11949063 (2020).

Acknowledgements

Research reported in this publication was supported by the Wellcome Trust (095667; 203048 to R.A.S.), the ERC (294667 to R.A.S.) and the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under award numbers U01NS099689 and U01NS113273 (to R.A.S.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. R.A.S. is in receipt of a Wellcome Trust Principal Research Fellowship in Basic Biomedical Science (203048). V.G. was supported in part by a UCL/EPSRC Case studentship with Gooch and Housego. I.H.B. and J.Y.N.L. were funded by the Wellcome Trust and a Royal Society Sir Henry Dale Fellowship (101195). T.J.Y. was supported by postdoctoral fellowships from the Human Frontier Science Program and the Marie Skłodowska-Curie Actions of the EC (707511). C.B. was funded by the Wellcome Trust PhD programme (097266). B.M. was funded by grant 2018/20277-0, São Paulo Research Foundation (FAPESP). We acknowledge the GENIE Program and the Janelia Research Campus, Howard Hughes Medical Institute for making the GCaMP6 material available. We thank T. Fernández-Alfonso, H. Gurnani and L. Justus for comments on the manuscript.

Author information

Contributions

V.G. and R.A.S. conceived the project and R.A.S. supervised the work. V.G., G.K., K.M.N.S.N., P.A.K. and R.A.S. designed the RT-3DMC system. V.G. and G.K. developed the closed-loop FPGA code. A.V., B.M. and G.E. developed the MATLAB imaging software. P.A.K., K.M.N.S.N., G.K. and R.A.S. designed and built the AOL microscope. K.M.N.S.N., T.K. and S.A.P. developed and tested the LabVIEW imaging software. V.G., A.V. and R.A.S. designed the experiments on mice. D.C., T.Y., H.R. and C.B. performed viral transduction and surgeries. A.V., H.R., T.Y. and C.B. performed the mouse experiments. J.Y.N.L., A.V., I.H.B. and R.A.S. designed the experiments on fish. J.Y.N.L. and A.V. performed the fish experiments. A.V. and V.G. analyzed the data and A.V. created most of the figures. V.G., A.V. and R.A.S. wrote the manuscript with contributions from all authors.

Corresponding author

Correspondence to R. Angus Silver.

Ethics declarations

Competing interests

A.V., J.Y.N.L., B.M., I.H.B., H.R., C.B., D.C., G.J.E., T.J.Y. and T.K. declare no competing interests. P.A.K., K.M.N.S.N. and R.A.S. are named inventors on patents owned by UCL Business relating to linear and nonlinear AOL 3D laser scanning technology. P.A.K., K.M.N.S.N., G.K. and V.G. have a financial interest in Agile Diffraction Ltd., which aims to commercialize the AOL and real-time 3DMC technology.

Additional information

Peer review information Nina Vogt was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Two-photon microscope and real-time 3D movement correction system.

a Schematic diagram of the acousto-optic lens (AOL) microscope and the FPGA-based closed-loop control and acquisition system for real-time 3D movement correction (RT-3DMC). Scanning instructions are sent from the host PC to the AOL controller for imaging and reference scans. The acquisition FPGA estimates brain movement with centroid analysis and implements a proportional–integral–derivative (PID) controller. A fast serial link sends the estimated movement information to the controller, which uses it to modify the AOL acoustic drives to correct the imaging for brain motion. b Architecture of the acquisition FPGA (blue) and the AOL control system FPGA (green) for reference tracking and motion-corrected imaging. The acquisition FPGA contains a state machine to integrate pixels from image data and motion-tracking reference frames. The averaging logic down-samples four 1.25-ns samples to a single value at 200 MHz, and pixel integration adds these over the pixel dwell time (controlled by the state machine). During acquisition, the imaging data are sent to the host PC via the PXIe interface. When reference frames are being acquired, the pixels are scaled and thresholded prior to centroid analysis by the movement correction (MC) logic (dotted-line box). Updated offset errors are fed to a PID controller that estimates the optimal sub-pixel offset to send to the AOL controller via a serial protocol interface (SPI). The AOL controller is initialized by sending record parameters for each point or line scan via a Gb Ethernet interface. The Record Load Logic parses the record protocol to store the records for reference imaging or functional imaging in on-chip memory. The host configures the AOL controller to perform the desired mode of operation, such as random-access line scanning or pointing with or without RT-3DMC. On receiving a trigger from the acquisition FPGA, the AOL Command Control logic block on the AOL control FPGA reads records from on-chip memory in the configured pattern. The records are then passed to the Movement Correction Pixel to Frequency Conversion block, which adjusts the records to compensate for 3D motion when in RT-3DMC mode. The radio frequency (RF) generator block uses the records to generate the required acoustic frequency drive waveforms, which are then sent to the four AOL crystals after amplification. In RT-3DMC mode, an interrupt from the reference scan handshake signal causes a reference frame to be scanned. Also in RT-3DMC mode, when a new offset is received via the X, Y, Z pixel correction serial interface, the new offset is applied to each record prior to synthesis by the Movement Correction Pixel to Frequency Conversion block, which calculates the required frequency offset for each line scan or point to correctly track the lateral and axial motion.
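The core of the tracking loop described above — thresholded centroid analysis of a reference patch, with the resulting offset error filtered by a PID controller — can be illustrated in software. The following is a minimal sketch of the principle in Python, not the authors' FPGA implementation; the patch size, threshold and gains are hypothetical:

```python
import numpy as np

def centroid_offset(patch, threshold):
    """Offset (dx, dy, in pixels) of the reference object's intensity
    centroid from the patch center, after thresholding out background."""
    img = np.where(patch > threshold, patch, 0.0)
    total = img.sum()
    if total == 0:
        return None  # reference object lost; hold the previous correction
    ys, xs = np.indices(img.shape)
    cy = (ys * img).sum() / total
    cx = (xs * img).sum() / total
    return cx - (img.shape[1] - 1) / 2, cy - (img.shape[0] - 1) / 2

class PID:
    """Discrete PID controller producing a sub-pixel correction
    from the measured offset error on each reference-scan cycle."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a bright object 2 pixels right of center in an 11 x 11 patch.
patch = np.zeros((11, 11))
patch[5, 7] = 1.0
dx, dy = centroid_offset(patch, threshold=0.5)  # dx = 2.0, dy = 0.0
correction = PID(kp=0.8, ki=0.1, kd=0.01, dt=2e-3).update(dx)
```

In the real system one such controller per axis (X, Y, Z) would convert these sub-pixel offsets into acoustic frequency offsets for the AOL drives.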

Extended Data Fig. 2 Real-time 3D movement correction performance and AOL microscope field of view.

a Example of 1- and 5-μm fluorescent beads distributed in agarose in a 400 × 400 μm FOV using the maximum scan angle for the AOL with an Olympus XLUMPlanFLN 20X objective lens. The fall-off in image intensity in the corners is due to reduced AOL light-transmission efficiency at large scan angles. b Example of a maximum-intensity projection of a 400 × 400 × 400 μm Z-stack of localised expression of tdTomato (magenta) and GCaMP6f (green) in L2/3 pyramidal cells in motor cortex. Note that expression levels were higher at the center of the FOV, contributing to a larger fall-off in brightness (n = 4 mice). c The trade-off between the imaging overhead of RT-3DMC and the feedback period for small 10 × 10 pixel reference patches with a single axial scan and larger 18 × 18 pixel patches with 3 axial scans. The dotted lines show that the overhead for a reference scan cycle of 2 ms can vary between 17% and 30%. d Relationship between the maximum error and the feedback time in mouse and zebrafish. The graph assumes a maximum brain speed of 0.34 μm/ms in the mouse and a maximum speed of 1.02 μm/ms for 99.5% of swimming bouts in the zebrafish. To maintain a sub-micrometer error (dotted line), the feedback time should be less than 3 ms in the mouse and 1 ms in the zebrafish.
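The feedback-time requirement in panel d follows from a simple bound: the worst-case tracking error is the peak brain speed multiplied by the feedback period (the brain can drift for one full cycle before the next correction lands). A quick check against the quoted numbers:

```python
def max_error_um(peak_speed_um_per_ms, feedback_ms):
    """Worst-case tracking error: the brain moves at peak speed for one
    full feedback period before the next correction is applied."""
    return peak_speed_um_per_ms * feedback_ms

# Mouse: 0.34 um/ms peak speed, 3 ms feedback -> ~1.0 um error bound.
print(max_error_um(0.34, 3.0))
# Zebrafish: 1.02 um/ms peak speed, 1 ms feedback -> ~1.0 um error bound.
print(max_error_um(1.02, 1.0))
```

This is why the zebrafish, with roughly three times the peak brain speed, needs roughly a third of the feedback period to stay sub-micrometer.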

Extended Data Fig. 3 Characterisation of XY movement and frequency spectrum of brain movement.

a Image of cerebellar molecular layer interneurons expressing GFP used to determine brain movement. The white square shows the selected soma that was used as a reference. b X (green) and Y (purple) movement of the soma. The X axis is roughly aligned to the rostral–caudal plane (similar results were obtained from 9 experiments on n = 1 mouse). c Power spectrum analysis of X (green) and Y (purple) motion of the cerebellum in a head-fixed mouse free to run on a treadmill. Inset shows frequencies of < 1 Hz (similar results were obtained from 9 experiments on n = 1 mouse). d Image of pyramidal cells in L2/3 of visual cortex expressing tdTomato. The white frame shows a soma used to track movement. e As for b but for visual cortex (similar results were obtained from 13 experiments on n = 1 mouse). f As for c but for visual cortex (similar results were obtained from 13 experiments on n = 1 mouse).

Extended Data Fig. 4 Performance of real-time 3D movement correction with a 0.8 NA 40X objective.

Left: Tracking of a 1-μm diameter bead on a piezoelectric stage driven with a sinusoidal drive at 5 Hz. The absolute reference bead displacement (blue) and the Intercycle Reference Displacement (IRD, red) during RT-3DMC for lateral oscillations (top). Right: Relationship between mean IRD and bead oscillation frequency for different peak-to-peak amplitudes of sinusoidal lateral (blue) and axial (grey) displacements (n = 5 different reference beads, mean ± s.e.m.). Note that for the lateral motion of 40 μm at 20 Hz, 3 out of 5 reference beads lost tracking.

Extended Data Fig. 5 Performance of real-time 3D movement correction for 3D random access point measurements.

a Top: Schematic diagram of imaging 1-μm fluorescent beads embedded in agarose, mounted on a piezoelectric driven microscope stage oscillating at 5 to 20 Hz in the axial direction. Bottom: Random access point measurements for 10 beads (light traces) in the 3D volume, and the average signal (dark trace) without (red) and with (black) RT-3DMC. Dotted blue line corresponds to background fluorescence level, indicating when the imaging is not pointing at the bead. b Top: as for a, but for lateral XY oscillations. Bottom: as for a, but for lateral XY oscillations. Without RT-3DMC (red) the beads move in and out of the focused laser beam. With RT-3DMC (black) the laser beam foci move with the beads thereby giving an intensity signal above background. The residual noise with RT-3DMC reflects small residual movements combined with the sharp intensity falloff of 1-μm beads.

Extended Data Fig. 6 Axial correction of movement with 20X and 40X objective lenses.

a Using a 20X lens. Left: distribution of residual Z displacement, estimated by tracking the center of the cell, with RT-3DMC on (black) or off (red), and mean residual movement during periods of locomotion (thick lines, n = 5 mice). Dotted lines indicate the distance below which 95% of the values are located. Right: similar to left but grouped into 1-µm bins to quantify large, infrequent movements (values indicate mean ± s.e.m.). With RT-3DMC, 87.9% of timepoints have a residual error of < 1 µm, whereas without RT-3DMC only about 59% of timepoints are < 1 µm (p = 0.004, Wilcoxon test, n = 11 experiments, 5 animals). Grey circles indicate individual measurements. b Using a 40X lens. Left: distribution of residual Z displacement, estimated by tracking the center of the cell, with RT-3DMC on (black) or off (red), and mean residual movement during periods of locomotion (thick lines, n = 4 mice). Dotted lines indicate the distance below which 95% of the values are located. Right: similar to left but grouped into 1-µm bins to quantify large, infrequent movements (values indicate mean ± s.e.m.). With RT-3DMC, 95% of timepoints have a residual error of < 1 µm, whereas without RT-3DMC about 89% of timepoints have errors < 1 µm. Grey circles indicate individual measurements.

Extended Data Fig. 7 Comparison of beads and somas as reference objects.

a Comparison of tracking performance for fluorescent reference objects with different intensities during locomotion (20X objective, n = 4 mice): 4-μm red fluorescent beads (blue), activity-dependent GCaMP6f somata (green) and activity-independent tdTomato somata (magenta). The grey region indicates the range of object fluorescence below which the object cannot be resolved, owing to a lack of contrast with the background. The transparent green shaded region indicates the dynamic range of GCaMP6f fluorescence, and the horizontal black dashed line indicates the 1-μm uncorrected displacement error (UDE) calculated with post hoc motion detection on 9–10 features. Beads consistently gave the best performance and were typically brighter than somata. Tracking was unstable over time or impossible with some somata, as indicated at the top of the graph (n = 4 mice). b Comparison of tracking performance as a function of depth from the pia for the different fluorescent reference objects during locomotion (20X objective, n = 4 mice). Data as for a (n = 4 mice).

Extended Data Fig. 8 Example of 3D random access point measurement from spines.

a Example of high contrast 3D projection of layer 2/3 pyramidal cell and image of a single selected spine (n = 1). b An example of ΔF/F traces from random access point measurement from 13 spines with (black) and without (red) RT-3DMC together with locomotion speed below each (grey).

Extended Data Fig. 9 Brain movement and real-time 3D movement correction during licking and perioral movements.

a Cartoon of a head-fixed mouse and the arrangement of the water spout. b Power spectrum of X (green) and Y (purple) motion of the motor cortex during licking bouts. Thin green and purple lines show the average per mouse; thick green and purple lines show averages across 6 mice. The inset shows frequencies of < 1 Hz. c Top: images of 4 dendrites (15 × 15 μm) at two timepoints (red star and triangle) during a licking bout, illustrating a process moving out of focus. Bottom: average tdTomato signal from the dendrites shows intensity fluctuations during the licking bout. d GCaMP6f recordings during licking bouts. Top: motion index extracted from the perioral region (blue trace); visually identified licking bouts (red lines below). Center: GCaMP6f fluorescence extracted from active spines (grey ROIs in the middle; GCaMP6f in green, tdTomato in magenta) with RT-3DMC off (left, red) or on (right, black). Bottom: XY UDE (for RT-3DMC off, purple and green lines; left) and RT-3DMC (X, Y, Z: blue, orange, grey, respectively; right) indicate the amount of brain movement during periods of licking (1 of 4 mice). e Distribution of XY UDE values with RT-3DMC on (black) or off (red) and mean (thick lines) during periods of licking (n = 13 experiments, 4 mice). Dotted lines indicate the distance below which 95% of the values are located. f Similar to e but grouped into 1-µm bins to quantify large, infrequent movements. With RT-3DMC, 95.6% of timepoints have a residual error of < 1 µm, whereas only 81.9% do without (n = 4 mice, mean ± s.e.m.). Grey circles indicate individual measurements. g Same as e for XY UDE during all perioral movements (n = 4 mice). h Same as f for XY UDE during all perioral movements. With RT-3DMC, 95.2% of timepoints have a residual error of < 1 µm, whereas only 76.3% do without (n = 4 mice, mean ± s.e.m.). Grey circles indicate individual measurements. i Same as e for Z UDE during all licking bouts (n = 4 mice). j Same as f for Z UDE during licking movements. With RT-3DMC, 91.4% of timepoints have a residual error of < 1 µm, whereas 89.3% do without (n = 4 mice, mean ± s.e.m.). Grey circles indicate individual measurements. k Same as g for Z UDE during all perioral movements (n = 4 mice). l Same as h for Z UDE during all perioral movements. With RT-3DMC, 91.0% of timepoints have a residual error of < 1 µm, whereas 87.8% do without (n = 4 mice, mean ± s.e.m.). Grey circles indicate individual measurements.

Extended Data Fig. 10 Speed improvements with real-time 3D movement correction.

Schematic diagrams comparing the size of the patch versus the subvolume needed to keep the ROIs in the imaging frame with RT-3DMC (left) and without RT-3DMC (right). a Imaging a dendrite in a mouse with a 10 × 25 μm patch of 0.5-μm pixels, each with a dwell time of 0.4 μs, takes Ny × (24.5 + Nx × dwell) μs = 890 μs with RT-3DMC on. Without RT-3DMC the dendrite may move both laterally and axially. Assuming lateral motion of ±5 μm and axial motion of ±4 μm, imaging would require a volume scan of five 20 × 35 μm patches, taking 10,500 μs, to keep the dendrite within the FOV so that post hoc XY and Z correction can be applied. That is about 12× slower than with RT-3DMC. b For zebrafish, patches over selected somata were typically 15 × 15 μm with a pixel size of 1 μm. With RT-3DMC and a dwell time of 0.4 μs, a single patch would take 458 μs. Without RT-3DMC, typical maximum displacements are on the order of ±20 μm both laterally and axially, requiring an imaging volume of 55 × 55 × 40 μm to monitor soma activity. The time to image a suitable subvolume to keep the ROI in the FOV is 53,708 μs, about 117 times slower than with RT-3DMC.
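The mouse timings in this legend follow directly from the per-patch scan-time formula Ny × (24.5 + Nx × dwell) μs, where 24.5 μs is the fixed per-line AOL overhead. A short sketch reproducing them, with pixel counts derived from the stated patch sizes and 0.5-μm pixels:

```python
def patch_scan_time_us(nx, ny, dwell_us=0.4, line_overhead_us=24.5):
    """Time to scan an Nx x Ny pixel patch: each of the Ny lines costs
    a fixed AOL line overhead plus Nx pixel dwell times."""
    return ny * (line_overhead_us + nx * dwell_us)

# RT-3DMC on: one 10 x 25 um dendrite patch at 0.5 um/pixel -> 50 x 20 pixels.
t_on = patch_scan_time_us(nx=50, ny=20)        # 890 us
# RT-3DMC off: five 20 x 35 um patches -> 70 x 40 pixels each.
t_off = 5 * patch_scan_time_us(nx=70, ny=40)   # 10,500 us
print(round(t_off / t_on, 1))                  # prints 11.8, i.e. "about 12x"
```

The 458-μs zebrafish patch is the same formula with a 15 × 15 pixel patch at 1 μm per pixel.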

Supplementary information

Supplementary Information

Supplementary Tables 1 and 2 and Supplementary Notes 1–4.

Reporting Summary

Supplementary Video 1

Lateral motion correction: video of a 1-µm fluorescent bead with 20-µm peak-to-peak stage oscillations in the Y axis at 5 Hz (video speed: 1/20 of acquisition rate) with RT-3DMC on (top) or off (bottom). Same experimental setup as for Fig. 1g,h. Scale bar, 1 µm. Single example from five experiments.

Supplementary Video 2

Axial motion correction: imaging of a 1-µm fluorescent bead with 40-µm peak-to-peak stage oscillations in the Z axis at 5 Hz (video speed: 1/20 of acquisition rate) with RT-3DMC on (left) or off (right). Same experimental setup as for Fig. 1g,h. Scale bar, 1 µm. Single example from five experiments.

Supplementary Video 3

Correction for axial movement in vivo: plane by plane rendering of a volume imaging of a tdTomato-expressing cortical neuron with RT-3DMC on (top) or off (bottom) during mouse locomotion. The cell was initially centered in the volume along the Z axis, with the brightest plane being located at the center of the neuron. Z motion appears as changes in the brightest plane. Scale bar, 5 µm, video speed 3×. Single example from 11 experiments in five animals.

Supplementary Video 4

Range of RT-3DMC correction: video of pyramidal cells in motor cortex expressing GCaMP6f and tdTomato with RT-3DMC on (left) and RT-3DMC off (right) during large (approximately ±50 μm) manual XYZ stage displacements. MC would break at ±74 μm in this example. Scale bar, 20 μm. One example of five measurements.

Supplementary Video 5

RT-3DMC imaging of small structures in vivo: plane imaging of dendrite expressing GCaMP6f with RT-3DMC on (top) or off (bottom) during mouse locomotion. The sample was chosen from seven patch recordings and it exhibited little Z motion. Scale bar, 5 µm, video speed 3×.

Supplementary Video 6

Z stack of GCaMP6f expressing pyramidal cells (green) and containing fluorescent 4-µm beads (red) obtained during locomotion, with (left) or without RT-3DMC (right). Note an equalization filter was applied to reveal dim structures. The animal was running during the acquisition for both stacks. Z spacing between planes is 1 µm, 24 frames average per plane, illumination is 920 nm. Scale bar, 20 µm. This is an example chosen from one of four mice expressing GCaMP6f (green) and tdTomato (magenta) with Z stacks taken every other day for 9 days and 1 month later.

Supplementary Video 7

RT-3DMC imaging of dendritic spines: small 4-µm patches centered around dendritic spines used for 3D-RAP measurement, as in Supplementary Fig. 8. RT-3DMC on (top) or off (bottom) during mouse locomotion. Scale bar, 1 µm. Representative example taken from six experiments on five mice.

Supplementary Video 8

RT-3DMC imaging in behaving zebrafish. Twenty patches of 15 µm were selected across a zebrafish larva forebrain and imaged with RT-3DMC on (left) or off (right). Video shows two swimming events of similar amplitude (speed 0.5×). Scale bar, 10 µm. An example taken from similar experiments on three zebrafish.

Supplementary Video 9

Ten minutes of RT-3DMC imaging using 3D-RAP in behaving zebrafish (22 Hz, 8 µs per soma, 13–17 mW). Points located on 500 somata in the forebrain were selected and imaged with RT-3DMC on. The video (speed 10×) shows the top (left) and side (top right) views of the fish forebrain. Visual stimulus orientation is represented by a white arrow. Center right: swimming activity. Bottom right: X (blue), Y (orange) and Z (gray) MC. Single experiment from a single zebrafish.

Source data

Source Data Fig. 1

Values to regenerate Fig. 1f,i,j.

Source Data Fig. 2

Values to regenerate Fig. 2d–h.

Source Data Fig. 3

Values to regenerate Fig. 3b,d.

Source Data Fig. 4

Values to regenerate Fig. 4e,h.

Source Data Fig. 5

Values to regenerate Fig. 5c,e,f,h,i.

Source Data Fig. 6

Values to regenerate Fig. 6d.

Rights and permissions

Reprints and Permissions

About this article

Cite this article

Griffiths, V.A., Valera, A.M., Lau, J.Y. et al. Real-time 3D movement correction for two-photon imaging in behaving animals. Nat Methods 17, 741–748 (2020). https://doi.org/10.1038/s41592-020-0851-7
