Letter

Coherent encoding of subjective spatial position in visual cortex and hippocampus

Abstract

A major role of vision is to guide navigation, and navigation is strongly driven by vision1,2,3,4. Indeed, the brain’s visual and navigational systems are known to interact5,6, and signals related to position in the environment have been suggested to appear as early as in the visual cortex6,7. Here, to establish the nature of these signals, we recorded in the primary visual cortex (V1) and hippocampal area CA1 while mice traversed a corridor in virtual reality. The corridor contained identical visual landmarks in two positions, so that a purely visual neuron would respond similarly at those positions. Most V1 neurons, however, responded solely or more strongly to the landmarks in one position rather than the other. This modulation of visual responses by spatial location was not explained by factors such as running speed. To assess whether the modulation is related to navigational signals and to the animal’s subjective estimate of position, we trained the mice to lick for a water reward upon reaching a reward zone in the corridor. Neuronal populations in both CA1 and V1 encoded the animal’s position along the corridor, and the errors in their representations were correlated. Moreover, both representations reflected the animal’s subjective estimate of position, inferred from the animal’s licks, better than its actual position. When animals licked in a given location—whether correctly or incorrectly—neural populations in both V1 and CA1 placed the animal in the reward zone. We conclude that visual responses in V1 are controlled by navigational signals, which are coherent with those encoded in hippocampus and reflect the animal’s subjective position. The presence of such navigational signals as early as a primary sensory area suggests that they permeate sensory processing in the cortex.

Fig. 1: Responses in V1 are modulated by spatial position.
Fig. 2: V1 and CA1 neural populations represent spatial positions in the virtual corridor and make correlated errors.
Fig. 3: Positions encoded by visual cortex and hippocampus correlate with the animal’s spatial decisions.

Data availability

The data from this study are available from the corresponding author upon reasonable request.

References

  1. Muller, R. U. & Kubie, J. L. The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J. Neurosci. 7, 1951–1968 (1987).

  2. Wiener, S. I., Korshunov, V. A., Garcia, R. & Berthoz, A. Inertial, substratal and landmark cue control of hippocampal CA1 place cell activity. Eur. J. Neurosci. 7, 2206–2219 (1995).

  3. Chen, G., King, J. A., Burgess, N. & O’Keefe, J. How vision and movement combine in the hippocampal place code. Proc. Natl Acad. Sci. USA 110, 378–383 (2013).

  4. Geiller, T., Fattahi, M., Choi, J.-S. & Royer, S. Place cells are more strongly tied to landmarks in deep than in superficial CA1. Nat. Commun. 8, 14531 (2017).

  5. Ji, D. & Wilson, M. A. Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat. Neurosci. 10, 100–107 (2007).

  6. Haggerty, D. C. & Ji, D. Activities of visual cortical and hippocampal neurons co-fluctuate in freely moving rats during spatial behavior. eLife 4, e08902 (2015).

  7. Fiser, A. et al. Experience-dependent spatial expectations in mouse visual cortex. Nat. Neurosci. 19, 1658–1664 (2016).

  8. Niell, C. M. & Stryker, M. P. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65, 472–479 (2010).

  9. Saleem, A. B., Ayaz, A., Jeffery, K. J., Harris, K. D. & Carandini, M. Integration of visual motion and locomotion in mouse visual cortex. Nat. Neurosci. 16, 1864–1869 (2013).

  10. Ravassard, P. et al. Multisensory control of hippocampal spatiotemporal selectivity. Science 340, 1342–1346 (2013).

  11. Harvey, C. D., Collman, F., Dombeck, D. A. & Tank, D. W. Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 461, 941–946 (2009).

  12. McNaughton, B. L., Barnes, C. A. & O’Keefe, J. The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Exp. Brain Res. 52, 41–49 (1983).

  13. Wiener, S. I., Paul, C. A. & Eichenbaum, H. Spatial and behavioral correlates of hippocampal neuronal activity. J. Neurosci. 9, 2737–2763 (1989).

  14. Czurkó, A., Hirase, H., Csicsvari, J. & Buzsáki, G. Sustained activation of hippocampal pyramidal cells by ‘space clamping’ in a running wheel. Eur. J. Neurosci. 11, 344–352 (1999).

  15. O’Keefe, J. & Speakman, A. Single unit activity in the rat hippocampus during a spatial memory task. Exp. Brain Res. 68, 1–27 (1987).

  16. Lenck-Santini, P. P., Save, E. & Poucet, B. Evidence for a relationship between place-cell spatial firing and spatial memory performance. Hippocampus 11, 377–390 (2001).

  17. Lenck-Santini, P.-P., Muller, R. U., Save, E. & Poucet, B. Relationships between place cell firing fields and navigational decisions by rats. J. Neurosci. 22, 9035–9047 (2002).

  18. Hok, V. et al. Goal-related activity in hippocampal place cells. J. Neurosci. 27, 472–482 (2007).

  19. Rosenzweig, E. S., Redish, A. D., McNaughton, B. L. & Barnes, C. A. Hippocampal map realignment and spatial learning. Nat. Neurosci. 6, 609–615 (2003).

  20. Makino, H. & Komiyama, T. Learning enhances the relative impact of top-down processing in the visual cortex. Nat. Neurosci. 18, 1116–1122 (2015).

  21. Poort, J. et al. Learning enhances sensory and multiple non-sensory representations in primary visual cortex. Neuron 86, 1478–1490 (2015).

  22. Jurjut, O., Georgieva, P., Busse, L. & Katzner, S. Learning enhances sensory processing in mouse V1 before improving behavior. J. Neurosci. 37, 6460–6474 (2017).

  23. Witter, M. P. et al. Cortico-hippocampal communication by way of parallel parahippocampal-subicular pathways. Hippocampus 10, 398–410 (2000).

  24. Wang, Q., Gao, E. & Burkhalter, A. Gateways of ventral and dorsal streams in mouse visual cortex. J. Neurosci. 31, 1905–1918 (2011).

  25. Moser, E. I., Kropff, E. & Moser, M.-B. Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci. 31, 69–89 (2008).

  26. Grieves, R. M. & Jeffery, K. J. The representation of space in the brain. Behav. Processes 135, 113–131 (2017).

  27. Eichenbaum, H. Time cells in the hippocampus: a new dimension for mapping memories. Nat. Rev. Neurosci. 15, 732–744 (2014).

  28. Cushman, J. D. et al. Multisensory control of multimodal behavior: do the legs know what the tongue is doing? PLoS One 8, e80465 (2013).

  29. Aronov, D. & Tank, D. W. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron 84, 442–456 (2014).

  30. Chen, G., King, J. A., Lu, Y., Cacucci, F. & Burgess, N. Spatial cell firing during virtual navigation of open arenas by head-restrained mice. eLife 7, e34789 (2018).

  31. Madisen, L. et al. Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance. Neuron 85, 942–958 (2015).

  32. Wekselblatt, J. B., Flister, E. D., Piscopo, D. M. & Niell, C. M. Large-scale imaging of cortical dynamics during sensory perception and behavior. J. Neurophysiol. 115, 2852–2866 (2016).

  33. Steinmetz, N. A. et al. Aberrant cortical activity in multiple GCaMP6-expressing transgenic mouse lines. eNeuro https://doi.org/10.1523/ENEURO.0207-17.2017 (2017).

  34. Ayaz, A., Saleem, A. B., Schölvinck, M. L. & Carandini, M. Locomotion controls spatial integration in mouse visual cortex. Curr. Biol. 23, 890–894 (2013).

  35. Ratzlaff, E. H. & Grinvald, A. A tandem-lens epifluorescence macroscope: hundred-fold brightness advantage for wide-field imaging. J. Neurosci. Methods 36, 127–137 (1991).

  36. Carandini, M. et al. Imaging the awake visual cortex with a genetically encoded voltage indicator. J. Neurosci. 35, 53–63 (2015).

  37. Kalatsky, V. A. & Stryker, M. P. New paradigm for optical imaging: temporally encoded maps of intrinsic signal. Neuron 38, 529–545 (2003).

  38. Yang, Z., Heeger, D. J. & Seidemann, E. Rapid and precise retinotopic mapping of the visual cortex obtained by voltage-sensitive dye imaging in the behaving monkey. J. Neurophysiol. 98, 1002–1014 (2007).

  39. Pologruto, T. A., Sabatini, B. L. & Svoboda, K. ScanImage: flexible software for operating laser scanning microscopes. Biomed. Eng. Online 2, 13 (2003).

  40. Pachitariu, M. et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Preprint at https://www.biorxiv.org/content/early/2017/07/20/061507 (2016).

  41. Peron, S. P., Freeman, J., Iyer, V., Guo, C. & Svoboda, K. A cellular resolution map of barrel cortex activity during tactile behavior. Neuron 86, 783–799 (2015).

  42. Dipoppa, M. et al. Vision and locomotion shape the interactions between neuron types in mouse visual cortex. Neuron 98, 602–615.e8 (2018).

  43. Rossant, C. et al. Spike sorting for large, dense electrode arrays. Nat. Neurosci. 19, 634–641 (2016).

  44. Kadir, S. N., Goodman, D. F. M. & Harris, K. D. High-dimensional cluster analysis with the masked EM algorithm. Neural Comput. 26, 2379–2394 (2014).

Acknowledgements

We thank N. Burgess and B. Haider for helpful discussions, and C. Reddy, S. Schroeder, and M. Krumin for help with experiments. This work was funded by a Sir Henry Dale Fellowship, awarded by the Wellcome Trust/Royal Society (grant 200501) to A.B.S., EPSRC PhD award F500351/1351 to E.M.D., Human Frontier Science Program and EC Horizon 2020 grants to J.F. (grant 709030), the Wellcome Trust (grants 205093 and 108726) to M.C. and K.D.H., and the Simons Collaboration on the Global Brain (grant 325512) to M.C. and K.D.H. M.C. holds the GlaxoSmithKline/Fight for Sight Chair in Visual Neuroscience.

Reviewer information

Nature thanks M. Mehta and the other anonymous reviewer(s) for their contribution to the peer review of this work.

Author information

Contributions

All authors contributed to the design of the study. A.B.S. carried out the electrophysiology experiments and E.M.D. the imaging experiments; A.B.S., E.M.D. and J.F. analysed the data. All authors wrote the paper.

Corresponding author

Correspondence to Aman B. Saleem.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Design of virtual environment with two visually matching segments.

a, The virtual corridor had four prominent landmarks. Two visual patterns (grating and plaid) were repeated at two positions, 40 cm apart, to create two visually matching segments in the room, from 10 cm to 50 cm and from 50 cm to 90 cm (red and blue bars in the left panel), as illustrated in the right panel. b, Example screenshots of the right visual field displayed in the environment when the animal is at different positions. Each row displays screen images at positions approximately 40 cm apart.

Extended Data Fig. 2 Spatial averaging of visual cortical activity confirms the difference in response between visually matching locations.

a, Mean response of V1 neurons as a function of the distance from the peak response location (2,422 cells with peak response between 15 and 85 cm along the corridor). To ensure that the average captured reliable, spatially specific responses, the peak response location for each cell was estimated only from odd trials, whereas the mean response was computed only from even trials. b, Population average of responses shown in a. Lower values of the side peaks compared to the central peak indicate a strong preference of V1 neurons for one segment of the corridor over the other visually matching segment (40 cm from the peak response).
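
The odd/even cross-validation behind this average can be summarized with a short sketch. Here `responses` (a trials × position-bins array for one cell) and the 1-cm bin size are illustrative assumptions, not the authors’ code.

```python
import numpy as np

def peak_aligned_profile(responses, bin_cm=1.0):
    """Estimate the peak location from odd trials, then average even trials
    as a function of distance from that peak (cf. Extended Data Fig. 2a)."""
    odd_mean = responses[0::2].mean(axis=0)    # odd trials: locate the peak
    even_mean = responses[1::2].mean(axis=0)   # even trials: measure the profile
    peak_bin = int(np.argmax(odd_mean))
    distance_cm = (np.arange(responses.shape[1]) - peak_bin) * bin_cm
    return distance_cm, even_mean

# Averaging these aligned profiles across cells on a common distance axis
# gives a population curve like Extended Data Fig. 2b.
```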

Extended Data Fig. 3 Simulation of purely visual responses to position in VR.

af, Responses of six simulated neurons with purely visual responses, produced by a complex cell model with varying spatial frequency, orientation, or receptive field location. The images on the left of each panel show the quadrature pair of complex cell filters; traces on the right show the cell’s simulated response as a function of position in the virtual environment. Simulation parameters matched those that are commonly observed in mouse V1 (spatial frequency: 0.04, 0.05, 0.06 or 0.07 cycles per degree; orientation: uniform between 0° and 179° but with twice as many cells for cardinal orientations; receptive field positions 40°, 50°, 70° and 80°, similar to the V1 neurons we considered for analysis. In rare cases (as in f) when the receptive fields do not match the features of the environment, there is little selectivity along the corridor. These cases lead to lower spatial modulation ratios. g, The spatial modulation ratios calculated for the complex cell simulations are close to 1 (0.97 ± 0.17), and different from the ratios calculated for V1 neurons. Black curve is the same as in Fig  1g.
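
A minimal sketch of such a quadrature-pair (energy-model) complex cell is given below; the filter construction, patch size and degrees-per-pixel conversion are illustrative assumptions, not the simulation code used in the study.

```python
import numpy as np

def gabor_pair(size_pix, sf_cpd, ori_deg, deg_per_pix, sigma_deg=10.0):
    """Even/odd Gabor filters with a given spatial frequency (cycles/deg) and orientation."""
    half = size_pix // 2
    y, x = np.mgrid[-half:half, -half:half] * deg_per_pix
    xr = x * np.cos(np.deg2rad(ori_deg)) + y * np.sin(np.deg2rad(ori_deg))
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_deg ** 2))
    return (envelope * np.cos(2 * np.pi * sf_cpd * xr),
            envelope * np.sin(2 * np.pi * sf_cpd * xr))

def complex_cell_response(scene, even, odd):
    """Phase-invariant energy response to one grayscale scene patch."""
    return np.hypot((scene * even).sum(), (scene * odd).sum())

# A simulated tuning curve along the corridor is then
#   profile = [complex_cell_response(scene_at(pos), even, odd) for pos in positions]
# where scene_at(pos) is a hypothetical helper that renders the corridor at each position.
```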

Extended Data Fig. 4 The spatial modulation of V1 responses is not due to end-of-corridor visual cues.

a, Diagram of the 200-cm virtual corridor, containing the same grating and plaid as the regular corridor, repeated four times instead of twice. b, Visual scenes from locations within the first 100 cm of the extended corridor, separated by 40 cm, are visually (pixel-to-pixel) identical. c, Cumulative distribution of the spatial modulation ratio across the two mice that were placed in the long corridor (7 sessions, 2 mice; median ± m.a.d.: 0.62 ± 0.26; 1,044 neurons, black line). Grey line shows the spatial modulation ratio predicted by the non-spatial model (which predicts activity from the visual scene, trial onset and offset, speed, reward, pupil size and displacement from the central position of the eye; see Extended Data Fig. 7, non-spatial model). The two distributions are significantly different (two-sided Wilcoxon rank sum test; P < 10^−14).
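
As a rough sketch, this comparison of spatial modulation ratios can be reproduced as follows. The exact ratio definition used here, the response at the visually matching position 40 cm from the peak divided by the peak response, is an assumption based on the legend rather than a quotation of the authors’ code.

```python
import numpy as np
from scipy.stats import ranksums

def spatial_modulation_ratio(profile, positions_cm, offset_cm=40.0):
    """Response at the visually matching location one period away, relative to the peak."""
    peak_idx = int(np.argmax(profile))
    peak_pos = positions_cm[peak_idx]
    if peak_pos + offset_cm <= positions_cm[-1]:
        match_pos = peak_pos + offset_cm
    else:
        match_pos = peak_pos - offset_cm
    match_idx = int(np.argmin(np.abs(positions_cm - match_pos)))
    return profile[match_idx] / profile[peak_idx]

# Compare measured ratios against ratios predicted by the non-spatial model:
#   stat, p = ranksums(measured_ratios, model_ratios)   # two-sided by default
```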

Extended Data Fig. 5 The spatial modulation of V1 responses cannot be explained by speed.

a, Speed–position plots for all single-trial trajectories in three example recording sessions. b, Response profile of example V1 cells in each session as a function of position in the corridor, stratified for three speed ranges corresponding to the shading bands in a. c, Two-dimensional response profiles of the same example neurons showing activity as a function of position and running speed for speeds higher than 1 cm s^−1.
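
The position × speed maps in panel c can be built by straightforward binning; the bin edges below are illustrative assumptions.

```python
import numpy as np

def position_speed_map(position_cm, speed_cms, activity,
                       pos_edges=np.linspace(0, 100, 51),
                       spd_edges=np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])):
    """Mean activity in each (position, speed) bin, keeping speeds above 1 cm/s."""
    keep = speed_cms > 1.0
    counts, _, _ = np.histogram2d(position_cm[keep], speed_cms[keep],
                                  bins=[pos_edges, spd_edges])
    summed, _, _ = np.histogram2d(position_cm[keep], speed_cms[keep],
                                  bins=[pos_edges, spd_edges],
                                  weights=activity[keep])
    with np.errstate(invalid='ignore', divide='ignore'):
        return summed / counts          # NaN where a bin was never visited
```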

Extended Data Fig. 6 The spatial modulation of V1 responses cannot be explained by reward, pupil position or diameter.

a, Normalized response as a function of position in the virtual corridor, for sessions without reward (1,173 neurons). Data come from two out of four mice that ran the environment without reward (8 sessions, 2 mice). Responses in even trials (right) are ordered according to the position of maximum activity measured in odd trials (left). b, Distribution of spatial modulation ratio for unrewarded sessions (8 sessions; median ± m.a.d. = 0.57 ± 0.37; cyan) and for modelled ratios obtained from the non-spatial model on the same sessions (black, see Extended Data Fig. 7). The two distributions are significantly different (two-sided Wilcoxon rank sum test; P < 10^−8). c, Pupil position as a function of location in the virtual corridor, for an example session with steady eye position. Sessions with steady eye positions were defined as those with no significant difference in eye positions between visually matching positions 40 cm apart (with unpaired t-test, P < 0.01). Thin red curves: position trajectories on individual trials; thick curves, average. Top and bottom panels: x and y coordinates of the pupil, respectively. d, Distribution of spatial modulation ratio for sessions with steady eye position (10 sessions; median ± m.a.d. = 0.63 ± 0.33; 1,154 neurons, red) and for modelled ratios obtained from the non-spatial model on the same sessions (black). The two distributions are significantly different (two-sided Wilcoxon rank sum test; P < 10^−14). e, Pupil size as a function of position for an example session with steady pupil size. f, Distribution of spatial modulation ratio for sessions with steady pupil size (5 sessions; median ± m.a.d. = 0.63 ± 0.33; 1,069 neurons, red) and for modelled ratios obtained from the non-spatial model on the same sessions (black). The two distributions are significantly different (two-sided Wilcoxon rank sum test; P < 10^−13).
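
The “steady eye position” criterion described above can be expressed as a short test; pairing the two visually matching segments (10–50 cm versus 50–90 cm) and the variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def eye_position_is_steady(position_cm, pupil_coord, alpha=0.01):
    """True if pupil position does not differ significantly between the two
    visually matching segments of the corridor (unpaired t-test)."""
    first = pupil_coord[(position_cm >= 10) & (position_cm < 50)]
    second = pupil_coord[(position_cm >= 50) & (position_cm < 90)]
    _, p = ttest_ind(first, second)
    return p >= alpha
```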

Extended Data Fig. 7 Observed values of spatial modulation ratio can be modelled only using spatial position.

a, b, We constructed three models to predict the activity of individual V1 neurons from successively larger sets of predictor variables. In the simplest, the visual model, activity is required to depend only on the visual scene visible from the mouse’s current location, and is thus constrained to be a function of space that repeats in the visually matching section of the corridor. The second, non-spatial model, also includes modulation by behavioural factors that can differ within and across trials: speed, reward times, pupil size, and eye position. Because these variables can differ between the first and second halves of the track, modelled responses need no longer be exactly symmetrical; however, this model does not explicitly use space as a predictor. The final, spatial model, extends the previous model by also allowing responses to the two matching segments to vary in amplitude, thereby explicitly including space as a predictor. Example single-trial predictions are shown as a function of time in a, together with measured fluorescence. Spatial profiles derived from these predictions are shown in b. c, Cumulative distributions of spatial modulation ratio for the three models (purple). For comparison, the black curve shows the ratio of peaks derived from the data (even trials) (median ± m.a.d.: visual model, 0.99 ± 0.03; P < 10^−40, two-sided Wilcoxon rank sum test; non-spatial model, 0.83 ± 0.18; P < 10^−40; spatial model, 0.60 ± 0.27; P = 0.09, n = 2,422 neurons). d, Measured spatial modulation ratio versus predictions of the three models. Each point represents a cell; the red ellipse represents the best-fit Gaussian, and the dotted line indicates its slope. The purely visual model (top) does poorly, and is improved only slightly by adding predictions from speed, reward, pupil size, and eye position (middle). Adding an explicit prediction from space provides a much better match to the data (bottom). r, Pearson’s correlation coefficient, n = 2,422 neurons; θ, orientation of the major axis of the fitted ellipse.
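
The summary statistics of panel d (Pearson’s r between measured and predicted ratios, and the orientation θ of the major axis of the fitted ellipse) can be reconstructed roughly as below; deriving θ from the covariance eigenvectors is an assumption about how the ellipse was fitted, not the authors’ stated method.

```python
import numpy as np
from scipy.stats import pearsonr

def compare_ratios(predicted, measured):
    """Correlation between predicted and measured spatial modulation ratios,
    plus the orientation of the major axis of a Gaussian fit to the scatter."""
    r, p = pearsonr(predicted, measured)
    cov = np.cov(np.vstack([predicted, measured]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]          # major axis of the fitted ellipse
    theta_deg = np.degrees(np.arctan2(major[1], major[0]))
    return r, p, theta_deg
```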

Extended Data Fig. 8 Behavioural performance in the task.

a, Illustration of the virtual reality environment with four prominent landmarks, a reward zone, and the zones that define trial types: early, correct and late. b, Percentage of trials during which the animal makes behavioural errors, by licking either too early or too late at three different contrast levels: 18% (low), 60% (medium) or 72% (high). c, Illustration of performance on all trials of one example recording session. Each row represents a trial, black dots represent positions where the animal licked, and cyan dots indicate the delivery of a water reward. Coloured bars indicate the outcome of the trial (red, early; green, correct; blue, late). df, Successful performance relies on vision. d, The mouse did not lick when the room was presented at zero contrast. e, On some trials, we changed the gain between the animals’ physical movement and movement in the virtual environment, thus changing the distance to the reward zone (high gain resulting in shorter distance), while visual cues remained in the same place. When plotted as a function of the distance run, the licks of the animal shifted, indicating that the animal was not relying simply on the distance travelled from the beginning of the corridor. f, If the position of the visual cues was shifted forward or back (high or low room length (RL)), the lick position shifted accordingly, indicating that the animals relied on vision to perform the task.

Extended Data Fig. 9 Comparison of response properties between V1 and CA1 neurons and correlation of V1 and CA1 errors in the two halves of the environment.

a, Cumulative distribution of the stability of V1 and CA1 response profiles. Tuning stability (the stability of responses) was computed as the correlation between the spatial responses measured from the first half and the second half of the trials. V1 and CA1 responses were highly stable within each recording session: the tuning stability was >0.7 for more than 60% of neurons in both V1 and CA1. b, Cumulative distribution of the Skaggs information (bits per spike) carried by V1 and CA1 neurons. Note that while V1 and CA1 neurons had comparable amounts of spatial information, this does not suggest that V1 represents space as strongly as CA1, because the Skaggs information metric mixes the influences of vision and spatial modulation. c, Normalized firing rate averaged across V1 or CA1 neurons as a function of distance from the peak response (similar to Extended Data Fig. 2b). Unlike CA1, the mean response averaged from V1 neurons shows a second peak at ±40 cm, consistent with the repetition of the visual scene. d, e, Pearson’s correlation between position errors estimated from V1 and CA1 populations in the first half of the corridor (shown in d). Each point represents a behavioural session (n = 8 sessions); x-axis values represent measured correlations; y-axis values represent correlations calculated after having shuffled the data within the times where the speed was similar (similar to Fig. 2h). The occurrence of error correlations in the unshuffled data indicates that these correlations are not due to rewards (which did not occur in this half of the maze) or licks (which were rare, and the 100-ms periods surrounding the few that occurred were removed from analysis). The significance of the difference between the measured and shuffled correlations was calculated using a two-sided two-sample t-test. f, Similar to e for the second half of the corridor.
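
Both metrics in panels a and b have standard formulations; the sketch below assumes binned firing rates and occupancy for one cell, and is not the authors’ code.

```python
import numpy as np

def tuning_stability(responses):
    """Correlation between spatial profiles from the first and second halves of trials."""
    half = responses.shape[0] // 2
    return np.corrcoef(responses[:half].mean(axis=0),
                       responses[half:].mean(axis=0))[0, 1]

def skaggs_information(rate_per_bin, occupancy_per_bin):
    """Spatial information sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean), in bits/spike."""
    p = occupancy_per_bin / occupancy_per_bin.sum()
    r_mean = (p * rate_per_bin).sum()               # overall mean rate
    ok = rate_per_bin > 0
    return np.sum(p[ok] * (rate_per_bin[ok] / r_mean) * np.log2(rate_per_bin[ok] / r_mean))
```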

Extended Data Fig. 10 Position decoded from CA1 activity helps to predict position decoded from V1 activity (and vice versa).

a, To test whether the positions encoded in V1 and CA1 populations are correlated with each other beyond what would be expected from a common influence of other spatial and non-spatial factors, we used a random forests decoder (TreeBagger implementation in MATLAB) to predict V1 or CA1 decoded positions from different predictors. We then tested whether the model prediction was further improved when we added the position decoded from the other area as an additional predictor (that is, using the positions decoded from CA1 to predict V1 decoded positions and vice versa). b, Adding CA1 decoded position as an additional predictor improved the prediction of V1 decoded positions in every recording session (that is, reduced the prediction errors). V1 and CA1 decoded positions are thus correlated with each other beyond what can be expected from a common contribution of position, speed, licks and reward to V1 and CA1 responses. c, Same as b for predicting CA1 decoded position.
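
A hedged sketch of this analysis, using scikit-learn’s RandomForestRegressor in place of MATLAB’s TreeBagger, is shown below. The predictors follow the legend (position, speed, licks, reward), while the cross-validation scheme and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def prediction_error(predictors, target_decoded_position, n_trees=100):
    """Median absolute cross-validated error when predicting one area's decoded position."""
    model = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    prediction = cross_val_predict(model, predictors, target_decoded_position, cv=5)
    return np.median(np.abs(prediction - target_decoded_position))

# Compare errors with and without the other area's decoded position as a predictor:
#   base = np.column_stack([position, speed, licks, reward])
#   err_without = prediction_error(base, v1_decoded)
#   err_with = prediction_error(np.column_stack([base, ca1_decoded]), v1_decoded)
# A consistent drop in error (err_with < err_without) corresponds to panel b.
```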

Supplementary information

Supplementary Information

Details of data analysis methods and description of models.

Reporting Summary

Supplementary Video 1

Video of animal performing the task in virtual reality. Top panel: Video of mouse while performing the navigation task in the virtual reality corridor, on one of the training sessions when no recording was made. Bottom panel: Illustration of the position of the animal along the length of the corridor. The ends of the reward zone are highlighted by cyan lines. On each trial, the positions where the animal licked are highlighted by a red mark, and the position where it was rewarded by a cyan mark.

About this article

Cite this article

Saleem, A.B., Diamanti, E.M., Fournier, J. et al. Coherent encoding of subjective spatial position in visual cortex and hippocampus. Nature 562, 124–127 (2018). https://doi.org/10.1038/s41586-018-0516-1
