A network of pyramidal cells in the mouse cerebral cortex. Credit: Hermann Cuntz; Michael Häusser

At least half a dozen major initiatives to study the mammalian brain have sprung up across the world in the past five years. This wave of national and international projects has arisen in part from the realization that deciphering the principles of brain function will require collaboration on a grand scale.

Yet it is unclear whether any of these mega-projects, which include scientists from many subdisciplines, will be effective. Researchers with complementary skill sets often team up on grant proposals. But once funds are awarded, the labs involved tend to return to their own parts of the project and work in relative isolation.

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually. But they would be built from the ground up, with those involved encouraged to follow their own shared interests rather than to respond to the strictures of funding sources or external directives.

This may seem obvious, but such collaboration is stymied by technical and sociological barriers. And the conventional strategies — constructing collaborations top-down or using funding strings to incentivize them — do not overcome those barriers.

The European Commission's Human Brain Project (HBP), which launched in 2013, involves more than 100 laboratories and the investment of at least US$49 million per year. Following a reorganization, the HBP now emphasizes tool development, leaving the collective effort of gathering and analysing data to other initiatives planned or under way, such as Japan's Brain/MINDS project, the US BRAIN Initiative or China's proposed programme. Questions have been raised about whether any of these will amount to more than the sum of their parts1,2.

Of all the current big-neuroscience initiatives, perhaps the most effective has been that of the Allen Institute for Brain Science, a private non-profit organization in Seattle, Washington. More than 100 scientists working together at the institute have produced useful resources, including brain-wide maps of gene expression in mouse and human and, most recently, maps of neural activity in the mouse visual cortex3. But it is not clear how the Allen Institute's model — industrial processes in a centralized corporate organizational structure — could be applied to geographically distributed collaborations targeting more complex problems.

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives1,2,4, unlike high-energy physics, mathematics, astronomy or genetics.

In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour), does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and unrealistic, even on a ten-year timescale.

The challenge

Several advances over the past decade have made fundamental problems, such as how we recognize objects or make decisions, vastly more tractable.

Researchers can now monitor and manipulate patterns of activity in large neuronal ensembles, thanks to new technologies in molecular engineering, microelectronics and computing. For example, a combination of advanced optical imaging and optogenetics can now read patterns of activity from populations of neurons and write patterns back into them5. It is also possible to relate firing patterns to the biology of the neurons being recorded, including their genetics and connectivity.

The industrial approaches used to produce neuroscience resources at the institute founded by philanthropist Paul Allen may not work elsewhere. Credit: John McDonough/Sports Illustrated/Getty

Other tools coming online include powerful statistical techniques for analysing data and advances in machine learning. There is also now a rich set of theories stemming from progress in fields such as statistical physics and computer science. Computational approaches have been used to understand, for instance, how neurons in the retina and visual cortex encode information about visual scenes6,7.
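
For instance, a linear-nonlinear (LN) model, one of the simplest encoding models applied to retinal data, can be simulated and characterized in a few lines. This is a minimal sketch; the filter shape, rates and parameters below are invented for illustration and are not taken from refs 6,7.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-nonlinear (LN) encoding model: white-noise stimulus,
# biphasic temporal filter, exponential nonlinearity, Poisson spiking.
T, L = 10_000, 20                                # time bins, filter length
stimulus = rng.standard_normal(T)
t = np.arange(L)
filt = np.exp(-t / 5.0) * np.sin(t / 2.0)        # invented filter shape

drive = np.convolve(stimulus, filt, mode="full")[:T]  # linear stage
rate = 0.1 * np.exp(drive)                            # static nonlinearity
spikes = rng.poisson(rate)                            # spike counts per bin

# The spike-triggered average recovers the filter (up to scale) for
# white-noise stimuli -- the classic reverse-correlation result.
sta = np.array([np.dot(spikes[L:], stimulus[L - lag:T - lag])
                for lag in range(L)]) / spikes[L:].sum()
```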

But the experiments now possible are increasingly resource-intensive. The neuronal activity driving a simple behaviour, such as a mouse navigating a maze, could involve the coordinated activity of several hundred brain areas. Mapping the whole picture implies recording from many neurons in each of those areas. Yet a typical 1–3-year study involves recording from relatively small populations of neurons in just a single brain area. And, as we will discuss, these data cannot at present be combined across labs.
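
A rough back-of-envelope calculation shows the scale of the gap; every number here is an illustrative assumption, not a measurement.

```python
# All numbers are invented for illustration.
areas = 300             # brain areas possibly engaged by the task
neurons_per_area = 500  # sample needed to characterize each area
per_study = 200         # neurons a typical single-lab study records, in one area

single_lab_studies_needed = areas * neurons_per_area // per_study
print(single_lab_studies_needed)  # 750 studies' worth of recordings
```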

Most new approaches for the collection and analysis of neural data require training and expertise across a range of domains — from genetics to optics to computational neuroscience. As in most disciplines, neuroscientists in one laboratory — let alone one scientist — rarely hold the entire set of requisite skills. Moreover, because labs do not normally share raw data, the fruits of difficult experiments cannot be fully exploited by groups with complementary expertise.

In short, a generation ago, neuroscientists were largely limited by theory and tools. Today, the bigger problem is effectively harnessing, as a community, what is already available.

A solution

We propose that researchers join forces in 'meso-scale' collaborations of around 20 principal investigators and between 50 and 100 researchers to conduct experiments that are beyond the reach of single labs. Even at this scale, there will be many hurdles to clear. Specifically, an effective collaboration would need to do the following.

Focus on a single brain function. The downfall of many neuroscience collaborations — and especially of mega-projects — is setting goals that are too broad. The common goal has to be ambitious, yet reachable within, say, ten years, and well defined. A whole-brain theory of one brain function — a single behaviour — could meet those requirements. If a collaboration were largely limited to labs interested in the same behaviour — such as courtship in fruit flies, or foraging in mice — clear, shared objectives could be defined at the start. The labs would apply a range of recording and manipulation techniques to the same common behavioural task, allowing the functional data to be seamlessly combined.

To assemble a team of experts on such a focused problem, a collaboration would need to incorporate participants distributed throughout the world. In the past, physical proximity was indispensable for effective interaction. Now, online collaboration tools — such as Slack, GitHub or Google Docs — have changed the game. Scientists must harness these to plan experiments, make decisions, discuss problems and more. For more specific needs, new tools may need to be invented.

Combine experimentalists and theorists. There is a growing consensus that theory is indispensable for grappling with brain complexity. In particular, theory is essential for making and testing predictions about how observations made at the cellular or circuit level relate to those made at the behavioural level.

Yet the challenges of implementing modern experimental approaches and of getting to grips with the mathematical language of theory mean that neuroscientists still tend to become either experimentalists or theorists. Moreover, whole labs are typically either experimental or theoretical. Theorists and experimentalists often meet at conferences to share ideas, but they rarely converge on the design and interpretation of experiments.

So, concrete steps are required to catalyse more meaningful interactions, such as embedding theory PhD students and postdocs in experimental labs and vice versa.

Standardize tools and methods. Neuroscientists frequently live on the technological 'bleeding edge', building bespoke, customized tools. This do-it-yourself approach has allowed innovators to get ahead of the competition, but it has hampered the standardization of methods that is essential to making experiments efficient and replicable.

Remarkably, it is standard practice for each lab to custom-engineer all manner of apparatus, from microscopes and electrodes to the computer programs used to analyse data. Thousands of labs worldwide use the calcium sensor GCaMP, for example, for imaging neural activity in vivo. Yet neither the microscopes used for GCaMP imaging nor the algorithms used to analyse the resulting data sets have been standardized.
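
To make this concrete, consider one routine step: converting a raw GCaMP fluorescence trace into a normalized ΔF/F signal. Labs currently implement this differently, with different baselines and parameters. A shared, versioned routine along the following lines is the kind of artefact a collaboration could agree on; the function name and default parameters are invented for illustration, not an existing standard.

```python
import numpy as np
from scipy.ndimage import percentile_filter

def delta_f_over_f(trace: np.ndarray, window: int = 300,
                   baseline_pct: float = 8.0) -> np.ndarray:
    """Convert a raw fluorescence trace to dF/F.

    The baseline F0 is a running low percentile of the trace.
    Window size and percentile are illustrative defaults; no
    community-agreed values exist, which is exactly the problem.
    """
    f0 = percentile_filter(trace, percentile=baseline_pct, size=window)
    return (trace - f0) / f0
```

If every lab in a collaboration called the same pinned version of such a routine, downstream analyses could be compared and pooled directly.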

The data sets generated by a functional neuroscience experiment are large. They can also be complex and multimodal in ways that, say, genomic data might not be, encompassing recordings of activity, behavioural measurements, responses to perturbations and subsequent anatomical analyses. Researchers have no agreed formats for integrating these different types of information. Nor are there standard systems for curating, uploading and hosting highly multimodal data. Recently, initiatives such as Neurodata Without Borders (www.nwb.org) have finally started to address this.
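
The approach Neurodata Without Borders takes is to package heterogeneous streams into a single self-describing file. Here is a minimal sketch using the pynwb reference library; the session details and toy arrays are invented for illustration.

```python
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Invented session metadata and toy data, for illustration only.
nwbfile = NWBFile(
    session_description="mouse maze navigation, example session",
    identifier="lab42-mouse07-session12",
    session_start_time=datetime(2024, 6, 1, tzinfo=timezone.utc),
)

# Neural and behavioural streams sit side by side in one file,
# each with explicit units and sampling rates (time is axis 0).
nwbfile.add_acquisition(TimeSeries(
    name="population_activity", data=np.random.rand(3000, 100),
    unit="a.u.", rate=30.0, starting_time=0.0))
nwbfile.add_acquisition(TimeSeries(
    name="running_speed", data=np.random.rand(3000),
    unit="cm/s", rate=30.0, starting_time=0.0))

with NWBHDF5IO("session12.nwb", "w") as io:
    io.write(nwbfile)
```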

Worse, neuroscientists lack standardized vocabularies for describing the experimental conditions that affect brain and behavioural functions. Such a vocabulary is needed to properly annotate functional neural data. For instance, even small differences in when a water drop is released can affect how a mouse's brain processes this event, but there is no standard way to specify such aspects of an experiment.
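
In miniature, such a vocabulary might look like the following machine-readable trial annotation, with explicit units and timing fields. Every field name here is invented; the point is precisely that no agreed schema exists.

```python
# Hypothetical annotation of a single trial; all field names are invented.
trial_annotation = {
    "task": "cued_water_reward",
    "cue": {"modality": "auditory", "frequency_hz": 8000, "duration_ms": 200},
    "reward": {
        "substance": "water",
        "volume_ul": 4.0,
        "delay_from_cue_ms": 1500,  # small shifts here can change neural responses
        "delivery_jitter_ms": 50,
    },
    "subject": {"species": "Mus musculus", "water_restricted": True},
}
```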

Share data. To maximize the effectiveness of a collaboration, all the data collected would need to be communal across the entire group. This may seem obvious, but most of the data currently being collected using high-throughput imaging and recording techniques are still effectively inaccessible to anyone outside the labs doing the studies. Journal requirements to make data public and efforts to build public databases and standardize formats have had little effect across much of neuroscience.

Beyond standardization problems, there are also substantial disincentives to data sharing that must be addressed by grass-roots collaborations. Sharing can yield big common benefits, allowing data sets from multiple laboratories to be combined and theorists to test their ideas. But a lab risks losing out to competitors if its generosity is unreciprocated.

An effective collaboration must create the technical means to share, and engender a sphere of trust within which it is safe to do so. The principle of sharing — of data, resources and plans — would need to be agreed as a precondition of joining a collaboration, and effectively enforced.

Assign credit in new ways. Like many other areas of biomedicine, neuroscience is dominated by a competitive and individualistic culture. Indeed, it is largely this culture that hinders standardization and cooperation. The Human Genome Project opened up a more cooperative attitude towards data in the field of genetics8 that has been reverberating ever since9. But the intricacy of what we are proposing — the complex coordination of experiments and immediate sharing of raw data — goes well beyond most open-science norms and will be challenging.

To jump-start a culture of collaboration along the lines we are envisaging, groups of established scientists, who have less career pressure, could lead the way. The graduate students and postdocs involved in collaborations might need other ways to earn recognition than the current standard — being the first author on a paper. Neuroscience can take inspiration from fields such as particle physics, where these issues have been faced for years. And initiatives such as the Contributor Roles Taxonomy are revamping how contributions to research are defined and recognized (casrai.org/credit).
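
As an illustration of the direction such taxonomies point in, a contribution record for a collaborative paper could be stored in machine-readable form. The contributor names below are invented; the role labels are categories from the CRediT taxonomy itself.

```python
# Illustrative contribution record; contributor names are invented,
# role labels come from the CRediT (Contributor Roles) taxonomy.
contributions = {
    "A. Experimentalist": ["Investigation", "Data curation", "Methodology"],
    "B. Theorist": ["Formal analysis", "Software", "Writing - review & editing"],
    "C. Student": ["Investigation", "Visualization", "Writing - original draft"],
}
```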

Small steps

Elements of our proposal have been discussed before, and the problems it addresses are not new. So why is now the time to finally make it work? First, the forces that drive neuroscientists apart, especially competition for resources, are stronger than ever. Second, advances in technology for information sharing, such as cloud services, are only now making distributed collaboration feasible. Third, because of the wealth of new experimental and theoretical tools, the potential benefits of collaboration to many may at last outweigh the risks to individuals.

If grounded in the same principles that make small-scale collaboration so successful — including equality and transparency — medium-sized collaborations would be fundamentally different from the much looser networks and top-down initiatives typically associated with big science. Doubts might be raised about how far such groups could be scaled up. But just such a distributed model, avoiding a central command-and-control structure, characterizes one of the largest and most effective mega-projects10 — the ATLAS collaboration at CERN, Europe's particle-physics lab near Geneva, Switzerland.

Effective large-scale neuroscience initiatives may be the goal, but they cannot be created from scratch even with vast funds. Yet there are many useful things that funders can do to help. First, support multiple medium-sized collaborations that set concrete goals (rather than crowning single mega-projects) and keep renewing those that demonstrate progress. Second, encourage investigators to combine individual grants to join collaborations. Third, underwrite the less sexy but crucial aspects of collaboration, such as management and support personnel, that are otherwise difficult to fund. Fourth, fund the development of collaborative-science software, which would offer great returns on investment. Fifth, study, experiment with and support new ways of assigning credit that promote cooperation.

Small but carefully considered steps, not grand gestures, will ensure that much-needed neuroscience collaborations take root and flourish.