The Ebola crisis demonstrates once again that, despite all the posturing of politicians, it is scientists to whom the public looks in times of crisis and concern. The public still trusts scientists: a UK survey this year found that people trust scientists even if they do not always trust scientific information itself. Still, the public’s trust is fragile. Given how much scientists depend on public goodwill and the funding that flows from it, I am always surprised by how much they take that trust for granted. They can — and should — do more to protect and nurture it.

Trust in science is often discussed only in response to some scandal or controversy, such as misconduct. This is unfortunate. Such a focus on bad behaviour, equating concerns about trust with misconduct, can make scientists unwilling to discuss the issue because they feel personally criticized. As a result, they ignore or even resist calls (such as this one) to promote and improve the overall trustworthiness of research.

Mishaps that cast science and scientists in a bad light and that could undermine trust are inevitable, particularly because many fields of science are poorly understood by the wider public. It is down to scientists to identify and try to prevent such mistakes.

Things can and do go wrong in science in countless ways, owing to the complexity of its methods and technical procedures, which can make even the most innocent of mistakes exceptionally difficult to detect. Too often, scientists do not consider the need for improvements because they are content with their faith that science self-corrects. This is a bad idea: science’s ability to weed out incorrect findings is overstated.

There might once have been a time in science when there were multiple chances to ‘get it right’. That is much less true today. Modern scientific research is faster-moving and more connected, and the financial and reputational stakes are now much higher. The priority must be to try to get research right the first time, especially in biomedical fields. We cannot afford to leave the detection of problems to chance.

Simply following the rules that others set will not help scientists much either. Regulations often fail to solve the problems that give rise to them. The United States has strengthened conflict-of-interest regulations for biomedical researchers, for example, but this does nothing to address the potential for financial relationships between research sponsors and institutions to cause bias, a particularly significant shortcoming given the extent to which large universities treat their science divisions as money makers. Complying with rules also tends to fatigue the research community on the one hand and, on the other, to foster a false sense of security that things are being taken care of.

Scientists need to articulate better what makes their work deserving of the public’s trust in the first place. I hope that we can agree that research should satisfy three basic expectations: publications can consistently be relied on to inform subsequent enquiry; research is of sufficient social value to justify the expenditures that support it; and research is conducted in accordance with widely shared ethical norms. Making science more trustworthy then comes down to taking steps to ensure that those expectations are met. We need a culture that prevents and fixes mistakes not by chance, but by design. How can we create such a culture?

One of the most important steps is to recognize and identify where standards break down. We need to routinely conduct confidential surveys in individual laboratories, institutions and professional societies to assess the openness of communication and the extent to which people feel safe identifying problems in a research setting. Some research institutions, to their great credit, already conduct these kinds of assessment, but most do not. It is crucial that we start to make them the norm.

We cannot expect people to call attention to problems when it is not safe for them to do so. At present, too many research settings are unsafe in this respect: those who question the status quo can be ostracized and labelled as troublemakers. To make these settings safer, institutional leaders must be prepared to hear unwelcome news and to hold their nerve in the face of bad publicity. And they must convince staff that their desire to improve is sincere. This is easier said than done, but the alternative is silence and stifled progress.

Building on the results of these surveys, institutions should openly declare errors and near-misses. They should make public the actions they take to correct such situations, and whether those actions work.

As science becomes less bound by individual disciplines and by geography, opportunities for errors and mistakes increase. One issue that we must investigate more thoroughly is how distributing work among teams generates errors in data gathering and analysis. Unstable reagents can perform differently at different sites, for example, and a stronger emphasis on quality assurance could help us to discover and reduce any errors that result. Unlike the call for surveys, which demands institutional buy-in, research teams could direct such efforts themselves, whether or not funders or universities push them to do so.

While science frets over misconduct and the bad apples in our midst, it fails to confront the bigger problems. We must make sure that we reward the public trust in scientists with trustworthy science.