Nick Petrić Howe
Welcome to Nature’s Take. This is a show where we dive deep into the stories that matter in science. In each episode, we pull some of Nature's finest into one room, present them with a topic and see where the discussion leads us. In this episode, we're diving into registered reports. And joining me to take on quite possibly our nerdiest topic yet, are two voices listeners to the Nature Podcast may already know: Mary Elizabeth Sutherland...
Mary Elizabeth Sutherland
Hi, I'm Mary Elizabeth Sutherland. I'm a Senior Editor at Nature and I handle papers in the Behavioural and Social Sciences and Cognitive Neuroscience.
Nick Petrić Howe
...and Fede Levi...
Federico Levi
Hey, thanks for having me. My name is Federico Levi, and I'm a Research Editor handling Fundamental Physics.
Nick Petrić Howe
So, Nature has just started offering researchers the option to publish their findings in a new format: as a registered report. Now, the most fundamental difference between a conventional paper and a registered report is that the plan for the research, the methodology, is submitted to a journal and peer reviewed before the actual research is carried out, whereas conventional papers are submitted once the bulk of the research and the analyses are completed. We have a whole episode to discuss the format, why Nature has taken this step and what it might mean for science. But before we do, in a Nature’s Take first, we have an outside voice to get us started. Earlier this week, I called up neuroscientist Chris Chambers, one of the researchers who created, pioneered and is championing registered reports, to get his overview of what the format is and why he thinks it's needed.
Chris Chambers
So when you do a piece of scientific research in virtually any area, by the time you get to the end of that project, that's the point where you write your papers. Then that gets submitted, with results and conclusions, for peer review at a journal, where it goes through the regular review process. And one of the problems with that procedure is the potential for bias based upon the results of the work that we do. When the publishability of our work is assessed by editors, by reviewers and by our peers based not just upon the question and the methodology, but also upon the results we obtain, that leads to publication bias, which distorts our impression of the evidence base, the knowledge in a field. Now, registered reports try to fix this by changing the way the peer review process works. Rather than doing all of the peer review at the end of a study, you split the peer review process in half, and you do part of that review before the research is undertaken, when it's at the proposal stage. Then, based upon an in-depth scientific evaluation of the proposal, the journal or platform issues what we call an in-principle acceptance, or IPA, in which the journal or platform essentially guarantees that they'll publish the final research, regardless of how the results turn out. So the idea here is that we eliminate the evaluation of results from the evaluation of science; we separate those two things out so that we can select based on quality, rather than based on which results were positive or significant, or however else we might like to see them.
Mary Elizabeth Sutherland
Yeah, I'm so glad that you asked him to introduce this because that was an incredible overview of what they are. That's great. Thank you, Chris.
Nick Petrić Howe
Now, Chris has been advising Nature in its adoption of registered reports. And Mary Elizabeth, you've been working closely on this too? What are your perspectives on this format?
Mary Elizabeth Sutherland
I think it's a really great format. So, I started working with registered reports when I was at Nature Human Behaviour, which was a journal that was launched with the possibility of considering registered reports. And the idea, again, as Chris said, is really that you are upholding the rigour of research. Instead of focusing on what the results are, you are focusing on the question, which is a really important part of research. And what's interesting is that, as registered reports have grown in popularity, we've started to see that there are many more null results out there. So there are many more times when the hypothesis isn't confirmed. And this is one of the issues in science: we know that we tend to choose the flashy answers, but the flashy answers often aren't true, and can end up just being bad for science, right? They waste the time of grad students who try to follow up on something that isn't actually true, but then can't get published. So I think that registered reports are a really great option for confirmatory research, that is, for research that is testing a specific hypothesis, and that they really raise the quality of evidence, because they require that you have evidence for the null.
Nick Petrić Howe
Right, and this is what you sort of learn about in high school science, the null hypothesis: in other words, roughly, that there is no meaningful difference you can measure in your experiment. But experiments don't always reach that bar, right, to accept or reject the null?
Mary Elizabeth Sutherland
So, this is something important: there are lots of studies out there that just find an absence of evidence. But an absence of evidence doesn't mean that the null hypothesis is true; it just means that you didn't get evidence for the positive. And actually, statistically speaking, you should not interpret it as support for the null unless you have the appropriate statistics and the appropriate results that support the null hypothesis. And this is something that is very prominent in the registered report format: when you come up with your experimental design, you have to be able to support the null. And so then we actually get strong answers that can be either a yes or a no; they can either confirm the hypothesis or actually say no to the hypothesis, instead of just an absence of evidence. So, I think they can really move science forward by actually showing us the truth of what is out there.
Nick Petrić Howe
And Fede, you've not been involved in this process of Nature adopting it, and you come from sort of a physics background, a bit of a different field.
Federico Levi
Yeah. So, I'm aware that I'm sort of representing the old guard here, the people that have not adopted, or are somewhat uninterested in, registered reports. But I think that it's a very interesting idea; I think there is a lot of merit to this. And there is a lot of interest in seeing what scientists might come up with in our disciplines that could potentially be amenable to being published in a registered report format. However, we have struggled to imagine what that might look like; that's why we are quite interested in seeing what people come up with. Personally speaking, the fascinating part — but also the somewhat challenging part to imagine — is that, essentially, a commitment to publish the article is made upon peer reviewing the methods and, let's say, the scientific programme. I would say that in physics and adjacent disciplines, absence of evidence, in the way that Mary Elizabeth put it, is often predominant, because, essentially, your experiment might just not work. That doesn't mean that the phenomenon you're looking for (and physics and related disciplines usually look for phenomena) isn't there. And a lot of adjustments need to be made in the course of the science, as you learn that the things you thought might work actually don't, in order to move from an absence of the phenomenon to a somewhat conclusive answer. So, that is why it is somewhat difficult to imagine that, at the experiment or method design stage, one can truly hope to construct an experiment that would yield statistically significant and robust null or positive results. And, in a way, you know, there are countless experiments out there that don't work, and not all of them need to be published. Although in some cases they might be interesting to publish, more often than not you just need to tweak something, or change the method somehow, to carry on.
Mary Elizabeth Sutherland
Yeah, I think that that's a really good point. I mean, I am a great proponent of registered reports; I think that they're really great and they can really help science, but I really don't think that they're the best format for all papers. I think that there are papers that are in their nature exploratory. And then there are those, like you said, Fede, that take multiple stages that you can't know in advance. So, I had one two-stage registered report when I was at Nature Human Behaviour, where basically they said: okay, this is our first hypothesis, and we're going to do this experiment to address it, but then we want to get to the mechanism, and we can't know what the mechanism is until we know which way the hypothesis turns out, right? So the idea was that we would accept in principle that the scope was: first we look at the hypothesis, then we look at the mechanism. That would go under peer review, then they would do the first part of it, and next they would do the mechanism part, basically a second version of the registered report, right? So it went through two registered report stages. That is possible. But again, that's really for when you can break it into just two stages. If you have to tweak and tweak and tweak to see if what you're getting is right, it doesn't make any sense. And then there are just plain exploratory studies, which we have the bad habit, I think, at least in the fields that I see, of calling hypothesis testing, where we say: oh, based on the literature, we hypothesise this. And it's like, well, no, you got interested in this question. It's not a clear hypothesis, as in hypothesis testing, right? You are making a prediction based on the literature which you are investigating, but it's really an exploratory study. And those ones also don't benefit from the registered report format at this stage, because of how rigid it is in holding you to those methodological details.
And like you said, Fede, sometimes you can't know everything; you've got to go try. You have to try one thing and then another thing, and that's what makes the paper. So, I do think that, at this stage — maybe we can change the format as well in some way — registered reports aren't for all papers. And I should just say that, in fact, at Nature, when we consider them, this is something that I've sometimes written in my decision letters: we suggest that their experimental design is better suited to our conventional article format than to the registered report format, because the experimental question and design are more exploratory in nature.
Federico Levi
I just wanted to add a couple of things, particularly inspired by Chris. The first is that I think that, as he correctly said, the potential bias in seeking, let's say, the phenomenon that you're looking for, in order to potentially increase your chances of publication, is definitely real in every discipline. And in that respect, there is an interesting Comment that was published, I think, either last year or two years ago in Nature Physics, which was somewhat controversially called the inverse Occam's razor. It was essentially flagging the fact that a certain number of physicists in particular disciplines are, let's say, trying to force particularly contrived explanations of the phenomena that they are seeing, in order to increase the chance that their findings will be perceived as significant or important, rather than looking for the easiest, or let's say more mundane, explanation of what they're seeing. And in that respect, I think that it's important to think about the bias, in formulating research questions and interpreting one's data, that seeking publication, particularly in selective titles such as Nature, might generate. So I think that that's definitely something to be aware of; perhaps the solution is community dependent. And I think that registered reports, at the very least, even if not directly applicable in some disciplines, would definitely give food for thought to scientists, in order to challenge these biases.
Nick Petrić Howe
One thing I imagine many of our listeners will be thinking is that it sounds like there are a lot of stages to this, and it may take a lot of time. So, does this run the risk of wasting a lot of time in publishing, if it just doesn't work out?
Mary Elizabeth Sutherland
Well, I mean, the idea is that whether it works out or not, we publish it, right? I guess, if you mean whether or not the review process will be successful and it will get to an acceptance in principle, you could argue that. You could also argue that the reviewers' comments are going to be helpful, because they basically preview what you would likely see if you went the conventional route, collected all of the data, submitted your paper and then it was sent out to review: whatever the reviewers bring up when you're just presenting the design is also something that they would likely bring up once the data have been collected. So I guess you could argue, well, we submitted it, and it went through peer review, and then it was rejected, and that was a waste of time; but it's also a preview of what's going to come up. And I should also say that that isn't usually the case, because the research question, that's the editorial question, right? Is the research question sufficiently interesting? And if the answer is yes, then the review process is intended to make the study an example of the best practices of research at the time, right? So, the idea is that it will then have the impact, because the research question is important, and we've committed to publishing it. So there's not really a reason to then reject it, unless it's something like: the peer reviewers say the only way to do this is to run at least 5,000 people in 10 different countries, and the investigators say that's impossible for us, we cannot do that logistically. Then it wouldn't end up going through, right? But that would only be if something is being asked that we actually can't come to a consensus on, because otherwise the reviewers — they're also researchers, right — know how difficult it is, so they try not to ask for things that are impossible.
Rather, they try to ask for things where you can say: okay, if we can't run 5,000 people in each country, is there a way that we could do an independent validation in a separate cohort? I mean, there's a back and forth that is possible at this stage. It's really a collaborative process between the reviewers and the authors to get the study design to be incredibly rigorous. And so that usually wouldn't result in a rejection. So it's usually not a waste of time, per se.
Federico Levi
But I think, you know, one of the things that has emerged as a point that has puzzled some scientists and some colleagues is: what if science has moved on in the year, or two, or three that it has taken to go from designing the experiment to actually gathering the results? That's a real potential problem.
Mary Elizabeth Sutherland
That's a gamble that we take, and we basically use our expertise to make that judgment. So, the idea is that we look at the research question and we ask: what's its history? How has it been followed throughout the literature? We also ask, in registered reports, for a timeline of how long the work will take, and then basically we use our knowledge from being in this business for a while to think: is it likely that the field is going to change so drastically that this question will no longer be of interest? And it's not a question of whether it will be answered before, but of whether it will be able to be answered in such a rigorous and extensive, or comprehensive, way. That being said, I would say that my judgment... Sometimes I think a paper is really cool, and it goes through peer review and the peer reviewers think it's cool, and it still doesn't end up having the impact that we expected. That happens in the conventional format, and it could happen in the registered report format. But I don't see the risk as necessarily greater, because it's the same editorial call: is this a development of sufficient scientific impact? It's just that you're looking not at the time of the peer review process, but slightly further ahead, because it's a two-stage peer review process.
Nick Petrić Howe
And one other criticism that's raised against registered reports quite often is it might sort of limit creativity, or it's too inflexible for the real way that science works. What would be your sort of response to that?
Mary Elizabeth Sutherland
Well, that one I totally disagree with. I think you can make that criticism if you think that we're trying to get all research to fit into the registered report format, which we aren't. Fede gave a really great example from his field of how — when you have to do a lot of tweaking to see what you're getting at — that's not good for the registered report format. The other thing, and I think this is a common misconception about registered reports, is that you can do exploratory work in a registered report; it just has to be clearly labelled. So the idea of a registered report is we say: okay, here is your main question, and you need to be able to test this hypothesis in the best possible way, as decided by you and the community, as represented by the peer reviewers. After you do that, if you find something cool, go ahead and explore it, just say that these are exploratory results. That's actually expected in registered reports: you will be able to confirm or deny your hypothesis, but then you will also get something new and interesting that you want to investigate further, and by all means, please do. The only thing that we ask is that you say that this is exploratory, so that it's explicit, saying, you know, 'and then we found this, and so we did an exploratory analysis', and so on. So I don't feel that it limits creativity, because it lets you do the exploratory analysis. What it doesn't let you do is basically find something super cool in your exploratory analysis, and then go back and rewrite your paper to say 'we hypothesised this really cool thing, look at how prescient we are', changing your whole way of thinking to claim this was actually predicted. Rather, you say: look, this cool thing came out. So I don't feel that it limits creativity, except if it's used for a question that it should not be used for.
Federico Levi
This is one of the things that I sort of struggle to wrap my head around. How do you design an exploratory study that you are certain will give a conclusive answer? Because the problem—
Mary Elizabeth Sutherland
—But—
Federico Levi
—is essentially that what you don't want is somebody going out there and doing research that then turns out to be inconclusive, right? You need to have a clear answer, but you don't know what the answer is going to be. So how can you actually be sure?
Mary Elizabeth Sutherland
But, but, it's not an ex... It's not an exploratory study. That's the key thing, right? You're not just exploring something; you have a very clear hypothesis. So I think one of the good examples was one that I handled at Nature Human Behaviour, a registered report that was ultimately published after I had joined Nature. And this was based on an influential paper, actually published in Nature, which showed that intranasal administration of oxytocin increased transfers by people playing trust games. This basically means that if you experimentally give somebody the hormone oxytocin (you spray it up their nose), it increases the amount of money that they will give to strangers. Right? So the idea is: more oxytocin, more trust, yeah? But after this study was published, some results went one way, some results went another. Some people replicated it, some people didn't. So the question is: is this true? Does an increase in oxytocin actually change trust? So here's a very clear question, right? And this is what I mean, it's not exploratory. They're not asking 'does it? how does it?' They want to know once and for all: is this effect real? Does giving people intranasal oxytocin increase trust, measured in this way? And so they did this huge study: it was double-blind, it was placebo-controlled, and they basically made an evidence-based advance, where they tested many, many people; they did everything they could to really get at whether this effect was true or not. And it isn't: they found no effect of oxytocin on trusting behaviour. So basically, that, I think, is a good example of what this format can do. You have a question, you have something in the literature, and your question is: is it true or not? And with this, you can say pretty convincingly that it's not. So it's not an exploration; it's not coming in and saying, 'oh, could this be, or could this not be?'
It's really a test of a question that's out there.
Nick Petrić Howe
Does that answer your concern Fede, or...?
Federico Levi
As I said at the beginning, it's somewhat difficult to translate these sorts of studies into the sort of science that my colleagues and I work with. And I think that, as Mary Elizabeth explained, not every discipline will be amenable to this. But I would say that, when I hear examples of successful registered reports, it seems to me there is a strong human component, which is about, you know, the size of a cohort, how many humans you enrol, and what sort of questions or what sort of protocols you're going to work with. And I think that's the aspect that is sometimes tricky for me to translate into, for instance, physics. Because even though I could say, okay, what's going to happen if I shine a laser on this material? Well, it depends on the laser; maybe your laser is not working correctly. There are a lot of details about the experiment that are difficult to plan ahead, because you might just not have considered the fact that the particular setup you have in mind isn't going to work, and then you need to change it. But then, of course, you hadn't planned for that, and so the registered report format is very inflexible. So that's where I somewhat get confused. But, as I said earlier, it's going to be interesting to see if somebody comes up with good ideas. And I could totally imagine that some sort of big survey kind of study (maybe, off the top of my head, in observational astronomy) could in principle be amenable to a registered report, because in that case the question is very clear: okay, we're going to look at this, and we're going to try to understand the prevalence of a certain type of star or a certain phenomenon. And in that case, the editor has the tools to decide: okay, whatever the answer, this could be a very helpful contribution to the community.
So yeah, it's possible to imagine, but it's a bit difficult.
Nick Petrić Howe
Are there equivalents, or different approaches, within your field that are being used or tried to tackle publication bias?
Federico Levi
Yeah, so not quite in the same structural way, shall we say, as registered reports, which I believe are a genuine innovation in the way publishing works. But I would say that, to echo what Chris said at the beginning, a lot of people take issue with the word 'significant'. They believe that removing that aspect — let's say, looking to publish only significant or outstanding results — removing that kind of criterion from the publication assessment would automatically address the presence of bias. And therefore there are a number of journals and publication venues that are making an effort to essentially do away with selecting only outstanding results, and instead to publish a much broader range of research that is simply rigorous and sound. So that's a way to tackle bias at the level of where one seeks to publish.
Nick Petrić Howe
So that's actually something I wanted to end on. Evidently, there is a problem with publication bias that registered reports are trying to tackle. But as we have discussed a little today, the format doesn't work for all research. In fact, as it stands, Nature is only offering it for Cognitive Neuroscience and the Behavioural and Social Sciences. But there are a lot of fields that also have publication bias and are not covered by that. So I guess I'm wondering: what else do we, as Nature, and also the broader family of Nature journals, need to do to tackle publication bias?
Federico Levi
I mean, that's another podcast.
Mary Elizabeth Sutherland
Yeah, I know, I was gonna say that's a huge question. That's really large.
Federico Levi
Yeah, I mean, that's an excellent and very challenging question to answer, because there is clearly a tension. What we are trying to do, as Nature, is to select and showcase what we believe are the most noteworthy achievements in science. And, you know, whichever way you select the few papers that we publish, there is always going to be a risk of skewing what we are publishing. I would say the tension is between our criteria (and those of other, similarly selective journals) and the way science is assessed. Because at the moment, I would say that a strong influence on publication bias comes from the fact that careers, and the overall assessment of scientists, are judged by looking at the venue in which a given piece of science is published. And I would say that there is definitely a push from a lot of researchers to remove the journal from the assessment of the science. But as long as the journal is an element, a proxy by which a given scientist's output is judged, then publication bias will always be a factor, because people will try to second-guess what the Nature editor wants, or skew their research such that it will make the pages of Nature, because that will have a beneficial impact on their career. So, I'm not sure whether Nature can do a lot to solve this. But I would say that there is definitely a tension, and there are a lot of pieces that generate this bias.
Nick Petrić Howe
That's all we've got time for on this Nature’s Take. Nature’s Take will return, but for now, thank you both so much for joining me.
Federico Levi
Thanks, Nick.
Mary Elizabeth Sutherland
Thank you for having us.