Most researchers would likely agree that a core part of the scientific process is attempting to replicate others’ findings, to see how robust and trustworthy those findings are. While conceptual replications (where a researcher tests the same ideas as a previous study, but using different methods) are common, direct replications – where a researcher reproduces as closely as possible the methods used in a previous study and attempts to find the same effects reported in that study – are rare. This is a problem, because direct replications are a very good way to estimate how robust a finding is.
One issue that makes conducting direct replications difficult is that researchers rarely share the materials they use to conduct a study. That is, if you wanted to replicate the finding from this – https://www.tandfonline.com/doi/full/10.1080/13546805.2019.1700789 – 2020 paper of mine, you would need to build your own version of the task, because I didn’t share the one I used in that study (something I have only recently addressed). You could do that, but the task you build will not be precisely the same as the task employed in the original study (e.g., you may use fewer trials, which will likely make the task a less reliable measure of the construct you are trying to assess). In addition, building the task will take a considerable amount of time, and this could be thought of as an enormous waste of resources (if you think about all of the researchers in all of the labs recreating tasks that should have been shared openly when they were first used).
Sharing study materials is, therefore, an important way in which we can increase the transparency of our work and reduce ‘research waste’. A pretty straightforward way to do this is through the Open Science Framework (OSF), and there is a guide on how to do so at this – https://osf.io/9mu7r/download – address.
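For those who prefer to script their uploads (for example, to push task files to a project at the end of a build pipeline), something along the lines of the sketch below may help, using the third-party osfclient Python package (pip install osfclient). The project id (“abc12”), the token, and the file name are placeholders, and the exact API can differ between versions, so treat this as a rough sketch rather than a recipe – the guide linked above remains the authoritative route.

# Rough sketch: uploading study materials to an OSF project with osfclient.
# The token, project id, and file name are placeholders; substitute your own.
from osfclient import OSF

osf = OSF(token="YOUR_PERSONAL_ACCESS_TOKEN")  # create a token in your OSF profile settings
project = osf.project("abc12")                 # the short id in your project's URL
storage = project.storage("osfstorage")        # the default OSF storage provider

# Upload the study materials (e.g., a zipped experiment task) to the project.
with open("task_materials.zip", "rb") as fp:
    storage.create_file("task_materials.zip", fp)

Once the files are in the project, making the project public (or generating a view-only link for review) means anyone attempting a direct replication can download exactly the task you used, rather than rebuilding it from the method section.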
