Promoting Reproducible Science
Imagine conducting an experiment that reaches interesting conclusions and then publishing a report of that work. One test of the validity of such conclusions is that the same conclusions emerge when the experiment is repeated. This is known as reproducible science.
If the repeated experiment does not reach the same conclusions as the original work, despite using an apparently identical design and levels of independent variables, then one or other of the experiments must be erroneous.
Conclusions that cannot be reproduced in further work may indicate insufficient (small) samples, HARKing (hypothesising after the results are known) or p-hacking in the original work. This is not unexpected, given the pressure on researchers to produce new and original findings with impact.
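Why p-hacking matters can be shown with a short simulation. The sketch below (illustrative numbers, a simplified z-test with known unit variance rather than a full t-test) compares a single pre-planned test against one common p-hacking practice: "peeking" at the data repeatedly and stopping as soon as p < .05. Both groups are drawn from the same distribution, so every significant result is a false positive.

```python
import random
from math import sqrt
from statistics import NormalDist

def p_value(xs, ys):
    """Two-sided z-test for equal means, assuming known unit variance (simplified)."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / sqrt(2 / n)
    return 2 * (1 - NormalDist().cdf(abs(z)))

def one_experiment(rng, peek=False, start=10, step=5, max_n=50):
    """Both groups come from the SAME distribution, so a 'significant' result is a false positive."""
    xs = [rng.gauss(0, 1) for _ in range(max_n)]
    ys = [rng.gauss(0, 1) for _ in range(max_n)]
    if not peek:
        return p_value(xs, ys) < 0.05  # one pre-planned test at the full sample size
    # p-hacking via optional stopping: test every few observations, stop at the first p < .05
    for n in range(start, max_n + 1, step):
        if p_value(xs[:n], ys[:n]) < 0.05:
            return True
    return False

sims = 2000
rng = random.Random(1)
fixed = sum(one_experiment(rng) for _ in range(sims)) / sims
rng = random.Random(1)  # same seed, so both conditions see identical data
peeked = sum(one_experiment(rng, peek=True) for _ in range(sims)) / sims
print(f"false-positive rate, single planned test: {fixed:.3f}")  # close to the nominal 0.05
print(f"false-positive rate, with peeking:       {peeked:.3f}")  # substantially inflated
```

Because the peeking condition includes the final full-sample test, its false-positive rate can only be higher than the pre-planned test on the same data, which is why repeated looks without correction inflate the error rate.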
Robust research methodology is a key focus of work in the Lighting Research Group at the University of Sheffield. We have shown, for example, how range bias in an adjustment task explains the outcomes of experiments on brightness preference, colour preference and discomfort from glare. We have repeated experiments on multiple occasions to determine whether the findings are robust. Consider our investigation of pedestrians' ability to detect pavement hazards. The original experiment was repeated four times, with the repetitions checking an extended range of illuminances, an alternative mode of presenting hazards, the type (raised or depressed) of trip hazard, and, in a recently completed study, the impact of multi-tasking.
We have published several papers promoting reproducible lighting research and the July 2020 Editor’s report for LEUKOS, The Journal of the Illuminating Engineering Society of North America, suggests that these are having some impact.
In the year to that report, the most downloaded paper was Jim Uttley’s article Power Analysis, Sample Size, and Assessment of Statistical Assumptions—Improving the Evidential Value of Lighting Research. This article reviewed previously published lighting research and found that most studies use inadequate sample sizes and fail to report important statistical information such as how statistical assumptions are tested, how sample sizes are determined, and the size of any effects found. The article also provided a brief primer on some of these issues, to help improve statistical reporting within the field in the future.
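An a priori power analysis of the kind the article recommends can be sketched in a few lines. The example below uses the standard normal-approximation formula for a two-sided, two-sample comparison of means; the inputs (Cohen's d = 0.5, alpha = .05, 80% power) are conventional illustrative values, not figures taken from Uttley's paper.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample test of means,
    via the normal approximation n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A 'medium' standardised effect at conventional settings needs ~63 per group
# (the exact t-distribution calculation gives about 64).
print(sample_size_per_group(0.5))  # 63
```

Running the same calculation for a large effect (d = 0.8) gives about 25 per group, which makes concrete why studies with a handful of participants can only reliably detect very large effects.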
In that same period, the most cited paper was Steve Fotios' article A Revised Kruithof Graph Based on Empirical Data. The original Kruithof graph proposed a relationship between the colour (CCT) and amount (illuminance) of light for comfortable conditions. It has since been widely cited in design texts and has been apparently 'validated' by many other researchers. However, even a cursory read of Kruithof's original paper reveals the absence of anything like robust evidence: there are no clues as to what experiment he did. In this new paper, Steve searched for papers directly or indirectly investigating the Kruithof graph and screened them using a very basic set of methodological criteria, for example, that in a repeated measures design the different scenes were observed in a randomised order and that conclusions were established through statistical analysis. Only nine of the 29 papers considered passed this screening process. When the results from these papers were plotted, the resultant graph was a flat line, very different from Kruithof's, and one which suggests there is no relationship.
One problem with repeating a previous experiment is that using its findings as a datum for selecting test variables may cause the original findings to be repeated by default, leading to a misplaced conclusion that they have been validated. Simply reproducing the results of a previous study does not show that the study provides good evidence if it was based on flawed methodology. In the Lighting Research Group we focus on using robust methods that provide findings which are both valid and reproducible.