Drag the sliders above to simulate what happens as R&D teams make their experiments both more repeatable and more reproducible.
Seeing clearly outside the lab
When you can see the world clearly, it’s easy to get where you want to go:

You might still get where you want to go after your eyesight has deteriorated, but you are more likely to suffer the costs and delays of making wrong turns:

The fundamental difference between clear vision and blurry vision is that clear vision has a much higher signal-to-noise ratio, S/N.
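To make the ratio concrete, here is a minimal, purely illustrative sketch in Python (the signal level and noise levels are hypothetical, not taken from the article) showing how a higher S/N makes the same underlying quantity easier to read:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 10.0  # the quantity we are trying to read

for noise_sd, label in [(0.5, "clear (high S/N)"), (5.0, "blurry (low S/N)")]:
    readings = true_signal + rng.normal(0.0, noise_sd, size=25)
    snr = true_signal / noise_sd  # simple amplitude-based S/N
    print(f"{label}: S/N = {snr:.0f}, estimate = "
          f"{readings.mean():.2f} +/- {readings.std(ddof=1):.2f}")
```

With the same number of readings, the high-S/N estimate lands tightly on the true value, while the low-S/N estimate wanders.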
Seeing clearly inside the lab
When Researchers do experiments with high S/N, it’s much easier for them to get where they want to go, too. But what too many of us fail to appreciate – because these principles haven’t been taught effectively in our schools or our workplaces – is that when we do experiments with low S/N, our R&D is not simply “blurred” by the kind of random variation simulated in the blurry freeway sign above. Our data are additionally biased by non-random variation, such as the scrambling illustrated here:

Non-random variation is the bigger threat to the accuracy of our predictions and the quality of our decisions because it is more slowly varying and (ironically) less predictable than random variation:
- Although there is a random component to the error in every measurement we make, each random error is completely uncorrelated with every other random error, so random variation simply “blurs” all of our data in a uniformly predictable and repeatable way.
- In contrast, the non-random component of our experimental error biases many different slices of our data in many differently correlated ways, whether we know it or not.1 The non-random variation that eludes our ability to detect and model it “scrambles” our data in an effectively irreproducible way, as the simulation sketched after this list illustrates.
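One way to see the difference is to simulate it. In this hypothetical Python sketch (batch counts, noise levels, and drift magnitude are all assumptions chosen for illustration), one dataset carries only iid random error, while the other also carries a slowly drifting, batch-level bias:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
n_batches, n_per_batch = 8, 30

# Random-only error: iid noise, uncorrelated across every measurement.
random_only = true_value + rng.normal(0, 2.0, size=(n_batches, n_per_batch))

# Random + non-random error: the same iid noise plus a slowly drifting
# bias shared by every measurement within a batch (e.g., reagent aging,
# instrument drift, a new lot of consumables).
drift = np.cumsum(rng.normal(0, 1.5, size=n_batches))  # slowly varying
scrambled = random_only + drift[:, None]

print("batch means, random error only :", random_only.mean(axis=1).round(1))
print("batch means, plus drifting bias:", scrambled.mean(axis=1).round(1))
# Averaging within a batch shrinks the random blur toward the true value,
# but it cannot remove the batch-level bias: repeating the experiment in a
# different batch gives a different answer -- irreproducibility.
```

Adding replicates within a batch tightens the random blur, but only running and comparing independent batches exposes the non-random drift.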
The four keys to seeing more clearly in the lab are thus
- making our random errors smaller,
- making our non-random errors detectable (see the sketch after this list),
- turning the non-random errors we detect into improved protocols for data generation and analysis, and
- doing all of these things before too many of the signals our experiments are intended to measure get lost in our noise.
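As one concrete illustration of the second key, here is a minimal sketch (hypothetical Python, not R2DIO’s actual machinery) of a Shewhart-style control chart on repeated measurements of a reference sample; a non-random step shift produces out-of-limit points that iid noise alone would almost never produce:

```python
import numpy as np

def control_chart_flags(values, baseline_n=20):
    """Flag points outside 3-sigma limits estimated from a baseline period."""
    baseline = values[:baseline_n]
    center, sigma = baseline.mean(), baseline.std(ddof=1)
    lo, hi = center - 3 * sigma, center + 3 * sigma
    return [i for i, v in enumerate(values) if not (lo <= v <= hi)]

rng = np.random.default_rng(2)
# Daily measurements of the same reference sample; a step shift (e.g., a
# recalibration error) enters at day 30.
readings = np.r_[rng.normal(50, 1, 30), rng.normal(55, 1, 15)]
print("out-of-control days:", control_chart_flags(readings))
```

Once a point is flagged, the date and conditions of that run point straight at candidate root causes, which is exactly what turns detected non-random errors into improved protocols.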
Although non-random errors may be responsible for our most costly and confusing cases of irreproducibility, they can be straightforward to explain and control when we design our experiments and engineer our data systems – as we have done with R2DIO – to make non-random variation as easy to detect as possible, and its root causes as easy to identify as possible.
Conclusion
As Researchers we can get where we want to go faster when we make our experiments both more repeatable and more reproducible.
Until we do these things, many of our outcomes will be irreproducible not because our experiments were underpowered or p-hacked or HARKed, but because our research processes were fundamentally unstable. When such non-random errors occur in our experiments – i.e., when our experiments are not in a state of statistical control, to use W. Edwards Deming’s terminology – both our instincts and our statistical methods can greatly underestimate how often our experiments mistake noise for signal and signal for noise.2
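A back-of-the-envelope simulation suggests how large that underestimate can be. In this hypothetical Python sketch (the session-bias magnitude is an assumption chosen for illustration), a standard t-test compares two groups with no true difference; when each group is measured in a different session that carries its own non-random bias, the false-positive rate balloons past the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n, trials = 0.05, 12, 2000
false_pos_stable, false_pos_unstable = 0, 0

for _ in range(trials):
    # Stable process: no true effect, iid noise only.
    a = rng.normal(0, 1, n)
    b = rng.normal(0, 1, n)
    false_pos_stable += stats.ttest_ind(a, b).pvalue < alpha

    # Unstable process: still no true effect, but each group is run in a
    # different session that carries its own non-random bias.
    a = rng.normal(0, 1, n) + rng.normal(0, 1)  # session bias, group A
    b = rng.normal(0, 1, n) + rng.normal(0, 1)  # session bias, group B
    false_pos_unstable += stats.ttest_ind(a, b).pvalue < alpha

print(f"false-positive rate, stable process  : {false_pos_stable / trials:.2f}")
print(f"false-positive rate, unstable process: {false_pos_unstable / trials:.2f}")
```

The t-test’s nominal error rate assumes iid errors; the session-level bias silently violates that assumption, so the test mistakes noise for signal far more often than its p-values suggest.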
When we make our experiments both more repeatable and more reproducible, not only will we see the signals in our experiments more clearly, but our innate curiosity as scientists and engineers will also be repeatedly rewarded and emboldened. We can enter a virtuous cycle of ever more incisive questions and deeper insights that improves teamwork and accelerates our R&D.