### Replicability Index

###### General Information

The replicability index, also known as the R-index, is based on statistical power: the long-run probability of obtaining statistically significant results in a series of studies. For example, a study with 50% power is expected to produce 50 significant and 50 non-significant results in 100 attempts (Replicability review, 2016). The R-index ranges from 0% to 100% and can be defined as “a quantitative measure of research integrity that can be used to evaluate the statistical replicability of a set of studies (e.g., journals, individual researchers’ publications)” (Schimmack, 2014). Counting significant results in published journal articles suggests that roughly 90% of studies reject the null hypothesis, yet estimates of average statistical power suggest this figure should be closer to 60%. The gap between the observed success rate and the estimated power indicates inflation (for example, from publication bias), which the R-index is designed to penalize.
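The relationship between success rate and power can be made concrete. Below is a minimal Python sketch of an R-index-style calculation, assuming the commonly described formulation in which observed power is estimated from each study’s two-sided p-value and the index equals median observed power minus the inflation rate (success rate minus median observed power). The function name `r_index` and the input format are illustrative, not part of any official implementation.

```python
from statistics import NormalDist, median

def r_index(p_values, alpha=0.05):
    """Sketch of an R-index-style score from two-sided p-values.

    Assumed formulation (after Schimmack, 2014):
        inflation = success rate - median observed power
        R-index   = median observed power - inflation
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05

    # Post-hoc ("observed") power implied by each p-value.
    powers = [nd.cdf(nd.inv_cdf(1 - p / 2) - z_crit) for p in p_values]
    median_power = median(powers)

    # Share of studies that reached significance.
    success_rate = sum(p < alpha for p in p_values) / len(p_values)

    inflation = success_rate - median_power
    return median_power - inflation  # = 2 * median_power - success_rate

# Ten studies, all significant at p = .01: a 100% success rate but only
# ~73% median observed power, so the index drops well below both.
print(round(r_index([0.01] * 10), 2))  # → 0.46
```

Using `statistics.NormalDist` keeps the sketch dependency-free; in practice the calculation is run over test statistics collected from a journal or literature rather than a hand-typed list of p-values.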

For examples of how to use the R-index, guidance on calculating statistical power, and further information, see the resources provided below.

**Here are some resources on the technique**:

Buhrmester, M., Kwang, T., Gosling, S. D., Harari, G. M., Lane, N. D., & Wang, R. (2018). Replicability-Index. Update.

Dr. R’s blog about replicability: top 10 list. (2016). *Replicability-Index.*

Dr. R’s comment on the Official Statement by the Board of the German Psychological Association (DGPs) about the Results of the OSF-Reproducibility Project published in Science. (2015). *Replicability-Index.*

Harms, C., Genau, H. A., Meschede, C., & Beauducel, A. (2018). Does it actually feel right? A replication attempt of the rounded price effect. *Royal Society Open Science*, *5*(4), 171127.

Hidden figures: replication failures in the stereotype threat literature. (2017). *Replicability-Index.*

Hughes, J. (2015). Evaluating the r-index and the p-curve. *Disjointed Thinking.*

Knutson, B. (2011). What scientific concept would improve everybody’s cognitive toolkit? *Edge.*

McCook, A. (2017). “I placed too much faith in underpowered studies:” Nobel Prize winner admits mistakes. *Retraction Watch.*

Replicability Rankings of Eminent Social Psychologists. (2018). *Replicability-Index.*

Replicability review of 2016. (2016). *Replicability-Index.*

Schimmack, U. (2016). *The Replicability-Index: Quantifying Statistical Research Integrity.* Replicability-Index.

Schimmack, U., & Chen, Y. (2017). *The power of the pen paradigm: A replicability analysis.* Replicability-Index.

Schimmack, U., Heene, M., & Kesavan, K. (2017). Reconstruction of a train wreck: how priming research went off the rails. *Replicability-Index.*

Scudamore, C. L., Soilleux, E. J., Karp, N. A., Smith, K., Poulsom, R., Herrington, C. S., … & White, E. S. (2016). Recommendations for minimum information for publication of experimental pathology data: MINPEPA guidelines. *The Journal of Pathology*, *238*(2), 359-367.

The Replicability Index is the Most Powerful Tool to Detect Publication Bias in Meta-Analyses. (2020). *Replicability-Index.*

2016 replicability rankings of 103 psychology journals. (2017). *Replicability-Index.*

2018 Journal Replicability Rankings. (2018). *Replicability-Index.*