Web Postings


Online Essays and Reports on Scientific Practices:

Unobtrusive Methods (Raymond Lee, 2019)

Raymond Lee reports on unobtrusive methods of data collection that do not involve directly eliciting information from research participants, an approach that is useful when the topics being studied are sensitive or awkward for respondents.

What If Only What Can Be Counted Will Count? A Critical Examination of Making Educational Practice “Scientific” (Jennifer Ng, et al., 2019)

Jennifer Ng and colleagues use ethnographic research methods to examine researchers and educators and the scientific practices they employ.

Research in Social Psychology Changed Between 2011 and 2016: Larger Sample Sizes, More Self-Report Measures, and More Online Studies (Kai Sassenberg and Lara Ditrich, April 12, 2019)

Sassenberg and Ditrich note that the debate about false positives in psychological research has led to a demand for higher statistical power. They also document changes in how data and samples are collected, including the growing use of self-report measures and online studies.

Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods (Evan C. Carter, et al., June 13, 2019) 

Carter et al. discuss how publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis.

Improving scientific practice in sports‐associated drug testing (Jon Nissen‐Meyer, et al., July 2019)

Nissen-Meyer et al. discuss the ethical, scientific, and legal considerations of scientific practice in sports-associated drug testing.

Modeling Tropical Diversity in the Undergraduate Classroom: Novel Curriculum to Engage Students in Authentic Scientific Practices (Jana Bouwma-Gearhart, et al., August 2019)

In this scholarly article, Bouwma-Gearhart et al. discuss methods of engaging students in science and authentic scientific practices.

Morality in Scientific Practice: The Relevance and Risks of Situated Scientific Knowledge in Application-Oriented Social Research (Letizia Caronia, et al., September 2019)

In this scholarly article, Caronia et al. discuss the renewed consensus on empiricism in application-oriented social science and a growing trust in evidence-based practice and decision-making.

Presidential Address, PSA 2016: An Epistemology of Scientific Practice (Kenneth Waters, October 2019)

Kenneth Waters argues that philosophers’ traditional emphasis on theories, theoretical modeling, and explanation misguides research in the philosophy of science. Waters also discusses articulating and applying core theories as a part of scientific practice.

Toward a practice‐based theory for how professional learning communities engage in the improvement of tools and practices for scientific modeling (Jessica Thompson, et al., November 2019)

Thompson et al. argue that to improve science instruction, teachers need opportunities to learn collaboratively from practice and in practice, and to engage in the revision of classroom tools.

Creating a bubble…and an understanding of scientific practices (Scott Ashmann, et al., 2018)

In this journal article, Ashmann et al. discuss the need for creativity in scientific practice, as well as the role of communication, the open-endedness of scientific endeavors, and the use of others’ ideas.

Editorial: Improving Scientific Practices and Increasing Access (Christopher Aberson, December 2018)

In this editorial, Aberson discusses the importance of public access to science and scientific materials and of improving scientific practices within academia.

Using Apps That Support Scientific Practices (Kelly Mills, et al., December 2017)

In this article, Mills et al. discuss the opportunities that new technologies and applications offer for improving scientific practices in academia and for young students considering going into science.

Mastering Scientific Practices With Technology, Part 3 (Ben Smith, et al., May 2016)

In this report, Ben Smith describes the many ways technology can be used as a tool to improve work in the sciences.

A Reply to Professor Lubet’s Critique (University of Wisconsin Madison, June 2, 2015)

Sociologist Alice Goffman replies to Northwestern University Professor Steven Lubet’s critique of her book On the Run, refuting Lubet’s charge that she was complicit in a crime and inaccurate in her reporting. Other sociologists’ comments on her response can be viewed at this link.

Social, Behavioral, and Economic Sciences Perspectives on Robust and Reliable Science (National Science Foundation, May 2015)

A report by the Subcommittee on Replicability in Science of the Advisory Committee to the National Science Foundation Directorate for Social, Behavioral, and Economic Sciences, detailing the committee’s findings on how to create more robust research methods at the National Science Foundation.

Transparency, Openness and Replication (RKWRICE, May 21, 2015)

In a blog post about the LaCour and Green retraction, Rick Wilson discusses the implications of the retracted paper for political science and the lessons to be learned from the retraction.

Does the public trust science? A university communicator’s reflections (Health News Review, May 21, 2015)

Gary Schwitzer describes a blog post by Kirk Englehardt, the Director of Research Communication at the Georgia Institute of Technology. The blog post is the latest in a series of inside looks at how medical press releases are put together and what communication is like between universities and the public.

My View on the Connection between Theory and Direct Replication (The Trait-State Continuum, April 9, 2015)

Brent Donnellan comments on the idea that researchers should propose an alternative theory if they are to make a valid replication attempt. He argues that it makes no difference “what is in the heads of replicators,” although he is interested in studies that pit two competing predictions against each other.

Why a meta-analysis of 90 precognition studies does not provide convincing evidence of a true effect (Daniel Lakens, April 4, 2015)

Daniel Lakens reviews a recent meta-analysis of precognition studies and concludes that it is not convincing and that, despite its results, there are still reasons to doubt the evidential value of studies reporting evidence for precognition.

How a p-value between 0.04-0.05 equals a p-value between 0.16-0.17 (Daniel Lakens, March 20, 2015)

A statistical discussion of the meaning of p-values and of researchers’ reliance on them in statistical inference.

Is there p-hacking in a new breastfeeding study? And is disclosure enough? (The Hardest Science, March 18, 2015)

A blog post by Sanjay Srivastava on The Hardest Science that suggests the authors of a recent study on the benefits of breastfeeding essentially admitted to p-hacking during their analysis.  It urges researchers to do a better job of conveying exactly what p-hacking is to members of the press.

How do you feel when something fails to replicate? (The Trait-State Continuum, March 13, 2015)

Brent Donnellan argues that how researchers feel about the results of replication attempts has no bearing on their interpretation of those results; rather, the “important issues are the relevant facts – the respective sample sizes, effect size estimates, and procedures.”

The End-of-Semester Effect Fallacy: Some Thoughts on Many Labs 3 (Rolf Zwaan, March 11, 2015)

The “end-of-semester effect” is the idea that data provided by subjects who are run at the end of the semester is useless. But Rolf Zwaan argues there is little evidence to support this claim.

Which statistics should you report? (Daniel Lakens, February 27, 2015)

Daniel Lakens argues for reporting as many statistics as possible, because researchers should take into account that meta-analyses will most likely be conducted on the results of their papers.

Can we Live without Inferential Statistics? (Rolf Zwaan, February 26, 2015)

Rolf Zwaan talks about how the journal Basic and Applied Social Psychology (BASP) has banned the reporting of inferential statistics. He questions how science can be conducted without summary statements.

Op-Ed: Forensic Authorities Say Rigorous Review of Forensic Practices Urgently Needed (Innocence Project, February 25, 2015)

The Innocence Project writes an op-ed decrying the state of forensic science in the United States.

Using Faulty Forensic Science, Courts Fail the Innocent (Live Science, February 24, 2015)

A Live Science op-ed discusses how flaws in forensic science lead to serious problems in the US justice system.

I did a Newsnight thing about how politics needs better data (Bad Science, February 16, 2015)

Ben Goldacre of Bad Science appears on the BBC program “Newsnight” to explain why democracy requires solid, scientific evidence in order to create valid policies. (Video)

“The” Effect Size Does Not Exist (Data Colada, February 9, 2015)

Blog posting at Data Colada that discusses how computing “the average” effect size may be impossible because we cannot determine how to weight effect sizes from different experiments.

Share the Results of your R-Index Analysis with the Scientific Community (R-Index Bulletin, February 6, 2015)

R-Index Bulletin introduces the mission of its site: to provide researchers a publication to share results free of charge, and offer a transparent forum for other members of the scientific community to comment and post analysis.

The Dripping Stone Fallacy: Confirmation Bias in the Roman Empire and Beyond (Rolf Zwaan, January 28, 2015)

Rolf Zwaan discusses confirmation bias and how it relates to what he calls the “Dripping Stone Fallacy” (a reference to Emperor Claudius). He discusses how a person who disagrees with a paper’s conclusions is more likely to rigorously assess its methods and results than if he or she agreed with the conclusions.

When Replicating Stapel is not an Exercise in Futility (Rolf Zwaan, January 18, 2015)

Rolf Zwaan discusses the value of replicating the research of the disgraced and discredited Diederik Stapel. He argues that while Stapel’s fraud invalidates his results, it does not affect the value of his hypotheses.

The Test of Insufficient Variance (TIVA): A New Tool for the Detection of Questionable Research Practices (Replication-Index, December 30, 2014)

A blog post on Replication-Index introduces the Test of Insufficient Variance (TIVA), which converts a set of reported p-values to z-scores and checks whether their variance is lower than would be expected by chance, a pattern suggestive of questionable research practices.
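
To make the logic concrete, here is a minimal sketch (in Python, with illustrative function names and data; the actual Replication-Index implementation may differ) of how such a check could be computed: convert two-sided p-values to z-scores and ask whether their variance is smaller than the value of roughly 1 expected from sampling error alone.

```python
# Minimal sketch of a TIVA-style check. Assumptions: the p-values come from
# two-sided tests of a roughly homogeneous effect; under honest reporting the
# corresponding z-scores should have a variance of about 1.
import numpy as np
from scipy import stats

def insufficient_variance_test(p_values):
    z = stats.norm.ppf(1 - np.asarray(p_values) / 2)   # p-values -> z-scores
    k = len(z)
    var_z = np.var(z, ddof=1)                          # observed variance of z-scores
    chi2 = var_z * (k - 1)                             # ~ chi-square(k-1) if true variance is 1
    p_left = stats.chi2.cdf(chi2, df=k - 1)            # small value => suspiciously little variance
    return var_z, p_left

# Example: a run of "just significant" results yields a tellingly small variance.
print(insufficient_variance_test([0.049, 0.041, 0.035, 0.047, 0.028]))
```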

My BMJ editorial: how can we stop academic press releases misleading the public? (Bad Science, December 10, 2014)

Ben Goldacre promotes his BMJ editorial arguing that “academic press releases routinely exaggerate scientific findings and mislead the public.” It accompanies a paper that provides suggestions on how to stop academic press releases from misleading the public.

Psychology journals should make data sharing a requirement for publication (Daniel Lakens, December 10, 2014)

Argument by Daniel Lakens that in order for a study to be published, journals should require authors to make the data accessible.

Musings on the “file drawer” effect (Babies Learning Language, November 19, 2014)

The file drawer effect occurs when the results of meta-analyses are inflated because negative findings go unpublished, which suggests that negative results that advance our knowledge should be published. Michael Frank goes on to discuss when it is appropriate to publish and how to avoid publication bias.

Comments on “reproducibility in developmental science” (Babies Learning Language, November 10, 2014)

In response to an article by Duncan et al., Michael Frank suggests different ways of making developmental psychology more reproducible, including larger samples, internal replication, and developmental comparison.

Help! Someone Thinks I P-hacked (Data Colada, October 22, 2014)

Uri Simonsohn writes a blog post on Data Colada that discusses how authors can respond to accusations that they have engaged in p-hacking.

Why Do We Cite Small N Studies? (Daniel Lakens, October 9, 2014)

Daniel Lakens discusses the concept of an “impact factor,” raises questions about how sample size actually affects the quality of results, and discusses the “N-Pact Factor.”

Publication bias in psychology: putting things in perspective (Daniel Lakens, September 24, 2014)

A blog post by Daniel Lakens describing techniques that assist in estimating the power of studies and an analysis of publication bias in psychology.

Too Good to Be True (Slate, July 24, 2014)

A paper written by Alec Beall and Jessica Tracy found that women are more likely to wear red or pink at peak fertility. However, there were specific problems in the study concerning representativeness, measurement, bias, and statistical significance.

The Real Source of the Replication Process (Funderstorms, July 12, 2013)

David Funder describes the origins of the replication movement and what its goals were in the beginning. He also discusses how there have been significant changes in this movement.

Developing Good Replication Practices (Rolf Zwaan, July 9, 2014)

Rolf Zwaan creates a list of practices required for good replication, such as using sufficient power, and pre-registering tests.

Shifting our cultural understanding of replication (Babies Learning Language, June 2, 2014)

Michael Frank discusses the importance of replication and the culture of psychology, including advice on the correct way to treat authors whose papers do not replicate.

Replications (Van Bergen, 1963) (Daniel Lakens, May 15, 2014)

Blog post by Daniel Lakens on the importance and necessity of replications.

Things that make me skeptical (The Trait-State Continuum, March 26, 2014)

Brent Donnellan discusses whether and when unbelievable results should ever be believed, and talks about factors (beyond sensational results) that make him skeptical, including small sample sizes, conceptual replication, and “breathless press releases.”

Estimating p(replication) in a practical setting (Babies Learning Language, March 24, 2015)

Blog post by Michael Frank that estimates the proportion of psychological research studies that can be replicated and discusses the importance of replication.

Just Do It! (The Trait-State Continuum, March 6, 2014)

Brent Donnellan praises a section in Perspectives on Psychological Science advocating replication. He goes on to make his own recommendation: researchers should dedicate 5 to 10 percent of their time to replication.

Science journalists: Ask not what the science of science communication can do for you… (The Cultural Cognition Project, February 5, 2014)

Dan Kahan writes about science communication and how science journalists need to take more responsibility in the dissemination of ideas to the public.

What effect size would you expect? (Daniel Simons, January 25, 2014)

A blog post ruminating on the effect sizes one would expect to observe if the null hypothesis were true, and the implications of this analysis for attempts to replicate a study.
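
As a rough illustration of the underlying point (a sketch under assumed group sizes, not code from Simons’s post), the simulation below shows that sizeable observed effect sizes routinely appear by chance even when the true effect is exactly zero.

```python
# Illustrative sketch: distribution of observed Cohen's d across many
# two-group experiments when the true effect is zero (sample sizes are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_experiments = 20, 10_000

ds = []
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)   # both groups drawn from the same
    b = rng.normal(0, 1, n_per_group)   # population, so the true d is 0
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    ds.append((a.mean() - b.mean()) / pooled_sd)

ds = np.abs(np.array(ds))
# Sampling error alone produces many nontrivial |d| values at this sample size.
print(f"mean |d| = {ds.mean():.2f}, 95th percentile |d| = {np.percentile(ds, 95):.2f}")
```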

Replication will not save psychology (Notes from Two Scientific Psychologists, November 7, 2013)

Although Andrew Wilson thinks that replication is important, he believes that psychology needs better and stronger theories, which will lead to more directed hypotheses. He argues that psychology needs to rely on more than replication to fix many of its current problems.

Open Access Week 2014 (Building Blogs of Science, October 25, 2013)

Fabiana Kubke makes an argument in support of Open Science, explaining why and how knowledge belongs to the public.  Sharing knowledge is the duty of researchers and should be prioritized over career advancement and publication.

Science Gone Bad (Building Blogs of Science, October 5, 2013)

Fabiana Kubke meditates on the aftermath of a sting by Science Magazine that revealed deficient peer review at some Open Access journals. She argues that this illuminates a broader problem with peer review, not with Open Access itself, and goes on to criticise Science Magazine and the AAAS for what she sees as apparent hypocrisy in their investigation.

Predatoromics of science communication (Building Blogs of Science, October 4, 2013)

Fabiana Kubke discusses problems with the current process of peer review and how publication bias can come into effect.

Science is in a reproducibility crisis – how do we resolve it? (The Conversation, September 19, 2013)

Fiona Fidler and Ascelin Gordon discuss the current trend of irreproducibility, how the scientific community is responding, and what steps should be taken moving forward.

Post-publication peer review and social shaming (Babies Learning Language, September 10, 2013)

Michael Frank discusses the peer review system, critiquing the system as a whole and suggesting areas that need to be improved to encourage stronger research practices.

Failure to replicate, spoiled milk and unspilled beans (Building Blogs of Science, September 6, 2013)

Fabiana Kubke begins by discussing broader issues related to replication, such as the difficulty of replicating others’ experiments, the lack of recognition for replication attempts, and barriers towards publishing negative attempts. She then opines on an under-discussed issue: the fact that published studies often do not contain enough information for other labs to undertake a replication attempt.

When Can Psychological Science Be Believed? (Rabble Rouser, July 10, 2013)

Lee Jussim writes a post on his Psychology Today-hosted blog, Rabble Rouser, that suggests a set of guidelines to use when assessing the credibility of psychological research findings and conclusions.

Direct replication of imperfect studies – what’s the right approach? (Daniel Simons, June 3, 2013)

In a response to a blog post by Rolf Zwaan, Daniel Simons discusses the use and importance of direct replication in determining the reliability and validity of a study’s findings.

U.S. Faculty Survey 2012 (Roger C. Schonfeld, Ross Housewright, and Kate Wulfson, April 8, 2013)

Ithaka Strategic Consulting + Research (S+R) conducts surveys of American faculty once every three years. This survey reports on “research and teaching practices broadly, as well as the dissemination, collecting, discovery, and access of research and teaching materials” at American universities.

An Open Discussion on Promoting Transparency in Social Science Research (Edward Miguel, March 20, 2013)

After attending a CEGA Blog Forum in December of 2012, UC Berkeley economics professor Edward Miguel writes about how distinctions between disciplines have limited “synergy” in working towards transparency in social science research. He explains how a Pre-Analysis Plan (PAP) and study registration could lead to better research in all fields.

Bayes’ Rule and the Paradox of Pre-Registration of RCTs (Donald Green, March 20, 2013)

Donald P. Green, a professor of Political Science at Columbia University, discusses how Bayes’ rule explains why pre-registration of research enhances the credibility of that research. His argument is that a pre-registration document detailing how analysis will be conducted prevents researchers from “fishing” for significant statistics.
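
As a toy illustration of this argument (the numbers and function below are illustrative assumptions, not taken from Green’s post), Bayes’ rule shows how the credibility of a significant result drops when undisclosed analytic flexibility inflates the effective false-positive rate, and recovers when pre-registration holds it at the nominal alpha.

```python
# Toy Bayes'-rule illustration of why pre-registration boosts credibility.
# All numbers are made-up assumptions, not estimates from Green's post.
def posterior_true_effect(prior, power, alpha):
    # P(true effect | significant result) via Bayes' rule
    numerator = prior * power
    return numerator / (numerator + (1 - prior) * alpha)

prior, power = 0.30, 0.80  # assumed prior probability of a true effect and study power
print(posterior_true_effect(prior, power, alpha=0.05))  # pre-registered: false-positive rate stays at 0.05
print(posterior_true_effect(prior, power, alpha=0.30))  # with "fishing", the effective alpha inflates
```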

Monkey Business (Macartan Humphreys, March 20, 2013)

Macartan Humphreys, a Political Science professor at Columbia University, claims that he is “sold on the idea of research registration” after seeing firsthand how fragile findings are in the research canon of the political economy of development, and after trying pre-registration himself.

Targeted Learning from Data: Valid Statistical Inference Using Data Adaptive Methods (Maya Petersen, Alan Hubbard, and Mark van der Laan, March 20, 2013)

This paper advocates in favor of pre-specified analysis plans for data, but discusses the challenges of implementing such plans. It gives recommendations on how a priori plans can and should be used.

Transparency and Pre-Analysis Plans: Lessons from Public Health (David Laitin, March 20, 2013)

David Laitin contends that Political Science has been, and likely will continue to be, an observational science. He says that the model for best practices in Political Science therefore methodologically resembles epidemiology, rather than methods for improving experimental endeavors such as drug trials.

Freedom! Pre-Analysis Plans and Complex Analysis (Gabriel Lenz, March 20, 2013)

Gabriel Lenz discusses how Pre-Analysis Plans (PAPs) prevent “data torturing” but can have the “reverse consequence” of leading researchers to use simpler analyses. Lenz contends that researchers ought to understand that PAPs allow for more complex statistical analyses, because they prevent rigorous analysis from being seen as “data torturing.”

The Need for Pre-Analysis: First Things First (Richard Sedlmayr, March 20, 2013)

Richard Sedlmayr argues that debates over Pre-Analysis Plans (PAPs) are beside the point, because the core problem with research is not exploratory analysis itself but that “exploratory analyses [are being presented] as hypothesis-driven when they aren’t.” Instead of “hundred-page PAPs,” Sedlmayr advocates for greater transparency in the documentation of research.

Transparency-Inducing Institutions and Legitimacy (Kevin Esterling, March 20, 2013)

Kevin Esterling argues that while pre-registration of research has its flaws, such as possibly disincentivizing exploratory research, it is nevertheless useful as a tool for increasing transparency. Esterling believes that greater transparency is necessary in all styles of research, and while pre-registration can achieve this in part, he believes a broader solution would be the adoption and use of institutions that help make research more transparent.

Research Transparency in the Natural Sciences: What can we learn (Temina Madon, March 20, 2013)

Madon advocates for increased transparency in the natural sciences through a variety of methods, including instituting minimum reporting guidelines for study designs, rewarding researchers who make their experimental data public, and increasing funding for replication attempts.

The Role of Failure in Promoting Transparency (Carson Christiano, March 20, 2013)

Carson Christiano, Head of Research Partnerships for the Center for Effective Global Action (CEGA), discusses her organization’s support for open discussion of “ambiguous research results, misguided technologies, and projects that fail to achieve their desired impact.” She discusses the importance of sharing and discussing failure in the sciences.

A Replication Initiative from APS (Funderstorms, March 5, 2013)

APS, a psychology research organization, has taken steps to fix the replication problem in published research.  One of APS’s psychology journals will include a section to publish replication studies, which is a step that author David Funder believes to be important.

Does (effect) size matter (Funderstorms, February 1, 2013)

Since social psychology papers typically focus on p-values and not on effect sizes, David Funder argues in this post that effect sizes are just as important. According to Funder, it is important to know the degree to which an effect occurs, and psychologists should report effect sizes in their research.

Studies show only 10% of published science articles are reproducible. What is happening? (Moshe Pritsker, May 3, 2012)

A blog post by Moshe Pritsker lists some major failures to replicate findings in various academic fields.

Replication, period. (Funderstorms, October 26, 2012)

David Funder discusses the importance of replication and how it is the cure-all for problems in published research. He also makes the point that if professors and researchers are going to put in the effort to do replications, then this research should be published.

Cherry picking is bad. At least warn us when you do it. (Bad Science, September 24, 2011)

Ben Goldacre explains how selectively referencing other studies in the scientific literature can have severe consequences. He also brings up the idea of “systematic reviews” as a solution to this problem.

Brain imaging studies report more positive findings than their numbers can support. This is fishy. (Bad Science, August 26, 2011)

Ben Goldacre discusses how detecting publication bias is difficult in studies of the brain, because there is not enough variation in study size for the usual methods to work. He describes how John Ioannidis gathered a large array of research results, used power calculations to work out how many positive findings should be expected, and concluded that brain imaging studies report about twice as many positive findings as should be possible.
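
The general logic resembles an excess-significance check; the sketch below (with placeholder power values and counts, not Ioannidis’s actual data or code) compares the number of reported positive findings against the number the studies’ estimated power could plausibly support.

```python
# Sketch of the excess-significance logic described in the post: compare the
# number of "positive" studies actually reported with the number expected
# given each study's estimated power. All values here are placeholders.
from scipy import stats

estimated_power = [0.30, 0.25, 0.40, 0.35, 0.20, 0.30]  # hypothetical per-study power
observed_positive = 5                                    # hypothetical count of significant results

expected_positive = sum(estimated_power)                 # expected count if the power estimates hold
mean_power = expected_positive / len(estimated_power)
# Binomial test: how surprising is the observed count given the average power?
p_excess = stats.binomtest(observed_positive, len(estimated_power),
                           mean_power, alternative="greater").pvalue
print(expected_positive, p_excess)
```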

Existential angst about the bigger picture (Bad Science, May 21, 2011)

Ben Goldacre discusses how engaging, “fun,” and positive research results are more likely to be published (publication bias). He explains how a lack of pre-registration leads to many unpublished negative results, thereby distorting the “bigger picture” of what the research is saying.

Wealth Debate: How Two Economists Stacked the Deck (The Fiscal Times, March 25, 2011)

Mark Gimein describes a study performed by two researchers, Michael I. Norton from Harvard Business School and Dan Ariely from Duke. Norton and Ariely claim that Americans prefer wealth distribution that looks like Sweden’s over the wealth distribution of the U.S. However, Gimein reports that Norton and Ariely did not relay Sweden’s accurate wealth distribution to study participants, instead asking them to compare distribution in the U.S. to income distribution in Sweden. Gimein criticizes this approach as misrepresenting economic realities.

“None of your damn business” (Bad Science, January 14, 2011)

Ben Goldacre discusses how academic journals handle retractions poorly. He shares how some editors believe that details of retractions do not need to be shared, and argues that such an attitude is not good for science.

A new and interesting form of wrong (Bad Science, November 27, 2010)

Ben Goldacre discusses the Stonewall organization’s survey findings indicating that the average coming-out age has fallen. Goldacre points out some of the interesting statistical traps the survey fell into, and attests that, “Maybe we should accept that all research of this kind is only produced as a hook for a news story about a political issue, and isn’t ever supposed to be taken seriously. And in any case, my hunch is that a well-constructed study would probably confirm Stonewall’s original hypothesis.”

Moniker mumbo jumbo (Psychology in Action, November 1, 2010)

Stephanie Vezich discusses the Name-Letter Effect paper by Nelson & Simmons, which postulates that one’s initials have an effect on how successful one is. She also includes information about the response by McCullough & McWilliams, which finds that the original paper had multiple statistical errors.

(The original Name-Letter Effect paper and the response can be seen in the Corrections portion of our website.)

I love research about research (Bad Science, July 24, 2010)

Ben Goldacre comments on “spin” in academic publishing, wherein studies with negative results are not presented as negative. Instead, researchers often dig up positive results within the data and pretend that was what they were looking for originally, or they write their abstracts in such a way that distracts from the actual results. Goldacre finishes his essay by explaining how pre-registration could address this problem.

How myths are made (Bad Science, August 8, 2009)

Ben Goldacre discusses the dangers of only citing research that supports a certain claim. He looks into how misconceptions of science can spread, and become “myths” that people erroneously believe have a basis in legitimate scientific inquiry.

Pay to play? (Bad Science, February 14, 2009)

Ben Goldacre explains a study published in the BMJ contending that research funded by the pharmaceutical industry is far more likely to be published in bigger, more respected journals than research funded by government. Goldacre argues that this is because academic journals are in fact businesses, and that they can earn more by publishing “glossy reprints” of studies for pharmaceutical companies to use to promote their products.

You are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day. (Bad Science, November 15, 2008)

Ben Goldacre discusses how large numbers and statistics are used by newspapers to mislead the general population.

More crap journals? (Bad Science, October 4, 2008)

Ben Goldacre ironically praises the fact that more tenuous research is being published, because he believes it will help end the misconception that, “if it’s published then it must be true.”

Pools of blood (Bad Science, May 10, 2008)

Using an example of a meta-analysis that revealed the dangers of artificial blood products, Ben Goldacre discusses the importance of drug and pharmaceutical companies being transparent with their trial results and data. He argues that meta-analysis is a phenomenally useful tool that is not celebrated enough.