Abstract
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations p-curve behaves erratically, whereas p-uniform may yield implausibly negative effect-size estimates. Moreover, we show that (and explain why) p-curve and p-uniform overestimate effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations for applying and interpreting meta-analyses in general and p-uniform and p-curve in particular. Both methods, as well as traditional methods, are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform and p-curve in R, along with a user-friendly web application for applying p-uniform.
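The abstract mentions applying p-uniform in R. The sketch below is a minimal illustration, not the authors' own code: it assumes the CRAN `puniform` package (whose `puniform()` function takes `yi`, `vi`, `side`, and `method` arguments) and uses `metafor::rma()` for a traditional random-effects comparison. The effect sizes are invented for illustration and are not from the weight/importance meta-analysis discussed in the article.

```r
## Minimal sketch: p-uniform estimate vs. a traditional random-effects model.
## Assumes the CRAN packages "puniform" and "metafor" are installed.
# install.packages(c("puniform", "metafor"))
library(puniform)
library(metafor)

# Hypothetical standardized effect sizes (yi) and their sampling variances (vi)
yi <- c(0.38, 0.51, 0.29, 0.44, 0.60)
vi <- c(0.030, 0.026, 0.041, 0.022, 0.035)

# p-uniform estimate; side = "right" assumes effects were tested in the
# positive direction, method = "P" is the package's default estimator
res_puni <- puniform(yi = yi, vi = vi, side = "right", method = "P")
print(res_puni)

# Traditional random-effects meta-analysis for comparison
res_re <- rma(yi = yi, vi = vi, method = "REML")
summary(res_re)
```

As the abstract notes, such bias-corrected estimates should be interpreted cautiously when heterogeneity is moderate to large or when p-hacking is suspected.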
| Original language | English |
|---|---|
| Pages (from-to) | 713-729 |
| Number of pages | 17 |
| Journal | Perspectives on Psychological Science |
| Volume | 11 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 1 Sept 2016 |
Keywords
- heterogeneity
- meta-analysis
- p-curve
- p-hacking
- p-uniform