
P-Hacking Problem

Last, but not least, is the phenomenon of p-hacking. Imagine a researcher, proud of how rigorously their study was executed. They have finally collected all of their data, and now it is time to run the statistics.

They are hoping their hypothesis will be supported by significant results.

But there is nothing significant to report: the p-value just isn't low enough.


This scenario is all too common in research for a number of reasons (remember moderator effects?), but some researchers don't settle for an insignificant result (that won't get them published!). They may turn to p-hacking: re-running their analyses over and over until a significant p-value pops out, often by folding in variables that had little or nothing to do with their original hypothesis. You can get a feel for this yourself with the short simulation sketched below.
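Here is a minimal sketch of that process, not part of the original page, written in Python and assuming NumPy and SciPy are available; the variable names (n_participants, n_extra_variables, and so on) are purely illustrative. It simulates a researcher who keeps testing unrelated predictors against the same outcome until one of them clears the .05 threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_participants = 50     # sample size per simulated study
n_extra_variables = 20  # unrelated variables tested one after another
n_studies = 1_000       # how many studies we simulate

false_positives = 0
for _ in range(n_studies):
    # An outcome with no true relationship to anything we measure.
    outcome = rng.normal(size=n_participants)
    for _ in range(n_extra_variables):
        # Each "extra" predictor is pure noise, unrelated to the outcome.
        predictor = rng.normal(size=n_participants)
        _, p = stats.pearsonr(predictor, outcome)
        if p < .05:  # stop as soon as something looks significant
            false_positives += 1
            break

print(f"Share of no-effect studies that still found 'significance': "
      f"{false_positives / n_studies:.0%}")
```

With 20 noise predictors and a .05 cutoff, the share of "significant" findings should land around 1 − 0.95^20, roughly 64%, even though there is nothing real to find. That is the p-hacking problem in a nutshell.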


Understandably, this leads many people to consider p-hacking a form of cheating. Although plenty of scientists do it, efforts have been made to stem the flood of false positives in psychology research. The number of papers retracted from major journals has gone up, which may suggest that people within the field increasingly treat research misconduct as a serious, solvable issue, even if addressing it won't fix every problem in the field.

Okay, now that we've covered three major problems in psychology research, why should any of this matter to you?

Note: "significant" here means the p-value is below .05; that is, if there were truly no effect, results at least this extreme would occur by chance less than 5% of the time.
