
Annotated Bibliography

Henrich (2010) “Most People Are Not WEIRD”

https://www2.psych.ubc.ca/~heine/docs/Weird%20Nature.pdf

This article, though short, contains a key piece of information for my project regarding the samples used in research studies. WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic societies. The article argues that 96% of psychology study samples are drawn from WEIRD populations, even though WEIRD societies make up only about 12% of the world's population. This means that almost all of what we know about human psychology is actually based on a very small, privileged group of people. It raises the question: what don't we know about the other 88% of humans to whom we have generalized these findings? While there are many similarities between people from different ancestries, there are also many differences with important public health implications.


Scutti (2018) Ibuprofen linked to male infertility, study says

https://www.cnn.com/2018/01/08/health/ibuprofen-male-fertility-study/index.html

This article is an example of research blown up in the media. It is not the most sensationalized coverage to make it onto news shows, but I will use it as an example of how frequent and prevalent these paraphrased research studies are. The study it reports was conducted in one of the Scandinavian countries on men ages 18-35, with 17 men in the treatment group and 14 in the control group, and its results were based on a single exercise session. This leads me to ask questions: How do the bodies of men older than 35 respond to ibuprofen? How do we know there isn't some cultural, ancestral, or environmental component at play driving the results (a confounding variable)? Do these men typically exercise more than once in a six-week period (the length of the study)? How does their fitness level confound the results (a man who is healthy vs. a man who is overweight)? These are the kinds of questions I ask as an educated psych student and would want other consumers to be prompted to ask in the future.


Aschwanden (2015) “Science Isn’t Broken”

https://fivethirtyeight.com/features/science-isnt-broken/#part1

This article explains p-hacking: tweaking the data and statistics until the results are publishable and match what you wanted to find. This narrowing and expanding of the data until it looks "perfect" is possible because so many variables go into any given research study. The article uses this hypothesis as its example: the U.S. economy is affected by whether Republicans or Democrats are in office. A really cool interactive tool lets you try manipulating the data yourself by choosing factors such as which political party to look at, which politicians to include (presidents, governors, senators, and/or representatives), and how to measure economic performance, among other variables. It shows how scientists may include or exclude certain confounding variables in order to make the data look the way they want it to look, which is a problem.
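To make the mechanics of p-hacking concrete for myself, I sketched a quick back-of-the-envelope calculation (the code and numbers are my own illustration, not from the article): with a 0.05 significance threshold, every additional way of slicing the data is another 5% lottery ticket, so trying many analyses on pure noise almost guarantees a "significant" result.

```python
import random

alpha = 0.05  # conventional p < .05 significance threshold

# Exact math: chance that at least one of k independent analyses
# of pure noise comes out "significant" by luck alone.
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} analyses -> {p_any:.0%} chance of a false positive")

# Monte Carlo check: under the null hypothesis a p-value is uniform
# on [0, 1], so each analysis "hits" with probability alpha.
random.seed(0)
trials = 10_000
hits = sum(
    any(random.random() < alpha for _ in range(20))
    for _ in range(trials)
)
print(f"simulated (k = 20): {hits / trials:.0%}")
```

With 20 ways to slice the data, there is roughly a two-in-three chance of finding something "publishable" even when nothing real is there, which is exactly what the interactive tool lets you experience by hand.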


Simmons, Nelson, and Simonsohn (2012) “A 21 Word Solution”

https://poseidon01.ssrn.com/delivery.php?ID=930086105106029114019009067015107029046084048036031020026007118111021024024099115023032062030033112029051074098029003108081017110061008028038094116118113112096010030058087084095003095122102110018007114081002121089081124005087103067007099009003002082&EXT=pdf

The authors of this paper suggest that researchers include a disclosure statement in their papers containing these words: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.” Including this statement would signal that the authors did not p-hack their results. They argue that simply stating the truth about your methods is a small but potentially powerful move that would encourage other scientists to support “the call for transparency” in science. This is an example I could incorporate into my project of how scientists can make a positive change in the consumption of psych research.


“How to Be a Wise Consumer of Psychological Research”

http://www.apa.org/research/action/mer.aspx

“How to Become an Informed Consumer of Psychology Research”

https://www.verywellmind.com/becoming-a-consumer-of-psychology-2795611

The first article, posted on the American Psychological Association’s website, is an example of one way I may want to format my project. It takes a more professional, academic, and formal tone toward the subject, which could be beneficial both for my objective of writing a journalistic article and for a more casual “how-to” infographic. I really like the suggested questions at the end of the article, and may use that kind of concise summary of points in creating the infographic. The second article is a lot more informal and casual, based on its numbered “how-to” steps and its use of pictures. In my journalistic article, I want to maintain a credible and academic tone, but I also plan on incorporating visuals to keep the reader interested. These could take the form of pictures or large quotes.


“Scientific Studies: Last Week Tonight with John Oliver”

https://www.youtube.com/watch?v=0Rnq1NpHdmw

It just so happens that John Oliver has done a whole episode on the issues in scientific studies that I want to talk about. He covers the replication problem, p-hacking, media exaggeration, confirmation bias (reading only what you want to hear so that your existing beliefs are reconfirmed as true, even when there is evidence to the contrary), and much more. In the last four minutes of the episode, starting at 15:50, he uses a funny and powerful example of how ridiculous some media interpretations of research can be: a spoof of TED Talks called TODD Talks (TODD = Trends, Observations, and Dangerous Drivel). The opening line goes, “Do you love science and all its complexity, but wish it could be a little less complex and a lot less scientific?”, and from there it turns into people making claims like “What if I told you that the cure to racism is coffee?” I hope I can somehow use it, perhaps in video form, in my own work because it is so spot-on in a humorous way.


“When the Revolution Came for Amy Cuddy”

https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html

 “’Power Poses’ Co-Author: ‘I do not believe the effects are real’”

https://www.npr.org/2016/10/01/496093672/power-poses-co-author-i-do-not-believe-the-effects-are-real

AND Meeting with Pamela Davis-Kean (my consultant)

This New York Times Magazine piece from 2017 reviews the scandal that erupted in social psychology over Amy Cuddy’s research on power posing. Specifically, her study claimed that holding powerful stances for one minute each causes the body to release testosterone, resulting in behavioral and performance changes in interviews and other work scenarios. Her TED Talk is the second most-viewed of all time, which is ironic when compared to the John Oliver TODD Talk spoof above. The article details how her research came out just as researchers were discovering the replication problem in psych research, which includes the issue of p-hacking. Her academic peers found that her research was not replicable and that there is no release of testosterone from doing power poses. Cuddy then argued that she never claimed a resulting physiological response, and that the power of power posing depended on whether you believed you felt a change in yourself from posing. Even her research partner admitted in a quote that she has “no faith in the embodied effects of power poses,” which could be seen as a betrayal by her research partner, but also as a researcher addressing the lack of validity and reliability in the study.

When I met with my professor, Pam, she introduced me to the controversy around Amy Cuddy and how Cuddy has responded and been interpreted within the research community. The NYTimes article uses the word “bully” to describe those who exposed her faulty research results. In Pam’s words, Cuddy “even wrote a book about being bullied,” Presence (2015). It was clear to me that Pam strongly opposes Cuddy’s framing of herself as a victim. Pam argued that Cuddy’s handling of the whole situation makes women scientists look bad, and that the point of challenging her research was to make sure science is produced to a high standard.


“Editors’ Introduction to the Special Section on Replicability in Psychological Science”

http://journals.sagepub.com/doi/10.1177/1745691612465253

This article, written in 2012, was among the first to address the replication problem in psychology. It details the high prevalence of studies based on false positives: findings that fail to replicate in later studies and so are not well supported by the evidence. This problem traces back to how academic journals are run. Scientists want their work published in the most highly regarded journals in their discipline, and journals have a publication bias: they prefer to publish positive and/or flashy results. Though this has been a problem for decades, the issue has become more prominent as academia and research have grown hyper-competitive. In fact, many of the studies I read about in textbooks have not been replicated, which means that studies considered pillars of psychology may not have true findings. I want to explore this idea a lot more in my paper, and perhaps also explore why journals have this publication bias.
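To see for myself why publication bias matters numerically, I worked through a small calculation (the specific numbers are my own illustrative assumptions, not from the article): if journals mostly publish "significant" results, the fraction of published findings that are actually true depends on the base rate of true hypotheses, statistical power, and the false-positive rate.

```python
def ppv(prior, power, alpha=0.05):
    """Positive predictive value: of all 'significant' results,
    the fraction that reflect a real effect."""
    true_positives = prior * power         # real effects that get detected
    false_positives = (1 - prior) * alpha  # noise crossing p < .05
    return true_positives / (true_positives + false_positives)

# Assumed numbers for illustration: 10% of tested hypotheses are true,
# and studies have 35% power.
print(f"{ppv(prior=0.10, power=0.35):.0%} of significant findings are true")
```

Under these assumptions, fewer than half of the "significant" results that make it into print reflect a real effect, which is why non-replicating textbook studies are not surprising.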
