Uri Simonsohn
  • Senior Fellow

Contact Information

  • Office Address:

    3730 Walnut Street
    500 Jon M. Huntsman Hall
    Philadelphia, PA 19104

Research Interests: behavioral economics, consumer behavior, experimental methodology, judgment and decision making

Links: Personal Website

Overview

Go to Data Colada Blog

Professor Simonsohn studies judgment, decision making, and methodological topics.

He is a reviewing editor for the journal Science, an associate editor of Management Science, and a consulting editor for the journal Perspectives on Psychological Science.

He teaches decision-making courses to undergraduate, MBA, and PhD students (OID290, OID690, OID900, and OID937).

He has published in psychology, management, marketing, and economic journals.


Research

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2022), Above averaging in literature reviews, Nature Reviews Psychology, 1 (9), pp. 1-2. Abstract

    Meta-analysts’ practice of transcribing and numerically combining all results in a research literature can generate uninterpretable and/or misleading conclusions. Meta-analysts should instead critically evaluate studies, draw conclusions only from those that are valid and provide readers with enough information to evaluate those conclusions.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration Is A Game Changer. But, Like Random Assignment, It Is Neither Necessary Nor Sufficient For Credible Science, Journal of Consumer Psychology, 31 (January), pp. 177-180. Abstract

    We identify 15 claims Pham and Oh (2020) make to argue against pre-registration. We agree with 7 of the claims, but think that none of them justify delaying the encouragement and adoption of pre-registration. Moreover, while the claim they make in their title is correct—pre-registration is neither necessary nor sufficient for a credible science—this is also true of many of our science’s most valuable tools, such as random assignment. Indeed, both random assignment and pre-registration lead to more credible research. Pre-registration is a game changer.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration: Why and How, Journal of Consumer Psychology, 31 (January), pp. 151-162. Abstract

    In this article, we (1) discuss the reasons why pre-registration is a good idea, both for the field and individual researchers, (2) respond to arguments against pre-registration, (3) describe how to best write and review a pre-registration, and (4) comment on pre-registration’s rapidly accelerating popularity. Along the way, we describe the (big) problem that pre-registration can solve (i.e., false positives caused by p-hacking), while also offering viable solutions to the problems that pre-registration cannot solve (e.g., hidden confounds or fraud). Pre-registration does not guarantee that every published finding will be true, but without it you can safely bet that many more will be false. It is time for our field to embrace pre-registration, while taking steps to ensure that it is done right.

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2020), Specification Curve Analysis, Nature Human Behaviour, 4 (November), pp. 1208-1214. Abstract

    Empirical results hinge on analytical decisions that are defensible, arbitrary and motivated. These decisions probably introduce bias (towards the narrative put forward by the authors), and they certainly involve variability not reflected by standard errors. To address this source of noise and bias, we introduce specification curve analysis, which consists of three steps: (1) identifying the set of theoretically justified, statistically valid and non-redundant specifications; (2) displaying the results graphically, allowing readers to identify consequential specification decisions; and (3) conducting joint inference across all specifications. We illustrate the use of this technique by applying it to three findings from two different papers, one investigating discrimination based on distinctively Black names, the other investigating the effect of assigning female versus male names to hurricanes. Specification curve analysis reveals that one finding is robust, one is weak and one is not robust at all.
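    The three steps above can be sketched in a toy simulation. This is an illustrative sketch, not the authors' implementation: the dataset, the two specification choices (control for a covariate, trim outliers), and all function names are invented for the example.

    ```python
    import itertools
    import random
    import statistics

    random.seed(1)

    # Simulated data: outcome y depends on predictor x and a covariate z.
    n = 200
    z = [random.gauss(0, 1) for _ in range(n)]
    x = [0.5 * zi + random.gauss(0, 1) for zi in z]
    y = [0.3 * xi + 0.4 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

    def ols_slope(xs, ys):
        """Simple-regression slope of ys on xs."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = sum((a - mx) ** 2 for a in xs)
        return num / den

    def residualize(v, w):
        """Residuals of v after regressing it on w (to partial w out)."""
        b = ols_slope(w, v)
        mv, mw = statistics.fmean(v), statistics.fmean(w)
        return [vi - (mv + b * (wi - mw)) for vi, wi in zip(v, w)]

    # Step 1: enumerate defensible specifications (control for z? trim outliers?).
    estimates = []
    for control_z, trim in itertools.product([False, True], repeat=2):
        xs, ys, zs = x, y, z
        if trim:  # drop observations with |x| > 2
            keep = [i for i in range(n) if abs(x[i]) <= 2]
            xs, ys, zs = [x[i] for i in keep], [y[i] for i in keep], [z[i] for i in keep]
        if control_z:  # partial z out of both x and y (Frisch-Waugh)
            xs, ys = residualize(xs, zs), residualize(ys, zs)
        estimates.append(ols_slope(xs, ys))

    # Step 2: the "curve" is the estimates sorted by size (plotted, in practice).
    curve = sorted(estimates)

    # Step 3 (joint inference) would compare this curve to curves from null data;
    # here we only summarize the observed curve.
    print("median effect across specifications:", round(statistics.median(curve), 3))
    ```

    In real applications the specification set is far larger and step 3 uses permutation-based joint tests; this sketch only shows the mechanics of enumerating specifications and collecting one estimate per specification.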

  • Joachim Vosgerau, Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), 99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis, Journal of Experimental Psychology: General, 148, pp. 1628-1639. Abstract

    Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (1) all conducted studies were included (i.e., an empty file-drawer), and (2) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of one study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (1) an internal meta-analysis would need to exclusively contain studies that were properly pre-registered, (2) those pre-registrations would have to be followed in all essential aspects, and (3) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.
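    The inflation mechanism can be reproduced with a crude simulation. This is not the paper's exact p-hacking scenario (so the numbers differ from its 8% and 83%): here each null study reports the smaller of two independent p-values whenever the first is not significant, and studies are combined with Fisher's method rather than a weighted-effect meta-analysis.

    ```python
    import math
    import random

    random.seed(2)

    def reported_p():
        """One true-null study with minimal p-hacking: if the first analysis is
        not significant, run a second one and report the smaller p-value."""
        p1, p2 = random.random(), random.random()
        return p1 if p1 < 0.05 else min(p1, p2)

    # Single hacked study: false-positive rate rises from 5% to just under 10%.
    singles = [reported_p() for _ in range(20000)]
    single_fpr = sum(p < 0.05 for p in singles) / len(singles)

    # Internal meta-analysis: combine 10 such studies via Fisher's method.
    # Under the null, -2 * sum(ln p) ~ chi-square with 20 df; 95th pct = 31.41.
    CRIT = 31.41
    n_sims = 2000
    rejections = 0
    for _ in range(n_sims):
        ps = [reported_p() for _ in range(10)]
        fisher = -2 * sum(math.log(p) for p in ps)
        rejections += fisher > CRIT
    meta_fpr = rejections / n_sims

    print(f"single-study false-positive rate: {single_fpr:.3f}")
    print(f"10-study meta false-positive rate: {meta_fpr:.3f}")
    ```

    The point the simulation makes is the abstract's: a barely noticeable distortion in each study compounds across studies, so the aggregate false-positive rate ends up many times the nominal 5%.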

  • Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016), PLoS ONE, 14 (3), e0213454. Abstract

    P-curve, the distribution of significant p-values, can be analyzed to assess if the findings have evidential value, i.e., whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed that p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, hence replicable, association (gun ownership and number of sexual partners) leads to a right-skewed p-curve, while a false-positive one (respondent ID number and trust in the supreme court) leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.
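    The right-skewed-versus-flat contrast is easy to show by simulation. This sketch is illustrative only (z-tests with an assumed noncentrality of 2 standing in for a real effect); it is not the p-curve software or the paper's analysis.

    ```python
    import math
    import random

    random.seed(3)

    def two_sided_p(z):
        """Two-sided p-value for a z statistic."""
        return math.erfc(abs(z) / math.sqrt(2))

    def significant_p_curve(mu, n_sig=4000):
        """Collect significant (p < .05) p-values from z-tests with true
        noncentrality mu, binned into (0,.01], (.01,.02], ..., (.04,.05]."""
        bins = [0] * 5
        total = 0
        while total < n_sig:
            p = two_sided_p(random.gauss(mu, 1))
            if p < 0.05:
                bins[min(int(p / 0.01), 4)] += 1
                total += 1
        return [b / n_sig for b in bins]

    # A real (even if confounded) effect yields a right-skewed p-curve ...
    effect_curve = significant_p_curve(mu=2.0)
    # ... while a true null (only selective reporting) yields a flat one.
    null_curve = significant_p_curve(mu=0.0)

    print("effect:", [round(b, 2) for b in effect_curve])  # piles up below .01
    print("null:  ", [round(b, 2) for b in null_curve])    # roughly 20% per bin
    ```

    This mirrors the abstract's claim: skewness of the significant p-value distribution tracks replicability, not causality, so a confounded-but-real association still produces the right-skewed shape.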

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2018), False-Positive Citations, Perspectives on Psychological Science, 13 (), pp. 255-259. Abstract

    We describe why we wrote “False-Positive Psychology,” analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.


  • Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2018), Psychology’s Renaissance, Annual Review of Psychology, 69, pp. 511-534. Abstract

    In 2010-2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and pre-registration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.

  • Joseph Simmons and Uri Simonsohn (2017), Power Posing: P-curving the Evidence, Psychological Science, 28 (May), pp. 687-693. Abstract

    In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing.” In their study, participants (N=42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude not. The distribution of p-values from those 33 studies is indistinguishable from what is expected if (1) the average effect size were zero, and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence of the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate for people to engage in power posing to better their lives.

  • Robert Mislavsky and Uri Simonsohn (Forthcoming), When Risk is Weird: Unexplained Transaction Features Lower Valuations. Abstract

    We define transactions as weird when they include unexplained features, that is, features not implicitly, explicitly, or self-evidently justified, and propose that people are averse to weird transactions. In six experiments, we show that risky options used in previous research paradigms often attained uncertainty via adding an unexplained transaction feature (e.g., purchasing a coin flip or lottery), and behavior that appears to reflect risk aversion could instead reflect an aversion to weird transactions. Specifically, willingness to pay drops just as much when adding risk to a transaction as when adding unexplained features. Holding transaction features constant, adding additional risk does not further reduce willingness to pay. We interpret our work as generalizing ambiguity aversion to riskless choice.

Teaching

Current Courses

  • OIDD9370 - Methods Stumblers: Pragmatic Solutions To Everyday Challenges In Behavioral Research

    This PhD-level course is for students who have already completed at least a year of basic stats/methods training. It assumes students already received a solid theoretical foundation and seeks to pragmatically bridge the gap between standard textbook coverage of methodological and statistical issues and the complexities of everyday behavioral science research. This course focuses on issues that (i) behavioral researchers are likely to encounter as they conduct research, but (ii) may struggle to figure out independently by consulting a textbook or published article.

    OIDD9370003 (Syllabus)

Past Courses

  • OIDD2900 - Decision Processes

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

  • OIDD9000 - Foundations of Decision Processes

    The course is an introduction to research on normative, descriptive and prescriptive models of judgment and choice under uncertainty. We will be studying the underlying theory of decision processes as well as applications in individual, group, and organizational choice. Guest speakers will relate the concepts of decision processes and behavioral economics to applied problems in their area of expertise. As part of the course there will be a theoretical or empirical term paper on the application of decision processes to each student's particular area of interest.

  • OIDD9370 - Methods Stumblers

    This PhD-level course is for students who have already completed at least a year of basic stats/methods training. It assumes students already received a solid theoretical foundation and seeks to pragmatically bridge the gap between standard textbook coverage of methodological and statistical issues and the complexities of everyday behavioral science research. This course focuses on issues that (i) behavioral researchers are likely to encounter as they conduct research, but (ii) may struggle to figure out independently by consulting a textbook or published article.

Awards And Honors

  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2009



In the News

Overwhelmed by Too Many Choices When Shopping? Wharton Retail Expert Weighs In

Wharton retail expert discusses why shoppers may feel overwhelmed when faced with many options.

Knowledge @ Wharton - 2024/12/19
