Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. In this talk, Jennifer Allen will discuss findings from a recent working paper (co-authored with Duncan J. Watts and David Rand) that evaluates this claim. In the paper, the authors introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). They estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although flagged misinformation, when viewed, reduced predicted vaccination intentions significantly more than unflagged vaccine content did, Facebook users’ exposure to flagged content was limited. In contrast, mainstream media stories highlighting rare deaths after vaccination were not flagged by fact-checkers, yet were among Facebook’s most-viewed stories. Their work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods. Additionally, they show that fact-checking has only limited efficacy in preventing misinformed decision-making, and they introduce a novel methodology incorporating crowdsourcing and machine learning to better identify misinforming content at scale.
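
The intuition behind the flagged-versus-unflagged comparison is an exposure-weighted calculation: a group’s aggregate impact is its estimated per-view persuasive effect multiplied by its total views. The minimal Python sketch below illustrates that logic only; the group names, effect sizes, and view counts are hypothetical placeholders, not figures from the paper.

```python
# Illustrative sketch of exposure-weighted impact: effect per view x total views.
# All numbers are made up for clarity; they are NOT estimates from the paper.
from dataclasses import dataclass


@dataclass
class ContentGroup:
    name: str
    effect_per_view: float  # hypothetical change in vaccination intention per view
    total_views: float      # hypothetical total Facebook exposures for the group


def total_impact(group: ContentGroup) -> float:
    """Aggregate impact = per-view persuasive effect multiplied by total exposure."""
    return group.effect_per_view * group.total_views


# Hypothetical pattern: flagged misinformation is more harmful per view,
# but unflagged vaccine-skeptical content is viewed far more often.
flagged = ContentGroup("flagged misinformation", effect_per_view=-0.02, total_views=10e6)
unflagged = ContentGroup("unflagged skeptical content", effect_per_view=-0.003, total_views=3e9)

for group in (flagged, unflagged):
    print(f"{group.name}: total impact ≈ {total_impact(group):,.0f} intention units")
```

Under these invented inputs, the less persuasive but far more widely viewed unflagged content dominates the aggregate effect, which is the qualitative pattern the talk describes.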