Information Overload? What to do about it this election season.
Laura Manley, Shorenstein Center Executive Director

For years, I’ve struggled to convince my parents not to believe everything they read on social media. From Pizzagate to fentanyl-laced Halloween candy, I’ve heard it all. Earlier this year, my mom called me, distraught over what she had seen on Facebook. “I just can’t believe how horrible people are to Meghan Markle – these rumors are phony baloney! Maybe I shouldn’t trust this Facebook anymore.” It was as if the clouds had parted. A false rumor about Meghan Markle had finally made my mom realize the extent of bias and misinformation on social media. Eureka!
So, when we were putting together a list of tips to help social media users get accurate information on one of the most consequential elections of our time, calling my mom was a no-brainer. I live and breathe this stuff, but my mom, like many Americans, doesn’t have accessible and easily understandable tools to help her navigate social media without falling into one of the traps designed to keep users scrolling and clicking.
As we went through the tips, she asked, “Why would the AI want to trick you into getting the wrong information for the election?” I explained how AI works with a pancake metaphor: the ingredients, which vary in type and quality, are the data, and the recipe, which differs from cook to cook, is the algorithm. The cook’s choice of ingredients and recipe determines what kind of pancakes you get. With social media, it’s hard to know who the cook is, what their intentions are, what ingredients they’re working with, or whether there is any intention behind the recipe at all. My mom responded, “Well, that makes sense. I didn’t know there was a cook at all! I just thought this is what people are saying, and I never really thought about it.”
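If the metaphor helps, here is a minimal sketch of the same idea in code: the same handful of made-up posts (the “ingredients”) ranked by two different “recipes.” Everything below, from the posts to the weights, is invented purely for illustration; it is not any platform’s actual algorithm.

```python
# Same ingredients (posts), two different recipes (ranking algorithms),
# two very different plates of pancakes (feeds). All posts, counts, and
# weights here are hypothetical.

posts = [
    {"text": "Local bake sale this weekend", "angry_reactions": 2, "shares": 5, "age_hours": 1},
    {"text": "OUTRAGEOUS celebrity rumor", "angry_reactions": 90, "shares": 400, "age_hours": 30},
    {"text": "County office posts voting hours", "angry_reactions": 0, "shares": 12, "age_hours": 3},
]

def recipe_recency(post):
    # Recipe A: newest first; engagement is ignored entirely.
    return -post["age_hours"]

def recipe_engagement(post):
    # Recipe B: reward reactions and shares, so emotive content rises
    # regardless of how old (or accurate) it is.
    return post["angry_reactions"] * 5 + post["shares"]

for name, recipe in [("recency", recipe_recency), ("engagement", recipe_engagement)]:
    print(f"Feed ranked by {name}:")
    for post in sorted(posts, key=recipe, reverse=True):
        print("  ", post["text"])
```

Run it and the recency recipe puts the bake sale on top, while the engagement recipe leads with the outrage bait. The cook’s recipe, not the ingredients alone, decides what you’re served.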
Meanwhile, on what seems like another planet, uncertainty around the regulation and governance of social media looms large. A growing number of states are stepping in to fill the regulatory void on privacy and children’s mental health, but the effectiveness of this patchwork approach remains uncertain. Congress’s mandate that TikTok be sold, justified on national security grounds, has put those fears in direct conflict with questions of free speech. Regulatory agencies like the FTC and FCC have tried to address specific harms of social media, such as fraudulent products and scams, but lack the jurisdiction for the necessary overhaul. Without comprehensive social media policy to protect Americans against manipulation, fraud, and abuse, the regulatory landscape remains fragmented. Any of these debates, whether in Congress, regulatory agencies, state legislatures, or the tech companies themselves, could profoundly shape how Americans and people around the world engage with social media and make sense of the world.
While DC and Silicon Valley debate the safety and societal guardrails of these platforms, people like my mom are regularly getting duped, manipulated, and exploited – especially this election season. Amid this uncertainty, and regardless of how legislative debates or court rulings turn out, there are things you should know and actions you can take right now to mitigate the negative impacts of social media:
Three things to know about social media:
- Social media is designed to make you angry – because anger sells: Numerous studies have shown that social media algorithms push content that triggers strong emotions. The more strongly you feel about a post, the more likely you are to engage with it – and more engagement means greater profits for social media companies, whose ultimate product is people’s attention, sold to advertisers. Facebook’s ranking algorithm, for example, prioritized posts drawing reaction emojis, notably the angry reaction, because they boosted engagement; Facebook’s own researchers later found that posts drawing angry reactions were significantly more likely to contain misinformation and toxic content. Computer scientists from Cornell and UC Berkeley found that the algorithm of X (formerly Twitter) amplified tweets expressing strong emotions, particularly anger, and that the political tweets it showcased fostered “othering,” deepening negative perceptions of groups with opposing views. And researchers from University College London and the University of Kent observed a four-fold increase in misogynistic content served to a new TikTok account over just five days. (A toy sketch after this list shows how a single ranking weight can flip what tops a feed.)
- Social media companies have cut back on safety: Mass layoffs and the removal of important safety rules degrade the quality of content in your feed. Between November 2022 and November 2023, Meta (formerly Facebook), X (formerly Twitter), and YouTube eliminated 17 critical policies that had curbed hate speech, harassment, and misinformation on their networks. The changes included reversing rules against claims that the 2020 US election was stolen, loosening requirements on political ads, and weakening privacy protections so that user data could be used to train AI. Layoffs of roughly 40,000 employees across Meta, X, and YouTube included deep cuts to trust and safety, ethical engineering, responsible innovation, and content moderation teams. And an investigation by researchers at NYU found that 90% of the ads they tested on TikTok containing false and misleading election information evaded detection, despite the platform’s ban on political ads.
- AI makes it harder to tell fact from fiction: Bad actors use AI to create convincing fakes of both audio and video, further eroding trust in the facts. AI can produce hyper-realistic images and convincing deepfakes: audio or video digitally altered to make real people appear to say or do things they never actually said or did. In 2018, The New York Times challenged readers to distinguish real images from AI-generated ones, highlighting how hard the truth is to discern. In January, a robocall impersonating President Biden misled New Hampshire voters, a preview of how AI-generated audio can be used to spread election misinformation. And the “liar’s dividend” describes the flip side: as deepfakes proliferate, genuine content can be dismissed as fake, further eroding trust in information.
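To make the first point above concrete, here is a minimal sketch of how a single ranking knob can change what tops a feed. The posts, counts, and weights are all invented for illustration; reporting has suggested Facebook at one point weighted emoji reactions several times more than likes, but the numbers below are hypothetical, not the platform’s actual values.

```python
# Toy demonstration: turning up the weight on angry reactions reorders the feed.
# All posts, counts, and weights are hypothetical.

posts = [
    {"text": "Fact-check of a viral claim", "likes": 200, "angry": 5},
    {"text": "Enraging (and false) rumor", "likes": 40, "angry": 120},
]

def score(post, angry_weight):
    # Engagement score: each like counts once, each angry reaction
    # counts angry_weight times.
    return post["likes"] + angry_weight * post["angry"]

for weight in (1, 5):
    top = max(posts, key=lambda p: score(p, weight))
    print(f"angry reactions weighted {weight}x -> top of feed: {top['text']}")

# Weighted 1x, the fact-check wins (205 vs 160).
# Weighted 5x, the enraging rumor wins (640 vs 225).
```

Nothing about the posts changes between the two runs; only the recipe does. That is why a platform’s choice of weights, invisible to users, matters so much.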
Three things to do to improve your experience on social media:
- Clean up your feed: Adjust your settings on Facebook and Instagram to minimize the junk clogging up your feed; step-by-step guides to improving the quality of your content are available for both Instagram and Facebook. Privacy Party, a browser plug-in, is another tool that helps keep your private accounts actually private and protects your information.
- Make a voting plan: Stricter voting laws and increased voter suppression tactics have created growing confusion and distrust around elections. At least 14 states have introduced laws adding complexity to voter registration, mail-in voting, and voter identification. Research shows that people who make a voting plan are 9% more likely to vote.
- Rely on your state and local election offices for election information: With inaccurate information on social media, AI blurring reality, and more complex voter requirements, it’s easy to get confused about elections. Your local election office will give you the most accurate information and protect you from efforts to mislead you on social media. For a list of official election offices, visit: https://www.nass.org/initiatives/trustedinfo
As we wait to see how the laws and court decisions governing digital platforms unfold, these proactive steps can help us navigate the uncertainty of social media’s rules and make online interactions better for everyone.
To learn more, visit ElectionEssentials.ShorensteinCenter.org. Thanks to researchers Rehan Mirza and Kevin Wren for their contributions to this important area of work.