The views expressed in Shorenstein Center Discussion Papers are those of the author(s) and do not necessarily reflect those of Harvard Kennedy School, Harvard University or the organizations and institutions with whom the authors are affiliated.
Discussion Papers have not undergone formal review and approval. Such papers are included in this series to elicit feedback and to encourage debate on important issues and challenges in media, politics and public policy. Copyright belongs to the author(s).
The machines aren’t coming—they’re here, reshaping our world with unprecedented power while carrying persistent flaws. As artificial intelligence floods into more aspects of daily life, we face a critical choice: will we let generative AI accelerate social decay and inequality, or will we harness these powerful but imperfect tools to help solve humanity’s greatest challenges?
Every day, this technology’s impact grows more visible and consequential. Hobby Lobby now casually sells AI-generated artwork with bizarrely missing details, displacing human artists without hesitation.1@cosmicdealheather, “Honestly It’s Worse than I Thought #ai #aigenerated #aiart #boycott #w… | Hobby Lobby | TikTok,” accessed December 26, 2024, https://www.tiktok.com/@cosmicdealheather/video/7447325330475568430. Meta populates Facebook and Instagram with AI-generated photos and even AI profiles with fabricated life stories.2Hannah Murphy and Cristina Criddle, “Meta Envisages Social Media Filled with AI-Generated Users,” Financial Times, December 27, 2024, sec. Meta Platforms, https://www.ft.com/content/91183cbb-50f9-464a-9d2e-96063825bf-cf. Digital commerce uses your data to personalize prices without your knowledge.3Luis Prada, “Kroger Asked About Surge Pricing and Facial Recognition at Grocery Stores,” October 16, 2024, https://www.vice.com/en/article/surge-pricing-facial-recognition-surveillance-grocery-stores/; “Wendy Davis on X: ‘.@BedoyaFTC Notes That New @FTC Chair Andrew Ferguson Has Quietly Removed Five Requests for Public Comment from Consideration — Including One That Sought Comment on “Surveillance Pricing.” https://T.Co/e7dXS8PmOW https://T.Co/pk5fgBT9ah /X,” X (formerly Twitter), January 23, 2025, https://x.com/wendyndavis/status/1882490346443313500. What connects these developments is a sophisticated machinery of reality distortion that affects our communities in increasingly pervasive ways.
This analysis examines three interconnected crises imposing mounting burdens on American society:
- The erosion of shared truth in a new AI-driven digital age
- The unchecked power of tech companies to shape public discourse, knowledge, and human behavior
- The hidden economic and environmental costs of AI infrastructure that citizens unknowingly bear
At the heart of these crises lies a simple truth: too much power concentrated in too few hands. This power imbalance lets corporations control our digital landscape, economic reality, and physical environment with minimal accountability or oversight.
For citizens concerned about AI’s impact on their communities, this analysis shows how to get involved effectively. For legislators wrestling with these transformative technologies, it outlines practical solutions that are pro-freedom, pro-consumer and creator, and anti-pollution.
Key Themes
- Redistribute concentrated power – make Big Tech pay its own costs, end utility bill subsidies, protect creative work, stop profiting from degrading public trust and destabilizing society
- Put people, not algorithms, in control – protect privacy, enable platform choice, preserve human expertise and creativity
- Build resilience first – bolster essential literacies (reading, math, civic, financial) and community infrastructure, while modernizing the grid with flexible load management and “all-hands-on-deck” approaches that prioritize public benefit over concentrated profit
- Create real accountability – establish liability frameworks, protect whistleblowers, require companies to pay for the harms they cause and establish public oversight and governance of powerful technologies
Importantly, while this analysis raises serious concerns about Big Tech’s consolidated power, many solutions actually align with Silicon Valley’s self-interest: reduced energy costs, improved infrastructure, and reliable human-curated content serve both corporate profits and the public good. This alignment could make positive change achievable. Life can be good for all.
The stakes could not be higher. AI already affects many aspects of daily life—increasingly coming from behind the curtain. Without immediate action, these effects will only intensify. But with smart intervention, we can protect human agency and community prosperity while capturing AI’s benefits—and break the machinery of reality distortion before it fully takes hold.
Recent advances in AI capabilities raise the specter of recursive self-improvement, where systems become capable of enhancing themselves. While multiple AI labs express growing confidence in their ability to achieve artificial general intelligence (AGI) within years rather than decades, they’ve failed to articulate a clear vision of what that world would actually look like for ordinary citizens. Instead, they’re placing the burden on society to bear the costs of both concrete harms (hallucinations, bias, pollution) and abstract risks (existential threats, job displacement, social disruption), and on policymakers to determine how to govern technologies that could fundamentally reshape human society—without providing a roadmap for what that reshaped society might be.
In the meantime, we’re forced to contend with AI’s persistent reality distortion. Facebook Marketplace users suspend critical thinking and trust AI-generated descriptions over contradictory evidence from major retailers. NotebookLM, one of Google’s flagship AI products, butchered this very analysis on March 11, 2025—inventing policy recommendations I never made while transforming my horse manure crisis analogy (about utilities getting grid planning wrong at consumers’ expense) into generic technophobia. If even sophisticated AI systems so confidently misinterpret straightforward metaphors while projecting false authority, how can we build the shared understanding necessary to address our mounting challenges? This analysis outlines a way forward.
Just as the Industrial Revolution’s ultimate success required labor laws and environmental protections, AI’s promise—our new Industrial Revolution—can only be fully realized with thoughtful frameworks that preserve human agency while fostering innovation. Recent forecasts from leading AI researchers have dramatically compressed our timeline: superhuman capabilities in specific domains are anticipated within 2-3 years, not decades, with predictions that this shift will directly impact working families.
Society At a Crossroads: Our Fragmented Reality and the Coming Flood of Machine Learning
The convergence of social media’s attention economy with increasingly powerful AI tools represents a critical inflection point for American society. Our digital environments—already structured to reward emotional manipulation and tribal outrage—are now being supercharged with technologies capable of generating unlimited, personalized content designed to maximize engagement regardless of societal cost. At Meta, this convergence is playing out in particularly concerning ways: former executives warn of unprecedented new vectors for misinformation as the company dismantles internal safeguards against harmful characterizations of vulnerable groups while simultaneously developing more sophisticated AI systems that can generate and amplify such content at unprecedented scale.4Sam Biddle, “Leaked Meta Rules: Users Are Free to Post ‘Mexican Immigrants Are Trash!’ Or ‘Trans People Are Immoral,’” The Intercept, January 10, 2025, https://theintercept.com/2025/01/09/facebook-instagram-meta-hate-speech-content-moderation/; Clare Duffy, “Calling Women ‘Household Objects’ Now Permitted on Facebook after Meta Updated Its Guidelines | CNN Business,” CNN, January 7, 2025, https://www.cnn.com/2025/01/07/tech/meta-hateful-conduct-policy-update-fact-check/index.html; Christopher Wiggins, “What LGBTQ+ People Should Know about Meta’s New Rules,” accessed January 8, 2025, https://www.advocate.com/news/meta-policies-lgbtq-attacks; John Shinal, “Online Threats Lead to Real-World Harm, Say Security Experts,” CNBC, August 29, 2017, https://www.cnbc.com/2017/08/29/online-threats-real-world-harm.html.
The real-world consequences of this convergence are everywhere around us, extending far beyond our screens. Research has consistently linked online disinformation to offline violence.5James A. Piazza, “Fake News: The Effects of Social Media Disinformation on Domestic Terrorism,” Dynamics of Asymmetric Conflict 15, no. 1 (January 2, 2022): 55–77, https://doi.org/10.1080/17467586.2021.1895263; Zach Bastick, “Would You Notice If Fake News Changed Your Behavior? An Experiment on the Unconscious Effects of Disinformation,” Computers in Human Behavior 116 (March 1, 2021): 106633, https://doi.org/10.1016/j.chb.2020.106633; Jennifer Kavanagh and Michael D. Rich, “Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life” (RAND Corporation, January 15, 2018), https://www.rand.org/pubs/research_reports/RR2314.html. This dynamic creates feedback loops where digital pollution drives real-world harm, which then generates more inflammatory content online. We see this cycle play out in local communities where synthetic news—designed to provoke rather than inform—shapes civic decisions, polarizes neighbors, and erodes the shared factual foundation necessary for problem-solving. When we can’t agree on basic facts, we start failing to solve our problems.
Sadly, our media too often fixates on political optics, ignoring real-world consequences. Coverage of polling and of what politicians say dominates, rather than explanation of what’s illegal or untruthful. Take environmental coverage: despite pollution’s massive impact on our health and economy, stories rarely connect political decisions to their human costs. This style of coverage—treating life-and-death policy decisions as mere political theater—leaves Americans struggling to understand how politicians actually affect their lives, their children’s futures, and their communities’ wellbeing.
We’re living in what The Atlantic’s Derek Thompson calls “the anti-social century.” Self-imposed solitude might just be the most important social fact of the 21st century in America—an unprecedented rise in time spent alone and at home. This isolation warps our politics and personalities, making it harder to understand others’ realities or even recognize our shared challenges. When people primarily interact through screens rather than face-to-face, we lose the basic social fabric that helps communities solve problems together.6Derek Thompson, “The Anti-Social Century,” The Atlantic, January 8, 2025, http://theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/?gift=o6MjJQpusU9ebnFuymVdsMKo25mQ7_Dim-WNHAJNoVhY.
Even Google search, long the gold standard for accessing human knowledge, now prioritizes AI-generated content that lacks factual verification, crowding out human-created, expert-verified sources. Approximately half of American adults struggle with reading comprehension,7Emily Schmidt, “Reading the Numbers: 130 Million American Adults Have Low Literacy Skills,” APM Research Lab, March 16, 2022, https://www.apmresearchlab.org/10x-adult-literacy. making them particularly vulnerable to sophisticated AI-generated content, as web searches once again deliver wrong answers.
Most alarming, the massive data centers powering these AI systems force utilities to keep coal plants running—an expensive choice that burdens ratepayers with both higher costs and increased pollution, literally making everyday people pay more money to burn the dirtiest fossil fuels to generate digital pollution and misinformation.8Evan Halper et al., “A Utility Promised to Stop Burning Coal. Then Google and Meta Came to Town.,” Washington Post, October 12, 2024, https://www.washingtonpost.com/business/2024/10/08/google-meta-omaha-data-centers/.
We’re learning that these systems’ current tendency to ‘hallucinate’ is evolving into something more concerning: strategic deception.9Alexander Meinke et al., “Frontier Models Are Capable of In-Context Scheming” (arXiv, December 6, 2024), https://doi.org/10.48550/arXiv.2412.04984; Apollo Research, “Scheming Reasoning Evaluations,” Apollo Research, accessed January 10, 2025, https://www.apolloresearch.ai/research/scheming-reasoning-evaluations. What happens when vast machine intelligence combines with persistent unreliability and a risk of deception? We’re not adequately preparing for a world awash in artificial intelligence—both its remarkable capabilities and its deep-seated flaws.10Ethan Mollick, “Prophecies of the Flood,” January 10, 2025, https://www.oneusefulthing.org/p/prophecies-of-the-flood. These aren’t just technical challenges for AI developers to solve. They demand urgent attention from policymakers, business leaders, and citizens who must navigate this transformation. How we prepare for and shape it will determine whether it becomes a force for progress or disruption, and how that progress will be distributed across society. We can’t wait until the water starts rising to have these crucial conversations.
This crisis reflects a fundamental shift in how online information operates. As information literacy expert Mike Caulfield and journalist Charlie Warzel observe, today’s internet functions less as a brainwashing engine and more as a “justification machine’ where ‘a rationale is always just a scroll or click away.”11Charlie Warzel and Mike Caulfield, “January 6 and the Triumph of the Justification Machine – The Atlantic,” accessed January 12, 2025, https://www.theatlantic.com/technology/archive/2025/01/january-6-justification-machine/681215/?gift=otEsSHbRYKNfFYMngVFweDjW4HMOQ6NwGolHc9wUdb0. The incentives of the modern attention economy—where engagement and influence are rewarded—ensure there will always be a rush to provide such rationales. When confronted with evidence that might shake their beliefs, users can instantly seek validation from conspiracist feeds, decontextualized videos, and now AI systems—all eager to confirm existing worldviews. Users aren’t seeking truth but confirming existing beliefs, making it increasingly easy to maintain those beliefs even in the face of contradicting evidence.
Something fascinating is happening to our collective minds. Social media isn’t just changing what we consume—it’s reshaping how our brains process information. The science of neuroplasticity shows our brains are adapting to our constant exposure to quick-hit digital stimulation, reshaping our neural pathways until sustained focus feels as unnatural as reading a book on a roller coaster. We’re not getting dumber—our brains are adapting to an environment that prizes constant distraction over deep reflection. This rewiring of our collective attention spans makes us particularly vulnerable to the justification machine’s endless scroll of convenient confirmations.12Samson Nivins et al., “Long-Term Impact of Digital Media on Brain Development in Children,” Scientific Reports 14, no. 1 (June 6, 2024): 13030, https://doi.org/10.1038/s41598-024-63566-y; Joseph Firth et al., “The ‘Online Brain’: How the Internet May Be Changing Our Cognition,” World Psychiatry 18, no. 2 (June 2019): 119–29, https://doi.org/10.1002/wps.20617; Fathima Basheer and Sudha Bhatia, Repercussion of Social Media Usage on Neuroplasticity, 2019; Maria T. Maza et al., “Association of Habitual Checking Behaviors on Social Media With Longitudinal Functional Brain Development,” JAMA Pediatrics 177, no. 2 (February 1, 2023): 160–67, https://doi.org/10.1001/jamapediatrics.2022.4924; Michael Landon-Murray and Ian Anderson, “Thinking in 140 Characters: The Internet, Neuroplasticity, and Intelligence Analysis,” Journal of Strategic Security 6, no. 3 (October 2013): 73–82, https://doi.org/10.5038/1944-0472.6.3.7; Martin Korte, “The Impact of the Digital Revolution on Human Brain and Behavior: Where Do We Stand?,” Dialogues in Clinical Neuroscience 22, no. 2 (June 2020): 101–11, https://doi.org/10.31887/DCNS.2020.22.2/mkorte.
While Americans’ trust in mass media plumbed new depths, recent catastrophic weather events—from Hurricane Helene to Los Angeles’s wildfires—showed how public media, public technology like Watch Duty, and select, often local outlets broke through the noise and institutional inertia, delivering timely reporting that married scientific precision with human narrative. Meanwhile social media platforms, once heralded as truth’s digital champions, devolved into algorithmic amplifiers13Timothy Graham and Mark Andrejevic, “A Computational Analysis of Potential Algorithmic Bias on Platform X during the 2024 US Election,” Working Paper, November 1, 2024, https://eprints.qut.edu.au/253211/; TOI Tech Desk, “Elon Musk Targets Google and Microsoft: ‘Even with Best of Intentions, They Can’t Help but Introduce Bias,’” The Times of India, September 24, 2024, https://timesofindia.indiatimes.com/technology/tech-news/elon-musk-targets-google-and-microsoft-even-with-best-of-intentions-they-cant-help-but-introduce-bias/article-show/113619683.cms. of digital pollution:14Wyatt Myskow and Martha Pskowski, “Misinformation Spreads Like Wildfire Online While LA Neighborhoods Burn,” Inside Climate News (blog), January 10, 2025, https://insideclimatenews.org/news/10012025/misinformation-spreads-like-wildfire-as-los-angeles-burns/. misinformation, conspiracy theories and financial scams that lead to real-world offline victims.
Drawing on both Congressional experience and current research, this work offers practical solutions, for audiences ranging from policymakers to philanthropists, that address root causes and can bridge political divides:
- Rebuilding trusted information sources through strengthened journalism and digital literacy
- Ensuring AI development serves human needs while preserving individual agency
- Creating sustainable AI infrastructure that aligns innovation with environmental and economic costs
- Developing legal frameworks to ensure tech accountability and protect creative rights
While some recommendations require legislative or regulatory action, many can be implemented by philanthropies, industry leaders, civic organizations, and community groups. Effective solutions often emerge from collaboration across sectors rather than government action alone.
AI’s Paradox: Powerful but Unreliable
The Reliability Question
Generative AI—with Large Language Models (LLMs) being the prime offender—produces authoritative-sounding falsehoods as fluently as facts. University of Glasgow researchers bluntly labeled these outputs ‘bullshit,’15Michael Townsen Hicks, James Humphries, and Joe Slater, “ChatGPT Is Bullshit,” Ethics and Information Technology 26, no. 2 (June 8, 2024): 38, https://doi.org/10.1007/s10676-024-09775-5. highlighting LLMs’ lack of true comprehension. These systems routinely hallucinate and become unreliable when faced with simple name changes or variable swaps,16Aryan Gulati et al., “Putnam-AXIOM: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning,” 2024, https://openreview.net/forum?id=YXnwlZe0yf&noteId=yrsGpHd0Sf. demonstrating fundamental flaws beneath their confident veneer.
Hallucinations and Their Impact
In March 2025, the Los Angeles Times pulled down its AI-generated coverage bot after it defended the Ku Klux Klan.17Corbin Boiles, “MAGA Newspaper Owner’s AI Bot Defends KKK,” The Daily Beast, March 5, 2025, https://www.thedailybeast.com/maga-newspaper-owners-ai-bot-defends-kkk/. Google’s shift from human-curated ‘Snippets’ to ‘AI Overviews’ in 2024 turned queries about “warm-blooded reptiles” in science18Rosyna Keller [@rosyna], “Here’s an Example of @Google’s AI Overview Hallucinating. Not Only Hallucinating, but Hallucinating a Hallucination You’d Only Recognize If You Have Deep Subject Matter Knowledge. https://T.Co/6AUn2mfQvX,” Tweet, Twitter, December 26, 2024, https://x.com/rosyna/status/1872344266653171774. into AI-generated oversimplifications and now prioritizes AI imitations over authentic works, replacing expert-verified information with plausible falsehoods.19ItsTheTalia, “ItsTheTalia on X: ‘You Cannot Comprehend the RAGE That Rushed through Me as I Saw That the First Google Result for Googling Hieronymus Bosch Is Incestuous AI Sloop. What Are We Even Doing Here…? https://T.Co/yHP5by93NY / X,” X (formerly Twitter), September 14, 2024, https://x.com/ItsTheTalia/status/1835092917418889710; “Learn How Google’s Featured Snippets Work – Google Search Help.”
Tech companies want to dream about sci-fi AI futures, but they need to fix the mundane, dangerous problems—hallucinations, reliability failures, and rampant information pollution. It’s possible, even likely, that LLMs won’t ever completely solve these problems without additional, yet-to-be-invented technology. Even the most sophisticated AI summarization tools consistently obliterate nuance, often misrepresenting complex arguments. Despite extensive testing with cutting-edge AI systems, these tools sometimes generate summaries that contradict key findings. This limitation drove the development of three carefully calibrated editions of this paper—each preserving essential context while accommodating different reading time constraints. While AI assisted in this process, its current limitations underscore why human oversight remains indispensable.
Consider the 2025 Cybertruck bombing in Las Vegas—the first attack publicly linked to AI—in which a decorated special forces Army veteran used ChatGPT to methodically plan his attack despite built-in safety protocols, harming seven innocent victims. While individual blame is complex, these incidents reveal that “move fast and break things” sometimes leaves behind broken lives and communities.
The stakes escalate as we move from today’s “narrow AI” systems, which perform specific, defined tasks, to autonomous agents that can independently pursue complex objectives through chains of decisions. We’re already seeing the future: corporate lawyers observe and warn that autonomous AI agents regularly make decisions and take actions their creators neither anticipated nor designed for.20Tara S. Emory and Maura R. Grossman, “AI Agents: The Next Generation of Artificial Intelligence,” accessed January 8, 2025, https://natlawreview.com/article/next-generation-ai-here-come-agents.
Until AI systems can credibly guarantee hallucination-free output—and, crucially, identify and correct the flawed human inputs that cause errors—we should approach every AI-generated insight with informed skepticism.
Maintaining Human Agency by Enabling User Control
We need policies and technologies that empower digital self-determination in an age where we are all creators and consumers: ensuring people can consolidate content across platforms, choose how they discover and filter the content they view, and freely transfer their data, relationships, and creative works between whatever service they choose.
This framework resembles what digital governance scholars call ‘middleware’—intermediary layers between platforms and users that allow individuals to choose how their feeds are curated.21Tharin Pillay, “Social Media Fails Many Users. Experts Have an Idea to Fix It,” TIME, February 18, 2025, https://time.com/7258238/social-media-tang-siddarth-weyl/; Brett M. Frischmann and Susan Benesch, “Friction-In-Design Regulation as 21st Century Time, Place, and Manner Restriction,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, August 1, 2023), https://doi.org/10.2139/ssrn.4178647; Luke Hogg and Renée DiResta, “Shaping the Future of Social Media with Middleware | The Foundation for American Innovation,” December 17, 2024, https://www.thefai.org/posts/shaping-the-future-of-social-media-with-middleware; The Future of Free Speech, “Preventing ‘Torrents of Hate’ or Stifling Free Expression Online?,” The Future of Free Speech, May 28, 2024, https://futurefreespeech.org/preventing-torrents-of-hate-or-stifling-free-expression-online/; Jonathan Stray, Ravi Iyer, and Helena Puig Larrauri, “The Algorithmic Management of Polarization and Violence on Social Media,” Knight First Amendment Institute, August 22, 2023, http://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media; Alex Moehring and Alissa Cooper, “Better Feeds: Algorithms That Put People First,” Knight-Georgetown Institute, March 4, 2025, https://kgi.georgetown.edu/research-and-commentary/better-feeds/; Mike Masnick, “Empowering Users, Not Overlords: Overcoming Digital Helplessness,” Techdirt, January 27, 2025, https://www.techdirt.com/2025/01/27/empowering-users-not-overlords-overcoming-digital-helplessness/.
Imagine choosing your own algorithm to filter content from all your social feeds—TikTok, Instagram, Twitter—based on what you define as time well spent, whether that’s viral videos, family updates, learning, creativity, or meaningful connection. In contrast, platform-controlled algorithms push hyperpartisan misinformation even into accounts following only sports.22“Aaron Reichlin-Melnick on X: ‘Further Evidence That Musk Turned Twitter into a Right-Wing Propaganda Machine. I Haven’t Touched This Account—Which Only Follows @NBA—in a Month. I Just Checked and Majority of Content the Algorithm Put in the Account’s Notifications Is Trump, Laura Loomer, and Election Denial. https://T.Co/uUkG1whHg6 / X,” X (formerly Twitter), October 21, 2023, https://x.com/ReichlinMelnick/status/1725882073422987629. Today’s developers can build partial solutions through browser extensions, but platforms have methodically closed the access points needed for comprehensive alternatives to their algorithms. Open standards and interoperability requirements could enable API access, standardized data export, and relationship portability—technological solutions that may require regulatory support to overcome platform resistance.
These rights would ensure creators can maintain their content libraries, audience relationships, and analytics across platforms, preventing platform lock-in and mitigating concerns over censorship. With users controlling their own content aggregation and filtering, platforms are still hosts, but lose their power as gatekeepers.
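To make the middleware idea concrete, here is a minimal, hypothetical sketch in Python of what user-chosen feed ranking could look like if platforms exposed posts through open standards. All names here (Post, chronological, family_first, render_feed) are illustrative assumptions for this paper, not any existing platform’s API.

```python
# A minimal sketch of "middleware": the platform supplies the posts,
# the user (not the platform) supplies the ranking rule.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    engagement_score: float  # what platform-controlled algorithms typically maximize

def chronological(posts: List[Post]) -> List[Post]:
    # One user-chosen policy: newest first, engagement ignored.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def family_first(family: set) -> Callable[[List[Post]], List[Post]]:
    # Another policy: posts from accounts the user cares about surface first.
    def rank(posts: List[Post]) -> List[Post]:
        return sorted(posts, key=lambda p: (p.author not in family, -p.timestamp))
    return rank

def render_feed(posts: List[Post], ranker: Callable[[List[Post]], List[Post]]) -> List[str]:
    # The middleware seam: any ranker with this signature can be plugged in.
    return [f"{p.author}: {p.text}" for p in ranker(posts)]

posts = [
    Post("news_outlet", "Breaking outrage bait", 3.0, 0.99),
    Post("sister", "Baby photos", 1.0, 0.10),
    Post("coach", "Practice moved to 6 pm", 2.0, 0.05),
]
print(render_feed(posts, family_first({"sister"})))  # family updates surface first
```

Even a thin, standardized seam like this would let users and third parties compete on ranking quality instead of leaving that choice to the platform’s engagement-maximizing defaults.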
Critics rightly note that many users won’t customize their feeds or may abandon tools that challenge their viewpoints—after all, Facebook’s passive consumption model defeated RSS readers precisely because it required less effort. But even if only a minority of users initially choose alternative algorithms or cross-bubble content curation, their existence creates market pressure for better options, just as organic and nutrition transparency transformed grocery stores without universal adoption. On-device AI models could make such tools more affordable and private.23Matthew Hutson, “Forget ChatGPT: Why Researchers Now Run Small AIs on Their Laptops,” Nature 633, no. 8030 (September 16, 2024): 728–29, https://doi.org/10.1038/d41586-024-02998-y; Katherine Bourzac, “Fixing AI’s Energy Crisis,” Nature, October 17, 2024, https://doi.org/10.1038/d41586-024-03408-z. At times, China’s DeepSeek v3 matches or exceeds leading models on benchmarks. Running its best version requires a powerful computer but frees you from both the cost and the Chinese Communist Party’s ideological constraints of its hosted web version. While DeepSeek likely trained on unauthorized materials, much as Meta and OpenAI are alleged to have done, this doesn’t excuse anyone stealing from creators to build competitors.
Unlike China, America and other democracies value creative rights and fair compensation. Surrendering creators’ rights to big tech risks realizing the dystopian world of ‘Ready Player One’—where creative work is systematically extracted from the masses to build immersive virtual worlds that simultaneously entertain and exploit them, concentrating wealth and power in the hands of tech oligarchs.
AI agents threaten to shatter the critical boundaries between applications, operating systems, and user data. The industry’s latest obsession—AI agents that handle everything from concert tickets to calendar scheduling—demands unprecedented access to our digital lives. These agents require your browser, payment information, calendar, messaging apps, and essentially root-level permissions across your entire system. Signal’s president warns about these risks: agents represent a fundamental breakdown of the privacy architecture that keeps our digital lives secure.24Sarah Perez, “Signal President Meredith Whittaker Calls out Agentic AI as Having ‘profound’ Security and Privacy Issues,” TechCrunch (blog), March 7, 2025, https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/. Because most of this processing happens on remote servers, not your device, private communications, financial details, and personal schedules all potentially flow through corporate servers accessible to governments. Agents are a profound restructuring of digital security that prioritizes corporate convenience and “magic genie” functionality over fundamental rights of privacy—all so our “brains can sit in jars” while AI handles life’s details. The machinery of reality distortion isn’t just changing what we see and how we think—it’s reaching for control of what we do.
Big Tech’s Accountability Crisis: The Cost of Unchecked Power
Public Trust Under Siege
Major platforms that once promised user empowerment—Facebook, Twitter/X, Google—now frequently amplify misinformation25Anna Edgerton, “US Heads Into Post-Truth Election as Platforms Shun Arbiter Role,” Bloomberg.Com, January 22, 2024, https://www.bloomberg.com/news/articles/2024-01-22/us-heads-into-post-truth-election-as-platforms-shunarbiter-role. and extremism, with consequences as severe as the January 6 Capitol insurrection.26Kurt Wagner, “Twitter, Facebook Reach Trump Breaking Point After Siege of Capitol,” Bloomberg.Com, January 7, 2021, https://www.bloomberg.com/news/articles/2021-01-07/twitter-facebook-reach-trump-breaking-point-after-siege-of-capitol.
These firms are retreating from moderation and systematically dismantling safeguards: disbanding fact-checking teams and content-safety plans, and blocking researcher access.27Daniel Zuidijk, “Fight Against Misinformation Suffers Defeat on Multiple Fronts,” Bloomberg.Com, July 8, 2024, https://www.bloomberg.com/news/newsletters/2024-07-08/fight-against-misinformation-suffers-defeat-on-multi-ple-fronts.
Most alarming, a mere 3% of users generate one-third of all toxic content—superspreaders of digital pollution, a concentrated source of harm that often leads to physical violence.28Claire E. Robertson, Kareena S. del Rosario, and Jay J. Van Bavel, “Inside the Funhouse Mirror Factory: How Social Media Distorts Perceptions of Norms,” Current Opinion in Psychology 60 (December 1, 2024): 101918, https://doi.org/10.1016/j.copsyc.2024.101918; Alex Wickham et al., “UK Riots: Suspected Foreign Groups Using TikTok, Telegram to Incite Violence – Bloomberg,” August 7, 2024, https://www.bloomberg.com/news/articles/2024-08-07/suspected-foreign-agitators-boost-uk-extremists-to-inflame-riots?srnd=undefined. Profit-driven engagement metrics override public safety.
Meta’s and YouTube’s retreat from content moderation is alarming, after X/Twitter’s abandonment led to plummeting content quality and an advertiser exodus.29Aisha Counts and Eari Nakano, “Twitter’s Surge in Harmful Content a Barrier to Advertiser Return,” Bloomberg. Com, July 19, 2023, https://www.bloomberg.com/news/articles/2023-07-19/twitter-s-surge-in-harmful-content-a-barrier-to-advertiser-return; Aisha Counts, “Social Media Platforms Show Little Interest in Stopping Spread of Misinformation – Bloomberg,” August 8, 2024, https://www.bloomberg.com/news/newsletters/2024-08-08/social-media-platforms-show-little-interest-in-stopping-spread-of-misinformation?cmpid=BBD080824_TECH&utm_medium=email&utm_source=newsletter&utm_term=240808&utm_campaign=tech. Following vague but ominous threats of prosecution from President-elect Trump—a dangerous precedent for a company already under investigation for moving too fast and breaking the law across a number of federal and state agencies—Meta terminated partnerships with independent fact-checkers who had effectively reduced the spread of false information in the US.
A former Meta trust and safety employee warned: “(Moderation) is not the climate change debate . . . This is degrading, horrible content that leads to violence and has the intent to harm other people.”30Casey Newton, “Meta Surrenders to the Right on Speech,” Platformer, January 8, 2025, https://www.platformer.news/meta-fact-checking-free-speech-surrender/. By dismantling safeguards against non-illegal but harmful content, Meta effectively declared “open season” on vulnerable populations—precisely when they face heightened targeting from extremist movements. Meta argues, “the lack of dialogue changes no hearts. At worst it hardens them.” I agree, but the attention economy powering these algorithms is tuned for conflict.31Casey Newton, “Meta Goes Mask-Off,” Platformer, January 14, 2025, https://www.platformer.news/meta-trump-pivot-messenger-themes-labor-zuckerberg-wishlist/; Rui Fan, Ke Xu, and Jichang Zhao, “Weak Ties Strengthen Anger Contagion in Social Media” (arXiv, May 5, 2020), https://doi.org/10.48550/arXiv.2005.01924.
Similarly, Trump’s promised TikTok reprieve effectively places the platform under an implicit gag order against criticizing him—ironically enabled by the very bipartisan law he inspired to force its sale. This pattern of platforms changing course, whether driven by political pressure or a fickle founder, signals a dangerous new phase where platform policies bend to intimidation rather than protecting users.
Meme, sports, and pop culture accounts are weaponized, repeating implausible fiction until it becomes perceived truth.32Fazio, Rand, and Pennycook, “Repetition Increases Perceived Truth Equally for Plausible and Implausible Statements”; My Mixtapez [@mymixtapez], “Four Days after Sending $500 Million to Ukraine, Biden Announces a ‘One-Time Payment of $770’ to California Fire Victims.🇺🇸🫡https://T.Co/rxdsBNvsgu”; “MyMixtapez on Instagram.” Studies revealed systematic pro-GOP bias in X’s and TikTok’s content distribution algorithms, while Meta’s and YouTube’s systems were effectively weaponized by coordinated disinformation campaigns.33Eric W. Dolan, “TikTok’s Algorithm Exhibited pro-Republican Bias during 2024 Presidential Race, Study Finds,” PsyPost – Psychology News, February 4, 2025, https://www.psypost.org/tiktoks-algorithm-exhibited-pro-republican-bias-during-2024-presidential-race-study-finds/; Esat Dedezade, “TikTok Users Report Anti-Trump Content Being Hidden Following Platform’s Unbanning,” Forbes, January 22, 2025, https://www.forbes.com/sites/esatdedezade/2025/01/22/tiktok-users-report-anti-trump-content-being-hidden-following-platforms-unbanning/; Prithvi Iyer, “New Research Points to Possible Algorithmic Bias on X | TechPolicy.Press,” November 15, 2024, https://www.techpolicy.press/new-research-points-to-possible-algorithmic-bias-on-x/. This wasn’t organic virality but sophisticated manipulation of platform mechanics, underscoring why users need direct control over the algorithms shaping their information diet.
When platforms can’t or won’t prevent algorithmic manipulation, the solution isn’t just better corporate oversight—it’s also giving users the power to choose and customize their own content filtering systems.
Even Steve Bannon, Trump’s former chief strategist who helped catalyze MAGA’s rise, warns of “technofeudalism” from Silicon Valley elites. While acknowledging America’s need for technological progress, Bannon argues Silicon Valley’s version of progress threatens fundamental human values and community bonds.
Besides, as Musk’s X platform devolves into an echo chamber of conspiracy theories and his companies increasingly blur the line between corporate and government functions, the solutions proposed here become more crucial, not less. When government actors can shape public discourse and policy with minimal oversight, strengthening information ecosystems becomes existential. The double standard is glaring: Dan Rather’s career ended over sharing one fraudulent document, while Musk’s daily stream of comparable falsehoods only strengthens his following. We must reassert lying as harmful—beyond fact-checking, as a corrosive moral failing. Normalized deception undermines the common ground and authentic connection required for society to function.
The machinery of reality distortion now extends to the physical layer of internet access itself. Senate Republicans whose votes were pivotal to the bipartisan infrastructure law demanded the local control and lengthy timelines that now define the Biden broadband program. The new administration plans to strike Biden-era conditions ensuring federal rural internet investments are affordable to middle-class Americans. Trump’s Commerce Department and Senate Chairman Cruz also want to redirect $20 billion in bipartisan infrastructure funding from permanent fiber networks built to last 50-100 years to Musk’s Starlink disposable satellite system—incinerating bipartisan Internet investment in the atmosphere every 5-7 years. Taxpayers and rural America shouldn’t settle for renting disposable internet on a subscription plan that centralizes a chokepoint susceptible to surveillance, filtering, and censorship, some of which Musk has already demonstrated on X.34Wes Davis, “The Head of a Biden Program That Could Help Rural Broadband Has Left,” The Verge, March 16, 2025, https://www.theverge.com/news/630954/rural-broadband-equity-program-head-leaves-trump-musk-starlink; Mike Masnick, “Musk Shows Us What Actual Government Censorship On Social Media Looks Like,” Techdirt, February 3, 2025, https://www.techdirt.com/2025/02/03/musk-shows-us-what-actual-government-censorship-on-social-media-looks-like/; US Representative Frank Pallone, “Pallone Slams Republicans for Undermining Broadband Program and Standing by Silently While Musk Grifts Off the American People,” March 5, 2025, http://democrats-energycommerce.house.gov/media/press-releases/pallone-slams-republicans-undermining-broad-band-program-and-standing-silently.
The tech industry is increasingly resembling an ouroboros—the ancient symbol of a snake consuming its own tail. Already, AI generates an estimated 5% of Wikipedia’s English-language articles, meaning new AI models are training on content generated by earlier AI models.35Creston Brooks, Samuel Eggert, and Denis Peskoff, “The Rise of AI-Generated Content in Wikipedia” (arXiv, October 10, 2024), https://doi.org/10.48550/arXiv.2410.08044; Ethan Mollick, “Ethan Mollick on X: ‘The Ourouborous Has Begun. Wikipedia Is an Important Source of Training Data for AIs. At Least 5% of New Wikipedia Articles in August Were AI Generated (To Be Clear, This Does Not Mean That AI Will Fail as It Trains on Its Own Data, Synthetic Data Is Already a Part of Training) https://T.Co/kuDkfEgJQv’ /X,” X (formerly Twitter), October 14, 2024, https://x.com/emollick/status/1845881632420446281. This self-referential cycle risks amplifying biases and hallucinations, degrading the quality of training data. The risk to science is particularly acute: even breaking the 1% summary-hallucination barrier still leaves readers hunting for as many as 9 errors in every 1,000 words. OpenAI’s newest product, Deep Research, while impressive in synthesizing data, exemplifies this problem—experts note it generates plausible-sounding but fabricated figures and misrepresents key aspects, threatening to flood scientific literature with difficult-to-detect errors.
Meanwhile, a 2025 FTC report reveals how Big Tech’s market power creates another self-consuming cycle: the largest cloud providers (Alphabet, Amazon, and Microsoft) are securing privileged positions with leading AI developers like Anthropic and OpenAI through investments that grant them exclusive rights and sensitive technical information.36“FTC Issues Staff Report on AI Partnerships & Investments Study,” Federal Trade Commission, January 17, 2025, https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-issues-staff-report-ai-partnerships-investments-study. These firms extracted ideas, top talent and technology licenses from startups, leaving the smaller firms’ younger employees holding less-valuable stock. This consolidation of power means fewer companies control both the infrastructure AI runs on and the AI systems themselves, potentially limiting competition, innovation, and public accountability.
Breaking up Big Tech isn’t a simple solution. Former FTC Chair Lina Khan and others make compelling arguments for it—still, Bell Labs and Alphabet produced remarkable breakthroughs through their monopoly-funded research. And historically, shareholders of firms whose stocks are split apart in a breakup have done even better over time. Competition in AI is actually driving the worst outcomes—pushing companies to rush dangerous products to market with inadequate safeguards. This suggests traditional antitrust remedies alone won’t solve our AI governance challenges. When companies violate the law, they need to be held accountable. We also need frameworks that can both foster beneficial innovation and prevent destructive competition, rather than simply breaking up large firms and hoping market forces produce optimal outcomes.
Labor Exploitation and Creative Appropriation
Behind AI and social media content lies a hidden workforce: Global South moderators and data workers earning $2/hour to process AI and humanity’s darkest content.37Caroline Haskins, “The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery,’” Wired, May 22, 2024, https://www.wired.com/story/low-paid-humans-ai-biden-modern-day-slavery/.
Creative professionals face their own crisis: artists’ work is scraped without consent to train AI models, while writers and journalists compete against AI that often plagiarizes their work.38Randall Lane, “Why Perplexity’s Cynical Theft Represents Everything That Could Go Wrong With AI,” Forbes, June 11, 2024, https://www.forbes.com/sites/randalllane/2024/06/11/why-perplexitys-cynical-theft-represents-everything-that-could-go-wrong-with-ai/; Tim Marchman, “Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine,” Wired, June 21, 2024, https://www.wired.com/story/perplexity-plagiarized-our-story-about-how-perplexity-is-a-bullshit-machine/; Dhruv Mehrotra and Tim Marchman, “Perplexity Is a Bullshit Machine,” Wired, June 19, 2024, https://www.wired.com/story/perplexity-is-a-bullshit-machine/. Translators, creatives, data workers and others lose livelihoods. This dual exploitation reveals how Big Tech’s AI ambitions depend on devaluing human labor.
Creators deserve a seat at the table when policies affecting their future are written. While tech companies and their lobbyists shape AI policy, the artists, writers, musicians, and other creatives whose work trains these systems are largely excluded from these decisions.
AI companies trample even basic digital rights, ignoring robots.txt—the Internet’s ‘No Trespassing’ sign—to scrape content without permission. Perplexity.ai and others brazenly violate their own stated policies, deploying hidden bots to plagiarize and harvest content that publishers have explicitly placed off-limits to crawlers that follow Internet standards.39Dhruv Mehrotra and Tim Marchman, “Perplexity Is a Bullshit Machine | WIRED,” June 19, 2024, https://www.wired.com/story/perplexity-is-a-bullshit-machine/; Lane, “Why Perplexity’s Cynical Theft Represents Everything That Could Go Wrong With AI”; Marchman, “Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine”; Dhruv Mehrotra, “Amazon Is Investigating Perplexity Over Claims of Scraping Abuse,” Wired, June 27, 2024, https://www.wired.com/story/aws-perplexity-bot-scraping-investigation/.
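For readers unfamiliar with robots.txt, here is a minimal sketch of how the protocol works, using Python’s standard library. The bot names and rules are hypothetical examples for illustration, not the actual policies of any publisher or company mentioned above.

```python
# A minimal sketch of how a well-behaved crawler honors robots.txt.
from urllib.robotparser import RobotFileParser

# Example rules a publisher might serve: block a hypothetical AI scraper
# entirely, and keep all crawlers out of the archive.
EXAMPLE_ROBOTS_TXT = """
User-agent: HypotheticalAIScraper
Disallow: /

User-agent: *
Disallow: /archive/
""".splitlines()

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT)

# A compliant crawler runs this check before every request; scraping anyway
# is the "ignoring the No Trespassing sign" behavior described above.
print(parser.can_fetch("HypotheticalAIScraper", "https://example.com/story"))   # False
print(parser.can_fetch("GenericSearchBot", "https://example.com/archive/old"))  # False
print(parser.can_fetch("GenericSearchBot", "https://example.com/story"))        # True
```

The protocol is purely voluntary, which is why ignoring it, or disguising a bot’s identity to evade it, is a choice rather than a technical accident.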
The Rise of Synthetic Local News
‘Pink slime’ operations—AI-generated local news funded by special interests—now frequently outnumber legitimate newspapers.40Sara Fischer, “‘Pink Slime’ News Outlets Outpacing Local Daily Newspapers,” Axios, June 11, 2024, https://www.axios.com/2024/06/11/partisan-news-websites-dark-money. These operations have evolved from clickbait websites to printed tabloids masquerading as community papers, sometimes resurrecting shuttered local or diverse, niche publications. Political groups and industry lobbies flood swing district mailboxes with these sophisticated forgeries, leaving residents struggling to distinguish propaganda from genuine journalism.41Miranda Green [@mirandacgreen], “Where Did Trump Voters Get Their News?📰Yes, There’s Social and Partisan Sites, but There Is Another Influential Strategy That Isn’t Getting Enough Attention: Manipulated, Pay-to-Play and All out Fake News Sites I’ve Been Covering a Mix of Those for Years. Here’s a Primer 🧵,” Tweet, Twitter, November 8, 2024, https://x.com/mirandacgreen/status/1854967274202935803.
Roughly half of US adults struggle with reading comprehension to some degree; around 130 million people over 16. Basic reading difficulties compound the complexity of online information assessment. Combating this infiltration requires more than digital literacy—communities need media-savvy local validators who can help residents navigate between authentic journalism and sophisticated propaganda.
While Big Tech’s unchecked power poses serious threats to discourse and problem solving, its impact on economic systems and workforce dynamics presents equally pressing challenges.
Economic Adaptation: Building Community Resilience in the AI Age
Workforce Transformation in the AI Era
Recent research reveals a concerning pattern in AI’s impact on labor markets: each profession faces a distinct “inflection point” with AI tools. AIs, trained on material created by the very humans they displace, initially act as productivity multipliers—web developers saw 65% income increases when first adopting AI tools. However, once AI capabilities cross a certain threshold, the relationship inverts dramatically—translators and bloggers experienced 30-40% income drops as AI translation and content generation became more sophisticated.42Dandan Qiao, Huaxia Rui, and Qian Xiong, “AI and Freelancers: Has the Inflection Point Arrived?,” n.d., https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/4f39375d-59c2-4c4a-b394-f3eed7858c80/content; Amanda Williams, “How Google and AI Are Killing Travel Blogs Like Mine – A Dangerous Business,” January 15, 2025, https://www.dangerous-business.com/how-google-and-ai-are-killing-travel-blogs-like-mine/. Similarly, working musicians have seen their streaming revenue drop as Spotify replaced their songs with AI-enhanced bargain-rack music—another ouroboros where the long tail of music creation eats itself.43Liz Pelly, “The Ghosts in the Machine,” Harper’s Magazine, accessed January 13, 2025, https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/. This pattern suggests workforce disruption may be more sudden and severe than gradual adaptation allows, requiring more urgent policy responses.
Congressional oversight of automation’s impact has been inconsistent: robust 1950s investigations found ‘enlightened business’ accepting responsibility for displaced workers, followed by decades of neglect until 2016.44Chairman Wright Patman, “Automation and Recent Trends (85th Congress) – United States Joint Economic Committee,” November 14, 1957, https://www.jec.senate.gov/public/index.cfm/reports-studies?ID=CF015E66-5427-45D5-B619-89F5EAC8258D; United States Joint Economic Committee, “JEC Examines Impact of Robots and Automation on Workforce and Economy – JEC Examines Impact of Robots and Automation on Workforce and Economy – United States Joint Economic Committee,” May 26, 2016, https://www.jec.senate.gov/public/index.cfm/republicans/2016/5/jec-examines-impact-of-robots-and-automation-on-workforce-and-economy/. Components of recent legislation—particularly the CHIPS & Science Act and the Inflation Reduction Act—finally aim to modernize workforce skills, revitalize struggling regions, and lower energy and health care costs. RECOMPETE, for example, shows how combining regional planning with solving families’ root crises leads to rebirth amidst turmoil. The law expands innovation hubs and corridors that combine research, manufacturing, and workforce development.
Community Impact and Family-Sustaining Policies
The recent financial downturn and Trump-initiated trade war (which Congress can turn off anytime) have only intensified corporate adoption of Shopify-style policies requiring proof AI can’t do the task before new positions are approved. A sobering assessment comes from one of the past two years’ top-20 technology forecasters, now an AI policy advisor at a think tank. His prediction: there’s a 50% chance that within 10 years, technological advancement will eliminate most current employment. While he acknowledges there will always be some roles where humans are intrinsically preferred over AI, he doubts these positions alone could sustain anything close to our current workforce.45Peter Wildeford 🇺🇸 [@peterwildeford], “@brandonwilson I’m 50% Sure We’re Going to All Be Unemployed Due to Technology within 10 Years,” Tweet, Twitter, January 17, 2025, https://x.com/peterwildeford/status/1880229517798830566; Peter Wildeford 🇺🇸 [@peterwildeford], “I Was Speaking Too Loosely When I Said ‘All’, I Meant to Say There Still Will Be Jobs Where There Is ‘Pure Human Preference’ (Even If Maybe AI Could Be More Competent in Some Ways and AI Is Generally More Skilled),” Tweet, Twitter, January 18, 2025, https://x.com/peterwildeford/status/1880600538628362449. Declining industrial robot costs reinforce this forecast, as automation becomes viable for even small manufacturers.
Strategic public R&D investment offers a proven counterbalance to this extraction—creating broadly shared prosperity through innovations that solve fundamental challenges from national security to climate change to inequality. Targeted R&D builds problem-solving capacity while yielding returns far exceeding costs. Yet the Department of Government Efficiency’s (DOGE) theatrical cuts target precisely these high-return investments, sacrificing future prosperity for immediate political theater—like Trump directing Army Corps officials to release California’s reservoir water despite knowing it would waste a critical resource for millions.46Tina Reed, “Universities Feel Ripple Effects of DOGE Cuts to Health,” Axios, February 26, 2025, https://www.axios.com/2025/02/26/musk-doge-science-cuts-universities-fallout; David Deming, “DOGE Is Failing on Its Own Terms,” The Atlantic (blog), February 11, 2025, https://www.theatlantic.com/ideas/archive/2025/02/nih-nsf-sci-ence-doge/681645/; Brian Buntz, “Steep Budget Cuts and Layoffs Coming to NSF,” R&D World, February 21, 2025, https://www.rdworldonline.com/nsf-layoffs-in-2025-deep-budget-cuts-headed-for-u-s-research-sector/.
Breaking this machinery of reality distortion requires us to confront three crucial points: First, support is essential during the transition period, helping communities adapt rather than collapse. Second, even in a highly automated economy, human needs for care, creativity, and community won’t disappear—they may become even more central to meaningful work and social cohesion. Third, and perhaps most importantly, strengthening these social foundations builds the resilience and civic capacity we’ll need to navigate whatever upheavals AI brings.
The solution lies not in universal basic income alone, as some in AI advocate, but in a profound reorientation of our political economy toward greater social security and family stability.
A comprehensive framework of social supports could transform American resilience through affordable housing, guaranteed sick leave, accessible child care, job training, universal pre-K, and enhanced market competition—at about one-seventh the annual cost of the Bush deficits. American history offers clear lessons for our present moment: we face a new Gilded Age, where tech giants wield power rivaling the 19th century robber barons, tariffs return as economic weapons, and preventable diseases resurge while billions flow to AI development.
Countries aren’t corporations—they exist to serve people, not quarterly profits. At its core, the purpose of government is to solve the problems that prevent people from living their best lives—focusing on areas where public action can unlock the best solutions and empower citizens, not attempting to solve every issue but strategically addressing the many barriers that hold people back from being their best selves.
Lost in debates about AI’s workforce disruption is an immediate crisis: 48 million Americans serve as family caregivers with minimal financial support. While detailed policy solutions exist, media coverage routinely overlooks these critical proposals in favor of horse-race political coverage. Treating government as merely a business undermines the patient investments in infrastructure, education, and innovation that built American prosperity.47Andrew Yamakawa Elrod, “What Was Bidenomics? | Andrew Yamakawa Elrod,” Phenomenal World (blog), September 26, 2024, https://www.phenomenalworld.org/analysis/what-was-bidenomics/.
This approach ensures technological progress serves human flourishing while rebuilding hope in struggling communities. It’s about building an economy that works for everyone, not just those at the technological frontier. These policies would mark a decisive shift from treating economic insecurity as inevitable to seeing it as a solvable challenge through targeted, proven interventions that address fundamental causes of hardship while strengthening community resilience.
This isn’t about vilifying wealth or success—it’s about distinguishing value creation from value extraction. The issue isn’t prosperity but predation—business models that drain communities while enriching the few. Competition benefits consumers while even well-intended regulations need constant oversight to ensure they serve people, and aren’t captured by the very industries they oversee. We need more entrepreneurs solving real problems, not fewer. The path forward requires both successful businesses and equitable systems that ensure prosperity is broadly shared.
Rather than accepting a future where AI concentrates wealth while leaving communities behind, this framework uses public investment to address root causes of economic hardship, not just symptoms. Like our chronic disease epidemic where 90% of healthcare spending treats preventable conditions, we know how to solve these economic challenges but often focus on treating symptoms rather than causes.48Gabriel A. Benavidez, “Chronic Disease Prevalence in the US: Sociodemographic and Geographic Variations by Zip Code Tabulation Area,” Preventing Chronic Disease 21 (2024), https://doi.org/10.5888/pcd21.230267. RFK Jr.’s appeal illustrates how corporate influence in healthcare and agriculture has created both economic and health crises—though his dangerous anti-vaccine advocacy undermines his legitimate critiques of corporate power.49Brian Deer, “Opinion | I’ll Never Forget What Kennedy Did During Samoa’s Measles Outbreak,” The New York Times, November 25, 2024, sec. Opinion, https://www.nytimes.com/2024/11/25/opinion/rfk-jr-vaccines-samoa-measles.html.
The Great Wealth Capture: Economic and Health System Failures
During Congress’s four decades of inattention to automation, $50 trillion moved from the bottom 90% of Americans to the top 0.5%—history’s largest wealth transfer.50Talmon Joseph Smith and Karl Russell, “The Greatest Wealth Transfer in History Is Here, With Familiar (Rich) Winners,” The New York Times, May 14, 2023, sec. Business, https://www.nytimes.com/2023/05/14/business/econo-my/wealth-generations.html. Congressional tax cuts (1978-1989) slashed rates on corporations, capital gains, and top earners by nearly half. After the 1993 tax increases briefly created surpluses, subsequent cuts (2001-2003, 2017) failed to generate promised investment while expanding deficits, with national debt soaring from $5.6 trillion to $35.32 trillion.51Rafael A. Corredoira et al., “The Changing Nature of Firm R&D: Short-Termism & Influential Innovation in US Firms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 18, 2022), https://doi.org/10.2139/ssrn.4071191; Rachelle C. Sampson and Yuan Shi, “Are US Firms Becoming More Short-Term Oriented? Evidence of Shifting Firm Time Horizons from Implied Discount Rates, 1980-2013,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2019), https://doi.org/10.2139/ssrn.2837524.
Research confirms this shift fundamentally changed corporate behavior, increasing short-termism and causally reducing influential inventions, harming both firm competitiveness and U.S. economic growth.52Rafael A. Corredoira et al., “The Changing Nature of Firm R&D: Short-Termism & Influential Innovation in US Firms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 18, 2022), https://doi.org/10.2139/ssrn.4071191; Rachelle C. Sampson and Yuan Shi, “Are US Firms Becoming More Short-Term Oriented? Evidence of Shifting Firm Time Horizons from Implied Discount Rates, 1980-2013,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2019), https://doi.org/10.2139/ssrn.2837524. These findings confirm our economy has become dominated by rentier extraction rather than value creation—the technofeudal reality Bannon described, in which gatekeepers extract wealth without creating proportional value.
Recent data from Moody’s Analytics reveals an alarming economic dependency: the top 10% of earners now drive nearly half (49.7%) of all consumer spending—up dramatically from 36% three decades ago. While the wealthiest increased their spending 12% from 2023-2024, middle and lower-income households actually reduced spending during the same period, creating a fundamentally unstable economic structure that rises or falls on the whims of the few. This dependency isn’t accidental—it’s the logical outcome of the massive wealth transfer upward. When a February 2025 Wall Street Journal analysis shows just 24 “superbillionaires” control $3.3 trillion (equivalent to France’s entire GDP), we’re witnessing more than inequality—it’s a complete restructuring of economic power. Nobel Prize-winning economist Joseph Stiglitz identifies the root cause: monopoly power in tech coupled with systematic tax avoidance has created a self-reinforcing cycle where “these guys live in a totally different world from ordinary Americans,” undermining the solidarity essential for society to function.
Corporate focus on quarterly profits and shareholder returns has systematically eroded the traditional duties of care that businesses once maintained toward their customers. Congress and states must counter this trend by implementing enforceable duties of care across more sectors—such as social media platforms, utilities, AI developers, and online gambling operators. These duties would require businesses to prioritize consumer safety over engagement metrics or quarterly returns, with meaningful penalties for executives who knowingly deploy harmful systems. Much as healthcare providers face consequences for negligence, companies deploying services that predictably harm users should bear responsibility. The epidemic of despair we’re witnessing—reflected in rising depression, substance abuse, and suicide rates—correlates directly with industries that profit from exploiting vulnerability rather than building value.
Recent data shows that between 25% and 30% of Americans live paycheck to paycheck or face financial insecurity—a crisis compounded by widespread financial illiteracy and deepening inequality.53Carol Graham, “Despair Underlies Our Misinformation Crisis,” John Templeton Foundation, June 27, 2022, https://www.templeton.org/news/despair-underlies-our-misinformation-crisis; Carol Graham, “Our Twin Crises of Despair and Misinformation,” Brookings, July 22, 2024, https://www.brookings.edu/articles/our-twin-crises-of-despair-and-misinformation/. The zero-interest era’s artificial suppression of consumer costs planted a time bomb of public rage. When venture capital and, notably, petro-state sovereign wealth funds quietly withdrew their massive tech subsidies and services like private burrito taxis hit real market rates, millions watched their glimpse of an affordable future vanish. The whiplash from subsidized convenience to the new unaffordable reality may have been more radicalizing than traditional political grievances.
Rates of despair—measured by depression, anxiety, and suicide—are growing in wealthy nations like ours. New research finds that people in despair are particularly vulnerable to misinformation, conspiracy theories, and radicalization. The epidemic of loneliness—the ‘anti-social century’—further erodes social ties, driving many deeper into digital worlds where misinformation and conspiracies thrive uncontested.
DOGE’s Bite: Theater Over Substance
The politics of despair paved the way for dramatic policy shifts that often exacerbate rather than address these fundamental economic challenges. President Trump’s first term added $8 trillion in enacted spending hikes and tax cuts to the deficit. Despite campaign promises of fiscal responsibility, the hard truth is that Trump’s economic policies primarily benefit the wealthy while burdening working Americans with long-term costs.
This pattern continues in his second term, where the appearance of “breaking the system” masks the reality that it’s being broken to serve narrow interests, not to fix fundamental problems for average citizens. Far from strengthening US manufacturing, Trump’s 2018-19 tariffs, a Federal Reserve paper found, actually increased prices and cut manufacturing jobs without boosting production, particularly damaging competitive manufacturers reliant on trade inputs and export markets.54Aaron Flaaen and Justin Pierce, “Disentangling the Effects of the 2018-2019 Tariffs on a Globally Connected U.S. Manufacturing Sector,” Finance and Economics Discussion Series 2019.0, no. 86 (December 2019), https://doi.org/10.17016/feds.2019.086. The new Treasury-backed cryptocurrency reserve prioritizing the five digital currencies owned by Trump’s crypto advisor exemplifies this troubling trend.55Sam Kessler, “David Sacks Responds to U.S. Crypto Reserve Conflict of Interest Allegations,” CoinDesk, March 3, 2025, https://www.coindesk.com/policy/2025/03/03/david-sacks-investments-complicate-trump-s-crypto-reserve-plans. Even conservatives called it “a slap in the face” when Trump appointed a junk food industry lobbyist to run the Department of Agriculture. Despite his promise to “drain the swamp,” one in 14 of his first-term appointees were lobbyists placed in positions to regulate their own industries, creating conflicts of interest.
Elon Musk’s DOGE, meanwhile, has become emblematic of what even fiscal conservatives label “spending-cut theater.”56David French and Jillian Weinberger, “Opinion | Elon Musk and the Useless Spending-Cut Theater of DOGE,” The New York Times, March 5, 2025, sec. Opinion, https://www.nytimes.com/2025/03/05/opinion/musk-useless-spending-cuts-doge.html. Despite the hype, DOGE’s genuine cuts represent just 0.04% of the $4,500 billion in proposed new tax cuts.57“A Distributional Analysis of Donald Trump’s Tax Plan,” ITEP, October 7, 2024, https://itep.org/a-distributional-analysis-of-donald-trumps-tax-plan-2024/; “Elon Musk Is Failing to Cut American Spending,” The Economist, accessed February 12, 2025, https://www.economist.com/finance-and-economics/2025/02/12/elon-musk-is-failing-to-cut-american-spending. After journalists pointed out calculation errors in their initial claims, DOGE officials responded not with corrections but by reducing transparency, making independent verification nearly impossible. Contracts were posted publicly when DOGE stood by its claims; why the secrecy once they were challenged?58David A. Fahrenthold and Jeremy Singer-Vine, “DOGE Makes Its Latest Errors Harder to Find,” The New York Times, March 13, 2025, sec. U.S., https://www.nytimes.com/2025/03/13/us/politics/doge-errors-funding-grants-claims.html. Not content with proposed congressional cuts to food aid, DOGE slashed critical programs that support food banks and pay farmers for healthy local produce—ironically undermining the very wholesome food systems the Make America Healthy Again movement champions. Meanwhile, DOGE has ‘detonated a crisis at a highly sensitive nuclear weapons agency’ and systematically fired top-performing federal employees who recently received promotions.59Evan Halper and Hannah Natanson, “How DOGE Detonated a Crisis at a Highly Sensitive Nuclear Weapons Agency,” The Washington Post, March 2, 2025, https://www.washingtonpost.com/business/2025/03/02/doge-nuclear-worker-firings-musk-trump/; Aimee Picchi, “USDA Cancels $1 Billion in Funding for Schools and Food Banks to Buy Food from Local Suppliers – CBS News,” March 13, 2025, https://www.cbsnews.com/news/usda-cancels-local-food-purchasing-food-banks-school-meals/; Jace Dicola, “USDA Cuts $13 Million Program for Western Slope Farmers, Food Banks, Schools,” The Grand Junction Daily Sentinel, March 15, 2025, https://www.gjsentinel.com/news/western_colorado/usda-cuts-13-million-program-for-western-slope-farmers-food-banks-schools/article_4b7c62e6-002a-11f0-b10f-b31de2e08a0f.html.
The Government Accountability Office (GAO)—whose actual job is investigating fraud and inefficiency—identified $70 billion in savings last year and $2 trillion since 2003, and it often reports that understaffing, not bloat, causes delays and inefficiencies—a counterintuitive but evidence-based conclusion that true fiscal conservatives should embrace. There’s no excusing the end-run around Congress. Clinton and Gore’s reinventing-government effort worked with Congress to pass the Government Performance and Results Act; DOGE couldn’t even bear to work through the bipartisan congressional caucus dedicated to its mission.
Preserving Human Knowledge
The Internet’s knowledge commons faces dual threats: vital expertise vanishes behind paywalls (Substack and newspapers need better options), while ‘link rot’ erases public access to human knowledge. The best congressional policy coverage remains priced for lobbyists, not citizens—creating an information aristocracy, even as some journalists find fragile lifelines through crowdfunding. Members of Congress and influencers with access to paywalled news outlets like Politico Pro could help bridge this gap by sharing gift links and reacting on camera to key facts, ensuring the public understands policies affecting their lives.
Meanwhile, documentary and other edgy filmmaking and television—some of our most powerful tools for preserving human perspectives and understanding complex social issues—face their own crises as funding sources constrict and distribution channels prioritize non-challenging formats.60Keri Putnam, “What’s at Risk in the Streaming Media Age,” Shorenstein Center (blog), January 18, 2024, https://shorensteincenter.org/commentary/whats-risk-streaming-media-age/. Even as traditional media struggles to explain what happens in Washington, some innovative formats break through: podcasts like ‘The Middle’ prove thoughtful dialog can bridge divisions, while fearless documentarians continue capturing crucial narratives that challenge our comfortable assumptions. These voices—whether through film, audio, or emerging formats—remain our best defense against digital oversimplification.
Ensuring Literacy, Strengthening Local and Niche Journalism
“Garbage In, Garbage Out,” my grandmother would say during her time as a human computer—one of the mathematicians who calculated missile trajectories by hand before personal computers existed—at White Sands, whenever her team’s calculations produced faulty outputs due to incorrect inputs.
As Texas teacher Chanea Bond discovered, students relying on AI produced ‘really, really bad’ papers, lacking fundamental analytical skills.61Andrew Boryga, “Why I’m Banning Student AI Use This Year,” Edutopia, August 2, 2024, https://www.edutopia.org/article/banning-student-ai-use-chanea-bond/. Instead of rushing to rely on AI, learners and new AI users should be led by teachers to first build a bedrock of “core skills,” adding AI only later to truly enhance their capabilities. Finland’s systematic approach to digital literacy—where ninth-graders consistently lead global rankings in detecting misinformation—offers a proven model.62Jenny Gross, “How Finland Is Teaching a Generation to Spot Misinformation,” The New York Times, January 10, 2023, sec. World, https://www.nytimes.com/2023/01/10/world/europe/finland-misinformation-classes.html; Shane Horn and Koen Veermans, “Critical Thinking Efficacy and Transfer Skills Defend against ‘Fake News’ at an International School in Finland,” Journal of Research in International Education 18, no. 1 (April 1, 2019): 23–41, https://doi.org/10.1177/1475240919830003; Eliza Mackintosh, “Finland Is Winning the War on Fake News. Other Nations Want the Blueprint,” May 2019, https://www.cnn.com/interactive/2019/05/europe/finland-fake-news-intl; Amelia Nash, “Media Literacy A to Z: How Finland Is Arming Students Against Misinformation,” PRINT Magazine, August 27, 2024, https://www.printmag.com/culturally-related-design/media-literacy-a-to-z-how-finland-is-arming-students-against-misinformation/. Their integration of fundamental skills—reading, mathematics, finance, science, civics, critical thinking—throughout K-12 education creates a foundation for evaluating sources and using AI responsibly; Finland even gives every ninth grader a free media-literacy book.63Nash, “Media Literacy A to Z”; “ABCs of Media,” ABCs of Media, August 2024, https://abcsofmedia.com/. The Homework Apocalypse—where AI can complete most traditional assignments—presents an even subtler danger: students may uncritically accept AI’s plausible-sounding but potentially flawed connections and analysis, rather than developing the essential skills of teasing out genuine patterns and verifying historical or factual accuracy. This demands not just new forms of assessment, but teaching students to approach AI-suggested insights with scholarly skepticism.64Jackie Davalos and Leon Yin, “AI Detectors Falsely Accuse Students of Cheating—With Big Consequences,” Bloomberg.Com, October 18, 2024, https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations; Ethan Mollick, “Post-Apocalyptic Education,” August 20, 2024, https://www.oneusefulthing.org/p/post-apocalyptic-education.
Educators nationwide are discovering that phone bans improve not just attention spans but student wellbeing—a wake-up call for all of us. Phone manufacturers quietly acknowledge radiation risks in their fine print, warning against exactly what millions do routinely: keeping phones pressed against their bodies. Keep the phone away from your head when calling, and save your eyes: in iOS, open Settings > Accessibility > Accessibility Shortcut, try Reduce White Point first (toggle it by triple-clicking the Lock/Side button), then try Smart Invert. Use whichever you prefer.
Adults need the same kind of support our children do. Grassroots, ‘guerrilla’ outreach can deliver reliable news to communities drowning in misinformation, particularly where local and niche newspapers have vanished. Local journalism helps citizens distinguish truth from fiction, rebuilding trust and resilience against AI-driven manipulation. It’s not a giant leap: after state and local civil rights groups couldn’t match their 2020 funding to educate voters,65Maya King, “More Money Urgently Needed to Reach Younger and Minority Voters, Organizers Warn Harris Donors,” The New York Times, September 24, 2024, sec. U.S., https://www.nytimes.com/2024/09/24/us/politics/young-minority-voters-harris-campaign.html. Meta weeks later enabled targeted harassment campaigns of the kind that historically escalate into real-world violence.
The Planet’s Price of Progress: Climate Impacts of AI Infrastructure
A Massive Environmental Footprint
In 1894, London and New York faced what seemed an insurmountable crisis: their streets were drowning in horse manure.66“The Great Horse Manure Crisis of 1894,” Historic UK, accessed January 9, 2025, https://www.historic-uk.com/HistoryUK/HistoryofBritain/Great-Horse-Manure-Crisis-of-1894/. The very technology powering urban transportation threatened public health and city life itself. Today’s AI boom presents a remarkably similar challenge—computing infrastructure threatens to overwhelm our electrical grid and environment.
New research shows billions will lose their livelihoods and global economic output could plummet by up to 34% if Earth warms by 3°C this century—yet investing less than 2% of GDP now could eliminate most of those catastrophic losses. This economic reality makes addressing AI’s environmental footprint not just an ethical imperative but a financial necessity.67Amine Benayad et al., “Landing the Economic Case for Climate Action with Decision Makers,” March 12, 2025; Ruth Newkeen, “New Report from BCG and Cambridge on Climate-Change Investment – News & Insight,” Cambridge Judge Business School, March 12, 2025, https://www.jbs.cam.ac.uk/2025/new-report-from-bcg-and-cambridge-on-climate-change-investment/.
While some dismiss AI’s environmental impact by comparing individual queries to daily actions like eating meat, this narrow analysis dangerously understates the massive scale of AI infrastructure challenges.68Andy Masley, “Using ChatGPT Is Not Bad for the Environment,” Substack newsletter, The Weird Turn Pro (blog), January 13, 2025, https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for. Meta’s and Google Search’s shift to AI in 2024 significantly raised their energy footprints.69Isabel O’Brien, “Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?,” The Guardian, September 15, 2024, sec. Technology, https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech. Automated AI agents multiply this impact, often generating thousands of queries per second.
AI servers consume 10-100 times more energy than standard web/email servers,70Arman Shehabi et al., “United States Data Center Energy Usage Report,” June 1, 2016, https://doi.org/10.2172/1372902. demanding ever-larger facilities that lock regions into decades of ecological consequences. These hidden costs hit vulnerable communities hardest.
Infrastructure Reality vs. Utility Claims
Loudoun County, Virginia—which handles 70% of global email—has seen energy use surge 240% in five years, reaching 3.4 GW, with projections showing consumption tripling by 2028.71“Ben Inskeep on X: ‘5) Data Centers Provide Very Few Jobs. Other Economic Development Activity Produces 100x More Jobs per MW of Power Demand. https://T.Co/fpeZI9j0Ay / X,” X (formerly Twitter), January 29, 2025, https://x.com/Ben_Inskeep/status/1884731768915267837. Its grid is so strained that data centers also rely on over 4,000 constant, highly-polluting, noisy diesel generators.
America’s power system operates at just 53% average utilization—meaning our vast network of generation plants and transmission lines sits unused nearly half the time. The core challenge is getting power where and when it’s needed. Strategic deployment of American-made innovations—including storage, microgrids, advanced transmission, and local grid technologies—could solve this mismatch, but utilities often seek a different path.
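To make the utilization figure concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly demand profile and capacity figure are invented for illustration (chosen so the load factor lands near the 53% cited above), not actual grid data.

```python
# Stylized illustration of grid utilization (load factor): average demand divided by
# the capacity built to serve the annual peak. All numbers are invented for illustration.

installed_capacity_gw = 1_400  # assumed generation capacity, GW

# One stylized day of hourly demand, GW: low overnight, peaking in the evening.
hourly_demand_gw = [
    550, 530, 520, 515, 520, 560, 640, 720,
    780, 810, 830, 850, 870, 880, 890, 900,
    910, 920, 930, 900, 850, 760, 680, 600,
]

average_demand_gw = sum(hourly_demand_gw) / len(hourly_demand_gw)
utilization = average_demand_gw / installed_capacity_gw
peak_headroom_gw = installed_capacity_gw - max(hourly_demand_gw)

print(f"Average utilization: {utilization:.0%}")          # ~53% with these numbers
print(f"Idle capacity even at the daily peak: {peak_headroom_gw} GW")
```

The point of the sketch is simply that a system sized for its peak hour sits partly idle the rest of the time, which is why shifting flexible loads and unlocking existing capacity can matter as much as building new plants.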
Yet while utilities cite AI-driven growth to justify expensive new fossil fuel plants, independent analysis tells a different story. The Tennessee Valley Authority—home to what xAI bills as the world’s largest AI supercomputer—projects more modest demand growth. Their assessment aligns with Duke University’s Nicholas Institute’s 2025 analysis, which confirms our existing grid can handle significant new loads through flexible management, modernization, and distributed generation. A recent comprehensive energy report offers promising paths forward that align with values across the political spectrum—from market competition to environmental stewardship to consumer protection.72Ethan Howland, “Existing US Grid Can Handle ‘Significant’ New Flexible Load: Report,” Utility Dive, February 11, 2025, https://www.utilitydive.com/news/us-grid-headroom-flexible-load-data-center-ai-ev-duke-report/739767/.
Utilities are following a predictable playbook—starting the conversation with yesterday’s solutions because they earn guaranteed returns on new power plants in ways they don’t from grid improvements or efficiency. As a result, utility stock prices are surging on Wall Street.73Sharon Goldman, “Entergy’s Stock Is Surging after a $10 Billion Meta Deal — the CEO Says Mega AI Data Centers Are Changing the Utility Business,” Yahoo Finance, February 20, 2025, https://finance.yahoo.com/news/entergy-stock-surging-10-billion-223534445.html. Meanwhile, tech companies are already generating more of their own clean power to meet their immediate AI-driven demand. But if utilities push rates too high, these companies might eventually move all their operations behind the meter—stranding expensive assets and forcing regular customers to shoulder an even heavier burden. The result? A perverse system forcing everyday people to pay more to burn the dirtiest fossil fuels in vulnerable communities to generate digital pollution and misinformation.
Utilities make substantially more money over a gas plant’s lifetime than if a large industrial user brings their own power generation. This incentivizes fighting against the very technological innovations that could reduce costs for consumers and pollution for communities. That’s what petro-oligarchs are afraid of too—that the world of cheap American clean power made with our friends in democracies that respect workers and the planet is within reach. But incumbents and monopolists in our country and outside it fight to keep us tethered to a global market of toxic substances. We can do better.
Local Communities Bear the Burden
Local officials in Loudoun County report facing three stark choices: government-imposed energy constraints, breakthrough technology and efficiencies, or rapid development of clean onsite microgrids. The gap between utility claims and technical reality highlights why communities need accurate data to make informed infrastructure decisions.
A single Amazon campus in Indiana will consume over 2,200 MW—more than many cities. Major tech companies will triple both the state’s electricity sales this decade and its carbon pollution over 20 years. Indiana’s sales tax exemption surrenders $23 billion in data center revenue, while these facilities receive enormous public subsidies and force ratepayers to shoulder billions in infrastructure costs, perpetuated by decades-old fossil fuel subsidies. State boundaries offer no protection from these hidden subsidies: Kentucky’s PSC and Attorney General recently documented through a FERC complaint how their residential customers are being charged for transmission upgrades serving Big Tech data centers in Indiana and Ohio—revealing the urgent need for interstate regulatory reform.74“Complaint of Kentucky Public Service Commission and Attorney General of the Commonwealth of Kentucky v. American Electric Power Service Corporation, et al. under EL25-67,” Federal Energy Regulatory Commission Accession Number 20250312-5238, accessed March 30, 2025, https://elibrary.ferc.gov/eLibrary/filelist?accession_number=20250312-5238&optimized=false.
By 2029, AI facilities could require an additional 128 gigawatts—equivalent to powering a small nation. Data centers could devour 17% of all U.S. electricity by 2030, quadrupling today’s consumption.75Goldman Sachs, “AI Is Poised to Drive 160% Increase in Data Center Power Demand,” May 14, 2024, https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand. While China expects renewables to meet all new electricity demand by 2025, effectively capping coal power growth, U.S. utilities continue expanding fossil fuel infrastructure.76Gavin Maguire, “US Power System Becomes More Fossil-Dependent than China’s,” Reuters, October 25, 2024, sec. Energy, https://www.reuters.com/business/energy/us-power-system-becomes-more-fossil-dependent-than-chinas-maguire-2024-10-25/. The EPA now distorts reality by challenging the established finding that carbon pollution endangers human health—a scientific conclusion reached decades ago—creating a dangerous regulatory environment in which polluters would face fewer constraints.77Oliver Milman, “The Trump EPA’s Baffling New Agenda Consists of Throttling Major Environmental Rules – Mother Jones,” March 13, 2025, https://www.motherjones.com/politics/2025/03/lee-zeldin-epa-deregulation-donald-trump-pollution-climate-emissions-rules-environmental-protection/.
In North Omaha, where people of color make up 68% of the population and asthma rates rank among the nation’s highest, residents watched the 2023 deadline to stop burning coal vanish. Meta’s Nebraska facility alone consumed nearly as much electricity as the North Omaha coal units produced in 2023, while Google’s Nebraska usage ranks as its highest in the U.S. Though both tech giants claim “net zero” impact through distant renewable contracts, they remain silent about local fossil fuel use. Energy economists concluded: “If not for the data centers and poor planning by the utility, they would not need to push to keep those coal units open.”
Downriver in Memphis, xAI’s facility depletes local aquifers of 1 million gallons daily despite available treated wastewater nearby.78Dara Kerr, “How Memphis Became a Battleground over Elon Musk’s xAI Supercomputer,” NPR, September 11, 2024, sec. Business, https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution. The data center replaced an oven factory that once employed thousands with a facility providing dozens of jobs, and its new methane gas generators add pollution to neighborhoods already suffering elevated asthma and mortality rates.79Mandy Hrach, “FOX13 INVESTIGATES: Why More Children Suffer from Asthma in Memphis than Other Cities,” FOX13 Memphis, June 29, 2023, https://www.fox13memphis.com/news/fox13-investigates-why-more-children-suffer-from-asthma-in-memphis-than-other-cities/article_88d71b52-16bd-11ee-987f-d3bed23374ba.html; Myracle Wicks, Tarvarious Haywood, and Bria Bolden, “Elon Musk’s xAI to Build Multi-Billion-Dollar Supercomputer Project in Memphis,” WMC Action 5 News, June 5, 2024, https://www.actionnews5.com/2024/06/05/elon-musk-build-multi-billion-dollar-ai-supercomputer-project-memphis/.
Ending Utility Secrecy and Empowering Consumers
The cost-shifting between data centers and utilities reveals this manipulation. While Exelon fought Amazon’s attempt to force Mid-Atlantic ratepayers to cover its data center costs,80Ethan Howland, “Talen-Amazon Interconnection Agreement Needs Extended FERC Review: PJM Market Monitor,” Utility Dive, July 11, 2024, https://www.utilitydive.com/news/talen-amazon-interconnection-agreement-ferc-constellation-vistra/721066/; “FERC Rejects Interconnection Pact for Talen-Amazon Data Center Deal at Nuclear Plant | Utility Dive,” accessed December 25, 2024, https://www.utilitydive.com/news/ferc-interconnection-isa-talen-amazon-data-center-susquehanna-exelon/731841/; “FERC Rejects Interconnection Deal for Talen-Amazon Data Centers,” accessed December 26, 2024, https://www.ans.org/news/article-6534/ferc-rejects-interconnection-deal-for-talenamazon-data-centers/. its ComEd subsidiary will shift 90% of a $5 billion grid expansion onto Illinois ratepayers81Ethan Howland, “AEP, Exelon Oppose Talen-Amazon Interconnection Pact to Protect Rate Base Growth Potential: Constellation,” Utility Dive, July 24, 2024, https://www.utilitydive.com/news/aep-exelon-talen-amazon-interconnection-rate-base-ferc-constellation/722246/.—showing how both tech companies and utilities maximize profits at public expense. A new paper reviews nearly 50 regulatory proceedings in which data centers shifted their utility costs onto consumers and other businesses.82Eliza Martin and Ari Peskoe, “Extracting Profits from the Public: How Utility Ratepayers Are Paying for Big Tech’s Power,” March 5, 2025.
Secret negotiations over data center pricing undermine fair rates and encourage venue-shopping. Congress could eliminate these problems by banning confidential agreements between utilities and data centers, requiring cost-based rates without special carveouts or subsidies. While Texas’s Smart Meter program provides usage data that has saved ratepayers up to 18%, New York’s Integrated Energy Data Resource (IEDR) will include both usage and cost information—an even better model for transparency. States could also require new large loads to procure a percentage of demand from virtual power plants, similar to California’s proposal. The FTC needs expanded enforcement power to investigate utilities blocking customer data access—practices that would be illegal in banking or healthcare.
Data centers create dangerous ‘bad harmonics’—electrical distortions that damage nearby residents’ appliances and increase fire risks in both urban and rural areas. Communities bear these costs through higher insurance rates and damaged electronics.83Leonardo Nicoletti, Naureen Malik, and Andre Tartar, “AI Needs So Much Power, It’s Making Yours Worse,” Bloomberg.Com, December 27, 2024, https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/. Utilities say they’re not the problem but refuse to share exculpatory data.
The energy rate problem is so severe that most states’ failed regulatory approach allows investor-owned utilities to enjoy ratepayer-guaranteed returns nearly double their market-based cost of capital. This costs families roughly $300 a year, about $50 billion in total, while slowing the transition to cleaner, more efficient systems. A new paper argues Congress and states should pass legislation setting the rate of return at the actual cost of capital, ending this legalized extraction from American families.84Mora, “Update”; Shaw, “MAHA?”; Clark, “‘A Slap In The Face.’”
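A minimal sketch of the arithmetic behind that claim, with entirely assumed inputs; the rate base, allowed return, and cost of capital below are hypothetical, chosen only to land in the same ballpark as the figures cited above.

```python
# Toy model: excess utility returns = rate base x (allowed return - true cost of capital),
# spread across households. Every input here is an assumption for illustration.

equity_rate_base = 1_000_000_000_000  # investor-owned utilities' equity rate base, $ (assumed)
allowed_return = 0.10                 # return regulators guarantee on that capital (assumed)
true_cost_of_capital = 0.055          # roughly half the allowed return (assumed)
us_households = 130_000_000           # approximate number of US households

excess_payments = equity_rate_base * (allowed_return - true_cost_of_capital)
per_household = excess_payments / us_households

print(f"Excess payments to utility investors: ${excess_payments / 1e9:.0f} billion per year")
print(f"Cost per household: about ${per_household:.0f} per year")
```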
Solutions for Sustainable Infrastructure
Unlike the 1894 Horse Manure Crisis, clean energy solutions to our AI power challenge already exist and often cost less than fossil fuels.85Keith J. Benes, Joshua E. Porterfield, and Charles Yang, “AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy – United States Department of Energy,” April 2024. The Department of Energy’s Loan Programs Office leverages its expertise to help banks commercialize technologies from research programs like ARPA-E, generating positive revenue while accelerating American innovation—precisely the kind of efficient government initiative that fiscal conservatives should champion.
Smart load management offers an alternative to costly new construction. Some data centers participate in demand response programs, agreeing to reduce power during peak stress in exchange for lower rates.
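As a rough illustration of that bargain, the sketch below uses hypothetical load, rate, and curtailment figures; actual program terms vary by utility and tariff.

```python
# Demand response in miniature: a data center accepts a discounted rate in exchange for
# shedding part of its load when the grid is stressed. All figures are hypothetical.

facility_load_mw = 300      # steady data-center draw (assumed)
curtailable_share = 0.25    # share of load it agrees to shed on request (assumed)
standard_rate = 80.0        # $/MWh without the program (assumed)
dr_rate = 74.0              # discounted $/MWh for participating (assumed)
hours_per_year = 8760

peak_relief_mw = facility_load_mw * curtailable_share
annual_savings = facility_load_mw * hours_per_year * (standard_rate - dr_rate)

print(f"Peak capacity the utility avoids building: {peak_relief_mw:.0f} MW")
print(f"Data center's annual bill savings: ${annual_savings / 1e6:.1f} million")
```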
While most tech companies struggle with sustainability, mid-sized firms like Stripe demonstrate creative solutions, showing how data centers could affordably operate off-grid through smart integration of solar, battery storage, and methane gas generators—though connecting to the grid remains more cost-effective, even at reduced capacity.86Kyle Baranko et al., “Fast, Scalable, Clean, and Cheap Enough,” December 18, 2024, https://www.offgridai.us/; Tyler Norris, “Tyler Norris on X: ‘@duncan__c Most Balancing Authorities Have Vastly Underutilized Existing Gen Capacity & T&D Networks in Most Hours, so Going 100% BTM Is Redundant No? Not to Mention Huge Land Requirement and Need for on-Site Gas Delivery…’ / X,” X (formerly Twitter), December 19, 2024, https://x.com/tylerhnorris/status/1869590251339161907; Jigar Shah, “Jigar Shah on X: ‘@tylerhnorris @duncan__c Exactly, You Might Be Forced to Undersize the Grid Connection, but It Is Never Economically Better to Go off-Grid from an Economic Optimization Standpoint.’ / X,” X (formerly Twitter), December 19, 2024, https://x.com/JigarShahDC/status/1869592001022407117.
Grid-enhancing technologies (GETs) offer immediate solutions. Smart sensors and software can increase transmission capacity by over 33%, while advanced controls redirect power around congested lines. These technologies could save ratepayers billions annually while enabling more renewable energy connections. States are leading this transformation—California, Minnesota, South Carolina, Utah, and Virginia now require utilities to evaluate GETs in their resource planning. Several states have enacted performance-based rate regulations, aligning incentives toward improving resilience, affordability, and reducing pollution rather than just building more infrastructure.
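One way to picture how GETs such as dynamic line ratings unlock capacity: instead of a single conservative static rating, operators use weather-aware hourly ratings. The ratings in the sketch below are invented for illustration; actual gains depend on the conductor, weather, and congestion patterns.

```python
# Toy dynamic-line-rating example. A line is traditionally capped at a single static rating;
# sensors tracking wind and temperature let operators run it harder in most hours.
# All ratings below are invented for illustration.

static_rating_mva = 1000

hourly_dynamic_ratings_mva = [
    1500, 1520, 1540, 1530, 1500, 1450, 1400, 1330,
    1260, 1200, 1150, 1100, 1060, 1040, 1060, 1100,
    1160, 1240, 1320, 1390, 1440, 1470, 1490, 1500,
]

avg_uplift = sum(r / static_rating_mva - 1 for r in hourly_dynamic_ratings_mva) / len(
    hourly_dynamic_ratings_mva
)
print(f"Average extra capacity unlocked: {avg_uplift:.0%}")  # ~32% with these numbers
```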
Isolated Grids, Universal Lessons
Texas presents both promises and warnings. The state generates massive clean power—31 GW from wind and solar plus 5 GW from nuclear as of 2024. Its ‘connect and manage’ interconnection process enabled its grid operator, ERCOT, to add about three times more new generation than multi-state PJM (the nation’s largest grid operator), and ERCOT outperformed PJM by an even wider margin in 2022.87William Driscoll, “Bringing ERCOT’s Speedy Interconnection Process to the Rest of the U.S.,” pv magazine USA, September 5, 2023, https://pv-magazine-usa.com/2023/09/05/bringing-ercots-speedy-interconnection-process-to-the-rest-of-the-u-s/; Ethan Howland, “Can ERCOT Show the Way to Faster and Cheaper Grid Interconnection?,” Utility Dive, November 27, 2024, https://www.utilitydive.com/news/connect-and-manage-grid-interconnection-ferc-ercot-transmission-planning/698949/. Texas also pioneered consumer-empowering data policies that enable 6-18% savings88“Mission:Data on X: ‘In Texas, without Doing *anything* Special (No Requests or Signing up for Programs), This REP Provides Helpful Weekly Email Reports of Household Usage. One of the Many Benefits of Consistent 15-Minute Metering and Centralized Data Access. https://T.Co/GssvUuYITQ’ / X,” X (formerly Twitter), October 21, 2024, https://x.com/mission_data/status/1848421782048674252; Mission:Data, “Texas Decision Enhances Customer Choice of Advanced Energy Providers,” Mission:data, May 10, 2018, https://www.missiondata.io/news/2018/5/10/texas-decision-enhances-customer-choice-of-advanced-energy-providers. (New York improved on those regulations by also requiring utilities to pass along cost data.) However, Texas’s over-reliance on methane gas infrastructure proved catastrophic during the Big Freeze. The system’s collapse contributed to 700 deaths, disproportionately affecting low-income communities of color, illustrating the deadly costs of infrastructure dependency on a single fuel source.89“The Texas Big Freeze: How a Changing Climate Pushed the State’s Power Grid to the Brink,” Utility Dive, accessed January 31, 2025, https://www.utilitydive.com/news/the-texas-big-freeze-how-a-changing-climate-pushed-the-states-power-grid/601098/.
Similarly, California’s experience offers critical lessons. While California pioneered many grid innovations, its isolated transmission system and outdated utility liability rules create unnecessary risks and costs. Despite billions spent on wildfire damage, we know how to prevent homes from burning—controlling a home’s 100-foot perimeter ignition zone and following the National Fire Protection Association checklist can dramatically decrease fire risk.90National Fire Protection Association Wildfire Division, “NFPA – Preparing Homes for Wildfire,” FireWise, accessed September 20, 2024, https://www.nfpa.org/education-and-research/wildfire/preparing-homes-for-wildfire. Community microgrids with rigorous safety protocols could help utilities avoid catastrophic wildfire risks while improving reliability, protecting both ratepayers and utilities from preventable disasters. Yet consistently, we fail to rebuild with the foresight needed to withstand inevitable future disasters.
Critical Supply Chain Challenges
Supply chain bottlenecks threaten this modernization. Severe transformer and substation shortages—with lead times now stretching beyond 24 months—create an urgent need to maximize existing assets.
This is the time for an all-hands-on-deck strategy. While utilities will keep natural gas as a priority, increasing planned exports could raise prices for domestic manufacturers. We must diversify our energy sources and support industries friendshoring our energy supply chains to ensure we win the race for a clean economy without being beholden to another set of oligarchs.91“Over 130 Industry Leaders Express Support for Supply Chain Provisions in America COMPETES Act | U.S. Representative Lisa Blunt Rochester,” November 6, 2024, https://web.archive.org/web/20241106205924/https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=2833; “Rep. Blunt Rochester Praises House Passage of Her Bipartisan Promoting Resilient Supply Chains Act | U.S. Representative Lisa Blunt Rochester,” May 16, 2024, https://web.archive.org/web/20240516183243/https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=4160; Sarah Rathke, “Supply Chain Legislation On The Horizon,” Global Supply Chain Law Blog, June 12, 2024, https://www.globalsupplychainlawblog.com/supply-chain/supply-chain-legislation-on-the-horizon/; Milo McBride, “Leaping Ahead: U.S. Innovation and the Future of Clean Energy,” Carnegie Endowment for International Peace, October 24, 2024, https://carnegieendowment.org/events/2024/10/leaping-ahead-us-innovation-and-the-future-of-clean-energy?lang=en; Milo McBride, “Catching Up or Leaping Ahead? How Energy Innovation Can Secure U.S. Industrial Stature in a Net-Zero World,” Carnegie Endowment for International Peace, September 19, 2024, https://carnegieendowment.org/research/2024/09/energy-innovation-us-industrial-stature?lang=en.
Reducing the federal workforce will directly undermine these efforts. As senior regulatory experts warn, reducing agency expertise doesn’t just slow permitting—it can turn a one-year process into a three-to-four-year one. The benefits of simply doing away with permitting may also be overblown: business-community watchdogs say Chicago’s unfinished Block 37 Superstation became a boondoggle because officials spent money without process and public input.92Phil Rogers and Katy Smyser, “How Chicago Spent $400M On a Subway Superstation to Nowhere,” NBC Chicago (blog), February 23, 2015, https://www.nbcchicago.com/news/local/how-chicago-spent-400m-on-a-subway-superstation-to-nowhere/59087/.
The new administration faces a pivotal choice: embrace the hundreds of thousands of jobs (especially in the trades) and $450 billion in private investment already flowing from landmark legislation, or jeopardize America’s energy advantage and push monthly household energy bills up by at least 10%.93Chiara Boye, “Removal of Technology-Neutral Clean Energy Tax Credits Could Cost Upwards of $336 Billion In Investment, Increase Electricity Bills 10% For Consumers,” Aurora Energy Research (blog), January 6, 2025, https://auroraer.com/media/reform-to-clean-energy-tax-credits/. With families already struggling with inflation, blocking clean energy deployment directly threatens both household budgets and national competitiveness.94Bryan Bennett, “More Than Seven in Ten Americans Support the Inflation Reduction Act,” Navigator (blog), April 30, 2024, https://navigatorresearch.org/more-than-seven-in-ten-americans-support-the-inflation-reduction-act/. Companies across the country stand ready to leverage these laws to create jobs and consumer savings95Shah, “Jigar Shah on X.”—but reversing course on energy modernization would be like deliberately breaking all our eggs on the way home from the store. Competitive entrepreneurs, conservatives, conservationists, and consumer advocates will find much to appreciate in a new report charting the path to a better energy future.96“Innovating Future Power Systems: From Vision to Action,” American Enterprise Institute – AEI (blog), accessed February 27, 2025, https://www.aei.org/research-products/report/innovating-future-power-systems-from-vision-to-action/.
Citizen Action in Utility Decisions
State Public Utilities Commissions (PUCs) make crucial decisions about energy costs and data center impacts through public proceedings that directly affect utility bills and community health. These venues offer concrete opportunities for citizen involvement.97“How to Talk to Your Friendly Neighborhood Public Utility Regulator – Heatmap News,” accessed March 10, 2025, https://heatmap.news/podcast/shift-key-s2-e27-charles-hua. When informed, organized residents participate in PUC hearings, they’ve successfully challenged utilities’ inflated projections and secured better protections against unfair cost allocation. Groups focused on urgent pollution reduction and affordable clean energy are playing crucial roles in preventing costly boondoggles while building sustainable, reliable alternatives.
Power to the Public: Scaling Solutions
Addressing AI’s environmental impact requires systemic change, not just individual usage modifications or incremental efficiency improvements. Big Tech’s carbon-free energy promises clash with exploding AI demand, showing the urgent need for:
- Hardening grid infrastructure and deploying proven technologies like GETs
- Moving from Texas’s usage data to New York’s comprehensive cost transparency
- Ending utility secrecy and data center subsidy schemes
- Protecting consumers from hidden data center subsidies
- Using laws that promote private investment and consumer savings.
While technical and regulatory solutions can address AI’s environmental impact, they represent just one aspect of a broader accountability challenge. As AI systems grow more powerful and autonomous, we must establish frameworks that ensure responsible development and deployment.
Governing AI: Legal Frameworks for a New Era
AI’s Accountability Challenge
The Westfield High School incident in New Jersey—where students allegedly used Clothoff’s AI to generate non-consensual intimate images—at least offers a difficult-to-find identifiable party for accountability.98Michael Safi et al., “Revealed: The Names Linked to ClothOff, the Deepfake Pornography App,” The Guardian, February 29, 2024, sec. Technology, https://www.theguardian.com/technology/2024/feb/29/clothoff-deepfake-ai-pornography-app-names-linked-revealed; Charles Toutant, “An AI Took Her Clothes Off. Now a New Lawsuit Will Test Rules for Deepfake Porn,” New Jersey Law Journal, February 5, 2024, https://www.law.com/njlawjournal/2024/02/05/an-ai-took-her-clothes-off-now-a-new-lawsuit-will-test-rules-for-deepfake-porn/. Yet as AI tools multiply, victims increasingly face anonymous harassment from untraceable sources, rendering traditional legal remedies futile.
These aren’t theoretical problems. Meta’s “FungiFriend” AI chatbot, with its friendly wizard icon, gave potentially lethal mushroom foraging advice, demonstrating that AI systems can create immediate public safety risks even when operating exactly as designed.99Jason Koebler, “AI Chatbot Added to Mushroom Foraging Facebook Group Immediately Gives Tips for Cooking Dangerous Mushroom,” 404 Media, November 12, 2024, https://www.404media.co/ai-chatbot-added-to-mushroom-foraging-facebook-group-immediately-gives-tips-for-cooking-dangerous-mushroom/. Unlike traditional published content, which courts have historically protected from liability, AI systems actively generate new, potentially dangerous content in real-time interactions with users.
Corporate personhood, our traditional framework for limiting business liability, assumes corporations act through human agents. However, AI breaks this model: it can develop capabilities and take actions beyond what any human intended or controlled. When AI systems operate autonomously across jurisdictions or through distributed networks, even identifying responsible parties could become nearly impossible.100Hon. Katherine B. Forrest (Fmr.), “The Ethics and Challenges of Legal Personhood for AI,” accessed January 8, 2025, https://www.yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai.
Tort law’s fundamental premise—identifying who caused harm—faces unprecedented challenges in this new reality. While courts will play a crucial role in addressing AI harms, traditional case-by-case solutions cannot fully address systemic challenges.
At the core of the debate is whether product liability or negligence standards better suit AI harms.101Bryan H. Choi, “Negligence Liability for AI Developers,” Lawfare Media, September 25, 2024, https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers; James A. Henderson Jr., “Learned Hand’s Paradox: An Essay on Custom in Negligence Law,” California Law Review, 2017, https://doi.org/10.15779/Z38585V; Joshua Turner and Nicol Turner Lee, “Misrepresentations of California’s AI Safety Bill,” Brookings, September 27, 2024, https://www.brookings.edu/articles/misrepresentations-of-californias-ai-safety-bill/; James M. Beck, “New Decision Directly Addresses the ‘Is Software a Product’ Question,” Drug & Device Law, May 2, 2022, https://www.druganddevicelawblog.com/2022/05/new-decision-directly-addresses-the-is-software-a-product-question.html; Catherine Sharkey, “Products Liability for Artificial Intelligence,” September 25, 2024, https://www.lawfaremedia.org/article/products-liability-for-artificial-intelligence; Hon. John G. Browning, “A Product by Any Other Name? The Evolving Trend of Product Liability Exposure for Technology Platforms,” Elon Law Review 16, no. 1 (September 22, 2023): 181–219. While product liability could strengthen software safety without requiring plaintiffs to prove specific development failures, AI systems pose unique challenges to this framework. Unlike traditional products, AI platforms continuously evolve through updates and user interactions, often invisibly to users. Such evolution fundamentally challenges the definition of a “product” under the law. Negligence law’s reasonableness standard might offer a more adaptable solution, focusing on developer behavior rather than product definitions, and holding AI creators to objective standards of proper conduct rather than merely common practice.
Two approaches emerge as potential solutions:
- The Insurance Model: A former judge suggests a mandatory insurance model requiring entities deploying AI systems publicly to carry coverage similar to no-fault auto insurance. Though careful distinctions might be needed between major AI developers and individual users, such a system could compensate victims for harms while incentivizing responsible development.102Hon. Katherine B. Forrest (Fmr.), “The Ethics and Challenges of Legal Personhood for AI.”
- The Superfund Approach: An AI Superfund could address broader societal impacts. Following established environmental law principles, modest fees on computational resources or data used in AI development could create funds for cleanup and mitigation; a rough sketch of how such a fee might accrue follows this list. While acknowledging inevitable harms, this approach ensures financial preparedness for remediation. Superfund’s enduring bipartisan support demonstrates how programs with direct community impact can bridge political divides.103Kevin Frazier, “Building an AI Superfund: Lessons from Climate Change Legislation | TechPolicy.Press,” Tech Policy Press, October 10, 2024, https://techpolicy.press/building-an-ai-superfund-lessons-from-climate-change-legislation; E.A. Crunden, “Trump Leaves Murky Superfund Legacy – E&E News by POLITICO,” January 13, 2021, https://www.eenews.net/articles/trump-leaves-murky-superfund-legacy/.
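A minimal sketch of how such a fee could accrue, assuming a hypothetical per-accelerator-hour charge on large-scale training; the fee level and compute volume are invented, since the proposal above specifies the mechanism rather than numbers.

```python
# AI Superfund sketch: a small per-unit fee on training compute accrues into a remediation
# fund, echoing per-unit fees in environmental law. All figures are hypothetical.

fee_per_accelerator_hour = 0.05         # dollars per GPU/TPU-hour of covered training (assumed)
covered_hours_per_year = 2_000_000_000  # industry-wide accelerator-hours subject to the fee (assumed)

annual_accrual = fee_per_accelerator_hour * covered_hours_per_year
print(f"Annual fund accrual: ${annual_accrual / 1e6:.0f} million")  # $100 million here
```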
These aren’t future problems. AI is already concentrating wealth and power while externalizing costs to society. While tech leaders profit from disruption, communities shoulder the consequences. Whistleblower protections are also urgently needed.104Sophie Luskin, “Need for Whistleblower Protections in Artificial Intelligence Industry Discussed in Senate Judiciary Hearing,” Whistleblower Network News (blog), September 24, 2024, https://whistleblowersblog.org/corporate-whistleblowers/need-for-whistleblower-protections-in-artificial-intelligence-industry-discussed-in-senate-judiciary-hearing/; Mary Allain, “Catastrophic AI Risks Highlight Need for Whistleblower Laws,” Government Accountability Project (blog), June 10, 2024, https://whistleblower.org/in-the-news/techtarget-catastrophic-ai-risks-highlight-need-for-whistleblower-laws/; Courtney Hague Andrews et al., “DOJ to Evaluate AI Risk Management and Whistleblower Protections in Corporate Compliance Programs | White & Case LLP,” October 4, 2024, https://www.whitecase.com/insight-alert/doj-evaluate-ai-risk-management-and-whistleblower-protections-corporate-compliance; Ben Kingsley, Jen Hitchcock, and Ran Ben-Tzur, “Important Whistleblower Protection and AI Risk Management Updates,” The Harvard Law School Forum on Corporate Governance (blog), October 30, 2024, https://corpgov.law.harvard.edu/2024/10/30/important-whistleblower-protection-and-ai-risk-management-updates/.
From Code to Consequences: Preventing Systemic AI Failures
As AI systems grow more autonomous, fundamental questions of control and ownership emerge: Can AI ‘entities’ legally hold and manage assets? Who maintains ultimate control? At its core, algorithmic transparency represents a power dynamic. When platforms hide their decision-making processes, they’re effectively running black-box governance over our digital lives.
Recent “pro-social media” research argues these systems should strengthen rather than erode the social fabric they depend on. This requires dual transparency: both verified original provenance showing where content comes from and social provenance indicating which communities it serves by bridging divides or balancing diverse perspectives.105E. Glen Weyl et al., “Prosocial Media” (arXiv, March 14, 2025), https://doi.org/10.48550/arXiv.2502.10834.
The 2024 CrowdStrike incident illustrates how black-box systems of all types create systemic risk: a single faulty update paralyzed operations for thousands of businesses, hospitals, and governments, showing how ignoring basic software safety practices can cascade into widespread societal harm.106Sean Michael Kerner, “CrowdStrike Outage Explained: What Caused It and What’s next,” TechTarget, July 26, 2024, https://www.techtarget.com/whatis/feature/Explaining-the-largest-IT-outage-in-history-and-whats-next; Lakshmi Varanasi, “CrowdStrike CEO Has Twice Been at Center of Global Tech Failure – Bus…,” archive.is, July 22, 2024, https://archive.is/i7jdU; Internet Security Research Group, “What Is Memory Safety and Why Does It Matter?,” Prossimo, accessed December 31, 2024, https://www.memorysafety.org/docs/memory-safety/; National Security Agency press release, “NSA Releases Guidance on How to Protect Against Software Memory Safety Issues,” National Security Agency/Central Security Service, November 10, 2022, https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/. When companies deploy powerful tools like kernel drivers that can affect millions of users, they should face meaningful accountability, including potential criminal penalties, balanced with clear safe harbor provisions for adopting memory-safe programming languages and other proven safety measures.
As AI systems become increasingly integrated into complex software infrastructure, the potential for cascading failures grows exponentially. This demonstrates why we need algorithmic transparency, stronger preventive measures for basic software safety, and clear liability frameworks that account for AI’s unique capabilities and risks.
Policy Pathways & State Leadership
State-Led Innovation: Laboratory for AI Governance
While federal action remains crucial, states are emerging as vital laboratories for AI governance, using their inherent powers and developing models that can inform national policy.107Brian S. Mandell et al., “Eight Types of Power City Leaders Can Use When Negotiating,” Bloomberg Harvard City Leadership Initiative, February 5, 2024, https://live-bloomberg-harvard-city-leadership-initiative-2023.pantheonsite.io/resources/eight-types-of-power-city-leaders-can-use-when-negotiating/. Twenty states now have privacy laws, with California and Maryland setting the highest bar.108Caitriona Fitzgerald and Matt Schwartz, “A New Model for State Privacy Legislation | TechPolicy.Press,” Tech Policy Press, January 6, 2025, https://techpolicy.press/a-new-model-for-state-privacy-legislation; “U.S. State Privacy Laws,” EPIC – Electronic Privacy Information Center (blog), accessed January 11, 2025, https://epic.org/issues/privacy-laws/state-laws/. Through global forums and interstate coalitions, state legislators and attorneys general can champion human-centered values in AI development while federal action lags. Successful privacy frameworks embrace data minimization, enable individuals to protect their own rights, eliminate deceptive design practices, and reinforce existing civil rights.
Tech companies’ stance on state privacy laws is revealing: while publicly supporting state frameworks, they often lobby for weaker standards than California’s comprehensive protections. Yet these same companies comply profitably with California’s stricter rules in their home state, and many simply apply California’s standards nationwide. Some even make privacy a marketing advantage, demonstrating how strong protections can align with business interests while reducing regulatory complexity.
Digital Rights and User Control: Strengthening Privacy & Infrastructure
Recently ended preliminary Federal Trade Commission (FTC) investigations reveal how companies increasingly use personal data—from mouse movements to shopping cart abandonment—to set individualized prices, often without consumer knowledge or consent. This “surveillance pricing” affects everything from cosmetics to groceries, fundamentally altering how companies compete and consumers shop. Despite law enforcement’s claims that facial recognition merely provides investigative leads, the technology has directly led to wrongful arrests and detentions.109Matthew Guariglia, “Police Use of Face Recognition Continues to Wrack Up Real-World Harms,” Electronic Frontier Foundation, January 15, 2025, https://www.eff.org/deeplinks/2025/01/police-use-face-recognition-continues-wrack-real-world-harms. Where you live matters too: merchants charge different prices for hotel rooms based on where they think you’re coming from.110“FTC Surveillance Pricing Study Indicates Wide Range of Personal Data Used to Set Individualized Consumer Prices,” Federal Trade Commission, January 17, 2025, https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-surveillance-pricing-study-indicates-wide-range-personal-data-used-set-individualized-consumer; Erin Cabrey, “New FTC Chair Shuts down Public Comment on Retailers’ Surveillance Pricing,” Retail Brew, January 24, 2025, https://www.retailbrew.com/stories/2025/01/24/new-ftc-chair-shuts-down-public-comment-on-retailers-surveillance-pricing; Jake Johnson, “‘Unthinkable’: Trump FTC Chair Shuts Down Public Comments on Corporate Pricing Tactics | Common Dreams,” accessed February 3, 2025, https://www.commondreams.org/news/trump-ftc-chair. Beyond price discrimination, these practices represent a broader crisis in digital rights. Legislation could establish frameworks that shift power from platforms back to citizens—enabling them to understand why they see specific content, control how their data is used for pricing, and choose algorithms that promote “time better spent” rather than maximizing engagement.111Casey Newton, “‘Time Well Spent’ Is Shaping up to Be Tech’s next Big Debate,” The Verge, January 18, 2018, https://www.theverge.com/2018/1/17/16903844/time-well-spent-facebook-tristan-harris-mark-zuckerberg. This approach builds on successful precedents like phone number portability, which transformed telecommunications by giving consumers real choice without micromanaging business practices.112“Porting: Keeping Your Phone Number When You Change Providers | Federal Communications Commission,” November 17, 2023, https://www.fcc.gov/consumers/guides/porting-keeping-your-phone-number-when-you-change-providers.
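To make the mechanism concrete, here is a deliberately simplified sketch of how behavioral signals could feed an individualized quote; the signals and weights are invented, since real systems are proprietary, which is precisely the transparency problem described above.

```python
# Stylized "surveillance pricing": the same item is quoted at different prices depending on
# inferred behavioral signals. Signals and weights are invented for illustration only.

BASE_PRICE = 100.00

def quoted_price(abandoned_cart: bool, affluent_zip: bool, urgent_searches: int) -> float:
    price = BASE_PRICE
    if abandoned_cart:
        price *= 0.93   # quietly discount to win back a hesitant shopper
    if affluent_zip:
        price *= 1.08   # charge more where the model predicts willingness to pay
    price *= 1 + min(urgent_searches, 5) * 0.01  # urgency inferred from repeated searches
    return round(price, 2)

print(quoted_price(abandoned_cart=True, affluent_zip=False, urgent_searches=0))   # 93.0
print(quoted_price(abandoned_cart=False, affluent_zip=True, urgent_searches=4))   # 112.32
```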
Breaking with several other courts that had upheld the FCC’s authority over high-speed Internet, a circuit court decision stripped the Federal Communications Commission of its authority over Internet access. This ruling could let ISPs charge premium fees for AI access or force consumers into expensive bundles for basic AI tools. While six states have net neutrality laws that effectively protect consumers nationwide (as networks avoid operating dual systems), this patchwork approach leaves significant gaps in cybersecurity oversight.113Jason Fuller, Juana Summers, and Patrick Jarenwattananon, “What May Be next after a Federal Court Struck down the FCC’s Net Neutrality Rules: NPR,” accessed January 11, 2025, https://www.npr.org/2025/01/06/nx-s15247750/what-may-be-next-after-a-federal-court-struck-down-the-fccs-net-neutrality-rules.
Congress gave the FCC authority in Title II of the Communications Act to protect network reliability and security; no other federal agency can establish even basic cybersecurity standards for our critical Internet infrastructure. Incoming FCC Chair Brendan Carr’s Project 2025 chapter proposes abandoning traditional consumer protection in favor of sweeping new powers over social media and AI.114Brian Stelter, “Brendan Carr Wrote the FCC Chapter in ‘Project 2025.’ Now He’s Trump’s Pick for the Agency | CNN Business,” CNN, November 18, 2024, https://www.cnn.com/2024/11/18/media/brendan-carr-trump-fcc-nominee-project-2025/index.html; Megan Lebowitz, “Trump Picks Brendan Carr to Lead the Federal Communications Commission,” NBC News, November 18, 2024, https://www.nbcnews.com/politics/donald-trump/trump-brendan-carr-federal-communications-commission-rcna180567; John R. Vile, “Brendan Carr,” The Free Speech Center, January 4, 2024, https://firstamendment.mtsu.edu/article/brendan-carr/. Expanding authority to regulate speech while weakening established consumer protections raises significant concerns.
As digital threats multiply at breakneck speed, the Federal Trade Commission’s antiquated rule-making procedures—which can take up to five years to implement—leave Americans increasingly vulnerable to privacy breaches and online fraud. Either the FTC should be empowered with streamlined Administrative Procedure Act authority, or Congress needs to establish a dedicated digital protection agency built for the modern era. A comprehensive digital strategy should include national Right-to-Repair legislation, federal anti-SLAPP protections, enhanced e-government services for all citizens, and policies ensuring researcher access to platform data while maintaining privacy safeguards—all crucial steps toward a more transparent and equitable digital marketplace.115Harold Feld, “Digital Platform Act – A New E-Book From Public Knowledge,” accessed January 11, 2025, https://www.digitalplatformact.com/.
Consumer Reports leads the way with rigorous, unbiased research and testing of AI technologies, often collaborating with other consumer advocacy groups. While many new AI-focused organizations are emerging—some with questionable motives—established groups like the Electronic Privacy Information Center, Public Knowledge, Center for Humane Technology, Electronic Frontier Foundation116Rindala Alajaji, “Key Issues Shaping State-Level Tech Policy,” Electronic Frontier Foundation, February 3, 2025, https://www.eff.org/deeplinks/2025/02/key-issues-shaping-state-level-tech-policy. and others have proven track records of protecting digital rights and offer concrete ways to shape our future.
Beyond Traditional Frameworks: The Need for New Solutions
The AI industry’s reluctance to spell out its vision for human flourishing in an AGI world makes the need for thoughtful, proactive governance even more urgent. We cannot afford to stumble blindly into a transformed society without carefully considering and shaping the human experience we want to preserve and enhance.
We face a watershed moment rivaling the Industrial Revolution—but compressed into years, not decades. AI stands at a crossroads: it could accelerate social decay, worker exploitation, and environmental destruction, or catalyze societal renewal and sustainable progress.
This pivotal moment demands immediate action to address root causes:
- Strengthening consumer protections and digital rights
- Ensuring fair allocation of AI’s benefits and costs
- Safeguarding creator rights and quality journalism
- Preserving human agency over technological systems
- Building sustainable digital infrastructure
What emerges from this analysis is an interlocking machinery of reality distortion that we must collectively dismantle before it’s too late. It’s not just isolated problems of misinformation, economic exploitation, or environmental damage. Rather, these systems work in concert: AIs feed the internet’s ‘justification machine,’ which is powered by the attention economy, which drives data center expansion, which burdens communities with hidden costs, which erodes trust in institutions, which weakens our collective ability to distinguish truth from fiction and solve problems.
The problem is global, creating a prisoner’s dilemma in which the race for AI supremacy forces safety shortcuts and rushed deployment. This risks not just direct conflict but AI capabilities spreading to non-state actors—while transforming military strategy, enabling unprecedented cyber operations, and accelerating weapons innovation. In the age of artificial general intelligence, the stakes of getting governance right extend far beyond any single nation’s borders.
Understanding this machinery of our potential undoing is crucial to preventing it. While this analysis raises alarming concerns about Big Tech’s consolidated power, many proposed reforms actually dovetail with Silicon Valley’s self-interest: reduced energy costs, improved network infrastructure, and reliable human-curated content all serve both corporate profits and the public good.
The facts are clear: too much power in too few hands has created an unprecedented machinery of reality distortion. The solution must involve redistributing this power—to citizens, communities, and civic oversight mechanisms—while establishing guardrails that ensure technology serves humanity rather than the reverse. The vision is not better digital Caesars with improved intentions, but what some call ‘a world without Caesars’—replacing centralized control with distributed governance, algorithmic transparency, and genuine user agency.
The stakes could not be higher: our response to these challenges will determine whether AI accelerates societal decay or catalyzes sustainable progress. Stay engaged—America needs you now. Hold elected officials accountable by tracking their votes and showing up at town halls. Organize around just one issue you care about—doom scrolling doesn’t count. Join a group taking action or start one yourself. With proper governance frameworks and decisive action, we can preserve human agency, advance shared prosperity, and ensure AI serves rather than subjugates humanity.
Solutions in Action: Real Progress, Real Results
These solutions don’t require waiting for Washington. States, businesses, and communities can act:
Protecting Truth & Building Trust
- Finland’s world-leading digital literacy program shows what works: its K-12 integration of essential literacies (reading/writing, math/statistics, scientific/critical thinking, financial/civic understanding) has produced the world’s most misinformation-resistant population, offering a proven model for building critical thinking before AI tools arrive.
- Report for America’s success placing journalists in news deserts, and public technology like Watch Duty’s life-saving disaster coverage, show how targeted support for local journalism and vital information can effectively counter synthetic news.
- While fearless journalism exposes self-dealing and corporate corruption, the pace of misconduct outstrips accountability. We must relentlessly expose and eliminate corruption wherever concentrated power undermines public good—especially as oversight systems face unprecedented attacks and regulatory capture accelerates.
- Documentary filmmakers and podcasts like “The Middle” demonstrate how thoughtful dialogue can bridge divides by focusing on shared challenges and evidence-based solutions.
- User control works: When people can aggregate content across platforms and apply their own algorithms, they create healthier information diets—just as RSS readers once empowered users before platforms seized control (a minimal sketch of this approach follows this list).
- Support transparency in algorithmic decision-making.
- Preserve human expertise and documentary knowledge.
- Support community-based outreach to deliver reliable news to underserved areas.
- To counter link rot and the erosion of expertise behind paywalls, newspapers could consider alternatives to subscriptions for out-of-market readers.
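As a rough sketch of the user-control bullet above: the snippet below aggregates posts from multiple feeds and lets the reader, not the platform, choose the ranking rule. It assumes the third-party feedparser library, and the feed URLs are placeholders chosen purely for illustration.

```python
import calendar
import time
import feedparser  # third-party library: pip install feedparser

FEEDS = [
    "https://example.org/local-news.xml",  # placeholder feed URLs
    "https://example.org/science.xml",
]

def fetch_entries(urls):
    """Pull entries from every feed into one flat list of dicts."""
    entries = []
    for url in urls:
        for e in feedparser.parse(url).entries:
            published = e.get("published_parsed")
            entries.append({
                "title": e.get("title", ""),
                "source": url,
                "timestamp": calendar.timegm(published) if published else 0,
            })
    return entries

# Two reader-chosen ranking rules instead of a platform-chosen one.
def chronological(entries):
    """Newest first, nothing more."""
    return sorted(entries, key=lambda e: e["timestamp"], reverse=True)

def slow_news(entries):
    """De-prioritize anything published in the last hour, then newest first."""
    cutoff = time.time() - 3600
    return sorted(entries, key=lambda e: (e["timestamp"] > cutoff, -e["timestamp"]))

if __name__ == "__main__":
    items = fetch_entries(FEEDS)
    for item in chronological(items)[:10]:
        print(item["title"], "|", item["source"])
```

The design choice is the point: the ranking function is a small, swappable piece of code the reader controls, rather than an opaque engagement-maximizing system the reader cannot inspect.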
Economic Security & Worker Protection: Proven Pathways
- RECOMPETE pilot programs demonstrate how combining regional planning with addressing root causes leads to measurable economic renewal—creating jobs while lowering household costs.
- The CHIPS & Science Act is already catalyzing $450 billion in private investment, creating innovation hubs and corridors that combine research, manufacturing, and workforce development.
- Communities are using federal funding to create American jobs building resilient infrastructure that prepares for future challenges and drives down household energy costs by at least 10%.
- Evidence shows comprehensive support systems—including sick leave, child care, and job training—strengthen both families and businesses, though implementation remains fragmented.
- As AI systems train on creators’ work without permission or compensation, we must establish frameworks that protect creative rights while fostering innovation.
- Real-time data shows AI’s uneven impact across professions: While some workers harness AI tools to increase productivity, others face sudden displacement—as seen with artists, translators, and musicians. Communities need preparation, not just reaction.
Infrastructure & Energy: From Hidden Costs to Community Control
- Deploy proven ‘all hands on deck’ solutions to meet AI’s energy challenge:
- Grid-enhancing technologies increase transmission capacity 33%+.
- Smart load management maximizes existing infrastructure.
- Fast, standardized interconnection rules, like those in Texas, speed clean energy deployment.
- Clean microgrids reduce dependency on fossil backup generation.
- Protect communities through smarter utility oversight:
- Performance-based regulation drives efficiency over wasteful spending.
- Replicating Texas’ energy data transparency and interconnection enables 6-18% customer savings and new competition for energy generation.
- Block unfair cost-shifting of data center expansions.
- Address ‘bad harmonics’ that damage electronics and create fire hazards.
- Enable meaningful public participation in utility decisions.
- Expose hidden subsidies that burden local budgets.
- Reduce AI’s footprint through smarter computing:
- Use on-device AI to reduce environmental impact and cost (a minimal local-inference sketch follows this list).
- Prioritize energy-aware AI development.
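As a rough illustration of the on-device bullet above, the sketch below runs a small open model locally on a laptop CPU instead of calling a remote data-center API. The Hugging Face transformers library and the distilgpt2 model name are illustrative assumptions, not recommendations; small local models trade capability for a far lighter energy footprint.

```python
# Minimal sketch of "on-device" inference: a small open model runs locally on a
# laptop CPU instead of calling a remote data-center API. Assumes the Hugging
# Face `transformers` library is installed; "distilgpt2" is just one example of
# a small model chosen for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU

prompt = "Community meeting notes: the utility proposed a new data center rate class that"
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```

For many routine tasks, a model small enough to run on the device in hand avoids both the round trip to a data center and the hidden energy and water costs that come with it.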
Digital Rights & Public Oversight: Building Safeguards That Work
- Establish fundamental digital rights:
- Right to aggregate content across platforms
- Right to apply our own algorithms, not just platforms’
- Right to port our data and relationships between services
- Right to understand how our data shapes content and pricing
- Right to create our own tools for digital autonomy
- Right to repair our devices
- Create real accountability:
- Address liability gaps for AI actions beyond human control
- Protect whistleblowers exposing AI and platform abuse
- Enable researcher access while maintaining privacy
- Maintain FCC authority over Internet infrastructure
- Establish basic cybersecurity standards
- Prevent premium fees for AI access
- Modernize FTC’s response capabilities for digital harms
- Protect creators and public knowledge:
- Stop unauthorized AI training on creators’ work
- Enforce basic digital standards including robots.txt (a brief compliance sketch follows this list)
- Preserve access to human expertise and verified information
- Prevent isolation of important information – make it possible to respond to stories locked behind paywalls
- Support federal anti-SLAPP protections
- Enable enhanced e-government services
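To ground the robots.txt bullet above, here is a minimal sketch using Python’s standard-library urllib.robotparser to check whether a hypothetical crawler (“ExampleAIBot”) is permitted to fetch a page before scraping it; the bot name and URLs are placeholders.

```python
# Minimal sketch: honor robots.txt before crawling a page, e.g. for AI training
# data. "ExampleAIBot" and the URLs are placeholders for illustration only.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleAIBot"
SITE = "https://example.com"

def allowed_to_fetch(page_url: str) -> bool:
    """Return True only if the site's robots.txt permits this user agent."""
    parser = RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()  # downloads and parses the site's robots.txt
    return parser.can_fetch(USER_AGENT, page_url)

if __name__ == "__main__":
    url = f"{SITE}/articles/local-reporting.html"
    if allowed_to_fetch(url):
        print("Permitted: fetch and attribute the source.")
    else:
        print("Disallowed by robots.txt: skip this page.")
```

Respecting this decades-old convention costs a crawler a few lines of code; ignoring it shifts bandwidth costs and lost traffic onto the publishers whose work the models depend on.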
ENDNOTES
- @cosmicdealheather, “Honestly It’s Worse than I Thought #ai #aigenerated #aiart #boycott #w… | Hobby Lobby | TikTok,” accessed December 26, 2024, https://www.tiktok.com/@cosmicdealheather/video/7447325330475568430.
- Hannah Murphy and Cristina Criddle, “Meta Envisages Social Media Filled with AI-Generated Users,” Financial Times, December 27, 2024, sec. Meta Platforms, https://www.ft.com/content/91183cbb-50f9-464a-9d2e-96063825bf-cf.
- Luis Prada, “Kroger Asked About Surge Pricing and Facial Recognition at Grocery Stores,” October 16, 2024, https://www.vice.com/en/article/surge-pricing-facial-recognition-surveillance-grocery-stores/; “Wendy Davis on X: ‘.@BedoyaFTC Notes That New @FTC Chair Andrew Ferguson Has Quietly Removed Five Requests for Public Comment from Consideration — Including One That Sought Comment on “Surveillance Pricing.” https://T.Co/e7dXS8PmOW https://T.Co/pk5fgBT9ah’ / X,” X (formerly Twitter), January 23, 2025, https://x.com/wendyndavis/status/1882490346443313500.
- Sam Biddle, “Leaked Meta Rules: Users Are Free to Post ‘Mexican Immigrants Are Trash!’ Or ‘Trans People Are Immoral,’” The Intercept, January 10, 2025, https://theintercept.com/2025/01/09/facebook-instagram-meta-hate-speech-content-moderation/; Clare Duffy, “Calling Women ‘Household Objects’ Now Permitted on Facebook after Meta Updated Its Guidelines | CNN Business,” CNN, January 7, 2025, https://www.cnn.com/2025/01/07/tech/meta-hateful-conduct-policy-update-fact-check/index.html; Christopher Wiggins, “What LGBTQ+ People Should Know about Meta’s New Rules,” accessed January 8, 2025, https://www.advocate.com/news/meta-policies-lgbtq-attacks; John Shinal, “Online Threats Lead to Real-World Harm, Say Security Experts,” CNBC, August 29, 2017, https://www.cnbc.com/2017/08/29/online-threats-real-world-harm.html.
- James A. Piazza, “Fake News: The Effects of Social Media Disinformation on Domestic Terrorism,” Dynamics of Asymmetric Conflict 15, no. 1 (January 2, 2022): 55–77, https://doi.org/10.1080/17467586.2021.1895263; Zach Bastick, “Would You Notice If Fake News Changed Your Behavior? An Experiment on the Unconscious Effects of Disinformation,” Computers in Human Behavior 116 (March 1, 2021): 106633, https://doi.org/10.1016/j.chb.2020.106633; Jennifer Kavanagh and Michael D. Rich, “Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life” (RAND Corporation, January 15, 2018), https://www.rand.org/pubs/research_reports/RR2314.html.
- Derek Thompson, “The Anti-Social Century,” The Atlantic, January 8, 2025, http://theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/?gift=o6MjJQpusU9ebnFuymVdsMKo25mQ7_Dim-WNHAJNoVhY.
- Emily Schmidt, “Reading the Numbers: 130 Million American Adults Have Low Literacy Skills,” APM Research Lab, March 16, 2022, https://www.apmresearchlab.org/10x-adult-literacy.
- Evan Halper et al., “A Utility Promised to Stop Burning Coal. Then Google and Meta Came to Town.,” Washington Post, October 12, 2024, https://www.washingtonpost.com/business/2024/10/08/google-meta-omaha-data-centers/.
- Alexander Meinke et al., “Frontier Models Are Capable of In-Context Scheming” (arXiv, December 6, 2024), https://doi.org/10.48550/arXiv.2412.04984; Apollo Research, “Scheming Reasoning Evaluations,” Apollo Research, accessed January 10, 2025, https://www.apolloresearch.ai/research/scheming-reasoning-evaluations.
- Ethan Mollick, “Prophecies of the Flood,” January 10, 2025, https://www.oneusefulthing.org/p/prophecies-of-the-flood.
- Charlie Warzel and Mike Caulfield, “January 6 and the Triumph of the Justification Machine – The Atlantic,” accessed January 12, 2025, https://www.theatlantic.com/technology/archive/2025/01/january-6-justification-machine/681215/?gift=otEsSHbRYKNfFYMngVFweDjW4HMOQ6NwGolHc9wUdb0.
- Samson Nivins et al., “Long-Term Impact of Digital Media on Brain Development in Children,” Scientific Reports 14, no. 1 (June 6, 2024): 13030, https://doi.org/10.1038/s41598-024-63566-y; Joseph Firth et al., “The ‘Online Brain’: How the Internet May Be Changing Our Cognition,” World Psychiatry 18, no. 2 (June 2019): 119–29, https://doi.org/10.1002/wps.20617; Fathima Basheer and Sudha Bhatia, Repercussion of Social Media Usage on Neuroplasticity, 2019; Maria T. Maza et al., “Association of Habitual Checking Behaviors on Social Media With Longitudinal Functional Brain Development,” JAMA Pediatrics 177, no. 2 (February 1, 2023): 160–67, https://doi.org/10.1001/jamapediatrics.2022.4924; Michael Landon-Murray and Ian Anderson, “Thinking in 140 Characters: The Internet, Neuroplasticity, and Intelligence Analysis,” Journal of Strategic Security 6, no. 3 (October 2013): 73–82, https://doi.org/10.5038/1944-0472.6.3.7; Martin Korte, “The Impact of the Digital Revolution on Human Brain and Behavior: Where Do We Stand?,” Dialogues in Clinical Neuroscience 22, no. 2 (June 2020): 101–11, https://doi.org/10.31887/DCNS.2020.22.2/mkorte.
- Timothy Graham and Mark Andrejevic, “A Computational Analysis of Potential Algorithmic Bias on Platform X during the 2024 US Election,” Working Paper, November 1, 2024, https://eprints.qut.edu.au/253211/; TOI Tech Desk, “Elon Musk Targets Google and Microsoft: ‘Even with Best of Intentions, They Can’t Help but Introduce Bias,’” The Times of India, September 24, 2024, https://timesofindia.indiatimes.com/technology/tech-news/elon-musk-targets-google-and-microsoft-even-with-best-of-intentions-they-cant-help-but-introduce-bias/articleshow/113619683.cms.
- Wyatt Myskow and Martha Pskowski, “Misinformation Spreads Like Wildfire Online While LA Neighborhoods Burn,” Inside Climate News (blog), January 10, 2025, https://insideclimatenews.org/news/10012025/misinformation-spreads-like-wildfire-as-los-angeles-burns/.
- Michael Townsen Hicks, James Humphries, and Joe Slater, “ChatGPT Is Bullshit,” Ethics and Information Technology 26, no. 2 (June 8, 2024): 38, https://doi.org/10.1007/s10676-024-09775-5.
- Aryan Gulati et al., “Putnam-AXIOM: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning,” 2024, https://openreview.net/forum?id=YXnwlZe0yf&noteId=yrsGpHd0Sf.
- Corbin Boiles, “MAGA Newspaper Owner’s AI Bot Defends KKK,” The Daily Beast, March 5, 2025, https://www.thedailybeast.com/maga-newspaper-owners-ai-bot-defends-kkk/.
- Rosyna Keller [@rosyna], “Here’s an Example of @Google’s AI Overview Hallucinating. Not Only Hallucinating, but Hallucinating a Hallucination You’d Only Recognize If You Have Deep Subject Matter Knowledge. https://T.Co/6AUn2mfQvX,” Tweet, Twitter, December 26, 2024, https://x.com/rosyna/status/1872344266653171774.
- ItsTheTalia, “ItsTheTalia on X: ‘You Cannot Comprehend the RAGE That Rushed through Me as I Saw That the First Google Result for Googling Hieronymus Bosch Is Incestuous AI Sloop. What Are We Even Doing Here…? Https://T.Co/yHP5by93NY’ / X,” X (formerly Twitter), September 14, 2024, https://x.com/ItsTheTalia/status/1835092917418889710; “Learn How Google’s Featured Snippets Work – Google Search Help.”
- Tara S. Emory and Maura R. Grossman, “AI Agents: The Next Generation of Artificial Intelligence,” accessed January 8, 2025, https://natlawreview.com/article/next-generation-ai-here-come-agents.
- Tharin Pillay, “Social Media Fails Many Users. Experts Have an Idea to Fix It,” TIME, February 18, 2025, https://time.com/7258238/social-media-tang-siddarth-weyl/; Brett M. Frischmann and Susan Benesch, “Friction-In-Design Regulation as 21st Century Time, Place, and Manner Restriction,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, August 1, 2023), https://doi.org/10.2139/ssrn.4178647; Luke Hogg and Renée DiResta, “Shaping the Future of Social Media with Middleware | The Foundation for American Innovation,” December 17, 2024, https://www.thefai.org/posts/shaping-the-future-of-social-media-with-middleware; The Future of Free Speech, “Preventing ‘Torrents of Hate’ or Stifling Free Expression Online?,” The Future of Free Speech, May 28, 2024, https://futurefreespeech.org/preventing-torrents-of-hate-or-stifling-free-expression-online/; Jonathan Stray, Ravi Iyer, and Helena Puig Larrauri, “The Algorithmic Management of Polarization and Violence on Social Media,” Knight First Amendment Institute, August 22, 2023, http://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media; Alex Moehring and Alissa Cooper, “Better Feeds: Algorithms That Put People First,” Knight-Georgetown Institute, March 4, 2025, https://kgi.georgetown.edu/research-and-commentary/better-feeds/; Mike Masnick, “Empowering Users, Not Overlords: Overcoming Digital Helplessness,” Techdirt, January 27, 2025, https://www.techdirt.com/2025/01/27/empowering-users-not-overlords-overcoming-digital-helplessness/.
- “Aaron Reichlin-Melnick on X: ‘Further Evidence That Musk Turned Twitter into a Right-Wing Propaganda Machine. I Haven’t Touched This Account—Which Only Follows @NBA—in a Month. I Just Checked and Majority of Content the Algorithm Put in the Account’s Notifications Is Trump, Laura Loomer, and Election Denial. Https://T.Co/uUkG1whHg6’ / X,” X (formerly Twitter), October 21, 2023, https://x.com/ReichlinMelnick/status/1725882073422987629.
- Matthew Hutson, “Forget ChatGPT: Why Researchers Now Run Small AIs on Their Laptops,” Nature 633, no. 8030 (September 16, 2024): 728–29, https://doi.org/10.1038/d41586-024-02998-y; Katherine Bourzac, “Fixing AI’s Energy Crisis,” Nature, October 17, 2024, https://doi.org/10.1038/d41586-024-03408-z.
- Sarah Perez, “Signal President Meredith Whittaker Calls out Agentic AI as Having ‘profound’ Security and Privacy Issues,” TechCrunch (blog), March 7, 2025, https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/.
- Anna Edgerton, “US Heads Into Post-Truth Election as Platforms Shun Arbiter Role,” Bloomberg.com, January 22, 2024, https://www.bloomberg.com/news/articles/2024-01-22/us-heads-into-post-truth-election-as-platforms-shun-arbiter-role.
- Kurt Wagner, “Twitter, Facebook Reach Trump Breaking Point After Siege of Capitol,” Bloomberg.com, January 7, 2021, https://www.bloomberg.com/news/articles/2021-01-07/twitter-facebook-reach-trump-breaking-point-after-siege-of-capitol.
- Daniel Zuidijk, “Fight Against Misinformation Suffers Defeat on Multiple Fronts,” Bloomberg.com, July 8, 2024, https://www.bloomberg.com/news/newsletters/2024-07-08/fight-against-misinformation-suffers-defeat-on-multiple-fronts.
- Claire E. Robertson, Kareena S. del Rosario, and Jay J. Van Bavel, “Inside the Funhouse Mirror Factory: How Social Media Distorts Perceptions of Norms,” Current Opinion in Psychology 60 (December 1, 2024): 101918, https://doi.org/10.1016/j.copsyc.2024.101918; Alex Wickham et al., “UK Riots: Suspected Foreign Groups Using TikTok, Telegram to Incite Violence – Bloomberg,” August 7, 2024, https://www.bloomberg.com/news/articles/2024-08-07/suspected-foreign-agitators-boost-uk-extremists-to-inflame-riots?srnd=undefined.
- Aisha Counts and Eari Nakano, “Twitter’s Surge in Harmful Content a Barrier to Advertiser Return,” Bloomberg.com, July 19, 2023, https://www.bloomberg.com/news/articles/2023-07-19/twitter-s-surge-in-harmful-content-a-barrier-to-advertiser-return; Aisha Counts, “Social Media Platforms Show Little Interest in Stopping Spread of Misinformation – Bloomberg,” August 8, 2024, https://www.bloomberg.com/news/newsletters/2024-08-08/social-media-platforms-show-little-interest-in-stopping-spread-of-misinformation?cmpid=BBD080824_TECH&utm_medium=email&utm_source=newsletter&utm_term=240808&utm_campaign=tech.
- Casey Newton, “Meta Surrenders to the Right on Speech,” Platformer, January 8, 2025, https://www.platformer.news/meta-fact-checking-free-speech-surrender/.
- Casey Newton, “Meta Goes Mask-Off,” Platformer, January 14, 2025, https://www.platformer.news/meta-trump-pivot-messenger-themes-labor-zuckerberg-wishlist/; Rui Fan, Ke Xu, and Jichang Zhao, “Weak Ties Strengthen Anger Contagion in Social Media” (arXiv, May 5, 2020), https://doi.org/10.48550/arXiv.2005.01924.
- Fazio, Rand, and Pennycook, “Repetition Increases Perceived Truth Equally for Plausible and Implausible Statements”; My Mixtapez [@mymixtapez], “Four Days after Sending $500 Million to Ukraine, Biden Announces a ‘One-Time Payment of $770’ to California Fire Victims.🇺🇸🫡Https://T.Co/rxdsBNvsgu”; “MyMixtapez on Instagram.”
- Eric W. Dolan, “TikTok’s Algorithm Exhibited pro-Republican Bias during 2024 Presidential Race, Study Finds,” PsyPost – Psychology News, February 4, 2025, https://www.psypost.org/tiktoks-algorithm-exhibited-pro-republican-bias-during-2024-presidential-race-study-finds/; Esat Dedezade, “TikTok Users Report Anti-Trump Content Being Hidden Following Platform’s Unbanning,” Forbes, January 22, 2025, https://www.forbes.com/sites/esatdedezade/2025/01/22/tiktok-users-report-anti-trump-content-being-hidden-following-platforms-unbanning/; Prithvi Iyer, “New Research Points to Possible Algorithmic Bias on X | TechPolicy.Press,” November 15, 2024, https://www.techpolicy.press/new-research-points-to-possible-algorithmic-bias-on-x/.
- Wes Davis, “The Head of a Biden Program That Could Help Rural Broadband Has Left,” The Verge, March 16, 2025, https://www.theverge.com/news/630954/rural-broadband-equity-program-head-leaves-trump-musk-starlink; Mike Masnick, “Musk Shows Us What Actual Government Censorship On Social Media Looks Like,” Techdirt, February 3, 2025, https://www.techdirt.com/2025/02/03/musk-shows-us-what-actual-government-censorship-on-social-media-looks-like/; US Representative Frank Pallone, “Pallone Slams Republicans for Undermining Broadband Program and Standing by Silently While Musk Grifts Off the American People,” March 5, 2025, http://democrats-energycommerce.house.gov/media/press-releases/pallone-slams-republicans-undermining-broadband-program-and-standing-silently.
- Creston Brooks, Samuel Eggert, and Denis Peskoff, “The Rise of AI-Generated Content in Wikipedia” (arXiv, October 10, 2024), https://doi.org/10.48550/arXiv.2410.08044; Ethan Mollick, “Ethan Mollick on X: ‘The Ourouborous Has Begun. Wikipedia Is an Important Source of Training Data for AIs. At Least 5% of New Wikipedia Articles in August Were AI Generated (To Be Clear, This Does Not Mean That AI Will Fail as It Trains on Its Own Data, Synthetic Data Is Already a Part of Training) https://T.Co/kuDkfEgJQv’ /X,” X (formerly Twitter), October 14, 2024, https://x.com/emollick/status/1845881632420446281.
- “FTC Issues Staff Report on AI Partnerships & Investments Study,” Federal Trade Commission, January 17, 2025, https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-issues-staff-report-ai-partnerships-investments-study.
- Caroline Haskins, “The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery,’” Wired, May 22, 2024, https://www.wired.com/story/low-paid-humans-ai-biden-modern-day-slavery/.
- Randall Lane, “Why Perplexity’s Cynical Theft Represents Everything That Could Go Wrong With AI,” Forbes, June 11, 2024, https://www.forbes.com/sites/randalllane/2024/06/11/why-perplexitys-cynical-theft-represents-everything-that-could-go-wrong-with-ai/; Tim Marchman, “Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine,” Wired, June 21, 2024, https://www.wired.com/story/perplexity-plagiarized-our-story-about-how-perplexity-is-a-bullshit-machine/; Dhruv Mehrotra and Tim Marchman, “Perplexity Is a Bullshit Machine,” Wired, June 19, 2024, https://www.wired.com/story/perplexity-is-a-bullshit-machine/.
- Dhruv Mehrotra and Tim Marchman, “Perplexity Is a Bullshit Machine | WIRED,” June 19, 2024, https://www.wired.com/story/perplexity-is-a-bullshit-machine/; Lane, “Why Perplexity’s Cynical Theft Represents Everything That Could Go Wrong With AI”; Marchman, “Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine”; Dhruv Mehrotra, “Amazon Is Investigating Perplexity Over Claims of Scraping Abuse,” Wired, June 27, 2024, https://www.wired.com/story/aws-perplexity-bot-scraping-investigation/.
- Sara Fischer, “‘Pink Slime’ News Outlets Outpacing Local Daily Newspapers,” Axios, June 11, 2024, https://www.axios.com/2024/06/11/partisan-news-websites-dark-money.
- Miranda Green [@mirandacgreen], “Where Did Trump Voters Get Their News?📰Yes, There’s Social and Partisan Sites, but There Is Another Influential Strategy That Isn’t Getting Enough Attention: Manipulated, Pay-to-Play and All out Fake News Sites I’ve Been Covering a Mix of Those for Years. Here’s a Primer 🧵,” Tweet, Twitter, November 8, 2024, https://x.com/mirandacgreen/status/1854967274202935803.
- Dandan Qiao, Huaxia Rui, and Qian Xiong, “AI and Freelancers: Has the Inflection Point Arrived?,” n.d., https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/4f39375d-59c2-4c4a-b394-f3eed7858c80/content; Amanda Williams, “How Google and AI Are Killing Travel Blogs Like Mine – A Dangerous Business,” January 15, 2025, https://www.dangerous-business.com/how-google-and-ai-are-killing-travel-blogs-like-mine/.
- Liz Pelly, “The Ghosts in the Machine,” Harper’s Magazine, accessed January 13, 2025, https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/.
- Chairman Wright Patman, “Automation and Recent Trends (85th Congress) – United States Joint Economic Committee,” November 14, 1957, https://www.jec.senate.gov/public/index.cfm/reports-studies?ID=CF015E66-5427-45D5-B619-89F5EAC8258D; United States Joint Economic Committee, “JEC Examines Impact of Robots and Automation on Workforce and Economy – United States Joint Economic Committee,” May 26, 2016, https://www.jec.senate.gov/public/index.cfm/republicans/2016/5/jec-examines-impact-of-robots-and-automation-on-workforce-and-economy/.
- Peter Wildeford 🇺🇸 [@peterwildeford], “@brandonwilson I’m 50% Sure We’re Going to All Be Unemployed Due to Technology within 10 Years,” Tweet, Twitter, January 17, 2025, https://x.com/peterwildeford/status/1880229517798830566; Peter Wildeford 🇺🇸 [@peterwildeford], “I Was Speaking Too Loosely When I Said ‘All’, I Meant to Say There Still Will Be Jobs Where There Is ‘Pure Human Preference’ (Even If Maybe AI Could Be More Competent in Some Ways and AI Is Generally More Skilled),” Tweet, Twitter, January 18, 2025, https://x.com/peterwildeford/status/1880600538628362449.
- Tina Reed, “Universities Feel Ripple Effects of DOGE Cuts to Health,” Axios, February 26, 2025, https://www.axios.com/2025/02/26/musk-doge-science-cuts-universities-fallout; David Deming, “DOGE Is Failing on Its Own Terms,” The Atlantic (blog), February 11, 2025, https://www.theatlantic.com/ideas/archive/2025/02/nih-nsf-science-doge/681645/; Brian Buntz, “Steep Budget Cuts and Layoffs Coming to NSF,” R&D World, February 21, 2025, https://www.rdworldonline.com/nsf-layoffs-in-2025-deep-budget-cuts-headed-for-u-s-research-sector/.
- Andrew Yamakawa Elrod, “What Was Bidenomics? | Andrew Yamakawa Elrod,” Phenomenal World (blog), September 26, 2024, https://www.phenomenalworld.org/analysis/what-was-bidenomics/.
- Gabriel A. Benavidez, “Chronic Disease Prevalence in the US: Sociodemographic and Geographic Variations by Zip Code Tabulation Area,” Preventing Chronic Disease 21 (2024), https://doi.org/10.5888/pcd21.230267.
- Brian Deer, “Opinion | I’ll Never Forget What Kennedy Did During Samoa’s Measles Outbreak,” The New York Times, November 25, 2024, sec. Opinion, https://www.nytimes.com/2024/11/25/opinion/rfk-jr-vaccines-samoa-measles.html.
- Talmon Joseph Smith and Karl Russell, “The Greatest Wealth Transfer in History Is Here, With Familiar (Rich) Winners,” The New York Times, May 14, 2023, sec. Business, https://www.nytimes.com/2023/05/14/business/economy/wealth-generations.html.
- Rafael A. Corredoira et al., “The Changing Nature of Firm R&D: Short-Termism & Influential Innovation in US Firms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 18, 2022), https://doi.org/10.2139/ssrn.4071191; Rachelle C. Sampson and Yuan Shi, “Are US Firms Becoming More Short-Term Oriented? Evidence of Shifting Firm Time Horizons from Implied Discount Rates, 1980-2013,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2019), https://doi.org/10.2139/ssrn.2837524.
- Bobby Kogan, “Tax Cuts Are Primarily Responsible for the Increasing Debt Ratio,” Center for American Progress (blog), March 27, 2023, https://www.americanprogress.org/article/tax-cuts-are-primarily-responsible-for-the-increasing-debt-ratio/; Heidi Peltier, “The Cost of Debt-Financed War: Public Debt and Rising Interest for Post-9/11 War Spending,” January 1, 2020; “Historical Debt Outstanding | U.S. Treasury Fiscal Data,” accessed October 5, 2024, https://fiscaldata.treasury.gov/datasets/historical-debt-outstanding/; “FiscalData Explains the National Debt,” FiscalData.Treasury.Gov Explains the National Debt, September 25, 2024, https://fiscaldata.treasury.gov/americas-finance-guide/national-debt/.
- Carol Graham, “Despair Underlies Our Misinformation Crisis,” John Templeton Foundation, June 27, 2022, https://www.templeton.org/news/despair-underlies-our-misinformation-crisis; Carol Graham, “Our Twin Crises of Despair and Misinformation,” Brookings, July 22, 2024, https://www.brookings.edu/articles/our-twin-crises-of-despair-and-misinformation/.
- Aaron Flaaen and Justin Pierce, “Disentangling the Effects of the 2018-2019 Tariffs on a Globally Connected U.S. Manufacturing Sector,” Finance and Economics Discussion Series 2019.0, no. 86 (December 2019), https://doi.org/10.17016/feds.2019.086.
- Sam Kessler, “David Sacks Responds to U.S. Crypto Reserve Conflict of Interest Allegations,” CoinDesk, March 3, 2025, https://www.coindesk.com/policy/2025/03/03/david-sacks-investments-complicate-trump-s-crypto-reserve-plans.
- David French and Jillian Weinberger, “Opinion | Elon Musk and the Useless Spending-Cut Theater of DOGE,” The New York Times, March 5, 2025, sec. Opinion, https://www.nytimes.com/2025/03/05/opinion/musk-useless-spending-cuts-doge.html.
- “A Distributional Analysis of Donald Trump’s Tax Plan,” ITEP, October 7, 2024, https://itep.org/a-distributional-analysis-of-donald-trumps-tax-plan-2024/; “Elon Musk Is Failing to Cut American Spending,” The Economist, accessed February 12, 2025, https://www.economist.com/finance-and-economics/2025/02/12/elon-musk-is-failing-to-cut-american-spending.
- David A. Fahrenthold and Jeremy Singer-Vine, “DOGE Makes Its Latest Errors Harder to Find,” The New York Times, March 13, 2025, sec. U.S., https://www.nytimes.com/2025/03/13/us/politics/doge-errors-funding-grants-claims.html.
- Evan Halper and Hannah Natanson, “How DOGE Detonated a Crisis at a Highly Sensitive Nuclear Weapons Agency,” The Washington Post, March 2, 2025, https://www.washingtonpost.com/business/2025/03/02/doge-nuclear-worker-firings-musk-trump/; Aimee Picchi, “USDA Cancels $1 Billion in Funding for Schools and Food Banks to Buy Food from Local Suppliers – CBS News,” March 13, 2025, https://www.cbsnews.com/news/usda-cancels-local-food-purchasing-food-banks-school-meals/; Jace Dicola, “USDA Cuts $13 Million Program for Western Slope Farmers, Food Banks, Schools,” The Grand Junction Daily Sentinel, March 15, 2025, https://www.gjsentinel.com/news/western_colorado/usda-cuts-13-million-program-for-western-slope-farmers-food-banks-schools/article_4b7c62e6-002a-11f0-b10f-b31de2e08a0f.html.
- Keri Putnam, “What’s at Risk in the Streaming Media Age,” Shorenstein Center (blog), January 18, 2024, https://shorensteincenter.org/commentary/whats-risk-streaming-media-age/.
- Andrew Boryga, “Why I’m Banning Student AI Use This Year,” Edutopia, August 2, 2024, https://www.edutopia.org/article/banning-student-ai-use-chanea-bond/.
- Jenny Gross, “How Finland Is Teaching a Generation to Spot Misinformation,” The New York Times, January 10, 2023, sec. World, https://www.nytimes.com/2023/01/10/world/europe/finland-misinformation-classes.html; Shane Horn and Koen Veermans, “Critical Thinking Efficacy and Transfer Skills Defend against ‘Fake News’ at an International School in Finland,” Journal of Research in International Education 18, no. 1 (April 1, 2019): 23–41, https://doi.org/10.1177/1475240919830003; Eliza Mackintosh, “Finland Is Winning the War on Fake News. Other Nations Want the Blueprint,” May 2019, https://www.cnn.com/interactive/2019/05/europe/finland-fake-news-intl; Amelia Nash, “Media Literacy A to Z: How Finland Is Arming Students Against Misinformation,” PRINT Magazine, August 27, 2024, https://www.printmag.com/culturally-related-design/media-literacy-a-to-z-how-finland-is-arming-students-against-misinformation/.
- Nash, “Media Literacy A to Z”; “ABCs of Media,” ABCs of Media, August 2024, https://abcsofmedia.com/.
- Jackie Davalos and Leon Yin, “AI Detectors Falsely Accuse Students of Cheating—With Big Consequences,” Bloomberg.com, October 18, 2024, https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations; Ethan Mollick, “Post-Apocalyptic Education,” August 20, 2024, https://www.oneusefulthing.org/p/post-apocalyptic-education.
- Maya King, “More Money Urgently Needed to Reach Younger and Minority Voters, Organizers Warn Harris Donors,” The New York Times, September 24, 2024, sec. U.S., https://www.nytimes.com/2024/09/24/us/politics/young-minority-voters-harris-campaign.html.
- “The Great Horse Manure Crisis of 1894,” Historic UK, accessed January 9, 2025, https://www.historic-uk.com/HistoryUK/HistoryofBritain/Great-Horse-Manure-Crisis-of-1894/.
- Amine Benayad et al., “Landing the Economic Case for Climate Action with Decision Makers,” March 12, 2025; Ruth Newkeen, “New Report from BCG and Cambridge on Climate-Change Investment – News & Insight,” Cambridge Judge Business School, March 12, 2025, https://www.jbs.cam.ac.uk/2025/new-report-from-bcg-and-cambridge-on-climate-change-investment/.
- Andy Masley, “Using ChatGPT Is Not Bad for the Environment,” Substack newsletter, The Weird Turn Pro (blog), January 13, 2025, https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for.
- Isabel O’Brien, “Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?,” The Guardian, September 15, 2024, sec. Technology, https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech.
- Arman Shehabi et al., “United States Data Center Energy Usage Report,” June 1, 2016, https://doi.org/10.2172/1372902.
- “Ben Inskeep on X: ‘5) Data Centers Provide Very Few Jobs. Other Economic Development Activity Produces 100x More Jobs per MW of Power Demand. https://T.Co/fpeZI9j0Ay’ / X,” X (formerly Twitter), January 29, 2025, https://x.com/Ben_Inskeep/status/1884731768915267837
- Ethan Howland, “Existing US Grid Can Handle ‘Significant’ New Flexible Load: Report,” Utility Dive, February 11, 2025, https://www.utilitydive.com/news/us-grid-headroom-flexible-load-data-center-ai-ev-duke-report/739767/.
- Sharon Goldman, “Entergy’s Stock Is Surging after a $10 Billion Meta Deal — the CEO Says Mega AI Data Centers Are Changing the Utility Business,” Yahoo Finance, February 20, 2025, https://finance.yahoo.com/news/entergy-stock-surging-10-billion-223534445.html.
- “Complaint of Kentucky Public Service Commission, Attorney General of the Commonwealth of Kentucky v. American Electric Power Service Corporation, et al. under EL25-67,” Federal Energy Regulatory Commission Accession Number 20250312-5238, accessed March 30, 2025, https://elibrary.ferc.gov/eLibrary/filelist?accession_number=20250312-5238&optimized=false.
- Goldman Sachs, “AI Is Poised to Drive 160% Increase in Data Center Power Demand,” May 14, 2024, https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand.
- Gavin Maguire, “US Power System Becomes More Fossil-Dependent than China’s,” Reuters, October 25, 2024, sec. Energy, https://www.reuters.com/business/energy/us-power-system-becomes-more-fossil-dependent-than-chinas-maguire-2024-10-25/.
- Oliver Milman, “The Trump EPA’s Baffling New Agenda Consists of Throttling Major Environmental Rules – Mother Jones,” March 13, 2025, https://www.motherjones.com/politics/2025/03/lee-zeldin-epa-deregulation-donald-trump-pollution-climate-emissions-rules-environmental-protection/.
- Dara Kerr, “How Memphis Became a Battleground over Elon Musk’s xAI Supercomputer,” NPR, September 11, 2024, sec. Business, https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution.
- Mandy Hrach, “FOX13 INVESTIGATES: Why More Children Suffer from Asthma in Memphis than Other Cities,” FOX13 Memphis, June 29, 2023, https://www.fox13memphis.com/news/fox13-investigates-why-more-children-suffer-from-asthma-in-memphis-than-other-cities/article_88d71b52-16bd-11ee-987f-d3bed23374ba.html; Myracle Wicks, Tarvarious Haywood, and Bria Bolden, “Elon Musk’s xAI to Build Multi-Billion-Dollar Supercomputer Project in Memphis,” WMC Action 5 News, June 5, 2024, https://www.actionnews5.com/2024/06/05/elon-musk-build-multi-billion-dollar-ai-supercomputer-project-memphis/.
- Ethan Howland, “Talen-Amazon Interconnection Agreement Needs Extended FERC Review: PJM Market Monitor,” Utility Dive, July 11, 2024, https://www.utilitydive.com/news/talen-amazon-interconnection-agreement-ferc-constellation-vistra/721066/; “FERC Rejects Interconnection Pact for Talen-Amazon Data Center Deal at Nuclear Plant | Utility Dive,” accessed December 25, 2024, https://www.utilitydive.com/news/ferc-interconnection-isa-talen-amazon-data-center-susquehanna-exelon/731841/; “FERC Rejects Interconnection Deal for Talen-Amazon Data Centers,” accessed December 26, 2024, https://www.ans.org/news/article-6534/ferc-rejects-interconnection-deal-for-talenamazon-data-centers/.
- Ethan Howland, “AEP, Exelon Oppose Talen-Amazon Interconnection Pact to Protect Rate Base Growth Potential: Constellation,” Utility Dive, July 24, 2024, https://www.utilitydive.com/news/aep-exelon-talen-amazon-interconnection-rate-base-ferc-constellation/722246/.
- Eliza Martin and Ari Peskoe, “Extracting Profits from the Public: How Utility Ratepayers Are Paying for Big Tech’s Power,” March 5, 2025.
- Leonardo Nicoletti, Naureen Malik, and Andre Tartar, “AI Needs So Much Power, It’s Making Yours Worse,” Bloomberg.com, December 27, 2024, https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/.
- Mora, “Update”; Shaw, “MAHA?”; Clark, “‘A Slap In The Face.’”
- Keith J. Benes, Joshua E. Porterfield, and Charles Yang, “AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy – United States Department of Energy,” April 2024.
- Baranko et al., “Fast, Scalable, Clean, and Cheap Enough,” December 18, 2024, https://www.offgridai.us/; Tyler Norris, “Tyler Norris on X: ‘@duncan__c Most Balancing Authorities Have Vastly Underutilized Existing Gen Capacity & T&D Networks in Most Hours, so Going 100% BTM Is Redundant No? Not to Mention Huge Land Requirement and Need for on-Site Gas Delivery…’ / X,” X (formerly Twitter), December 19, 2024, https://x.com/tylerhnorris/status/1869590251339161907; Jigar Shah, “Jigar Shah on X: ‘@tylerhnorris @duncan__c Exactly, You Might Be Forced to Undersize the Grid Connection, but It Is Never Economically Better to Go off-Grid from an Economic Optimization Standpoint.’ / X,” X (formerly Twitter), December 19, 2024, https://x.com/JigarShahDC/status/1869592001022407117.
- William Driscoll, “Bringing ERCOT’s Speedy Interconnection Process to the Rest of the U.S.,” pv magazine USA, September 5, 2023, https://pv-magazine-usa.com/2023/09/05/bringing-ercots-speedy-interconnection-process-to-the-rest-of-the-u-s/; Ethan Howland, “Can ERCOT Show the Way to Faster and Cheaper Grid Interconnection?,” Utility Dive, November 27, 2024, https://www.utilitydive.com/news/connect-and-manage-grid-interconnection-ferc-ercot-transmission-planning/698949/.
- “Mission:Data on X: ‘In Texas, without Doing *anything* Special (No Requests or Signing up for Programs), This REP Provides Helpful Weekly Email Reports of Household Usage. One of the Many Benefits of Consistent 15-Minute Metering and Centralized Data Access. https://T.Co/GssvUuYITQ’ / X,” X (formerly Twitter), October 21, 2024, https://x.com/mission_data/status/1848421782048674252; Mission:Data, “Texas Decision Enhances Customer Choice of Advanced Energy Providers,” Mission:data, May 10, 2018, https://www.missiondata.io/news/2018/5/10/texas-decision-enhances-customer-choice-of-advanced-energy-providers.
- “The Texas Big Freeze: How a Changing Climate Pushed the State’s Power Grid to the Brink,” Utility Dive, accessed January 31, 2025, https://www.utilitydive.com/news/the-texas-big-freeze-how-a-changing-climate-pushed-the-states-power-grid/601098/; “Mission.”
- National Fire Protection Association Wildfire Division, “NFPA – Preparing Homes for Wildfire,” FireWise, accessed September 20, 2024, https://www.nfpa.org/education-and-research/wildfire/preparing-homes-for-wildfire.
- “Over 130 Industry Leaders Express Support for Supply Chain Provisions in America COMPETES Act | U.S. Representative Lisa Blunt Rochester,” November 6, 2024, https://web.archive.org/web/20241106205924/https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=2833; “Rep. Blunt Rochester Praises House Passage of Her Bipartisan Promoting Resilient Supply Chains Act | U.S. Representative Lisa Blunt Rochester,” May 16, 2024, https://web.archive.org/web/20240516183243/https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=4160; Sarah Rathke, “Supply Chain Legislation On The Horizon,” Global Supply Chain Law Blog, June 12, 2024, https://www.globalsupplychainlawblog.com/supply-chain/supply-chain-legislation-on-the-horizon/; Milo McBride, “Leaping Ahead: U.S. Innovation and the Future of Clean Energy,” Carnegie Endowment for International Peace, October 24, 2024, https://carnegieendowment.org/events/2024/10/leaping-ahead-us-innovation-and-the-future-of-clean-energy?lang=en; Milo McBride, “Catching Up or Leaping Ahead? How Energy Innovation Can Secure U.S. Industrial Stature in a Net-Zero World,” Carnegie Endowment for International Peace, September 19, 2024, https://carnegieendowment.org/research/2024/09/energy-innovation-us-industrial-stature?lang=en.
- Phil Rogers and Katy Smyser, “How Chicago Spent $400M On a Subway Superstation to Nowhere,” NBC Chicago (blog), February 23, 2015, https://www.nbcchicago.com/news/local/how-chicago-spent-400m-on-a-subway-superstation-to-nowhere/59087/.
- Chiara Boye, “Removal of Technology-Neutral Clean Energy Tax Credits Could Cost Upwards of $336 Billion In Investment, Increase Electricity Bills 10% For Consumers,” Aurora Energy Research (blog), January 6, 2025, https://auroraer.com/media/reform-to-clean-energy-tax-credits/.
- Bryan Bennett, “More Than Seven in Ten Americans Support the Inflation Reduction Act,” Navigator (blog), April 30, 2024, https://navigatorresearch.org/more-than-seven-in-ten-americans-support-the-inflation-reduction-act/.
- Shah, “Jigar Shah on X.”
- “Innovating Future Power Systems: From Vision to Action,” American Enterprise Institute – AEI (blog), accessed February 27, 2025, https://www.aei.org/research-products/report/innovating-future-power-systems-from-vision-to-action/.
- “How to Talk to Your Friendly Neighborhood Public Utility Regulator – Heatmap News,” accessed March 10, 2025, https://heatmap.news/podcast/shift-key-s2-e27-charles-hua.
- Michael Safi et al., “Revealed: The Names Linked to ClothOff, the Deepfake Pornography App,” The Guardian, February 29, 2024, sec. Technology, https://www.theguardian.com/technology/2024/feb/29/clothoff-deepfake-ai-pornography-app-names-linked-revealed; Charles Toutant, “An AI Took Her Clothes Off. Now a New Lawsuit Will Test Rules for Deepfake Porn,” New Jersey Law Journal, February 5, 2024, https://www.law.com/njlawjournal/2024/02/05/an-ai-took-her-clothes-off-now-a-new-lawsuit-will-test-rules-for-deepfake-porn/.
- Jason Koebler, “AI Chatbot Added to Mushroom Foraging Facebook Group Immediately Gives Tips for Cooking Dangerous Mushroom,” 404 Media, November 12, 2024, https://www.404media.co/ai-chatbot-added-to-mush- room-foraging-facebook-group-immediately-gives-tips-for-cooking-dangerous-mushroom/.
- Katherine B. Forrest (Fmr.), “The Ethics and Challenges of Legal Personhood for AI,” accessed January 8, 2025, https://www.yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai.
- Bryan H. Choi, “Negligence Liability for AI Developers,” Lawfare Media, September 25, 2024, https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers. Jr Henderson, ed., “Learned Hand’s Paradox: An Essay on Custom in Negligence Law,” California Law Review, 2017, https://doi.org/10.15779/Z38585V; Joshua Turner and Nicol Turner Lee, “Misrepresentations of California’s AI Safety Bill,” Brookings, September 27, 2024, https://www.brookings.edu/articles/misrepresentations-of-californias-ai-safety-bill/; James M. Beck, “New Decision Directly Addresses the ‘Is Software a Product’ Question,” Drug & Device Law, May 2, 2022, https://www.druganddevicelawblog.com/2022/05/new-decision-directly-addresses-the-is-software-a-product-question.html; Catherine Sharkey, “Products Liability for Artificial Intelligence,” September 25, 2024, https://www.lawfaremedia.org/article/products-liability-for-artificial-intelligence; Hon John G Browning, “A Product by Any Other Name? The Evolving Trend of Product Liability Exposure for Technology Platforms,” Elon Law Review 16, no. 1 (September 22, 2023): 181–219.
- Katherine B. Forrest (Fmr.), “The Ethics and Challenges of Legal Personhood for AI.”
- Kevin Frazier, “Building an AI Superfund: Lessons from Climate Change Legislation | TechPolicy.Press,” Tech Policy Press, October 10, 2024, https://techpolicy.press/building-an-ai-superfund-lessons-from-climate-change-legislation; A. Crunden, “Trump Leaves Murky Superfund Legacy – E&E News by POLITICO,” January 13, 2021, https://www.eenews.net/articles/trump-leaves-murky-superfund-legacy/.
- Sophie Luskin, “Need for Whistleblower Protections in Artificial Intelligence Industry Discussed in Senate Judiciary Hearing,” Whistleblower Network News (blog), September 24, 2024, https://whistleblowersblog.org/corporate-whistleblowers/need-for-whistleblower-protections-in-artificial-intelligence-industry-discussed-in-senate-judiciary-hearing/; Mary Allain, “Catastrophic AI Risks Highlight Need for Whistleblower Laws,” Government Accountability Project (blog), June 10, 2024, https://whistleblower.org/in-the-news/techtarget-catastrophic-ai-risks-highlight-need-for-whistleblower-laws/; Courtney Hague Andrews et al., “DOJ to Evaluate AI Risk Management and Whistleblower Protections in Corporate Compliance Programs | White & Case LLP,” October 4, 2024, https://www.whitecase.com/insight-alert/doj-evaluate-ai-risk-management-and-whistleblower-protections-corporate-compliance; Ben Kingsley, Jen Hitchcock, and Ran Ben-Tzur, “Important Whistleblower Protection and AI Risk Management Updates,” The Harvard Law School Forum on Corporate Governance (blog), October 30, 2024, https://corpgov.law.harvard.edu/2024/10/30/important-whistleblower-protection-and-ai-risk-management-updates/.
- Glen Weyl et al., “Prosocial Media” (arXiv, March 14, 2025), https://doi.org/10.48550/arXiv.2502.10834.
- Sean Michael Kerner, “CrowdStrike Outage Explained: What Caused It and What’s next,” TechTarget, July 26, 2024, https://www.techtarget.com/whatis/feature/Explaining-the-largest-IT-outage-in-history-and-whats-next; Lakshmi Varanasi, “CrowdStrike CEO Has Twice Been at Center of Global Tech Failure – Bus…,” archive.is, July 22, 2024, https://archive.is/i7jdU; Internet Security Research Group, “What Is Memory Safety and Why Does It Matter?,” Prossimo, accessed December 31, 2024, https://www.memorysafety.org/docs/memory-safety/; National Security Agency press release, “NSA Releases Guidance on How to Protect Against Software Memory Safety Issues,” National Security Agency/Central Security Service, November 10, 2022, https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/.
- Brian S. Mandell et al., “Eight Types of Power City Leaders Can Use When Negotiating,” Bloomberg Harvard City Leadership Initiative, February 5, 2024, https://live-bloomberg-harvard-city-leadership-initiative-2023.pantheonsite.io/resources/eight-types-of-power-city-leaders-can-use-when-negotiating/.
- Caitriona Fitzgerald and Matt Schwartz, “A New Model for State Privacy Legislation | TechPolicy.Press,” Tech Policy Press, January 6, 2025, https://techpolicy.press/a-new-model-for-state-privacy-legislation; “U.S. State Privacy Laws,” EPIC – Electronic Privacy Information Center (blog), accessed January 11, 2025, https://epic.org/issues/privacy-laws/state-laws/.
- Matthew Guariglia, “Police Use of Face Recognition Continues to Wrack Up Real-World Harms,” Electronic Frontier Foundation, January 15, 2025, https://www.eff.org/deeplinks/2025/01/police-use-face-recognition-continues-wrack-real-world-harms.
- “FTC Surveillance Pricing Study Indicates Wide Range of Personal Data Used to Set Individualized Consumer Prices,” Federal Trade Commission, January 17, 2025, https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-surveillance-pricing-study-indicates-wide-range-personal-data-used-set-individualized-consumer; Erin Cabrey, “New FTC Chair Shuts down Public Comment on Retailers’ Surveillance Pricing,” Retail Brew, January 24, 2025, https://www.retailbrew.com/stories/2025/01/24/new-ftc-chair-shuts-down-public-comment-on-retailers-surveillance-pricing; Jake Johnson, “‘Unthinkable’: Trump FTC Chair Shuts Down Public Comments on Corporate Pricing Tactics | Common Dreams,” accessed February 3, 2025, https://www.commondreams.org/news/trump-ftc-chair.
- Casey Newton, “‘Time Well Spent’ Is Shaping up to Be Tech’s next Big Debate,” The Verge, January 18, 2018, https://www.theverge.com/2018/1/17/16903844/time-well-spent-facebook-tristan-harris-mark-zuckerberg.
- “Porting: Keeping Your Phone Number When You Change Providers | Federal Communications Commission,” November 17, 2023, https://www.fcc.gov/consumers/guides/porting-keeping-your-phone-number-when-you-change-providers.
- Jason Fuller, Juana Summers, and Patrick Jarenwattananon, “What May Be next after a Federal Court Struck down the FCC’s Net Neutrality Rules: NPR,” accessed January 11, 2025, https://www.npr.org/2025/01/06/nx-s15247750/what-may-be-next-after-a-federal-court-struck-down-the-fccs-net-neutrality-rules.
- Brian Stelter, “Brendan Carr Wrote the FCC Chapter in ‘Project 2025.’ Now He’s Trump’s Pick for the Agency | CNN Business,” CNN, November 18, 2024, https://www.cnn.com/2024/11/18/media/brendan-carr-trump-fcc-nominee-project-2025/index.html; Megan Lebowitz, “Trump Picks Brendan Carr to Lead the Federal Communications Commission,” NBC News, November 18, 2024, https://www.nbcnews.com/politics/donald-trump/trump-brendan-carr-federal-communications-commission-rcna180567; John R. Vile, “Brendan Carr,” The Free Speech Center, January 4, 2024, https://firstamendment.mtsu.edu/article/brendan-carr/.
- Harold Feld, “Digital Platform Act – A New E-Book From Public Knowledge,” accessed January 11, 2025, https://www.digitalplatformact.com/.
- Rindala Alajaji, “Key Issues Shaping State-Level Tech Policy,” Electronic Frontier Foundation, February 3, 2025, https://www.eff.org/deeplinks/2025/02/key-issues-shaping-state-level-tech-policy.