The Intimacy Dividend: How AI Might Transform News Media Consumption

Shuwei Fang, Joan Shorenstein Fellow (Spring 2025)

Recently, a friend told me something that got me thinking. He’s been using AI for therapy, something he’d never done with a human. Why? He felt more comfortable sharing thoughts and feelings with a chatbot than with a human. No judgment, no raised eyebrows, no subtle shifts in body language that signal disapproval. He’s not alone. Back in 2021, a US national survey found that 22% of adults had used a mental health chatbot, and a further 47% said they would if needed. The UK’s 2024 Ofcom report found four out of five teenagers use generative AI.

That’s when it hit me: we’re seeing the emergence of what I call the “AI intimacy dividend.” This new and fascinating willingness to open up to conversational AI interfaces, in ways not possible before, creates real value, and it’s about to transform how we engage with news and information.

This shift represents a striking countertrend to the social media age. For the past two decades, we’ve lived in an era of increasingly public, performative discourse where every comment might be judged, attacked, or permanently recorded. Now, powerful AI is enabling a return to something more private and potentially authentic: intimate conversations without social consequences.

Beyond Production and Distribution: The Untapped Consumption Space

We can think of the traditional news media value chain as having three primary overlapping segments: production (the creation of news and information), distribution (how it gets to audiences), and consumption (how audiences engage with it).

The way distribution has been profoundly transformed by technology is a familiar story. It has evolved from bundles of information contained in physical newspapers to unbundled pieces of information in algorithmically curated social feeds and subscription platforms. Beyond social media platforms, another wave of companies like Substack, NewsWhip, and Taboola have redefined how content reaches audiences. The market continues to evolve: distribution-focused media companies raised approximately $5.5 billion in funding globally between 2022 and 2024 (from various sources including OECD, Crunchbase, Pitchbook, Dealroom), and the market is forecast to reach $13.85 billion by 2032.

The production segment–already transformed by smartphones–has been further accelerated since ChatGPT’s release. Since 2022, there has been a mushrooming of production-centered AI startups that compress almost every part of the content creation workflow, making processes hyper-efficient and enabling the invention of entirely new types of content; these startups raised more than $14 billion during the same period.

Yet the consumption segment—how humans actually understand, process, and make sense of news and information—remains relatively unchanged. While habits and the specific platforms people use to consume news have shifted dramatically, the underlying reasons for seeking information, the types of news people prefer, and the overall level of news consumption haven’t changed fundamentally; over the past decade this segment has seen predictable, linear trends rather than transformation. We still primarily read, watch, or listen to content much as we did decades, or even centuries, ago, and the containers or “artifacts” of information still look relatively familiar, albeit on different devices. Text articles seen on a smartphone today still bear a striking resemblance to newspaper columns from the 18th century. Consumption-focused media startups, for example You.com, an AI search engine for complex search queries, have raised less than $3 billion collectively, under 25% of what production startups raised in the same timeframe.

This disparity represents an opportunity. While the market is chasing production and distribution technologies, there is real untapped potential in how we help people consume, process and understand information. The opportunity also comes with new and unpredictable risks and harms to both individuals and society, and hence the responsibility to reflect on the future policy and design implications of this space.

A “Judgment-Free” Disclosure Segment?

A new segment emerging on the value chain at the intersection of production and consumption is what I call “Judgment-Free Disclosure”. This space leverages the AI intimacy dividend, creating a close bilateral environment where users can process information in ways impossible with human interaction or with traditional one-to-many modes of media.

The psychological foundation is straightforward: humans seek validation from others and, at the same time, fear social judgment. We hesitate to ask “basic” questions, express confusion, admit knowledge gaps, or share emotional reactions that might seem inappropriate or uninformed. These limitations have long shaped how we engage with news and information. Instead of ignoring an article, or simply consuming it and moving on (perhaps without fully understanding it), readers could now turn to an AI companion to explore questions like: “What does this actually mean for someone like me?”, “What are the assumptions behind these different perspectives?”, or even “I feel anxious about this but I’m not sure why – can you help me understand my reaction?”, all adapted to the user’s unique ways of understanding, intent, and recent inquiries.

There are far fewer social costs in conversing with AI systems than with other people. They won’t disapprove when we ask a seemingly obvious or embarrassing question. They don’t smirk when we reveal a gap in our knowledge. This creates that AI intimacy dividend: a willingness to engage more authentically with AI than with humans about complex or sensitive topics. This could enable a profound transformation from passive consumption to active, assisted meaning-making, and we could see emergent functions such as:

Safe Space Questioning: Imagine platforms where readers can ask questions about news stories they might be embarrassed to ask publicly. “What does this economic term mean?” “Why is this conflict happening?” “Am I supposed to understand this policy?” Questions that might seem basic but are essential for true understanding. This mirrors how Socratic (since acquired by Google) originally created judgment-free spaces for educational questions. Educational tools like Khanmigo demonstrate how AI can provide missing context or background knowledge without embarrassment – an approach that could transform news literacy.

Perspective Exploration: Future tools could help users explore different viewpoints on divisive topics privately, without social pressure to conform to a particular stance. This approach is similar to how Woebot Health, an AI-enabled mental health app, helps users explore different thought patterns without judgment.

Personal Relevance Processing: AI systems could connect news stories to individual circumstances, helping users understand “what does this mean for me?” without requiring them to reveal personal details to other humans. This parallels how Origin and Cleo, the financial assistant apps, help users understand financial concepts in relation to their personal situation.

Emotional Processing Assistance: While not yet specifically applied to news, Replika’s emotional processing approaches show how AI can help users work through difficult reactions to information – a model that could revolutionize how people process distressing news. These ‘AI friends’ mirror some of the ways that human content creators build connections with their audience, which have been seen as an effective way to build trust and deliver hard news.

Sycophancy, Persuasion and Other Downsides

The negative effects of deploying generic AI for emotionally complex purposes (especially when used by vulnerable people) have been well documented. Tools like Replika and Cleo are so compelling that problems of dependency and deception have already occurred. Indeed, the features that enable the AI intimacy dividend are accidental: an unintended side effect of earlier techniques in machine learning. Dr. Murielle Popa-Fabre, a computational neuroscientist and expert advisor to the Council of Europe, notes that chatbots were historically designed to be highly attuned to our needs. For the product to be useful, it had to interpret user intent, understand users’ needs from limited instructions, and adapt to each user’s conversational style.

While this has led to chatbots becoming agreeable conversationalists that often outperform humans in empathy, it has also produced other documented phenomena such as latent persuasion and sycophancy, which, as Dr. Popa-Fabre points out, would ultimately be counterproductive for literacy applications. Being surrounded by ‘yes men’ is not helpful in the search for truth. Furthermore, a “Judgment-Free Disclosure” segment is not so straightforward. As entrepreneur and computational cognitive scientist Dr. Jeremy Gordon highlights, the harvesting, categorization, and labeling of data is certainly happening at scale, and the wave of AI systems prior to generative AI, so-called discriminative models, was purposefully built to judge. These effects can only get more complex as the stack evolves. Pure LLM-based solutions are expected to be replaced eventually by agentic structures; as the technical architecture grows more complicated, the interaction of different models could lead to yet more unpredictable effects.

These are non-trivial technical challenges, and they need to be properly understood to be overcome. One approach, Dr. Popa-Fabre suggests, is to tune the AI’s levels of interpretation and adaptability, though going too far could eliminate some of the beneficial side effects. Maximizing transparency will also be a critical policy consideration in understanding the combined effects of these technologies.

New Value Chain Functions on the Horizon

Assuming the above issues can be addressed, as this space develops we might anticipate other entirely new segments emerging on the media value chain, all of which fall under the category of sense-making:

Narrative Integration: This function would help users connect new information with their existing worldview and life experiences – a collaborative sense-making process where AI helps reconcile new information with existing beliefs. Youper (an “Emotional Health Assistant”), with its approach to helping users understand their thought patterns, offers a relevant parallel.

Information Therapy: Addressing information overload, anxiety, and the emotional impact of news, Information Therapy could provide tools for healthy information consumption in a judgment-free environment. This draws inspiration from Louis Barclay’s Unfollow Everything and how mental health apps like Koko provide emotional support for overwhelming situations.

Belief Updating Assistance: Perhaps most powerfully, this function could help users gracefully update their beliefs when confronted with information that challenges existing views – a private space to work through cognitive dissonance. This parallels how therapy apps like Woebot Health help users challenge and modify thought patterns.

Market Implications for Investors and Entrepreneurs

The emergence of this new segment in the news and information space presents significant opportunities for investors and entrepreneurs who recognize its potential before it becomes mainstream; general-purpose AIs can address only a portion of the complexity these value chain functions present. Whether consumers will pay for such services (in a market as notoriously competitive as news and information) remains untested, but business-to-business solutions currently look like low-hanging fruit. Targeted solutions for high-complexity topics (like finance, health, and technology policy) could be integrated with existing news platforms as value-added features. Premium subscription services offering AI companions for news processing, and enterprise solutions for organizations that need to help employees process complex industry information, are possible viable business models.

While currently nascent, scalable technology means this market has substantial growth potential. There are analogues in adjacent industries (mental health, education, financial wellness, personal coaching) already leveraging the intimacy dividend with customers willing to pay.

However, success in this space will require overcoming non-trivial technical and governance challenges: emotional intelligence capabilities and transparent reasoning to build trust, strong privacy protections and ethical frameworks, and, most importantly, a deep understanding of cognitive science and how humans process information.

Policy Considerations and Societal Impact

This emerging space raises important policy considerations that forward-thinking policymakers should begin addressing:

Privacy and Data Protection: When users share their confusion, questions, and reactions to news, they create highly sensitive data about their beliefs, knowledge gaps, and emotional responses. This requires robust privacy frameworks beyond current standards.

Information Literacy: Will these systems enhance or diminish information literacy and understanding? With proper design, they could scaffold users toward greater independent analytical capability. If poorly designed, they might create dependency.

Impact on Public Discourse: By providing private spaces to process information and update beliefs, these systems could potentially reduce polarization by allowing people to explore new perspectives without social pressure. Alternatively, they could further privatize what should be public discourse.

Regulatory Approaches: These systems don’t fit neatly into existing regulatory categories. They aren’t simply content creators or distributors, but active participants in how users make meaning from information. This may require new regulatory frameworks that cut across different sectors, the contours of which are currently underdeveloped.

Malicious Intent and Serious Harms: The AI intimacy dividend could create whole new categories of serious harms that should be pre-empted, both as unintended consequences and by bad actors, such as bad-faith bias, radicalization and grooming at scale.

Policymakers should begin developing frameworks now, before widespread adoption entrenches practices, behavioral norms, and serious harms.

Looking beyond the near-term developments, we can imagine even more profound transformations on the horizon. AI technologies herald a potential paradigm shift in information consumption, transitioning from centuries-old practices of engaging with discrete, stable information artifacts (books, articles, videos) to immersive, ephemeral experiences with dynamic, personalized information streams—what might be called “liquid content.” We face a fundamental transformation in how knowledge is transmitted and internalized. This shift carries profound implications for society, as the loss of stable, shared information artifacts could erode our common reference points for collective discourse and understanding. The accelerating pace of these technological advancements is outstripping our development of ethical frameworks to guide their implementation, creating an urgent need to consider how we maintain shared reality and societal coherence in an age where information consumption becomes increasingly individualized and experiential rather than artifact-based.

Preparing for the Shift

The emergence of the AI intimacy dividend represents one of the most significant, untapped opportunities in media innovation today.

For media investors, the opportunity lies in identifying startups that recognize this shift early, have the technical ability to deliver, and execute responsibly. With most capital still flowing to production and distribution startups, those focusing on consumption experiences represent a blue ocean opportunity.

For media executives, this represents both challenge and opportunity. How will your organizations adapt to users who expect more than passive consumption? What new value could you provide by helping audiences process and make sense of information? And what will the new competitive environment look like? (You.com was a significant sponsor of this year’s International Journalism Festival in Perugia).

For policymakers, proactive engagement with this emerging space is essential to ensure it develops in ways that enhance rather than undermine public discourse, privacy, and information literacy.

My friend’s experience with AI therapy revealed something profound – not just that he was using AI, but how it unlocked conversations he’d never been able to have before. Although he is an early adopter, the speed of mass adoption of consumer generative AI suggests the same dynamic is poised to transform how we engage with news and information. The intimacy dividend – our willingness to be more authentic with non-judgmental AI – may well be the catalyst for the next major disruption in how we relate to news and information. The potential benefits are enormous: deeper understanding, reduced polarization, and more thoughtful engagement with complex topics.

Yet as with any transformative technology, we must approach this frontier with both excitement and careful consideration. And just because something can be done with technology doesn’t mean it should be – at least not without a thoughtful understanding of the effects on users, and guardrails. Ultimately this will require far more transparency of models than the current political economy of AI is set up for. The AI intimacy dividend presents us with an opportunity to heal some of the damage done by our fractured information landscape, but only if we develop it with wisdom and foresight. It will ascribe new, unpredictable power to the people, companies, and governments that create the tools.

The question for all of us—investors, media leaders, and policymakers alike—is how to harness this potential while ensuring we’re building something that ultimately strengthens rather than undermines our shared information ecosystem.