From Attention Merchants to Intention Architects: The invisible infrastructure reshaping human curiosity


We are witnessing the twilight of the attention economy as we know it, and the birth of the intention economy.

The attention merchants have ruled for more than a century, from yellow journalism to clickbait, shock radio to social media. But that empire is crumbling. Major publishers lost 50% of their traffic when Google shifted to AI Overviews. The $685 billion digital advertising industry faces an existential crisis as AI assistants stop clicking on ads. SEO, the dark art that shaped two decades of web content, started to crack the moment search engines stopped sending people to websites.

The Curiosity Graph

Three new signals point to what’s likely to replace this dissolving order. First, AI assistants now maintain persistent memory across our conversations, learning not just what we ask and what we prefer, but the trajectory of our curiosity over time. Second, prediction markets have surged past $2 billion in weekly volume, outsourcing the pursuit of answers to market mechanisms that synthesize collective intelligence and transforming uncertainty from a news cycle into a market cycle. Third, a pattern I suspect is more universal (based on the finding that AI conversations typically span multiple questions per user) but can only verify in myself: I use AI not to get answers but to figure out what I’m actually asking. My AI chat histories are littered with iterative conversations that start with one question and end somewhere entirely different, each interaction refining not just the answer but the question itself.

What might be emerging is what I call a “curiosity graph”: not tracking what captures your attention, like social media’s interest graph, but mapping the evolution of your questions over time. Each interaction deepens the AI’s understanding not just of what you know, but of what you don’t yet know to ask. In the near future, that understanding could become not just malleable (profiting from shaping the trajectory of your curiosity) but also tradeable: your emerging uncertainties packaged as derivatives, your AI assistant potentially gambling on what you’re about to ask.
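To make the idea concrete, here is a minimal sketch of how a curiosity graph might be represented: questions as nodes, refinements as edges. Everything in it, the field names, the edge semantics, the CuriosityGraph class itself, is a hypothetical illustration, not anything an AI vendor has published.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Question:
    """One question the user asked: a node in the graph."""
    text: str
    topic: str
    asked_at: datetime

@dataclass
class CuriosityGraph:
    """Hypothetical structure: questions as nodes, refinements as edges.

    Unlike an interest graph (what held your attention), the edges here
    record how one question led to the next.
    """
    nodes: list[Question] = field(default_factory=list)
    edges: list[tuple[int, int]] = field(default_factory=list)  # (from, to)

    def add_question(self, q: Question, refines: int | None = None) -> int:
        """Add a question; optionally link it to the one it refines."""
        self.nodes.append(q)
        idx = len(self.nodes) - 1
        if refines is not None:
            self.edges.append((refines, idx))
        return idx

    def trajectory(self, start: int) -> list[str]:
        """Follow refinement edges forward from a starting question."""
        nxt = {a: b for a, b in self.edges}
        path, cur = [self.nodes[start].text], start
        while cur in nxt:
            cur = nxt[cur]
            path.append(self.nodes[cur].text)
        return path
```

The commercially interesting part of such a structure wouldn’t be the nodes (what you asked) but the edges: the pattern of how one question becomes the next is what a system would need in order to anticipate, and shape, the question after that.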

Researchers are beginning to document the foundations of this phenomenon, what some at Harvard Data Science Review and elsewhere call the ‘intention economy’, where AI systems collect, commodify, and potentially manipulate user intent. But this only scratches the surface. What they’re witnessing is accompanied by a fundamental restructuring of how information flows through society. In the intention economy now emerging, AI systems could compete to anticipate and shape what we seek before we even know we’re seeking it. The infrastructure being built right now, largely invisible to most of us, won’t just determine what we see; it will determine what we want to see before we know we want it.

This shift represents something much bigger than a new business model or platform. When machines can process infinite information at zero marginal cost and maintain perfect memory of our interests, the battleground for democracy moves from the visible terrain of content to the hidden layer where human curiosity itself gets shaped. The question is no longer who controls what we read, but who controls what we think to ask. This is the intention economy: where the most valuable real estate isn’t human attention but human curiosity itself.

The Great Inversion

To understand how intention replaces attention, we need to examine the fundamental inversion happening in information flow. When you ask an AI a question today, it typically has 128,000 to 200,000 tokens of context to draw from when crafting its response; at roughly 0.75 words per token, that is about 96,000 to 150,000 words, or a book’s worth of information. With global information doubling every few years and millions of articles published online daily, the human brain hasn’t evolved to handle this exponential growth. Machines have become necessary intermediaries, ingesting the overflow of content we can’t possibly read and distilling it into what we need to know.

This represents a structural shift in how information moves through society. As I’ve written elsewhere, we’re entering a B2A2C world: entities produce information, AI consumes and processes it, then AI creates what humans actually see. The original information becomes raw material for machines, not humans.

This fundamentally differs from the platform era. Google organized links to pages that humans would read. Facebook surfaced posts from your network. Twitter showed you what was trending. These platforms had algorithmic influence but not agency; they couldn’t create new content, only rank what existed. AI systems have functional agency: they don’t just select from options; they reshape information sources into entirely new forms, generating new realities from patterns in data. Information is no longer artifact-based or permanent; it becomes ‘liquid’, constantly re-formed based on who’s asking and how. When AI synthesizes an answer, it’s not pointing you to information, it’s creating information that never existed in that exact form before. Each response is intention-specific, shaped not by what exists but by what you’re trying to know.

The Inevitability of Value Migration

This machine intermediation doesn’t just change the physics of information flows; it fundamentally rewrites the economics of information. When synthesis becomes the primary value-add, creating content becomes commoditized while controlling interpretation becomes more valuable.

A well-trodden economic pattern reinforces this great inversion and makes it irreversible. Once something becomes free to copy or generate, value doesn’t disappear; it migrates to whoever controls distribution and synthesis: a shift from a stock model of economics to a flow model. Once music could be copied infinitely at no cost, Spotify captured the value by controlling access. Once software replication cost nothing, software as a service (SaaS) captured the value by controlling updates and integration.

Information has hit the same inflection point. When AI can generate infinite content at zero marginal cost (meaning the cost of producing each additional unit of synthesized information approaches zero, not that infrastructure is free; the billions required for data centers and model training are another story) and when machines, not humans, are the primary consumers of that content, value can’t live in the content itself anymore. It migrates to the infrastructure that controls synthesis: how AI finds, processes, interprets, and delivers information to humans. Not all infrastructure becomes highly valuable; mainly the layers that shape intention and meaning do.

The uncomfortable truth is that all content that’s digitized inevitably becomes infrastructure food. Taylor Swift’s greatest economic power isn’t necessarily her songs but the Swifties and the Taylorverse ecosystem she controls. The only information that maintains human-controlled value is what stays offline or remains tacit: insider knowledge, personal relationships, embodied expertise. But the moment any information gets digitized, it becomes raw material for machines. Value might spike at the point of first digitization (the exclusive scoop, the leaked document) but immediately becomes a commodity as machines absorb it.

The recent flurry of deals between AI companies and premium content publishers, such as OpenAI’s reported $250 million deal with News Corp and its partnerships with Condé Nast and the FT, reveals this dynamic perfectly. These aren’t just content licenses; they’re ultimately infrastructure plays. The AI companies aren’t just buying content or training data; they’re buying the right to become the legitimate pipes through which all information flows.

The Intention Infrastructure Stack

Four layers of the new information infrastructure have already emerged, but they could be just the foundation for something far stranger. The retrieval layer determines how AI finds information; Pinecone raised at a $750 million valuation to own this space, while Databricks hit $62 billion building the data infrastructure for AI. The attribution layer tracks who gets credit and compensation; TollBit raised $24 million to build payment rails for AI-consumed content. The synthesis layer controls how information combines; LangChain has had over one million monthly downloads for its orchestration framework, while Anthropic’s Model Context Protocol offers a competing vision. The transaction layer enables machine-to-machine payments; Google recently announced the Agent Payments Protocol, while crypto solutions promise programmable money for agent transactions.
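As a thought experiment, here is how those four layers might compose in code. These interfaces are invented for illustration; they don’t correspond to Pinecone’s, TollBit’s, LangChain’s, or Google’s actual APIs.

```python
from typing import Protocol

# Hypothetical interfaces for the four layers described above; none of
# these correspond to a real product's API, only to how the layers
# would compose.

class RetrievalLayer(Protocol):
    def find(self, query: str) -> list[str]:
        """Locate candidate sources for a query (e.g., a vector database)."""
        ...

class AttributionLayer(Protocol):
    def credit(self, sources: list[str]) -> dict[str, float]:
        """Decide who gets credit and compensation for consumed content."""
        ...

class SynthesisLayer(Protocol):
    def combine(self, query: str, sources: list[str]) -> str:
        """Turn retrieved sources into a single synthesized answer."""
        ...

class TransactionLayer(Protocol):
    def settle(self, payouts: dict[str, float]) -> None:
        """Execute machine-to-machine payments to content owners."""
        ...

def answer(query: str, r: RetrievalLayer, a: AttributionLayer,
           s: SynthesisLayer, t: TransactionLayer) -> str:
    """One pass through the stack: retrieve, attribute, settle, synthesize."""
    sources = r.find(query)
    t.settle(a.credit(sources))
    return s.combine(query, sources)
```

The point of the sketch is the ordering: retrieval and attribution happen before synthesis, so whoever controls those layers decides what raw material an answer is even built from.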

But the new signals point to what could come next:
When AI memory becomes persistent, preferences and question histories can be stored long term, and your ‘curiosity graph’ could become both a malleable and a tradeable asset.

Today, companies bid for keywords you’ve already searched. In the near future, they might bid for the right to influence what you ask next. The advertising auction mechanism won’t be ‘show this ad to someone searching for shoes’ but ‘make this person curious about sustainable high-end footwear before they realize they need new shoes.’ The value will lie in the inception of the curiosity itself. When AI can predict and shape desires before they form, advertising could transform from persuasion to anticipation: why convince someone to need something when you can architect the desire itself?
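A speculative sketch of what such an auction might look like, assuming it reuses the second-price mechanism of today’s ad exchanges; the class, fields, and bids below are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CuriosityBid:
    advertiser: str
    topic: str      # the question-space the bidder wants to seed
    amount: float   # willingness to pay to shape the next question

def run_curiosity_auction(bids: list[CuriosityBid]) -> tuple[CuriosityBid, float]:
    """Second-price auction, but over predicted curiosity rather than a
    keyword already searched: the winner pays the runner-up's bid for
    the right to influence what the user asks next."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner, price

# Example: seeding curiosity about sustainable high-end footwear
# before the user realizes they need new shoes.
bids = [
    CuriosityBid("brand_a", "sustainable high-end footwear", 2.40),
    CuriosityBid("brand_b", "running shoes", 1.10),
]
winner, price = run_curiosity_auction(bids)  # brand_a wins, pays 1.10
```

The mechanism is unchanged from keyword advertising; what changes is the auctioned object, a predicted future question rather than an observed past one.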

And just as third-party cookies created an entire economy of behavioral data, curiosity profiles could be packaged and sold, with derivatives traded on top of them. Imagine futures markets on what topics high-net-worth individuals will be interested in next quarter, options on your intellectual trajectory, swaps based on the correlation between your curiosity patterns and your purchasing behavior.

When prediction markets synthesize collective intelligence at scale, they don’t just forecast events; they could create microprofits from anticipating intent. Every query becomes a bet. Your AI assistant might literally gamble on, and potentially profit from, what you’re about to ask, creating liquidity from uncertainty itself. The spread between what you think you want to know and what you actually need to know could become a tradeable inefficiency.

Then memory plus intent could equal something we don’t have words for yet. Emotional derivatives? Curiosity bonds? The infrastructure for trading not just information, but the anticipation of information needs. Second- and third-order knowledge byproducts that make today’s quantified society look quaint.

In this world, ground truth, truth-seeking, and self-correcting mechanisms for information become more important than ever, because they are effectively the root file of society. Journalists building next-generation verification systems, civic technologists creating transparency tools, researchers developing AI-native fact-checking protocols: they’re absolutely right in their convictions. This work is essential. But it’s not enough. The intention infrastructure will shape what eventually reaches humans and how stories get synthesized. We need to start deciding how to shape the next information ecosystem so that it serves, rather than undermines, human knowledge and understanding. Facts need funnels, and those funnels shape democracy.

Democracy in the Intention Economy

The emerging information inversion means that when we have answers for everything, curiosity becomes the final scarcity. The systems being built today will determine whether that curiosity expands or contracts; whether we ask better questions or stop questioning altogether.

Consider the week of the 2028 US presidential election. Uncertainty about a candidate emerges. Voters don’t read articles anymore, so they ask their AI assistant, “What should I know about this?” The AI’s response flows in microseconds through infrastructure invisible to human eyes: retrieval systems that prioritize certain sources, attribution layers that exclude independent journalism, synthesis protocols that smooth away complexity. But something else is emerging: their curiosity patterns have value. The questions they asked last week, the topics they explored, the doubts and fears they expressed all feed into systems that anticipate what they’ll want to know next. Political campaigns begin to realize that shaping what people ask might be more powerful than controlling what they read. The same techniques advertisers use to anticipate consumer intent could determine democratic discourse. This version of infrastructure doesn’t just mediate information; it’s learning to guide curiosity itself. The election might be influenced not by what journalists write or voters seek, but by architectural decisions made years earlier by engineers who thought they were just building better search.

The full intention economy I’ve described might be decades away, or might never arrive in this form. The picture I’ve painted is probably too dystopian, the derivatives too speculative. But even if I’m wrong about the details, the signals are clear: curiosity infrastructure is being built, intention is becoming valuable, and the systems that shape what we ask are emerging now.

Market forces optimize for commercial intent, not civic understanding. Without intervention, the same technology that could help citizens navigate complexity will confirm biases instead. Network effects mean this infrastructure, once locked in, becomes nearly impossible to change; we may have only years before standards crystallize. This requires public interest actors with conviction to start building alternatives now, not reacting later.

But here’s what gives me hope: patient capital and policymakers are beginning to recognize infrastructure as the leverage point. The standards aren’t set yet. The architecture remains fluid. If these signals are correct, we might be at a rare moment where we can see a paradigm shift coming. Unlike previous shifts that caught democracy off guard (radio’s consolidation, television’s commercialization, social media’s polarization), we might actually have warning this time. If curiosity is becoming the new scarcity, if intention shapes outcomes more than attention ever could, then whoever builds the curiosity infrastructure could write the future of human understanding. And unlike attention, which is zero-sum and depletable, curiosity can grow through exercise, each question potentially spawning new questions, expanding rather than exhausting with use. The right infrastructure could create abundant understanding where the attention economy leveraged scarcity.

For centuries, democracy fought for the right to know: freedom of information, transparency, the end of censorship. If AI makes all information instantly accessible, we might face a new frontier: ensuring the courage to question survives the comfort of infinite answers. The intention economy might not be inevitable; it could be a design space waiting for architects. The question isn’t whether we’ll have curiosity infrastructure (we probably will). The question is whether we’ll build it to expand human wonder or contract it.

Publications & Resources

See More Resources