For Obvious Reasons

When Realism Meets Optimism
FERNANDO DE FRUTOS, CFA, PhD | JANUARY 2026

• Global growth is firming; the U.S. remains the marginal leader: A broader pickup in investment and productivity supports a constructive global cycle. Within that, continued U.S. outperformance and any narrowing in fiscal and trade gaps at the margin are typically dollar-supportive and leave less room for debasement narratives.
• AI is already lifting growth, not just promising to: It is visible in a measurable capex upcycle across data centers, chips, networking, and software, and it is visible indirectly in shadow GDP as LLMs deliver market-comparable work at near-zero marginal cost.
• AI value capture will concentrate where inference is metered and embedded: As inference becomes a critical input of production, the leading LLM platforms are best positioned to convert usage into recurring revenue through subscriptions, enterprise licensing, and workflow integration.

Entering any new year, the temptation is to invent a fresh narrative, often spiced with worries, especially after three consecutive years of above-average performance. After extended gains, the gambler’s fallacy beckons: assuming a reversal is owed. It is not. Each year, month, and day starts with its own distribution of risks and returns. Absent a specific shock on the horizon, the realistic starting point is continuity, and continuity still looks constructive.

MACRO: The cycle is intact
Three numbers carry most of the macro message. First, growth: the U.S. economy is still expanding at a higher-than-expected pace, with Q3 GDP up 4.3% at an annual rate. Second, productivity: nonfarm business productivity rose 4.9% annualized in Q3. Growth with productivity is the best foundation for a durable expansion because it raises profits and real incomes. Third, inflation: core inflation eased to 2.6% in late 2025. Taken together, these data points describe a continuation of the cycle.

The labor market fits that benign view. It is cooling at the margin without breaking. With unemployment around 4.4%, there is enough slack to reduce wage pressure, but not enough weakness to trigger a broad demand shock. If that glide path holds, the regime looks increasingly like a workable Goldilocks: growth that remains acceptable, inflation that continues to ease, and a clearer path toward a less restrictive policy backdrop. Risks remain, but the data does not justify a default pessimistic stance.

MICRO: AI is already in the data
AI is the clearest reason the productivity ceiling may be rising. Its impact is still under-measured because much of what people get from large language models arrives at near-zero marginal cost. GDP records transactions; it does not do a good job recording “free” services. Yet a meaningful share of what LLMs produce is comparable to paid work: drafting, analysis, coding support, translation, troubleshooting, and parts of legal and medical workflows. If the value of these services were fully imputed, AI would likely already look like a materially larger contributor to output than official statistics suggest (see the AI Shadow GDP box).

This leads to the key bridge. LLMs are already generating real economic value that shows up as surplus rather than revenue—shadow GDP. Markets, meanwhile, are pricing a future in which a larger share of that value becomes paid and captured through subscriptions, enterprise licenses, usage-based billing, and embedded workflow tools. That shift matters because captured value shows up as profits, investment, and eventually measured GDP. Either way, the economy gets a tailwind. The difference is whether it appears inside the national accounts or outside them.

AI is also supporting growth in the most measurable way possible: investment. Even if GDP misses part of the value of the output, it does capture what is paid for: data centers, chips, networking, and the software that turns models into scalable infrastructure. AI is not only a productivity story for tomorrow; it is a capex story today.

POSITIONING: Obvious does not mean crowded
The first implication is that debasement narratives should fade. The case for flight into non-cash-flow “havens” rests on near-term monetary breakdown, which does not fit the current mix of strong growth, rising productivity, and cooling inflation. The U.S. remains the relative outlier. Firm nominal growth makes deficits easier to carry in ratio terms, and policy that favors domestic substitution can improve the external balance at the margin. The dollar does not need a flawless backdrop. It needs relative outperformance, and the U.S. still has it.
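
To make the ratio point concrete, the sketch below runs the standard debt-to-GDP arithmetic under two nominal growth assumptions. All of the numbers (interest rate, growth rates, deficit, starting ratio, horizon) are hypothetical placeholders chosen only to show the mechanism, not estimates of any actual economy.

```python
# Minimal sketch of debt-to-GDP dynamics: b[t+1] = b[t] * (1 + i) / (1 + g) + d
# where b = debt/GDP, i = nominal interest rate, g = nominal GDP growth,
# d = primary deficit as a share of GDP. All figures below are hypothetical.

def debt_ratio_path(b0: float, i: float, g: float, d: float, years: int) -> list[float]:
    """Project the debt-to-GDP ratio under constant rates and deficits."""
    path = [b0]
    for _ in range(years):
        path.append(path[-1] * (1 + i) / (1 + g) + d)
    return path

b0, i, d, years = 1.00, 0.04, 0.03, 10   # 100% of GDP, 4% rates, 3% primary deficit

slow = debt_ratio_path(b0, i, g=0.03, d=d, years=years)   # 3% nominal growth
fast = debt_ratio_path(b0, i, g=0.06, d=d, years=years)   # 6% nominal growth

print(f"After {years} years at 3% nominal growth: debt/GDP = {slow[-1]:.0%}")
print(f"After {years} years at 6% nominal growth: debt/GDP = {fast[-1]:.0%}")
```

The mechanism, not the specific figures, is the point: when nominal growth runs above the effective interest rate, the existing debt stock shrinks relative to GDP even before the new deficit is added back, which is why firm nominal growth makes a given deficit easier to carry in ratio terms.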

The second implication is how to think about AI exposure. In most nascent technologies, value creation is clear long before value capture is. The early internet was an open protocol: it created enormous surplus, and it took years for the eventual winners to emerge. AI is different. The core asset is a product with high barriers to entry: frontier large language models. Building and operating them demands scale in compute, data, engineering, distribution, and increasingly security and compliance. That should concentrate value capture more than in many past technology waves.

In practice, this means focusing up the stack. The firms that control the leading LLMs and can deliver inference reliably at scale are best positioned to set commercial terms through metered usage, enterprise licensing, and workflow integration. Ecosystems will form around these models, but pricing power is likely to sit with the small group that keeps pushing the frontier and serving it at industrial scale. Chips and data centers matter, but they are the build-out around the product. ChatGPT pricing already sketches the path: a ladder from 0 to 20 to 200 dollars per month. As inference becomes embedded in professional and enterprise workflows, that ladder is likely to keep adding zeroes. Monetization may be uneven, but the direction is clear.

None of this is a call for complacency. Scenario thinking still matters. The risks are real: geopolitics, policy shocks, and the open question of whether the labor market cools smoothly or starts to crack, especially if AI-driven task substitution accelerates unevenly. On the AI side, the critical uncertainties sit in the translation layer: how quickly adoption becomes durable monetization; how competition evolves and compresses pricing; and how regulation shapes platform boundaries and economics.

Markets can always correct. But the asymmetry is hard to ignore. The upside from a compounding productivity technology, combined with a macro backdrop that is normalizing rather than deteriorating, can be structurally larger than the downside of a routine drawdown. For obvious reasons, this does not feel like a year to position around fear. It feels like a year to recognize that the bigger risk is missing the compounding.

Notes to the AI Shadow GDP box

1. Conceptual lineage and scope disclaimer
This replacement-cost estimate is conceptually related to the literature on valuing free digital goods and welfare-adjusted GDP (e.g., BEA experimental work on “free” digital services and the GDP-B framework of Brynjolfsson et al.). It differs in method: it values the services LLMs deliver using observed interaction volumes and human-equivalent production time, rather than willingness-to-pay or consumer-surplus surveys. It is an order-of-magnitude replacement-cost estimate, not an official GDP statistic. Measured GDP rises only to the extent that these services are priced and recorded as revenues, wages, or investment, hence the distinction between value created (shadow GDP) and value captured.
2. Usage inputs
Sources for the usage inputs: figures OpenAI disclosed to Axios, as cited by major outlets including The Verge, put ChatGPT at ~2.5B prompts per day globally and ~330M per day from the U.S. StatCounter Global Stats’ U.S. AI chatbot share (December 2025) provides a transparent basis for scaling Gemini and Claude usage relative to ChatGPT. These inputs feed the illustrative sketch that follows these notes.
3. Human-equivalent minutes assumption
“Human-equivalent minutes per interaction” reflect how long a competent professional would typically need to research, draft, and format a comparable first output (e.g., summarizing information, drafting text, outlining analysis, producing a code snippet). Even modest tasks routinely require 1–6 minutes of skilled human time—typically longer than the LLM’s response time—making this a reasonable range for a replacement-cost valuation.
4. Revenue run-rate
Reuters reported an OpenAI annualized revenue run-rate of approximately $10 billion as of June 2025.
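
As a rough illustration of the arithmetic behind the box, the sketch below combines the usage inputs from note 2, the human-equivalent-minutes range from note 3, and the revenue run-rate from note 4. The blended hourly wage and the multiplier used to fold in Gemini and Claude are illustrative assumptions, not figures from the notes; the output is an order of magnitude, not a precise estimate.

```python
# Order-of-magnitude replacement-cost sketch for "AI shadow GDP".
# Inputs from the notes: ~2.5B ChatGPT prompts/day globally (note 2),
# 1-6 human-equivalent minutes per interaction (note 3),
# ~$10B OpenAI annualized revenue run-rate (note 4).
# ASSUMPTIONS (not from the notes): the $40/hour blended skilled wage and the
# 1.3x multiplier folding in Gemini and Claude usage are illustrative only.

CHATGPT_PROMPTS_PER_DAY = 2.5e9        # note 2
OTHER_MODELS_MULTIPLIER = 1.3          # assumption: scales in Gemini/Claude
MINUTES_PER_INTERACTION = (1, 6)       # note 3: human-equivalent minutes
BLENDED_WAGE_PER_HOUR = 40.0           # assumption: skilled-labor wage, USD
OPENAI_REVENUE_RUN_RATE = 10e9         # note 4: ~$10B annualized, June 2025

def annual_replacement_cost(minutes_per_interaction: float) -> float:
    """Annualized value of LLM output priced at human replacement cost."""
    daily_prompts = CHATGPT_PROMPTS_PER_DAY * OTHER_MODELS_MULTIPLIER
    hours_per_day = daily_prompts * minutes_per_interaction / 60
    return hours_per_day * BLENDED_WAGE_PER_HOUR * 365

low = annual_replacement_cost(MINUTES_PER_INTERACTION[0])
high = annual_replacement_cost(MINUTES_PER_INTERACTION[1])
print(f"Shadow GDP (value created): ${low/1e12:.1f}T - ${high/1e12:.1f}T per year")
print(f"Captured revenue benchmark: ${OPENAI_REVENUE_RUN_RATE/1e9:.0f}B per year")
print(f"Captured share of low-end estimate: {OPENAI_REVENUE_RUN_RATE/low:.1%}")
```

Under these assumptions, even the low end of the replacement-cost range is a large multiple of the revenue currently captured, which is the gap between value created and value captured that the main text refers to.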


* This document is for information purposes only and does not constitute, and may not be construed as, a recommendation, offer or solicitation to buy or sell any securities and/or assets mentioned herein. Nor may the information contained herein be considered as definitive, because it is subject to unforeseeable changes and amendments.

Past performance does not guarantee future performance, and none of the information is intended to suggest that any of the returns set forth herein will be obtained in the future.

The fact that BCM can provide information regarding the status, development, evaluation, etc. in relation to markets or specific assets cannot be construed as a commitment or guarantee of performance; and BCM does not assume any liability for the performance of these assets or markets.

Data on investment stocks, their yields and other characteristics are based on or derived from information from reliable sources, which are generally available to the general public, and do not represent a commitment, warranty or liability of BCM.
