The Deeper
You Go
"Never get out of the boat."
— Apocalypse Now, 1979 (Francis Ford Coppola, adapted from Conrad's Heart of Darkness)
Nobody decided to make OpenAI critical infrastructure. It happened one API call at a time. A workflow here, a product feature there, a board deck built on GPT outputs, a hospital intake form, a legal brief, a trading signal. Each step seemed reasonable. Each step went deeper. The shore receded without anyone noticing — because when the river is moving smoothly, nobody checks how far they've come.
Today OpenAI closed a $110 billion funding round — SoftBank $30B, Amazon $50B, Nvidia $30B — at a $730 billion pre-money valuation. Combined with ~$40 billion already on hand, OpenAI now sits on approximately $150 billion in cash and what Sam Altman calls a "very long runway." The headlines are writing themselves. This paper is about what comes next, and what the numbers actually say when you read them carefully.
The raise is real. The runway is real. What it doesn't change is the fundamental position OpenAI now occupies: two races running in opposite directions, neither of which stops because a funding round closed. The burn race — how fast cash depletes against a cost structure that has no peer in corporate history. And the competition race — how fast Google, Anthropic, and open-source models erode the market position that justifies raising the next round. These clocks are not independent. Every move that addresses one accelerates the other.
Deutsche Bank: "No startup in history has operated with losses on anything approaching this scale. We are firmly in uncharted territory." In H1 2025, OpenAI generated $4.3 billion in revenue and lost $13.5 billion — $3.14 lost for every dollar earned, a burn of roughly $3.1 million per hour around the clock. Through 2029, it projects $115–143 billion in cumulative negative free cash flow. The $150B raise extends the runway. It does not bend the cost curve.
And underneath all of it: 92% of Fortune 500 companies now run production workflows on OpenAI infrastructure. Seven more years of runway means seven more years of deepening dependency across the global enterprise stack. The deeper you go, the larger the blast radius becomes — not if things go wrong immediately, but if things go wrong at the end of a very long river.
This paper maps both the race and the blast radius.
How the Market Is Reading This
The prevailing bull view: OpenAI will be one of the "expected long-term winners in AI." Corporate backers are publicly all-in — the largest are betting on dominance, not just survival.
One widely cited bubble framework holds that the market is technically "not in a bubble yet" — the framework requires equity issuance (IPOs) as its fourth horseman, which hasn't materialized. Goldman Sachs agrees: the current AI run is underpinned by real earnings growth, unlike dotcom.
900M weekly active users. 50M paying consumers, 9M business users. $280B+ projected 2030 revenue split consumer/enterprise. The scale of the user base is real, even if the path to profitability is contested.
Circular financing at scale. Nvidia invests in OpenAI → OpenAI buys Nvidia chips → money moves in circles. Reuters: the round "exacerbates Wall Street concerns about circular financing agreements where firms invest in and sign supply deals with each other, inflating demand and revenue." INSEAD flags dotcom-era vendor financing echoes at troubling scale.
The burn math is brutal. $74 billion in operating losses projected for 2028 alone. HSBC projects a $207 billion funding shortfall by 2030 — the gap between what OpenAI generates and what it needs to spend. Today's $150B in total cash covers roughly 70% of that gap, and none of the spending beyond it.
In August 2025, Altman himself told The Verge the AI market is in a bubble. Apollo's Torsten Slok argued the current AI bubble is bigger than the internet bubble. Neither has retracted. Today's raise didn't address the structural question — it pushed the reckoning further out.
The answer to "how are you going to pay for all this?" turned out to be the same one it always has been — convince enough CEOs and rich people you're going to change the world and get them to write very large checks. Also: Stargate has turned out to be mostly bilateral deals rather than a real coordinated entity. The coherent joint venture it was announced as doesn't appear to exist.
The bull case is real: 900 million weekly users, $280B revenue projections, and backers who have staked corporate reputations on the outcome. The bear case is also real: $74B in projected 2028 losses, a $207B funding gap by 2030, and circular financing structures that make it difficult to distinguish genuine demand from money moving in circles. Both can be simultaneously true — and that coexistence is precisely the systemic risk this paper maps. A company that is strategically important, competitively contested, and structurally cash-negative at scale is not a standard vendor risk. It is a new risk category. One that most institutional risk frameworks are not yet equipped to model.
Two Clocks,
One Race
The $110B raise reframes the risk — OpenAI already had ~$40B on hand; this is not a $150B war chest conjured from nothing. This is no longer primarily a story about whether OpenAI can make payroll in 2027. It is a story about whether OpenAI can win two simultaneous races — against its own cost structure, and against competitors who are structurally cheaper and accelerating. The complication: these races are not independent. The moves required to win one actively impede winning the other.
The Amazon Trainium pivot embedded in the raise is the bet on changing the cost structure. OpenAI will purchase Trainium chips for its new Frontier enterprise offering, pivoting away from Nvidia GPUs and Microsoft Azure compute. This is the right strategic move. It is also a multi-year capital and engineering program, unproven at scale, executed while running full-speed in production. The $35B Amazon tranche is milestone-contingent — if those milestones slip, effective immediate liquidity is closer to $115B. The burn clock doesn't pause for infrastructure transitions.
The competitive threat is not that a rival builds a better model. It is that the premium pricing OpenAI commands — the pricing that makes the revenue thesis work — erodes as alternatives become good enough. "Good enough" does not have to mean better. It means cheaper, integrated, and already in the stack. Google has all three advantages simultaneously.
The scenario risk managers need to model is not sudden death. It is a slow-motion squeeze: OpenAI executes reasonably well on both clocks, makes genuine progress, and arrives at its next major raise in 2030–2031 having burned through most of the $150B — without having achieved the cost structure or market dominance to justify a follow-on at a sane valuation. Not a collapse. A compression. The kind that forces terms nobody wants to accept.
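The interaction between a fixed cash pile and an accelerating burn is easy to sketch as a toy depletion model. In the sketch below, only the 2028 figure ($74B projected operating loss) comes from this paper; the other annual burn figures are hypothetical interpolations, and the model ignores milestone contingencies and new revenue entirely.

```python
# Toy runway model: year-by-year cash depletion under an assumed burn schedule.
# Only the 2028 figure ($74B projected operating loss) is from the paper;
# the other years are hypothetical interpolations for illustration.

def depletion_year(starting_cash_bn, burn_by_year):
    """Return the first year cash goes negative, or None if it never does."""
    cash = starting_cash_bn
    for year in sorted(burn_by_year):
        cash -= burn_by_year[year]
        if cash < 0:
            return year
    return None

hypothetical_burn = {   # $B per year, illustrative only
    2026: 20,
    2027: 35,
    2028: 74,   # the paper's projected 2028 operating loss
    2029: 90,
}

print(depletion_year(150, hypothetical_burn))  # → 2029 under these assumptions
```

Under these placeholder numbers the $150B is gone in 2029 — squarely in the window where the paper locates the next-raise compression. The point of the sketch is not the specific year; it is that runway is a function of assumptions a risk team can and should vary.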
When the Stated Story and the Primary Sources Diverged
$3.14 to Make
a Dollar
The most important structural analysis of OpenAI's position doesn't appear in any startup narrative. It appears in the mathematics of comparative cost structure — specifically, the permanent, compounding gap between what it costs OpenAI to deliver a unit of intelligence and what it costs Google to deliver the same unit. This gap has a name: margin structure inversion. And unlike most competitive disadvantages, it doesn't close with scale. It widens.
SemiAnalysis, the semiconductor research firm, quantified the gap in November 2025: Google's TPU infrastructure delivers 30% lower total cost of ownership than NVIDIA's GB200, and 44% lower from Google's internal perspective when accounting for full three-dimensional torus configuration. This is not a gap that closes with scale. It is a gap that widens with scale — because every additional query OpenAI serves multiplies the margin disadvantage.
Two independent data points, same picture. Data point one — the P&L: H1 2025 financials show $13.5B in losses on $4.3B of revenue. That is $3.14 lost for every $1.00 earned — the figure this section leads with. Data point two — Microsoft's filings: Microsoft's Q1 FY26 10-Q disclosed a $3.1B quarterly net income decrease from its equity method investment in OpenAI. Microsoft holds a 27% diluted stake. Backing out the full loss: $3.1B ÷ 0.27 = approximately $11.5B in quarterly losses, or ~$46B annualized. Against ~$20B in annualized revenue, that implies roughly $2.30 lost per dollar earned: a lower ratio than H1 2025's $3.14, even as the absolute burn accelerated from an annualized ~$27B to ~$46B — consistent with H2 revenue growing faster than it could close the gap. The two figures are not contradictory. They are from different periods and should not be cited in the same sentence as if they confirm each other precisely.
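Both back-of-envelope calculations can be rerun directly from the figures quoted above; a minimal sketch:

```python
# Reproduce the section's two data points from its quoted inputs.

# Data point one: H1 2025 P&L.
h1_revenue_bn, h1_loss_bn = 4.3, 13.5
loss_per_dollar = h1_loss_bn / h1_revenue_bn          # ≈ 3.14 lost per $1 earned

# Data point two: backing the full loss out of Microsoft's 10-Q disclosure.
msft_equity_hit_bn = 3.1    # quarterly net income decrease disclosed by Microsoft
msft_stake = 0.27           # Microsoft's diluted ownership share
implied_quarterly_loss_bn = msft_equity_hit_bn / msft_stake   # ≈ 11.5
implied_annual_loss_bn = implied_quarterly_loss_bn * 4        # ≈ 45.9

print(round(loss_per_dollar, 2),
      round(implied_quarterly_loss_bn, 1),
      round(implied_annual_loss_bn, 1))   # 3.14 11.5 45.9
```

Keeping the two periods in separate variables makes the section's caveat concrete: the numbers are derived from different reporting windows and only one of them (the H1 ratio) is a direct disclosure.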
The competitive consequence is already visible. Gemini reached 650 million monthly active users in Q3 2025. ChatGPT's US traffic declined 35% in November 2025. Marc Benioff, the Salesforce CEO who used ChatGPT daily for three years, publicly posted that he wasn't going back — not because Gemini is more capable, but because cost-structure advantages eventually surface as product advantages that users can feel before they can articulate them.
This is the mechanism the consensus has not modeled: OpenAI's cost disadvantage is permanent, not temporary. It does not resolve with the next fundraising round. It does not resolve with GPT-5. It resolves only if OpenAI builds its own silicon — which requires a capital expenditure timeline measured in years at a scale that makes its current burn look modest by comparison. Or if a buyer with its own silicon absorbs it.
The Dependency
Footprint
The dependency problem is not simply about who uses OpenAI. It's about how they use it — and whether the integration is surface-level or load-bearing. A company that uses ChatGPT for employee brainstorming is in a very different position from one that has built an automated underwriting workflow, a medical triage decision tree, or a real-time trading signal generator on top of the GPT-4 API.
The latter category is larger than comfortable. Across finance, healthcare, legal, and technology, enterprises have spent the better part of 18 months converting OpenAI's models from experimental tools into production infrastructure. The OpenAI enterprise report published in December 2025 quantified this shift: ChatGPT Enterprise message volume grew 8x year-over-year, while structured workflow usage (Projects, Custom GPTs) increased 19x. The models aren't just being used — they're being operationalized.
| Counterparty Class | Exposure Type | Severity |
|---|---|---|
| Microsoft | $13B equity, Azure revenue dependency, M365 Copilot product line, 27% ownership stake | Critical |
| SoftBank Vision Fund | $30B committed (Feb 28 2026 round); marks down to near-zero on failure; LP pressure cascades | Critical |
| AI-Native Startups | 35% of top-funded AI startups list OpenAI as foundational provider; API unavailability = product unavailability | High |
| Healthcare Systems | Clinical decision support, triage, documentation workflows now in production; regulatory complexity on replacement | High |
| Financial Services | Risk model inputs, document analysis, compliance workflows; largest enterprise AI scale per Menlo Ventures | High |
| Legal / Professional Svcs | Document review, drafting workflows; $650M+ legal AI market built on LLM infrastructure | Medium |
| Salesforce Ecosystem | GPT-powered CRM integrations in 11,000+ companies via Einstein | Medium |
| Adobe / Canva / Notion | Multi-year partnership renewals; product features dependent on API continuity | Medium |
| 14 National Governments | Civic services and public education licensing agreements | Medium |
| NVDA / Cloud Infra Players | Revenue exposure from $1.4T committed datacenter spend; construction contracts at risk | Medium |
We've spent a decade debating whether AI would replace us. The more immediate question is what happens when the AI we've made indispensable is suddenly unavailable.
The Risk Manager's
Playbook
This paper is not investment advice, and it's not predicting OpenAI's failure. It is arguing that the failure scenario is underweighted in current risk frameworks — that the combination of unprecedented financial losses, complex governance, deep operational dependencies, and a political environment hungry for AI accountability creates a tail risk worth explicitly modeling. Here's how to start.
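One concrete first step: convert the counterparty table above into a machine-readable exposure register that can be sorted, filtered, and stress-tested. A minimal sketch, seeded with a few rows from the table; the numeric severity weights are hypothetical placeholders, not part of this paper's analysis.

```python
# Minimal vendor-exposure register, seeded from the counterparty table above.
# The numeric severity weights are hypothetical placeholders for illustration.

SEVERITY_WEIGHT = {"Critical": 3, "High": 2, "Medium": 1}

register = [
    ("Microsoft",            "Critical"),
    ("SoftBank Vision Fund", "Critical"),
    ("AI-Native Startups",   "High"),
    ("Healthcare Systems",   "High"),
    ("Financial Services",   "High"),
]

def ranked(entries):
    """Counterparties ordered by severity weight, highest first (stable sort)."""
    return sorted(entries, key=lambda e: SEVERITY_WEIGHT[e[1]], reverse=True)

for name, severity in ranked(register):
    print(f"{severity:8s} {name}")
```

The register format matters more than the toy weights: once dependencies live in data rather than in a slide, they can be joined against contract terms, revenue at risk, and substitution timelines.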
Model, Stress-Test, Visualize
The Capital Structure
Nobody Drew
OpenAI doesn't have a conventional debt stack. No public bonds, no syndicated credit facility, no publicly rated paper. This is the fact the market tends to lead with when dismissing debt-related contagion risk — and it's technically accurate and substantially misleading. The true obligation load is best understood not as a capital structure in the traditional sense, but as a series of instruments that are economically equivalent to debt, structured to appear as something else.
Assembled in one place — which no single disclosure does — the picture looks considerably different from the equity-round narrative.
The Default Chain:
Who Falls First
Systemic risk analysis requires mapping not just who is exposed, but in what order exposure becomes loss — and which losses trigger subsequent losses. The OpenAI contagion chain has multiple transmission pathways, some financial, some operational, some reputational. They do not all fire simultaneously, but they do not operate in isolation either.
The operational contagion is more immediate than the financial contagion — and harder to hedge. Financial exposure can be partially offset with puts, shorts, or diversified positioning. Operational dependency is binary: either the API works or it doesn't.
Consider the healthcare pathway. Multiple health systems have deployed GPT-4-based clinical decision support tools in production environments. Regulatory approval processes for AI medical tools are lengthy — switching to an alternative model isn't a weekend project. A sudden API outage creates a forced fallback to prior manual processes, with staff who may have meaningfully reduced their familiarity with those processes. The litigation exposure alone would be substantial.
The financial services pathway is more nuanced. The largest financial institutions are legally required to conduct vendor due diligence and maintain business continuity plans — which means their OpenAI dependencies are, in theory, better documented and more substitutable. In practice, the pace of deployment has outrun the compliance frameworks. Many production AI workflows were stood up faster than the accompanying vendor risk assessments.
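A binary operational dependency does not have to remain unhedged. The software equivalent of a business-continuity plan is a failover wrapper: try the primary provider, fall back to a pre-qualified secondary on any failure. A minimal sketch; the provider functions below are stand-ins, not real client libraries.

```python
# Sketch of a provider-failover wrapper. `primary` and `fallback` stand in
# for real model-API clients; any exception triggers the fallback path.

def with_fallback(primary, fallback):
    """Return a callable that tries `primary`, then `fallback` on any failure."""
    def call(prompt):
        try:
            return primary(prompt)
        except Exception:
            return fallback(prompt)
    return call

# Stand-in providers for demonstration.
def flaky_provider(prompt):
    raise ConnectionError("primary API unavailable")

def backup_provider(prompt):
    return f"[backup] {prompt}"

triage = with_fallback(flaky_provider, backup_provider)
print(triage("summarize intake note"))  # prints "[backup] summarize intake note"
```

The hard part is not the wrapper; it is keeping the secondary provider genuinely pre-qualified — prompts ported, outputs validated, compliance sign-off current — so the fallback path works on the day it is needed.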
THE MICROSOFT KNOT Microsoft's entanglement with OpenAI is so deep that it creates an unusual dynamic in the failure scenario: Microsoft is simultaneously the most exposed counterparty and the most plausible rescuer. With $13B+ invested, a 20% revenue share through 2032, and Azure's AI revenue growth at 175% year-over-year, Microsoft has every incentive to prevent an uncontrolled failure.
THE SOFTBANK VARIABLE SoftBank's $30B commitment in the Feb 28 2026 round is the more volatile element. Vision Fund I and II have demonstrated that SoftBank is capable of writing large checks and absorbing large losses. The milestone-contingent structure of the Amazon $35B tranche means a prolonged governance crisis or enterprise adoption shortfall could simultaneously trigger the failure and eliminate a significant backstop. Andy Jassy described OpenAI as "one of the very big winners long term" on CNBC — but the Trainium chip deal that anchors the Amazon relationship is a compute infrastructure bet that takes years to play out.
THE $1.4T SHADOW OpenAI committed to $1.4T in datacenter infrastructure spending over eight years. If OpenAI fails, those commitments collapse. The counterparties — hyperscalers, chip manufacturers, construction companies, real estate developers — face project cancellations at a scale that would ripple through capital markets with a force well beyond the AI sector.
Picking Up
the Pieces
"The horror! The horror!"
— Kurtz, Heart of Darkness, Joseph Conrad, 1899 (adapted in Apocalypse Now, 1979)
The aftermath of an OpenAI failure would not be a clean transition to a world of competing alternatives. It would be a prolonged, messy, legally complex scramble — part bankruptcy proceedings, part emergency remediation, part market restructuring. There are several distinct dynamics worth modeling.
The model weights question is the most consequential technical issue in any failure scenario. GPT-4 and its successors represent billions of dollars of training compute. Who owns them in a failure is not obvious. OpenAI's nonprofit-to-for-profit conversion changed the ownership structure; Microsoft has rights but not full control; investors have claims; and regulators may have views about whether these assets should be treated as public goods.
In a Chapter 11-style proceeding, model weights become a contested asset in a bankruptcy estate. The proceedings could take years. In the interim, API access disappears. The operational damage is done long before the legal questions are resolved.
Market concentration would paradoxically increase in the immediate aftermath. The two obvious beneficiaries — Google and Anthropic — would absorb displaced demand rapidly. But this creates its own systemic risk: a market that was arguably too dependent on one provider becomes dependent on two. The open-source alternatives (Meta's Llama, Mistral) would benefit from accelerated adoption but lack the enterprise support infrastructure for rapid large-scale deployment.
The prediction markets angle is instructive here. A functioning derivatives market in AI operational risk would price the probability of disruption continuously — giving risk managers a forward signal before the event, not a post-mortem after. The signals would have been there. The question is whether anyone was reading them.
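The arithmetic such a market would automate is simple. Ignoring fees, interest, and risk premia, the price of a binary contract paying $1 on disruption approximates the market-implied probability, and expected loss is that probability times exposure. A sketch with all inputs hypothetical:

```python
# Expected-loss arithmetic from a hypothetical binary prediction-market price.
# A YES contract paying $1 on disruption, priced at $0.07, implies roughly a
# 7% probability (ignoring fees, interest, and risk premia).

def expected_loss(contract_price, exposure):
    """Implied probability times exposure at risk."""
    return contract_price * exposure

implied_p = 0.07             # hypothetical market price of the YES contract
exposure_usd = 25_000_000    # hypothetical operational exposure for one firm
print(expected_loss(implied_p, exposure_usd))   # ≈ $1.75M expected loss
```

The value of a live market is not the formula — it is that `implied_p` updates continuously, giving risk managers a forward signal rather than a post-mortem.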
What If the
Skeptics Are Wrong?
Fair is fair. The bear case — that OpenAI bleeds out on SaaS economics while Google's structural advantages compound in the background — is coherent, well-supported, and argued by people who are not idiots. But it rests on one assumption worth pulling apart: that OpenAI's actual business is the one it has today. There is mounting evidence the business it's building is something rather different — and if that business works, the current burn rate looks less like structural failure and more like a very expensive land grab.
The bear case is about the business OpenAI has today. The counter-case is about the business OpenAI is building — and whether its current burn rate is rational investment in that second business, not structural failure in the first.
| Scenario | 2030 Revenue | Viability |
|---|---|---|
| Data moat fully materializes | $400B+ | Profitable by 2028 |
| Partial moat (health + browser) | $220B | Near break-even |
| SaaS only (OpenAI projection) | $145B | Marginal ($30B net) |
| SaaS + margin inversion | $80B effective | Funding crisis by 2027 |
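Weighting the table's scenarios by subjective probabilities collapses them into a single planning number. A minimal sketch: the revenues come from the table above, while the probabilities are hypothetical placeholders (the paper itself stresses that its probability estimates are qualitative).

```python
# Probability-weighted 2030 revenue across the table's four scenarios.
# Revenues are from the table; the probabilities are hypothetical placeholders.

scenarios = [        # (name, 2030 revenue $B, hypothetical probability)
    ("Data moat fully materializes",    400, 0.10),
    ("Partial moat (health + browser)", 220, 0.30),
    ("SaaS only (OpenAI projection)",   145, 0.40),
    ("SaaS + margin inversion",          80, 0.20),
]

# Sanity check: the probabilities must sum to one.
assert abs(sum(p for _, _, p in scenarios) - 1.0) < 1e-9

expected_revenue = sum(rev * p for _, rev, p in scenarios)
print(expected_revenue)   # ≈ $180B under these placeholder weights
```

A single expected value hides the dispersion that matters here, so the same structure should also be used to report the tail scenario on its own — the $80B-effective row is where the systemic risk lives.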
The counter-thesis doesn't require believing OpenAI will build AGI. It requires believing that the combination of health data, browsing intent, conversational history, ambient sensing, and eventually neural signal — assembled into a unified personal profile at scale — creates an advertising and data licensing business that makes Google's current position look like a warm-up act.
Sam Altman is not hiding this ambition. He stated explicitly that college students use ChatGPT as an operating system. His investment in Merge Labs — a brain-computer interface startup working toward non-invasive ultrasound neural monitoring — is not a research curiosity. It is a stake in the input layer of the next computing paradigm, several years before that layer becomes commercially viable.
The honest risk manager's position is this: the failure thesis and the counter-thesis are not mutually exclusive on the same timeline. The Feb 28 2026 raise dramatically extends the near-term runway — but it does not eliminate the structural question. OpenAI can face a genuine funding crisis in 2029 or 2030 while also being correct about its long-run data moat thesis. The Amazon Trainium dependency that funded the raise creates a new concentration risk that replaces the Nvidia/Azure one. The gap between the current burn structure and the eventual data-moat monetization is where the systemic risk lives — in the operational disruption that occurs during that interval. A strategically correct company, adequately capitalized today, can still cause a very messy event if the timeline slips.
The River Has
an End
The financial crisis of 2008 wasn't unforeseeable — it was unforeseen, which is different. The instruments that eventually failed were sitting on balance sheets, the counterparty exposures were documentable, and the conditions for collapse were present years before the collapse itself. The gap was not analytical capacity; it was the unwillingness to take seriously a scenario that would have been inconvenient to price.
The $150B raise is the largest private funding round in history. It is also, in the context of what OpenAI must accomplish, a number with an end. The risk this paper describes is not imminent. It is structural — built into the four scenarios, the cost gap, the deepening enterprise dependency, and the governance fragility that no amount of capital resolves. Seven more years on the river means seven more years of going deeper. The question risk managers should be asking is not whether OpenAI survives 2026. It is what their exposure looks like at the end of a runway that is long, but finite.
This paper is an invitation to do the work before urgency makes it academic: map the dependency, model the exposure, build the contingency. The boat is moving. The shore is already gone. That is not a reason for alarm. It is a reason for a plan.
ABOUT THIS ANALYSIS This paper draws on publicly available financial disclosures, industry research from Menlo Ventures, Sacra, Deutsche Bank, and Fortune, as well as OpenAI's own enterprise reporting. Published March 2026. Scenarios modeled February 28 2026, the day OpenAI announced its $110B funding round (SoftBank $30B, Amazon $50B, Nvidia $30B), $730B pre-money valuation, and $150B total cash on hand ($110B raise + ~$40B prior). Probability estimates for failure scenarios represent qualitative risk assessments, not actuarial calculations. All financial figures are as reported or estimated as of February 2026.
TS IMAGINE TS Imagine provides institutional trading and risk management infrastructure through TradeSmart and RiskSmart platforms, serving banks, trading houses, and financial institutions managing complex derivative, commodity, and emerging market exposures. Our research series examines structural risks at the intersection of technology and financial markets.
DISCLAIMER This document is for informational purposes only and does not constitute investment advice or a recommendation to buy, sell, or hold any security. The views expressed are those of TS Imagine Research and are subject to change without notice.