Vertical AI Agents — Investment Thesis

Horizontal LLM platforms commoditize fast. The durable value sits in vertical agents that own a workflow end-to-end inside a single domain — legal review, claims processing, financial-statement audit, clinical documentation. This thesis lays out the structural reasons, the supporting evidence, the leading indicators to watch, and the disqualifying conditions that would invalidate it.

Thesis at a glance

Bull Case

Domain-specific agents win on data depth, workflow integration, and liability ownership. Each vertical can support 1–3 category leaders with $500M+ ARR within 5 years.

Bear Case

Frontier model gains compress the gap. A horizontal model with strong tool use plus a thin vertical wrapper captures most of the value. Vertical agents become features, not companies.

Core claims

Risks

What would invalidate the thesis

Quantitative backdrop

Dataset: vertical-ai-funding
schema:
  vertical: string
  funded_companies: number
  total_funding_usd_m: number
  median_arr_growth_yoy: number
rows:
  - [legal, 14, 1280, 3.4]
  - [healthcare, 22, 2650, 2.9]
  - [financial-audit, 9, 540, 4.1]
  - [insurance, 11, 720, 3.6]
  - [construction, 6, 290, 2.7]
[Figure: Median ARR YoY growth by vertical (2026), bar chart, 5 points, from the dataset above]
[Figure: Total funding raised by vertical ($M), line chart, 5 points, from the dataset above]

Watchlist (positions, not recommendations)

| Vertical | Public proxy | Private leader | Note |
| --- | --- | --- | --- |
| Legal | RELX, Thomson Reuters | Harvey, EvenUp | Watch incumbents' agent rollouts |
| Clinical docs | | Abridge, Ambience | Acceptance rate is the leading metric |
| Financial audit | Intuit, S&P | Numeric, Trullion | Audit-trail UX is the moat |
| Insurance claims | Verisk | Sixfold, EvolutionIQ | Loss-ratio impact is the proof point |

Deltas since last update

Quarterly review task

Every quarter (Q1: Mar 31, Q2: Jun 30, Q3: Sep 30, Q4: Dec 31), walk this document and:

  1. For each claim, check whether new public evidence supports or contradicts it. If material, add a fresh evidence or counterevidence block and adjust the confidence attribute.
  2. For each risk, check whether the leading indicators have moved. Adjust severity if warranted.
  3. For each open_question invalidator, check whether the trigger condition has been met. If yes, escalate to a decision block recommending exit or rebalance.
  4. Do not delete prior evidence. Append, don't overwrite: the audit trail is the value.
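The append-only review step above can be sketched as a small data structure. The block and attribute names (evidence, confidence) follow this document; the concrete fields and the fixed confidence step are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Claim:
    text: str
    confidence: float                              # 0.0 to 1.0
    evidence: list = field(default_factory=list)   # append-only audit trail

def add_evidence(claim, summary, supports, source_date, delta=0.1):
    """Append a new evidence block and nudge confidence. Prior evidence
    is never overwritten or deleted, per review item 4."""
    claim.evidence.append(
        {"summary": summary, "supports": supports, "date": source_date}
    )
    step = delta if supports else -delta
    claim.confidence = min(1.0, max(0.0, claim.confidence + step))

c = Claim("Vertical agents win on workflow ownership", confidence=0.6)
add_evidence(c, "Incumbent launched a competing agent", supports=False,
             source_date=date(2026, 3, 31))
```

The clamping keeps confidence in [0, 1] no matter how many quarters of one-sided evidence accumulate; a real implementation would likely weight evidence by recency rather than use a fixed step.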

Stale-evidence guard

Every two weeks, scan all evidence blocks for source attributes older than 90 days. Propose (do not apply) a replace_block patch for each, citing the latest available source.
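The 90-day scan can be sketched as a pure function that proposes patches without applying them. Evidence blocks are modeled here as dicts with an id and a source date; the replace_block patch shape is an assumption based on the description above.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)

def propose_patches(evidence_blocks, today):
    """Return a replace_block proposal for every evidence block whose
    source date is older than 90 days. Nothing is applied here."""
    return [
        {"op": "replace_block", "target": block["id"],
         "reason": f"source dated {block['source']} exceeds 90 days"}
        for block in evidence_blocks
        if today - block["source"] > STALE_AFTER
    ]

blocks = [
    {"id": "ev-legal-01", "source": date(2026, 1, 5)},   # 147 days old
    {"id": "ev-ins-02", "source": date(2026, 5, 20)},    # 12 days old
]
patches = propose_patches(blocks, today=date(2026, 6, 1))
```

Keeping the function side-effect-free matches the "propose, do not apply" rule: a human (or a reviewing agent) decides which patches land.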

Export

A thesis is only useful if you can revisit it. The blocks above are structured so a future-you (or a future agent acting on your behalf) can update only what changed, leave the rest alone, and produce a clean Git diff that shows exactly which beliefs moved.