
August 28, 2025
Artificial intelligence is no longer a sidecar to the digital economy; it is becoming the engine. What began as narrow tools for prediction and ranking now orchestrates workflows, writes and debugs code, reasons over documents, plans experiments, and increasingly manipulates the physical world through robotics. The question for leaders is no longer whether AI will matter, but how it will translate into measurable national and firm-level output—and at what speed.
Yet credible growth is never magic. It arrives through concrete channels that economists can name and measure: automation of certain tasks, augmentation of the rest, the creation of entirely new categories of goods and services, faster discovery, investment surges, cheaper unit costs, wider markets, and fewer frictions. This article unpacks twelve such channels and shows how they add up—carefully, without double counting—to move real GDP.
Our organizing lens is simple and powerful: output rises when productivity (TFP) improves and when the effective capital stock per worker deepens. Hulten’s aggregation intuition ties micro task savings to macro productivity; Solow’s decomposition reminds us that capex waves contribute directly to growth. If you want a bigger economy, you either do more with what you have or equip people with more and better tools—the AI era demands both.
But AI is not one lever; it is a stack. Services see cognitive task automation and copilot-driven augmentation; factories and logistics add robotics; organizations redesign to compress cycle times; deflators fall and real quantities rise; exportable platforms sell capability and compliance abroad. The most aggressive paths layer an acceleration of idea production itself, where AI becomes a method of invention for software, materials, bio, and energy.
Realizing these gains depends on complements. Compute, energy, data, and integration talent must scale together; procurement and measurement must reward throughput and quality, not just headcount; labor markets must move people quickly into AI-complementary roles; and capital formation must be easy to finance and fast to build. Without these complements, AI’s technical promise stalls in pilots and slide decks.
Governance is growth’s hidden input. Standardized evaluations, secure MLOps, liability clarity, and incident response reduce tail risks and lower risk premia, unlocking adoption and capex that would otherwise hesitate. Good rules make the best ideas deployable; bad or missing rules invite shocks that can erase years of progress. In the frontier race, assurance is an economic policy.
What follows is a practical map: twelve channels, each with a plain-English name, a compact equation tying assumptions to growth, the preconditions that must be true for impact to materialize, and the amplifiers that push the numbers higher. Read it as a design space, not a prophecy. If you align the complements and manage the risks, AI does not merely cut costs—it changes the production function of your economy.
AI fully takes over well-specified slices of white-collar work—drafting, extracting, classifying, reconciling, first-pass coding/tests, triage. That frees human time, cuts rework, compresses cycle times, and lowers delivered cost per unit of output. The macro lever is simple: the share of tasks you can actually replace at production standards multiplied by adoption and net savings. It moves quickly in admin-heavy sectors and wherever processes are already digitized. The ceiling rises as reliability, tool-use, and structured outputs improve.
Most work isn’t replaced outright; it’s amplified. Copilots help professionals reason, search, draft, code, analyze, and review with fewer errors. The gain shows up as higher throughput and quality on the tasks people still perform. It’s especially powerful when copilots are embedded directly in systems of record (IDEs, EMRs, CRMs), retrieval is solid, and the human-in-the-loop pattern is well designed. Augmentation compounds week by week as teams learn better prompts, playbooks, and agent workflows.
Every big tech wave creates brand-new jobs and markets. AI’s “greenfield” frontier includes always-on personal tutors and clinicians, synthetic biomanufacturing services, design-to-factory software agents, and autonomous research tools. This is new value, not just cost cutting; it stabilizes labor demand and turns AI into a growth engine rather than a pure substitution shock. The speed depends on monetization, distribution, regulatory pathways, and compute/data access for new entrants.
Beyond production, AI speeds discovery itself. Agents search literature, generate code, design experiments, simulate, and close loops in software, materials, bio, and energy. When a larger fraction of R&D becomes tool-assisted, the TFP trend (the economy’s underlying productivity growth) steepens. The key is translation: the more that research outputs are reproducible, evaluable, and quickly embodied in software, equipment, and processes, the more the “idea shock” shows up in GDP now, not just later.
A wave of capex—datacenters, accelerators, storage, networks, robots, software—raises the capital stock per worker. In standard growth accounting, that alone adds percentage points to output growth, on top of productivity gains. It’s durable when returns stay above the cost of capital, permitting and interconnects are predictable, and the integration talent (MLOps, SRE, robotics integrators) is available to convert spend into usable capacity fast.
AI leaves the screen and enters the warehouse, factory, hospital, and field. With perception, planning, and manipulation improving, robots/cobots take over or assist material handling, assembly, inspection, cleaning, some construction tasks, and pieces of care work. Unit economics drive it: when reliability plus safety plus maintenance beat fully loaded human cost, adoption scales. Gains are biggest when sites redesign flow (digital twins, MES/WMS integration) so robots lift throughput and quality, not just swap heads for arms.
The famous IT “J-curve” flips once firms rebuild around AI: fewer handoffs, tighter feedback loops, automated QA gates, event-driven workflows, agent-executed steps with audit trails. Then the system-level wins arrive—faster releases, fewer rollbacks, lower coordination cost. This is different from “task savings”: it’s the payoff from changing roles, decision rights, and process topology so AI becomes the backbone, not a bolt-on. Measured productivity jumps when learning and redesign are complete.
As models, logistics, scheduling, forecasting, and maintenance improve, unit costs fall in many digital and some physical services. In elastic markets, lower prices mean higher real quantities—more tutoring, analysis, creative output, testing, fulfillment—so real GDP rises even if nominal margins compress. The effect hinges on pass-through (competitive intensity, procurement rules) and on supply being ready to meet the extra demand (energy, compute, labor complements).
Models, agents, assurance stacks, and managed services scale across borders. If your firms host, orchestrate, and govern AI well—with data-residency and reliability—foreign demand adds to GDP. Standards leadership matters: when your evaluation and compliance frameworks become the default, you sell not only services but trust, and imports of competing platforms rise more slowly. This is how domestic capability becomes a tradable advantage.
Cheaper $/token and cheaper $/kWh change the ROI line. As training/inference and electricity costs drop, more use-cases become profitable, so adoption expands even if models don’t get smarter. The lever is the adoption elasticity to cost: steeper cost declines and better pass-through mean more organizations flip from pilot to production. Abundant energy near datacenters, hardware–model co-design, and clever scheduling (shift loads to cheap hours) all amplify the effect.
Growth stalls if people can’t move into the new high-productivity roles. The fix is frictionless reallocation: modular micro-credentials, recognition of prior learning, fast placement markets, wage insurance, and on-the-job copilots so workers ramp quickly. That reduces unemployment duration, lifts effective labor input, and spreads AI gains beyond superstar firms. The faster the job-to-job switch and the better the match quality, the more of AI’s technical potential turns into actual output.
Scaling without safety backfires. Standardized evaluations, third-party audits, incident reporting, provenance, secure MLOps, and clear liability lower tail risk (bio, cyber, misinformation, systemic outages) and reduce risk premia. That unlocks capex and adoption that would otherwise sit on the sidelines, and it prevents shocks that could erase years of gains. Exportable compliance frameworks double as a trade asset, easing entry into regulated foreign markets.
Effect #1: Automation (replacement of well-specified cognitive sub-tasks with machine execution)
Formula:
g1=s⋅a⋅c⋅φ
Parameters (what they mean):
s — Impacted task share of GDP: fraction of total value-added performed by tasks that are actually automatable with current/near-term AI (e.g., drafting, extraction, classification, first-pass analysis, routine coding sub-tasks) within the time window you’re measuring.
a — Realized adoption this year: fraction of those impacted tasks that are, in practice, executed by AI (automation rate in production—not pilots), after accounting for guardrails, QA, and change management.
c — Average cost saving per automated task (or equivalent output gain at constant cost): includes labor time saved, fewer rework loops, lower error remediation.
φ — Pass-through to measured output: converts internal cost savings into measured real GDP. It captures price/quantity effects, margin behaviors, and that some savings are reinvested in capacity rather than immediately showing up in volumes.
High scenario values:
s=0.40, a=0.70, c=0.35, φ=0.95
Step-by-step:
g1=0.40×0.70×0.35×0.95=0.0931 ⇒ 9.31 pp of real GDP per year
(pp = percentage points; this is before any overlap haircut with other effects.)
Task mapping is granular and conservative. “Impacted” means tasks that are cheap and safe to automate at production standards, not merely “technically demo-able.”
Savings are net of QA and oversight. If humans spend time supervising automated work, that remaining human time is not counted under c.
Pass-through < 1 acknowledges partial absorption of savings in margins, prices, or investment rather than immediate output volume.
No double counting with augmentation (Effect #2): the automated slices here are treated as replaced, not assisted.
Model capability & reliability on target task clusters (low hallucination under constraints, robust tool-use, strong retrieval, deterministic output formats).
Productionized pipelines (MLOps, evals, guardrails, red-team, audit logs) so pilots convert into scaled automation.
Clean, labeled data interfaces (APIs, schemas, ontologies) so AI can “see” the work exactly where it happens.
Legal/compliance clarity (who’s liable, what documentation is required, what’s acceptable automation in regulated domains).
Economics that clear (inference cost per task well below human marginal cost; steady latency/SLA).
Org process redesign so remaining humans are reallocated to higher-value work (otherwise cost savings become idle time rather than extra output).
Tool-use & function calling: reliably invoking databases, search, CRMs, ERPs; more tasks become automatable → s↑.
Domain-specialized finetunes / adapters: raises accuracy → a↑ at given compliance thresholds.
Inference cost curve falls: models get cheaper/faster → more slices clear ROI → a↑, c↑
Better prompting & structured output contracts (JSON schemas, Pydantic validators): improves pass-through → φ↑
Government/enterprise procurement at scale: concentrated demand pulls ecosystems forward → a↑.
Because g1 = s⋅a⋅c⋅φ, each variable enters multiplicatively:
At the high point, ∂g1/∂a≈scφ =0.40×0.35×0.95≈0.133
→ A +0.10 increase in adoption adds +1.33 pp.
∂g1/∂c≈saφ≈0.266 → A +0.10 increase in savings adds +2.66 pp.
The biggest upside often comes from unlocking harder tasks (raising s) and driving realized savings (raising c).
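As a minimal sketch (Python; the function name and structure are mine, values from the high scenario above), the channel and its adoption sensitivity compute as:

def g1_automation(s, a, c, phi):
    # impacted task share × realized adoption × net saving per task × pass-through
    return s * a * c * phi

print(round(g1_automation(s=0.40, a=0.70, c=0.35, phi=0.95), 4))  # 0.0931 -> 9.31 pp
# adoption lever: +0.10 to a adds roughly s*c*phi*0.10
print(round(0.40 * 0.35 * 0.95 * 0.10, 4))                        # 0.0133 -> +1.33 pp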
Effect #2: Augmentation (complementary “copilot” effect that boosts throughput/quality on tasks humans retain)
Formula:
g2 = saug⋅a⋅q⋅β
Parameters:
saug — Share of GDP in tasks that remain human-held but are amenable to AI assistance (reasoning, review, judgment).
a — Share of those tasks actually performed with AI assistance (copilots used regularly, not sporadically).
q — Average productivity uplift per augmented task (time saved and/or quality-adjusted output increase).
β — Conversion factor from quality/latency improvements to value-added (0–1). Some quality gains aren’t fully priced into GDP immediately.
High scenario values:
saug=0.45, a=0.65, q=0.25, β=0.80
Step-by-step:
g2=0.45×0.65×0.25×0.80=0.0585 ⇒ 5.85 pp of real GDP per year
Augmentation ≠ automation. We count only the uplift on work still done by people (e.g., analysts, lawyers, PMs, clinicians), not replaced by machines (that’s #1).
Quality is monetized imperfectly. Faster cycle time and better accuracy increase throughput and reduce scrap; not all of it shows up in measured GDP right away, hence β<1
No double counting with new tasks (#3): if augmentation spawns entirely new offerings, those revenues are accounted for there.
High-frequency, in-workflow copilots (inside IDEs, office suites, CRMs, EMRs) to keep assist usage a high.
Reliable retrieval + tool-use for context (documents, tickets, logs, knowledge graphs).
Human-in-the-loop patterns (checklists, sign-off thresholds, uncertainty displays) that raise output while holding risk constant.
Training & change management to shift habits: people actually use the copilots and trust them appropriately.
Measurement stack (task timers, error tracking, rework audits) to capture true q and refine prompts/workflows.
Larger, more capable models with long context: better reasoning and less lost context → q↑.
Agentic orchestration (multi-step tool sequences with self-checks): deeper assistance → q↑, a↑.
Role redesign (split work into “AI-strong” and “human-strong” sub-tasks): maximizes complementarity → saug↑, q↑.
Structured output and evals: higher acceptance by compliance/risk teams → a↑, β↑.
Tacit knowledge capture (playbooks, reusable prompts, prompt libraries): compounding gains → q↑
∂g2/∂q=saugaβ≈0.45×0.65×0.80≈0.234
→ Each +0.05 uplift in q adds +1.17 pp.
∂g2/∂a=saug⋅q⋅β≈0.09
→ A +0.10 increase in adoption adds +0.90 pp.
The big lever is q: design better workflows and tools to convert model capability into measured output.
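The same arithmetic for augmentation, as a hedged sketch (names are illustrative; values from the high scenario):

def g2_augmentation(s_aug, a, q, beta):
    # assisted-task share × copilot usage × uplift per task × conversion to value-added
    return s_aug * a * q * beta

print(round(g2_augmentation(0.45, 0.65, 0.25, 0.80), 4))  # 0.0585 -> 5.85 pp
# the q lever: +0.05 uplift adds s_aug*a*beta*0.05
print(round(0.45 * 0.65 * 0.80 * 0.05, 4))                # 0.0117 -> +1.17 pp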
Effect #3: New tasks and products (AI enables entirely new goods/services or revenue lines that didn’t previously exist)
Formula:
g3 = mnew⋅gnew⋅a
Parameters:
mnew — New-market share created this year: the value-added share of the economy composed of brand-new AI-native offerings (e.g., always-on personal tutors/clinicians, autonomous experiment design, synthetic bio manufacturing services, agentic developer platforms).
gnew — Internal growth rate of that nascent sector over the year (think hypergrowth typical of platform on-ramps).
a — Realization factor (distribution, regulatory approvals, willingness to pay, and supply-side scale). It scales theoretical market size to what actually ships and is billable within the year.
High scenario values:
mnew=0.04, gnew=0.70, a=0.70
Step-by-step:
g3=0.04×0.70×0.70=0.0196 ⇒ 1.96 pp of real GDP per year
These are new categories, not cheaper versions of old ones (to avoid double counting with #1/#2).
Adoption bottlenecks are real. Even with powerful products, it takes time to acquire customers, obtain approvals (e.g., medical, education), stand up infrastructure, and staff post-sales—hence the explicit a.
Measured GDP recognizes value-add. Some AI value is consumer surplus (e.g., free tutoring) and won’t show fully in GDP. The estimate assumes a paid market forms for a meaningful fraction.
Clear monetization & pricing (per seat, per agent, per successful action) and low friction billing.
Distribution channels (marketplaces, app stores, B2B sellers) to scale quickly.
Regulatory pathways (e.g., digital health reimbursement codes, education accreditation) for safety-critical domains.
Compute & data access so startups can enter and scale (credits, public datasets, shared evals).
Go-to-market readiness: customer success, integration partners, and SLAs to serve enterprise buyers.
Public procurement & vouchers to catalyze first critical mass of demand (schools, clinics, agencies).
Interoperability standards (identity, data portability, event schemas) so new services plug into existing systems.
Talent liquidity (easier hiring/contracting of AI product engineers, safety evaluators, compliance leads).
Exportability by default (localization, compliance templates) turning domestic wins into net-export (NX) gains.
Financing mechanisms (sovereign/mission funds) for compute-heavy but high-spillover categories (AI for science, biofoundries, materials).
At the high point, ∂g3/∂mnew=gnewa=0.49
→ Each additional +1 pp of new-market share (i.e., Δmnew=0.01) adds +0.49 pp to GDP growth.
∂g3/∂a=mnewgnew=0.028
→ +0.10 better realization adds +0.28 pp.
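A short sketch of the new-markets term (function name is mine; inputs from the high scenario):

def g3_new_markets(m_new, g_new, a):
    # new-market value-added share × its internal growth × realization factor
    return m_new * g_new * a

print(round(g3_new_markets(0.04, 0.70, 0.70), 4))  # 0.0196 -> 1.96 pp
# each extra percentage point of new-market share adds g_new*a*0.01
print(round(0.70 * 0.70 * 0.01, 4))                # 0.0049 -> +0.49 pp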
#1 Automation delivers the largest single push in the short run when the impacted task share and realized savings are high.
#2 Augmentation compounds it by lifting the remaining human work (and is often faster to deploy because it avoids hard “replace vs. not” decisions).
#3 New tasks/products are the seed of durable growth: they convert AI from a cost-cutter into a market creator, capturing value that wouldn’t exist otherwise and stabilizing labor demand.
Numerically (high scenario, before overlap):
#1: 9.31 pp, #2: 5.85 pp, #3: 1.96 pp → 17.12 pp combined.
In a full economy-wide plan you’d apply an overlap haircut later (to avoid double counting where automation and augmentation touch the same workflows), but the breakdown above shows how each pillar works and what you must do to dial it up.
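To make the bookkeeping concrete, a sketch of the combination; the 15% haircut is purely a hypothetical placeholder for the overlap adjustment, not a figure from this article:

g1, g2, g3 = 0.0931, 0.0585, 0.0196     # high-scenario values from above
combined = g1 + g2 + g3                 # 0.1712 -> 17.12 pp before overlap
overlap_haircut = 0.15                  # hypothetical adjustment for shared workflows
print(round(combined, 4), round(combined * (1 - overlap_haircut), 4))  # 0.1712, ~0.1455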
Effect #4. AI-accelerated science & engineering: models + agents that search, reason, simulate, design, and iteratively run R&D loops (including code, experiments, and evaluations).
g4=(μ−1) g0 σ
Parameters
g0 — Baseline TFP trend from “normal” innovation (e.g., 1.5–2.0%/yr).
μ — Research-productivity multiplier (AI makes each researcher/unit of R&D spending μ× as productive at generating usable advances).
σ — In-year spillover/translation factor: the fraction of newly created “ideas” that diffuse into production this year (the rest arrives later).
High scenario values
g0=0.02 (2% baseline TFP)
μ=5.0 (5× research productivity)
σ=0.70 (70% of incremental ideas realized/embodied this year)
Step-by-step
g4=(5−1)⋅0.02⋅0.70=0.056 ⇒ 5.60 pp of real GDP per year
(before any economy-wide overlap haircut)
R&D → TFP mapping holds: A large share of AI-enabled discoveries (algorithms, materials, biotech, production methods, process designs) gets embodied in capital, software, and workflows quickly (σ).
Productivity multiplier reflects true frontier improvements (not just more papers): better search/synthesis, automated code/experiments, and higher-quality negative results that prune bad branches.
No double counting with capital deepening (Effect #5): this channel is the knowledge shock itself, not the subsequent capex that may embody it.
Tool-using, evaluation-rich agents: code + notebooks, lab instruments, CAD/CAE/CFD tools, EHR/LIMS connections, simulation frameworks, auto-eval pipelines.
Compute + data availability for scientific workloads (HPC, GPUs/AI accelerators, simulation clusters, rich domain datasets).
Reproducibility stack: experiment tracking, versioned datasets, result provenance, causal inference checks—so outputs are trustworthy.
IP/licensing clarity for AI-generated designs; freedom to operate.
High-bandwidth handoff from research to engineering (red-teamed designs, manufacturing-readiness levels, validation).
Closed-loop lab automation (robots + active learning) → raises μ and improves σ.
Domain-specialized models (bio, materials, energy systems, chip design) → larger μ at lower cost.
Open tooling & precompetitive consortia → faster diffusion (σ↑).
Compute/energy cost declines → more experiments per dollar (μ↑).
Outcome-linked R&D incentives (prizes, AMCs) → pull-through to deployment (σ↑).
∂g4/∂μ=g0σ=0.014.
A +0.10 bump to μ → +0.14 pp.
∂g4/∂g0=(μ−1)σ=2.8
A +0.10 pp to baseline TFP (0.001) → +0.28 pp.
∂g4/∂σ=(μ−1)g0=0.08
A +0.10 to σ → +0.8 pp.
Takeaway: Diffusion speed σ is a huge lever—pair lab breakthroughs with deployment muscle.
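A minimal sketch of the idea-production channel (helper name is mine; high-scenario inputs):

def g4_idea_production(mu, g0, sigma):
    # (research-productivity multiplier − 1) × baseline TFP trend × in-year diffusion
    return (mu - 1.0) * g0 * sigma

print(round(g4_idea_production(mu=5.0, g0=0.02, sigma=0.70), 4))  # 0.056 -> 5.60 pp
# diffusion lever: +0.10 to sigma adds (mu−1)*g0*0.10
print(round(4.0 * 0.02 * 0.10, 4))                                # 0.008 -> +0.80 pp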
Effect #5. The investment super-cycle: massive, sustained private (and some public) capex into AI-relevant capital that raises the capital stock per worker.
g5=sK⋅ΔK/K
Parameters
sK — Capital share in income (≈ 0.35–0.40 typical for many economies).
ΔK/K — Net growth of the relevant capital stock this year (datacenters, AI chips, software, robots, networking, storage), after depreciation.
High scenario values
sK=0.40
ΔK/K=0.15 (capital stock rising 15% this year)
Step-by-step
g5=0.40⋅0.15=0.06 ⇒ 6.00 pp of real GDP per year
Marginal product of capital remains high enough that firms rationally invest at these rates; complementary factors (skills, software, data) keep pace so capital is well-utilized.
This is the Solow “capital deepening” term—separate from TFP (ideas). We avoid double counting by treating idea shocks in #4 and deployment capex here.
Supply chain & permitting for data centers, substations, cooling, fiber, grid interconnects; predictable timelines.
Energy availability (firm, cheap, and clean enough) to power clusters and edge deployments.
Favorable financing conditions (ROIC > WACC), stable policy, accelerated depreciation/expensing for digital/robotic capital.
Integration talent (MLOps, SRE, robotics integrators, facilities engineers) so capital turns into productive capacity fast.
Risk & uptime assurances (SLAs, redundancy) to justify at-scale deployments.
Tax incentives & accelerated depreciation → raises ΔK/K
Public-private partnerships on grid, substations, and dark fiber → lower capex bottlenecks.
Standardized modular DC designs → faster time-to-commission (effectively increasing ΔK/K)
Export credit/green finance for energy + compute clusters → lower WACC.
Software leverage (platformization, multi-tenant orchestration) → raises the effective capital services per unit of K.
∂g5/∂(ΔK/K)=sK=0.40
A +0.01 to ΔK/K (i.e., +1 pp capital growth) → +0.40 pp.
∂g5/∂sK=ΔK/K=0.15
A +0.01 to sK → +0.15 pp.
Takeaway: The volume & pace of capex (ΔK/K) is the dominant lever; make building easier and cheaper.
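The Solow capital-deepening term in sketch form (illustrative names; high-scenario values):

def g5_capital_deepening(s_k, dk_over_k):
    # capital share in income × net growth of the AI-relevant capital stock
    return s_k * dk_over_k

print(round(g5_capital_deepening(0.40, 0.15), 4))  # 0.06 -> 6.00 pp
# each extra percentage point of capital growth is worth s_k
print(round(0.40 * 0.01, 4))                       # 0.004 -> +0.40 pp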
Effect #6. Moving automation from “on-screen” to the real world: perception, manipulation, mobility, and workflow integration that replace or augment physical tasks.
g6=sphys⋅a⋅c⋅φ
Parameters
sphys — GDP share in affected physical sectors (manufacturing, logistics, warehousing; portions of construction, agriculture, cleaning, and some care tasks).
a — Realized adoption this year (portion of target tasks actually executed by robots/cobots/autonomous systems).
c — Average cost saving per automated task (or quality-adjusted output gain).
φ — Pass-through to measured output (as with #1).
High scenario values
sphys=0.40, a=0.50, c=0.30, φ=0.90
Step-by-step
g6=0.40×0.50×0.30×0.90=0.054 ⇒ 5.40 pp of real GDP per year
Unit economics of robots beat fully-loaded labor costs in targeted task bundles (including maintenance, downtime, safety, insurance, facilities changes).
Process redesign captures quality/cycle-time benefits (less scrap, fewer injuries, higher uptime), not just headcount substitution.
No double counting with software automation (#1) for overlapping tasks; here we focus on physical task replacement/augmentation.
Reliable perception and manipulation (long-tail object handling, deformables, varied lighting, clutter).
Safety-certified systems (cobots, autonomous vehicles, forklifts), clear regulations for shared human-robot spaces.
Integration into MES/WMS/ERP so robots “see” jobs, priorities, and constraints; digital twins for layout/flow optimization.
Hardware supply & service networks (spares, field service, integrators).
Workforce transition (operator-to-technician reskilling, job redesign, labor relations) so realized savings become output.
Foundation models for robotics (few-shot generalization, visuomotor policies) → a↑, c↑.
Teleoperation + shared autonomy as a backstop for edge cases → effective uptime ↑, a↑
Simulation-to-real & synthetic data generation → faster deployment, lower per-site tuning costs.
Cheaper hardware (learning curves, commodity actuators/sensors) → more tasks clear ROI (a↑).
Clustered deployments (robot-ready industrial parks) with shared integrators/tooling → time-to-value falls, a↑.
∂g6/∂c=sphysaφ=0.18
A +0.10 to savings c → +1.8 pp.
∂g6/∂a=sphyscφ=0.108
A +0.10 to adoption a → +1.08 pp.
∂g6/∂sphys=acφ=0.135
A +0.05 to sector coverage sphys → +0.675 pp.
∂g6/∂φ=sphysac=0.06
A +0.05 to pass-through → +0.30 pp.
Takeaway: The savings per task c and adoption a are the strongest levers; push unit economics and integration.
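A sketch of the physical-automation channel and its two strongest levers (names are mine; high-scenario inputs):

def g6_physical_automation(s_phys, a, c, phi):
    # affected physical-sector share × robot adoption × saving per task × pass-through
    return s_phys * a * c * phi

print(round(g6_physical_automation(0.40, 0.50, 0.30, 0.90), 4))  # 0.054 -> 5.40 pp
print(round(0.40 * 0.50 * 0.90 * 0.10, 4))  # +0.10 to c -> +1.80 pp
print(round(0.40 * 0.30 * 0.90 * 0.10, 4))  # +0.10 to a -> +1.08 pp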
Effect #7: Organizational redesign (rebuilding firm architecture around AI to cut coordination, search, and cycle-time costs)
g7 = sorg⋅a⋅r
Parameters
sorg — Share of GDP produced in organizations whose output is materially constrained by coordination/search/handoffs (most services and complex manufacturing supply chains).
a — Share of those orgs that actually complete the post-J-curve redesign this year (teams, roles, decision rights, process maps).
r — Realized cycle-time reduction converting to value-added (not the “IT illusion” before the org catches up).
High scenario values
sorg=0.50, a=0.55, r=0.12
Step-by-step
g7 = 0.50×0.55×0.12 = 0.033 ⇒ 3.30 pp of real GDP per year
The IT “J-curve”: productivity looks flat or down while firms learn; once they redesign around AI (fewer layers, agentic workflows, automated QA, self-serve analytics, tighter customer feedback loops), the cycle-time cut r finally shows up in measured output.
This channel is distinct from task automation: it captures system-level gains—less rework, fewer meetings, faster releases, lower defect rates—that only appear after role/process redesign.
Operating model overhaul: value-stream mapping, removal of redundant handoffs, decision rights pushed to AI-augmented frontlines.
Agentic workflows embedded in systems of record (CRM/ERP/PLM/EMR) with deterministic handoffs and audit trails.
Governance that rewards throughput, not headcount or utilization; OKRs tied to cycle-time and customer-value metrics.
Managerial training for AI-era leadership (prompt/agent literacy, statistically literate decision-making).
Instrumentation: time-stamped worklogs, DORA-style metrics, defect/rollback tracking to verify r.
Event-driven architectures and shared data contracts (schemas, lineage) → fewer blockers → r↑.
Product platformization (internal APIs/SDKs, reusable agents) → orgs adopt faster → a↑
In-product telemetry & A/B infra → faster learn/iterate loops → r↑
Outcome-based vendor contracts (SLO-tied) → alignment → r↑, a↑.
∂g7/∂r=sorga=0.275
+0.05 to r → +1.38 pp.
∂g7/∂a=sorgr=0.06
+0.10 to a → +0.60 pp.
∂g7/∂sorg=ar=0.066
+0.05 to sorg → +0.33 pp.
Takeaway: The size of the realized cycle-time cut r is the kingmaker—design the org for AI, not just deploy tools.
Failure: tooling without authority redesign → “pilot purgatory,” r stays near zero.
KPIs: lead time for change, deployment frequency, change-failure rate, MTTR, % automated QA gates, % agent-executed handoffs, cycle-time distribution (p50/p90).
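In sketch form (illustrative function; high-scenario values), the redesign channel and the r lever:

def g7_org_redesign(s_org, a, r):
    # coordination-constrained GDP share × orgs completing redesign × realized cycle-time cut
    return s_org * a * r

print(round(g7_org_redesign(0.50, 0.55, 0.12), 4))  # 0.033 -> 3.30 pp
# the kingmaker: +0.05 to r adds s_org*a*0.05
print(round(0.50 * 0.55 * 0.05, 4))                 # ~0.0138 -> +1.38 pp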
Effect #8: Price declines and demand expansion (AI makes many goods/services cheaper; real output expands)
g8 ≈ w⋅(−ε⋅ΔP)
Parameters
w — Weighted share of the economy where AI drives meaningful price declines and demand responds (digital services, targeted manufacturing/services with AI forecasting/logistics).
ε — Demand elasticity (absolute value) for that composite basket.
ΔP — Average price change (negative for price cuts) after AI-driven efficiency.
High scenario values
w=0.45, ε=1.0, ΔP=−0.12
Step-by-step
g8 = 0.45×(1.0×0.12) = 0.054 ⇒ 5.40 pp of real GDP per year
With elastic demand, lower prices → higher real quantities. In national accounts, real output rises when deflators fall faster than nominal output, provided quantity expands.
This is not the same as “cost cutting” in #1/#6; it tracks the macro demand response to broad unit-cost declines (inference, logistics, inventory, predictive maintenance, scheduling).
Actual pass-through to prices (competitive markets, or policy that promotes it); otherwise gains remain as margins and show less in real output.
Capacity to meet higher demand (supply elastic enough; energy, compute, labor complements available).
Frictionless distribution (digital or near-digital; low marginal cost of serving added demand).
Measurement (statisticians able to attribute quality-adjusted price declines correctly; hedonic adjustments where needed).
Learning curves in compute/energy/logistics → steeper |ΔP|.
Better forecasts & routing (LLM+OR hybrids) → less waste → larger price declines.
Competition policy that reduces pass-through frictions → higher effective ε and realized −ΔP.
Interoperable payments & fulfillment → capacity scales with demand.
∂g8/∂ΔP=−wε=−0.45
Additional −0.05 price drop → +2.25 pp.
∂g8/∂ε=−wΔP=0.054
+0.5 to ε → +2.70 pp.
∂g8/∂w=−εΔP=0.12
+0.05 to w → +0.60 pp.
Takeaway: Bigger price drops and higher elasticities (i.e., competitive, scalable markets) are the strongest multipliers.
Failure: oligopoly keeps price cuts as excess margin → smaller real-output gains.
KPIs: sectoral deflators, pass-through rate (% of cost decline reflected in prices), order fill-rates, stockouts, backlog days, fulfillment time.
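A minimal sketch of the price/demand channel (names are mine; ΔP is negative for a price cut; high-scenario inputs):

def g8_price_declines(w, epsilon, delta_p):
    # exposed share × demand elasticity × price decline
    return w * (-epsilon * delta_p)

print(round(g8_price_declines(0.45, 1.0, -0.12), 4))  # 0.054 -> 5.40 pp
# an extra 5-point price drop adds w*epsilon*0.05
print(round(0.45 * 1.0 * 0.05, 4))                    # 0.0225 -> +2.25 pp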
Effect #9: AI exports (selling models/agents, compliance & assurance, and managed AI services cross-border)
g9 = sexp⋅gexp⋅a − simp⋅gimp
Parameters
sexp — Current exports share of GDP for AI-addressable services.
gexp — Growth rate of that export segment this year.
a — Realization factor (compliance, localization, distribution, contracts) turning pipeline into billables.
simp — Import share for the same category (substitution risk).
gimp — Growth rate of imports in that category (what foreigners sell to you).
High scenario values
sexp=0.07, gexp=0.35, a=0.80
simp=0.03, gimp=0.08
Step-by-step
g9 = (0.07⋅0.35⋅0.80) − (0.03⋅0.08) = 0.0196 − 0.0024 = 0.0172 ⇒ 1.72 pp of real GDP per year
What this means (logic)
Software/services scale globally. If your domestic firms host models, run agent platforms, or sell compliance/assurance stacks, you can earn non-rival rents from foreign customers.
The term subtracting imports recognizes that foreign platforms can displace local providers if you don’t build competitive offerings or standards.
World-class platforms (latency, uptime, assurance, model governance) with data residency options.
Interoperable compliance stack (evals, audits, documentation) exportable as a product—so others can adopt your standards.
Cross-border data/compute pathways (legal, privacy-preserving, efficient peering).
Trade agreements or adequacy findings for AI services, IP clarity for model weights/outputs.
Localization (language, domains, billing, support) and channel partners in target markets.
Sovereign-friendly offerings (sovereign controls, on-prem, air-gapped modes) → larger a abroad.
Reference deployments in government/regulated sectors → trust export → gexp↑, a↑
Standards leadership (you publish the eval/assurance canon) → path-dependence favors your platforms.
Export finance & guarantees for AI infrastructure deals abroad.
∂g9/∂a=sexpgexp=0.0245
+0.10 to a → +0.245 pp.
∂g9/∂gexp=sexpa=0.056
+0.10 to gexp → +0.56 pp.
∂g9/∂sexp=gexpa=0.28
+0.01 to sexp → +0.28 pp.
∂g9/∂gimp=−simp=−0.03
+0.10 to foreign import growth → −0.30 pp.
Takeaway: Grow export share and export growth rate, and keep import growth muted via competitiveness and standards.
Failure: export wins that don’t scale due to data residency/compliance blockers or lack of local presence.
KPIs: AI services export revenue, export pipeline conversion rate, foreign logo adds, % deals using your assurance standard, foreign DC/PoP coverage, cross-region latency SLOs.
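The net-export term in sketch form (illustrative names; high-scenario values):

def g9_net_exports(s_exp, g_exp, a, s_imp, g_imp):
    # export contribution (share × growth × realization) net of import substitution
    return s_exp * g_exp * a - s_imp * g_imp

print(round(g9_net_exports(0.07, 0.35, 0.80, 0.03, 0.08), 4))  # 0.0172 -> 1.72 pp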
Effect #10: Compute and energy cost declines (cheaper compute & electricity make more AI use-cases cross the ROI line)
g10 = sen⋅[η⋅(−ΔC/C)]⋅cˉ⋅φ
Parameters
sen: Share of the economy exposed to compute/energy-driven AI cost declines (the demand side ready to scale when costs fall).
η: Adoption elasticity w.r.t. unit cost (how strongly lower $/token or $/kWh raises AI adoption).
−ΔC/C: % cost decline (positive number; e.g., 35% cheaper means 0.35).
cˉ: Avg cost saving per newly-adopted use case (net of supervision/QA).
φ: Pass-through to measured output (share of savings that shows up as real GDP).
High values used
sen=0.70, η=0.90, −ΔC/C=0.35, cˉ=0.28, φ=0.95
Step-by-step
Δa=η(−ΔC/C)=0.90⋅0.35=0.315
g10=0.70⋅0.315⋅0.28⋅0.95≈0.0587 ⇒ 5.87 pp/yr
Cost declines are broad-based and persistent (architecture, hardware, compiler, datacenter efficiency, and cheaper electricity).
Newly viable use cases truly clear ROI at production standards (SLAs, latency, security).
cˉ reflects net savings including guardrails and integration costs.
Steep learning curves in training/inference hardware & software (dense/sparse, compilation, KV-caching, batching).
Energy abundance near datacenters (renewables+storage, firm baseload, efficient cooling) with grid interconnects/permits.
Elastic demand: plenty of backlogged use cases ready to switch on as price falls.
Procurement & billing that pass cheaper compute/energy through to customers (no margin traps).
Ops maturity (MLOps, FinOps) to exploit lower unit costs at scale.
Model/toolchain co-design (hardware-aware architectures, quantization) → bigger −ΔC/C
On-prem + sovereign options where egress costs fall → raises sen and η.
Time-of-use scheduling & load shifting to cheap hours → effective −ΔC/C rises.
Regulatory clarity on energy build-out → more capacity online sooner.
∂g10/∂(−ΔC/C)=senηcˉφ≈0.168
Extra −10 pp cost drop → +1.68 pp.
∂g10/∂η≈0.065
+0.10 elasticity → +0.65 pp.
∂g10/∂cˉ≈0.209
+0.05 savings → +1.05 pp.
∂g10/∂sen≈0.0838
+0.05 coverage → +0.42 pp.
KPIs: $/token & $/kWh trend, effective utilization/throughput per GPU, cost-to-serve per action, % workloads shifted to cheap windows, new use-cases lit per quarter.
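A sketch of the cost-decline channel, keeping the two-step structure (adoption response, then output); names are mine, values from the high scenario:

def g10_cheaper_compute(s_en, eta, cost_drop, c_bar, phi):
    delta_a = eta * cost_drop            # adoption response to a 35% unit-cost decline
    return s_en * delta_a * c_bar * phi  # exposed share × new adoption × net saving × pass-through

print(round(g10_cheaper_compute(0.70, 0.90, 0.35, 0.28, 0.95), 4))  # ~0.0587 -> 5.87 pp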
Effect #11: Labor reallocation (moving people into higher-productivity AI-complementary tasks quickly)
g11 = (u0−u1) + m⋅q
Parameters
u0−u1: Yearly unemployment reduction (pp), capturing aggregate re-employment into productive roles.
m: Share of workers retrained/redeployed into AI-complementary tasks this year.
q: Avg productivity uplift for those workers (hours × quality).
High values
u0=0.07, u1=0.05, m=0.12, q=0.08
Step-by-step
g11=(0.07−0.05)+0.12⋅0.08=0.02+0.0096=0.0296 ⇒ 2.96 pp/yr
Redeployment programs place people into real roles, not just classroom time.
q includes on-the-job augmentation benefits (copilots, tools) and better matching—not just narrow skills certificates.
No double counting with automation savings: this term captures human output uplift and re-employment.
Credential infrastructure (modular micro-credentials, RPL—recognition of prior learning, national skills graph).
Placement markets with high-velocity matching; employer consortia publish skills-based job standards.
Wage insurance & portable benefits to de-risk moves; relocation/childcare support where needed.
Training aligned to workflows (tool-stack literacy, domain data, safety/compliance), not generic courses.
Public procurement requiring vendors to hire certified redeployed workers.
Copilots for learning (adaptive tutors, code/data labs) → raises q.
Outcome-based training finance (ISA/AMCs with guardrails) → raises m.
Licensing reform (where safe) to open entry into high-demand roles.
Regional talent hubs co-located with AI-intensive employers.
∂g11/∂(u0−u1)=1
Each additional 1 pp unemployment drop → +1.0 pp.
∂g11/∂m=q=0.08
+10 pp to m → +0.80 pp.
∂g11/∂q=m=0.12
+5 pp to q → +0.60 pp.
KPIs: median transition time (<12 weeks), % workforce earning new micro-credentials, job-to-job switch rate, redeployed wage delta, employer fill-time for AI-complement roles.
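The reallocation channel is additive rather than multiplicative; a minimal sketch (names are mine; high-scenario values):

def g11_reallocation(u0, u1, m, q):
    # unemployment reduction plus productivity uplift of redeployed workers
    return (u0 - u1) + m * q

print(round(g11_reallocation(u0=0.07, u1=0.05, m=0.12, q=0.08), 4))  # 0.0296 -> 2.96 pp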
Effect #12: Governance and assurance (reduce tail risks, lower risk premia, unlock capex & adoption)
g12 = ρ⋅pshock⋅L + sK⋅ΔK/Kaddl
Parameters
ρ: Risk-reduction fraction (how much governance reduces the probability/impact of bad outcomes).
pshock: Baseline annual probability of a costly AI-related shock (bio/cyber/misinformation/regulatory halt).
L: Output loss if shock occurs (share of GDP).
sK: Capital share.
ΔK/Kaddl: Extra investment unlocked because lower risk premia / clearer liability / better insurance markets.
High values
ρ=0.50, pshock=0.10, L=0.05, sK=0.40, ΔK/Kaddl=0.015
Step-by-step
Avoided loss=0.5⋅0.10⋅0.05=0.0025 (= 0.25 pp)
Capex unlock=0.40⋅0.015=0.006 (= 0.60 pp)
g12=0.0025+0.006=0.0085 ⇒ 0.85 pp/yr
There is real tail risk that—if unmanaged—can erase multiple points of GDP; governance reduces its expected cost and financing frictions.
Insurance/assurance markets respond to standardized evals, audits, and liability clarity, lowering risk premia.
Assurance stack: standardized evals, third-party audits, incident reporting, transparency & provenance, secure MLOps.
Liability clarity (who’s on the hook for failures), safe-harbor for responsible disclosure.
Minimum-duty baselines (data protection, content authenticity, red-team requirements) and regulatory sandboxes.
Cyber/bio safety readiness (secure compute, biosafety gatekeeping, anomaly detection networks).
International mutual recognition of assurance standards (to help exports, too).
Mandatory evals for high-risk use → larger ρ, more predictable adoption.
Safe model & data cards embedded in procurement → reduces due-diligence friction.
Risk-pooling / insurance products tailored to AI incidents → raises ΔK/Kaddl
Cross-sector red-team guilds and bug bounty programs.
∂g12/∂ρ=pshockL=0.005
+0.10 to ρ → +0.05 pp (via avoided loss).
∂g12/∂ΔK/Kaddl=sK=0.40
+0.01 extra unlocked capex → +0.40 pp.
∂g12/∂pshock=ρL=0.025
(Not a lever to raise, but shows why high-risk environments benefit most from strong assurance.)
KPIs: incident rates & severity, insurance pricing spreads for AI deployments, time-to-approval in sandboxes, % models with eval/audit artifacts, capex-to-WACC spreads.
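Finally, a sketch of the governance channel’s two terms, avoided expected loss plus unlocked capex (names are mine; high-scenario values):

def g12_governance(rho, p_shock, loss, s_k, dk_addl):
    avoided_loss = rho * p_shock * loss   # expected tail loss avoided: 0.0025 -> 0.25 pp
    capex_unlock = s_k * dk_addl          # investment unlocked by lower risk premia: 0.006 -> 0.60 pp
    return avoided_loss + capex_unlock

print(round(g12_governance(0.50, 0.10, 0.05, 0.40, 0.015), 4))  # 0.0085 -> 0.85 pp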