Chapter 18 — Cases of AI in business: synthesis

This chapter closes Part III and the analytical track of the textbook. It returns to earlier cases at greater depth, applies the integrated frameworks of Chapters 13–17, and introduces additional cases that earlier chapters did not develop in detail. Its purpose is consolidation: bringing the analytical capability the prior chapters have built to bear on cases that demonstrate both the framework’s power and the field’s continued evolution.

The chapter also closes the textbook by reflecting on what graduates of the unit should be able to do with the analytical-and-operational capability the unit develops. Students completing the textbook will encounter many AI deployment cases across their careers — as employees, consultants, investors, regulators, journalists, or scholars. The integrated framework supports their analysis of cases that have not yet emerged. The framework’s value is realised over years and decades of subsequent analytical practice, not only in the immediate engagement with the case material this textbook has covered.

The chapter comprises fourteen sections. Section 18.1 frames the case-based approach. Sections 18.2–18.4 cover three success archetypes at integrated depth (Stitch Fix; Stripe Radar; GitHub Copilot). Section 18.5 covers NVIDIA as the infrastructure case. Section 18.6 covers the foundation-model labs (Anthropic and OpenAI) as deployment cases. Section 18.7 covers the DeepSeek R1 January 2025 inflection. Section 18.8 covers recent cautionary additions. Sections 18.9–18.10 cover Asian and Australian regional cases. Section 18.11 develops cross-case synthesis. Section 18.12 develops the forward-looking case methodology. Section 18.13 covers the graduate-capability synthesis. Section 18.14 closes the textbook.

18.1 The case-based approach to AI in business

The textbook’s organising approach has been case-based throughout. Part II developed sectoral case material; Part III built analytical frameworks anchored in the cases; Part V applied operational discipline that the cases motivate. The case-based approach is not coincidental; it reflects specific pedagogical and analytical commitments.

Why cases matter. Cases provide concrete material for analysis that abstract framework discussion alone cannot match. The Acemoglu-Restrepo task-based framework (Chapter 15) is more comprehensible against the Hollywood-strike, professional-services, and agricultural-labour cases than against abstract description. The five-factor maturity framework (Section 16.4) is more usable against the Watson Health, Klarna, and Stitch Fix cases than against generic dimensions. The integrated framework (Chapter 17) is more rigorously tested against specific case applications than against general assertions.

The case-based approach also captures the specific characteristics of contemporary AI that abstract analysis sometimes misses. AI deployment is heterogeneous across sectors and applications; the heterogeneity is what cases demonstrate. AI deployment evolves rapidly; the cases capture moments in the evolution that subsequent analysis can build on. AI deployment failures and successes follow specific patterns; the patterns are visible across cases in ways that single-case or abstract analysis would not produce.

The integrated-framework methodology. Throughout the textbook, the integrated framework has been applied to specific cases. The methodology has a specific structure:

  • Identify the case. What deployment, decision, or event is being analysed?
  • Apply the four lenses. Capability-and-maturity; governance-and-compliance; agentic-deployment; labour-and-economic.
  • Identify cross-lens patterns. Alignment, compound misalignment, binding constraints, complementarity.
  • Synthesise the diagnosis. What the case is fundamentally about; what the lessons are.
  • Generalise the lessons. What recurring patterns the case illustrates; how the lessons apply to other contexts.

The methodology produces the systematic analysis that case-based work requires. Section 18.2 begins the application.
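
For readers who find a structured checklist useful, the following is a minimal sketch of the five-step methodology as a small data structure, written in Python. The field names, the placeholder findings, and the completeness check are assumptions made for this sketch; they are not part of the framework’s formal specification.

    from dataclasses import dataclass, field

    # The four lenses of the integrated framework (Chapters 13-17).
    LENSES = (
        "capability_and_maturity",
        "governance_and_compliance",
        "agentic_deployment",
        "labour_and_economic",
    )

    @dataclass
    class CaseAnalysis:
        """One pass of the integrated-framework methodology over a single case."""
        case: str                                                 # Step 1: identify the case
        lens_findings: dict = field(default_factory=dict)         # Step 2: apply the four lenses
        cross_lens_patterns: list = field(default_factory=list)   # Step 3: alignment, misalignment, constraints
        diagnosis: str = ""                                       # Step 4: synthesise the diagnosis
        generalised_lessons: list = field(default_factory=list)   # Step 5: generalise the lessons

        def is_complete(self) -> bool:
            # An analysis is complete only when every lens has a finding
            # and a diagnosis has been written down.
            return all(lens in self.lens_findings for lens in LENSES) and bool(self.diagnosis)

    # Illustrative use with placeholder findings (not the chapter's full analysis):
    analysis = CaseAnalysis(case="Stitch Fix")
    for lens in LENSES:
        analysis.lens_findings[lens] = "favourable"
    analysis.diagnosis = "High alignment across all four lenses."
    print(analysis.is_complete())  # True

The point of the sketch is the discipline it encodes: an analysis is not complete until every lens has been applied and a diagnosis has been written down.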

The case-selection criteria for this chapter. Cases are selected against three criteria.

First, cases that warrant deeper integrated analysis than earlier chapters provided. Stitch Fix (Section 8.2 introduced the data flywheel; Section 18.2 develops the integrated analysis), Stripe Radar (Section 8.10 introduced the platform; Section 18.3 develops the analysis), and GitHub Copilot (Section 13.5 introduced the deployment; Section 18.4 develops the analysis) each receive deeper treatment.

Second, cases that earlier chapters did not develop at depth. NVIDIA (Section 18.5), the foundation-model labs as cases (Section 18.6), and the DeepSeek inflection (Section 18.7) each receive substantial treatment as additions to the textbook’s case canon.

Third, regional cases that warrant focused attention given the unit’s KL-Melbourne dual-cohort context. Sections 18.9 (Asian regional) and 18.10 (Australian regional) develop specific regional cases.

The cumulative case material across Part II and Chapter 18 is substantial; the analytical capability developed across Part III applies systematically to all of it.

18.2 The success archetype — Stitch Fix integrated analysis

Stitch Fix (introduced in Section 8.2) is among the cleanest examples of the data-flywheel-driven success pattern. The integrated analysis demonstrates how the four lenses align to produce sustained competitive advantage.

The case context. Stitch Fix, founded in 2011 by Katrina Lake (then a Harvard Business School student) and headquartered in San Francisco, operates a personalised styling service for clothing customers. The business model: customers complete a style profile; Stitch Fix sends a curated selection of clothing items (a “Fix”); customers keep what they want and return the rest; the data on what is kept and returned feeds back into the recommendation system. The model produces structured data on every customer interaction that the broader e-commerce industry typically lacks.

The 2017 IPO valued the company at approximately USD 1.4 billion; subsequent operations have produced substantial revenue (USD 1.7 billion in fiscal 2024) with episodes of growth and contraction over the period. The company’s competitive positioning has been substantially built on its data-and-AI infrastructure; chief algorithms officer roles and substantial data-science investment have been central to the firm’s identity.

Capability-and-maturity lens. Stitch Fix’s AI capability for personalised recommendations is at high TRL — the system has operated at production scale for over a decade with continuous capability improvement. The deployment-maturity assessment is at ADML 4–5: the AI is deeply integrated into the firm’s operations; the firm’s competitive position is substantially built on the AI infrastructure; the firm cannot operate without the AI capability.

The five-factor analysis is favourable across all dimensions: task definition is sharp (predict which items will satisfy the customer); feedback signals are strong (every Fix produces detailed structured data); data availability is substantial (millions of Fix interactions accumulated); regulatory environment is light (general consumer-protection only); deployment friction is low (web-and-app-based plus warehouse operations).
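
As a rough illustration of how the five-factor assessment can be recorded and compared across cases, the sketch below scores each factor on a simple scale of 1 (unfavourable) to 5 (favourable). The numeric scores are illustrative assumptions chosen to mirror the qualitative description above; they are not calibrated ratings from the framework.

    # Five-factor deployment-maturity assessment (Section 16.4), scored 1 (unfavourable) to 5 (favourable).
    FACTORS = [
        "task_definition",
        "feedback_signals",
        "data_availability",
        "regulatory_environment",
        "deployment_friction",   # scored so that a higher score means lower friction
    ]

    def favourability(scores: dict) -> float:
        """Average the five factor scores after checking that all are present."""
        missing = [f for f in FACTORS if f not in scores]
        if missing:
            raise ValueError(f"Missing factor scores: {missing}")
        return sum(scores[f] for f in FACTORS) / len(FACTORS)

    # Hypothetical scores for the Stitch Fix case described above.
    stitch_fix = {
        "task_definition": 5,         # predict which items will satisfy the customer
        "feedback_signals": 5,        # every Fix produces structured keep/return data
        "data_availability": 5,       # millions of accumulated Fix interactions
        "regulatory_environment": 4,  # general consumer protection only
        "deployment_friction": 4,     # web-and-app plus warehouse operations
    }

    print(round(favourability(stitch_fix), 2))  # 4.6 on this illustrative scoring

The same structure applied to a cautionary case would make the unfavourable factors, and therefore the likely binding constraints, immediately visible.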

Governance-and-compliance lens. Stitch Fix operates within general consumer-protection and data-privacy frameworks (CCPA in California; broader US state-level frameworks). The company has not faced substantial AI-specific regulatory friction; the deployment context is structurally less governance-constrained than financial-services or healthcare AI. The lighter governance burden is partly a sectoral characteristic (e-commerce styling is lower-stakes than financial or clinical decisions) and partly a deployment characteristic (the AI augments rather than makes binding decisions for the customer).

Agentic-deployment lens. Stitch Fix’s AI is structurally hybrid — algorithmic recommendation combined with human stylist judgment. The combination is operationally important: the AI generates candidate selections; a human stylist reviews and finalises the Fix. The human-in-the-loop structure addresses several agentic concerns directly: the AI does not take final action without human review; the failure modes (recommending items that cause customer dissatisfaction) are bounded; the trust threshold for fully-autonomous operation is not crossed.

The hybrid structure is itself a strategic choice. The pure-algorithm approach would reduce labour costs but lose the relationship and judgment dimension; the pure-human approach would lose the data-flywheel benefits. The hybrid captures both.

Labour-and-economic lens. Stitch Fix employs a substantial workforce of human stylists alongside the AI infrastructure. The labour pattern is augmentation: stylists handle higher-value judgment and customer-relationship work; the AI handles the candidate-generation and pattern-recognition work. The augmentation pattern produces more-engaged stylist work than pure-volume styling would, and better customer outcomes than pure-algorithm recommendation would.

The economic structure produces value capture across multiple dimensions: the data-flywheel produces sustained competitive advantage that competitors without comparable data cannot match; the hybrid AI-human structure produces more value than either alone; the customer relationship sustains over multiple Fixes, producing lifetime value that single-transaction e-commerce does not match.

The integrated diagnosis. Stitch Fix exhibits high alignment across the lenses. The capability is mature; the deployment context is exceptionally favourable; the governance burden is manageable; the agentic dimensions are well handled through the hybrid structure; the labour dynamics are augmentation-positive. The compound success across lenses produces the sustained competitive advantage that the firm has demonstrated.

Lessons that generalise.

Lesson 1 — the data flywheel is structural. Firms that design for the data flywheel from the start produce sustained advantage; firms that retrofit a flywheel onto existing operations produce weaker results. The flywheel is not a feature to add but a structural design principle.

Lesson 2 — hybrid AI-human structures support deployment in trust-sensitive contexts. Where the trust threshold for fully-autonomous AI is not yet crossed (which is most contexts in 2026), the hybrid structure captures most of the AI benefit while maintaining human judgment.

Lesson 3 — sustained advantage requires continuous investment. Stitch Fix’s data-flywheel advantage requires ongoing investment in the AI infrastructure, the data-collection mechanisms, and the human-AI integration. The advantage erodes if investment lapses.

Lesson 4 — sectoral characteristics matter. The same AI strategy that works in personalised styling may not work in financial advisory or clinical decision-making; the sectoral differences in regulatory environment, trust threshold, and operational structure produce different deployment outcomes.

18.3 The platform-as-success — Stripe Radar integrated analysis

Stripe Radar (introduced in Section 8.10) provides a different success archetype: AI as platform infrastructure that supports broader business capability.

The case context. Stripe, founded in 2010 by Patrick and John Collison and headquartered in San Francisco, operates payments infrastructure for businesses globally. Stripe Radar, the company’s fraud-prevention AI product, was launched in 2016 and has since become one of the highest-deployment-scale fraud-prevention systems globally. The product processes hundreds of billions of dollars in annual transaction volume; the underlying ML capability supports decisions that affect both Stripe’s customers (the businesses using Stripe payments) and Stripe’s customers’ customers (the consumers transacting with those businesses).

Capability-and-maturity lens. Stripe Radar’s capability is at very high TRL (TRL 9, proven in operational environment) for fraud-prevention specifically. The deployment-maturity assessment is at ADML 4–5: the system is deeply integrated into Stripe’s payments infrastructure; Stripe’s competitive position substantially depends on the fraud-prevention capability; competitive entry without comparable fraud capability is structurally difficult.

The five-factor analysis is highly favourable: task definition is sharp (predict whether a transaction is fraudulent); feedback signals are very strong (chargebacks provide direct outcome data); data availability is substantial (hundreds of billions in annual transaction volume across Stripe’s customer base); regulatory environment is established (payment-industry regulation including PCI DSS and adjacent frameworks); deployment friction is low (the system operates within Stripe’s payments infrastructure).

Governance-and-compliance lens. Stripe Radar operates within substantial regulatory infrastructure: PCI DSS for payment-card security; broader payments regulation including cross-border requirements; AML/KYC frameworks; consumer-protection frameworks. The cumulative regulatory burden is substantial; Stripe’s compliance infrastructure is sophisticated and continues to develop.

The 2024–2026 AI-specific regulatory developments (Section 14.5 covered the US framework; Chapter 14 covered the EU and other frameworks) have added specific requirements. Stripe has been a substantive contributor to the policy discussion; the company’s compliance infrastructure has progressively addressed AI-specific requirements.

Agentic-deployment lens. Stripe Radar is substantially agentic — the system makes specific transaction-acceptance-or-rejection decisions autonomously, in milliseconds, at substantial scale. The agentic-specific failure modes apply: errors at the system’s scale produce substantial harm; the trust threshold for autonomous fraud decisions is operationally critical; the decision must be reviewable through specific mechanisms (chargeback processes; merchant-appeals; consumer-recourse).

The deployment design addresses these dimensions explicitly: false-positive rates are continuously monitored and adjusted; merchant-side controls allow customisation; appeal-and-review processes exist for both merchants and consumers; the audit trail of decisions supports investigation. The hybrid structure (the AI makes decisions; specific human-review mechanisms exist for disputed cases) addresses the trust-threshold concerns.

Labour-and-economic lens. Stripe Radar’s labour effects are substantially positive for the broader economy. The fraud-prevention work that Stripe Radar performs at scale would otherwise require either substantially more human labour at merchant firms (each merchant operating fraud teams) or substantially more fraud losses (higher transaction costs across the economy). The platform’s efficiency contributes to lower transaction costs for online commerce broadly.

The labour effects on Stripe itself involve substantial AI-fluent employment. Stripe’s data-science, engineering, and risk-management teams represent substantial high-skilled employment; the firm’s growth has supported substantial career opportunities in these fields.

The integrated diagnosis. Stripe Radar exhibits the platform-AI archetype: AI as infrastructure that supports broader business capability across many users. The compound success across the lenses produces a position that is structurally difficult for competitors to match. The case demonstrates that AI deployment can produce platform-level value rather than only firm-level efficiency gains.

Lessons that generalise.

Lesson 1 — platform AI captures network effects that firm-level AI does not. The Stripe Radar advantage compounds across all of Stripe’s customers; comparable capability built within a single merchant’s operations cannot match the cross-merchant data and pattern recognition.

Lesson 2 — agentic deployment in high-stakes contexts requires structural design for appeal and review. The trust threshold for autonomous high-stakes decisions is high; deployment design must address the threshold explicitly through review mechanisms.

Lesson 3 — regulatory burden can be a moat. Stripe’s substantial compliance infrastructure is itself a competitive advantage; new entrants face the cost of building comparable infrastructure before they can offer comparable services. The pattern generalises to other regulated AI deployment domains.

18.4 The capability frontier — GitHub Copilot integrated analysis

GitHub Copilot (introduced in Section 13.5) provides the canonical case of capability-frontier AI augmentation deployed at scale.

The case context. GitHub Copilot, launched in technical preview in June 2021 (built on OpenAI’s Codex model, descended from GPT-3) and released to general availability in June 2022, was the first major commercial deployment of foundation-model-based coding assistance. By 2024, Copilot was used by over 1 million developers across organisations representing the majority of Fortune 500 firms. The 2024 Copilot Workspace launch extended capability toward more-agentic patterns; subsequent Copilot Enterprise extensions have continued the trajectory.

Capability-and-maturity lens. Copilot’s capability is at high TRL for the specific code-completion and adjacent tasks (TRL 8–9). The deployment-maturity assessment is at ADML 3–4: the product is operational at substantial scale across Microsoft’s enterprise customer base; the deployment infrastructure is mature; the integration with VS Code and broader Microsoft developer ecosystem is deep.

The five-factor analysis is favourable: task definition is sharp (suggest the next lines of code; complete the function; suggest the implementation); feedback signals are clear (developers accept or reject suggestions; the code compiles or doesn’t; tests pass or fail); data availability is exceptional (the underlying GitHub code corpus is among the largest source-code datasets globally); regulatory environment is light (general software-development frameworks); deployment friction is low (developers install the extension; integration is automatic).

Governance-and-compliance lens. Copilot operates within general intellectual-property frameworks. The most-substantive governance question has been training-data IP — Copilot’s training corpus included GitHub-hosted code, including code with various open-source licenses. The 2022 Doe v. GitHub class-action lawsuit alleged license-violation through Copilot’s training; the case has progressed through 2022–2026 with various rulings on specific issues. The case is structurally similar to the New York Times v. OpenAI (Section 10.9) and broader training-data-IP litigation landscape.

GitHub’s response has included: the 2022 introduction of “duplicate detection” to suppress verbatim reproduction of training-corpus code; the 2023 introduction of code-attribution features; the 2024 substantial expansion of license-compatibility infrastructure. The company has progressively addressed the IP concerns; the legal landscape continues to develop.

Agentic-deployment lens. Copilot evolved from primarily-assistive to increasingly-agentic through 2022–2026. The original product suggested completions that the developer accepted or rejected; the 2024 Copilot Workspace extensions added multi-file editing capabilities that approach agentic deployment; the broader trajectory toward autonomous coding-task completion is continuing.

The agentic dimensions raise specific questions: when Copilot writes code that the developer accepts, who is responsible for bugs in the code? When the AI suggests architectural choices that prove problematic, what is the accountability framework? The incidents observed to date (occasional suggestions containing security vulnerabilities; occasional near-verbatim reproduction of licensed code; quality issues in generated code) have produced operational responses, but the broader accountability framework is still developing.

Labour-and-economic lens. Copilot’s labour effects have been substantially augmentation-focused. The Microsoft-GitHub research (Peng et al., 2023) found that developers completed a specific coding task roughly 55% faster with Copilot; subsequent research has produced more-mixed estimates but with consistent productivity gains in the augmentation context.

Software-engineering employment has not collapsed despite substantial Copilot adoption. The 2024 software-engineering job market has been more competitive than the 2021–2022 boom, but this reflects broader tech-industry restructuring rather than AI-driven displacement. The pattern is consistent with the broader Acemoglu-Restrepo augmentation framework.

The integrated diagnosis. Copilot exhibits the capability-frontier-augmentation archetype: AI that extends professional capability without substituting for it. The compound success across the lenses (mature capability; established deployment context; manageable governance with ongoing IP development; mostly assistive with progressive agentic extensions; augmentation-positive labour effects) produces sustained value.

Lessons that generalise.

Lesson 1 — frontier capability supports augmentation before substitution. The early-deployment pattern is augmentation; substitution-focused deployment in the same capability domain (the broader autonomous-coding-agent attempts in 2024–2025; Section 13.5 covered Devin’s restructuring) produced more-mixed results.

Lesson 2 — integration with existing workflows determines adoption. Copilot’s deep integration with VS Code is a substantial driver of adoption; standalone alternatives have struggled. The lesson generalises beyond coding: AI deployment that integrates with existing workflows succeeds where standalone deployment struggles.

Lesson 3 — IP frameworks for training data are still developing. The 2022 Doe v. GitHub litigation, the 2023 NYT v. OpenAI litigation, the 2024 RIAA v. Suno-Udio litigation, and adjacent cases collectively shape the framework. The framework is not yet settled in 2026; the resolution will substantially affect the broader generative-AI economic landscape.

18.5 The hyperscaler infrastructure — NVIDIA’s role

A specific case worth dedicated treatment is NVIDIA — the dominant provider of AI training-and-inference infrastructure during the 2022–2026 period.

The case context. NVIDIA Corporation, founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, was originally a graphics-card manufacturer. The company’s CUDA platform (introduced 2006) enabled general-purpose GPU computing that proved foundational for the deep-learning era beginning with AlexNet in 2012. The 2017–2022 deployment of GPUs for deep-learning training established NVIDIA’s positioning. The 2022–2024 ChatGPT-and-foundation-model wave produced explosive growth: NVIDIA’s revenue grew from approximately USD 27 billion in fiscal 2023 to approximately USD 130 billion in fiscal 2025 — among the most-rapid revenue growth at scale of any major technology company in history. The company became the most-valuable public company globally for periods through 2024–2025, with a market capitalisation exceeding USD 3 trillion at peak.

Capability-and-maturity lens. NVIDIA’s product capability for AI infrastructure is at extremely high TRL. The H100 (introduced 2022) and H200 (introduced 2024) GPUs have been the standard infrastructure for foundation-model training globally; the Blackwell architecture (introduced 2024) and subsequent generations continue the trajectory. The deployment-maturity assessment is structural: NVIDIA’s products are the deployment infrastructure that other firms’ AI deployments rely on. The firm operates at ADML 5 — its competitive position is structurally derived from the AI infrastructure it provides.

The five-factor analysis is favourable across dimensions: task definition is sharp (provide compute infrastructure that supports AI training and inference); feedback signals are strong (customer adoption and competitive comparisons); data availability is substantial (NVIDIA’s customer-deployment data); regulatory environment is established with progressive AI-related developments; deployment friction is moderate (integration with customers’ broader compute infrastructure requires substantial engineering).

Governance-and-compliance lens. NVIDIA faces specific governance challenges. The most-prominent is US export controls: the Biden-era restrictions on AI-chip exports to China (2022, 2023, 2024 progressive elaboration) have substantially affected NVIDIA’s revenue trajectory. The Trump administration’s policy through 2025 has continued the export-control framework with specific modifications. The geopolitical dimension of AI infrastructure is increasingly central; NVIDIA navigates substantial regulatory complexity.

Beyond export controls, NVIDIA faces other regulatory considerations: antitrust attention to its dominant market position; specific national-security frameworks; the broader AI-Act implications for the chip-and-infrastructure layer. The regulatory complexity is substantial and continues to develop through 2024–2026.

Agentic-deployment lens. NVIDIA’s products are not themselves agentic in the deployment sense — they are infrastructure. However, they enable substantial agentic deployment by their customers. The case is informative for how infrastructure providers fit in the broader agentic-AI ecosystem: NVIDIA does not directly deploy agents but enables the entire industry’s agent deployment. The strategic position is structurally important.

Labour-and-economic lens. NVIDIA’s economic position has been transformative for the company and significant for the broader economy. The market-capitalisation growth has produced substantial wealth creation; the customer spending the firm enables has produced substantial economic activity. The labour effects are broad: NVIDIA itself has expanded rapidly; customer-firm employment in AI has grown; and the wider chip-design and chip-fabrication ecosystem (TSMC, Samsung, and other suppliers) has benefited from NVIDIA-driven demand.

The economic-rents question (Section 11.12) is operationally significant for NVIDIA. The firm captures substantial rents from its dominant position; competitors (AMD with the MI300 series; Intel with the Gaudi series; Google with the TPU; Amazon with Trainium and Inferentia; other entrants) are progressively challenging the position but have not yet substantially displaced it. The 2026–2030 trajectory will substantially shape how those rents are distributed.

The integrated diagnosis. NVIDIA’s position is the infrastructure-dominance archetype: a single firm providing infrastructure that an entire industry depends on. The position is competitively valuable and politically sensitive; it produces substantial rents but also substantial regulatory attention. The integrated lessons:

  • Infrastructure firms occupy strategic positions in AI ecosystems that downstream-application firms do not match.
  • Regulatory and geopolitical dimensions are increasingly central to AI infrastructure firms.
  • Competition for infrastructure positions is substantial; the long-run market structure is not yet settled.
  • The economic value created by infrastructure positioning is substantial; the rents capture is contested.

The NVIDIA case illustrates the broader infrastructure layer of contemporary AI deployment. Foundation-model labs (Section 18.6), cloud providers, and specialised infrastructure firms collectively constitute the infrastructure that downstream application deployment depends on. Understanding this layer is essential for understanding contemporary AI economics.

18.6 The frontier-model labs as cases — Anthropic and OpenAI

Anthropic and OpenAI, the two major contemporary frontier-model labs, are themselves cases worth integrated analysis. The labs’ deployment trajectories shape the broader AI landscape.

The case context. OpenAI was founded in December 2015 by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and others as a non-profit AI research organisation. The 2019 transition to a capped-profit structure with Microsoft investment shifted the model substantially; the 2022 ChatGPT release produced commercial inflection. By 2026 OpenAI is a substantial commercial entity with multi-billion-dollar revenue, hundreds of millions of users, and continued frontier-research capability.

Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and several other former OpenAI researchers. The firm’s positioning has emphasised AI safety as central to commercial mission; the Claude family of models has been the primary commercial product. By 2026 Anthropic is a substantial commercial entity with substantial revenue, enterprise customer adoption, and continued frontier-safety-research capability.

Capability-and-maturity lens. Both labs operate at the capability frontier — their models are among the most-capable foundation models commercially available. The deployment-maturity assessment is interesting: as foundation-model providers, they operate at ADML 5 in their core capability (the foundation models are the firms’ core products); their broader application-deployment is at varying levels (ChatGPT operates at substantial scale; Claude operates at substantial enterprise scale; specific other products are at various levels).

The five-factor analysis is favourable for both firms in their core foundation-model business but variable in their broader application deployments. The capability advancement has been substantial through 2022–2026; the deployment maturation has accelerated substantially; the regulatory environment has progressively developed; agentic-deployment dimensions are increasingly central.

Governance-and-compliance lens. Both labs operate at the centre of AI-governance development. The frontier-model governance frameworks (Section 14.10) — Anthropic’s Responsible Scaling Policy; OpenAI’s Preparedness Framework — represent substantial firm-internal governance. The labs have been active participants in international governance (Bletchley, Seoul, Paris summits); both firms have signed the Frontier AI Safety Commitments and have published substantial governance content.

The implementation of the governance frameworks has been variable. Anthropic’s RSP has been more substantively implemented (with specific public capability-evaluation publications) than OpenAI’s Preparedness Framework (which has had public controversy through 2024 with safety-team departures). The contrast is substantial; the broader-industry governance landscape will be shaped by how these frameworks evolve.

Agentic-deployment lens. Both labs have substantial agentic capabilities and deployments. OpenAI Operator (Section 13.6) and ChatGPT’s broader agentic features represent OpenAI’s deployment positioning. Anthropic’s Computer Use (Section 13.6), Claude Code (Section 13.5), and the broader Claude API enable substantial agentic deployment by customers.

The labs’ specific agentic-deployment patterns differ substantially. OpenAI has been more consumer-facing in its agentic deployment (ChatGPT; Operator); Anthropic has been more enterprise-and-developer-facing (Claude API; Claude Code). The strategic positioning differences shape the firms’ deployment patterns and the broader market structure.

Labour-and-economic lens. Both labs have produced substantial economic effects. OpenAI’s revenue has grown substantially through 2022–2026 (multi-billion-dollar annual revenue with continued rapid growth); Anthropic’s revenue has grown substantially (mid-billions annually with rapid growth); the broader AI-economy effects (developer ecosystems; customer-firm AI adoption; the broader infrastructure demand) are substantial.

The labour effects within the labs are substantial: both firms have grown rapidly with substantial high-skilled employment; the labs have been substantial employers for AI researchers, engineers, and adjacent talent. The labour-market dynamics in AI talent (substantial salary premiums; rapid mobility between labs and other firms; specific patterns of employee activism on safety questions) have been distinctive to the lab context.

The integrated diagnosis. The frontier-model labs occupy a foundation-position archetype: their products are infrastructure that downstream application deployment relies on. The position is structurally similar to NVIDIA’s infrastructure position but operates at a different layer (the model layer rather than the chip layer). The compound positioning produces substantial influence on the broader AI deployment landscape; the specific governance and safety choices the labs make shape what AI deployment is possible across the industry.

Lessons that generalise.

Lesson 1 — foundation-model positioning produces infrastructure-level influence. Firms operating at the foundation-model layer have substantial influence on downstream deployment; the strategic positioning matters substantially for the broader AI landscape.

Lesson 2 — firm-internal governance becomes industry standard. The lab-internal governance frameworks (RSP; Preparedness Framework) are progressively becoming industry references; the firms’ choices shape broader practice.

Lesson 3 — capability-and-deployment trajectories may diverge. The labs’ capability advancement has been faster than the broader industry’s deployment-environment maturation; the gap produces specific dynamics that shape what deployment is possible.

Lesson 4 — geopolitical and regulatory positioning is increasingly central. Both labs face substantial regulatory and geopolitical attention; the firms’ navigation of these dimensions substantially shapes their futures.

18.7 The China inflection — DeepSeek R1 January 2025

A specific case worth focused treatment is DeepSeek’s R1 release in January 2025, which substantially shifted the global foundation-model competitive landscape.

The case context. DeepSeek, a Chinese AI firm founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, released DeepSeek-R1 in January 2025. The model was a frontier-class reasoning model with capability comparable to OpenAI’s o1 series. The release had several distinctive features: open-source release with permissive licensing; substantially lower reported training costs than US frontier models (DeepSeek’s published figure of approximately USD 5.6 million covered the final training run of the DeepSeek-V3 base model from which R1 was developed, substantially below the hundreds of millions to billions that comparable US models reportedly cost); strong inference performance with relatively modest hardware requirements.

The release produced substantial market and policy reactions. NVIDIA’s stock declined approximately 17% on 27 January 2025, the largest single-day market-capitalisation loss in US public-market history at the time (approximately USD 600 billion); broader AI-related stocks declined; substantial public discussion ensued about whether the contemporary US AI competitive position was threatened.

Capability-and-maturity lens. R1’s capability was substantively at the frontier — the published benchmarks showed performance comparable to o1 on many evaluation tasks. The deployment-maturity assessment is interesting: as an open-source model, R1’s deployment is structurally different from proprietary models; the firm’s competitive position is built around the broader ecosystem rather than around exclusive product access.

The case is substantively about whether efficient frontier-capability training is possible at substantially lower compute investment than US labs have been investing. The question is technically contested; subsequent analysis has produced varied conclusions about the actual training costs and the methodology’s generalisability.

Governance-and-compliance lens. DeepSeek operates within Chinese AI regulatory frameworks (Section 14.7). The firm’s deployment in China is subject to the Generative AI Provisions; the deployment internationally faces the broader AI-governance landscape. The export-control implications of R1’s release have been substantial: if frontier capability can be developed at substantially lower compute investment, the export-control framework’s effectiveness in constraining Chinese AI development is reduced.

The 2025 Trump administration policy response has continued the export-control framework with specific modifications addressing the R1-revealed dynamics. The broader policy implications continue to develop through 2025–2026.

Agentic-deployment lens. R1’s reasoning capability supports substantial agentic deployment. The open-source release means the model is broadly available for agentic-deployment construction; the deployment ecosystem around R1 has grown substantially through 2025.

Labour-and-economic lens. The economic implications of R1 have been substantial. The case demonstrated that frontier capability is not exclusively achievable at hyperscale-only investment levels; the implications for the broader AI competitive landscape are contested. The market reactions (NVIDIA stock decline; broader AI-stock pressure) reflected the market’s reassessment of the AI-infrastructure-investment thesis.

The longer-term economic implications depend on whether R1’s training-cost economics generalise (the methodology was open-published; whether other firms can replicate and extend it is an empirical question), whether the open-source pattern continues to challenge proprietary positioning, and how the geopolitical-and-regulatory environment evolves.

The integrated diagnosis. R1 is the competitive-disruption archetype — a deployment that substantially shifts the competitive landscape through specific capability or economic differentiation. The case illustrates several patterns:

  • Frontier capability is not exclusive to US labs. The 2024–2026 period produced substantial Chinese AI capability development; R1 is the most-prominent example. The competitive landscape is global, not US-exclusive.
  • Open-source releases can shift competitive dynamics. R1’s open-source release substantially reduced the proprietary advantage of frontier-class reasoning models; subsequent open-source frontier releases may continue the pattern.
  • Cost-efficiency in training is operationally significant. If frontier capability can be developed at substantially lower investment, the AI-economy structure is different from the hyperscale-required model.
  • Policy responses lag market and technical developments. The export-control framework’s 2025 reaction came after the R1 release; subsequent policy continues to develop.

The R1 inflection demonstrates that the AI competitive landscape continues to evolve in unpredictable ways. The integrated framework supports analysis of subsequent inflections that will continue to emerge through 2026–2030.

18.8 Recent cautionary additions — 2024–2026 cases not yet covered

Beyond the major cautionary cases of Part II, specific 2024–2026 cases warrant brief integrated treatment.

The Cruise October 2023 incident and December 2024 closure. Section 12.4 covered the case briefly. The integrated analysis: the case combines capability-and-deployment-maturity failure (the AV technology was not yet ready for unsupervised public deployment); governance failure (the company’s relationship with regulators deteriorated through the incident); agentic-deployment failure (the system took actions in a complex multi-agent environment with inadequate handling of edge cases); labour-and-economic effects (substantial layoffs; AV-industry retrenchment). The compound failure across lenses produced the GM exit decision.

The Apple Project Titan February 2024 closure. Section 12.4 covered the case briefly. The integrated analysis: the project had operated for approximately a decade with cumulative spending exceeding USD 10 billion. The closure decision reflected several integrated factors: capability-maturity for autonomous-vehicle technology had not advanced as projected; deployment-environment friction for vehicle manufacturing was substantial; the regulatory environment for AVs remained uncertain; the labour-and-economic case for Apple’s specific positioning was insufficient. The decision was substantially about strategic positioning rather than technical incapacity; the closure illustrates that even capability-rich firms can decide that specific deployments are not viable.

The 2024 NYC Local Law 144 implementation issues. The bias-audit requirements for AI employment tools (Section 14.5) entered force in 2023; the implementation through 2024–2026 has produced substantial issues. Specific cases of inadequate audits, audit-quality variance, and ambiguous applicability have emerged. The case illustrates governance-implementation challenges: passing legislation is the first step; effective implementation requires substantial subsequent work.

The 2024 Air Canada chatbot ruling. A specific case worth noting is the February 2024 British Columbia Civil Resolution Tribunal ruling against Air Canada in a small-claims case. The customer used Air Canada’s chatbot for travel-policy information; the chatbot provided incorrect information about bereavement-fare policies; Air Canada attempted to argue that the chatbot was a separate entity for which the airline was not responsible. The Tribunal ruled against Air Canada, finding the airline responsible for chatbot statements. The case is small in absolute terms but legally precedent-setting: firms cannot disclaim responsibility for their AI systems’ outputs by pointing to AI as a separate entity. The ruling has been cited subsequently in other AI-accountability contexts.

The 2024–2025 generative-AI training-data litigation expansion. Beyond NYT v. OpenAI (Section 10.9) and RIAA v. Suno-Udio (Section 10.8), the training-data litigation has expanded substantially. Specific cases include Authors Guild v. OpenAI (multiple plaintiff groups; consolidated 2024); Getty Images v. Stability AI (UK and US proceedings); various other publishers and rights-holders pursuing claims. The cumulative legal landscape is unsettled; the 2026–2028 trajectory will substantially shape the legal foundation for foundation-model training.

The recent cases collectively illustrate that the cautionary-case pattern continues to produce new examples. The diagnostic framework (Section 16.12) applies; the patterns recur. Mature analytical practice anticipates that subsequent cases will continue to illustrate the framework’s diagnostic categories.

18.9 Asian regional cases — DBS, ViTrox, Sime Darby Plantations

Three Asian regional cases warrant integrated analysis given the unit’s KL-Melbourne dual-cohort context.

DBS Bank — Singapore’s banking AI exemplar. DBS Bank Limited, headquartered in Singapore, has been a substantial regional AI deployer. The bank’s “DBS as a Tech Company” positioning since approximately 2014 has supported substantial AI investment; specific deployments include comprehensive customer-service AI; substantial fraud-detection infrastructure; AI-supported wealth management; and increasingly agentic deployments through 2024–2026. The bank’s AI deployment has been recognised in international banking-technology rankings; DBS has been substantively positioned as one of the most-AI-capable banks globally.

The integrated analysis: capability and deployment maturity are at high alignment (ADML 3–4); the Singapore regulatory environment (Section 14.8) supports the deployment with substantial AI-specific MAS guidance; agentic dimensions are increasingly central (with bounded-scope deployments); labour effects are augmentation-positive with substantial AI-fluent workforce development. The case is the leading regional example of mature financial-services AI; lessons generalise to other Asian banks pursuing similar trajectories.

ViTrox Corporation — the Penang manufacturing AI frontier. ViTrox (introduced in Sections 9.3 and 9.12) is a Malaysian-headquartered firm operating at the global frontier of automated optical inspection for electronics manufacturing. Founded in 2000 in Penang and listed on Bursa Malaysia, the firm has grown substantially through 2010–2026 with major customers across the global semiconductor and electronics industries.

The integrated analysis: capability is at high TRL (TRL 9 for the specific inspection applications); deployment maturity is exceptional (the products are operationally deployed at scale across major customers globally; the firm operates at ADML 4–5 in its specific applications); the governance environment is established (manufacturing inspection is well-regulated); agentic dimensions are present but bounded (the systems make specific inspection decisions; integration with broader manufacturing operations is engineered); labour effects are augmentation-positive in the specific deployment (the systems support human inspectors rather than fully replacing them). The case demonstrates that frontier-AI capability can be developed in regional markets without requiring US or European headquarters; the lessons generalise to other regional AI firms pursuing similar trajectories.

Sime Darby Plantations — Malaysian agricultural AI. Sime Darby Plantations Berhad (introduced in Section 11.8) operates the world’s largest oil-palm plantation portfolio at approximately 600,000 hectares globally. The firm’s AI-deployment programme through 2018–2026 has been substantial; specific applications include drone-and-satellite-based plantation monitoring; AI-driven yield prediction; harvest-scheduling optimisation; processing-facility quality assessment.

The integrated analysis: capability is at TRL 7–8 for the specific applications; deployment maturity is at ADML 2–3 (operational at scale with continuing development); the governance environment is mid-developed (palm-oil sustainability frameworks add specific requirements; the EU Deforestation Regulation, with compliance deadlines phased through 2025–2026, has substantial implications); agentic dimensions are limited (the AI provides decision support rather than autonomous action); labour effects are mixed (the AI-deployed operations remain labour-intensive but with substantial efficiency gains; the broader sustainability-and-labour-rights questions remain). The case demonstrates that AI deployment in Asian agricultural contexts faces specific challenges (sustainability requirements; labour-rights questions; a smaller customer base than manufacturing or banking) that the integrated framework helps clarify.

Cross-case patterns. The three Asian cases collectively illustrate the regional AI-deployment landscape: substantial deployment maturity in specific contexts (financial services in Singapore; specific manufacturing in Malaysia; specific agricultural applications); governance frameworks at varying stages of development; mixed agentic-deployment patterns; varied labour-effect patterns depending on sectoral characteristics. The regional ecosystem is genuinely substantial; graduates of the unit operating in regional contexts will encounter cases of comparable depth and complexity.

18.10 Australian regional cases — Annalise.ai, Rio Tinto, others

Three Australian regional cases similarly warrant integrated analysis.

Annalise.ai — Australian medical-AI exemplar. Annalise.ai (introduced in Section 7.8) is a Sydney-based medical-AI firm developing computer-vision systems for radiological imaging. The company’s specific products (chest X-ray and CT-imaging analysis) have received regulatory clearance in multiple jurisdictions; the deployment has scaled to major Australian and international healthcare systems.

The integrated analysis: capability is at high TRL (TRL 8–9 for the specific imaging applications); deployment maturity is at ADML 3 (operational at substantial scale with continuing expansion); governance is mature (TGA, FDA, and other regulatory frameworks support deployment); agentic dimensions are limited (the systems provide decision support to radiologists rather than autonomous diagnosis); labour effects are augmentation-positive (radiologists with AI augmentation handle higher case volumes with maintained or improved accuracy). The case demonstrates that Australian medical-AI firms can compete at international scale; the lessons generalise to other Australian AI-deployment ambitions.

Rio Tinto and the Pilbara remote-operations centres. Rio Tinto’s Mine of the Future programme (introduced in Section 9.11) has progressively automated operations across the Pilbara iron-ore mining region. By 2026 the operations include a substantial autonomous haulage fleet (over 200 trucks deployed across Pilbara mines); autonomous drilling; remote-operations centres in Perth handling operations at multiple sites; and substantial AI-supported planning and scheduling.

The integrated analysis: capability is at high TRL for the specific autonomous operations; deployment maturity is at ADML 3–4 (operational at substantial scale with continuing capability development); governance is established (mining-and-resources regulation with progressive AI extensions); agentic dimensions are central (autonomous trucks, drills, and adjacent equipment make specific operational decisions); labour effects are mixed (specific on-site Pilbara roles have been substituted; remote-operations-centre roles have been created; broader Australian mining-and-resources employment has remained stable through a transition managed over a decade-plus timeframe). The case is the global frontier of mining autonomy; lessons generalise to other resource-extraction contexts globally.

The Australian agtech ecosystem. Beyond specific firms, the Australian agtech ecosystem (Section 11.8) deserves treatment as a regional case. The ecosystem includes substantial CSIRO research support; multiple commercial firms (AgriDigital, AGUR, Agworld, Maia Grazing); university-based research (UQ, Sydney, UNE, Monash); the broader policy framework (DAFF, AGD).

The integrated analysis at ecosystem level: capability is mature in specific applications; deployment is substantial in large-farm contexts but limited among smallholders; governance is mid-developed; agentic dimensions are increasing (autonomous equipment; AI-driven decision support); labour effects are concentrated in specific roles (seasonal labour; on-farm management). The ecosystem case demonstrates that regional AI deployment can develop substantially even in markets that do not match the largest US or European centres; the Australian agtech sector has produced substantial international expansion despite the smaller domestic-market scale.

Cross-case patterns. The Australian cases collectively illustrate substantial deployment maturity in specific sectors (medical AI; mining; agriculture) operating at international competitive scale. The regulatory environment is supportive; the labour-and-economic effects are managed. The 2024–2026 trajectory has produced continued development; the 2026–2030 forward look is positive.

18.11 Cross-case synthesis — patterns that recur

The cumulative case material across Part II and Chapter 18 produces specific patterns that recur with sufficient consistency to warrant systematic synthesis.

Success patterns. Five patterns recur in successful AI deployments.

Pattern 1 — sharp operational task definition. Successful deployments are organised around sharply-defined operational tasks (predict CTR; detect fraud; complete code; generate ads; classify images; recommend products) rather than broad domain framings. Stitch Fix, Stripe Radar, GitHub Copilot, ViTrox, Annalise.ai all exemplify the pattern.

Pattern 2 — operational-deployment maturity matched to capability maturity. Successful deployments avoid both undershooting (deploying less ambitiously than the capability supports, leaving value unrealised) and overshooting (deploying more ambitiously than the capability can support, producing failure). The successful cases exhibit alignment between capability and deployment maturity.

Pattern 3 — data-flywheel operational structure. Successful deployments have operational structures that produce data-flywheel dynamics: usage generates data; data improves the system; better systems produce more usage. Stitch Fix is the canonical case; Stripe Radar, GitHub Copilot, and many others exhibit similar dynamics.

Pattern 4 — augmentation-focused labour design. Successful deployments are typically structured for augmentation rather than substitution. The hybrid AI-human structures, the senior-professional-augmented patterns, and the productivity-rather-than-displacement framings support sustained value capture.

Pattern 5 — staged maturity progression. Successful deployments progress through maturity stages systematically: proof of concept → pilot → limited deployment → production → mature operations. Skipping stages produces failure; systematic progression produces sustained operational success.
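
The staging in Pattern 5 can be made explicit as a simple guard against skipping stages. The sketch below is illustrative only: the stage names follow the progression above, and the advance() helper is an assumption of this sketch rather than an operational gate from the textbook’s playbook chapters.

    from enum import IntEnum

    class Stage(IntEnum):
        PROOF_OF_CONCEPT = 1
        PILOT = 2
        LIMITED_DEPLOYMENT = 3
        PRODUCTION = 4
        MATURE_OPERATIONS = 5

    def advance(current: Stage, target: Stage) -> Stage:
        """Allow progression only to the immediately next stage; skipping stages is disallowed."""
        if target != current + 1:
            raise ValueError(f"Cannot move from {current.name} to {target.name}: stages must not be skipped.")
        return target

    # Systematic progression succeeds:
    stage = Stage.PROOF_OF_CONCEPT
    stage = advance(stage, Stage.PILOT)

    # Skipping stages (the alpha-skipping failure discussed below) raises immediately:
    # advance(Stage.PILOT, Stage.PRODUCTION)  -> ValueError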

Failure patterns. Five patterns recur in cautionary cases.

Pattern 1 — broad framing without operational definition. The Watson Health pattern (Sections 7.3, 17.4) recurs across multiple cases. Broad framings produce broad expectations that the specific capability cannot meet.

Pattern 2 — alpha-skipping or staged-rollout failure. The Klarna pattern (Sections 8.4, 17.5) recurs. Premature scaling produces failure that intermediate staging would have surfaced.

Pattern 3 — single points of failure. The Boeing 737 MAX pattern (Section 9.7) and the Robodebt pattern (Sections 12.1, 17.7) exemplify reliance on specific components that, when they fail, produce systemic failure. Architectural rigour identifies and mitigates SPOFs.

Pattern 4 — defensive post-incident management. The pattern recurs across Watson Health, Klarna, Boeing MAX, Cambridge Analytica, Robodebt, and others. Defensive responses extend failure periods and increase eventual costs.

Pattern 5 — governance ambiguity exploited. Several cases (Cambridge Analytica; Robodebt; specific others) operated in regulatory ambiguity that allowed deployment without adequate accountability. The pattern motivates the governance development that subsequent regulatory frameworks have produced.

The integrated diagnostic value. The patterns are diagnostic tools that mature analytical practice applies systematically. When analysing a new case, the prior patterns provide reference points: does this case match any of the success patterns? Does it exhibit any of the failure patterns? The diagnostic value is realised through systematic application; future cases that emerge through 2026–2030 will be analysable through the same pattern recognition.

The generalisation question. A specific question about the patterns is whether they continue to generalise as the AI landscape evolves. Some patterns are likely to remain stable (broad framing without operational definition will continue to be problematic; alpha-skipping will continue to produce failures); others may evolve (specific success patterns may shift as the technology and economic landscape evolves). The 2026–2030 case material will inform the assessment; mature analytical practice maintains awareness of which patterns are durable and which are evolving.

18.12 The forward-looking case methodology

The integrated framework supports analysis of cases that have not yet emerged. The forward-looking methodology has specific structure.

Anticipating future cases. The 2026–2030 period will produce substantial case material across several dimensions:

Capability advancement cases. Specific advancement events (frontier-model releases; new capability demonstrations; specific competitive shifts) will produce case material analogous to the DeepSeek R1 inflection. The framework supports analysis of these events.

Deployment expansion cases. Specific deployment-expansion events (sectoral deployments reaching maturity; specific applications scaling substantially) will produce case material. The framework’s deployment-maturity dimensions support analysis.

Cautionary cases. The pattern of cautionary cases will continue. Specific deployments will fail in patterns matching the diagnostic framework; the framework supports rapid identification and analysis.

Governance development cases. Specific regulatory developments, court rulings, and adjacent policy events will continue to shape the deployment landscape. The framework’s governance dimensions support analysis.

Labour-and-economic cases. Specific labour-and-economic developments (organised-labour responses; productivity inflections; specific role-displacement events) will continue to emerge. The framework supports analysis.

Methodology for analysing future cases. When a new case emerges, mature analytical practice applies the framework systematically:

  1. Initial framing. Identify what the case is about — capability advancement, deployment, cautionary, governance, labour.
  2. Four-lens application. Apply the four lenses (capability-and-maturity; governance-and-compliance; agentic-deployment; labour-and-economic) systematically.
  3. Pattern matching. Identify whether the case matches existing success or failure patterns.
  4. Integrated diagnosis. Synthesise the lenses and patterns into the case’s specific diagnosis.
  5. Generalisation. Identify what lessons the case adds to the framework — confirming existing patterns or surfacing new ones.

The methodology is what professional analytical practice consists of in AI-deployment work. Graduates of the unit who engage with the field over decades will apply the methodology continuously; the framework’s value is realised in this continuing application.
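
A minimal sketch of the five steps follows, again in Python and offered only to make the workflow explicit; the CaseAnalysis container, the LENSES list, and the analyse_case function are illustrative names introduced here, not an established tool. The sketch encodes one discipline worth preserving: an analysis that skips a lens is rejected as incomplete.

```python
from dataclasses import dataclass, field


@dataclass
class CaseAnalysis:
    """Container for the outputs of one pass of the five-step methodology."""
    case: str
    framing: str                      # step 1: capability, deployment, cautionary, governance, labour
    lens_findings: dict[str, str]     # step 2: one finding per lens
    matched_patterns: list[str]       # step 3: success/failure patterns the case matches
    diagnosis: str                    # step 4: integrated diagnosis
    lessons: list[str] = field(default_factory=list)  # step 5: what the case adds to the framework


# The four lenses of the integrated framework (step 2).
LENSES = [
    "capability-and-maturity",
    "governance-and-compliance",
    "agentic-deployment",
    "labour-and-economic",
]


def analyse_case(case: str, framing: str, lens_findings: dict[str, str],
                 matched_patterns: list[str], diagnosis: str,
                 lessons: list[str]) -> CaseAnalysis:
    """Assemble one pass of the methodology, refusing an analysis that skips a lens."""
    missing = [lens for lens in LENSES if lens not in lens_findings]
    if missing:
        raise ValueError(f"analysis of {case!r} skipped lens(es): {missing}")
    return CaseAnalysis(case, framing, lens_findings, matched_patterns, diagnosis, lessons)


# Hypothetical usage against a future capability-advancement case.
analysis = analyse_case(
    case="Hypothetical 2027 frontier-model release",
    framing="capability advancement",
    lens_findings={lens: "finding to be written" for lens in LENSES},
    matched_patterns=[],              # none matched yet; pattern matching follows the lens work
    diagnosis="to be synthesised from the lens findings",
    lessons=["confirms or extends existing patterns"],
)
print(analysis.case, "->", analysis.framing)
```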

The unresolved questions. Mature analytical practice acknowledges what the framework does not yet address. The unresolved questions identified in Section 17.13 (capability trajectory; deployment-environment maturation; labour-effect realisation; frontier-AI safety; international convergence; trust-threshold dynamics; economic rents and redistribution) will progressively resolve through 2026–2030 case material; mature practice tracks the resolution and updates the framework accordingly.

18.13 The graduate-capability synthesis

Students completing the unit should leave with specific analytical-and-operational capability. The capability synthesis identifies what graduates should be able to do.

Analytical capability. Graduates should be able to:

  • Assess specific AI deployments using the integrated framework’s four lenses with appropriate depth for the analytical purpose.
  • Identify diagnostic patterns (success and failure patterns) in deployments they encounter.
  • Apply the maturity framework (TRL; deployment-maturity factors; ADML levels) systematically.
  • Evaluate governance and compliance requirements for specific deployments across major jurisdictions.
  • Analyse labour-and-economic effects of deployments using the Acemoglu-Restrepo framework.
  • Distinguish substantive from inflated claims about AI capability and deployment.
  • Make rigorous deployment, investment, regulatory, or evaluation decisions using the integrated decision frameworks.

Operational capability. Graduates should be able to:

  • Build AI products using the playbook discipline of Part V.
  • Apply staged-rollout methodology (Chapter 24’s alpha discipline) to manage deployment risk.
  • Design data-flywheel mechanisms (Chapter 25’s beta discipline) for sustained competitive advantage.
  • Pitch AI products to investors, customers, and regulators with rigorous claims.
  • Navigate the regulatory landscape (the EU AI Act, sector-specific frameworks, regional frameworks).
  • Manage the labour-and-economic dimensions of AI deployment decisions.

Integrated capability. The combination of analytical and operational capability is what produces graduate-level AI-in-business work. Pure analysis without operational capability produces understanding without the ability to act; pure operations without analytical capability produces action without judgment about what should be acted on. The integrated capability is what the unit develops; the value is realised in subsequent professional practice across decades.

The continuing analytical practice. Graduates will encounter AI deployment cases continuously across their careers. The framework is not static; the field continues to develop; specific cases will require framework refinement. Mature analytical practice combines:

  • Systematic application of the existing framework to new cases.
  • Critical reflection on whether the framework is adequate to the specific case or requires extension.
  • Continuous learning about new developments, new cases, and new analytical insights.
  • Engagement with the broader field — academic literature; industry analysis; professional networks.

The graduate capability is therefore not a fixed endpoint but the foundation for continuing professional development. The unit’s purpose is to produce that foundation; the realised value is what graduates produce over careers of subsequent application.

18.14 The closing of the textbook

This textbook has covered substantial ground. Part I (Chapters 1–5) developed the foundational framing — what AI is, the eras of its development, the AI factory, the rewired firm, and AI strategy. Part II (Chapters 6–12) developed sectoral case material across finance, healthcare, retail, manufacturing, marketing-media-energy, logistics-agriculture-services, and the broader sectoral landscape with synthesis. Part III (Chapters 13–18) developed analytical frameworks — agentic AI, governance and the EU AI Act, labour and productivity, maturity frameworks, frameworks synthesis, and (this chapter) cases of AI in business synthesis. Part V (Chapters 19–28) developed the operational playbook — ten weeks of build discipline applied to the Team Aroma worked example.

The textbook’s organising commitment has been the integration of analytical understanding with operational discipline. The cautionary cases of Part II motivate the analytical frameworks of Part III; the analytical frameworks shape the operational discipline of Part V; the operational discipline produces the deployment outcomes that the analytical frameworks then evaluate. The integration is what produces graduate-level AI-in-business capability.

The field of AI-in-business is genuinely substantial. The cases this textbook has covered — Watson Health, Klarna, Stitch Fix, Stripe Radar, GitHub Copilot, the Hollywood strikes, Robodebt, Boeing 737 MAX, NVIDIA, Anthropic, OpenAI, ViTrox, Annalise.ai, and many others — represent a fraction of the deployments and developments occurring globally. The 2026–2030 trajectory will produce substantially more case material; the field will continue to evolve.

Graduates of the unit who engage with the field across decades will apply the analytical-and-operational capability the textbook has developed to a continuously evolving landscape. The value of the framework is not in its completeness (it is not complete; the unresolved questions of Section 17.13 acknowledge its limits) but in its diagnostic and operational utility. The deployments graduates encounter will be analysable through the framework; the deployments graduates build will be guided by the operational discipline; the cumulative engagement will produce the field-shaping work that the discipline supports.

The textbook closes here, but the practice continues. Students completing the unit will encounter AI deployment cases as employees, founders, consultants, investors, regulators, journalists, and scholars. Each encounter is an opportunity to apply the framework, refine the diagnosis, and extend the analytical-and-operational capability. The framework’s value is realised over decades of continuing application.

The journey from idea to deployment to operations, with the discipline that the cautionary cases motivate and the rigour that the analytical frameworks support, is what AI-in-business work consists of in 2026 and is likely to consist of for the foreseeable future. The graduate capability the unit develops is the foundation; the continuing professional engagement is what produces field-shaping outcomes.

The closing of the textbook is the opening of the practice.

References for this chapter

Success archetype primary sources

  • Stitch Fix Inc. (2017–2024). Annual reports and IR communications.
  • Stripe Inc. (2024). Stripe Radar product documentation and adjacent communications.
  • GitHub (2022, 2024). Copilot launch and enterprise communications.
  • Microsoft (2024). Copilot and developer-tools communications.

NVIDIA case sources

  • NVIDIA Corporation (2024, 2025). Annual reports and earnings communications.
  • US Department of Commerce (2022, 2023, 2024). AI chip export-control rules.
  • Goldman Sachs Equity Research (2024, 2025). NVIDIA research notes.

Foundation-model lab sources

  • OpenAI (2015–2025). Public communications, research publications, and annual reports.
  • Anthropic (2021–2025). Responsible Scaling Policy (multiple versions); research publications; corporate communications.
  • Bletchley AI Safety Summit (2023). Bletchley Declaration and supporting documents.
  • Seoul AI Safety Summit (2024). Frontier AI Safety Commitments.

DeepSeek case sources

  • DeepSeek (2025). DeepSeek-R1 release announcement and technical report, January 2025.
  • Various market analyses (2025). Equity research and market commentary on R1 release.

Air Canada chatbot case

  • Moffatt v. Air Canada (2024). British Columbia Civil Resolution Tribunal decision, February 2024.

Asian regional cases

  • DBS Bank Limited (2024). Annual report and AI strategy communications.
  • ViTrox Corporation Berhad (2024). Annual report and product communications.
  • Sime Darby Plantation Berhad (2024). Annual report and AI deployment communications.

Australian regional cases

  • Annalise.ai (2024). Product documentation and regulatory clearances.
  • Rio Tinto plc (2024). Annual report and Mine of the Future communications.
  • Australian Bureau of Agricultural and Resource Economics and Sciences (2024). Agricultural Commodity Statistics.

Cross-case synthesis literature

  • Iansiti, M. and Lakhani, K. R. (2020). Competing in the Age of AI. Harvard Business Review Press.
  • Acemoglu, D. and Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy 128(6): 2188–2244.
  • Brynjolfsson, E. and McAfee, A. (2017). Machine, Platform, Crowd. W. W. Norton.
  • Davenport, T. H. and Mittal, N. (2023). All-in on AI. Harvard Business Review Press.

Closing literature

  • The textbook’s complete bibliography across Parts I–V comprises the primary references for graduate-level engagement with AI-in-business work. The 2026–2030 literature will extend the bibliography substantially; mature analytical practice maintains current engagement with the developing literature.