Chapter 12 — Other sectors and cross-cutting synthesis
This chapter completes Part II’s sectoral coverage and synthesises the patterns that have emerged across Chapters 6–11. The first six sections (12.1–12.6) cover sectors that the prior chapters did not address directly: government and public-sector AI; education; real estate; transportation and aviation beyond the Boeing MAX case; insurance beyond healthcare; hospitality and travel. The remaining sections (12.7–12.13) synthesise the cross-sector patterns — the cautionary-case constellation that has recurred across chapters; the deployment-maturity framework that distinguishes where AI is operationally mature from where it is still developing; the data-flywheel patterns and their failures; the regional patterns visible across the Australian and Malaysian sections of preceding chapters; the 2026–2030 forward look; and the bridge to Part III’s analytical frameworks.
The synthesis matters because the sector cases on their own are descriptive; the patterns across them are analytical. A student who has read the seven Part II chapters has substantial case material to draw on; the integration of that material into transferable patterns is what produces graduate-level competence. Part II’s coverage is comprehensive but not exhaustive — every sector has detail that would deserve more attention than this textbook can give. The patterns that recur across sectors are what generalise, and what subsequent AI-deployment work in any sector can build on.
This chapter develops the remaining sectors and the synthesis across thirteen sections. Sections 12.1–12.6 cover the remaining sectors: government and public-sector AI with the Australian Robodebt case as the canonical failure; education with attention to adaptive learning and the contemporary AI-tutoring wave; real estate with the Zillow Offers shutdown as the canonical failure; transportation and aviation with attention to the autonomous-vehicle industry’s 2024 retrenchment; insurance beyond the healthcare context of Chapter 7; hospitality and travel. Section 12.7 synthesises the five cautionary cases that have recurred across Part II. Section 12.8 develops the deployment-maturity framework. Section 12.9 covers the data-flywheel patterns. Section 12.10 develops the durable lessons from the cautionary-case constellation. Section 12.11 synthesises the regional patterns across Australian and Malaysian context. Section 12.12 sketches the 2026–2030 forward look. Section 12.13 bridges to Part III.
12.1 Government and public-sector AI — Robodebt and the lessons of public-AI failure
Government-and-public-sector AI deployment has produced some of the contemporary period’s most significant cautionary cases. The deployment context is structurally distinct from private-sector AI: the citizens affected by public-sector AI cannot opt out (unlike private-sector customers); the legal-and-administrative-law frameworks for public action constrain deployment in ways that do not apply to private firms; and political accountability for public-AI failure is concentrated in ways that accountability for private failures is not. These differences produce specific deployment patterns and specific failure modes.
The Australian Robodebt case. The Australian government’s Online Compliance Intervention programme — colloquially “Robodebt” — is the most thoroughly documented public-sector AI deployment failure on record anywhere. The programme operated from 2015 to 2019; the Royal Commission’s report (delivered July 2023) constitutes the most comprehensive post-mortem of any AI-deployment failure to date.
The programme’s design: the Department of Human Services (later Services Australia) used an automated system to identify discrepancies between welfare recipients’ reported income and their tax-record income, and to issue debt notices for repayment of allegedly overpaid welfare benefits. The automation was based on income-averaging — annual tax-record income divided by 26 fortnights to estimate fortnightly income. The income-averaging approach was unlawful (a fact established by the 2019 Federal Court ruling in Amato v Commonwealth), but the programme had operated for four years before judicial intervention.
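The arithmetic failure is easy to make concrete. A minimal sketch in Python, with entirely hypothetical figures, shows how averaging a lumpy annual income across 26 fortnights manufactures “discrepancies” in fortnights where the recipient correctly reported zero income:

```python
# Minimal sketch of the income-averaging error (all figures hypothetical).

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_tax_income: float) -> float:
    """Robodebt-style estimate: annual income spread evenly over 26 fortnights."""
    return annual_tax_income / FORTNIGHTS_PER_YEAR

# Hypothetical recipient: earned AUD 2,000/fortnight for 13 fortnights of work,
# then correctly reported zero income while on benefits for the other 13.
actual_fortnightly = [2_000.0] * 13 + [0.0] * 13
annual_income = sum(actual_fortnightly)                 # AUD 26,000

estimate = averaged_fortnightly_income(annual_income)   # AUD 1,000 per fortnight

# Averaging attributes AUD 1,000 to every fortnight, so the system "finds"
# unreported income in each benefit fortnight despite accurate reporting.
flagged = [estimate - a for a in actual_fortnightly if a == 0.0 and estimate > a]
print(f"Fortnights falsely flagged: {len(flagged)}; "
      f"implied 'overpayment' signal: AUD {sum(flagged):,.0f}")
# -> Fortnights falsely flagged: 13; implied 'overpayment' signal: AUD 13,000
```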
The programme’s scale: approximately 470,000 debt notices were issued; total debts assessed exceeded AUD 1.8 billion; approximately 380,000 of these debts were ultimately determined to be unlawfully calculated, with refunds totalling approximately AUD 720 million. The human consequences were substantial: documented evidence of welfare recipients receiving demands for tens of thousands of dollars in debt; evidence of harassment by debt collectors; multiple suicides linked to the programme by the Royal Commission’s report; evidence of psychological harm at scale.
The Royal Commission’s report (delivered July 2023, with extensive public testimony through 2022–2023) characterised the programme as “a crude and cruel mechanism, neither fair nor legal” that was “a costly failure of public administration.” The Commissioner, Catherine Holmes, identified specific failures: the automation was implemented without legal authority for the income-averaging approach; multiple senior public servants and political officials were aware of the legal questions but pressed forward; the appeal-and-review mechanisms were inadequate to handle the scale of disputes; the government’s response to early evidence of problems was defensive rather than corrective. The report recommended substantial reforms to public-sector AI deployment, including specific legal-authority requirements before deployment, mandatory human-review pathways for adverse decisions, and transparency requirements for algorithmic decision-making.
The structural lessons from Robodebt. Five lessons recur in subsequent public-sector-AI deployment literature.
Lesson 1 — public-sector AI requires explicit legal authority. The Robodebt income-averaging approach was unlawful at deployment; subsequent legal challenge established this conclusively. The deployment proceeded without confirmed legal authority because the political imperative (recovering welfare overpayments) outweighed the legal-review-and-confirmation imperative. The lesson: public-sector AI deployment must be grounded in explicit legal authority, with legal review preceding deployment rather than following it.
Lesson 2 — automation that affects vulnerable populations carries amplified ethical responsibility. Welfare recipients are by definition economically vulnerable; the population’s capacity to challenge automated decisions is structurally limited (limited resources for legal counsel; limited ability to absorb temporary repayment burdens during dispute). Automation that affects this population must be designed with the specific population’s circumstances in mind. The lesson: AI deployments that affect vulnerable populations require additional procedural safeguards beyond what general-purpose deployment would justify.
Lesson 3 — the appeal-and-review pathway must scale with the volume. Robodebt issued 470,000 debt notices; the appeal-and-review mechanism was not designed to handle review at this scale. The result was that disputes accumulated; many recipients paid debts they did not owe rather than navigating a slow appeal process. The lesson: when AI deployment scales adverse-decision volume, the review-and-appeal infrastructure must scale proportionately.
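The scaling mismatch can be made concrete with back-of-envelope arithmetic. The sketch below assumes a hypothetical dispute rate and reviewer throughput (neither figure is from the Royal Commission’s record) to show how fixed review capacity turns scaled decision volume into a growing backlog:

```python
# Back-of-envelope review-capacity arithmetic (all parameters assumed).
notices_per_year = 470_000 / 4           # ~117,500/yr over four programme years
dispute_rate = 0.20                      # assumption: 20% of notices disputed
reviews_per_officer_year = 500           # assumption: cases one officer closes/yr
officers_available = 10                  # assumption: pre-automation staffing

disputes_per_year = notices_per_year * dispute_rate                 # 23,500
officers_needed = disputes_per_year / reviews_per_officer_year      # 47
backlog_growth = disputes_per_year - officers_available * reviews_per_officer_year

print(f"Officers needed: {officers_needed:.0f}; "
      f"annual backlog growth at current staffing: {backlog_growth:,.0f} cases")
# Unless review staffing scales with decision volume, the backlog compounds,
# and the rational recipient pays a debt they do not owe rather than wait.
```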
Lesson 4 — defensive post-incident management is itself a failure mode. The government’s response to early Robodebt criticism was defensive: the senior public-service-and-political response was to defend the programme, attribute problems to recipient misconduct, and resist substantive review. The defensive posture extended the harm and the eventual political-and-legal cost. The lesson generalises across the cautionary cases of preceding chapters: defensive post-incident management compounds the original failure.
Lesson 5 — political accountability for public-sector AI is concentrated and durable. The Robodebt political consequences have been substantial: the 2022 Australian federal election turned partly on Robodebt issues; multiple senior political figures (former Prime Minister Scott Morrison, former Minister Stuart Robert, former Department head Kathryn Campbell) faced specific accountability; legal proceedings against involved individuals continue. The political-cost dynamic is structurally different from private-sector AI failures, which are absorbed by the firm and rarely produce comparable individual accountability.
The UK A-Level grading 2020 case. A separate but instructive case is the UK government’s 2020 A-Level grading algorithm. The COVID-19 pandemic prevented in-person examinations; the UK Office of Qualifications and Examinations Regulation (Ofqual) developed an algorithm to assign grades based on teacher predictions, school historical performance, and pupils’ prior performance. The algorithm produced systematic downgrading of students from disadvantaged schools — approximately 39% of students received grades below their teachers’ predictions, with the impact concentrated in state schools relative to private schools. Public protest produced a rapid government reversal: within four days of grade publication, the government announced that teacher predictions would replace algorithmic grades. The case is instructive because the reversal was rapid and decisive — a structural difference from Robodebt’s prolonged defence — but the underlying deployment failure (an algorithm with structural bias deployed without adequate testing) recurred regardless.
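The downgrading mechanism can be illustrated with a deliberately simplified sketch. Ofqual’s actual model was more elaborate, but the core idea (rank pupils within a school, then fit them to the school’s historical grade distribution) is enough to show why a strong pupil at a historically weaker school gets capped:

```python
# Deliberately simplified standardisation sketch (not Ofqual's actual model):
# pupils are rank-ordered within a school, then fitted to the school's
# historical grade distribution, overriding teacher predictions.

def assign_grades(ranked_pupils: list[str],
                  historical_shares: dict[str, float]) -> list[str]:
    """Map rank-ordered pupils (best first) onto historical grade shares."""
    cutoffs, cumulative = [], 0.0
    for grade, share in historical_shares.items():
        cumulative += share
        cutoffs.append((grade, cumulative))
    grades = []
    n = len(ranked_pupils)
    for position in range(n):
        percentile = (position + 0.5) / n
        grades.append(next(g for g, c in cutoffs if percentile <= c))
    return grades

# A school whose history allows 10% A grades caps this cohort at one A in ten,
# however many pupils the teachers predicted at A; this is the mechanism that
# bound hardest on large, historically lower-performing state schools.
history = {"A": 0.10, "B": 0.30, "C": 0.40, "D": 0.20}
print(assign_grades([f"pupil_{i}" for i in range(10)], history))
# -> ['A', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D']
```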
US benefits administration AI. The US public benefits system has had multiple AI-deployment problems through the 2010s and 2020s. Specific cases include the Michigan Unemployment Insurance Agency’s MIDAS system (operational 2013–2017), which produced approximately 40,000 false fraud determinations against unemployment recipients; the Arkansas Medicaid algorithmic scoring case (in litigation through 2018–2020); various state-level child-welfare-screening algorithms with documented bias issues. The pattern recurs across jurisdictions: automation deployed with inadequate testing; affected populations unable to challenge the automation effectively; the legal-and-political resolution slow relative to the harm.
The tax-administration AI context. Tax-administration AI deployment is more mature than welfare-administration AI; tax authorities globally have used ML-supported audit-selection, fraud-detection, and compliance-prediction for over a decade. The Australian Tax Office, the UK HMRC, the US IRS, and the Inland Revenue Board of Malaysia all operate substantial AI infrastructure. The deployment is generally less controversial than welfare-AI because the relationship to the taxpayer is structurally different (taxpayer-and-state in a transactional relationship; welfare-recipient-and-state in a more dependent relationship); the political accountability dynamics are also different.
The 2024–2026 generative-AI extension to government. The contemporary generative-AI wave has produced substantial government-AI experimentation. The US federal government’s October 2023 AI executive order (and subsequent April 2024 OMB guidance) established the framework for federal AI deployment with explicit risk-assessment requirements. The EU AI Act’s public-sector provisions (effective 2025–2027) similarly establish public-sector-specific requirements. Australian government AI guidance (the September 2024 Policy for the responsible use of AI in government) followed similar structural patterns. The deployment in 2026 is characterised by cautious experimentation in specific use cases (citizen-facing chatbots; document-processing automation; internal productivity tools) with explicit attention to the cautionary lessons from Robodebt and parallel cases.
12.2 Education and AI
Education has been a substantial AI-deployment domain through 2010–2026, with patterns that connect directly to the Team Aroma worked example of Part V (the SPM tutoring case). The deployment has been uneven across education levels and contexts; the contemporary generative-AI wave has produced both substantial new capability and substantial concern.
Adaptive learning — the historical context. The earliest serious adaptive-learning systems date to the 1970s (PLATO at the University of Illinois) and the 1980s (intelligent tutoring systems from Carnegie Mellon, Stanford, and elsewhere). The 2010s wave produced commercial-scale adaptive-learning platforms: Knewton (founded 2008; substantial venture funding through 2015; eventually wound down, with its assets sold to Wiley in 2019); ALEKS (originally developed at UC Irvine; acquired by McGraw-Hill in 2013); DreamBox Learning (founded 2006; acquired by Discovery Education in 2023). The 2010s commercial wave produced substantial deployment in K-12 mathematics specifically, with documented learning-outcome improvements at meaningful scales. The pattern was uneven — the technology-and-pedagogy integration was difficult; the cost of deployment was substantial; the evidence base for effectiveness was variable.
The 2022–2026 AI-tutoring wave. Foundation-model capability has substantially extended the AI-tutoring deployment landscape. Specific commercial products include Khan Academy’s Khanmigo (launched 2023, GPT-4-based, integrated with Khan Academy’s broader curriculum infrastructure); Duolingo’s AI features (introduced through 2023–2024); OpenAI’s ChatGPT Edu and Anthropic’s Claude for Education enterprise offerings (launched in 2024 and 2025 respectively, with substantial deployment to universities and school districts); and various startup-stage AI-tutoring products. The deployment pattern is broader and faster than the 2010s adaptive-learning wave; foundation-model capability supports tutoring across substantially more subjects and contexts than the earlier era’s narrower-topic systems.
The Khan Academy Khanmigo case. Khan Academy’s deployment of Khanmigo through 2023–2025 is the most-publicly-documented contemporary AI-tutoring case. Khanmigo combines GPT-4 with Khan Academy’s curriculum infrastructure to provide on-demand tutoring conversations. Sal Khan’s published commentary (and the Brave New Words book published in 2024) emphasises the platform’s design for student-centred learning rather than answer-providing; the tutoring framework prompts students through problem-solving rather than producing answers directly. Deployment scale by 2024 reached substantial school-district partnerships (multiple US states; expansion to additional countries); the platform’s evaluation-evidence base has been growing.
The cheating-and-plagiarism problem. A specific concern that the foundation-model wave has produced for education is the cheating problem. Foundation models can produce essays, problem solutions, code, and other student work at quality levels that traditional plagiarism-detection systems cannot reliably identify. The 2022–2024 period produced substantial response from educational institutions: Turnitin’s AI-detection feature (launched April 2023, with substantial subsequent revisions following accuracy concerns); changes in assessment practice (more in-class assessment; more oral examinations; revised assessment design that emphasises process over product); explicit AI-use policies across most universities. The deployment evidence for AI-detection tools has been mixed; false-positive rates have been documented at problematic levels; the cat-and-mouse dynamic with foundation-model capability suggests that detection-only approaches will not be durable.
The structural questions in education AI. Three questions define the education-AI deployment landscape over 2026–2030. First, the pedagogical-effectiveness question: does AI tutoring actually improve learning outcomes, controlling for the confounding effects of student selection (students who use AI tutoring may differ systematically from those who do not)? The evidence is accumulating; current research suggests modest-positive effects with substantial heterogeneity by subject and context. Second, the access-and-equity question: AI-tutoring access correlates with socioeconomic factors that already drive educational disparities; the deployment may compound existing inequalities rather than address them. Third, the role-of-teacher question: the deployment patterns substantially affect teacher work; the question of how teaching evolves with AI capability is unresolved across school systems.
The Malaysian SPM tutoring context. The Malaysian-specific tutoring market — which Team Aroma’s worked example in Part V addresses — has specific characteristics. Malaysian secondary education culminates in the Sijil Pelajaran Malaysia (SPM) examination, taken by approximately 400,000 students annually in Form 5 (typically age 17). The high-stakes character of SPM (it is the primary credential for tertiary admissions and many employment paths) has produced a substantial private-tutoring industry, estimated at RM 4–6 billion annually. The industry’s structure includes both individual tutors and tutoring centres of varying scale; the Penang and Klang Valley urban concentrations are the largest markets. AI-tutoring deployment in this market is at early stages; foundation-model capability for Malay-language and SPM-rubric-aligned tutoring has improved substantially through 2023–2026, but the established tutoring-centre infrastructure has not yet broadly adopted AI augmentation. The opportunity space is what motivates the Team Aroma worked example.
12.3 Real estate — Zillow Offers and the canonical failure
Real estate has been a substantial AI-deployment domain through the 2010s and 2020s, with the most-detailed public-record failure case being Zillow Offers’ November 2021 shutdown.
The iBuying thesis. “iBuying” — instant buying of residential real estate, with the iBuyer purchasing homes directly from sellers, holding inventory briefly, and reselling — was a substantial 2014–2021 venture-funded thesis. The premise: ML-driven valuation could enable accurate pricing at scale; the iBuyer could provide convenience to sellers (instant offers; no listing process; certainty of sale); the business would scale through volume. Major iBuyers included Opendoor (founded 2014, IPO 2020), Zillow Offers (Zillow’s iBuying division, operational 2018–2021), RedfinNow (Redfin’s iBuying arm, operational 2018–2022), and Offerpad (founded 2015, public 2021).
The Zillow Offers programme. Zillow Group, the dominant US real-estate-listing platform, launched Zillow Offers in 2018. The strategic logic combined Zillow’s data advantage (the company had been operating Zestimate property valuations for over a decade) with the iBuying business model. Zillow Offers operated through 2018–2021, scaling to approximately 25 metropolitan markets and substantial inventory by 2021.
The November 2021 shutdown. On 2 November 2021, Zillow announced the wind-down of Zillow Offers. The announcement disclosed: USD 569 million in inventory write-downs (homes valued at substantially less than Zillow had paid for them); approximately 25% workforce reduction (approximately 2,000 employees laid off); inventory of approximately 9,800 homes that Zillow needed to sell, mostly at a loss. The cumulative cost to Zillow of the iBuying experiment exceeded USD 1 billion through write-downs and operating losses.
The structural lessons from Zillow Offers. The post-mortem analyses (Zillow’s own 10-Q filings; industry analysis from Mike DelPrete; subsequent academic case studies) converge on five lessons.
Lesson 1 — pricing-model accuracy is a structural constraint at iBuying scale. The Zestimate valuation accuracy was sufficient for general consumer use (where users compared a Zestimate to other signals like comparable sales) but insufficient for buying decisions. Specifically, the model’s accuracy was strongest where there was substantial recent transaction data; in markets with lower transaction velocity, the accuracy degraded. iBuying at scale required accurate pricing at scale; the pricing model could not maintain accuracy across the breadth of the deployment.
Lesson 2 — the housing-market-cycle interaction was destabilising. Zillow Offers operated through the 2020–2021 housing-market acceleration; the company aggressively grew inventory based on extrapolation of the rising-price trajectory. The summer-2021 deceleration (and subsequent stabilisation) caught Zillow with overpriced inventory. iBuying at scale requires accurate forecasting of market trajectory, not just current pricing; the forecasting was harder than the model assumed.
Lesson 3 — the operational-complexity scale was underestimated. Each home in inventory required maintenance, marketing, repairs (often unknown until acquisition), tenant management (in some cases), and eventual sale. The operational labour-and-cost requirements scaled with inventory; Zillow’s operational-staffing assumptions had not adequately captured the heterogeneity of the housing stock the company would acquire.
Lesson 4 — the ML-vs-business decision authority was misaligned. Reporting from the post-mortem (including Bloomberg’s detailed coverage in November 2021) suggested that Zillow’s pricing decisions had been substantially driven by the model’s outputs without sufficient business-judgment override. When the model overestimated values in mid-2021, the corrective adjustment by human judgment was slower than market conditions required. The lesson generalises: AI-driven decisions in high-stakes operational contexts require explicit human-judgment integration; full automation of high-stakes pricing decisions has structural risks.
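One concrete form the human-judgment integration can take is a divergence guardrail: auto-approve model prices only when they agree with an independent market signal, and route the rest to an analyst. The sketch below is a hypothetical control structure, not a description of Zillow’s actual pipeline:

```python
# Hypothetical divergence guardrail on a model-driven offer (illustrative
# thresholds; not Zillow's actual control structure).
from dataclasses import dataclass

@dataclass
class Offer:
    model_price: float
    comps_median: float     # independent signal, e.g. comparable-sales median

def route_offer(offer: Offer, threshold: float = 0.10) -> str:
    """Auto-approve only when model and independent signal agree; route
    large divergences to a human pricing analyst."""
    divergence = abs(offer.model_price - offer.comps_median) / offer.comps_median
    return "auto-approve" if divergence <= threshold else "human-review"

print(route_offer(Offer(model_price=520_000, comps_median=455_000)))
# -> human-review: a ~14% divergence from comparables exceeds the 10% band,
#    exactly the override path the post-mortems suggest was too slow in 2021.
```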
Lesson 5 — competitor restraint can be informative. Opendoor (the largest iBuyer) operated through the same period with smaller losses and survived; Redfin wound down RedfinNow in late 2022; Offerpad survived but at smaller scale. The variable outcomes among competitors with similar technology suggest that operational and judgment factors, not technology factors, were the binding constraints. The competitor analysis is informative: when one competitor fails dramatically while comparable competitors do not, the failure is not primarily about the technology.
The broader real-estate-AI landscape. Beyond iBuying, real-estate AI deployment includes property-valuation models (used by lenders, appraisers, real-estate platforms), commercial-real-estate analytics (CompStak, CoStar, the Real Estate Board of New York’s data products), property-management AI (for residential and commercial portfolios), and increasingly customer-service AI for the broader real-estate customer-experience. The deployment has been substantial but generally less controversial than the iBuying experiment.
Australian and Malaysian real-estate AI. The Australian real-estate AI ecosystem includes Domain.com.au, REA Group’s realestate.com.au, and PropTrack’s analytics product line. The 2020s deployment has been substantial in property valuation and listing-matching applications. The Malaysian real-estate context — with iProperty and PropertyGuru as major listing platforms — has had less mature AI deployment, but substantial growth through 2023–2026 with the foundation-model wave producing customer-service-and-listing-generation extensions.
12.4 Transportation and aviation
Transportation and aviation have been substantial AI-deployment domains, with the contemporary period producing both significant successes (Waymo’s commercial robotaxi operations) and significant retrenchment (Cruise’s late-2024 exit from autonomous-taxi operations).
Beyond Boeing 737 MAX. Chapter 9 covered the Boeing 737 MAX MCAS case as the canonical safety-critical-automation failure. The broader aviation-AI landscape extends beyond MCAS into multiple operational domains: airline operations (route planning, fuel optimisation, maintenance scheduling), air traffic management (specific deployments in NextGen and SESAR systems), airline pricing (yield management has used ML approaches for decades), and increasingly customer-service automation. The major airlines (United, Delta, American, Lufthansa, Cathay Pacific, Singapore Airlines, Qantas, Malaysia Airlines, AirAsia) have all deployed AI extensively; the deployment is less publicly visible than safety-critical aircraft systems but operationally substantial.
The autonomous-vehicle industry’s 2024 retrenchment. The autonomous-vehicle (AV) industry experienced its most significant retrenchment in 2024. The industry had attracted approximately USD 100 billion in cumulative venture and corporate investment over 2010–2024 with the expectation that mature commercial deployment would arrive by the mid-2020s. The actual trajectory has been substantially slower; specific 2024 events include:
- Cruise’s exit from autonomous-taxi operations. General Motors’ Cruise subsidiary suspended California operations in October 2023 following a serious incident in San Francisco (a Cruise vehicle struck and then dragged a pedestrian who had first been hit by a human-driven hit-and-run car); Cruise paused all driverless operations nationwide. The 2024 trajectory included substantial layoffs (approximately 25% of the workforce in December 2023), the appointment of new leadership, and ultimately the December 2024 announcement that GM was discontinuing Cruise’s robotaxi business altogether, refocusing on its Super Cruise driver-assistance technology for personal vehicles. The cumulative cost to GM of the Cruise programme exceeded USD 8 billion.
- Apple’s car project shutdown. Apple confirmed in February 2024 that it was discontinuing its decade-old electric-and-autonomous vehicle project (known internally as Project Titan), with substantial team reduction. Apple’s cumulative spend on the project was estimated at over USD 10 billion.
- Argo AI’s earlier exit (October 2022). Ford and Volkswagen’s autonomous-vehicle joint venture had shut down in late 2022 with substantial layoffs.
- Various smaller-scale exits. Embark Trucks (autonomous trucking), TuSimple (autonomous trucking, with substantial regulatory and corporate-governance issues), and several other AV startups exited or substantially reduced operations through 2023–2024.
Waymo as the contrasting success. Waymo (Alphabet’s autonomous-vehicle subsidiary, which emerged from Google’s Self-Driving Car Project) has continued operations and expanded commercial deployment. By late 2024, Waymo operated commercial autonomous-taxi services in San Francisco, Phoenix, Los Angeles, and (planned) other cities; the company reported approximately 200,000 autonomous rides per week in early 2025. The Waymo trajectory has been notably patient — commercial robotaxi operations launched only in 2018 (Phoenix) and expanded slowly, with a substantial focus on building the safety case market by market. The contrast with Cruise’s more aggressive expansion is informative.
Structural lessons from the AV industry’s 2010–2024 trajectory. Several lessons emerge.
Lesson 1 — the “long tail of edge cases” is structurally hard. AV systems handle the bulk of driving conditions adequately; the long tail of unusual situations (construction zones; emergency vehicle interactions; unusual weather; complex multi-agent traffic situations) accumulates in ways that are hard to fully address. The 2010s expectation that the long tail would be addressed through scale (more data; better models) has only partially materialised. Some long-tail issues persist despite substantial investment.
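The long-tail arithmetic is worth making explicit. Even very high situation-level coverage leaves a substantial absolute count of unhandled events at fleet scale; the sketch below uses assumed, illustrative figures for situations per ride and coverage rate:

```python
# Long-tail arithmetic at fleet scale (all parameters illustrative).
rides_per_week = 200_000        # roughly Waymo's reported early-2025 volume
situations_per_ride = 50        # assumption: distinct decision situations/ride
coverage = 0.9999               # assumption: fraction handled correctly

unhandled_per_week = rides_per_week * situations_per_ride * (1 - coverage)
print(f"Unhandled situations per week: {unhandled_per_week:,.0f}")
# -> 1,000: even 99.99% coverage leaves a steady stream of edge cases, each
#    a potential incident, which is why the last decimal points are so costly.
```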
Lesson 2 — the operational and regulatory infrastructure scales unevenly with technology. Even where AV technology works well, the operational infrastructure (vehicle maintenance; rider support; incident response) and regulatory infrastructure (insurance; liability; permits) does not automatically scale with deployment. Cruise’s October 2023 incident exposed the operational-infrastructure gaps; the subsequent regulatory response (California suspending Cruise’s permits; broader state-level review of AV permits) substantially constrained the industry’s deployment trajectory.
Lesson 3 — the unit economics are hard at sub-mature scale. AV operations require substantial ongoing investment (compute; mapping; remote support; vehicle ownership and maintenance) that is hard to recover at sub-mature ride-volume scales. The 2020s pattern has shown that AV operations require substantially larger scale than initial business models projected to achieve unit economics that compete with conventional ride-hailing or vehicle ownership.
Lesson 4 — the consumer-and-public trust trajectory matters. Public attitudes toward AV technology have been substantially affected by high-profile incidents; the Cruise October 2023 incident produced measurable trust impact in public opinion, with implications for permit-and-regulatory environments well beyond the company involved. The trust-trajectory question is similar to other safety-critical AI deployment trust questions.
Ride-hailing AI deployment. Beyond AVs, ride-hailing (Uber, Lyft, Grab in Southeast Asia, Gojek in Indonesia, Ola in India, DiDi in China) has been a substantial AI-deployment domain. The applications include matching (matching riders to drivers); pricing (the surge-pricing dynamic that has been controversial); routing (Uber’s evolution from Google Maps to in-house routing); fraud detection; safety (ML-based monitoring of routes for anomalies). The deployment depth across major ride-hailing operators is comparable to the deployment depth at major financial-services firms. The 2024–2026 generative-AI extensions have produced specific customer-service and driver-support tools.
12.5 Insurance — beyond healthcare
Insurance — across property-and-casualty (P&C), life, auto, and the broader specialty lines — has been a substantial AI-deployment domain. The deployment has been less publicly visible than some sectors but operationally substantial; the industry’s structural characteristics (large data assets; clear pricing problems; substantial operational scale) make it amenable to AI deployment.
P&C insurance AI applications. Property-and-casualty insurance has used ML for underwriting, claims processing, fraud detection, and pricing for over a decade. Specific applications include automated claims-handling for routine claims (where the AI assesses claim validity and amount based on photos, structured information, and prior claim history); fraud-detection systems that flag anomalous claims for human review; underwriting models that predict claim frequency and severity; pricing models that adjust premiums based on predicted risk. The major US insurers (State Farm, Geico, Progressive, Allstate, Liberty Mutual, USAA) have all deployed substantial ML capabilities; the European insurers (AXA, Allianz, Generali, Zurich) and Asian insurers (Ping An, China Life, AIA, Tokio Marine) have parallel deployments.
Life insurance underwriting. Life-insurance underwriting has been a particularly active AI-deployment frontier. Traditional life-insurance underwriting required medical examinations, blood-and-urine tests, and substantial documentation review; ML-based underwriting can provide approximate assessments from less-invasive data (medical-record review; questionnaire responses; lifestyle data). The deployment has produced both speed advantages (instant or same-day decisions versus weeks of traditional underwriting) and concerns about accuracy and fairness. Specific firms such as Ladder, Haven Life, and Bestow have positioned around AI-enabled life insurance; the major incumbents have parallel programmes.
Climate-and-catastrophe modelling. A specific frontier of insurance-AI is climate-and-catastrophe modelling. The increasing frequency-and-severity of climate-related claims (hurricanes, wildfires, floods, heat events) requires substantially more sophisticated risk modelling than 1990s-era catastrophe models provided. The major catastrophe-modelling firms (RMS, AIR Worldwide, Karen Clark & Company) have integrated ML approaches; specific reinsurers (Munich Re, Swiss Re, Hannover Re) operate substantial in-house modelling. The 2023–2024 California-wildfire and Florida-hurricane insurance crises (where insurers withdrew from markets due to unprofitable underwriting at available premium levels) reflect the structural difficulty: even with sophisticated modelling, the underlying climate risk has been growing faster than the modelling-and-pricing infrastructure can adjust.
Fairness questions in insurance. A specific concern in insurance AI is fairness. Insurance pricing is structurally based on risk-discrimination (the premise of insurance is that pricing reflects risk; lower-risk insureds pay less than higher-risk insureds), but the discrimination must align with legal frameworks that prohibit specific protected-characteristic-based discrimination. ML-based pricing can produce disparate impact even without using protected characteristics directly; the regulatory response has been substantial. The Colorado Division of Insurance’s 2023 algorithmic accountability rules (which require insurers to test for and address disparate impact in ML-driven decisions) and parallel state-level regulations represent the contemporary regulatory framework. The 2024 NAIC (National Association of Insurance Commissioners) AI principles extend the framework. The deployment in 2026 is characterised by substantial compliance attention to fairness alongside the commercial AI deployment.
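A minimal version of the disparate-impact testing these frameworks require can be sketched as an adverse-impact ratio across groups. The data and the 0.80 threshold below are illustrative; the threshold borrows the US employment-law “four-fifths” heuristic, and actual insurance-regulatory tests are more involved:

```python
# Minimal adverse-impact check (hypothetical data; real regulatory tests
# are more involved than a single ratio).

def adverse_impact_ratio(approval_rates: dict[str, float]) -> float:
    """Lowest group approval rate divided by the highest; values well below
    1.0 indicate disparate impact worth investigating."""
    rates = approval_rates.values()
    return min(rates) / max(rates)

# Approval rates for a hypothetical ML-driven underwriting decision, by group.
# No protected attribute is used by the model; the disparity arises anyway
# through correlated proxy features.
rates_by_group = {"group_a": 0.82, "group_b": 0.61}
ratio = adverse_impact_ratio(rates_by_group)
print(f"Adverse-impact ratio: {ratio:.2f}")            # 0.74
if ratio < 0.80:                                        # four-fifths heuristic
    print("Below 0.80 - flag for fairness review")
```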
The 2024–2026 generative-AI extension. Insurance customer service, claims documentation, policy explanations, and broker support have all been extended with generative-AI capabilities through 2024–2026. Lemonade (founded 2015; operates with substantial AI emphasis) has been the most-public AI-native insurer; the company’s 2024 communications have emphasised generative-AI integration across operations. Major incumbents have launched parallel capabilities; the deployment depth varies but is consistently increasing.
12.6 Hospitality and travel
Travel and hospitality have been substantial AI-deployment domains for over a decade, with the 2024–2026 generative-AI wave producing particular extensions to travel-planning use cases.
The online travel agencies (OTAs). Booking.com (Booking Holdings) and Expedia Group dominate online travel reservations globally. Both companies operate substantial ML infrastructure for search ranking, pricing, demand forecasting, and customer service. The deployment depth is comparable to major e-commerce platforms (Chapter 8); the underlying technical architecture is similar. The 2024 Booking.com generative-AI features (the “AI Trip Planner” launched June 2023 in beta, expanded through 2024) and Expedia’s parallel “trip planning” features represent the foundation-model extension.
Airbnb’s AI deployment. Airbnb (founded 2008, IPO December 2020) has been an interesting AI-deployment case because of the marketplace structure (hosts as the supply side; guests as the demand side; the company mediating between them). Specific applications include search-and-ranking (matching guest searches to host listings), pricing recommendations for hosts (the Airbnb pricing-suggestion product), fraud-and-trust-and-safety (ML-based detection of fraudulent listings, guests, and bookings), customer-service automation, and increasingly generative AI for listing optimisation. The deployment scale is substantial; Airbnb reported approximately 7.7 million active listings and over 400 million nights and experiences booked annually in 2024. Like other marketplace platforms, Airbnb has accumulated substantial transaction-and-behaviour data that supports a meaningful data flywheel.
Hotel operations AI. Major hotel chains (Marriott, Hilton, Hyatt, IHG, Accor, Wyndham) have been long-running AI deployers. The applications include yield management (revenue management with sophisticated demand forecasting); customer-relationship management (loyalty programme personalisation); operations (housekeeping scheduling; maintenance optimisation; energy management); and increasingly generative-AI for guest-facing customer service. The deployment depth is mid-scale relative to other operational-AI domains; the hospitality industry’s lower-margin economics constrain AI investment relative to higher-margin sectors like financial services.
The 2024–2026 generative-AI extension to travel planning. A specific frontier in hospitality and travel is generative-AI-supported travel planning. Multiple foundation-model-based travel-planning startups have emerged (Mindtrip, Roam Around, several others). Major OTAs have integrated AI travel planning (the Booking and Expedia features mentioned above). Google’s Gemini-based travel planning extends Google’s existing travel-search capabilities. The deployment is at early scale; consumer adoption of AI-supported travel planning has been growing but represents a small fraction of total travel-research activity. The structural question is whether AI travel planning substantially changes consumer behaviour or remains a marginal augmentation; the early evidence is mixed.
Asian hospitality and travel. The Asian hospitality and travel context includes major regional players: Trip.com Group (Chinese; one of the world’s largest OTAs), Traveloka (Indonesian; substantial Southeast Asian operations), Klook (Hong Kong-based; activity-and-experience focus), Agoda (Booking Holdings subsidiary, originally Thailand-based), and the regional operations of Booking and Expedia. AI deployment at these regional operators has been substantial; Trip.com particularly has been an aggressive AI adopter, with the 2023–2024 generative-AI integration producing Trip.com’s TripGenie product. Singapore and Malaysia tourism boards have launched AI-supported tourism-promotion programmes; the regional pattern is comparable to other regional patterns described in preceding chapters.
12.7 The five cautionary cases — pattern synthesis
Part II has developed five cautionary cases at substantial depth: Watson Health (Chapter 7); Klarna (Chapter 8); Boeing 737 MAX MCAS (Chapter 9); Cambridge Analytica (Chapter 10); TradeLens (Chapter 11). The Robodebt case introduced in this chapter joins these as a sixth canonical case, with public-sector specifics. The cases share specific failure modes; the patterns across them are what generalise.
Pattern 1 — broad framing without operational definition. Watson Health, TradeLens, and Cambridge Analytica each suffered from broad scope without operational-task definition. Watson Health was “AI for medicine” rather than a specific clinical task; TradeLens was “trade digitalisation” rather than a specific document workflow; Cambridge Analytica’s Facebook integration was broad data access rather than a specific application. The broad framing prevented the operational evaluation that narrower scope would have enabled.
Pattern 2 — brand or political momentum substituting for evaluation. Watson Health’s Jeopardy! moment, Klarna’s February 2024 announcement, and Robodebt’s political imperative each produced trajectories where the deployment commitment outran the underlying evidence. The momentum-substituting-for-evaluation pattern is structurally distinct from inadequate evaluation; it specifically describes the situation where evaluation evidence exists but is dismissed or overridden by other priorities.
Pattern 3 — alpha-skipping or staged-rollout failure. Klarna’s full-scale customer-service deployment without staged rollout; Boeing’s MCAS deployment without pilot training; Robodebt’s nationwide rollout without proportional review-and-appeal infrastructure scaling — each case skipped the staged validation that would have surfaced problems before they accumulated. The pattern matches the explicit alpha-discipline framework that Chapter 24 of the playbook develops.
Pattern 4 — sensor or data single-points-of-failure. The Boeing MAX’s reliance on a single angle-of-attack (AoA) sensor; Robodebt’s reliance on income-averaging without verification; Cambridge Analytica’s reliance on the Graph API without considering downstream uses — each case had a structural single-point-of-failure that produced systemic risk. The lesson generalises to AI-deployment risk management: identifying and mitigating structural single-points-of-failure is necessary even when they are not obvious.
Pattern 5 — defensive post-incident management. Watson Health, Klarna, Boeing MAX, Cambridge Analytica, TradeLens, and Robodebt each had defensive responses to early evidence of problems, with the defensive posture extending the failure period and increasing the eventual cost. The pattern is sufficiently consistent across the cases that it constitutes one of the most durable lessons: post-incident management is itself a failure mode; organisations that respond to AI-deployment problems defensively pay multiplier costs relative to organisations that respond constructively.
The integration with the playbook. Part V’s playbook chapters reference these patterns repeatedly. Chapter 19 (idea selection) emphasises avoiding the broad-framing trap. Chapter 21 (MVP scoping) emphasises operational definition. Chapter 23 (evaluation) emphasises closed-loop measurement against the right metrics. Chapter 24 (alpha launch) emphasises staged rollout. Chapter 25 (beta) emphasises responding constructively to evidence. Chapter 28 (commercialisation) emphasises post-incident management discipline. The patterns from Part II’s cautionary cases are what the playbook’s procedural discipline is designed to address.
12.8 The deployment-maturity framework across sectors
The seven Part II chapters describe sectors at different stages of AI-deployment maturity. The maturity differences are not random; they reflect specific structural factors that the framework can identify.
Mature deployment sectors. Several deployment domains have reached maturity in the sense that AI is operational, broadly-accepted, and producing documented value:
- Programmatic advertising (Section 10.1) — operational at scale across the global digital-advertising industry; the methodology is well-established.
- Consumer recommendation systems (Section 10.5; also Chapter 8) — mature at major platforms; new entrants face substantial flywheel disadvantages.
- Imaging-AI in radiology (Section 7.2) — over 800 FDA-cleared algorithms; deployment integrated into hospital workflows.
- Predictive maintenance in manufacturing (Section 9.2) — deployment depth at major manufacturers; documented productivity gains.
- Fraud detection in finance and retail (Chapters 6 and 8) — mature at major platforms; the methodology is well-understood.
- Route optimisation in logistics (Section 11.3) — UPS ORION, FedEx, DHL, Amazon all operate mature systems.
- Algorithmic trading (Chapter 6) — over 60% of US equities trading volume is algorithm-mediated.
The mature-deployment sectors share specific characteristics: well-defined operational tasks; clear feedback signals; substantial data assets at major operators; and integration with established business processes. The combination produces the deployment maturity.
Mid-stage deployment sectors. Several deployment domains are in active deployment but not yet mature:
- Generative AI in ad creative (Section 10.2) — broad deployment, but the long-run impact on advertising effectiveness is unsettled.
- Drug discovery with AI (Section 7.4) — substantial pipeline of candidates, but no AI-discovered drug yet approved.
- Manufacturing computer-vision QA (Section 9.3) — deployed at major manufacturers, with continued scaling and capability improvement.
- Agricultural precision applications (Sections 11.5–11.6) — deployment at large farms, with ongoing extension to broader farm contexts.
- Legal and accounting AI (Sections 11.10–11.11) — deployment at major firms, with continued adaptation of professional-conduct frameworks.
- Conversational customer service (covered across multiple chapters) — broad deployment with caution following the Klarna lessons.
The mid-stage sectors have functioning AI deployment but face continuing technical-and-operational challenges that the mature sectors have largely resolved.
Early-stage and contested deployment sectors. Several deployment domains are still at early stages or actively contested:
- Autonomous vehicles (Section 12.4) — narrow commercial deployment after substantial industry retrenchment.
- Generative video at production scale (Section 10.6) — operational at narrow scales, aspirational at broader scales.
- Government and public-sector AI (Section 12.1) — deployment with substantial caution following Robodebt and parallel cases.
- Humanoid robots in manufacturing (Section 9.4) — narrow demonstrations with uncertain trajectory.
- Frontier clinical agents (Section 7.6) — early deployment with continuing trust-threshold challenges.
- AI-supported governance and policy decisions — early experimentation with substantial caution.
The early-stage sectors are characterised by either technical-capability limits, deployment-environment friction, or both; the trajectory toward maturity is unclear.
The factors that drive maturity speed. Five factors recur in the maturity-speed analysis.
Factor 1 — task definition. Sectors with well-defined operational tasks (predict the click-through rate of an ad; optimise the route of a delivery vehicle; identify a defect on a circuit board) reach maturity faster than sectors with poorly-defined tasks (recommend the best cancer treatment; advise the best legal strategy; design a new drug).
Factor 2 — feedback signals. Sectors with rapid, clear feedback signals (advertising clicks; equipment failures; customer churn) reach maturity faster than sectors with slow, ambiguous feedback signals (drug efficacy; educational outcomes; long-run macroeconomic effects).
Factor 3 — data availability. Sectors with substantial existing data assets (large-platform e-commerce; financial transactions; manufacturing sensor networks) reach maturity faster than sectors where data must be collected from scratch (specific clinical applications; novel scientific domains; nascent product categories).
Factor 4 — regulatory environment. Sectors with established regulatory frameworks that accommodate AI deployment (financial services; manufacturing; advertising) reach maturity faster than sectors where the regulatory framework is new or contested (autonomous vehicles; public-sector decision-making; generative content).
Factor 5 — deployment-environment friction. Sectors with low integration friction (web-based or app-based deployments at consumer scale) reach maturity faster than sectors with high integration friction (hospital deployments; safety-critical infrastructure; cross-firm platform deployments).
The framework predicts: sectors that score high on all five factors mature rapidly; sectors that score low on some factors face structural deployment delays. The framework also helps predict where AI deployment will mature next; sectors currently in mid-stage that are converging on high scores across the five factors will likely mature through 2026–2030.
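A toy scoring sketch makes the framework operational. The factor scores below are illustrative judgments, not calibrated measurements; integration friction is scored inversely so that higher is always better:

```python
# Toy five-factor maturity scoring (illustrative scores, not calibrated).
FACTORS = ["task_definition", "feedback_signal", "data_availability",
           "regulatory_fit", "low_integration_friction"]  # friction inverted

def maturity_score(scores: dict[str, int]) -> float:
    """Mean of five 1-5 factor scores; higher predicts faster maturation."""
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

sectors = {
    "programmatic_advertising": dict(task_definition=5, feedback_signal=5,
                                     data_availability=5, regulatory_fit=4,
                                     low_integration_friction=5),
    "autonomous_vehicles":      dict(task_definition=3, feedback_signal=2,
                                     data_availability=3, regulatory_fit=2,
                                     low_integration_friction=1),
}
for sector, scores in sectors.items():
    print(f"{sector}: {maturity_score(scores):.1f} / 5")
# Advertising scores near the ceiling; AVs score lowest on exactly the
# factors (feedback, regulation, friction) the 2024 retrenchment exposed.
```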
12.9 The data-flywheel pattern across sectors
The data-flywheel concept introduced in Chapter 3 (Iansiti and Lakhani’s factory framework) recurs across Part II’s sector analyses. The pattern is sufficiently consistent to warrant explicit synthesis.
Where data flywheels work. Specific sectors and applications have demonstrated durable data-flywheel dynamics:
- Stitch Fix (Section 8.2) — every Fix produces structured outcome data that improves the recommendation system.
- Amazon’s recommendation infrastructure (Section 8.3) — every interaction produces feedback that updates the system.
- Netflix’s recommendation (Section 10.5) — viewing data updates the recommendations, which shape subsequent viewing.
- Stripe Radar (Section 8.10) — every transaction-and-chargeback updates the fraud detection system.
- GE Aviation engine twins (Section 9.2) — every flight produces data that updates the prediction models.
- Programmatic advertising platforms (Section 10.1) — every impression produces feedback that updates bidding models.
The successful flywheels share specific characteristics: large-volume interactions; clear outcome signals; the deployment is structurally connected to the data collection (the same system both makes decisions and observes outcomes); the organisation operates the loop end-to-end.
Where data flywheels fail to materialise. Specific situations have produced expected-but-not-realised flywheel dynamics:
- Watson Health (Section 7.3) — the broader medical-AI flywheel did not materialise at IBM’s expected scale; the operational integration with hospital workflows was insufficient.
- TradeLens (Section 11.2) — the industry-wide platform flywheel did not turn because adoption was insufficient; the partial network produced negative dynamics.
- Various government-AI deployments — the Robodebt (Section 12.1) and parallel cases lacked the closed-loop structure that flywheels require.
- Many platform-AI ambitions across sectors — the platform-level flywheel often fails to materialise because the platform does not achieve the cross-firm coordination required.
The failed flywheels share specific characteristics: insufficient deployment scale; weak feedback signals; or operational discontinuity between the AI system and the data that should update it.
The structural implication. The data flywheel is a structural advantage when the deployment infrastructure supports it; it is not automatic. AI deployments must be designed with the flywheel structure in mind from the start. Bottom-up operational deployments in single firms typically support flywheels naturally; top-down platform plays often do not. The pattern matches the broader operational-AI-vs-platform-AI distinction that recurs across Part II’s analysis.
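The closed-loop requirement can be stated in a few lines of code: the same system must decide, observe the outcome of its decision, and update itself. The sketch below is a toy online update, not any firm’s production pipeline:

```python
# Toy closed-loop flywheel: the same system decides, observes the outcome,
# and updates itself (an illustrative online update, not any firm's pipeline).
class FlywheelModel:
    def __init__(self, learning_rate: float = 0.1):
        self.estimate = 0.5          # e.g. estimated acceptance probability
        self.learning_rate = learning_rate

    def decide(self) -> bool:
        """The operational decision the deployment exists to make."""
        return self.estimate > 0.5

    def observe(self, outcome: bool) -> None:
        """Close the loop: every observed outcome nudges the estimate."""
        self.estimate += self.learning_rate * (float(outcome) - self.estimate)

model = FlywheelModel()
for outcome in [True, True, False, True]:   # outcomes of deployed decisions
    model.observe(outcome)
print(f"Updated estimate: {model.estimate:.3f}")   # 0.582

# The failed flywheels break this loop structurally: Watson Health's outputs
# fed hospital workflows whose outcomes never flowed back; TradeLens never
# reached the interaction volume at which observe() gets called often enough.
```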
12.10 The cautionary-case constellation — durable lessons
Beyond the specific patterns of Section 12.7, the broader lessons from Part II’s cautionary-case constellation generalise to AI deployment thinking broadly.
Lesson 1 — the structural lessons are durable. The patterns that recur across the cautionary cases (broad framing; brand-momentum-over-evaluation; alpha-skipping; single-points-of-failure; defensive post-incident management) are not artefacts of specific cases; they are structural features of AI deployment that re-appear in new cases as the field matures. A new AI-deployment failure in 2027 or 2028 is likely to exhibit some combination of these patterns.
Lesson 2 — the failure cost is asymmetric. The cautionary cases produce costs that often exceed the original deployment investments by orders of magnitude. Watson Health’s billions in lost investment; Klarna’s brand damage and rehiring costs; Boeing’s USD 20+ billion in direct 737 MAX costs (and substantially more in indirect costs); Cambridge Analytica’s role in a regulatory environment that constrains the entire industry; Robodebt’s approximately AUD 1.8 billion settlement and the political-and-individual-accountability costs that continue. The asymmetry between successful-deployment value and failed-deployment cost has implications for risk management: deployments worth pursuing should be staged and monitored to bound the failure cost, not only optimised for success-case value.
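The risk-management implication can be put in expected-value terms. With illustrative numbers (a 30% failure probability and a failure cost ten times the success value, in the spirit of the asymmetry above), staging that surfaces failure at 5% exposure flips the deployment’s expected value from negative to positive:

```python
# Staging as downside-bounding (all numbers illustrative).
p_fail = 0.30                    # assumed probability the deployment fails
value_if_success = 100.0         # arbitrary units
cost_of_failure = 1_000.0        # cautionary-case asymmetry: cost >> value
alpha_exposure = 0.05            # staged rollout surfaces failure at 5% scale

ev_unstaged = (1 - p_fail) * value_if_success - p_fail * cost_of_failure
ev_staged = (1 - p_fail) * value_if_success - p_fail * alpha_exposure * cost_of_failure

print(f"Unstaged EV: {ev_unstaged:+.0f}; staged EV: {ev_staged:+.0f}")
# -> Unstaged EV: -230; staged EV: +55. Bounding the failure cost, not
#    raising the success value, is what flips the deployment's sign.
```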
Lesson 3 — the public-sector cases have specific dimensions. The Robodebt and UK A-Level cases differ from private-sector cautionary cases in specific ways: the affected populations cannot easily opt out; the political accountability is structurally more concentrated; the legal-and-administrative-law frameworks impose specific requirements that private-sector deployments do not face. Public-sector AI deployment requires explicit attention to these specific dimensions; importing private-sector deployment patterns directly into public-sector contexts is structurally risky.
Lesson 4 — the regulatory consequences extend across years. Each cautionary case has produced regulatory consequences that affect the broader industry well after the specific failure. GDPR’s enforcement intensity post-Cambridge-Analytica; the FAA’s Organization Designation Authorization (ODA) reform and broader certification reform post-Boeing-MAX; Australian government-AI guidance post-Robodebt; broader healthcare-AI scepticism post-Watson-Health. The regulatory consequences are durable; they shape the deployment environment for years afterward.
Lesson 5 — the cautionary cases inform but do not exhaust the deployment space. The patterns from the cautionary cases are useful for risk identification; they do not mean that AI deployment is uniformly risky or should be avoided. The successful deployments described across Part II demonstrate that AI deployment produces substantial value when the structural lessons are observed. The cautionary cases motivate procedural discipline; they do not motivate deployment avoidance.
12.11 Regional context synthesis — Australian and Malaysian patterns
Each of Part II’s chapters has included regional context for Australia and Malaysia. Synthesising across these treatments produces specific patterns that the unit’s Australia- and Malaysia-based students will encounter directly in their post-graduation careers.
The Australian pattern. Australian AI deployment shows specific characteristics:
- Mining-and-resources autonomy (Section 9.11) is the global frontier — Rio Tinto’s Mine of the Future, BHP, and Fortescue, with the Pilbara remote-operations centres as the most developed such infrastructure globally.
- Agricultural technology (Section 11.8) is mature, with substantial domestic agtech firms and CSIRO research support, though the venture-funding ecosystem is constrained relative to comparable US firms.
- Health technology (Section 7.8) — Annalise.ai, the academic medical centres, and the Garvan and Walter and Eliza Hall research base — operates at internationally competitive scale.
- Financial-services AI (Chapter 6) — the major Australian banks (CBA, Westpac, NAB, ANZ) operate mature deployments; the sector is comparable to other advanced-economy banking sectors.
- Government AI (Section 12.1) — the Robodebt experience has produced substantial caution and a reformed framework for subsequent deployments.
- Data-centre concentration — Sydney is a major regional data-centre hub.
- Regulatory environment — the TGA for medical AI; ASIC and APRA for financial services; the Australian Privacy Principles framework; the September 2024 Government AI policy.
The Australian regional implications. Australian-domiciled AI graduates typically have employment options that include: continuing the academic-research path at the major universities (Monash, Melbourne, Sydney, UNSW, ANU); joining AI-deploying firms across the mining, agriculture, healthcare, and financial-services sectors; joining the major consulting firms’ AI practices; joining the smaller-but-growing AI startup ecosystem; or international-employment options that the Australian credential supports. The 2024–2026 trajectory has produced strong demand for AI-capable graduates across these paths.
The Malaysian pattern. Malaysian AI deployment shows specific characteristics:
- Manufacturing AI (Sections 9.3 and 9.12) — ViTrox Corporation operating at the global frontier of automated optical inspection; the broader Penang E&E cluster with Intel, AMD, Infineon, and many others; Inari Amertron and Pentamaster as supporting firms.
- Palm oil and agriculture (Section 11.8) — Sime Darby Plantations as a regional leader with substantial AI investment; the broader plantation sector adopting precision-agriculture approaches.
- Banking and financial services (Chapter 6) — Maybank, CIMB, and Public Bank as mature deployers; Boost and Touch ’n Go eWallet in the digital-payments space; AIA Malaysia in insurance.
- E-commerce and retail (Chapter 8) — Shopee Malaysia and Lazada Malaysia as the dominant platforms; Carsome as a regional unicorn.
- Healthcare (Section 7.11) — IHH Healthcare and Sunway Medical Centre as regional leaders; the Penang and KL academic medical centres.
- Data-centre cluster — the Johor cluster with substantial 2023–2025 investment from Microsoft, AWS, ByteDance, and others.
The Malaysian regional implications. Malaysian-domiciled AI graduates typically have employment options that include: continuing the academic-research path at Malaysian universities (Universiti Malaya, USM, UKM, UTM, the Monash Malaysia campus); joining the AI-deploying firms across manufacturing, banking, healthcare, and e-commerce sectors; joining the regional offices of major consulting firms; joining the smaller-but-growing Malaysian AI startup ecosystem (ViTrox alumni network; Khazanah-supported portfolio firms; the broader Cradle / MyStartup ecosystem); regional opportunities across Singapore, Indonesia, and Vietnam; or international-employment options. The 2024–2026 demand growth has been substantial; specific Malaysian firms (ViTrox, Inari, Sunway, IHH, AIA, Carsome, Maybank, CIMB) have been actively hiring.
The cross-regional dynamics. A specific feature of the unit's KL-and-Melbourne dual-cohort context is the set of cross-regional dynamics. Australian and Malaysian firms collaborate on specific applications: mining-equipment exports from Australia to Malaysia; agricultural-technology exports from Australia to the Malaysian palm-oil sector; financial-services partnerships across the regions; manufacturing partnerships, particularly in semiconductors and electronics. A graduate's ability to navigate both regional contexts has substantial value; the unit's bicampus structure is partly designed to develop this capability.
12.12 The 2026–2030 forward look
Part II’s chapters have each sketched 2026 frontier topics. The cross-sector synthesis of these forward looks produces a more-coherent picture of the medium-term deployment trajectory.
Trajectory 1 — operational-AI deepening. Across most Part II sectors, operational AI deployment will continue to deepen through 2026–2030. The specific applications that have achieved maturity (advertising; fraud detection; recommendation systems; predictive maintenance; route optimisation) will see continued capability improvement and broader adoption. The deployment pattern will increasingly include the foundation-model-driven generative-AI extensions that have been emerging through 2023–2026; the cumulative effect will be substantial productivity improvements at major operators.
Trajectory 2 — the contested frontiers. Several frontier deployments are contested through 2026–2030: humanoid robotics (Section 9.4); autonomous vehicles at scale (Section 12.4); clinical agents at broader scale (Section 7.12); generative video at production scale (Section 10.6); broad agentic commerce (Section 8.8). The trajectory of these frontiers will substantially shape the 2030 industrial structure; specific outcomes will differ across the frontiers, with some maturing rapidly and others stalling or developing more slowly than current projections suggest.
Trajectory 3 — the regulatory-environment maturation. The cumulative regulatory developments — EU AI Act phased through 2025–2027; US executive-action on AI; the various national frameworks; the sector-specific regulatory adaptations described across Part II’s chapters — will produce a substantially more-developed regulatory environment by 2030. The deployment pattern will be increasingly shaped by regulatory considerations; firms that integrate regulatory thinking into deployment design from the start will have advantages over firms that retrofit compliance.
Trajectory 4 — the data-and-rights resolution. The ongoing litigation and licensing developments (Section 10.8 on music; Section 10.9 on news; the broader IP landscape) will produce a substantially more-clarified rights framework by 2027–2028. The clarification will affect foundation-model training-data economics, with implications for which firms can build frontier models, what the licensing market structure looks like, and how AI-deployment economics interact with rights-holder economics.
Trajectory 5 — the energy-AI infrastructure interaction. The energy intensity of AI deployment (Section 10.12) will be a binding constraint through 2026–2030; the resolution will combine generation-capacity expansion (nuclear, renewable, some natural-gas additions), per-unit-of-AI-output efficiency improvements, and possibly substantial AI-deployment moderation in response to energy-cost economics. The geographic distribution of AI infrastructure will increasingly reflect energy availability; specific regions (the Malaysian Johor cluster; specific Nordic and Middle Eastern locations; certain US states with substantial energy infrastructure) will benefit at the expense of regions with weaker energy infrastructure.
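The scale of the constraint is easier to see with a back-of-envelope calculation. The sketch below is illustrative only: the campus size, power-usage effectiveness (PUE), and utilisation figures are assumptions chosen for exposition, not figures for any specific facility named in this chapter.

# Back-of-envelope scale check for the energy constraint sketched above.
# Every figure here is an illustrative assumption, not a measurement of any
# named cluster or operator.

HOURS_PER_YEAR = 8760

def annual_twh(it_load_mw: float, pue: float, utilisation: float) -> float:
    """Annual electricity demand (TWh) for a campus with the given IT load (MW),
    power-usage effectiveness (PUE), and average utilisation."""
    facility_mw = it_load_mw * pue  # cooling and overhead scale the IT load
    return facility_mw * utilisation * HOURS_PER_YEAR / 1e6  # MWh -> TWh

# A hypothetical 500 MW AI campus running near-continuously:
print(f"~{annual_twh(it_load_mw=500, pue=1.3, utilisation=0.85):.1f} TWh/year")  # ~4.8

At roughly five terawatt-hours a year, a single large campus draws on the order of a mid-sized city's annual consumption, which is why siting decisions increasingly follow energy availability.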
Trajectory 6 — labour-and-economic redistribution. The cumulative labour effects across Part II’s sectors will produce substantial redistribution, but the redistribution will be heterogeneous: specific role categories will be substantially affected; specific geographic-and-sector concentrations will face displacement; broader employment will not collapse but will shift in ways that the Acemoglu-Restrepo (2020) framework predicts. The policy frameworks for labour-market transitions will be increasingly tested through 2026–2030; the frameworks that work effectively will be informative for subsequent waves of automation.
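The framework's prediction is easier to hold onto with its central decomposition in view. The sketch below is a simplified rendering of the task-based intuition associated with Acemoglu and Restrepo's work, not a verbatim reproduction of the 2020 paper's formulation:

\[
\Delta \ln L \;\approx\; \underbrace{\Delta \ln Y}_{\text{productivity effect}} \;+\; \underbrace{\Delta \ln \Gamma}_{\text{change in task content}}
\]

Here \(L\) is labour demand, \(Y\) is output, and \(\Gamma\) is the share of tasks performed by labour. Automation shrinks \(\Gamma\) (the displacement effect) while newly created labour-intensive tasks expand it (the reinstatement effect); aggregate employment therefore shifts rather than collapses whenever the productivity and reinstatement terms offset displacement.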
12.13 Connection to Part III analytical chapters
Part II’s seven sector chapters provide the case material that Part III’s analytical frameworks address. The connection works in both directions: the cases motivate the frameworks, and the frameworks help interpret and generalise the cases.
Chapter 13 — Agentic AI will develop the agentic-deployment dynamics that have appeared across Part II’s chapters: customer-service agents (Klarna lessons); workflow automation in professional services (Mata v. Avianca lessons); autonomous-equipment deployment in manufacturing and agriculture (the John Deere autonomous-tractor case; the Foxconn lights-out aspirations; the Australian mining autonomy); clinical agents (the Hippocratic AI deployment); agentic commerce (the OpenAI Operator wave). The synthesis of these cases produces a more-general analysis of agentic-AI deployment patterns.
Chapter 14 — Governance and EU AI Act will develop the regulatory frameworks that Part II’s chapters have referenced repeatedly: the EU AI Act provisions; the FDA, TGA, MDA frameworks for medical AI; the financial-services regulatory regime; the broader AI-governance landscape including the US executive-action sequence and the various national frameworks. Part II’s cautionary cases (particularly the Robodebt case for public-sector AI; Cambridge Analytica for cross-jurisdictional regulatory response; Boeing MAX for safety-critical certification) provide the case material that motivates the governance discussion.
Chapter 15 — Labour and productivity will develop the labour-and-economic effects that Part II’s chapters have surfaced: the augmentation-vs-displacement pattern; the Acemoglu-Restrepo (2020) framework; the specific role-category and geographic-concentration effects; the policy responses to labour-market transitions. The Hollywood-strike case (Section 10.7) provides the most-detailed organised-labour response; the agricultural and mining labour-displacement patterns provide the most-detailed sectoral data; the professional-services augmentation pattern provides the contemporary white-collar test case.
Chapter 16 — Maturity frameworks will develop the deployment-maturity framework introduced in Section 12.8: the structural factors that drive deployment-maturity speed; the specific frameworks for evaluating whether a deployment is mature; the operational implications for AI-deployment decision-making in firms across sectors.
Chapter 17 — Frameworks synthesis will integrate the analytical frameworks of Chapters 13–16 with the case material of Part II and the operational discipline of Part V. The framework synthesis is what produces the integrated analytical-and-practical capability that the unit’s signature pedagogical move addresses.
Chapter 18 — Cases of AI in business will return to specific cases at greater synthesised depth, applying the integrated frameworks of preceding chapters. The specific cases will include some of Part II’s cautionary cases reinterpreted through the analytical frameworks, plus additional cases that illustrate the framework patterns at greater depth.
The bridge from Part II to Part III is structural: Part II’s case material is the empirical foundation; Part III’s analytical frameworks are the structured interpretation; Part V’s playbook is the procedural application. The three together constitute the unit’s integrated analytical-and-practical curriculum. Students who work through Part II, then Part III, and then complete Part V’s ten-week build will have substantially developed the integrated capability that graduate-level AI-in-business work requires.
The seven sectors of Part II — finance, healthcare, retail, manufacturing, marketing-media-energy, logistics-agriculture-services, and the additional sectors of this chapter — together cover the major commercial-deployment domains. The patterns across them are durable; the specific cases are illustrative; the integration with the analytical frameworks of Part III is what produces graduate-level competence. The journey from idea to deployment to operations, with the discipline that the cautionary cases motivate and the rigour that the analytical frameworks support, is what AI-in-business work consists of in 2026 and is likely to consist of for the foreseeable future.
Part II ends here. Part III begins next.
References for this chapter
Government and public-sector AI
- Royal Commission into the Robodebt Scheme (2023). Final Report (3 volumes). Commonwealth of Australia, July 2023.
- Amato v Commonwealth of Australia [2019] FCA 1078.
- Office of Qualifications and Examinations Regulation (Ofqual) (2020). 2020 A-Level grading methodology and subsequent reversal.
- Information Commissioner’s Office UK (2017). Royal Free — Google DeepMind ruling.
- US Executive Order 14110 (2023). Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
- US Office of Management and Budget (2024). Memorandum M-24-10.
- Department of Industry, Science and Resources Australia (2024). Policy for the responsible use of AI in government.
Education AI
- Khan, S. (2024). Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). Viking.
- Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
- Khan Academy (2023, 2024). Khanmigo product communications.
- Anthropic, OpenAI (2024). Education-vertical product launches.
- Turnitin (2023, 2024). AI-detection product communications.
Real estate and Zillow Offers
- Zillow Group Inc. (2021). 2 November 2021 announcement and Q3 2021 earnings communications.
- Zillow Group Inc. (2018–2024). Annual reports.
- DelPrete, M. (2021–2024). Inside Real Estate analysis on iBuying.
- Bloomberg News (2021). Zillow Offers post-mortem coverage, November 2021.
- Opendoor Technologies Inc. (2024). Annual report.
Transportation and aviation
- General Motors (2023, 2024). Cruise operations and December 2024 closure announcements.
- Apple Inc. (2024). Project Titan discontinuation, February 2024.
- Waymo LLC (2024). Operating disclosures and metropolitan-area expansion announcements.
- Uber Technologies Inc., Lyft Inc. (2024). Annual reports.
Insurance
- US Insurance Information Institute (2024). Insurance Industry Report.
- Lemonade Inc. (2024). Annual report and AI strategy communications.
- Munich Re, Swiss Re, Hannover Re (2024). Annual reports.
- National Association of Insurance Commissioners (2023, 2024). AI principles and guidance.
- Colorado Division of Insurance (2023). Algorithmic accountability rules.
Hospitality and travel
- Booking Holdings Inc. (2024). Annual report.
- Expedia Group Inc. (2024). Annual report.
- Airbnb Inc. (2024). Annual report.
- Trip.com Group Limited (2024). Annual report.
- Marriott International, Hilton Worldwide, Hyatt Hotels (2024). Annual reports.
Cross-sector synthesis
- Iansiti, M. and Lakhani, K. R. (2020). Competing in the Age of AI. Harvard Business Review Press.
- Acemoglu, D. and Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy 128(6): 2188–2244.
- World Economic Forum (2024). Future of Jobs Report.
- McKinsey Global Institute (2024). Generative AI in the workplace.
- Boston Consulting Group (2024). Cross-sector AI deployment patterns.