Chapter 17 — Frameworks synthesis
This chapter synthesises the analytical frameworks developed across Chapters 13–16 — agentic AI, governance and the EU AI Act, labour and productivity, and maturity assessment — into an integrated analytical methodology. The synthesis matters because the frameworks are not independent: agentic AI deployment shapes the governance landscape; governance frameworks affect maturity progression; maturity differentials produce labour effects; labour effects shape political dynamics that drive governance evolution. The cross-framework dependencies are substantial; analysing AI deployment using any single framework produces partial understanding that integrated analysis substantially improves on.
The chapter develops the integration with explicit application to Part II case material. Rather than restating the frameworks, the chapter uses them in combination to produce richer analyses of specific cases and sectors than the single-framework treatments of preceding chapters provided. The Watson Health, Klarna, Hollywood-strike, and Robodebt cases each receive integrated treatment that demonstrates how the four frameworks combine to support deeper diagnosis. Sector-level analyses for finance, healthcare, and professional services apply the integration at sectoral scale.
The chapter also explicitly maps the analytical frameworks to the Part V playbook discipline. Students completing the unit will use both the analytical frameworks (for understanding cases and informing strategic decisions) and the playbook discipline (for actually executing AI deployment in their own builds). The analytical frameworks identify what deployment requires; the playbook discipline operationalises the requirement into specific procedures. The integration of analysis and execution is what produces the integrated analytical-and-practical capability that graduate-level AI-in-business work requires.
The chapter comprises fourteen sections. Section 17.1 develops the integration question. Section 17.2 develops the integrated analytical framework. Section 17.3 covers cross-framework dependencies. Sections 17.4–17.7 apply the integrated framework to four cautionary cases (Watson Health, Klarna, Hollywood strikes, Robodebt). Sections 17.8–17.10 apply the integration at sectoral scale (finance, healthcare, professional services). Section 17.11 develops decision frameworks for applying the integration. Section 17.12 maps the integration to the Part V playbook. Section 17.13 covers unresolved questions. Section 17.14 sketches the 2026–2030 forward synthesis.
17.1 The integration question
The four analytical frameworks of Chapters 13–16 each address a substantial dimension of AI deployment. The agentic-AI framework (Chapter 13) addresses what AI systems can do and how they fail. The governance framework (Chapter 14) addresses how regulation shapes deployment. The labour-productivity framework (Chapter 15) addresses the economic effects on workers and aggregate output. The maturity framework (Chapter 16) addresses where specific deployments and sectors stand on capability and operational dimensions.
The frameworks are individually useful. A deployment decision can be informed by maturity assessment alone; a regulatory decision can be informed by governance analysis alone; a labour-market projection can be made using the labour framework alone. But each single-framework analysis produces partial understanding. The Watson Health case considered as a maturity failure (Section 16.1) is incomplete; the same case considered as a governance failure (Chapter 14) is also incomplete; the integrated analysis of Section 17.4 produces fuller diagnosis.
Why integration matters. Several specific reasons motivate the integration.
First, the frameworks describe interrelated phenomena. AI deployment is a single phenomenon that the frameworks each illuminate from different angles. Understanding the phenomenon requires the multiple angles; no single framework captures the full reality.
Second, deployment decisions require multiple-framework consideration. A firm deciding whether to deploy an AI system must consider capability and deployment maturity (will it work?), governance and compliance (is it permitted?), labour effects (how does it affect workers and the labour market the firm operates in?), and agentic dimensions (what specific responsibilities does the system take on?). Decisions made on one framework alone risk substantial blind spots.
Third, the cautionary cases of Part II demonstrate compound failures across multiple dimensions. Watson Health failed on capability maturity, deployment maturity, and (eventually) on the broader business-model dimensions; the failure was not single-cause. Klarna failed on deployment maturity, on the agentic-deployment dimensions, and on the labour-substitution-vs-augmentation dimensions simultaneously. Robodebt failed on legal-governance, capability maturity, deployment maturity, and the affected-population-vulnerability dimensions. Compound failures require compound analysis.
Fourth, opportunities cross frameworks. The most-successful AI deployments combine high capability maturity, mature deployment context, supportive governance environment, and labour-market alignment. The opportunities are not visible from any single framework; integrated analysis identifies them.
The relationship between Ch 13–16 frameworks. The frameworks have specific structural relationships.
Agentic AI ↔︎ Governance. Agentic systems raise governance questions that non-agentic systems do not. The credentials question, the accountability framework, and the trust threshold (Chapter 13) are substantially governance questions. The EU AI Act’s high-risk-systems framework (Chapter 14) addresses many agentic deployments; the regulatory environment shapes what agentic deployment is permissible.
Governance ↔︎ Maturity. Governance frameworks affect maturity progression. The EU AI Act’s conformity-assessment requirements (Chapter 14) operate at specific maturity levels (a deployment claiming TRL 9 must support that claim with conformity-assessment evidence). The maturity framework (Chapter 16) explicitly incorporates governance as one of its five factors. Mature governance frameworks support deployment-maturity progression by reducing regulatory uncertainty; immature frameworks delay maturation by leaving deployment requirements ambiguous.
Maturity ↔︎ Labour-productivity. Maturity differentials produce labour-effect differentials. Sectors at mature deployment produce different labour outcomes than sectors at early-stage deployment. The Brynjolfsson-Hitt time-to-impact framework (Chapter 15) explicitly addresses how productivity gains lag deployment maturation by years; the maturity progression is the operational mechanism through which labour effects accumulate.
Labour-productivity ↔︎ Governance. Labour effects shape governance evolution. The Hollywood strikes (Section 10.7) produced specific contractual provisions that constrain AI deployment in entertainment; subsequent contracts in other industries are progressively addressing similar concerns. Worker concerns drive policy responses; the policy responses constrain or support deployment patterns. The feedback loop is bidirectional: governance affects labour outcomes, and labour-effect concerns drive governance development.
Agentic ↔︎ Maturity. Agentic deployment introduces specific maturity considerations beyond non-agentic deployment. The trust threshold (Chapter 13) is a deployment-maturity constraint specific to agentic systems. Multi-agent systems face composition-problem maturity issues that single-agent systems do not. The maturity framework’s application to agentic deployment requires the agentic-specific considerations of Chapter 13.
Agentic ↔︎ Labour. Agentic deployment substantially affects labour beyond what non-agentic deployment produces. The “AI employee” framing (Section 13.12) explicitly positions agents as labour-substituting; the labour-economic effects of agentic deployment differ from non-agentic deployment in scale and pattern.
The cross-framework dependencies are substantial; integrated analysis must address them explicitly.
17.2 The integrated analytical framework
The integrated framework combines the four single-framework lenses into a systematic methodology for AI deployment analysis. The methodology supports specific decisions: deployment commitment; investment; regulatory development; evaluation of claims.
The four lenses. A deployment is analysed through four lenses simultaneously.
The capability-and-maturity lens assesses what the AI system can actually do, how reliably, and where the deployment stands on capability and deployment dimensions. The TRL framework (Section 16.5), the five-factor framework (Section 16.4), and the ADML levels (Section 16.11) are the specific tools.
The governance-and-compliance lens assesses what regulatory frameworks apply, what compliance requirements the deployment must meet, and how the regulatory environment shapes deployment options. The EU AI Act framework (Sections 14.3–14.4), the US patchwork (Section 14.5), the sector-specific overlays (Section 14.11), and the privacy-and-data-governance dimensions (Section 14.9) are the specific tools.
The agentic-deployment lens assesses whether and how the deployment involves autonomous action, what specific failure modes the agentic dimension produces, what trust and credentials questions arise, and what infrastructure the deployment requires. The agent-vs-assistant distinction (Section 13.1), the failure-mode taxonomy (Section 13.10), and the trust-and-credentials framework (Section 13.11) are the specific tools.
The labour-and-economic lens assesses what labour effects the deployment produces, how the augmentation-vs-displacement balance plays out, what institutional responses are likely, and what broader economic effects emerge. The Acemoglu-Restrepo task-based framework (Section 15.2), the augmentation-vs-displacement empirics (Section 15.4), and the institutional-response framework (Section 15.13) are the specific tools.
How the lenses combine. The four lenses produce four assessments that must be combined into integrated analysis. The combination is not mechanical; specific patterns recur:
High alignment across lenses. When capability is mature, deployment context is mature, governance is established, labour effects are manageable, and agentic dimensions (if any) are well-handled, the deployment has high alignment across the lenses. High-alignment deployments are the success pattern; the data-flywheel-driven mature operational deployments (Section 12.9) typically exhibit high alignment.
Compound misalignment. When multiple lenses show concerns simultaneously, the deployment faces compound risk. The Watson Health, Klarna, TradeLens, Robodebt, Cambridge Analytica, and Boeing 737 MAX cases all exhibit compound misalignment across multiple lenses. Compound misalignment substantially increases failure probability; the cautionary-case lessons of Part II reflect compound failures.
Single-lens binding constraint. Sometimes a single lens identifies the binding constraint while other lenses are positive. A deployment with high capability and supportive governance may face binding labour constraints (organised-labour resistance); a deployment with high deployment maturity may face binding capability constraints (the underlying technology is not yet capable enough). Identifying the binding constraint is what supports targeted intervention.
Cross-lens complementarity. Sometimes one lens’s strength compensates for another’s weakness. A deployment with limited capability may succeed if deployment context is exceptional (the human-augmentation framework dominates; the AI does specific bounded tasks well within a robust human-managed system). A deployment with substantial governance challenges may succeed if labour effects are managed well (organised-labour endorsement reduces political opposition).
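The pattern taxonomy can be made operational. The sketch below is a minimal illustration in Python, assuming each lens has been scored on a hypothetical 0–4 scale (0 = severe concern, 4 = fully favourable); the scale, thresholds, and names are illustrative assumptions rather than part of the Chapters 13–16 frameworks.

```python
from dataclasses import dataclass

@dataclass
class LensAssessment:
    """Four-lens scores on a hypothetical 0-4 scale (4 = fully favourable)."""
    capability: int   # capability-and-maturity lens (TRL, five factors, ADML)
    governance: int   # governance-and-compliance lens
    agentic: int      # agentic-deployment lens
    labour: int       # labour-and-economic lens

    def scores(self) -> dict:
        return {"capability": self.capability, "governance": self.governance,
                "agentic": self.agentic, "labour": self.labour}

def classify_pattern(a: LensAssessment, favourable: int = 3, concern: int = 1) -> str:
    """Read the cross-lens pattern off four independently produced lens scores."""
    s = a.scores()
    concerns = [lens for lens, score in s.items() if score <= concern]
    if all(score >= favourable for score in s.values()):
        return "high alignment"                    # the success pattern
    if len(concerns) >= 2:                         # the cautionary-case pattern
        return "compound misalignment (" + ", ".join(concerns) + ")"
    if len(concerns) == 1:                         # targeted intervention possible
        return "binding constraint (" + concerns[0] + ")"
    return "mixed: examine cross-lens complementarity"
```

The value of even so crude a sketch is the discipline it encodes: each lens is scored independently before the cross-lens pattern is read off, which is exactly the Step 1/Step 2 sequence described below.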
The systematic application. Applying the framework systematically requires specific discipline.
Step 1 — Independent assessment of each lens. Each lens is applied separately to produce its specific assessment. The discipline prevents a single dominant concern from biasing the other-lens assessments.
Step 2 — Identification of cross-lens patterns. The four assessments are examined for the patterns identified above (alignment; compound misalignment; binding constraint; complementarity). The pattern identification produces the integrated diagnosis.
Step 3 — Decision support. The integrated diagnosis informs specific decisions. Deployment commitment requires high alignment or, at minimum, identified binding constraints with explicit mitigation; otherwise, proceeding produces excessive risk. Investment, regulatory, and evaluation decisions follow analogous logic.
Step 4 — Iterative refinement. As the deployment proceeds, conditions change; the analysis must be updated. The integrated framework is not a one-time assessment but an ongoing analytical practice.
The methodology is more demanding than single-framework analysis but produces substantially better understanding. Sections 17.4–17.10 apply it to specific cases and sectors.
17.3 Cross-framework dependencies — the dynamic interactions
Beyond the structural relationships of Section 17.2, the frameworks interact dynamically over time. The dynamics shape how AI deployment evolves and where the binding constraints shift.
Agentic capability advancement and governance response. Foundation-model agentic capability has advanced substantially through 2023–2026; the governance response has lagged. The lag is structural — regulatory frameworks adapt over years while capability advances over months — and produces specific dynamics. As capability advances, more applications become technically feasible; the governance frameworks must adapt or face substantial gaps. The 2024–2025 launches of OpenAI Operator, Google Project Mariner, and Anthropic Computer Use (Section 13.6) demonstrated capability that the contemporary governance framework had not specifically addressed. The governance lag has been a recurring feature of the period; the 2026–2030 trajectory will continue to produce governance-vs-capability gap dynamics.
Governance maturation and deployment maturity. As governance frameworks mature (the EU AI Act’s progressive implementation through 2025–2027; sector-specific framework development; international convergence), the regulatory environment factor (Section 16.4 Factor 4) shifts toward more-favourable conditions for deployment. Mature governance reduces regulatory uncertainty; firms can invest in deployment with greater confidence about operational requirements. The dynamic is bidirectional: deployment experience informs governance development, and governance maturation supports deployment progression.
Maturity progression and labour effects. As deployment matures across sectors, labour effects accumulate. The professional-services augmentation pattern (Section 11.12) reflects deployment that has progressed from initial through operational to mature; the labour effects (employment growth with shifted task mix) follow from the deployment maturation. Sectors at earlier deployment stages have not yet produced comparable labour effects; the 2026–2030 progression will produce labour effects in sectors currently mid-stage.
Labour-effect concerns and governance evolution. Specific labour-effect concerns drive governance evolution. The Hollywood strike provisions (Section 10.7) are direct examples — concerns about AI displacement of writers and performers produced contractual provisions that constrain AI deployment. The broader pattern of organised-labour and professional-association responses (Section 15.13) is shaping subsequent governance development. Worker concerns about AI deployment, mediated through organised institutions, become operational constraints on deployment.
The compound dynamics. The cross-framework dynamics compound in specific ways. A capability advancement that produces new application possibilities triggers governance responses that constrain deployment; the constrained deployment produces specific labour effects; the labour effects shape political environments that drive further governance development. The compound dynamics produce non-linear evolution: small shifts in one framework produce substantial shifts in others over time.
The dynamic-interaction framework matters because static analysis (assessing the frameworks at a single point in time) does not capture the trajectory. Mature analytical practice considers both the current state and the likely trajectory; the dynamics inform the trajectory analysis.
17.4 Applying the integrated framework — Watson Health revisited
The Watson Health case (Section 7.3) provides a substantial test of the integrated framework. The single-framework analyses of preceding chapters each captured specific dimensions; the integrated analysis produces a fuller diagnosis.
Capability-and-maturity lens. Watson Health’s capability maturity in 2014–2018 was overstated. The TRL assessment for the broad clinical-AI applications IBM marketed was likely TRL 4–5 (validated in lab; some demonstration in relevant environment) at deployment commitment; the marketing suggested TRL 8–9 (system complete and qualified; proven in operation). The gap between actual and claimed capability was substantial.
The deployment-maturity assessment is similarly negative. Hospital integration of clinical AI requires substantial workflow integration, EHR data flows, clinical-decision-support infrastructure, and physician training. Watson Health’s deployment infrastructure was being built through deployment rather than developed in advance; the deployment-maturity factor scores were low across multiple dimensions (operational integration; organisational capability; data infrastructure for the specific clinical applications).
The five-factor analysis: task definition was poor (the broad framing did not specify operational tasks); feedback signals were weak (clinical outcomes are slow and noisy); data availability was variable (different clinical sites had different data quality); regulatory environment was developing (FDA AI/ML SaMD framework was nascent); deployment friction was high (hospital integration is structurally difficult). Multiple factors were unfavourable.
The ADML assessment: Watson Health was at ADML 1 (initial deployment with limited scope and explicit experimental framing) but framed as ADML 4 (strategic capability). The gap between actual and claimed levels was large.
Governance-and-compliance lens. The 2014–2018 governance environment for clinical AI was nascent. The FDA AI/ML SaMD framework was developing through this period; specific requirements for AI-driven clinical decision support were not yet established. Watson Health operated in regulatory ambiguity; the lack of clear regulatory requirements may have contributed to inadequate operational discipline. The HIPAA framework imposed data-privacy requirements but did not specifically address AI-driven decision-making at the level subsequent frameworks require.
The post-Watson-Health regulatory development has been substantial: the FDA’s PCCP framework (Section 14.11), the EU MDR/IVDR’s AI provisions, the various adjacent frameworks. Watson Health’s failure was partly enabled by the regulatory gap and substantially drove the subsequent development.
Agentic-deployment lens. Watson Health was substantially assistive rather than agentic — clinicians remained the decision-makers; Watson provided recommendations. The agentic-deployment lens is therefore less directly applicable. However, specific deployments of Watson included more-agentic features (automated treatment-recommendation generation; specific decision-support that shaped clinical practice substantially); the assistant-vs-agent boundary was unclear in some applications. The agent-vs-assistant distinction (Section 13.1) had not yet been clearly framed in the AI literature; Watson Health’s deployment did not benefit from the analytical clarity that subsequent agent-deployment work has produced.
Labour-and-economic lens. Watson Health was structurally augmentation-focused — supporting clinicians rather than replacing them. The labour effects for radiologists, oncologists, and other specialists were modest in either direction. The economic effects (the broader medical-AI industry; the specific firms developing competing products) have been substantial and continue through 2024–2026 — the post-Watson-Health medical-AI ecosystem (Section 7.5) has produced more-successful firms operating with the lessons Watson Health’s failure produced.
The integrated diagnosis. The integrated analysis identifies Watson Health as a capability-and-deployment-maturity failure primarily, with secondary contributions from the immature governance environment and the agent-vs-assistant ambiguity. The labour-and-economic dimensions did not bind the failure. The integrated diagnosis is more specific than the single-framework analyses: the failure was substantially about capability-deployment alignment rather than about regulation, labour, or agentic-specific issues.
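Scored on the illustrative 0–4 scale of the Section 17.2 sketch, the integrated diagnosis can be stated compactly; the scores below are judgment calls for illustration, not measurements.

```python
# Watson Health circa 2016, using LensAssessment / classify_pattern from 17.2.
watson = LensAssessment(
    capability=1,   # TRL 4-5 marketed as TRL 8-9; ADML 1 framed as ADML 4
    governance=1,   # nascent FDA AI/ML SaMD framework; secondary contributor
    agentic=2,      # substantially assistive; agent-vs-assistant boundary unclear
    labour=3,       # augmentation-focused; modest direct labour effects
)
print(classify_pattern(watson))
# -> compound misalignment (capability, governance)
```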
The diagnosis informs subsequent decisions. The 2024–2026 medical-AI deployments that have succeeded — Hippocratic AI’s bounded-task-focused deployment (Section 7.5); the ambient-scribe deployments (Section 7.5); the radiology-imaging deployments (Section 7.2) — each address the specific lessons. They focus on bounded operational tasks (sharp task definition); they accept gradual deployment maturation rather than aiming for ADML 4 from the start; they integrate with operational infrastructure rather than attempting to transform operations.
17.5 Applying the integrated framework — Klarna revisited
The Klarna case (Section 8.4) is structurally different from Watson Health and produces different integrated analysis.
Capability-and-maturity lens. Foundation-model capability for customer-service AI in early 2024 was substantial. The TRL assessment for routine-inquiry handling was approximately TRL 7–8 (system prototype demonstration in operational environment; system complete and qualified). The capability was real; the technology worked.
The deployment-maturity assessment is more nuanced. Klarna’s underlying customer-service infrastructure was substantial; the AI integration was technically feasible. The deployment context for substitution-focused AI customer service was, however, less mature. The five-factor analysis: task definition was sharp at the technical level (resolve customer-service inquiries) but ambiguous at the operational level (what counts as resolution? what about customer satisfaction beyond resolution?); feedback signals were strong for short-term metrics (resolution rate; resolution time) but weaker for long-term metrics (customer retention; brand perception); data availability was strong; regulatory environment was developing (consumer-protection frameworks evolving toward AI-specific provisions); deployment friction was relatively low.
The ADML assessment: the augmentation-focused deployment that Klarna could have pursued was at ADML 2 (operational deployment); the substitution-focused deployment that Klarna actually pursued was claimed at ADML 3 but operated at ADML 1 (initial deployment without adequate intermediate maturation).
Governance-and-compliance lens. Klarna operated in the EU and UK consumer-financial-services regulatory environment, which has been progressively addressing AI deployment through 2024–2026. The specific Klarna deployment did not face binding governance constraints at the time of the February 2024 announcement; the EU AI Act high-risk requirements were not yet in force; the consumer-protection framework had not yet specifically addressed AI customer-service substitution. Subsequent governance development (with Klarna as a prominent example) has been substantial.
The lack of binding governance at deployment time meant Klarna’s deployment decision was not regulatory-constrained. The case is informative for governance development: situations where deployment proceeds in regulatory ambiguity and produces consumer harm are exactly what regulatory frameworks emerge to prevent.
Agentic-deployment lens. Klarna’s customer-service AI was substantially agentic — the AI took resolution actions on customer inquiries without each action being human-reviewed. The deployment fits the bounded-action-taking-agent category (Section 13.1).
The agentic-specific failure modes (Section 13.10) apply: error cascading produced compound problems when individual interactions did not resolve well; loss of human control occurred as the deployment scaled and human oversight was reduced; the gradual erosion pattern describes the trajectory. The agent-specific trust threshold (Section 13.11) was crossed prematurely; Klarna overestimated customers’ trust in receiving human-equivalent service from the AI.
Labour-and-economic lens. Klarna’s deployment was substantially substitution-focused — the announcement explicitly framed the AI as replacing 700 customer-service agents. The labour-economic effects were direct: 700 specific employment positions affected; the broader labour-market signal that customer-service AI substitution was viable. The substitution-focused framing was structurally different from the augmentation-focused approach (Section 15.4) that has been more successful in other contexts.
The Brynjolfsson–Li–Raymond framework (Section 15.3) provides a useful counterfactual. A Klarna deployment in the augmentation mode — AI assists 700 human agents rather than replaces them — would likely have produced substantial productivity gains with maintained service quality. The deployment-mode choice (substitution vs augmentation) was the binding decision; the technology was capable in both modes.
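A back-of-envelope comparison makes the mode choice concrete. The 700-agent figure is from the Klarna announcement, and the roughly 14 per cent average productivity gain is the Brynjolfsson, Li and Raymond finding for AI-assisted customer-service agents; the rest is illustrative.

```python
# Substitution vs augmentation, in round numbers.
agents = 700    # headcount from the February 2024 Klarna announcement
gain = 0.14     # ~14% average productivity gain (Brynjolfsson, Li and Raymond)

# Substitution mode: all 700 roles removed; the AI must absorb 100% of the
# volume, including the hard cases where agentic failure modes concentrate.
substitution_headcount = 0

# Augmentation mode: the same workforce handles ~14% more volume at maintained
# quality, or the same volume needs fewer agents (absorbable through attrition).
augmented_capacity = round(agents * (1 + gain))      # ~798 agent-equivalents
same_volume_headcount = round(agents / (1 + gain))   # ~614 agents

print(augmented_capacity, same_volume_headcount)     # 798 614
```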
The integrated diagnosis. The integrated analysis identifies Klarna as a deployment-mode and trust-threshold failure, primarily about agentic-deployment dimensions and labour-substitution-vs-augmentation choice rather than about capability or governance. The capability was sufficient; the regulatory environment did not bind; the failure was specifically about the deployment-design choices.
The diagnosis informs subsequent customer-service AI deployments. The post-Klarna industry pattern (Section 13.7) has been substantially augmentation-focused: the AI supports human agents rather than replacing them; the human-handoff structure is carefully designed; the metrics include long-term quality measures rather than only short-term resolution. The Klarna lessons have been substantially absorbed by the industry; subsequent deployments are more cautious specifically about the substitution-mode choice.
17.6 Applying the integrated framework — Hollywood strikes
The 2023 Hollywood strikes (Section 10.7) provide a different kind of case — not a deployment failure but an institutional response that shaped subsequent deployment. The integrated framework analyses this differently.
Capability-and-maturity lens. The capability assessment for the technologies the strikes addressed was variable. Generative video for production-scale film and television was at TRL 4–6 in 2023 — validated in lab and approaching relevant-environment demonstration but not yet production-ready. Digital replicas of performers were more mature for specific applications (background extras; specific aging/de-aging effects) but less mature for full-performance generation. Voice cloning was at TRL 6–7 for many applications.
The deployment-maturity assessment for full-substitution deployments (replacing writers; replacing actors fully) was very low; the technology was not capable enough for production substitution. The deployment maturity for augmentation applications (specific effects; AI-generated background; specific creative tools) was higher.
Governance-and-compliance lens. Pre-strike governance for AI in entertainment production was minimal. The Copyright Office’s framework for AI-generated content was developing; the SAG-AFTRA and WGA contracts had not specifically addressed AI; the broader IP-and-contract framework operated through general copyright and contract law without AI-specific provisions.
The strikes produced substantial governance development through collective bargaining — the contract provisions for both unions established specific AI-deployment constraints (Section 10.7). The provisions are governance in operation: not statutory regulation but contractual rules that bind the major studios and producers and effectively shape what AI deployment is permitted in the industry. The collective-bargaining framework filled the governance gap that statutory regulation had not addressed.
Agentic-deployment lens. Generative AI in creative production has substantial agentic dimensions. AI systems that generate content do not merely advise creators; they produce outputs that can be used directly. The agentic dimension matters for the rights and labour questions: who is the author of AI-generated content, and what compensation is due to whom? The strike provisions addressed these questions partly by specifying that AI-generated content cannot serve as source material in ways that would reduce writer credits and residuals.
Labour-and-economic lens. The labour-and-economic dimensions are central to the strike case. The WGA represents approximately 11,500 writers; SAG-AFTRA represents approximately 160,000 actors and other media professionals. The collective-bargaining position represented substantial labour interests; the strikes’ durations (148 days for the WGA; 118 days for SAG-AFTRA) reflected the seriousness of the labour-and-economic stakes.
The contract provisions secured by the unions are the most-detailed AI-related labour-protective framework in any major industry. The provisions explicitly address: AI as source material; consent for digital replicas; compensation structures; ongoing bargaining about AI deployment as the technology evolves. The framework substantially constrains AI deployment in ways the underlying capability would not constrain absent the provisions.
The integrated diagnosis. The integrated analysis identifies the Hollywood strike case as a governance-through-collective-bargaining response to capability advancement with substantial labour-protective effect. The labour and governance dimensions are central; the capability dimension provided the impetus; the agentic dimension defined what the response had to address.
The diagnosis informs subsequent industries’ responses. Industries with substantial organised labour can produce comparable responses; industries without organised labour face deployment with fewer protective frameworks. The 2026–2030 trajectory will likely produce additional industry-specific labour-protective frameworks in industries with comparable organisation; industries without comparable organisation will likely face more-uneven deployment patterns.
17.7 Applying the integrated framework — Robodebt revisited
The Robodebt case (Section 12.1) provides a public-sector deployment failure that integrates differently across the frameworks.
Capability-and-maturity lens. The capability assessment for the Robodebt income-averaging system is interesting because the underlying mathematical operation (dividing annual income by 26) was simple; the “AI” framing arguably overstates the technology. However, the system operated as automated decision-making at scale, which is the operational characterisation that matters.
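A worked example shows why the simple operation fails structurally for anyone with uneven earnings. The figures are hypothetical; the method (annual ATO income divided by 26 and compared against fortnightly welfare declarations) is the scheme’s actual operation.

```python
# A person works full-time for six months, earning 26,000, then is unemployed
# for six months, declares zero fortnightly income, and lawfully receives benefits.
annual_income = 26_000
averaged_fortnightly = annual_income / 26    # 1,000 per fortnight, all year

declared_while_on_benefits = 0               # each declaration was accurate

# The averaging method "sees" 1,000 of apparently undeclared income in every
# benefit fortnight and raises a debt, even though no income was earned then.
phantom_discrepancy = averaged_fortnightly - declared_while_on_benefits
print(phantom_discrepancy)                   # 1000.0 per benefit fortnight
```

The discrepancy is an artefact of mapping annual data onto fortnightly periods, not evidence of undeclared income; that structural mismatch is the data-combination problem identified in the five-factor analysis below.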
The deployment-maturity assessment is sharply negative across multiple factors. Task definition was poor in the legally-relevant sense — the income-averaging approach was not a legally-authorised method for assessing welfare-debt obligations. Feedback signals were weak — the volume of disputes overwhelmed the appeals infrastructure rather than informing system improvement. Data availability was strong but the data combinations the system relied on (annual tax records mapped to fortnightly welfare periods) were structurally inappropriate. Regulatory environment was substantially adverse — the deployment was unlawful as the Federal Court eventually ruled. Deployment friction in the operational sense was low (the system was deployed at scale rapidly) but in the legal-and-ethical sense was high (the deployment violated legal requirements).
The ADML assessment: Robodebt operated at scale (apparently ADML 3) but the deployment was based on legally-inadequate foundations. The maturity claim implicit in the operational scale was not supported by the legal-and-ethical foundation; the system was structurally at ADML 0 (the legal foundation did not support deployment) despite operating at ADML 3 in practice.
Governance-and-compliance lens. The Robodebt case is centrally a governance failure. The deployment proceeded without adequate legal authority; the income-averaging approach was eventually ruled unlawful (Amato v Commonwealth, 2019); the broader administrative-law framework was inadequate to the deployment scale.
The post-Robodebt regulatory development (Section 14.8) has been substantial: the Australian Government AI Use Policy (September 2024); the proposed Mandatory AI Guardrails for High-Risk AI; the Privacy Act reform with automated-decision provisions. The Royal Commission report (July 2023) explicitly addressed governance failures and recommended substantial reforms. The case is the canonical Australian government-AI failure; the governance framework that has emerged in response substantially addresses the specific failure modes.
Agentic-deployment lens. Robodebt was substantially agentic in the sense of taking actions (issuing debt notices) rather than merely advising. The agentic-deployment failure modes apply directly: the system took actions that should have required human review and approval; the appeal-and-review pathway did not scale with deployment volume; the loss-of-human-control pattern is evident in the deployment trajectory.
The trust-threshold framework (Section 13.11) is relevant but operates differently in public-sector contexts: the affected parties (welfare recipients) had not consented to the agentic deployment; the trust-threshold consideration is structurally different from consumer contexts where users opt into agent use.
Labour-and-economic lens. Robodebt’s labour-economic effects were concentrated on welfare recipients rather than on Australian labour markets directly. The 470,000 debt notices and the AUD 1.8 billion in assessed debts represented substantial economic harm to the affected populations. The political-economic effects (the 2022 Australian federal election partly turning on Robodebt issues; the substantial post-election regulatory development) were consequential.
The integrated diagnosis. The integrated analysis identifies Robodebt as a governance-and-deployment-maturity failure with substantial agentic-deployment failure-mode patterns and concentrated harm to vulnerable populations. The capability dimension was less central — the underlying technical operations were simple; the failure was specifically about the deployment context, the legal foundation, and the absence of adequate human oversight.
The diagnosis informs subsequent public-sector AI deployments. The post-Robodebt framework explicitly addresses the specific failure modes: legal authority requirements before deployment; mandatory human-review pathways for adverse decisions; transparency requirements for algorithmic decision-making; substantial appeal-and-review infrastructure scaling with deployment scale. The lessons are operationally specific; the framework substantially addresses what Robodebt got wrong.
17.8 Sector-level integrated analysis — finance
Finance (Chapter 6) provides a substantial test of sector-level integrated analysis. The sector exhibits high alignment across most lenses.
Capability-and-maturity lens. Financial-services AI is at high capability maturity for many applications. Algorithmic trading, fraud detection, credit underwriting, and customer-service AI are at TRL 8–9 for the operational tasks the major banks deploy. The deployment-maturity factor analysis is favourable: task definitions are typically sharp (predict default; detect fraud; price an option); feedback signals are strong (transaction outcomes; default events; trade P&L); data availability is substantial (transaction histories at scale); regulatory environment is established with progressive AI-specific extensions; deployment friction is moderate (financial systems are integrated but the integration is engineered).
The ADML assessment for major banks’ deployments is typically ADML 3 (mature operational) with some specific applications approaching ADML 4 (strategic capability — Stripe Radar’s fraud-prevention infrastructure is structurally at ADML 4–5).
Governance-and-compliance lens. Financial-services governance is among the most-developed sector-specific frameworks. The combination of horizontal AI regulation (the EU AI Act’s high-risk essential-services category for credit scoring), sector-specific prudential and conduct regulation (Federal Reserve, OCC, FCA, ASIC, Bank Negara Malaysia), and consumer-protection frameworks (CFPB and equivalent agencies) produces substantial regulatory infrastructure. The cumulative governance burden is high; the operational discipline matches the burden.
The 2024–2026 AI-specific regulatory developments in financial services (Section 14.11) have extended the framework. The Federal Reserve’s 2024 Supervisory Letter, the OCC’s parallel guidance, the FCA’s AI Live Tests, ASIC’s AI guidance — each adds specific AI-related requirements to the established financial-services regulatory framework.
Agentic-deployment lens. Financial-services AI has substantial agentic dimensions. Algorithmic trading executes transactions autonomously; AI-driven credit decisions take actions with substantial financial consequences; agentic customer-service AI handles customer interactions autonomously. The agentic-specific failure modes apply; the regulatory framework increasingly addresses agentic dimensions.
The trust-threshold framework (Section 13.11) is operationally critical in financial services. Customers’ trust in AI handling their financial accounts requires substantial validation; the deployment patterns reflect this — agentic AI typically operates within bounded scopes (specific transaction types; specific account-management actions) rather than broad authority.
Labour-and-economic lens. Financial-services employment has been stable through 2024–2026 despite substantial AI deployment. The augmentation pattern dominates; specific routine roles (back-office processing; certain clerical functions) have faced substitution but the broader employment has continued to grow. The skill-mix shift toward AI-fluent roles is substantial; reskilling and continuous-learning support are increasingly central to financial-services labour-market dynamics.
The integrated diagnosis. The financial-services sector exhibits high alignment across the lenses — mature capability and deployment, established governance, manageable agentic dimensions, manageable labour effects. The high alignment reflects the sector’s structural advantages (clear tasks; abundant data; established regulation; substantial integration capability) and the cumulative investment in AI across the major operators. The sector is the contemporary period’s clearest example of mature AI deployment at scale.
17.9 Sector-level integrated analysis — healthcare
Healthcare (Chapter 7) exhibits more-uneven alignment across lenses than finance. The sector’s AI deployment has been substantial in specific applications but uneven across the broader healthcare landscape.
Capability-and-maturity lens. Healthcare AI is heterogeneous. Specific applications (radiology imaging; specific clinical-decision-support tools; ambient-scribe deployment) are at high capability and deployment maturity (ADML 3–4 in major academic medical centres); other applications (broad clinical AI; certain frontier diagnostic tools; specific drug-discovery applications) are at lower maturity. The Watson Health legacy (Section 17.4) substantially shapes the current pattern — the post-Watson-Health deployments focus on bounded operational tasks rather than broad clinical AI.
Governance-and-compliance lens. Healthcare AI governance is among the most-developed sector-specific frameworks (Section 14.11). The FDA AI/ML SaMD framework, the EU MDR/IVDR’s AI provisions, the TGA Australian framework, the MDA Malaysian framework — each provides substantial regulatory infrastructure. The HIPAA framework (US) and GDPR (EU) add data-privacy requirements. The cumulative governance burden is substantial; healthcare AI faces among the most-developed compliance requirements of any sector.
The 2024–2026 governance evolution has been substantial. The FDA’s PCCP framework for continuously-learning systems represents a specific innovation; the EU MDR’s AI provisions are progressively elaborating; the various national frameworks are converging on broadly similar approaches.
Agentic-deployment lens. Clinical AI is increasingly agentic. The Hippocratic AI Polaris deployment (Section 7.5) handles patient interactions autonomously within specific clinical contexts; ambient-scribe deployments support physician work but increasingly take specific documentation actions; clinical-decision-support tools increasingly act on behalf of clinicians rather than only advising. The agentic-specific dimensions are substantial; the trust-threshold framework operates differently in clinical contexts, where trust encompasses both clinical quality and patient safety.
Labour-and-economic lens. Healthcare employment has continued to grow through 2024–2026; the labour-AI dynamics have been substantially augmentation-focused. Physician burnout reduction (a substantial driver of ambient-scribe deployment), nursing workflow improvement, and specific administrative-task automation are augmentation patterns. The displacement effects have been concentrated in specific roles (some medical-coding work; certain administrative roles) but small relative to the broader healthcare-employment scale.
The integrated diagnosis. Healthcare exhibits substantial heterogeneity across the lenses. Specific applications are at high alignment (radiology imaging; specific operational AI); other applications face binding constraints (broad clinical AI; certain frontier deployments). The governance framework is mature; the labour effects are manageable; the agentic dimensions are increasingly central. The sector’s deployment will likely continue to expand through 2026–2030, with continued progression in specific applications and continued caution in others.
17.10 Sector-level integrated analysis — professional services
Professional services (Chapter 11) provides a different sector-level pattern — substantial capability deployment with active labour-economic dynamics and developing governance frameworks.
Capability-and-maturity lens. Professional-services AI is at substantial capability maturity for specific applications. Legal research and document review (Harvey AI, Casetext, LexisNexis), accounting and audit AI (Big Four deployments), and consulting AI (McKinsey QuantumBlack, BCG Gamma, Bain Vector) are at ADML 2–3 across major firms. The capability is real; the deployment is substantial; the operational integration is progressing.
Governance-and-compliance lens. Professional-services governance is uneven across sub-sectors. Legal services have substantial professional-conduct frameworks (the ABA Formal Opinion 512; state-bar guidance; the Mata v. Avianca disciplinary framework, Section 11.10). Accounting and audit AI face PCAOB guidance and analogous frameworks. Consulting AI faces less professional-specific governance, though general AI regulation (Chapter 14) applies.
The Mata v. Avianca lessons (Section 11.10) have substantially shaped legal-AI deployment. The professional-discipline framework is increasingly capable of addressing AI-related professional-conduct concerns; the framework is still maturing through 2024–2026.
Agentic-deployment lens. Professional-services AI is increasingly agentic: legal-research agents complete multi-step research tasks; accounting AI prepares specific deliverables; consulting AI handles specific analytical tasks. The agentic dimensions are substantial; the verification discipline that the Mata v. Avianca case identified as critical is increasingly central to professional practice.
Labour-and-economic lens. Professional-services labour effects (Section 11.12) have been substantially augmentation-focused at the aggregate level. Legal-services employment has grown; accounting employment has grown; consulting employment has grown. The internal task-mix has shifted; junior roles face the most direct AI substitution; senior roles capture more value.
The career-progression dynamic is the specific concern: traditional development pathways from junior to senior depend on the routine work that AI is increasingly handling. The 2024–2026 firm responses have been variable; the long-term solution is unresolved.
The integrated diagnosis. Professional services exhibits substantial deployment progress with developing governance and emerging labour-economic dynamics. The sector’s AI deployment will continue to expand through 2026–2030; the labour-economic dynamics will continue to evolve; the governance framework will continue to mature. The Mata v. Avianca-style cautionary cases will likely continue to surface but with progressively-better professional-discipline response.
17.11 Decision frameworks — applying integration to specific decisions
The integrated framework supports specific decisions across stakeholder groups. Each decision type has specific applications.
Deployment decisions. A firm deciding whether to deploy a specific AI application should systematically apply the four-lens framework.
Pre-commitment analysis. Before deployment commitment, the firm should:
- Assess capability maturity for the specific application against TRL criteria.
- Assess deployment maturity against the five-factor framework.
- Identify governance requirements applicable to the deployment.
- Analyse the agentic dimensions (if any) and the specific failure modes.
- Project labour effects for the firm’s workforce and the broader labour market.
- Identify cross-lens patterns (alignment; compound misalignment; binding constraints; complementarity).
Decision criteria. Deployment commitment should require (a sketch of this gate follows the list):
- Adequate capability maturity (TRL ≥ 6 for the specific application).
- Adequate deployment context (the five factors broadly favourable).
- Manageable governance requirements (compliance feasible within reasonable cost).
- Manageable agentic dimensions (where applicable).
- Manageable labour effects, with an explicit augmentation focus rather than a substitution focus where possible.
- Identified binding constraints with explicit mitigation plans.
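The sketch assumes the inputs have been produced by the Step 1 independent assessments; the function name, signature, and thresholds simply mirror the criteria above and are illustrative rather than a standard methodology.

```python
def deployment_gate(trl: int,
                    favourable_factors: int,    # of the five factors, 0-5
                    compliance_feasible: bool,
                    agentic_risks_managed: bool,
                    augmentation_focused: bool,
                    constraints_mitigated: bool):
    """Return (proceed, reasons_to_hold) against the criteria listed above."""
    holds = []
    if trl < 6:
        holds.append("capability immature (TRL %d < 6)" % trl)
    if favourable_factors < 4:
        holds.append("deployment context unfavourable on multiple factors")
    if not compliance_feasible:
        holds.append("governance requirements not feasible at reasonable cost")
    if not agentic_risks_managed:
        holds.append("agentic failure modes unaddressed")
    if not augmentation_focused:
        holds.append("substitution-focused design: revisit the mode choice")
    if not constraints_mitigated:
        holds.append("binding constraints lack explicit mitigation plans")
    return (len(holds) == 0, holds)

# Example: capable technology, weak deployment context -> hold.
print(deployment_gate(8, 2, True, True, True, False))
# (False, ['deployment context unfavourable on multiple factors',
#          'binding constraints lack explicit mitigation plans'])
```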
Staged implementation. Even after deployment commitment, deployment should proceed in stages (Section 16.7). Premature scaling is the single most-common failure pattern in cautionary cases.
Investment decisions. Investors evaluating AI investments should apply the framework with investment-specific adaptations.
Investment thesis specification. The investment thesis should be explicit about:
- Which lens(es) drive the investment value (capability advancement; deployment-maturity progression; governance-clarification benefits; labour-effect-driven demand).
- Where the firm currently stands on each lens.
- What specific developments support the thesis (capability advancement; regulatory clarification; market expansion; labour-market shift).
- What specific risks could undermine the thesis (capability stagnation; adverse regulatory change; market shift; labour-market resistance).
Due diligence. Due diligence should systematically evaluate each lens.
Pricing. The investment should be priced consistently with the maturity assessment: ADML 1 firms should be priced as early-stage; ADML 4 firms can support more-aggressive valuations.
Regulatory decisions. Regulators developing AI-related rules should apply the framework with regulatory-specific adaptations.
Risk-based framework alignment. The regulatory framework should align with maturity reasoning. Mature applications can support standard regulatory burden; immature applications may need different treatment.
Cross-framework consistency. Regulations should be consistent across the frameworks. Governance requirements that conflict with capability realities; labour protections that prevent beneficial deployment; agentic-specific rules that do not address the actual failure modes — each represents framework misalignment.
Adaptive evolution. Regulatory frameworks should evolve as deployment matures. Static rules become inadequate as the deployment landscape changes.
Evaluation decisions. Researchers, journalists, and analysts evaluating AI claims should apply the framework as a diagnostic tool.
Pathology recognition. The pathologies of premature maturity claims (Section 16.12) provide diagnostic indicators.
Cross-lens consistency check. Claims that exhibit inconsistency across lenses (claimed capability not supported by deployment evidence; claimed deployment maturity without adequate governance compliance; claimed labour-friendly position without supporting evidence) signal substantive concerns.
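One way to operationalise the consistency check is through the maturity-claim gap; the two-level threshold and the scores below are illustrative assumptions.

```python
def claim_gaps(claimed_level: int, evidenced: dict) -> dict:
    """Flag lenses where the claimed maturity level exceeds the evidenced
    level by two or more levels (the illustrative threshold for concern)."""
    return {lens: claimed_level - level
            for lens, level in evidenced.items()
            if claimed_level - level >= 2}

# Klarna, February 2024: claimed ADML 3 while the evidence supported ADML 1
# on the deployment dimension (Section 17.5). Scores illustrative.
print(claim_gaps(3, {"capability": 3, "deployment": 1, "governance": 2}))
# -> {'deployment': 2}
```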
Adversarial analysis. Mature evaluation considers what adversaries (sceptical investors; regulatory critics; affected workers; competitors) would identify; the adversarial perspective often surfaces issues that supportive analysis misses.
The decision frameworks are not algorithms; they are systematic methodologies that produce better-informed decisions. The methodology’s value is realised in the discipline of its application.
17.12 Bridge to Part V playbook discipline
The analytical frameworks of Part III map systematically to the operational discipline of the Part V playbook chapters. The mapping is not coincidental; the playbook was designed to operationalise the analytical insights.
Chapter 19 (Idea selection) ↔︎ Maturity framework (Section 16.6). The playbook’s idea-selection criteria require teams to select problems where the AI capability is mature enough to address the specific problem and where the deployment context will support successful implementation. The analytical framework’s maturity assessment is the diagnostic tool that supports the selection.
Chapter 20 (Customer discovery) ↔︎ Labour and Acemoglu-Restrepo framework. The playbook’s customer-discovery work explicitly addresses how the proposed solution affects users — augmentation or substitution; what tasks it changes; what the labour-economic implications are for the customer. The analytical framework’s task-based decomposition supports the customer-discovery analysis.
Chapter 21 (MVP design) ↔︎ Capability and deployment maturity. The playbook’s MVP scoping requires teams to specify operational tasks the system will handle; the specification must be at the maturity level the team can actually achieve. The analytical framework’s capability-and-deployment-maturity assessment supports the scoping.
Chapter 22 (Stack decisions) ↔︎ Agentic-deployment framework. The playbook’s stack decisions include foundation-model selection, tool integration, and agentic-vs-assistive design choices. The analytical framework’s agentic-deployment dimensions support the decisions.
Chapter 23 (Evaluation) ↔︎ All four frameworks. The playbook’s evaluation discipline addresses all four analytical dimensions: capability evaluation (does the system actually work?); deployment evaluation (does the deployment context support the system?); governance evaluation (does the deployment meet applicable requirements?); labour evaluation (what are the actual effects on users and workers?).
Chapter 24 (Alpha launch) ↔︎ Pathologies of premature maturity claims. The playbook’s alpha discipline addresses Pathology 3 (alpha-skipping) directly. The staged-rollout requirement is the operational mechanism that prevents the cautionary-case failure pattern.
Chapter 25 (Beta and data flywheel) ↔︎ Iansiti-Lakhani factor framework. The playbook’s beta discipline addresses the data-flywheel dynamics that the analytical framework’s Section 16.9 identifies. The flywheel must be designed in from the start; the playbook discipline ensures it.
Chapter 26 (Pricing and unit economics) ↔︎ Labour and economic frameworks. The playbook’s pricing discipline addresses the economic structure that the analytical framework develops. The augmentation-vs-substitution choice has pricing implications; the agentic-employee framing has specific pricing patterns; the value-capture question is operationally specific.
Chapter 27 (Pitch and funding) ↔︎ Investment decision framework. The playbook’s pitch discipline addresses what investors evaluate — the integrated framework’s investment-decision logic. Teams pitching AI products must speak to the investment-thesis logic that the analytical framework develops.
Chapter 28 (Mock VC and commercialisation) ↔︎ All four frameworks integrated. The playbook’s commercialisation discipline integrates all four analytical lenses. The mock-VC pitch requires demonstrating capability maturity, deployment maturity, governance compliance, agentic-design soundness, and labour-effect manageability simultaneously.
The use of both together. Students completing the unit should apply both the analytical and operational dimensions:
- The analytical frameworks support understanding cases, evaluating claims, and informing strategic recommendations.
- The operational playbook supports actually building AI products with the discipline that the analytical frameworks identify as necessary.
- The combination produces capability that neither alone provides.
A student who has read Part II (cases) and Part III (analytical frameworks) but not Part V (playbook) understands AI deployment without being able to do AI deployment. A student who has done Part V (playbook) without Part II (cases) and Part III (frameworks) can do specific things without understanding why or what could go wrong. The integration is the unit’s signature pedagogical move; the analytical-and-operational combination is what produces graduate-level AI-in-business capability.
17.13 The unresolved questions and active research
The integrated framework is not complete; specific dimensions remain unresolved and represent active areas of research.
The capability trajectory question. Foundation-model capability has improved substantially through 2018–2026; the trajectory through 2026–2030 is contested. Whether capability continues at the same pace, slows substantially, or accelerates further has substantial implications for all four frameworks. The capability question is partially technical (what algorithmic and architectural advances will emerge?) and partially economic (what investment will the field sustain?). The answer will substantially shape the integrated landscape.
The deployment-environment maturation question. How fast will the deployment environment (data infrastructure, organisational capability, regulatory frameworks, integration tools) mature? The five factors (Section 16.4) each evolve at different rates; the cross-factor evolution will determine whether deployment maturity progresses smoothly or unevenly. The question is partially policy (what governance evolution occurs?) and partially industrial (what investment in deployment infrastructure occurs?).
The labour-effect realisation question. The 2024–2026 labour-market data has shown more modest effects than aggressive 2023 projections suggested. Whether the modest effects continue or whether substantial effects emerge through 2028–2030 is unresolved. The Brynjolfsson-Hitt time-to-impact framework suggests that substantial effects may emerge with lag; the actual trajectory will be visible in 2028–2030 macroeconomic data.
The frontier-AI safety question. The frontier-model governance frameworks (Section 14.10) address specific safety concerns. Whether the frameworks are adequate, whether they will mature into binding regulation, and whether specific safety concerns materialise are open questions. The 2026–2030 trajectory will substantially shape this dimension; specific incidents will inform the assessment.
The international convergence question. The governance landscape (Chapter 14) is fragmented across jurisdictions. Whether convergence occurs through 2026–2030, what the converged framework looks like, and how international coordination develops are unresolved. The cumulative regulatory burden depends substantially on this dimension.
The trust-threshold question. Agentic AI deployment depends on user trust (Section 13.11). How trust evolves with deployment experience, what specific trust-building mechanisms work, and where the trust threshold sits for specific applications are open questions. The trust dimension is harder to study than capability or deployment maturity but is critical for understanding agentic deployment trajectory.
The economic-rents-and-redistribution question. Section 11.12 raised the question of who captures AI-driven productivity gains — firms, workers, customers. The current pattern (firms capture as profit; clients face ongoing fees not declining proportionally) is contested; the long-run distribution may be substantially different. The redistribution dynamics will shape political-economic outcomes through 2026–2030.
The unresolved questions are not gaps in the framework’s analytical structure but areas where the empirical and theoretical foundations are still developing. Mature analytical practice acknowledges the unresolved dimensions rather than papering over them with false certainty.
17.14 The 2026–2030 forward synthesis
Five integrated trajectories define the 2026–2030 forward look.
Trajectory 1 — capability-and-deployment co-evolution. Foundation-model capability will continue to improve; deployment-environment maturation will continue to develop. The combined evolution will support substantially broader deployment in specific sectors while leaving others at current maturity stages. The 2030 deployment landscape will be substantially more developed than the 2026 landscape.
Trajectory 2 — governance maturation with continued fragmentation. The EU AI Act will reach full implementation; sector-specific frameworks will continue to mature; international convergence will be partial. The 2030 governance landscape will be substantially more developed in major jurisdictions but still fragmented across the international system.
Trajectory 3 — labour-economic effect accumulation. The Brynjolfsson-Hitt time-to-impact dynamics may produce substantial productivity effects through 2028–2030 from the 2023–2026 deployment efforts. The labour-market effects will be heterogeneous; specific role categories will be substantially affected; broader employment effects will depend on the augmentation-vs-displacement balance and the reinstatement effects.
Trajectory 4 — agentic-deployment broadening. Agentic AI deployment will expand from current concentrated areas (coding; customer service; specific narrow contexts) to broader applications. The trust-threshold dynamics will substantially shape the trajectory; specific incidents will accelerate or delay the broadening.
Trajectory 5 — institutional response maturation. Organised labour, professional associations, and regulators will progressively adapt. The cumulative institutional layer will be substantially more developed in 2030 than in 2026.
The integrated trajectory points to AI deployment as substantially more developed in 2030 than in 2026 across all four framework dimensions, with specific sectoral and regional heterogeneity. The integrated analysis supports navigating the trajectory; the operational playbook supports executing within it.
The bridge to Chapter 18: the next chapter returns to specific cases at greater synthesised depth, applying the integrated frameworks of Chapters 13–17 to additional case material. The 2026–2030 trajectory will produce additional cases to which the integrated methodology applies; the field’s analytical practice will continue to develop as the deployment landscape evolves.
References for this chapter
Primary integrated framework references
- Iansiti, M. and Lakhani, K. R. (2020). Competing in the Age of AI. Harvard Business Review Press.
- Acemoglu, D. and Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy 128(6): 2188–2244.
- Acemoglu, D. (2024). The simple macroeconomics of AI. NBER Working Paper 32487.
- Brynjolfsson, E., Li, D., and Raymond, L. R. (2023). Generative AI at work. NBER Working Paper 31161; Quarterly Journal of Economics (2025).
Cautionary case primary sources
- IBM (2018, 2022). Watson Health divestiture announcements.
- Klarna AB (2024, 2025). AI customer service deployment communications.
- Writers Guild of America (2023). 2023 MBA Memorandum of Agreement.
- Screen Actors Guild–American Federation of Television and Radio Artists (2023). 2023 TV/Theatrical Memorandum of Agreement.
- Royal Commission into the Robodebt Scheme (2023). Final Report.
Sectoral integrated analyses — supporting literature
- US Federal Reserve, OCC (2024). Supervisory communications on AI in financial services.
- US Food and Drug Administration (2021, 2023, 2024). AI/ML SaMD framework and PCCP guidance.
- American Bar Association (2024). Formal Opinion 512 on AI use by lawyers.
- Stanford CodeX (2024). AI in Law programme reports.
Integrated framework methodology references
- Russell, S. and Norvig, P. (2020). Artificial Intelligence: A Modern Approach, 4th edition.
- European Parliament and Council (2024). Regulation (EU) 2024/1689 (the AI Act).
- Anthropic (2023, 2024, 2025). Responsible Scaling Policy (multiple versions).
- OpenAI (2023, 2024, 2025). Preparedness Framework (multiple versions).