Chapter 4 — Building the AI-native enterprise

If the AI factory is the architectural what, the McKinsey six-capability framework — drawn from Rewired (Lamarre, Smaje, and Zemmel, 2023) — is the operational how. This chapter develops each capability with the depth a graduate manager needs to apply the framework to a real transformation, anchors it with three transformation cases at chapter-length depth, and connects the framework to the organisational-economics literature.

Chapter overview

This chapter is structured around two questions a graduate student must be able to answer after reading it. First, why all six capabilities, in concert? — addressed in §4.2 through the McKinsey banking-benchmark study and the in-concert evidence. Second, what does each capability look like operationally? — addressed in §4.4 through §4.9, one section per capability. We then develop three anchor cases at chapter-length depth (§4.10–§4.12), survey the named failure modes (§4.14), and connect to the organisational-economics literature (§4.15).

Reading this chapter

This is the most operationally prescriptive chapter in the book. It is also the one most exposed to the McKinsey-as-source critique we raised in Chapter 1 — the framework is partly an artefact of McKinsey’s own consulting practice. Read it with that conflict of interest in mind, and let the empirical evidence (the banking benchmark, the named transformation cases) carry the weight.

The Rewired thesis

Companies that have rewired themselves around digital and AI massively outperform their competitors. We have observed time and again that the difference between leaders and laggards comes down to six interlocking capabilities. Building any one alone is insufficient; success requires building all six in concert.

— Lamarre, Smaje, and Zemmel (2023), Introduction

The McKinsey banking-benchmark study

The headline empirical claim in Rewired — that rewired firms outperform their competitors substantially — rests primarily on a McKinsey banking-benchmark study comparing 20 global banks identified as digital and AI leaders against 20 laggards over five years (2017–2022). The leaders outperformed laggards on total shareholder return (TSR) by approximately 14 percentage points per year.

A graduate reader should immediately ask three methodological questions:

  1. Selection. How were the 20 leaders and 20 laggards chosen? If the selection was based on observed performance, the result is partly tautological. McKinsey’s published methodology selects banks based on capability assessments conducted before the performance window opened, which mitigates but does not eliminate the concern.
  2. Causal identification. A 14pp TSR gap could reflect (a) a causal effect of rewiring on performance, (b) reverse causation (well-performing banks have more resources to invest in capability), or (c) common cause (e.g., both performance and capability investment are driven by management quality). McKinsey’s analysis controls for prior-period TSR but cannot rule out (c).
  3. Generalisation. The study is on banks. Whether the same pattern holds in non-financial sectors with different value-chain economics is an open empirical question. McKinsey’s broader engagement experience suggests it does, but the published quantitative evidence is much weaker outside banking.

Despite these caveats, the qualitative pattern — capability investment in concert outperforms capability investment in isolation — is robust across the broader McKinsey engagement record and consistent with the Brynjolfsson–Hitt complementary-intangibles framework we developed in Chapter 1.

Why six capabilities, not five or seven

A fair question is why the capability count is six. The framework’s origin is empirical pattern-matching across McKinsey’s transformation engagements; the six capabilities are not the result of a derivation from first principles. Reasonable alternative decompositions exist:

  • A five-capability variant might combine technology and data (since the data layer increasingly depends on cloud-native infrastructure).
  • A seven-capability variant might split roadmap into strategy and portfolio management (since the executive choice of which domains to prioritise is distinct from the operational choice of how to sequence them).
  • An eight-capability variant might add financial discipline (capital allocation, ROI tracking, value attribution) and governance (the regulatory and ethics function we develop separately in Chapter 14).

The graduate reader should treat the six-capability number as a useful organising convention rather than as an architectural claim. What matters is that the framework forces a manager to think about each of the listed capabilities, and to build them in concert rather than piecemeal.

The six capabilities

flowchart TB
    subgraph Top [" "]
        direction LR
        R[1. Roadmap<br/>Domain-based<br/>2-5 priority domains]
        T[2. Talent<br/>30-70 in-source<br/>2-in-a-box leadership]
        O[3. Operating model<br/>Agile pods<br/>Outcome-aligned KPIs]
    end
    subgraph Bottom [" "]
        direction LR
        Te[4. Technology<br/>Cloud-native<br/>Decoupled architecture]
        D[5. Data<br/>Data products<br/>Federated governance]
        A[6. Adoption<br/>Change management<br/>The most-overlooked]
    end
    style R fill:#e8f3fa,stroke:#006DAE
    style T fill:#fdf3e7,stroke:#d97706
    style O fill:#e9f5ec,stroke:#059669
    style Te fill:#f3eef7,stroke:#7c3aed
    style D fill:#fef9e7,stroke:#b8860b
    style A fill:#ffe4e1,stroke:#dc2626
Figure 4.1: The Rewired six capabilities.

Capability 1 — Roadmap

The roadmap is the strategic frame. Rewired’s most distinctive prescription is to organise transformation by domains — coherent, end-to-end customer or operational journeys typically 10–15 in number for a large enterprise — and then to attack 2–5 of them seriously.

Why domains rather than use cases

The book reports that roughly 80% of successful transformations re-anchor on domains after starting elsewhere (capability-led, technology-led, or use-case-led). Three reasons recur:

  1. Domain redesign forces alignment of the other five capabilities. A talent investment without a domain target is unmoored; a technology investment without a domain target is shelfware; a data product without a domain target is a dataset. Domains are the unit at which the complementary intangibles of Chapter 1 aggregate into a measurable financial outcome.
  2. Domains are the unit at which workflow redesign matters. The Iansiti–Lakhani thesis at the operational level: AI capability is necessary but value capture requires workflow redesign, and workflow lives at the domain level.
  3. Domains map to executive accountability. A use case has no natural owner; a domain does. The executive sponsor can be held to the financial and operational outcomes of a domain in a way that is impossible for a portfolio of disconnected use cases.

Examples of well-defined domains

| Industry | Domain examples |
| --- | --- |
| Retail banking | Mortgage origination; onboarding and KYC; personal-loan customer journey; collections; wealth management onboarding |
| Pharmaceutical R&D | Target identification through Phase II; clinical trial site selection and patient recruitment; regulatory submission; pharmacovigilance |
| Retail | Post-purchase customer service; merchandise planning and allocation; fulfilment optimisation; pricing and promotions |
| Insurance | Underwriting; claims first notice of loss; fraud detection; renewals and retention |
| Telecommunications | Customer activation; network capacity planning; field-service dispatch; churn prevention |

A useful exercise: pick a firm in your industry and list its 10–15 domains. The list is rarely controversial when shared with senior operators of the firm; arriving at it is itself a clarifying exercise.

Domain prioritisation

The 2–5 domain selection is the strategic act. McKinsey’s recommended prioritisation matrix scores each domain on:

  • Value at stake: estimated NPV of full domain transformation, typically 18–36 months out.
  • Probability of success: function of domain readiness (data, talent, executive sponsorship) and competitive pressure.
  • Strategic importance: how core is this domain to the firm’s competitive position?

The matrix typically produces 2–5 domains in the high-value × high-probability × high-importance corner. Where it does not — where the high-value domains are low-probability, or vice versa — the firm is in a strategically awkward position that the framework cannot resolve by itself.
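The scoring logic above can be sketched in a few lines. Note that the domain names, the 1–5 scores, and the multiplicative scoring rule are illustrative assumptions for this sketch, not McKinsey's published rubric:

```python
# Illustrative sketch of the domain-prioritisation matrix described above.
# Domain names, scores, and the multiplicative rule are all hypothetical.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    value_at_stake: int        # estimated NPV of transformation, scored 1-5
    p_success: int             # readiness and competitive pressure, 1-5
    strategic_importance: int  # centrality to competitive position, 1-5

def priority_score(d: Domain) -> int:
    # Multiplicative: a domain weak on any one axis scores low overall,
    # capturing the high-value x high-probability x high-importance corner.
    return d.value_at_stake * d.p_success * d.strategic_importance

domains = [
    Domain("Mortgage origination", 5, 4, 5),
    Domain("Collections", 3, 5, 3),
    Domain("Wealth-management onboarding", 4, 2, 4),
]

# Shortlist the top domains (the 2-5 the firm will attack seriously).
shortlist = sorted(domains, key=priority_score, reverse=True)[:2]
print([d.name for d in shortlist])
```

The multiplicative form encodes the point made above: a domain that scores poorly on any one axis cannot be rescued by the other two.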

Sequencing the roadmap

Once 2–5 domains are selected, they are sequenced. McKinsey’s recommended pattern:

  • Lighthouse first: a fast-win domain (12–18 months to measurable value) that funds and de-risks the broader transformation.
  • Anchor second: the firm’s largest-value domain, with a 24–36 month transformation timeline.
  • Adjacent third and fourth: domains that benefit from the lighthouse’s data products and infrastructure, accelerating their own timelines.

The lighthouse-anchor-adjacent pattern is the most-cited template; the Lamarre, Smaje, and Zemmel (2023) chapter on roadmap construction develops it in detail.

Capability 2 — Talent

The talent capability has three distinctive features in the McKinsey treatment.

The 30–70 shift

Leading firms typically bring 30–70% of their digital and AI talent in-house, reversing the 20-year drift toward systems-integration outsourcing. DBS Bank moved from approximately 15% in-house tech in 2009 to roughly 90% by 2018 — the most-cited example. The thesis: critical capability cannot be effectively outsourced because it must be embedded in the business’s everyday decisions.

The McKinsey research identifies six roles for which in-sourcing is essentially mandatory:

| Role | Why in-sourcing is mandatory |
| --- | --- |
| Data engineer | Data pipelines are the firm’s core asset; outsourcing them creates ongoing dependency. |
| Data scientist | The interface between business problems and modelling is irreducibly tacit. |
| ML engineer | Production deployment, monitoring, and iteration require deep system context. |
| Product owner | Continuous trade-offs between scope, time, and quality require firm-specific judgement. |
| Scrum master / agile coach | Coaching is a long-cycle relationship with team members. |
| UX/UI designer | Design choices encode firm-specific brand and customer-segment knowledge. |

Other roles (DevOps, security, infrastructure, project management) can often remain partially outsourced without serious capability loss.

2-in-a-box platform leadership

Each technology platform is co-led by a business owner (responsible for outcomes) and a technology owner (responsible for the build), reporting jointly to a single executive sponsor. This breaks the long-standing IT-versus-business gap that has defeated many digital transformations.

The model is most fully realised at:

  • DBS: 33 platforms operate under 2-in-a-box leadership.
  • Spotify: tribes and squads, where each squad has both a product owner (business) and an engineering manager (technology) reporting jointly.
  • ING: similar tribes-and-squads structure, post a 2015 reorganisation that has been extensively studied.

The pattern’s effectiveness depends on three conditions: the two leaders have genuinely equal authority; the executive sponsor settles disputes promptly; and the organisational performance management system rewards joint outcomes rather than functional success.

The talent supply problem

The talent capability is constrained by global supply. The Stanford AI Index 2025 (Stanford HAI, 2025) documents that ML PhD output remains heavily concentrated in a handful of US universities (Stanford, Carnegie Mellon, MIT, Berkeley, Princeton, NYU, Toronto). The largest absolute talent pool is in the United States, with China second and India a fast-rising third.

For Asia-Pacific firms outside these three countries, the talent strategy must combine:

  • In-sourcing of the most critical roles (the six listed above) at competitive total compensation.
  • Capability building through targeted graduate-school sponsorship and partnerships with regional universities.
  • Targeted overseas hiring of senior technical leaders with domestic-region cultural fit.
  • Internal academies that retrain non-technical staff for technical-adjacent roles (data analyst, junior ML engineer, product owner).

The total cost is substantial. A well-resourced regional bank in Southeast Asia building this capability from scratch should budget 3–5 years and US$30–60 million in incremental personnel cost above the steady-state outsourced baseline.

Compensation benchmarks

A graduate-level treatment requires acknowledging that the market-clearing salary for AI talent has shifted dramatically since 2022. For senior ML engineers in 2024–2026:

| Region | Total comp band (USD), 5–10 years’ experience |
| --- | --- |
| US tier-1 hyperscalers and frontier labs | 500K–1.5M+ |
| US enterprise (banks, healthcare) | 250K–500K |
| EU enterprise | 150K–300K |
| Singapore | 200K–400K |
| Hong Kong | 180K–350K |
| Kuala Lumpur, Jakarta | 60K–150K |

The implication for emerging-market firms: matching US-tier compensation is impractical, but the gap can be partly closed through equity, geographic premium, and project autonomy. Firms unwilling to pay any premium will struggle to retain senior talent against the pull of frontier labs and hyperscalers.

Capability 3 — Operating model

The operating-model prescription is convergent across Rewired, the agile literature, and the AI factory framework: cross-functional product pods of 8–10 people, owning a domain end-to-end, releasing software continuously, and measured on outcome KPIs (revenue, cost, satisfaction) rather than activity KPIs (lines of code, tickets closed).

The pod as the modern unit of digital execution

A pod’s anatomy is consistent across mature implementations:

  • Product owner: prioritises the backlog, owns outcome metrics, makes scope decisions.
  • Engineering manager / scrum master: coaches the team’s process, removes blockers, owns delivery cadence.
  • 3–5 software engineers: build and operate the software.
  • 1–2 data scientists / ML engineers: build and operate the models (where applicable).
  • 1 designer (UX/UI): owns the customer-facing surface.
  • 1 data engineer or data analyst: owns the data products consumed by the pod.

The pod is autonomous in the sense that it can ship software to production without external sign-off, given pre-agreed guardrails (security, compliance, brand). It is not autonomous on strategy: the pod’s priorities are set by the platform owner above it.

Scrum, kanban, SAFe, and Spotify

Four agile operating-model variants compete in practice:

  • Scrum: time-boxed sprints (typically 2 weeks), with backlog grooming, planning, retrospective. Best for predictable-cadence work where the team’s domain is mature.
  • Kanban: continuous flow with WIP limits. Best for unpredictable work (e.g., incident response, infrastructure operations).
  • SAFe (Scaled Agile Framework): a heavyweight scaling methodology with portfolio, program, and team layers. Best for very large transformations (10,000+ people) that need explicit scaling structure. Often criticised for re-introducing the bureaucracy that agile was meant to eliminate.
  • Spotify model (tribes, squads, chapters, guilds): a flexible structure with squads as autonomous teams, tribes as collections of squads working on related problems, chapters as functional communities of practice (e.g., all backend engineers across squads), guilds as cross-cutting interest communities. Spotify itself has since moved away from the rigid model, but the principle endures.

Most modern AI factories run on a Spotify-style operating model with light SAFe scaffolding for cross-tribe coordination. The DBS implementation is approximately this. ING’s 2015 reorganisation is the most-cited European example.

KPI alignment

Rewired’s most-cited diagnostic question for boards is: How many of your pods can ship to production this week without C-suite sign-off? The answer in most large firms is “essentially none”; in successful transformations, the answer is “most of them.”

The deeper question is how the firm’s KPI system rewards pod-level outcomes. Typical mature pattern:

  • Pod-level KPIs: outcome metrics (revenue, customer satisfaction, conversion rate) tied to the pod’s domain.
  • Platform-level KPIs: aggregate metrics across pods within a platform.
  • Function-level KPIs (engineering quality, design quality, data quality): cross-pod, owned by chapter leaders.
  • Firm-level KPIs: the standard P&L metrics, with explicit attribution to pods and platforms.

The hardest part is the cross-pod attribution. A pod may improve a customer-facing metric that another pod’s actions also affect. McKinsey’s recommended approach: shared metrics for shared outcomes, with explicit upstream-downstream agreements between pods.

Capability 4 — Technology

The technology capability is summarised in three architectural moves: decouple (microservices, APIs, well-defined contracts), cloud-native (elastic infrastructure, managed services), self-service (developer platforms, automated provisioning, MLOps for production ML).

The decoupled architecture

The decoupling thesis is that monolithic enterprise applications produce coordination costs that grow super-linearly in firm size. Microservice architectures with well-defined APIs reduce these coordination costs by letting teams release independently.

The full pattern:

  • Microservices: small, single-purpose services with their own databases.
  • API contracts: versioned, schema-validated, stable interfaces.
  • Event streaming (Kafka, Pulsar): asynchronous communication between services.
  • Service mesh (Istio, Linkerd): managed service-to-service communication with retries, circuit breakers, observability.
  • Polyglot persistence: different services use different databases (relational, document, graph, time-series, vector) appropriate to their domain.

The decoupling thesis is not universal — there is now a counter-current arguing that microservices are over-engineered for many use cases (“modular monolith” or “majestic monolith” patterns). The right architecture depends on team size and coordination cost; firms with <50 engineers often do better with a modular monolith, while firms with >500 engineers usually do better with microservices.

Cloud-native and the multi-cloud reality

The DBS figure of 99% workload migration to cloud is a useful directional benchmark — most large firms today are at 30–60%. The 2024–2026 evolution is hybrid: the pure public-cloud thesis has softened as some workloads (training large foundation models, serving high-throughput inference, regulated workloads) move back to on-premise or sovereign cloud arrangements. The right framing is “cloud-native, multi-cloud, and partially sovereign” rather than “all on AWS”.

A specific technology choice worth memorising is the data-mesh-versus-data-lake-versus-data-warehouse debate (covered in Chapter 3, §3.4). McKinsey’s reading by 2025 is that the lakehouse pattern has won the architectural argument and the data mesh has won the organisational argument; most firms operationally run a hybrid.

Self-service and the developer platform

The third architectural move is internal-developer-platform (IDP) investment. The IDP exposes infrastructure, CI/CD, monitoring, and ML serving as self-service capabilities consumed by pods through stable APIs. The IDP team is itself a product team, with the firm’s other engineers as its customers.

Mature IDPs measure themselves on:

  • Time from new engineer to first production deployment (target: <2 weeks).
  • Mean time to provision new service (target: <1 day).
  • Self-service deployment fraction (target: >95%).
  • Internal NPS from engineering teams.

The Spotify and Netflix engineering blogs are the public references; the open-source CNCF projects (Backstage, Argo, Kubernetes operators) are the technology substrate.

Capability 5 — Data

The fifth capability is the one that has shifted most in the post-2020 period. The Rewired prescription is the data product: a managed, versioned, governed dataset with a clear product owner, well-defined consumers, SLAs, and quality metrics.

What a data product is

A data product is to a dataset what a software product is to a script. It has:

  • A name (e.g., “Customer 360” or “Loan-application risk scores”).
  • A schema with documented field meanings.
  • An owner (a named team and individual).
  • Documented consumers with their use cases.
  • SLAs on freshness, availability, and quality.
  • Quality metrics (completeness, validity, uniqueness, consistency, timeliness, accuracy).
  • A versioning policy for breaking and non-breaking schema changes.
  • A contract that consumers can rely on.

A typical large firm has 50–500 candidate data products; the goal is to operationalise the 20–50 highest-value ones.
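The attribute list above lends itself to a machine-readable descriptor. A minimal sketch, in which all names, fields, and SLA thresholds are hypothetical:

```python
# Minimal sketch of a data-product descriptor capturing the attributes
# listed above. Names, schema fields, and SLA thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SLA:
    max_staleness_hours: int   # freshness
    min_availability: float    # e.g. 0.999
    min_completeness: float    # fraction of required fields populated

@dataclass
class DataProduct:
    name: str
    owner: str                 # a named team (and, in practice, individual)
    schema: dict               # field name -> documented type
    consumers: list            # documented consumers and their use cases
    sla: SLA
    version: str = "1.0.0"     # versioning policy applies on schema change

    def meets_sla(self, staleness_hours: float, availability: float,
                  completeness: float) -> bool:
        return (staleness_hours <= self.sla.max_staleness_hours
                and availability >= self.sla.min_availability
                and completeness >= self.sla.min_completeness)

customer_360 = DataProduct(
    name="Customer 360",
    owner="retail-data-pod",
    schema={"customer_id": "string", "lifetime_value": "decimal"},
    consumers=["churn-model", "personalisation-engine"],
    sla=SLA(max_staleness_hours=24, min_availability=0.999,
            min_completeness=0.98),
)
print(customer_360.meets_sla(6.0, 0.9995, 0.99))   # all thresholds met
```

The point of the descriptor is not the code itself but that SLA compliance becomes checkable by machine rather than asserted in a wiki page.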

McKinsey-reported impact

McKinsey’s reported impact for firms that adopt this approach: 90% faster delivery of AI use cases, 30% lower TCO of analytics, with reusability lifting downstream value. The benefits compound: each new use case gets cheaper and faster as the data-product layer matures.

A graduate reader should be sceptical of the precise percentages — they are based on McKinsey’s own engagement-portfolio averages and almost certainly exhibit selection bias — but the qualitative pattern (compounding returns to data-product investment) is well-supported across the broader data-engineering literature.

RAG and the 2024+ evolution

The 2024–2026 development is the integration of retrieval-augmented generation (RAG) infrastructure into the data layer. RAG patterns turn unstructured corporate content (documents, emails, presentations, code) into queryable knowledge surfaces that LLMs can retrieve from. The data engineering required to make RAG work — chunking, embedding, indexing, permission preservation, freshness management — is often misunderstood as “just put it in a vector database” (we covered this in Chapter 3, §3.4). In practice, it is a full data-product engineering exercise.

Data contracts

A specific technical pattern worth memorising is the data contract: a formally specified, versioned, machine-readable agreement between a data producer and its consumers about schema, semantics, and SLAs. Data contracts are tested in CI/CD: a producer’s schema change that would break a consumer’s contract fails the build. The pattern is borrowed from API contracts and is the operational discipline that distinguishes a data-mesh-mature firm from one that has merely re-labelled its datasets as “data products”.
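A minimal sketch of the contract check such a CI/CD pipeline might run, assuming schemas are expressed as plain field-to-type mappings (the field names are hypothetical, and real implementations use richer schema languages):

```python
# Sketch of a data-contract check of the kind described above: a producer
# schema change that breaks a consumer's contract fails the build.
# Field names are hypothetical.

consumer_contract = {       # what the downstream consumer relies on
    "customer_id": "string",
    "risk_score": "float",
}

def breaking_changes(producer_schema: dict, contract: dict) -> list:
    """Return the contract fields the producer schema no longer satisfies."""
    return [name for name, ftype in contract.items()
            if producer_schema.get(name) != ftype]

# Non-breaking change: the producer adds a new field. Build passes.
v2 = {"customer_id": "string", "risk_score": "float", "segment": "string"}
assert breaking_changes(v2, consumer_contract) == []

# Breaking change: the producer renames risk_score. CI would fail here.
v3 = {"customer_id": "string", "score": "float"}
assert breaking_changes(v3, consumer_contract) == ["risk_score"]
```

Additive changes pass while renames and type changes fail, which is exactly the asymmetry a versioning policy for breaking versus non-breaking changes needs to enforce.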

Capability 6 — Adoption

The most-overlooked capability and, by McKinsey’s reckoning, the single most common failure mode. Adoption requires change management, measured usage, embedded workflow integration, and ongoing reinforcement through KPIs.

Why adoption is harder than it looks

The default failure mode is a deployment that nobody uses. Symptoms:

  • Licence purchase data shows 10,000 seats; usage data shows 800 active users.
  • Pilot users praise the system; rollout users do not engage.
  • The system’s metrics look good in the analytics dashboard; the operational metrics it was supposed to improve do not move.

The root cause is almost never technical. It is that the system was designed to fit the new workflow, but the users continue to operate in the old workflow because it is what their performance is measured on, what their training prepared them for, and what their muscle memory executes.

Change management theory

The classic change-management literature provides three useful frames:

  • Lewin’s three-stage model: unfreeze, change, refreeze. Useful for thinking about the temporal arc of change but light on operational specifics.
  • Kotter’s eight-step model (1996): create urgency, build a guiding coalition, form a strategic vision, enlist a volunteer army, enable action by removing barriers, generate short-term wins, sustain acceleration, institute change. The most-cited operational guide.
  • ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement): a person-level model that helps diagnose where in the change cycle a particular individual or team is stuck.

A graduate manager should be able to use all three frames. They are complementary rather than competing.

The 50% budget rule

The Rewired framework is unusually emphatic that roughly half of every transformation budget should be spent on adoption — a number that strikes most firms as wildly high until they have failed to capture value from a technically successful deployment.

The specific change-management activities:

  • Training: role-specific, not generic. A relationship manager, a credit analyst, and a fraud investigator need different training even when they all use the same underlying tool.
  • Incentive realignment: KPIs and performance reviews. If the relationship manager is still measured on calls-completed, and the new tool reduces the number of calls but improves their quality, the manager will not adopt the tool.
  • Workflow redesign: often the hardest. Requires line-of-business operators to rework their day-to-day processes, with all the cross-team coordination that implies.
  • Executive sponsorship and communication: consistent, not episodic. The CEO mentions the transformation in every quarterly update, not just at the launch.
  • Ongoing measurement: adoption rate (% of intended users active monthly), depth of use (median actions per active user), and impact (the operational metric the tool was supposed to improve).
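The first two measures above can be computed directly from a usage log (impact lives in the operational system's own metrics). A sketch with hypothetical data:

```python
# Adoption rate and depth of use, computed from a hypothetical monthly
# usage log (user -> actions this month).
usage = {"alice": 120, "bob": 3, "carol": 0, "dan": 45, "eve": 0}
intended_users = len(usage)            # everyone the rollout targeted

active = {u: n for u, n in usage.items() if n > 0}
adoption_rate = len(active) / intended_users          # % active monthly

depths = sorted(active.values())
median_depth = depths[len(depths) // 2]               # median actions/user

print(f"adoption rate: {adoption_rate:.0%}, median depth: {median_depth}")
```

Even this toy example shows why both numbers are needed: three of five users are "active", but the median active user and the power user differ by an order of magnitude.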

Microsoft’s internal Copilot experience

Microsoft’s own internal deployment of M365 Copilot produced widely shared lessons about adoption being bimodal: a small cohort of power users (10–15%) adopts aggressively and unlocks substantial productivity gains, while a long tail of users barely engages. The variance can be reduced by training and incentive design but cannot be eliminated.

The implication is that adoption metrics should be measured at the user-cohort level, not the firm-wide aggregate. A firm with 78% of employees using the tool occasionally and 12% using it heavily is in a fundamentally different position from a firm with 30% of employees using it heavily and 60% not engaging — even though the average usage rates may be similar.
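Cohort-level measurement along these lines might look like the following sketch, in which the heavy-use threshold and both usage distributions are hypothetical:

```python
# Sketch of cohort-level adoption measurement: decompose users into heavy,
# occasional, and non-users rather than reporting a firm-wide average.
# The threshold (50 actions/month) and both distributions are hypothetical.
def cohorts(monthly_actions: list, heavy: int = 50) -> dict:
    n = len(monthly_actions)
    return {
        "heavy": sum(a >= heavy for a in monthly_actions) / n,
        "occasional": sum(0 < a < heavy for a in monthly_actions) / n,
        "non_user": sum(a == 0 for a in monthly_actions) / n,
    }

firm_a = [60] * 12 + [5] * 78 + [0] * 10   # broad but shallow adoption
firm_b = [60] * 30 + [5] * 10 + [0] * 60   # concentrated adoption

print(cohorts(firm_a))
print(cohorts(firm_b))
```

The two firms are in very different positions even though a single firm-wide "usage" number would obscure the difference in shape.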

Anchor case 1 — Freeport-McMoRan and the Bagdad concentrator

Freeport-McMoRan applied ML on existing sensor data — without new capital expenditure — at its Bagdad concentrator in Arizona, achieving approximately 5% throughput improvement. The case is canonical because it illustrates that the AI factory’s data pipeline often runs on the data the firm already collects; the ROI is unlocked by the experimentation discipline, not by buying new sensors.

The pre-transformation state

The Bagdad concentrator processes copper ore through grinding, flotation, and dewatering stages. Operators had decades of experience with the process; the plant had hundreds of sensors collecting data continuously; and standard process-control software stored the data in historian databases. What was missing was a data-driven understanding of which operating-parameter combinations (mill speed, reagent dosing, water addition, recycle ratios) maximised throughput for the current ore composition.

What changed

Freeport’s data team built a model that:

  1. Ingested historical sensor data (already collected, never previously analysed at scale).
  2. Identified the operating-parameter combinations associated with high-throughput periods.
  3. Recommended adjustments to operators in real time, with confidence intervals.
  4. Tracked the operators’ adoption rate and the impact on throughput.

The team was small (~8 FTEs at peak). The infrastructure was modest (cloud compute and standard ML tooling). The model was not technically sophisticated — it was essentially a regression with carefully engineered features. The 5% throughput improvement, on a high-fixed-cost asset, was financially substantial.
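The modelling step can be caricatured in a few lines: a least-squares fit over "historian" sensor data, used to rank candidate operating settings. All features, ranges, and coefficients below are invented for illustration, and the real system also reported confidence intervals and tracked operator adoption:

```python
# Deliberately simple caricature of the approach described above: fit a
# linear model on synthetic historical sensor data, then rank candidate
# operating settings. Everything here is illustrative, not Freeport's model.
import numpy as np

rng = np.random.default_rng(0)

# Historical operating data: mill speed, reagent dose, water addition.
lo, hi = [60, 1.0, 10], [90, 3.0, 30]
X = rng.uniform(lo, hi, size=(500, 3))
true_w = np.array([0.8, 5.0, -0.3])                 # unknown in practice
throughput = X @ true_w + rng.normal(0, 2, 500)     # synthetic tonnes/hour

# Fit OLS to learn which parameter combinations drive throughput.
w, *_ = np.linalg.lstsq(X, throughput, rcond=None)

# Score candidate settings and surface the best one to operators.
candidates = rng.uniform(lo, hi, size=(50, 3))
best = candidates[np.argmax(candidates @ w)]
print("recommended (speed, reagent, water):", best.round(2))
```

The sketch makes the chapter's point concrete: the statistical machinery is ordinary; the hard parts were the historian-data plumbing, the real-time delivery to operators, and the adoption tracking.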

Why this is the canonical SME case

The case is the canonical Rewired anchor for two reasons:

  1. The data was already there. The firm did not need to build new collection infrastructure, only to use what it had.
  2. The ROI funded the broader transformation. The financial value released by the pilot was used to invest in the data-product, MLOps, and pod infrastructure that subsequently scaled across Freeport’s North American operations.

This is the classic “land and expand” pattern in industrial AI transformation. A graduate-level reading: the case shows that the binding constraint in industrial AI is rarely data collection or modelling — it is the experimentation discipline and operator adoption (Capability 6) that turn the analysis into operational value.

Anchor case 2 — DBS Bank’s GANDALF transformation

DBS Singapore is the most-cited example of a successful banking transformation built on the Rewired six capabilities. Under CEO Piyush Gupta, the bank set the explicit aspiration of being the “D” in GANDALF — an acronym that deliberately placed DBS alongside Google, Amazon, Netflix, Apple, LinkedIn, and Facebook.

The starting condition (2009)

Gupta took over DBS in 2009. The bank’s reputation was poor; its internal slogan was reputedly “Damn Bloody Slow” (a play on DBS). Tech was 85% outsourced to systems integrators. Customer satisfaction trailed regional peers. Branch network was the primary distribution channel.

The rewiring (2009–2018)

The transformation ran over roughly nine years and produced changes across all six capabilities:

🎯 Anchor case — DBS metrics

  • 33 platforms run in 2-in-a-box leadership, each pairing a business and a technology owner.
  • 15% to 90% in-source tech in roughly six years, a structural reversal of the prior outsourcing posture.
  • 99% of workloads on cloud infrastructure.
  • S$150M additional revenue + S$25M from loss prevention attributed to AI in a single recent year.
  • 50,000 personalised daily nudges delivered to consumer banking customers.
  • Lowest staff turnover in Singapore (10% vs 15–20% industry average) — using ML to predict employee attrition risk and intervene early.
  • Credit-card origination time fell from 21 days to 4 days — a roughly five-fold improvement driven by journey redesign, not just ML.
  • 3× productivity for engineers using internal AI assistants by 2025.

Capability-by-capability

A graduate-level reading decomposes DBS into the six capabilities:

| Capability | DBS implementation |
| --- | --- |
| Roadmap | Ten domains identified; transformation focused on five (cards, loans, deposits, treasury, wealth) over phased multi-year timelines. |
| Talent | 15% → 90% in-source over 6 years; new internal training academy; aggressive hiring of senior engineers from regional tech firms. |
| Operating model | 33 platforms under 2-in-a-box; agile pods structured around customer journeys; outcome KPIs tied to platform leaders’ compensation. |
| Technology | 99% cloud workload migration; standard CI/CD, MLOps, and self-service developer platform. |
| Data | Federated data products owned by domain teams, with central governance. |
| Adoption | Heavy investment in change management, training, and ongoing reinforcement. |

What made it work

Three features distinguish DBS from less-successful banking transformations:

  1. CEO sponsorship over a multi-year horizon. Gupta was CEO throughout the transformation. Changes in CEO mid-transformation have killed many such efforts elsewhere.
  2. Genuine in-sourcing, not staff augmentation dressed up as in-sourcing. DBS hired senior engineering leaders into the bank, gave them authority, and let them rebuild from inside.
  3. Explicit cultural transformation alongside the technical one. Gupta talked publicly about being a “27,000-person start-up”. The cultural framing made the operational changes legible to staff.

What it cost

DBS’s tech budget grew from approximately S$0.8B in 2009 to over S$1.5B by the late 2010s, before normalising. The incremental cost of the transformation was substantial — but the incremental revenue (S$150M+ annually attributable to AI) and the avoided-cost (lower fraud losses, lower attrition, faster origination) more than paid back the investment within five years.

Why DBS is the canonical case

DBS demonstrates that an established bank can become a digital firm without cannibalising its branch network — but only by rewiring all six Rewired capabilities simultaneously over nearly a decade. A two-year transformation that replaces branches with apps is not the DBS pattern; the DBS pattern is a nine-year capability rebuild that reshapes the bank’s entire operating model.

Anchor case 3 — LEGO

LEGO’s digital transformation is a useful counter-anchor to DBS because it operates in a non-financial-services context. LEGO rewired across e-commerce, omnichannel retail, the LEGO Ideas community platform, and the Mindstorms / digital play extensions.

The starting condition (early 2000s)

LEGO had nearly bankrupted itself in the early 2000s by over-extending into film, theme parks, and product variants. The 2003–2004 financial crisis at LEGO (multiple years of substantial losses, with the company surviving only on a bailout from its controlling family) forced a strategic refocus on the core brick. The digital transformation that began in the late 2000s was, in part, a response to the recognition that the brick alone could not sustain the firm’s growth ambitions.

The rewiring (2010–2025)

LEGO’s transformation produced:

  • E-commerce: direct-to-consumer LEGO.com became a major channel alongside retail. Now ~25% of revenue.
  • Omnichannel retail: 1,000+ branded LEGO stores globally, with integrated inventory and pricing across e-commerce.
  • LEGO Ideas community platform: user-submitted set ideas, voted on by the community, with a small fraction selected for production. Both a customer-engagement mechanism and a low-cost product-discovery channel.
  • Digital play extensions: Mindstorms, BOOST, Hidden Side, and AR-extended sets. The strategic question of how digital play complements versus competes with the physical brick remains unresolved, but the experimentation has produced revenue and learnings.
  • Internal data products: demand forecasting, supply-chain optimisation, marketing personalisation.

What made it work

LEGO managed the rewiring while maintaining the brand’s physical-product identity — which is exactly the challenge incumbents in retail, hospitality, and manufacturing face. The 2020–2025 financial outcomes (revenue compounding above 10% per year through pandemic and post-pandemic; market share gains against competitors; successful expansion into China) are partly attributable to the digital infrastructure built in the preceding decade.

Why LEGO is the canonical non-financial-services case

DBS’s transformation is sometimes dismissed by non-banking firms as “easy because banking is intrinsically digital”. LEGO is the standard counter-example: a 95-year-old firm whose core product is a physical plastic brick, transformed using essentially the same six-capability framework, with comparable financial outcomes. The framework’s generality is not, on this evidence, banking-specific.

Why all six capabilities, in concert

The strongest finding in the McKinsey benchmark data is that capability investment in concert outperforms capability investment in isolation by a wide margin. A firm that invests in talent and technology but neglects roadmap, data, and adoption typically fails to capture firm-level financial impact — even when individual deployments are technically successful.

The implication for transformation programmes is sobering: the right minimum unit of investment is all six capabilities at once, applied to 2–5 domains, sustained over 3–7 years. There is no shorter path that works.

Common transformation failure modes

The chapter ends with the reverse — the failure modes that Rewired documents and that McKinsey teams have observed across hundreds of engagements:

  • The use-case zoo. Symptom: a CEO can list 50 pilots but cannot point to a single domain that has been transformed. Capability gap: Roadmap (no domain prioritisation).
  • The platform replatform. Symptom: $200M spent on a new core banking system before any new customer-facing functionality ships. Capability gap: Roadmap (no business outcome) + Operating model (no agile delivery).
  • The hire-and-hope. Symptom: a Chief Data Officer or Chief AI Officer is appointed while the rest of the operating model is unchanged; the CDO/CAIO leaves within 18 months. Capability gap: Operating model + Talent (no authority, no team).
  • The pilot-to-production gap. Symptom: 80% of models built, 10% of models in production. Capability gap: Technology + Data + Operating model.
  • The adoption shortfall. Symptom: licence purchase data shows 1,000 seats; usage data shows 100 active users. Capability gap: Adoption (usually combined with a weak operating model).
  • The vendor handoff. Symptom: transformation outsourced wholesale to a systems integrator, with the firm reduced to a project sponsor. Capability gap: Talent (no in-sourcing).

Each failure mode maps to one or more missing capabilities. Treating the failure mode (e.g., training more people on the unused tool) without addressing the capability gap is a recipe for repeating the failure.
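The adoption-shortfall diagnosis in particular hinges on putting deployment and usage numbers side by side, since either alone can be reported as success. A minimal sketch, with illustrative figures (the metric names and thresholds here are assumptions, not a standard):

```python
# Minimal sketch of the deployment-vs-adoption distinction behind the
# adoption-shortfall failure mode. All figures are illustrative.

def adoption_metrics(seats_licensed, weekly_active_users, weekly_queries):
    """Return deployment, breadth, and depth numbers side by side."""
    return {
        "seats_licensed": seats_licensed,                                  # deployment
        "adoption_rate": weekly_active_users / seats_licensed,             # breadth
        "queries_per_active_user": weekly_queries / weekly_active_users,   # depth
    }

m = adoption_metrics(seats_licensed=1000, weekly_active_users=100, weekly_queries=1500)
print(m["adoption_rate"])  # → 0.1, the 1,000-seat / 100-user symptom above
```

A steering committee shown only `seats_licensed` sees a finished deployment; shown `adoption_rate`, it sees a 90% shortfall — which is the point of measuring both.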

Connection to the organisational-economics literature

The Rewired framework’s intellectual lineage runs through the agile literature and the McKinsey engagement record, but it intersects with a deeper organisational-economics tradition.

Coordination costs

Porter’s (1985) value-chain framework and the industrial-organisation tradition of Bain (1956) both recognise that coordination costs — the cost of arranging activity across people, teams, and firm boundaries — rise more than linearly with firm scale. Rewiring is, in this sense, a programme of coordination-cost reduction: pods reduce intra-pod coordination cost; APIs reduce intra-firm coordination cost; data products reduce data-team-to-consumer coordination cost.

The framework’s underlying claim is that the AI factory’s value is unlocked only when the firm’s coordination cost falls below a threshold. Below that threshold, every new pod can ship to production weekly; above it, every change requires C-suite approval and quarterly steering committees.

Complementary intangibles, redux

The Brynjolfsson–Hitt complementary-intangibles framework we developed in Chapter 1 is this framework’s intellectual home. Each of the six capabilities is a form of complementary intangible investment, and the capabilities are interdependent: data products are useless without an operating model that produces them; an operating model is useless without the technology infrastructure that enables it; technology is useless without the talent that operates it; talent is wasted without a roadmap to direct it; and the whole edifice is wasted without adoption.
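One way to formalise the interdependence claim is a weakest-link (Leontief-style) production function, in which firm-level impact scales with the minimum capability score rather than the average. The toy sketch below uses entirely illustrative scores; the min() proxy is a modelling assumption, not a measured relationship:

```python
# Toy formalisation of the "all six in concert" claim: if the capabilities
# are strict complements, firm-level impact behaves like a weakest-link
# (min) function of the six scores, not their average. Scores illustrative.

capability_scores = {
    "roadmap": 4, "talent": 4, "operating_model": 4,
    "technology": 5, "data": 4, "adoption": 1,
}

average_score = sum(capability_scores.values()) / len(capability_scores)
binding_constraint = min(capability_scores, key=capability_scores.get)
weakest_link_impact = min(capability_scores.values())

# The average looks healthy (~3.7) while the weakest-link proxy is 1 --
# exactly the isolated-investment pattern the benchmark data penalises.
print(binding_constraint, weakest_link_impact)  # → adoption 1
```

On this model, raising the five strong capabilities from 4 to 5 changes nothing; only lifting adoption moves firm-level impact — which is the "in concert" thesis in one line of arithmetic.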

Exercises 4.1

  1. Six-capability audit. Apply the six capabilities to a firm you know. Score each on a 1–5 scale. Identify (a) the weakest capability, (b) the binding constraint, (c) the cost and timeline to bring the weakest to a 4 or 5.

  2. The 30–70 talent shift. Construct the business case a regional bank in your country would need to present to its board to make the 30–70 shift. (a) Estimate the incremental cost. (b) Estimate the avoided cost from reduced systems-integration spend. (c) Estimate the value of in-source capability that does not exist today.

  3. Domain identification. For a firm in your country, list its 10–15 domains. Construct a McKinsey-style prioritisation matrix and identify the 2–5 priority domains. Defend the selection.

  4. Data product canvas. Specify three candidate data products for a retail firm. For each: name, schema, owner, consumers, SLAs, quality metrics, versioning policy.

  5. The adoption measurement framework. Construct a measurement framework that distinguishes deployment from adoption for an LLM-based knowledge-work tool. Specify: leading indicators, lagging indicators, cohort breakdowns, intervention triggers.

  6. The DBS counterfactual. Imagine DBS had attempted the transformation in 5 years instead of 9. Identify the three changes that would have been required to compress the timeline, and discuss whether each is feasible.

  7. The LEGO counterfactual. Identify a 95-year-old physical-product firm in your country and apply the LEGO playbook to it. (a) What domains would you prioritise? (b) What capability gaps would you face? (c) What is your realistic 5-year roadmap?

  8. Failure-mode diagnosis. Pick a transformation failure you have read about in the press. (a) Map it to one or more failure modes from §Chapter 4, §4.14. (b) Identify the capability gap. (c) Design a 90-day intervention to address it.

  9. The McKinsey banking-benchmark methodology. The 14pp TSR gap between leaders and laggards is the framework’s headline empirical claim. (a) Identify three sources of bias in the study. (b) For each, suggest a methodological improvement. (c) What would a more conservative point estimate of the gap be?

  10. 2-in-a-box leadership. Identify a platform in a firm you know that should be co-led under 2-in-a-box. (a) Name the business owner and the technology owner. (b) Identify the executive sponsor. (c) Write the platform charter (one page, including outcome KPIs and decision rights).

  11. The compensation gap. Your firm needs to hire a senior ML engineer at a US-tier compensation level ($400K total comp). Your firm is in Kuala Lumpur and your compensation policy caps total comp at $120K. Construct three approaches to closing the gap (each must be realistic given your firm’s constraints).

  12. The adoption budget. Rewired recommends ~50% of transformation budget on adoption. Most firms spend 5–10%. Construct the case to your CEO for moving from 10% to 30% over three years. What does the spending fund? How is success measured?

Further reading

Lamarre, Smaje, and Zemmel (2023) is the indispensable text. For change-management theory, Kotter’s Leading Change (1996; 2nd ed. 2012) is the canonical operational guide; Lewin’s three-stage model is in his 1947 Human Relations paper; the ADKAR model is in Hiatt’s ADKAR: A Model for Change in Business, Government and Our Community (2006). For the agile operating-model literature, the original Manifesto for Agile Software Development (2001) is mandatory reading; the Spotify Engineering Culture videos (2014) are the most-cited public-domain reference; SAFe’s published materials are useful even if you do not use SAFe. For organisational economics, Porter (1985) remains the canonical strategic-management text. The DBS transformation is documented across multiple HBS and INSEAD case studies; the Piyush Gupta interviews on the Rewired podcast are particularly accessible. For LEGO, the LEGO: Rebuilding the Brand HBS case (Robertson, 2010) is the standard reference. For Freeport-McMoRan, the McKinsey 2018 article Inside a mining company’s AI transformation is the public-record source.

References for this chapter

  • Lamarre, E., Smaje, K., and Zemmel, R. (2023). Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI. Wiley.
  • Stanford HAI (2025). AI Index Report 2025.
  • Porter, M. E. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. Free Press.
  • Bain, J. S. (1956). Barriers to New Competition. Harvard University Press.