Chapter 5 — Strategy, collisions, and the new meta
Iansiti and Lakhani (2020) argue that AI is not just changing how firms operate; it is changing the rules of competition itself. This chapter develops the five rules of the “new meta” with the theoretical depth a graduate-level course requires, alongside Agrawal-Gans-Goldfarb’s three solution layers, the post-DeepSeek commoditisation debate, and the AI-native disruption pattern. We connect the framework to the industrial-organisation, increasing-returns, and platform-economics literatures.
Chapter overview
This chapter is structured around two questions a graduate manager must be able to answer. First, how should we read the AI factory’s competitive implications through the strategy literature? This is addressed in §5.2–§5.7 by working through Iansiti and Lakhani’s five rules, with each rule grounded in the broader industrial-organisation, increasing-returns, and Schumpeterian-disruption traditions. Second, what does an AI-era strategic playbook look like operationally? This is addressed in §5.8–§5.12 by developing the Agrawal-Gans-Goldfarb three-layer framework, the commoditisation debate, the AI-native disruption pattern, and the strategic implications.
Reading this chapter
This is the chapter that does the most theoretical work. The five rules of the new meta are not statements of fact; they are claims about the structure of competition under the AI factory’s economics. The graduate reader should treat each rule as a theoretical position to be tested against industries, firms, and time periods, and should be willing to argue against the rules where the empirical evidence supports a different reading.
Strategic collisions
The central strategic phenomenon of the AI era is the collision — when an AI-enabled digital operating model meets a traditional one. Iansiti and Lakhani (2020)’s case studies are organised around these collisions.
A non-exhaustive list
| Sector | AI-native or rewired digital firm | Traditional incumbent | Status of the collision |
|---|---|---|---|
| Retail | Amazon | Walmart, Carrefour, Target | Long-running; Walmart partial rewiring |
| Banking | Ant Group, DBS | ICBC, HSBC, Citi (in China and Asia) | Asymmetric — Ant restructured by regulator |
| Media | Netflix | Disney, NBCUniversal, Paramount | Disney+ launched 2019; Disney still profitable |
| Streaming music | Spotify | Universal, Warner, Sony | Mediated by licensing; oligopolistic equilibrium |
| Cars | Tesla, BYD | Ford, GM, VW, Toyota | EV transition is the proximate driver, AI factory is the deeper one |
| Fitness | Peloton | SoulCycle, Equinox | Pandemic-distorted; Peloton over-extended |
| Marketplace commerce | Shopify | eBay, regional marketplaces | Shopify enabled SMB-led disruption of marketplaces |
| Search | Perplexity | Google, Bing | Perplexity eroding Google’s intent-search |
| Software development tools | Cursor, Devin | GitHub Copilot, IntelliJ | AI-native disruption of incumbents |
The pattern
The pattern is consistent. The traditional firm faces diminishing returns to scale — its operating complexity grows faster than its revenue. The digital firm faces increasing returns to scale, scope, and learning. When they collide, the digital firm’s value curve overtakes the traditional firm’s, often catastrophically and quickly.
The underlying mathematics is simple. Traditional firms face a U-shaped average cost curve — efficient up to some scale, then increasingly costly to coordinate at greater scale. Digital firms face a downward-sloping average cost curve over the relevant range — every additional user is essentially free to serve, and the data that user generates makes the rest of the system more accurate. The collision is between two fundamentally different production functions.
A formal sketch
Let \(C^t(q)\) and \(C^d(q)\) denote average cost curves for traditional and digital firms at output \(q\). A typical specification:
\[ C^t(q) = \frac{F^t}{q} + c^t \cdot q^{\theta^t}, \qquad C^d(q) = \frac{F^d}{q} + c^d - \delta \log(1 + q), \]
where \(F^t < F^d\) (traditional firms have lower fixed costs of entry), \(c^t < c^d\) initially, \(\theta^t > 0\) produces the U-shape for the traditional firm, and the \(\delta \log\) term captures learning that lowers the digital firm’s average cost as \(q\) grows.
For low \(q\), the traditional firm has lower average cost. For high \(q\), the digital firm has lower average cost; the crossover point \(q^*\) depends on \(\delta\), \(F^d\), and the parameters. Once \(q\) exceeds \(q^*\), the digital firm’s cost advantage compounds, and the traditional firm cannot match it without rebuilding around the digital firm’s economics.
This formalises Schumpeter’s intuition about creative destruction (Schumpeter, 1942): the new technology is not better than the old at every scale; it is better only above a threshold scale, and the firms that reach the threshold first capture the field.
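The crossover scale \(q^*\) can be made concrete with a small numerical sketch. All parameter values below are illustrative assumptions chosen only to produce the qualitative shape described above, not estimates for any real industry pair:

```python
import math

def c_trad(q, F=1.0, c=0.5, theta=0.3):
    """Traditional firm: U-shaped average cost (fixed cost spread over output,
    plus coordination cost that grows with scale)."""
    return F / q + c * q ** theta

def c_dig(q, F=50.0, c=2.0, delta=0.35):
    """Digital firm: higher fixed cost of entry, but a learning term that
    lowers average cost as cumulative output grows."""
    return F / q + c - delta * math.log(1 + q)

def crossover(lo=1.0, hi=1e6, steps=10_000):
    """Scan log-spaced output levels for the first scale at which the
    digital firm's average cost drops below the traditional firm's."""
    for i in range(steps + 1):
        q = lo * (hi / lo) ** (i / steps)
        if c_dig(q) < c_trad(q):
            return q
    return None

q_star = crossover()
```

Under these assumed parameters the digital firm is far more expensive at small scale, undercuts the traditional firm at a few dozen units of output, and the gap then widens with \(q\), matching the claim that the cost advantage compounds once \(q\) exceeds \(q^*\).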
Rule 1 — Change is no longer localised; it is systemic
The age of AI is driven by a relentless and systemic driver of change. Rather than a number of separate waves of technological innovation, gradually spreading the Industrial Revolution across different industries and geographies, our new engine of change appears to be tackling all industries, globally, at just about the same time. Our entire economy is now effectively subject to Moore’s law.
— Iansiti and Lakhani (2020), Ch. 9
The argument
Inventions during the Industrial Revolution pertained to specific industries — the steam engine had more impact in manufacturing and transportation than in banking or healthcare. Digital technology and AI are different: they cut across every industrial environment at the same time. The same transformer architecture that recommends songs also drafts contracts, interprets X-rays, prices freight, and writes code.
The empirical pattern
No industry has proved immune. Even the slowest-moving sectors — primary agriculture, defence procurement, ecclesiastical administration — have measurable AI deployment by 2026. The diffusion path is uneven; the diffusion direction is universal.
A useful piece of evidence is the McKinsey 2025 (McKinsey & Company, 2025) sectoral breakdown: while AI adoption rates vary across sectors (from 90%+ in financial services and tech to 60%+ in primary industries), every sector has crossed 50%. Compare with the diffusion of the steam engine, which never reached 50% across all sectors — many industries remained pre-industrial through the 1900s.
The Moore’s law analogy
Iansiti and Lakhani’s claim that “our entire economy is now effectively subject to Moore’s law” is provocative and worth examining. Moore’s law (transistor density doubling every ~2 years) was a sustained 1965–2015 empirical regularity that drove compounding cost reductions in computing. The claim is that AI’s economics inherit this property: the Kaplan et al. (2020) scaling laws, the Hoffmann et al. (2022) compute-optimal training corrections, and the post-Chinchilla efficiency gains together produce sustained cost reductions in inference that propagate through the economy.
The 280× inference-cost decline documented by Stanford HAI (2025) in 2022–2025 is consistent with this claim. Whether it sustains for another decade is an open question; some AI researchers expect deceleration after another 1–2 orders of magnitude.
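The arithmetic behind the analogy is worth making explicit. This is a pure back-of-envelope calculation on the figures cited above, nothing more:

```python
# Implied annual cost-reduction factors from the cited figures.
ai_decline, ai_years = 280, 3        # Stanford HAI (2025) inference-cost decline, 2022-2025
moore_decline, moore_years = 2, 2    # Moore's law: transistor density doubling every ~2 years

ai_annual = ai_decline ** (1 / ai_years)           # ~6.5x cheaper per year
moore_annual = moore_decline ** (1 / moore_years)  # ~1.41x per year
```

An implied ~6.5× annual cost reduction is several times the classic Moore’s-law rate; sustaining it for even two more years would mean another ~40× decline, which is why the deceleration question flagged above matters so much for the claim.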
Rule 2 — Capabilities are increasingly horizontal and universal
In a dramatic reversal from the Industrial Revolution’s vertical specialisation, the age of AI is making vertical, siloed organisations and specialised capabilities less relevant. Competitive advantage is shifting toward universal capabilities in data sourcing, processing, analytics, and algorithm development — the AI factory components from Chapter 3.
The horizontal-capability argument
The signal: when Uber looked for a new CEO, the board hired someone who had previously run a digital firm (Expedia), not a transportation services company. The same applies to firms entering new sectors:
- Amazon (retail) entered cloud, advertising, content, pharmacy, healthcare, and grocery.
- Tencent (gaming and messaging) entered financial services, healthcare, and cloud.
- Alibaba (commerce) entered banking, logistics, and cloud.
- Apple (consumer hardware) entered financial services, health, and content.
- Google (search) entered autonomous vehicles, cloud, healthcare, and quantum computing.
- Meta (social) entered VR/AR, AI infrastructure, and (briefly) crypto.
Each move worked because the AI factory transferred — not the vertical knowledge. The same data pipeline, experimentation discipline, and operational architecture that ran the home-market product runs the new-market product.
The implication for talent
The most valuable senior leaders are increasingly those who can manage the AI factory rather than those with deep vertical expertise; the latter is being recast as a feature of the data pipeline rather than of the executive’s CV.
A graduate reading: this rule predicts that as AI factories spread through the economy, the relative value of vertical expertise should decline, while the relative value of horizontal AI-factory capability should rise. Compensation data is consistent: senior ML engineers and data scientists are now compensated at the level of senior bankers in many regions, a structural shift since 2018.
A counter-current
A reasonable counter-argument is that vertical knowledge re-asserts itself in regulated industries, where domain-specific judgement is required not just to operate the factory but to design what the factory should do. In banking, healthcare, and law, the highest-value AI factories are designed by people who deeply understand the domain and can manage the factory; pure-AI-factory operators do not produce sustainable advantage. The Watson Health failure (Chapter 7) is the canonical illustration.
Rule 3 — Industry boundaries are disappearing; recombination is the rule
Industries originally evolved from traditional trades to support vertical specialisation. Those clear boundaries are dissolving.
Examples of cross-boundary recombination
| Originating industry | New industry entered | Driver |
|---|---|---|
| Search (Google) | Autonomous vehicles, healthcare, quantum | Horizontal AI factory |
| Commerce (Amazon) | Cloud, advertising, healthcare, grocery, logistics | Horizontal AI factory + customer relationship |
| Commerce (Alibaba) | Banking, logistics, cloud, AI | Same |
| Hardware (Apple) | Financial services, health, content | Customer relationship + brand |
| Social media (Meta) | VR/AR, AI infrastructure | Capability adjacency |
| Auto (Tesla) | Insurance, energy storage, robotics | Data + AI capability |
| Chip design (NVIDIA) | Software, applied AI, autonomous systems | Capability adjacency |
The structural reason
Digital interfaces let operating models cut across old verticals and enter new industries with new, highly connected business models. While traditional organisations suffer diminishing returns to scale or scope, digital networks enjoy increasing returns — both as they grow in size and as they connect to other networks (Arthur, 1989).
The 2024–2026 evolution: bidirectional recombination
The 2024–2026 evolution is that the recombination is becoming bidirectional. NVIDIA, a chip company, has become a major software platform (CUDA, DGX Cloud, Omniverse) and an applied AI firm (autonomous vehicles, drug discovery). Anthropic, an AI lab, is becoming an enterprise software company. OpenAI has become a consumer products company. The boundaries between “chip company”, “model lab”, “enterprise software firm”, and “consumer app” are dissolving.
This pattern is consistent with Schumpeter (1942)’s creative-destruction model, where dominant firms in one era reorganise into new firms operating across the boundaries of old industries during the era’s transition. The current AI transition is producing creative destruction at a rate the strategy literature has not seen since the dot-com era.
Rule 4 — From constrained operations to frictionless impact
Digital operating models remove traditional operating constraints. Ant Group serves an order of magnitude more customers than the largest traditional bank. Facebook reaches an order of magnitude more people than the US postal system. Information moves instantaneously at near-zero marginal cost via networks to infinite numbers of recipients.
The frictionless-systems problem
Removing friction is not always good. Frictionless systems are prone to instability and have difficulty finding equilibrium. Once in motion, they are hard to stop:
⚠️ The marketer’s paradise can be the citizen’s nightmare
A phony headline can spread with infinite speed to billions of people on a variety of platforms and morph to optimise impact and click-through. Even if specific content is flagged by a social network, multiple variants can still be communicated, “liked”, and retransmitted across the internet. The vast reach and impact was inconceivable in the days of friction-heavy newspapers. (Iansiti and Lakhani (2020), Ch. 9, paraphrased.)
The 2024–2026 evolution: AI-mediated amplification
The 2024–2026 evolution adds AI-mediated amplification on top of digital amplification. Generative AI lowers the cost of producing convincing variants of content; agentic AI lowers the cost of distribution; reasoning models lower the cost of crafting messages tuned to specific audiences. The net effect is a multiplicative reduction in the cost of mounting a sophisticated information operation, with downstream consequences for elections, financial markets, and public health.
Specific manifestations as of 2026:
- Deepfakes in political campaigns (multiple 2024 elections globally featured AI-generated audio and video disinformation).
- AI-generated phishing and social engineering at scale (the 2025 Anthropic Antigravity demonstration showed prompt-injection attacks against agentic systems).
- Algorithmic market manipulation (the SEC’s 2025 actions against AI-powered pump-and-dump schemes are early indicators).
- Synthetic identity fraud (criminal networks using LLMs to generate plausible synthetic identities at scale for credit and benefits fraud).
The regulatory response
The regulatory response — covered in detail in Chapter 14 — is structurally lagging the technology. The EU AI Act came into force on 1 August 2024, but its enforcement infrastructure is still being built. The US has no comparable federal legislation as of mid-2026. The frictionless-impact failure mode is, structurally, a problem the framework predicts and the regulatory system struggles to keep up with.
Rule 5 — Concentration and inequality will likely get worse
As digital networks carry more transactions, network hubs gain power. Once a hub is highly connected in one sector (Airbnb in home rentals, Alibaba in peer-to-peer retail), it gains advantages as it links to a new sector (Airbnb in travel experiences, Alibaba in financial services). The pattern produces concentration of wealth, power, and relevance across markets, industries, and geographies.
The empirical pattern as of 2026
The empirical evidence as of 2026 is that this rule is largely vindicated. Stanford’s AI Index 2025 (Stanford HAI, 2025) documents:
- US private AI investment of $109.1B in 2024 versus China’s $9.3B.
- Seven US technology giants (Microsoft, Google, Amazon, Meta, Apple, NVIDIA, Tesla) account for the majority of global AI infrastructure spending.
- Top quintile of AI-using firms — McKinsey’s “rewired” cohort — capture a disproportionate share of the gains (16–30% productivity improvements vs single-digit averages).
The geographic concentration is striking. Of approximately $250 billion in 2024 global private AI investment, more than 60% landed in firms headquartered in California. Of the top 50 AI-native firms by 2025 enterprise value, fewer than five are headquartered outside the United States and China.
The implication for sovereignty and policy
The implication for sovereignty and policy is taken up in Chapter 14. For the strategic-management reader, the key implication is that firms outside the US and China should expect to operate as platform consumers rather than platform producers in the AI era. The EU’s strategy — develop competitive AI capability through SAP, Mistral, and the EU AI Continent — is the explicit policy response; whether it succeeds is open.
The Agrawal-Gans-Goldfarb framework: three solution layers
If Iansiti and Lakhani (2020) describe what AI-enabled competition looks like, Agrawal, Gans, and Goldfarb (2022) explain why most enterprise AI doesn’t disrupt anything yet. Their framework distinguishes three layers of AI deployment.
| Layer | Description | Example | Disruption potential |
|---|---|---|---|
| Point solution | Bolted onto an existing workflow without restructuring it | A bank adds an LLM to its call centre script suggestion tool | Low — workflow unchanged |
| Application solution | Modifies part of a system but preserves overall architecture | Mortgage origination uses ML credit scoring instead of FICO | Medium — one stage reshaped, others adjust |
| System solution | Redesigns interdependent decisions across the entire value chain | Insurance shifts from risk transfer to risk prevention | High — entire industry economics shift |
The Between Times
Agrawal, Gans, and Goldfarb (2022) call the current era the “Between Times” — the period after AI’s promise has been demonstrated but before its full potential is realised. They argue that we are in a proliferation of point and application solutions; the truly disruptive system solutions remain rare. The historical analogue is electricity: it took roughly forty years from Edison’s 1882 Pearl Street station for factories to redesign around electrical power (replacing line-shaft architecture with distributed motors at each workstation; David (1990)).
The insurance system-solution example
Agrawal, Gans, and Goldfarb (2022)’s clearest worked example is home insurance. Today, insurance is risk transfer: the homeowner pays premiums, the insurer pools and pays out claims. With three super-powerful AIs predicting:
- Lifetime customer value × probability of converting (for marketing allocation),
- Likelihood of filing a claim × claim magnitude (for pricing),
- Legitimacy of any claim (for fraud detection),
the insurer can:
- Allocate marketing optimally (target high-LTV, high-conversion-probability customers);
- Price premiums precisely (or decline if expected loss exceeds price);
- Settle claims in seconds;
- And — most disruptively — offer risk prevention as a service: subsidise a leak-detection sensor whose installation cost is less than the expected reduction in claim cost.
The fourth move transforms insurance from a financial industry into a hybrid IoT-and-services industry. This is what a system solution looks like.
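The fourth move reduces to a one-line decision rule: subsidise the device whenever its installed cost is below the expected reduction in claim cost it produces. A minimal sketch, with assumed, illustrative numbers:

```python
def expected_loss(p_claim: float, claim_size: float) -> float:
    """Expected annual claim cost for one policyholder."""
    return p_claim * claim_size

def subsidise_sensor(p_claim: float, claim_size: float,
                     p_with_sensor: float, sensor_cost: float) -> bool:
    """Prevention-as-a-service rule: pay for the device whenever it costs
    less than the expected claim cost it eliminates."""
    saving = (expected_loss(p_claim, claim_size)
              - expected_loss(p_with_sensor, claim_size))
    return sensor_cost < saving
```

With an assumed 2% annual leak probability, a $25,000 average water-damage claim, and a sensor that cuts the probability to 0.5%, the expected saving is $375 per policyholder per year, so any device installed for less than that is worth subsidising. The rule only works because the prediction machines make \(p\) estimable at the individual-policy level; with pooled risk, no such per-customer calculation is possible.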
The healthcare system-solution analogue
The healthcare equivalent is the shift from episodic treatment to continuous monitoring and prevention. With AI predictions of cardiac risk from continuous wearables, an insurer or integrated provider can intervene earlier (medication, lifestyle modification, early treatment) at lower cost than treating an acute event.
The deployment requires redesigning every link in the chain:
- Data collection: consumer wearables → clinical data systems.
- Permissioning: HIPAA, GDPR, local equivalents.
- Reimbursement: preventive vs episodic billing codes.
- Care delivery: continuous vs visit-based.
The point-solution version of the same idea — a smart device that warns the patient — captures little of the value. The system-solution version captures most of the value but requires regulatory, payer, and clinical-workflow changes that take years.
The pharmaceutical-discovery system-solution analogue
AlphaFold + generative chemistry + automated wet-lab + clinical-trial AI is potentially a system solution that compresses drug discovery from a decade to a year. Each component exists; the system-level integration that captures the full value does not yet. Isomorphic Labs (Chapter 7) is the most credible attempt to build it.
The lending system-solution analogue
The Ant Group case from Chapter 3 is the archetypal lending system solution: 3-1-0 lending compresses the entire credit decision chain into a software-mediated process. Most fintech lending firms are application solutions (better credit scoring on top of traditional origination workflow); Ant is the rare system solution.
Why system solutions are rare
The reason point and application solutions dominate the deployment statistics is that system solutions face three structural barriers:
- Cross-firm coordination. A system solution often requires changes outside the firm’s boundary — payers, regulators, partners, customers. Coordination cost is high and timelines are long.
- Regulatory adaptation. A system solution often requires regulatory changes that the regulator must initiate. The lag between technology readiness and regulatory accommodation is typically 5–15 years.
- Risk concentration. A system solution puts more decisions on the algorithm’s critical path, concentrating risk. Firms that implement system solutions face exposure that point-solution firms do not.
The implication is that system solutions emerge slowly even when they would be economically rational immediately, and the firms that build them often do so by accumulating complementary capabilities over multi-year horizons rather than by direct strategic intent.
The commoditisation debate
Iansiti and Lakhani (2020) argue that the AI factory is the moat. The 2024–2026 evidence pulls in two directions on this claim.
The case for commoditisation
The DeepSeek-R1 release in January 2025 (DeepSeek-AI, 2025) demonstrated that frontier reasoning capability can be achieved at one-tenth to one-hundredth of the previously assumed cost, and that the resulting model can be released MIT-licensed. Llama 4 (April 2025), Qwen, and Mistral followed similar paths. Open-weight models now approach closed frontier models on most benchmarks.
Implications:
- Foundation model capability is becoming a commodity input, not a moat.
- Foundation model API prices have fallen 90%+ since GPT-4’s launch.
- The gap between open and closed frontier models has closed to roughly 6–12 months on most benchmarks.
- Switching costs between model providers (in well-architected enterprise deployments) are days rather than years.
The case for AI-native disruption
The Menlo Ventures 2025 State of GenAI report (Menlo Ventures, 2025) and Foundation Capital’s 2026 outlook (Foundation Capital, 2026) document a striking pattern: AI-native startups are taking material market share from incumbents in agile departments where speed of iteration matters more than integration depth.
| Startup | Founded | 2025–2026 valuation/ARR | What it disrupts |
|---|---|---|---|
| Cursor (Anysphere) | 2022 | $1B+ ARR by Nov 2025; 24 months from launch | GitHub Copilot in code editing |
| Glean | 2019 | $7.2B valuation Dec 2025; ARR $100M → $200M in 9 months | Microsoft 365 / SharePoint search |
| Perplexity | 2022 | $20B valuation Sep 2025 | Google Search |
| Harvey AI | 2022 | $11B valuation Mar 2026; ~$200M ARR | Westlaw / LexisNexis |
| Sierra | 2023 | ~$10B valuation 2025 | Salesforce Service Cloud, Zendesk |
| Cognition (Devin) | 2023 | ~$4B valuation 2025 | Junior software engineer roles |
| Anysphere/Cursor enterprise | 2022 | 500+ Fortune 500 customers by 2026 | Visual Studio / IntelliJ |
The Menlo Ventures finding: in finance and operations, startups now hold 91% of the AI-native software share; in market research, sales, marketing, and product, the figures are similar. Where AI-native disruption is weak: IT and data science, where reliability and deep integrations outweigh speed.
The synthesis
The two views can be reconciled. Foundation models are commoditising; AI factories are not.
- Cursor’s moat is not the model — it’s the editing surface, the repo-level context, the diff approval flow, the developer feedback loop. These are AI factory advantages, not model advantages.
- Glean’s moat is the cross-system permission graph, the entity resolution, the personalisation of search to organisational context. These are AI factory advantages.
- Harvey’s moat is the regulated-document RAG infrastructure, the legal-vocabulary fine-tuning, the citation-verification layer. These are AI factory advantages.
The Iansiti–Lakhani thesis survives — but its locus moves from algorithms to the operational architecture surrounding them.
The startup vs incumbent race
An additional implication is that AI-native firms can move faster than rewired incumbents in the early years of a market, but the rewired incumbents (DBS, JPMorgan, Walmart) tend to win in the long run because they have access to proprietary data, distribution, and customer relationships that the AI-natives must build from scratch. The right framing is “ten-year race” rather than “who’s ahead in 2026.”
A useful framework is to think of three competitive trajectories:
- AI-native disruptor wins the segment. Cursor disrupts GitHub Copilot. Perplexity erodes Google Search. The disruptor’s advantage compounds because they iterate faster than the incumbent.
- Rewired incumbent absorbs the disruption. GitHub responds to Cursor with a rewrite. Google responds to Perplexity with AI Overviews. The incumbent’s distribution and data advantages eventually overwhelm the disruptor.
- Stalemate with niche specialisation. Both survive; the disruptor occupies a premium niche and the incumbent occupies the volume segment. The market grows.
Which trajectory wins depends on (a) how durable the AI-native’s iteration speed advantage is, (b) how quickly the incumbent can rewire, (c) the role of distribution and proprietary data, and (d) the segment’s switching costs.
The five ethical categories
Iansiti and Lakhani (2020)’s Chapter 8 — “The Ethics of Digital Scale, Scope, and Learning” — groups the new ethical challenges of AI-enabled firms into five categories, each of which receives extended treatment in Chapter 14.
Digital amplification
Frictionless networks amplify whatever is on them — including misinformation, polarisation, and engagement-optimised emotional content. The 2018 Cambridge Analytica scandal at Facebook, the algorithmic-amplification findings in the 2021 WSJ Facebook Files, and the YouTube radicalisation literature document the empirical pattern.
Bias
Buolamwini and Gebru (2018)’s Gender Shades study showed face-classification accuracy disparities of up to 34% between lighter-skinned men and darker-skinned women — a finding that catalysed the entire algorithmic-fairness research programme. The pattern recurs across modalities: speech recognition with disparate accuracy on regional accents, hiring AI with disparate selection rates by gender or race, credit scoring with disparate approval rates by zip code.
Cybersecurity
AI both expands the attack surface and provides new attack tools (deepfake-mediated social engineering, automated vulnerability discovery, prompt injection). The 2025 Anthropic Antigravity prompt-injection demonstration illustrates the new class of risk.
Control
When algorithms drive the operational critical path, the locus of decision-making authority is unclear. The Robodebt and Toeslagenaffaire cases (Chapter 12) show that the responsibility-attribution problem is the largest source of public-sector AI failure.
Inequality
The seven US technology giants control most global AI infrastructure spending; the top quintile of AI-using firms capture 16–30% productivity gains while the bulk capture single digits. The geographic concentration in California (~60% of 2024 global AI investment) is the empirical signature.
The responses
Microsoft’s six AI principles — fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability — are the firm’s response to these categories and have become the de facto template for enterprise AI ethics policies. We compare them against the Google, IBM, OpenAI, and Anthropic responses in Chapter 14.
Antitrust implications
The Iansiti-Lakhani framework’s antitrust implications, which we introduced in Chapter 3, §3.16, deserve a graduate-level treatment here.
Khan’s Amazon’s Antitrust Paradox
Khan (2017) argues that the consumer-welfare standard of contemporary antitrust law — which evaluates firm conduct primarily by whether it raises consumer prices — is structurally unable to address platform-monopoly concerns. Platforms like Amazon can systematically lower prices to consumers (the welfare-positive surface) while simultaneously extracting rents from suppliers, third-party sellers, and labour (the welfare-negative depths).
The strategic implication is that the AI factory’s competitive advantages may not be politically sustainable even when they are economically efficient under the consumer-welfare standard. Firms that build dominant AI factories should expect a regulatory response that the consumer-welfare lens alone would not predict, and the more frictionless the impact (Rule 4), the faster that response is likely to come.
Wu’s curse of bigness
Wu (2018) argues that platform concentration has reached levels at which the political economy of bigness — not just narrow consumer-welfare effects — should reactivate the structural-antitrust tradition of Brandeis. The AI factory’s increasing returns make this more pressing: the concentration is endogenous to the technology, not a contingent result of market conduct.
The graduate-level implication
We should read Iansiti and Lakhani not as celebrating the AI factory but as describing a structural condition that public policy will eventually have to address. The five-rule framework acknowledges this in Rule 5; the regulatory response we develop in Chapter 14 is, in part, the public-policy reaction to the factory’s economics.
Strategic implications for the 2026 firm
If you take Iansiti and Lakhani (2020), Agrawal, Gans, and Goldfarb (2022), and the AI-native disruption evidence seriously together, the strategic playbook for the established firm in 2026 is the following six items, each with operational specifics.
1. Build the AI factory
Architecture beats algorithms. The four-component factory (Chapter 3) and the six Rewired capabilities (Chapter 4) are the operational specification. Investment timeline: 3–7 years. Investment scale: 1–3% of revenue annually for the duration. Outcome metric: domain-level transformations, not pilot count.
2. Invest in system solutions, not just point solutions
Roughly 80% of the gains come from the 20% of deployments that redesign workflows. For each priority domain, ask: is the AI deployment a point solution (workflow unchanged), an application solution (one stage reshaped), or a system solution (the whole value chain redesigned)? Bias the portfolio toward system solutions.
3. Watch for collisions, not for entrants
The threat is rarely a new bank; it is a tech firm that decides banking is one more vertical where its AI factory transfers. The firms most exposed to collision are those in industries with (a) data-rich operations, (b) commoditisable customer relationships, (c) regulatory protection that is eroding. Construct the collision map: which adjacent-industry firms could enter your industry with their existing AI factory, and what would they do?
4. Treat foundation models as commodities
Multi-vendor, model-agnostic, ready to swap when DeepSeek-R2 lands. The architectural specification:
- Abstraction layer (LangChain, LlamaIndex, custom) over foundation-model APIs.
- Multiple vendors (OpenAI, Anthropic, Google, plus open-weight self-hosting for sensitive workloads).
- Standardised evaluation (in-house benchmarks for each use case, run continuously).
- Routing logic that picks the right model for each query based on cost, latency, quality.
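The routing logic in the last bullet can be sketched in a few lines. Every name, price, and score here is an invented placeholder, not a real product identifier; the point is the shape of the decision, not the numbers:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str            # placeholder label, not a real product identifier
    cost_per_1k: float   # $ per 1k tokens
    latency_ms: float    # typical response latency
    quality: float       # in-house benchmark score for this use case (0-1)

def route(models: list, min_quality: float, max_latency_ms: float):
    """Pick the cheapest model that clears the quality and latency bars for
    this query class; None means fall back or escalate."""
    ok = [m for m in models
          if m.quality >= min_quality and m.latency_ms <= max_latency_ms]
    return min(ok, key=lambda m: m.cost_per_1k) if ok else None

fleet = [
    ModelProfile("frontier-closed", 0.0150, 900, 0.95),
    ModelProfile("mid-tier-api",    0.0020, 400, 0.85),
    ModelProfile("open-self-host",  0.0004, 250, 0.78),
]
```

A latency-sensitive query with a moderate quality bar (`route(fleet, 0.80, 500)`) lands on the mid-tier API; relaxing both constraints routes to the cheap self-hosted model. Because the decision reads from a continuously re-run benchmark table rather than hard-coded vendor choices, swapping in a new model is a data update, not a code change.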
5. Take ethics seriously, early
Microsoft’s six principles, ISO 42001 (ISO/IEC, 2023), the EU AI Act (European Commission, 2024) — these are not bureaucracy; they are the operating constraints of digital scale, scope, and learning. The regulatory infrastructure is being built; firms that wait to comply pay more later than firms that build compliance into their factory architecture from the start.
6. Plan for the long run
AI-native disruption is real but partial; the rewired-incumbent path requires a 5–10 year horizon and the full Rewired six-capability investment. Short-cycle responses (24-month transformation programmes; CEO-mandated “AI strategies” with annual sunset) systematically underperform the long-cycle approach.
Exercises 5.1
The collision map. Pick a firm in your industry. (a) Identify the three most-likely collision threats (firms from adjacent industries that could enter yours with their AI factory). (b) For each, estimate the timeline to credible threat. (c) Construct the firm’s response strategy.
The five rules in your industry. Apply each of Iansiti and Lakhani (2020)’s five rules to your industry. Which rule is most binding now? Which will be most binding in 5 years? In 10? Defend.
A formal collision model. Using the formal collision sketch in §5.2, specify parameter values for a specific industry pair (e.g., banking vs fintech, pharmacy vs Amazon Pharmacy, hospitality vs Airbnb). (a) Estimate \(q^*\), the crossover scale. (b) Compare with the actual scale of the digital firm in 2026. (c) What does this tell you about the durability of the traditional firm’s competitive position?
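One minimal parameterisation of the crossover exercise, stated as an assumption (the sketch in §5.2 may use a richer specification): give the traditional firm a roughly flat unit cost \(c_T\), and the digital firm a fixed factory cost \(F\) plus a lower marginal cost \(c_D < c_T\).

```latex
% Digital firm's unit cost at scale q, and the crossover condition:
\text{unit cost}_{\text{digital}}(q) = \frac{F}{q} + c_D, \qquad
\frac{F}{q^*} + c_D = c_T
\;\Longrightarrow\;
q^* = \frac{F}{\,c_T - c_D\,}.
```

Below \(q^*\) the factory's fixed cost dominates and the traditional firm is cheaper; above \(q^*\) the digital operating model wins on unit cost, which is why part (b) asks you to compare \(q^*\) with the digital firm's actual 2026 scale.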
The system-solution playbook. Agrawal, Gans, and Goldfarb (2022)’s insurance system-solution example imagines insurers becoming risk-prevention firms. Construct an analogous system-solution example for retail banking, healthcare, higher education, or another industry of your choice. Specify all the cross-firm coordination, regulatory adaptation, and risk-concentration changes required.
The Menlo finding interrogated. The Menlo Ventures finding is that AI-native startups dominate in “agile departments” and incumbents hold ground in IT and data science. (a) Why does deep integration favour incumbents? (b) Identify a function where the pattern might reverse in the next 3 years.
Microsoft principles compared. Read Microsoft’s, Google’s, IBM’s, OpenAI’s, and Anthropic’s published AI principles. Identify (a) the principles that are common across all five, (b) the principles unique to each, (c) the principles that reflect each firm’s specific business position. What does the comparison suggest about the practical operationalisation of AI ethics?
The Khan critique applied. Khan (2017) argues that consumer-welfare-standard antitrust cannot address platform monopolies. (a) Defend Khan’s argument using the AI factory framework from Chapter 3. (b) Identify the strongest counter-argument. (c) What would a “platform-aware” antitrust standard look like operationally for AI factories?
The DeepSeek shock revisited. Why did DeepSeek-R1’s release on 20 January 2025 trigger a fall of roughly $600B in Nvidia’s market capitalisation? What assumption did the market revise? Construct the strongest counter-argument that the revision was wrong (i.e., that Nvidia’s pre-DeepSeek valuation was right and the post-DeepSeek correction was an over-reaction).
The strategic implications for an emerging-market firm. Pick a firm in an emerging-market country (Malaysia, Indonesia, India, Brazil, South Africa). (a) Apply the six strategic implications from §5.12. (b) Identify the binding constraint. (c) Construct a 5-year strategic plan that addresses the binding constraint while making progress on the other five implications.
The ten-year race. Construct two scenarios for the next ten years (2026–2036): one in which AI-native disruptors win their segments; one in which rewired incumbents win. Identify the three structural factors that distinguish the scenarios. For each factor, identify what evidence we should look for in the next 24 months that would resolve the uncertainty.
The frictionless-impact externality. §5.6 argues that frictionless impact creates externalities that the framework predicts and the regulatory system struggles to address. (a) Identify a specific externality from a 2024–2026 deployment in your country. (b) Estimate the social cost. (c) Construct a regulatory or industry-led response.
The recombination forecast. §5.5 argues that industry boundaries are dissolving. Identify two industries that are most likely to recombine in the next 5 years. (a) What firms would lead the recombination? (b) What does the resulting industry look like? (c) What policy response should we expect?
Further reading
For the foundational treatment, read Iansiti and Lakhani (2020) Chapters 6–9. For Agrawal, Gans, and Goldfarb (2022), read the entire book (it is short and the worked examples are essential). For the platform-economics literature, Rochet and Tirole (2003), Rochet and Tirole (2006), Arthur (1989), and Eisenmann, Parker, and Van Alstyne (2006) are the foundational reads. For Schumpeterian creative destruction, Schumpeter (1942) chapters 7–8. For the antitrust critique, Khan (2017) and Wu (2018). For the AI-native disruption pattern, Menlo Ventures (2025) and Foundation Capital (2026) are the public-record sources. For the commoditisation debate, follow the DeepSeek-AI (2024) and DeepSeek-AI (2025) technical reports alongside the corresponding Stanford HAI (2025) cost analysis. For the ethical-framework literature, the IEEE Ethically Aligned Design document and the OECD AI Principles (2019) provide international benchmarks; the EU AI Act (European Commission, 2024) is the operational benchmark.
References for this chapter
- Iansiti, M. and Lakhani, K. R. (2020). Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business Review Press.
- Schumpeter, J. A. (1942). Capitalism, Socialism, and Democracy. Harper & Brothers.
- McKinsey & Company (2025). The state of AI: Global survey.
- Kaplan, J., McCandlish, S., Henighan, T., et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.
- Hoffmann, J. et al. (2022). Training compute-optimal large language models (Chinchilla). arXiv:2203.15556.
- Stanford HAI (2025). AI Index Report 2025.
- Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical events. Economic Journal 99(394): 116–131.
- Acemoglu, D. (2024). The simple macroeconomics of AI. NBER Working Paper 32487.
- Agrawal, A., Gans, J., and Goldfarb, A. (2022). Power and Prediction: The Disruptive Economics of Artificial Intelligence. Harvard Business Review Press.
- David, P. A. (1990). The dynamo and the computer: An historical perspective on the modern productivity paradox. American Economic Review 80(2): 355–361.
- DeepSeek-AI (2025). DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv:2501.12948.
- Menlo Ventures (2025). 2025: The state of generative AI in the enterprise.
- Foundation Capital (2026). Where AI is headed in 2026.
- Buolamwini, J. and Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the Conference on Fairness, Accountability and Transparency (FAT*), PMLR.
- Khan, L. M. (2017). Amazon’s antitrust paradox. Yale Law Journal 126(3): 710–805.
- Wu, T. (2018). The Curse of Bigness: Antitrust in the New Gilded Age. Columbia Global Reports.
- ISO/IEC (2023). ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System.
- European Commission (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act). Official Journal of the European Union.
- Rochet, J.-C. and Tirole, J. (2003). Platform competition in two-sided markets. Journal of the European Economic Association 1(4): 990–1029.
- Rochet, J.-C. and Tirole, J. (2006). Two-sided markets: A progress report. RAND Journal of Economics 37(3): 645–667.
- Eisenmann, T., Parker, G., and Van Alstyne, M. W. (2006). Strategies for two-sided markets. Harvard Business Review 84(10): 92–101.
- DeepSeek-AI (2024). DeepSeek-V3 technical report. arXiv:2412.19437.