Chapter 10 — Marketing, media, energy

This chapter covers three distinct sectors that share a common dynamic: AI deployment at scale has produced both substantial commercial value and substantial public-policy concern. The chapter’s three-part structure reflects the sectors’ different deployment patterns, but the structural lessons connect: in each, the AI deployment trajectory has been shaped as much by external constraints (privacy regulation, labour negotiation, energy supply) as by internal technical capability. The cases that illustrate these dynamics — Apple’s App Tracking Transparency in marketing, the 2023 Hollywood strikes in media, Microsoft’s nuclear procurement in energy — together describe a 2024–2026 deployment landscape where the AI capability question is increasingly subordinate to the deployment-environment question.

The marketing sector is where AI-driven personalisation has been deployed earliest and at the greatest scale, and where the resulting privacy-and-power concerns have triggered the most consequential regulatory responses. The media sector is where generative AI has produced both rapid creative-tool deployment and an unresolved labour-and-rights conflict that will shape industry economics through 2030. The energy sector is where AI’s deployment in operations (grid management, renewable forecasting, predictive maintenance) is genuinely productive, while AI’s consumption of energy is producing one of the contemporary period’s most-significant infrastructure constraints. Each sector has cautionary cases of comparable depth to the Watson Health, Klarna, and Boeing MAX cases of preceding chapters.

This chapter develops the three sectors across fourteen sections. Sections 10.1–10.4 cover marketing and advertising: the programmatic-advertising arc, generative AI in ad creative, the privacy-driven disruption (ATT, cookie deprecation, GDPR), and the Cambridge Analytica aftermath. Sections 10.5–10.9 cover media and entertainment: streaming-platform recommendation, generative video and the Sora wave, the 2023 Hollywood strikes, the music-rights battles, and AI in news and journalism. Sections 10.10–10.12 cover energy: grid management and renewable forecasting, energy-operations applications, and the energy intensity of AI itself. Section 10.13 covers Australian and Malaysian regional context across the three sectors. Section 10.14 sketches the 2026 frontier with attention to cross-sector convergences.

10.1 The programmatic advertising arc

Online advertising is among the longest-running ML-deployment domains. ML-supported CTR prediction dates to the earliest search-advertising businesses around the turn of the millennium; Google AdWords (launched 2000) incorporated relevance scoring early in its development; the late-2000s rise of programmatic advertising — where ad placements are auctioned algorithmically across publisher inventory — established the contemporary technical stack. By the mid-2010s, programmatic accounted for the majority of digital ad spend; by 2024, it accounted for over 90% of digital display advertising globally.

The technical stack involves several specialised ML systems. Demand-side platforms (DSPs; Google’s DV360, The Trade Desk, Amazon DSP) execute bidding decisions on behalf of advertisers. Supply-side platforms (SSPs; Magnite, Index Exchange, PubMatic) manage publisher inventory. Ad exchanges match the two sides via real-time bidding (RTB) auctions that complete within response windows on the order of 100 milliseconds. The bidding decisions involve probabilistic predictions of click-through rates, conversion probabilities, and lifetime value, with auction-theoretic optimisation of bid prices given the predictions. The infrastructure handles tens of billions of auctions per day at the major exchanges.
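
The auction-theoretic core of a bidding decision can be sketched compactly. The following is an illustrative toy, not any platform’s actual logic: the DSP names, prediction values, and margin are hypothetical, and the classical second-price form is shown even though major exchanges largely moved to first-price auctions around 2019.

```python
# Toy sketch of a DSP bid decision and a second-price auction.
# All names, rates, and values below are hypothetical.

def compute_bid(p_click: float, p_conversion: float,
                value_per_conversion: float, margin: float = 0.2) -> float:
    """Bid the expected value of the impression, shaded by a target margin."""
    expected_value = p_click * p_conversion * value_per_conversion
    return expected_value * (1.0 - margin)

def second_price_auction(bids: dict) -> tuple:
    """Highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

bids = {
    "dsp_a": compute_bid(0.012, 0.05, 40.0),  # CTR 1.2%, CVR 5%, USD 40 value
    "dsp_b": compute_bid(0.020, 0.03, 30.0),
    "dsp_c": compute_bid(0.008, 0.10, 25.0),
}
winner, price = second_price_auction(bids)
```

In production the predictions come from large ML models and the auction must complete within the exchange’s timeout budget; the expected-value structure, however, is the common core across implementations.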

The data-flywheel structure. Advertising’s ML-deployment economics are particularly clean. Each ad impression produces a feedback signal (click or no-click; conversion or no-conversion); the feedback updates the prediction model; the updated model improves subsequent bidding decisions. The flywheel turns at extreme velocity (real-time updates against sub-second feedback in some configurations). The data accumulation across many advertisers and contexts produces substantial network effects: a DSP with broader bidding history makes better predictions than one with narrower history. Google, Meta, Amazon, and TikTok are positioned at this layer with cumulative-data positions that newer entrants cannot match.
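
The flywheel can be made concrete with a minimal online learner. This is a pedagogical sketch, not any platform’s implementation: the features, the simulated click model, and the learning rate are all invented, and real systems use far larger models and feature sets.

```python
import math
import random

class OnlineCTRModel:
    """Logistic regression updated one impression at a time (SGD on log loss)."""

    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x: list) -> float:
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x: list, clicked: int) -> None:
        # Gradient of log loss w.r.t. the weights is (p - y) * x.
        p = self.predict(x)
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * (p - clicked) * xi

# Simulate the flywheel: each impression's click feedback immediately
# updates the model that scores the next impression.
random.seed(0)
model = OnlineCTRModel(n_features=3)
for _ in range(5000):
    x = [1.0, random.random(), random.random()]  # bias, signal, noise
    clicked = 1 if random.random() < 0.1 + 0.4 * x[1] else 0  # feature 1 drives clicks
    model.update(x, clicked)
# The learned weights now favour the informative feature over the noise feature.
```

The network-effect claim in the text corresponds to the loop above running over more impressions: a DSP with a longer feedback history has simply taken more of these update steps across more contexts.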

Google as the historical leader. Google’s advertising business — built around AdWords (search ads) and AdSense (display ads on partner sites), with substantial extensions through YouTube, the Google Display Network, and the Performance Max product family — produced approximately USD 265 billion in revenue in 2024, roughly three-quarters of Alphabet’s total revenue. The ML infrastructure underlying this business has been continuously developed since 2000. The deployment depth is among the largest of any commercial ML application.

Meta’s positioning. Meta’s advertising business produced over USD 160 billion in 2024 revenue. The strategic positioning is different from Google’s: Meta’s surfaces (Facebook, Instagram, WhatsApp) are content-and-social-graph-driven rather than search-driven; the targeting infrastructure relies heavily on user-behaviour modelling. The Apple ATT impact (Section 10.3) was particularly damaging to Meta because of this dependency on cross-app tracking signals; Meta’s response has been to invest heavily in on-platform signal capture (creators, shopping, video) that does not depend on the cross-app tracking that ATT compromised.

The 2024–2026 generative-AI extension. All major advertising platforms have added generative-AI capabilities for ad creative generation (Section 10.2), automated campaign management (where the platform’s AI handles optimisation decisions traditionally made by human marketers), and conversational interfaces for advertisers. The deployment is broad but the value capture is modest at the per-advertiser level; the major platforms’ competitive positioning has shifted only marginally as a result. The structural question — whether generative AI changes the platforms’ relative positioning, or merely raises the operational baseline across the industry — remains unsettled. The current pattern suggests the latter.

10.2 Generative AI in ad creative

Ad creative generation is the most-visible application of generative AI in marketing in 2024–2026. The deployment thesis: where ad creative was previously produced by human creative teams (copywriters, designers, production companies) at substantial cost and time, generative AI can produce variants of comparable performance at near-zero marginal cost. The scale of variant production (hundreds or thousands of creative variants per campaign) supports finer-grained personalisation and better-optimised creative selection.

Meta Advantage+ Creative. Meta launched Advantage+ Creative in 2023 with progressive feature expansion through 2024–2025. The product allows advertisers to upload assets (product images, baseline ad copy) and have Meta’s AI generate variations — different headlines, images, layouts, calls-to-action — that are then tested against each other in delivery. Meta’s published performance data (mid-2024 disclosures) reports 5–15% improvement in cost-per-action metrics relative to manually-produced creative, with the impact varying by advertiser size and category. The product has been particularly successful at SMB scale, where the alternative (human creative production) is most cost-prohibitive.

Google Performance Max. Google’s equivalent product, Performance Max (launched 2021, with generative-AI extensions added through 2023–2024), automates campaign management across Google’s ad surfaces (Search, YouTube, Display, Shopping, Discover). Generative AI generates ad variants from advertiser-provided inputs; ML-driven optimisation handles bid management, audience selection, and creative rotation. The automation level is substantially higher than traditional Google Ads campaign management; advertiser control is correspondingly reduced, which has produced both efficiency gains for many advertisers and frustration for advertisers wanting more granular oversight.

Amazon DSP and the retail-platform pattern. Amazon’s advertising business has integrated generative-AI ad creation since 2024, with particular strength in product-page-derived creatives (the AI generates ads from existing product imagery and metadata). The retail-platform integration is structurally important: Amazon has direct access to the product-data and conversion-data that drive ad performance, in ways that pure-advertising platforms (Google, Meta) do not.

The independent-vendor ecosystem. A separate ecosystem of generative-AI-for-advertising vendors operates outside the major platforms. Pencil (founded 2018), Anyword (founded 2013), Jasper (founded 2021 in Texas), and Persado (founded 2012) each offer different positioning — copy-only, image-only, end-to-end, segment-specific — at varying price points. The market has consolidated through 2023–2025; smaller vendors have struggled to compete with platform-integrated alternatives that offer comparable capability with deeper feedback loops.

The deployment economics. Generative AI in ad creative produces cost compression but not cost elimination. The human creative teams that previously produced ads remain employed, with shifted responsibilities — managing AI-generated outputs, providing strategic direction, handling brand-voice consistency, producing the highest-stakes creative work that still requires human judgement. The pattern is consistent with Acemoglu’s recent productivity work: AI augmentation rather than replacement, with measurable but modest productivity gains.

Open questions about creativity and distinctiveness. A specific concern raised by advertising professionals (and partly supported by emerging research) is that broad adoption of generative-AI-driven creative produces increasingly homogeneous outputs across competing brands. When all advertisers use similar foundation models with similar prompt patterns, the resulting creative tends toward a regression-to-mean style that may reduce brand distinctiveness over time. The structural implication, if validated, is that brands’ competitive advantage from creative-and-brand-distinctive advertising diminishes as the industry generates more advertising via AI; advertising effectiveness may compress across competitors. The empirical evidence is still accumulating; the question is one to watch through 2026–2030.

10.4 The Cambridge Analytica era and reputational damage

The 2018 Cambridge Analytica revelations are the contemporary period’s deepest case of a marketing-data ecosystem failure with broad public-policy consequences. The case is structurally important because it shaped the regulatory environment that subsequent AI deployments operate within, and because the underlying failure modes — broad data access without granular consent; commercial use without ethical oversight; political-influence applications that the originating platform did not anticipate — generalise to many AI-deployment contexts.

The case. Cambridge Analytica was a UK-based political consultancy that, in 2014–2015, obtained data from approximately 87 million Facebook users via a third-party app developer (the academic researcher Aleksandr Kogan, operating under the company GSR). The data was obtained via Facebook’s then-permissive Graph API, which allowed app developers to obtain not only the data of users who explicitly installed the app (approximately 270,000 Kogan-app installations) but also the data of those users’ Facebook friends. Cambridge Analytica subsequently used the data to construct psychographic profiles intended to support targeted political-campaign advertising, with applications including the 2016 US presidential campaign of Donald Trump and the UK Brexit referendum.

The 2018 revelations — published by The Guardian (Cadwalladr) and The New York Times (Rosenberg, Confessore, Cadwalladr) on 17 March 2018, drawing on material provided by former Cambridge Analytica employee Christopher Wylie — produced rapid escalation. Within days, Facebook’s market capitalisation fell by approximately USD 100 billion; CEO Mark Zuckerberg testified before US Congress (April 2018) and the European Parliament (May 2018); regulatory investigations opened in the US, UK, EU, and several other jurisdictions; Cambridge Analytica filed for bankruptcy in May 2018.

The legal and regulatory aftermath. Facebook’s settlement with the US Federal Trade Commission (July 2019) included a USD 5 billion penalty — at the time, the largest privacy-related penalty in FTC history — and substantial corporate-governance reforms including the appointment of an independent privacy committee. The UK Information Commissioner’s Office issued the maximum penalty under pre-GDPR law (GBP 500,000); subsequent UK regulatory action has continued. The Securities and Exchange Commission separately fined Facebook USD 100 million in 2019 for misrepresenting the privacy risks. The EU’s GDPR enforcement against Facebook (Section 10.3) has been partly traceable to the Cambridge Analytica context.

The structural lessons. The case yields five lessons that have substantially shaped contemporary AI deployment.

Lesson 1 — broad data access creates unmanageable risk. Facebook’s Graph API was designed for legitimate developer use cases (third-party apps that integrated with Facebook social features); the same design enabled the Cambridge Analytica abuse. Once data is broadly accessible, the platform cannot meaningfully control downstream use. The lesson generalises: AI systems that have broad data access (foundation models trained on web-scraped data; enterprise systems with access to customer databases) face the same risk pattern.

Lesson 2 — consent at the user-app level does not extend to the user’s network. The Kogan app obtained explicit consent from its 270,000 direct users; the consent did not legitimately extend to the 87 million friends of those users whose data was also accessed. The architectural decision to allow app-developer access to friends’ data was the foundation of the failure. The lesson generalises: AI systems that work with relational data must obtain consent from all parties whose data is used, not just the directly-interacting party.

Lesson 3 — political and high-stakes applications require additional ethical oversight. Cambridge Analytica’s political application of the data — psychographic targeting in election campaigns — was not contemplated in Facebook’s original API design or in Kogan’s research-purpose claims. The downstream-use problem is structural: once data is shared, the originating platform cannot effectively control how it is used. The lesson generalises to AI applications in political contexts, including the 2024 election deepfake concerns (Section 10.9).

Lesson 4 — the post-incident management is itself a failure mode. Facebook’s initial response to the revelations was defensive — emphasising that the data sharing was technically permitted by the API at the time, that the company had taken corrective action when the issue was discovered in 2015, and that the responsibility lay with Cambridge Analytica rather than Facebook. The defensive posture extended the reputational damage and the regulatory exposure substantially. The pattern recurs in many AI deployment failures (compare Boeing 737 MAX in Section 9.7).

Lesson 5 — regulatory consequences extend across years. The Cambridge Analytica case has been a significant driver of subsequent regulatory action: the GDPR enforcement intensification (2018 onward); the FTC’s structural reform of Facebook (2019 settlement); the broader political-and-public-discourse demand for tech-platform regulation; the UK Online Safety Act (2023); the EU Digital Services Act (effective 2024). The compounding regulatory cost over the 2018–2026 period substantially exceeds the initial USD 5 billion FTC penalty. The lesson: a single high-profile failure can produce a regulatory environment that constrains the entire industry’s subsequent deployments.

The case is referenced in the Watson Health (Chapter 7) and Klarna (Chapter 8) cautionary cases as the third dimension of the broad pattern of AI-deployment failures: failure-of-validation (Watson), failure-of-staging (Klarna), and failure-of-data-governance (Cambridge Analytica). The three together describe much of the contemporary AI-deployment risk landscape.

10.5 The streaming-wars recommendation infrastructure

Streaming media — video, audio, and increasingly other formats — has been one of the most-mature AI deployment domains for over a decade. The competitive dynamics among streaming platforms (Netflix, Disney+, Amazon Prime Video, Apple TV+, HBO Max/Max, Hulu, Paramount+, Spotify, Apple Music, YouTube Music) have made recommendation infrastructure a primary strategic asset.

Netflix as the canonical case. Netflix’s recommendation infrastructure, developed continuously since the company’s DVD-rental era, is the longest-running consumer-recommendation system at scale. The 2006–2009 Netflix Prize (Section 8.1) established matrix factorisation as the field’s standard; the post-prize era at Netflix has seen the infrastructure evolve through deep-learning-based architectures. The 2024 Netflix recommendation system uses a combination of deep neural networks for candidate generation, ranking models for personalisation, and contextual signals (time of day, device, recent history) for fine-tuning. The deployment scale is substantial: Netflix’s 280+ million subscribers globally generate billions of recommendation impressions daily.
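
The matrix-factorisation approach the Netflix Prize popularised remains the simplest way to see the mechanics. The sketch below is a toy with invented ratings and dimensions — production systems use deep architectures on vastly larger data — but the core idea, learning user and item vectors whose dot product predicts preference, is as shown.

```python
import random

random.seed(42)
N_USERS, N_ITEMS, K = 4, 5, 2  # K latent dimensions

# Sparse observed ratings: (user, item) -> rating on a 1-5 scale (invented).
ratings = {(0, 0): 5, (0, 1): 4, (1, 0): 4, (1, 2): 2,
           (2, 3): 5, (2, 4): 4, (3, 3): 4, (3, 1): 1}

U = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(N_USERS)]
V = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(N_ITEMS)]

def predict(u: int, i: int) -> float:
    """Predicted rating is the dot product of user and item factors."""
    return sum(U[u][k] * V[i][k] for k in range(K))

lr, reg = 0.05, 0.02
for _ in range(500):  # SGD sweeps over the observed entries
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for k in range(K):
            uk, vk = U[u][k], V[i][k]
            U[u][k] += lr * (err * vk - reg * uk)  # regularised gradient step
            V[i][k] += lr * (err * uk - reg * vk)

# After training, predict(u, i) approximates the observed ratings and
# fills in unobserved (user, item) cells from the learned factors.
```

The unobserved cells are what make the method useful: the learned factors generalise to items the user has never interacted with, which is the basis of the demand-prediction use described above.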

The strategic role of recommendation at Netflix is documented in published company communications: approximately 80% of viewing comes from recommendation rather than search, suggesting that the recommendation infrastructure is the primary discovery mechanism on the platform. The infrastructure’s competitive role is dual — it improves user retention (better recommendations → more engagement → lower churn) and it informs content investment decisions (the recommendation infrastructure provides demand-prediction signals that inform which shows to greenlight or licence). The latter is particularly distinctive: Netflix’s content investment strategy has been substantially data-driven for over a decade, with ML-supported predictions informing decisions worth tens of billions of dollars.

Spotify and audio recommendation. Spotify’s recommendation infrastructure is comparable in sophistication, with adaptations for the audio domain. Discover Weekly (launched 2015) became the platform’s signature personalisation feature; the infrastructure has expanded through Daily Mixes, Release Radar, the Spotify Wrapped annual feature, and more recently the AI DJ (launched 2023, an AI-voiced radio-style format). The audio context has specific characteristics that differ from video — listening sessions are longer, attention is more divided, taste is more genre-and-artist-anchored — that the ML methodology accommodates with audio-specific architectural choices.

YouTube and the foundation-model frontier. YouTube’s recommendation system (Section 8.1) has evolved from the 2016 two-tower neural architecture to substantially more sophisticated systems. The 2024 system incorporates transformer-based architectures, multi-task learning across the platform’s many video-format surfaces (long-form, Shorts, music, kids), and increasingly foundation-model-based content understanding. The 1+ billion daily active users on YouTube produce a data flywheel that no other video platform can match.
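
At serving time, the two-tower idea reduces to scoring a user embedding against item embeddings and keeping the top k. The sketch below uses hand-set toy vectors — the real towers are learned neural networks, and retrieval over millions of items uses approximate nearest-neighbour search — and all names and values are hypothetical.

```python
# Toy two-tower retrieval: dot-product scoring plus top-k selection.
# Embeddings are hand-set for illustration, not learned.

def dot(a: list, b: list) -> float:
    return sum(x * y for x, y in zip(a, b))

def top_k(user_vec: list, item_vecs: dict, k: int) -> list:
    """Return the k item ids with the highest dot-product score."""
    return sorted(item_vecs, key=lambda vid: dot(user_vec, item_vecs[vid]),
                  reverse=True)[:k]

# Hypothetical 3-d embedding space (axes loosely ~ music / gaming / news).
user = [0.9, 0.1, 0.3]  # output of the "user tower" for one viewer
items = {                # outputs of the "item tower", precomputed
    "concert_clip": [0.8, 0.0, 0.1],
    "speedrun":     [0.1, 0.9, 0.0],
    "press_brief":  [0.2, 0.0, 0.9],
    "music_doc":    [0.7, 0.1, 0.4],
}
candidates = top_k(user, items, k=2)  # music-leaning items surface first
```

Because item embeddings can be precomputed, serving cost is dominated by the nearest-neighbour lookup rather than model inference, which is what makes the architecture viable at billion-user scale.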

TikTok and the algorithmic-discovery thesis. TikTok’s For You algorithm (Section 8.1) represents a different strategic positioning: the algorithm is not just a discovery mechanism within a broader content surface but is the entire surface. Users do not search or browse the platform’s content library in any traditional sense; the algorithm presents content. The strategic implications are substantial: where Netflix benefits from its recommendation infrastructure as a feature within a broader product, TikTok’s recommendation infrastructure is the product. The 2024 US legislative action (the Protecting Americans from Foreign Adversary Controlled Applications Act, which mandates ByteDance’s divestiture of TikTok or its US operations) is partly a response to this strategic centrality of the algorithm.

The streaming wars and content economics. The competitive dynamics among streaming platforms have produced substantial industry consolidation through 2022–2026: Warner Bros. Discovery’s 2022 formation; Disney’s 2023 restructuring of its streaming business; Paramount Global’s 2024 merger with Skydance Media. The content economics have tightened — the 2018–2022 period of aggressive content investment (the “peak TV” era) has given way to discipline, with streaming platforms reducing content spend while focusing investment on data-validated successes. The recommendation infrastructure plays a role in this discipline: the platforms with better infrastructure can more confidently invest in content with predictable demand; platforms with weaker infrastructure are more exposed to investment errors.

The 2024–2026 generative-AI extension. The contemporary generative-AI wave is extending streaming-platform AI in several directions: AI-generated personalised promotional materials (auto-generated previews tuned to user taste; personalised thumbnails); conversational discovery (chatbot interfaces over the recommendation infrastructure); AI-supported content production (the more controversial dimension; see Sections 10.6 and 10.7). The deployment of conversational discovery is at early scale; Netflix introduced limited conversational features in 2024 with continued expansion through 2025. The strategic question is whether conversational discovery substantially changes user behaviour or remains a marginal augmentation to the dominant browse-and-recommendation paradigm.

10.6 Generative AI in content production — Sora, Veo, Runway, Pika

The 2023–2026 period has seen the emergence of generative video as a viable content-production capability. The trajectory has been unusually fast even by AI-field standards: at the start of 2023, generative video was demonstrably uncompetitive with conventional production; by mid-2025, frontier generative video is approaching production utility for specific applications.

OpenAI Sora. Sora was announced by OpenAI in February 2024 with a public demonstration of generated video at unprecedented quality and length (up to one minute of coherent video at 1080p resolution). The model was made available to a limited research-access cohort through 2024; broader public availability launched in December 2024 via the Sora subscription product (USD 20/month tier integrated with ChatGPT Plus; USD 200/month tier with substantially expanded usage). The launch was substantial both technically and commercially; Sora became the early-2025 reference for what was possible in generative video. Sora 2, released in October 2025, extended the capability with improved physics simulation and longer generations.

Google Veo. Google’s Veo (announced in May 2024 with Veo 1; Veo 2 in December 2024; Veo 3 in 2025) is the parallel offering from Google DeepMind, integrated with Google’s broader Gemini-and-Vertex-AI infrastructure. Veo’s capability has been competitive with Sora through 2024–2025, with Google’s integration of the model with the YouTube and Workspace ecosystems providing a different distribution surface than OpenAI’s.

Runway. Runway (founded 2018, New York) was an early-stage generative-video startup before the Sora-Veo wave; the company’s Gen-1 (2023), Gen-2 (2023), Gen-3 Alpha (2024), and Gen-4 (2025) models have been targeted at the professional creative-tools market. The product positioning emphasises integration with established post-production workflows; Runway has been used in mainstream film and television production for specific applications (visual-effects supplementation, concept generation, certain background elements). The 2024 partnership announcements with Lionsgate and several studios indicate the commercial trajectory.

Pika Labs. Pika (founded 2023, Palo Alto) has been a competitive entrant emphasising rapid iteration of features. The Pika 1.0 release (December 2023) and subsequent updates have positioned the company at the consumer-and-creator end of the generative-video market.

The deployment realities. Generative video in 2024–2026 is operational at narrow scales and aspirational at broader scales. The specific applications that have reached commercial scale include short-form social-media content (creators using generative tools to produce content for TikTok, Instagram Reels, YouTube Shorts), advertising creative (variant generation for video ads), early-stage concept work (storyboarding, visual-effects pre-visualisation), and specific narrow film-and-television applications. The applications that remain aspirational include feature-length generation, character-consistent narrative across long durations, and the broader replacement of conventional production. The trajectory is upward; the timeline for the broader applications is debated, with some industry commentators projecting feature-length generative production by 2027–2028 and others projecting it as a 2030+ trajectory.

The structural questions. Three questions define the medium-term outlook.

The capability question. Foundation-model capability in video generation has improved substantially through 2023–2026; whether the trajectory continues at the same rate, or whether technical limits are encountered, is unsettled. The energy and compute requirements of frontier generative video are substantial (a Sora 2 generation consumes orders of magnitude more compute than equivalent text generation); whether the cost trajectory supports broad deployment depends on continued compute-cost reduction.

The labour question. The film, television, and adjacent creative industries’ labour is structurally affected by generative video’s deployment. The 2023 Hollywood strikes (Section 10.7) explicitly negotiated AI-related contract terms; the durability of these terms across subsequent contracts is one of the most-significant industry-trajectory questions. The labour dimension is not separable from the capability and economic questions; the negotiation between technology deployment and labour protection will shape how the technology is actually used.

The rights question. Foundation-model training on copyrighted video content (films, TV, music videos, user-generated content) has produced an unresolved legal landscape. The major lawsuits — including the New York Times v. OpenAI (Section 10.9), the multiple music-industry lawsuits (Section 10.8), and parallel litigation in film and television contexts — will shape what training data is legitimately available and on what terms. The outcome materially affects the economic structure of the generative-AI industry.

10.7 The 2023 Hollywood strikes — labour-AI collision

The 2023 Hollywood writers’ and actors’ strikes were the most-significant labour-AI collision in any industry to date. The strikes produced AI-related contract provisions that have been substantially incorporated into subsequent labour negotiations across creative industries. The case is structurally important because it demonstrates that AI deployment in industries with organised labour can be substantially constrained by negotiation, in ways that AI deployment in industries without organised labour is not.

The Writers Guild of America strike. The WGA’s 11,500 members went on strike on 2 May 2023, with the strike running 148 days until 27 September 2023 — the longest WGA strike since the 1988 strike. The bargaining context included multiple disputes (residuals, streaming-era compensation, room-size minimums), but AI was a particularly visible concern. The WGA’s position included specific demands: that AI not be used to write or rewrite material credited to writers; that AI not be used as source material; that writers’ work not be used to train AI without consent and compensation; that the union have approval rights over any AI-generated content used in production.

The eventual settlement (memorandum of agreement reached 24 September 2023; ratified 9 October 2023) included specific AI provisions: AI-generated material cannot be considered source material under the contract; AI cannot be used to write or rewrite literary material; writers retain residuals on material developed using AI tools (provided the writer is the credited author); studios cannot require writers to use AI; writers’ material cannot be used to train AI without consent. The provisions were substantially what the WGA had sought.

The SAG-AFTRA strike. The Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA, with 160,000+ members) went on strike on 14 July 2023, with the strike running 118 days until 9 November 2023. AI concerns were similarly central, with specific issues including digital replicas of actors (synthetic-performer technology), background-actor scanning (where productions had been using full-body 3D scans of background actors), voice cloning, and likeness rights for deceased performers.

The eventual settlement (tentative agreement 8 November 2023; ratified 5 December 2023) included specific AI provisions: digital replicas of performers require consent and compensation for each use; performers must be informed of AI use; performers retain control over likeness use posthumously (within bounded periods); compensation structures for AI-generated performance use were established. The compensation provisions are particularly notable: a digital replica use of an actor’s likeness commands the same fee structure as a conventional performance.

The structural implications. The strikes produced AI-related contract provisions that constitute the clearest labour-protective response to AI deployment in any major industry to date. The provisions have substantially constrained Hollywood AI deployment; specific applications that the technology supports (digital replicas, fully-AI-generated content, training on actors’ performances) are either prohibited or require specific consent and compensation. The labour-management balance in Hollywood AI is therefore structurally different from the balance in industries without organised labour.

The provisions’ durability across subsequent contracts is the next test. The current WGA and SAG-AFTRA contracts run through 2026; the next negotiation cycle will determine whether the AI provisions are maintained, expanded, or modified. The studios’ position has been that the provisions are unduly restrictive; the unions’ position has been that the provisions are minimum protections against rapidly-improving technology. The 2026 negotiation will substantially shape industry trajectory.

The transferability question. Whether the Hollywood pattern transfers to other creative industries — music, journalism, advertising — depends on whether those industries’ labour is similarly organised. Some industries have strong professional associations (the Authors Guild in publishing; the National Press Club and various journalism unions; the Recording Academy’s policy positions) but weaker collective-bargaining apparatus. Others (advertising, much of digital content production) have minimal organised-labour presence. The Hollywood pattern thus represents a specific labour-organised-industry response that may or may not generalise; the generalisation will be unevenly distributed across industries.

10.8 Music AI — Suno, Udio, and the rights battles

Generative music has followed a trajectory parallel to but distinct from generative video. The technical capability emerged earlier than video (Google's Magenta project from 2016; OpenAI's MuseNet from 2019; audio-generation research at multiple labs throughout the period); commercial deployment has accelerated through 2022–2026 with the rise of dedicated generative-music platforms.

Suno. Suno (founded 2022, Cambridge MA, by Mikey Shulman and other former Kensho AI employees) has been the highest-profile generative-music entrant. The company's product (public launch in late 2023, with v3 released in early 2024 and v4 in late 2024) generates complete songs — vocals, instrumentation, lyrics — from text prompts. The user interface is consumer-friendly; users without musical training can produce songs of meaningful quality in minutes. The 2024 funding round of approximately USD 125 million at roughly USD 500 million valuation, led by Lightspeed Venture Partners, signalled the commercial maturation of the category.

Udio. Udio (founded 2023, by former Google DeepMind researchers including David Ding) launched its public product in April 2024 with capabilities competitive with Suno. The company's positioning has emphasised audio quality and music-creator-tool integration over Suno's consumer focus. Its funding has been smaller — an approximately USD 10 million seed round (2024), with backers including Andreessen Horowitz — but the speed with which a second credible entrant emerged signalled the category's competitive density.

The RIAA lawsuit. In June 2024 the Recording Industry Association of America filed lawsuits against Suno and Udio (filed in federal court in Massachusetts and New York respectively) on behalf of major record labels (Sony Music Entertainment, Universal Music Group, Warner Records). The complaints allege that both companies trained their models on copyrighted recordings without licence, and seek statutory damages of up to USD 150,000 per infringed work — potentially billions of dollars in cumulative damages given the scale of the alleged training data.

The cases are structurally important because they directly address the foundational question of generative-AI training-data legitimacy. Suno and Udio have not denied that their models were trained on copyrighted music; their defence has focused on fair-use arguments, on the transformativeness of the resulting outputs, and on the absence of substantial market harm. The RIAA’s position is that the training itself constitutes infringement regardless of output use, and that the resulting market harm — generative music potentially displacing licensed music in many use contexts — is substantial.

The litigation is ongoing as of 2026; outcomes will substantially shape the broader generative-AI rights landscape. Adjacent cases (The New York Times v. OpenAI; various visual-art rights cases; the music publishers' lyrics lawsuit against Anthropic filed in October 2023) are part of the same broader landscape; their cumulative resolution will determine how foundation-model training operates in copyright-intensive domains.

The economic implications. If the courts rule for the RIAA, the licensing-required regime would substantially raise the cost of building generative-music systems. The major foundation-model labs (OpenAI, Anthropic, Google, Meta) would face training-data cost increases; smaller startups would face proportionally larger barriers. The economic shift would advantage the major labs (who can negotiate licensing at scale) over smaller entrants (who cannot). Conversely, if the courts rule for Suno-Udio (recognising fair-use protection for foundation-model training), the training-data landscape remains broadly open but the commercial implications for music creators are more challenging.

The producer-and-creator response. Within the music industry, the response to generative AI has been mixed. Some artists (Grimes, Imogen Heap, Holly Herndon) have engaged with the technology in specific projects, typically licensing AI voice models while retaining significant control. Others have been more sceptical: the Artist Rights Alliance's April 2024 open letter, signed by more than 200 artists, called on AI developers to stop using music to train models without permission. The proposed NO FAKES Act (US legislation introducing federal protection for voice and likeness rights) reflects the political momentum for artist-protective frameworks; the legislation has not yet passed but indicates the policy direction.

10.9 News, journalism, and the AI-content problem

Generative AI’s interaction with news and journalism has produced one of the most-contested AI-deployment landscapes. The interaction operates across multiple layers: AI-generated news content (both legitimate AI-augmented reporting and inauthentic AI-generated content); foundation-model training on news content; AI-supported newsroom operations; AI-driven content moderation and platform policy.

The New York Times v. OpenAI lawsuit. Filed in December 2023 in the Southern District of New York, The New York Times v. OpenAI and Microsoft is the most-prominent contemporary lawsuit alleging that foundation-model training on copyrighted news content constitutes infringement. The complaint includes specific examples of GPT-4 reproducing substantial portions of New York Times articles in response to particular prompts; the relief sought includes damages potentially in the billions of dollars and an injunction requiring destruction of training data and models containing The Times’ content. The defendants’ positions have invoked fair-use and adjacent doctrines.

The case is structurally similar to the music-rights cases (Section 10.8) but operates in a different doctrinal landscape. News content has specific characteristics that complicate the fair-use analysis: it is highly time-sensitive (current news is most valuable when fresh); substitution effects between source articles and AI-generated summaries are direct; and the news industry's economic model is already structurally challenged. The Times has been unusually willing to litigate aggressively; many other news organisations have negotiated licensing arrangements with foundation-model providers (notable arrangements include Axel Springer, AP, News Corp — whose titles include the Wall Street Journal — The Atlantic, and Vox Media). The bifurcation — some news organisations licensing, others litigating — creates an uneven licensing landscape that shapes both the litigation outcomes and the commercial environment.

AI-generated content on news sites. A separate concern is the deployment of generative AI to produce content directly on news sites. The 2023 incident at Sports Illustrated (where AI-generated content with fabricated author profiles was published, exposed in November 2023 by Futurism) was particularly damaging to the publication’s reputation; CNET had a similar incident earlier in 2023. The 2024 trajectory has produced more cautious deployment: most major news organisations now have explicit AI-use policies that distinguish AI-augmented reporting (acceptable with disclosure) from AI-generated reporting (largely prohibited). The implementation of these policies is variable; smaller publications have adopted aggressive AI-content strategies that the major publications have largely resisted.

The deepfake problem. Foundation-model-driven generation of synthetic content depicting real people — deepfakes — has been a persistent concern since the technology's emergence in 2017. The 2024 trajectory was concerning. The technology's accessibility increased substantially: consumer-grade deepfake creation tools are now readily available. The 2024 election cycle in multiple countries (US, India, Indonesia, EU member states) saw demonstrably synthetic content circulating, and specific incidents — the January 2024 robocall using a synthetic Joe Biden voice; multiple instances in the Indian election; deepfake incidents in the South Korean election cycle — produced regulatory responses, including the FCC's February 2024 declaratory ruling that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act.

Election integrity 2024 — what happened and what didn't. The 2024 election cycle was widely anticipated to be the "AI election", with substantial deepfake-driven manipulation. The actual outcome was more mixed. Specific deepfake incidents occurred but did not appear to have determinative effects on outcomes; platform-level countermeasures (content provenance, AI-content labelling, fact-checker integration) were partially effective; and voters proved more resilient to AI-generated misinformation than the most pessimistic projections suggested. The post-election analyses (several major studies, including work from the Stanford Internet Observatory) concluded that AI-generated content circulated at substantial scale but that its measurable effects on outcomes were smaller than feared. The pattern is encouraging in the short term but concerning in the long term: as the technology continues to improve and platform countermeasures face rising costs, the future-election trajectory is uncertain.

The content-moderation interaction. Major platforms (Meta, X, TikTok, YouTube) operate AI-driven content moderation at scale. The systems have had mixed success — they handle volume that human moderation could not match, but their false-positive and false-negative rates affect both users and platform reputation. X's trajectory under Elon Musk's ownership (the platform was acquired in October 2022 and renamed in 2023) has been particularly controversial; the company's reduction in human-moderation capacity, combined with policy changes, has produced documented increases in harmful-content circulation. The Trust and Safety industry's broader trajectory (covered in Chapter 8 in the retail context) applies similarly here.

10.10 Grid management and renewable forecasting

Energy operations are the third major domain this chapter addresses. AI deployment in energy has been substantial and largely productive — predictive maintenance for generation and transmission infrastructure, demand forecasting, grid-stability management, renewable-generation forecasting. The deployment is generally lower-profile than the marketing or media deployments but operationally important.

The grid-management challenge. Electricity grids must continuously balance generation against demand within tight tolerances; deviations trigger frequency-and-voltage instability that can cascade into major outages. The historical management approach combined extensive operational discipline (dispatch scheduling, reserve requirements, frequency-control mechanisms) with relatively conservative engineering margins. The transition to renewable generation — wind and solar are inherently variable; batteries and other storage are increasingly significant components — has substantially increased the management complexity. AI deployment in grid management is largely a response to this increased complexity.

Wind and solar forecasting. Wind and solar generation depends on weather, which is forecastable to varying degrees of accuracy. The economic value of accurate generation forecasting is substantial: better forecasts allow grid operators to schedule conventional generation more efficiently, reducing both costs and emissions. ML-based forecasting has produced substantial improvements over classical numerical-weather-prediction approaches, particularly at short horizons (hours-to-days).

The most-publicised case is Google DeepMind's wind-energy work, announced in 2019. DeepMind partnered with Google's renewable-energy operations team to apply ML to wind-power forecasting at Google's wind-farm portfolio. The deployment used deep neural networks to forecast wind-power output 36 hours ahead, with the forecast feeding into day-ahead electricity-market scheduling decisions. DeepMind's reported outcome was an approximately 20% increase in the value of Google's wind energy relative to baseline forecasting approaches — driven by more-accurate market scheduling rather than by increased generation. The work was operationally deployed at Google's wind-farm scale (approximately 700 MW capacity at the time) and has continued through subsequent extensions.
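The economics can be made concrete with a toy settlement model (all prices and volumes below are hypothetical illustrations, not DeepMind's figures): a generator commits its forecast output to the day-ahead market and pays an imbalance charge on deviations, so a tighter forecast earns more for the same energy.

```python
# Toy day-ahead settlement model. All numbers are hypothetical.
DA_PRICE = 50.0            # day-ahead price, $/MWh (assumed)
IMBALANCE_PENALTY = 20.0   # cost per MWh of deviation from commitment (assumed)

def market_value(actual_mwh, committed_mwh):
    """Revenue = day-ahead sale of the committed volume, minus a penalty
    on the absolute deviation between actual and committed output."""
    revenue = 0.0
    for actual, committed in zip(actual_mwh, committed_mwh):
        revenue += committed * DA_PRICE
        revenue -= abs(actual - committed) * IMBALANCE_PENALTY
    return revenue

actual = [120, 80, 150, 60]      # realised hourly output, MWh
naive = [100, 100, 100, 100]     # climatology-style flat forecast
ml = [115, 85, 140, 65]          # tighter ML forecast

print(market_value(actual, naive))  # 17400.0 — large deviations penalised
print(market_value(actual, ml))     # 19750.0 — same energy, better value
```

The mechanism matches the DeepMind claim in kind if not in detail: the value gain comes from scheduling accuracy, not from generating more electricity.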

Battery storage optimisation. Battery energy storage systems are increasingly significant components of modern grids. Optimising battery dispatch — when to charge, when to discharge, at what price — requires forecasting future electricity prices, generation conditions, and demand. ML deployment in battery optimisation is a specific use case where the methodology has produced documented value. Tesla’s Autobidder platform, developed for Tesla’s grid-scale battery deployments, uses ML-driven price forecasting to optimise battery dispatch; the platform’s revenue from energy-market arbitrage is reportedly material to the project economics.
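A minimal dispatch sketch illustrates the arbitrage logic (a generic illustration under assumed prices and battery parameters, not Autobidder's proprietary method):

```python
def arbitrage_profit(prices, power_mw=10.0, energy_mwh=20.0, efficiency=0.9):
    """Greedy single-cycle dispatch: buy in the cheapest hours until the
    battery is full, sell in the priciest hours until it is empty.
    (Ignores charge-before-discharge ordering for brevity.)"""
    hours = int(energy_mwh / power_mw)     # hours needed to fill or empty
    ranked = sorted(prices)
    charge_cost = sum(ranked[:hours]) * power_mw                      # buy low
    discharge_revenue = sum(ranked[-hours:]) * power_mw * efficiency  # sell high
    return discharge_revenue - charge_cost

forecast = [30, 25, 20, 40, 90, 110, 70, 35]   # $/MWh, hypothetical day
print(round(arbitrage_profit(forecast), 2))    # 1350.0
```

The hard part in production is the `prices` input: the dispatch rule is trivial once the price forecast is good, which is why the ML effort concentrates on forecasting rather than on the dispatch logic itself.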

The major-utility deployment. Major utility operators globally have deployed ML-based grid management systems through 2018–2026. Notable examples include National Grid plc (UK; ML-based demand forecasting); EDF (France; nuclear-plant operations optimisation); Engie (France/Belgium; renewable forecasting); EnergyAustralia and Origin Energy (Australian context); Tenaga Nasional (Malaysia). The deployments are operational rather than transformative — they produce measurable efficiency gains but do not fundamentally change grid management’s structural challenges.

The grid-stress events of 2024. A specific 2024 development worth noting is the increased stress on grids globally driven partly by AI-related electricity demand growth (Section 10.12). Multiple regions experienced grid-stress events through 2024 that were partially attributable to data-centre-driven demand growth: PJM Interconnection (the US Mid-Atlantic) reported substantial demand growth driven by Northern Virginia’s data-centre concentration; ERCOT (Texas) reported similar pressure from data-centre and crypto-mining demand; Ireland’s grid faced multiple stress events in 2024 partly attributable to data-centre concentration around Dublin. The grid-stress events have produced policy responses including data-centre permitting reform in multiple jurisdictions and accelerated investment in transmission and generation capacity.

10.11 Energy operations — predictive maintenance and beyond

Beyond grid management, AI deployment in energy operations covers a broad spectrum of applications: predictive maintenance for generation and transmission infrastructure, demand-side management, oil-and-gas applications, exploration-and-extraction support, and energy-efficiency optimisation across industrial-and-commercial applications.

Predictive maintenance for power generation. Power-generation equipment — turbines, generators, boilers, transformers, transmission infrastructure — has been a major predictive-maintenance deployment domain (Chapter 9, Section 9.2 covers the manufacturing-side parallels). GE Power, Siemens Energy, and Mitsubishi Heavy Industries have all developed substantial in-house predictive-maintenance platforms for their installed-equipment bases. The deployment economics are similar to manufacturing predictive-maintenance: cost savings on the order of 10–20% relative to preventive-maintenance baselines, with corresponding reliability improvements.

Wind-turbine and solar-farm operations. Renewable-generation infrastructure has specific predictive-maintenance challenges. Wind turbines operate in remote locations with high access costs; component failures can cascade into significant downtime. Solar farms have lower per-unit failure rates but more components in aggregate. ML-based monitoring (combining vibration analysis, thermal imaging, and operational telemetry) has produced documented value in renewable operations; the Vestas Wind Systems and Siemens Gamesa platforms (the major wind-turbine OEMs’ service offerings) integrate substantial AI components.

Oil and gas applications. The oil and gas industry has been a substantial AI deployer, despite the industry’s broader regulatory and reputational pressures. Application areas include reservoir characterisation (where seismic data is processed using ML methods); drilling-operations optimisation (where real-time sensor data informs drilling parameters); refinery-operations control (where ML-supported process control reduces emissions and improves yields); and predictive maintenance for offshore-and-onshore infrastructure. Major operators (ExxonMobil, Shell, BP, TotalEnergies, Aramco, Petronas) have all developed substantial AI capabilities in-house; the integration with their broader operations technology stacks (Schlumberger, Halliburton, Baker Hughes, Weatherford) extends the deployment depth.

The methodology transfer. The AI methodologies deployed in energy operations are largely the same as those deployed in manufacturing (Chapter 9): time-series ML for predictive maintenance; computer vision for asset inspection; reinforcement learning for control systems; generative AI for documentation and operations support. The transferability is not coincidental; the underlying problems are structurally similar. The energy sector has historically been somewhat slower than manufacturing in adopting frontier methodologies, partly reflecting the regulated-utility structure of much of the sector and partly reflecting the longer asset-life cycles that produce slower technology-refresh rates. The 2024–2026 trajectory has shown some convergence; the major energy operators are increasingly comparable to manufacturing leaders in AI deployment depth.

10.12 The energy intensity of AI itself — the data centre problem

The 2024–2026 period has produced one of the contemporary era’s most-significant infrastructure constraints: the energy intensity of AI itself, particularly the foundation-model-training and -inference loads. The constraint connects energy and AI in ways that are reshaping both industries.

Data-centre electricity consumption. Data centres globally consumed approximately 460 TWh of electricity in 2022 — about 2% of global electricity consumption (IEA estimate). The IEA’s 2024 Electricity 2024 analysis projected that data-centre consumption could reach 1,000+ TWh by 2026, with the increase predominantly driven by AI-related compute. Subsequent analyses (Goldman Sachs, BCG, McKinsey through 2024–2025) projected even more aggressive growth trajectories, with data-centre electricity consumption potentially reaching 4–7% of global electricity consumption by 2030. The growth represents the single largest individual source of electricity-demand growth in advanced economies and produces substantial pressure on grid infrastructure, generation capacity, and transmission capability.
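The quoted figures can be sanity-checked with back-of-envelope arithmetic (the 2%-per-year base-growth assumption below is mine, for illustration only):

```python
# Back-of-envelope check on the IEA figures quoted above: 460 TWh at ~2%
# of global consumption implies a global base near 23,000 TWh; the
# 1,000 TWh projection for 2026 is then roughly 4% of a similar base.
dc_2022_twh = 460
global_2022_twh = dc_2022_twh / 0.02        # implied global base, ~23,000 TWh
dc_2026_twh = 1000                          # IEA projection quoted above
base_growth = 1.02 ** 4                     # assumed 2%/yr global demand growth
share_2026 = dc_2026_twh / (global_2022_twh * base_growth)
print(round(global_2022_twh))               # 23000
print(round(share_2026 * 100, 1))           # 4.0 (per cent)
```

A doubling of data-centre consumption thus roughly doubles the sector's share of global demand, consistent with the 4–7%-by-2030 projections cited above.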

The major hyperscaler procurement. The hyperscale data-centre operators (Microsoft, Google, Amazon, Meta) have responded to the energy-supply constraint with substantial generation-procurement programmes. The procurement strategy combines renewable-energy purchases (the established approach), specific renewable-development partnerships, and increasingly nuclear procurement.

The Microsoft Three Mile Island deal, announced in September 2024, exemplifies the pattern. Microsoft contracted with Constellation Energy to restart Unit 1 of the Three Mile Island nuclear plant (which had been retired in 2019) with the entire output dedicated to Microsoft’s data-centre operations under a 20-year power-purchase agreement. The deal is unusual in its specifics — Three Mile Island has historical resonance as the site of the 1979 partial-meltdown accident at the adjacent Unit 2 reactor (Unit 1 was unaffected; the adjacent Unit 2 has remained shut since 1979) — but it reflects a broader pattern of hyperscaler nuclear procurement.

Amazon's parallel announcements in 2024 included a USD 650 million purchase of a data-centre campus from Talen Energy, powered by adjacent nuclear capacity at the Susquehanna nuclear plant, and a 2025 partnership with Dominion Energy for additional nuclear capacity in Virginia. Google's October 2024 announcement of a multi-reactor agreement with Kairos Power for small modular reactor (SMR) procurement signalled the same pattern with newer-technology nuclear. Meta's December 2024 request for proposals for nuclear capacity followed the trajectory. The cumulative scale of hyperscaler nuclear procurement announced through 2024–2025 exceeded 10 GW of contracted capacity, comparable in scale to the entire active nuclear-development pipeline outside Asia.

The grid-and-supply-chain implications. The hyperscaler energy-procurement strategy produces substantial grid-and-supply-chain implications. The nuclear capacity coming online from 2024–2030 is largely committed to hyperscaler use rather than to general grid supply, which reduces the supply available to other consumers. The 2024 grid-stress events (Section 10.10) reflect this dynamic; data-centre demand growth that was previously a marginal grid issue has become a primary concern for grid operators in regions with substantial data-centre concentration.

Carbon-aware computing. A specific response to the energy-intensity problem is carbon-aware computing — the practice of scheduling AI workloads to coincide with periods of low-carbon electricity availability. The methodology was pioneered by Google’s data-centre operations through 2020–2022; subsequent extension to other hyperscalers and to enterprise compute has been substantial. The technical implementation involves real-time grid-carbon-intensity monitoring and workload-scheduling integration; major providers (Google Cloud, Microsoft Azure, AWS) have published carbon-aware-computing tools through 2023–2025. The deployment realism check: carbon-aware computing reduces emissions per unit of compute but does not reduce total emissions; the underlying load-growth dynamic dominates the carbon-intensity-of-each-compute-cycle dynamic at current scales.
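The scheduling core can be sketched in a few lines (a generic illustration with hypothetical intensity values, not any provider's actual tool): a deferrable workload is shifted to the contiguous window with the lowest forecast grid-carbon intensity.

```python
def best_start(intensity_forecast, job_hours):
    """Start index of the contiguous window with the lowest total
    forecast carbon intensity (and hence lowest job emissions)."""
    starts = range(len(intensity_forecast) - job_hours + 1)
    return min(starts, key=lambda s: sum(intensity_forecast[s:s + job_hours]))

# Hourly grid-carbon-intensity forecast, gCO2/kWh (hypothetical values
# showing a midday solar dip).
forecast = [420, 380, 300, 180, 150, 160, 240, 390]
print(best_start(forecast, 3))   # 3 — run the 3-hour job over hours 3–5
```

The realism check in the main text applies directly: this shifts *when* the job runs, not *whether* it runs, so it lowers emissions per job without touching the aggregate load-growth dynamic.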

The DeepMind data-centre cooling case. A separate AI-energy case worth noting is Google DeepMind's 2016 work on data-centre cooling optimisation. DeepMind applied reinforcement learning to the control of cooling systems at Google's data centres, with the system producing an approximately 40% reduction in cooling energy consumption (Evans and Gao, 2016). The deployment was operational rather than research-only — the system has continued in production at Google data centres through subsequent updates. The case is structurally important because it demonstrates AI's capacity to reduce its own energy consumption, partly mitigating the broader load-growth concerns. The 40% figure has become a frequently cited reference; the structural conclusion — AI-driven optimisation can produce substantial energy savings in adjacent operations — generalises to many infrastructure-operations contexts.

The structural question. The AI-energy interaction is one of the contemporary era’s most significant infrastructure challenges. The technology improvements (more-efficient inference; quantisation; sparse activation; distillation to smaller models) are reducing per-unit-of-AI-output energy intensity, but the aggregate AI deployment is expanding faster than the per-unit improvements compensate. The grid-and-generation-capacity expansion is multi-year (transmission upgrades take 5–10 years; new generation 5–15 years); the AI deployment growth is annual. The mismatch produces the contemporary tension. Resolution will come from some combination of: AI-deployment growth slowing; per-unit efficiency improvements continuing; generation-and-grid capacity expanding rapidly; AI-deployment migrating to lower-cost geographies (the geographic redistribution dynamic); and possibly substantial AI-deployment moderation in response to energy-cost economics. The 2026–2030 trajectory will reflect the actual interplay of these forces.
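The mismatch between per-unit efficiency gains and aggregate growth is simple arithmetic (both rates below are hypothetical, chosen only to illustrate the dynamic):

```python
efficiency_gain = 0.20   # per-unit energy intensity falls 20%/yr (assumed)
workload_growth = 0.60   # deployed AI workload grows 60%/yr (assumed)

# Aggregate energy use scales with (workload volume) x (energy per unit):
net = (1 + workload_growth) * (1 - efficiency_gain) - 1
print(round(net * 100, 1))   # 28.0 — aggregate demand still grows ~28%/yr
```

Under these assumed rates, efficiency would need to improve by more than 37.5% per year (1/1.6 − 1) just to hold aggregate demand flat, which is why the per-unit improvements have not so far offset deployment growth.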

10.13 Australian and Malaysian regional context across the three sectors

The Australian and Malaysian contexts for marketing, media, and energy differ from the US and EU patterns described in the chapter’s main sections. The differences are partly structural (different industry compositions; different regulatory frameworks; different cultural-and-language characteristics) and partly stage-of-development (smaller domestic markets; different stages of digital-economy maturation).

Australian marketing and advertising. The Australian advertising market is substantial (approximately AUD 25 billion in 2024), with major domestic players (News Corp Australia; Nine Entertainment Co.; Seven West Media) competing alongside the global platforms (Google, Meta, TikTok). The local-market dynamics include a strong public-broadcaster presence (ABC, SBS) that operates on a different commercial logic from the private players, and an active Australian Competition and Consumer Commission (ACCC) regulatory presence that has been notably aggressive on platform issues. The 2021 News Media Bargaining Code — which requires Google and Meta to negotiate with Australian publishers for news-content licensing — is one of the most prominent regulatory-platform interventions globally; its effect on subsequent platform behaviour has been substantial (Meta's 2024 decision to discontinue Facebook News in Australia, announced in March and effective from April 2024, is partly traceable to the code's economics). The 2024–2025 expansion of the code to cover AI-generated content sourcing is in active legislative consideration.

Malaysian advertising and digital marketing. The Malaysian advertising market is smaller (approximately RM 3–4 billion in 2024) but has grown rapidly through 2018–2024 with the broader digitalisation. The market is dominated by the global platforms (Google, Meta, TikTok), with significant local-language complications (Bahasa Malaysia, Mandarin, Tamil) that produce different deployment patterns from the English-language-dominant markets. Local and network agencies (IPG Mediabrands Malaysia, Dentsu Malaysia, and a range of independent local shops) compete on local-context-and-language expertise; the 2024–2025 generative-AI extension has been adopted broadly across the agency landscape, with applications particularly in vernacular-language ad-copy generation.

Australian media — streaming and content production. Australian content production is substantial relative to the country’s population, with strong sectors in film (Pacific-region production hub for major Hollywood productions; the Gold Coast and Sydney/NSW production cluster), television, and increasingly streaming-original content. The major Australian streaming platforms include Stan (Nine Entertainment Co.), Foxtel/Binge, Kayo Sports, and SBS On Demand alongside the global services. The 2024 trajectory has produced specific Australian-content investments by Netflix, Amazon, and Apple TV+ that have anchored the local production ecosystem. AI deployment in Australian content production has followed the global pattern; the Hollywood strike provisions (Section 10.7) are largely incorporated into Australian production agreements via the major US studios that fund much of the production.

Malaysian media context. Malaysia's media context combines a multi-language local market (Malay, Mandarin, Tamil, English) with regional Southeast Asian content production. Astro is the dominant pay-TV operator; Media Prima operates the major free-to-air channels. The streaming market is dominated by global platforms (Netflix, Disney+, Prime Video) alongside regional services (iflix in an earlier era, Viu, the Astro Go service). AI deployment in Malaysian media has been concentrated at the platform level rather than in local production; the local production scale does not yet support substantial in-house AI investment.

Australian energy. Australia’s energy transition is one of the most-significant in the OECD: high renewable-resource potential (solar; wind; some emerging green-hydrogen capacity); coal-heavy legacy generation that is being phased out; AEMO (Australian Energy Market Operator) as the relatively-sophisticated market operator; substantial state-government policy variation. AI deployment in Australian energy operations has been substantial, with AEMO’s grid-management infrastructure incorporating AI-based forecasting and scheduling. The Australian green-hydrogen push (Fortescue Future Industries; Origin Energy; multiple state-government programmes) is a notable AI-deployment frontier, with the operations of green-hydrogen infrastructure benefiting from ML-based optimisation.

Malaysian energy. Malaysia’s energy transition is underway but slower than Australia’s. Tenaga Nasional Berhad (TNB) is the dominant utility; Petronas is the dominant oil-and-gas operator; the broader energy mix remains coal-and-gas-dominated with growing renewable share. The National Energy Transition Roadmap (2023) is the country’s framework for transition through 2050. AI deployment in Malaysian energy operations is at mid-scale; TNB’s grid-management AI investments have been substantial through 2020–2025; Petronas operates substantial AI capability in its upstream-and-downstream operations including some of the most-sophisticated regional applications.

The data-centre context in both countries. Both Australia and Malaysia have been substantial beneficiaries of the data-centre-investment wave through 2022–2026. Australia’s data-centre concentration in Sydney (with substantial expansion from AWS, Google, Microsoft, Equinix, and others through 2023–2025) reflects favourable infrastructure conditions including international connectivity. Malaysia’s Johor data-centre cluster — anchored by significant 2023–2025 investments from Microsoft (USD 2.2 billion), AWS, ByteDance, NVIDIA-backed capacity, and many others — has emerged as one of Southeast Asia’s primary new data-centre concentrations, partly driven by Malaysia’s relatively favourable energy economics, water availability, and geographic position adjacent to Singapore. The data-centre concentration produces both economic opportunity and infrastructure stress in both countries; the policy responses are still developing.

10.14 The 2026 frontier and cross-sector convergences

The marketing, media, and energy sectors operate at different paces and on different time-constants, but the 2024–2026 period has produced cross-sector convergences that will define the next phase. Five trajectories warrant attention.

Trajectory 1 — the regulatory-environment maturation. Each of the three sectors has been substantially shaped by regulatory action through 2024–2026: marketing by the ATT and DMA-DSA-AI-Act sequence; media by the Hollywood strikes and the IP litigation; energy by the data-centre permitting and grid-stress responses. The trajectory through 2026–2030 will see further regulatory maturation. The EU AI Act’s full implementation through 2025–2027 will produce particular implications across all three sectors. The US regulatory trajectory is less predictable; the post-2024-election direction depends substantially on administration-level priorities.

Trajectory 2 — the data-and-rights resolution. The unresolved foundation-model-training-data legal landscape — the NYT v. OpenAI case; the RIAA cases against Suno-Udio; the visual-art rights cases; the Hollywood union provisions — will work through the courts and licensing markets through 2026–2028. The eventual resolution will substantially shape what training data is legitimately available and on what economic terms. The structural implications cross sectors: media-content rights affect generative video deployment; news-content rights affect AI-supported marketing; the overall licensing market emerges as a substantial new economic layer.

Trajectory 3 — the energy-AI infrastructure interaction. The data-centre energy intensity dynamic (Section 10.12) is among the most-binding contemporary constraints. The trajectory through 2026–2030 will see substantial generation-capacity expansion (nuclear; renewable; some natural-gas additions) addressing the supply side; per-unit-of-AI efficiency improvements addressing the demand side; and likely substantial geographic redistribution of AI infrastructure toward lower-cost-of-electricity regions (the Malaysian Johor cluster is one example; specific Nordic, Canadian, and Middle Eastern locations are others). The interaction with broader climate-and-energy policy will be central: ambitious AI-deployment trajectories that are not aligned with climate-and-energy capacity will face increasing political and regulatory friction.

Trajectory 4 — the labour-displacement question across sectors. Each sector has labour-AI dynamics with distinct patterns. The Hollywood strike provisions in media (Section 10.7) are the most organised labour response. Marketing has been less organised but is seeing displacement in specific job categories (junior creatives, mid-level analysts) that aggregates to substantial economic impact. Energy has the smallest direct labour-displacement effect at this stage, with operations workforces relatively stable. The cross-sector pattern resembles the broader Acemoglu and Restrepo (2020) analysis: labour effects are heterogeneous, concentrated in specific roles and geographies, and not well captured by aggregate labour-market metrics.

Trajectory 5 — the cross-sector convergence point. The longest-run trajectory observable in 2026 is the convergence of the three sectors at AI-infrastructure scale. Marketing AI requires compute and data; media AI requires both plus rights infrastructure; energy AI requires sensor and operational integration with the same compute-and-data foundation. The convergence point is AI infrastructure as critical infrastructure — the foundation-model layer, the cloud-and-compute layer, the data-and-rights layer, and the energy-and-physical-infrastructure layer all operating as integrated infrastructure that the three sectors (and many others) build upon. The structural implications for industrial policy, antitrust, and economic policy more broadly are substantial; they will define the policy conversation through 2026–2030.

The three sectors of marketing, media, and energy have produced some of the contemporary AI period’s most-detailed cases — both successes and cautionary tales. The integration of these cases with the analytical frameworks of Parts I–IV is what produces graduate-level competence in the field. The retail/finance/healthcare/manufacturing patterns of preceding chapters and the marketing/media/energy patterns of this chapter together cover the major commercial-deployment domains; subsequent chapters cover the remaining sectors and the cross-cutting frontier topics.

References for this chapter

Programmatic advertising and digital marketing

  • Ghose, A. (2017). TAP: Unlocking the Mobile Economy. MIT Press.
  • Goldfarb, A. and Tucker, C. (2019). Digital economics. Journal of Economic Literature 57(1): 3–43.
  • Alphabet Inc. (2024). Annual report.
  • Meta Platforms Inc. (2024). Annual report.

Apple ATT and the privacy disruption

  • Apple Inc. (2021). App Tracking Transparency framework documentation.
  • Meta Platforms Inc. (2022). Q4 2021 Earnings Call. February 2022.
  • UK Competition and Markets Authority (2022, 2024). Privacy Sandbox investigation reports.
  • European Commission (2018, 2023, 2024). General Data Protection Regulation; Digital Markets Act; Digital Services Act; AI Act.

Cambridge Analytica

  • Cadwalladr, C. and Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian and The Observer, 17 March 2018.
  • Rosenberg, M., Confessore, N., and Cadwalladr, C. (2018). How Trump consultants exploited the Facebook data of millions. The New York Times, 17 March 2018.
  • US Federal Trade Commission (2019). Settlement with Facebook Inc., July 2019.
  • US Securities and Exchange Commission (2019). Settlement with Facebook Inc., July 2019.

Streaming and recommendation

  • Gomez-Uribe, C. A. and Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems 6(4).
  • Netflix Inc. (2024). Annual report.
  • Spotify Technology S.A. (2024). Annual report.
  • ByteDance Ltd. and TikTok (2024). Public communications and US regulatory disclosures.

Generative video

  • OpenAI (2024). Sora announcement and technical disclosures. February 2024 and December 2024.
  • Google DeepMind (2024). Veo announcement and technical materials.
  • Runway AI Inc. (2023, 2024, 2025). Gen-1, Gen-2, Gen-3, Gen-4 product documentation.
  • Pika Labs (2023, 2024). Product launches and corporate communications.

Hollywood strikes

  • Writers Guild of America (2023). 2023 MBA Memorandum of Agreement.
  • Screen Actors Guild–American Federation of Television and Radio Artists (2023). 2023 TV/Theatrical Memorandum of Agreement.
  • The Hollywood Reporter, Variety, Deadline (2023, 2024). Coverage of WGA and SAG-AFTRA negotiations and AI-related provisions.
  • Stoll, J. and Chmielewski, D. (2023). The strike that defined the AI era of Hollywood. Reuters.

Music AI and rights

  • Recording Industry Association of America (2024). Suno and Udio complaints, June 2024.
  • Suno Inc. (2024). Series B announcement and product documentation.
  • Udio (2024). Public launch and Series A communications.
  • Authors Guild (2023, 2024). Statements on AI training data.

News and journalism

  • The New York Times Company v. OpenAI Inc. and Microsoft Corp. (2023). Complaint, S.D.N.Y. December 2023.
  • Futurism (2023). Sports Illustrated AI-generated content investigation, November 2023.
  • Stanford Internet Observatory (2024). 2024 election integrity post-mortem.
  • US Federal Communications Commission (2024). Declaratory ruling on AI-generated voice in robocalls, February 2024.

Energy AI

  • DeepMind / Evans, R. and Gao, J. (2016). DeepMind AI reduces Google data centre cooling bill by 40%. DeepMind blog post, July 2016.
  • Witherspoon, S. and Strbac, G. (2019). Wind power forecasting with deep learning. DeepMind / Google.
  • International Energy Agency (2024). Electricity 2024.
  • Goldman Sachs Equity Research (2024). AI/data centre electricity demand outlook.
  • Tesla Inc. (2024). Autobidder product documentation.

Hyperscaler nuclear procurement

  • Microsoft Corporation (2024). Three Mile Island Unit 1 power-purchase agreement announcement, September 2024.
  • Amazon Web Services (2024). Talen Energy data-centre acquisition; subsequent nuclear partnerships.
  • Google LLC (2024). Kairos Power small modular reactor agreement, October 2024.

Australian and Malaysian context

  • Australian Competition and Consumer Commission (2021–2024). News Media Bargaining Code reports.
  • Australian Energy Market Operator (2024). Annual report.
  • Tenaga Nasional Berhad (2024). Annual report.
  • Petroliam Nasional Berhad (Petronas) (2024). Annual report.
  • Malaysia Investment Development Authority (2024). Johor Data Centre Cluster outlook.
  • Microsoft Corporation (2024). Malaysia data-centre investment announcements.

Labour economics

  • Acemoglu, D. and Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy 128(6): 2188–2244.