The Forecasts: Read Carefully, Not in Summary
Three major employment forecasts are regularly cited in AI coverage. They are frequently misquoted — not because journalists are dishonest, but because the methodological nuances are important and don't survive summarization. Reading them correctly changes the picture significantly.
What all three forecasts share: the displacement will not be uniform. Middle-skill cognitive work — legal research, financial analysis, basic software development, content creation, data entry — faces more displacement than manual work requiring physical presence and dexterity (construction, elder care, plumbing) or work requiring genuine social and emotional intelligence (therapy, high-stakes negotiation, crisis management). The distribution of displacement by job type and geography matters more than the aggregate number.
Source: Frey & Osborne, Oxford University (2013); McKinsey Global Institute (2017, updated 2023); Goldman Sachs Economic Research — "The Potentially Large Effects of AI on Economic Growth" (March 2023)
What History Actually Shows: ATMs, Radiologists, and the Loom
The ATM is the cleanest historical data point on automation and employment. Beginning in the 1970s, ATMs were deployed across the U.S. banking system with the explicit purpose of replacing human tellers for routine transactions. Economists forecast significant teller job losses. What happened: ATM deployment reduced the cost per bank branch, making it economical to open more branches, which required more tellers. By 2000, there were more bank tellers than before ATMs existed. The tasks changed; the aggregate employment level didn't fall.
Radiologists were identified as a primary AI displacement target beginning around 2017, when deep learning image recognition systems demonstrated performance matching or exceeding human radiologists on certain diagnostic tasks (diabetic retinopathy, specific tumor detection). Radiology residency was predicted to become economically irrational within a decade. The actual outcome: radiology employment has continued growing. AI tools augment radiologists rather than replace them — they handle first-pass screening, flagging anomalies for human review, and expanding the volume of imaging that can be interpreted. The displacement has not occurred on any projected timeline.
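The augmentation workflow described above, first-pass screening by a model with flagged cases routed to a human reader, can be sketched in a few lines. Everything here is hypothetical: the function names, the scores, and the threshold are illustrative stand-ins, not any deployed system's interface.

```python
# Minimal sketch of AI-assisted triage: the model scores each study,
# and anything at or above the threshold goes to a radiologist first.
# All names, scores, and the threshold are hypothetical.

def triage(studies, score_fn, flag_threshold=0.3):
    """Split studies into flagged (priority human review) and routine."""
    flagged, routine = [], []
    for study in studies:
        (flagged if score_fn(study) >= flag_threshold else routine).append(study)
    return flagged, routine

# Toy usage: these scores stand in for a real image model's output.
scores = {"case-1": 0.92, "case-2": 0.05, "case-3": 0.40}
flagged, routine = triage(scores, scores.get)
print(flagged)  # → ['case-1', 'case-3']
```

The design point is that the human stays in the loop for every flagged case; the model changes the ordering and volume of work rather than replacing the reader.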
The Luddites — whose name has become synonymous with technophobia — were skilled textile workers in early 19th-century England who broke mechanical looms. Economists have largely vindicated their economic analysis: the looms did eliminate their skilled jobs, incomes fell substantially for that generation, and the new factory jobs that eventually created aggregate welfare gains arrived too late for them. The aggregate long-run benefits of industrialization are real. The distributional short-run harm to specific workers was also real. Both things are true simultaneously. The honest assessment of AI applies the same nuance.
Source: Bessen — "How Computers Automate Away Middle-Class Jobs" (2015); Topol et al. — "High-Performance Medicine: AI and Clinical Care," Nature Medicine 2019; Thompson — "The Making of the English Working Class" (1963) — Luddite economic context
The Power Concentration Question
Employment dominates AI coverage. The power concentration question receives less attention and may be more consequential. The economics of large language models exhibit extreme concentration dynamics: training a frontier foundation model requires compute, data, and talent at a scale only a handful of organizations can access. This is not an accident — it is the structural result of how the technology works.
Training GPT-4, Claude 3, Gemini Ultra, or comparable frontier models requires on the order of 10,000–25,000+ high-end GPUs running for weeks to months. Nvidia controls approximately 80–90% of the AI training hardware market (H100 and A100 GPUs) — a near-monopoly reflected in its $2.2 trillion market capitalization as of early 2024. There is no equivalent competitor at the frontier: AMD's MI300X chips are real but account for a small fraction of deployments. The physical infrastructure of AI is bottlenecked through a single company.
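A back-of-envelope calculation makes the scale concrete. Using the GPU counts and durations from the text, and an assumed hourly rate (the $2/GPU-hour figure below is an illustrative assumption, not a quoted cloud price), the cloud-equivalent cost of a single run lands in the tens to hundreds of millions of dollars:

```python
# Rough cloud-equivalent cost of a frontier training run.
# GPU counts and durations come from the text above; the hourly
# rate is an illustrative assumption, not a quoted price.

def training_cost(gpus, days, usd_per_gpu_hour=2.0):
    """GPUs x hours x hourly rate."""
    return gpus * days * 24 * usd_per_gpu_hour

low = training_cost(10_000, days=30)   # smaller run, one month
high = training_cost(25_000, days=90)  # larger run, one quarter
print(f"${low/1e6:.0f}M - ${high/1e6:.0f}M")  # → $14M - $108M
```

At that price floor, the set of organizations that can even attempt a frontier run is small by construction, which is the concentration argument in miniature.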
| Entity | Position in AI Stack | Concentration Level |
|---|---|---|
| Nvidia | Training hardware (H100/A100/B100 GPUs) | ~80–90% market share |
| TSMC | Advanced chip fabrication (≤5nm) | ~90%+ at frontier nodes |
| OpenAI + Google + Microsoft + Meta + Anthropic | Foundation model development | Most frontier compute |
| Amazon / Google / Microsoft | Cloud AI infrastructure (AWS, GCP, Azure) | ~65% of global cloud |
| Common Crawl / internet data | Training data | Relatively open |
The FTC's 2024 report on AI markets documented the concentration explicitly: a small number of companies control the dominant share of the compute, data, and distribution infrastructure that determines who can build competitive AI systems. The report raised concerns about whether this concentration could entrench incumbents and prevent competitive market dynamics — the same dynamics that would normally drive prices down and capability improvements to more users.
The Microsoft-OpenAI relationship is the clearest example of the power dynamics at work. Microsoft invested $13 billion in OpenAI and integrated GPT-4 across its product suite — Office, Bing, Azure, GitHub Copilot, Teams. The competitive advantage of this partnership extends across Microsoft's entire product portfolio. OpenAI receives the compute and distribution; Microsoft receives AI integration across the most widely used enterprise software stack in the world. Whether this constitutes an anticompetitive arrangement is the subject of active FTC and EU regulatory review.
Source: FTC — "Generative AI Products and Services" report (2024); Microsoft 10-K (2023) — OpenAI investment disclosure; Nvidia annual report 2023; TSMC annual report 2023
// The employment question asks: will AI take my job?
// The power question asks: will AI concentrate power?
// One of these has a historical precedent of resolution.
// The other does not.
The Training Data Legal Vacuum
Foundation models are trained on internet-scale datasets — Common Crawl, Books3, The Pile, and proprietary scraped data. The legal question of whether training on copyrighted text and images without permission or compensation constitutes infringement is unresolved and being actively litigated in multiple courts simultaneously.
The New York Times filed suit against OpenAI and Microsoft in December 2023, alleging that GPT-4 was trained on millions of Times articles without authorization and that the model can reproduce Times content in ways that substitute for the original. The complaint includes examples of GPT-4 producing near-verbatim paragraphs from Times articles — evidence that training data can be memorized and retrieved, not merely generalized from. The case is pending as of early 2026.
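A simple way to test for the kind of near-verbatim reproduction the complaint describes is to count long word n-grams shared between a source article and a model's output: generalization rarely produces long exact overlaps, memorization does. This is an illustrative check with toy sentences, not the methodology used in the litigation.

```python
# Detect near-verbatim overlap via shared long word n-grams.
# Illustrative only; the toy sentences below are invented.

def shared_ngrams(source, output, n=8):
    """Return the word n-grams that appear in both texts."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

article = ("the city council voted on tuesday to approve "
           "the new budget for the coming fiscal year")
generation = ("reports said the city council voted on tuesday "
              "to approve the new budget despite objections")
overlap = shared_ngrams(article, generation, n=8)
print(len(overlap))  # → 4 shared 8-grams: a memorization signal
```

Paraphrase produces few or no shared 8-grams; copied passages produce runs of them, which is why long-n-gram overlap is a common memorization probe.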
Getty Images filed suit against Stability AI in 2023, alleging that Stable Diffusion was trained on Getty's licensed image library without permission and generates images that incorporate Getty watermarks — visual evidence that the specific training data is encoded in the model weights. Multiple class action suits by authors, musicians, and visual artists are working through courts in the U.S. and UK simultaneously.
The resolution of these cases will determine whether AI companies must license training data (creating substantial new costs and possibly requiring model retraining) or whether training constitutes "transformative use" under fair use doctrine (permitting continued current practice). Either outcome reshapes the economics of frontier model development substantially.
Source: NYT v. OpenAI et al. — S.D.N.Y. complaint, December 2023; Getty Images v. Stability AI — D. Del. complaint, February 2023; Andersen v. Stability AI — class action, N.D. Cal.; Authors Guild v. OpenAI — class action
The Capability-Alignment Gap
AI capabilities are advancing on a measurable trajectory. AI systems that can pass the bar exam, write production code, conduct graduate-level research, and produce professional-quality images and video exist now. The question of whether these systems are aligned — whether they reliably pursue the goals humans actually want rather than proxy goals they were trained toward — is a genuine open problem in technical AI safety, not science fiction.
The alignment problem is not theoretical: LLMs hallucinate — they produce confident, detailed, plausible-sounding false statements. This behavior exists because models are optimized for plausibility, not accuracy. The more capable the model, the more convincing the hallucinations. Deployment at scale means millions of people receive authoritative-sounding incorrect information daily. This is the alignment gap at current capability levels.
The longer-horizon alignment question — whether systems approaching or exceeding human-level reasoning would pursue human-beneficial goals if their training incentives diverged from those goals — is the subject of serious research at Anthropic, DeepMind, and OpenAI's safety teams. It remains unsolved. The concern is not that AI is malicious; it is that optimization processes reliably produce systems that are very good at optimizing for their training objective, which may not be exactly what we intended. At current capability levels this produces hallucinations. At higher capability levels the failure modes scale accordingly.
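The core mechanism, an optimizer that faithfully maximizes a proxy objective that diverges from the intended one, can be shown with a deliberately tiny example. Everything below is schematic: real training objectives and outputs are vastly more complex, and the "confidence" score is an invented stand-in for whatever the proxy actually rewards.

```python
# Toy illustration of objective misspecification: the optimizer is
# given a proxy ("sound confident") instead of the intended goal
# ("be accurate"), and dutifully picks the proxy-optimal answer.
# Entirely schematic; all values here are invented.

candidates = [
    {"text": "I'm not sure",             "confidence": 0.20, "accurate": True},
    {"text": "Definitely X, certainly.", "confidence": 0.95, "accurate": False},
]

def proxy_score(c):
    # What the training signal actually rewards in this toy setup.
    return c["confidence"]

best = max(candidates, key=proxy_score)
print(best["text"], best["accurate"])  # → Definitely X, certainly. False
```

The optimizer did nothing wrong by its own lights; the failure is in the gap between the proxy and the intent, which is exactly the gap that scales with capability.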
Source: Anthropic — "Core Views on AI Safety" (2023); Bender et al. — "Stochastic Parrots," FAccT 2021; Russell — "Human Compatible" (2019) — misalignment problem formulation
Employment forecasts range from 47% of jobs "at risk" to 300M full-time-equivalent positions automatable — all with important methodological caveats that survive poorly in headlines. History shows automation transforms jobs more than it eliminates them in aggregate, but harms specific workers and communities in the transition. Nvidia controls 80%+ of AI training hardware. Five labs account for most frontier compute. Training data copyright is actively litigated and unresolved. The alignment gap is real and scales with capability. The power concentration story is getting less coverage than the employment story. It deserves more.