Data
Goldman Sachs 2023: AI could automate tasks equivalent to 300M full-time jobs globally — 18% of work
Nvidia controls 80%+ of AI training hardware (H100/A100 GPUs) — monopoly position in foundation model compute
OpenAI, Google DeepMind, Meta AI, Anthropic, and Microsoft account for most foundation model compute globally
ATM deployment (1970s) did NOT reduce bank teller employment — it reduced cost per branch, and banks opened more branches
Copyright lawsuits: NYT v. OpenAI (2023), Getty Images v. Stability AI — training data ownership unresolved
Radiology: identified as early AI displacement target in 2017 — radiology employment has continued growing since
📁 Red String · Case #0401 · Futures · Part 1 of 3
Goldman Sachs 2023 · Oxford 2013 · McKinsey 2023 · FTC Reports · SEC Filings · Court Records

AI AND THE
HUMAN FUTURE:
THE HONEST ASSESSMENT

300 million jobs automatable. Nvidia controls 80%+ of training hardware. The five biggest AI labs account for most foundation model compute globally. The employment question gets the coverage. The power concentration question doesn't. Both matter — but not equally.

By R. Connell · Red String
300M automatable jobs · 80% Nvidia GPU market share · 14 primary sources
METHODOLOGY: Employment forecasts sourced to original papers with methodology noted. AI market concentration from public revenue/compute data, SEC filings, and FTC analysis. Legal case claims from court dockets. Historical automation analogies sourced to economic research. Where forecasts conflict, both sides are presented with their methodological basis.

The Forecasts: Read Carefully, Not in Summary

Three major employment forecasts are regularly cited in AI coverage. They are frequently misquoted — not because journalists are dishonest, but because the methodological nuances are important and don't survive summarization. Reading them correctly changes the picture significantly.

Oxford 2013
47%
U.S. jobs classified as "high risk" of automation within 10-20 years
CAVEAT: "High risk" means the tasks are susceptible to automation — not that jobs will be eliminated. The methodology analyzed 702 occupations by task composition, not employment outcomes. Published 2013, before LLMs.
McKinsey 2023
15–30%
Work activities automatable by 2030. Also projected AI could add $2.6T–4.4T annually to global economy.
CAVEAT: "Activities" not "jobs" — most jobs contain multiple activities; automation of one activity changes, not eliminates, the role. McKinsey also projected 20M+ new AI-related job categories.
Goldman Sachs 2023
300M
Full-time-equivalent jobs whose tasks could be automated by generative AI globally (18% of work)
CAVEAT: "Could be automated" ≠ "will be automated." Same report projected AI adds 7pp to global GDP (~$7 trillion) and noted historical automation increases productivity while creating new job categories.

What all three forecasts share: the displacement will not be uniform. Middle-skill cognitive work — legal research, financial analysis, basic software development, content creation, data entry — faces more displacement than manual work requiring physical presence and dexterity (construction, elder care, plumbing) or work requiring genuine social and emotional intelligence (therapy, high-stakes negotiation, crisis management). The distribution of displacement by job type and geography matters more than the aggregate number.

Source: Frey & Osborne, Oxford University 2013; McKinsey Global Institute 2017, updated 2023; Goldman Sachs Economic Research — "The Potentially Large Effects of AI on Economic Growth" March 2023
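One way to read the Goldman number is to back out the baseline it implies. A minimal arithmetic check, assuming the report's 300M-FTE and 18%-of-work figures describe the same quantity:

```python
# Back-of-envelope check on the Goldman Sachs 2023 headline figures.
# Assumption: 300M FTE jobs corresponds exactly to 18% of global work.
automatable_fte = 300e6  # full-time-equivalent jobs exposed to automation
share_of_work = 0.18     # fraction of total global work those FTEs represent

implied_global_fte = automatable_fte / share_of_work
print(f"Implied global workforce: {implied_global_fte / 1e9:.2f} billion FTEs")
# → Implied global workforce: 1.67 billion FTEs
```

The two headline figures are internally consistent: they imply a baseline of roughly 1.67 billion full-time-equivalent workers, which is the denominator any "X% of jobs" claim should be checked against.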

What History Actually Shows: ATMs, Radiologists, and the Loom

The ATM is the cleanest historical data point on automation and employment. Beginning in the 1970s, ATMs were deployed across the U.S. banking system with the explicit purpose of replacing human tellers for routine transactions. Economists forecast significant teller job losses. What happened: ATM deployment reduced the cost per bank branch, making it economical to open more branches, which required more tellers. By 2000, there were more bank tellers than before ATMs existed. The tasks changed; the aggregate employment level didn't fall.

Radiologists were identified as a primary AI displacement target beginning around 2017, when deep learning image recognition systems demonstrated performance matching or exceeding human radiologists on certain diagnostic tasks (diabetic retinopathy, specific tumor detection). Radiology residency was predicted to become economically irrational within a decade. The actual outcome: radiology employment has continued growing. AI tools augment radiologists rather than replace them — they handle first-pass screening, flagging anomalies for human review, and expanding the volume of imaging that can be interpreted. The displacement has not occurred on any projected timeline.

The Luddites — whose name has become synonymous with technophobia — were skilled textile workers in early 19th-century England who broke mechanical looms. Economists have largely vindicated their economic analysis: the looms did eliminate their skilled jobs, incomes fell substantially for that generation, and the new factory jobs that eventually created aggregate welfare gains arrived too late for them. The aggregate long-run benefits of industrialization are real. The distributional short-run harm to specific workers was also real. Both things are true simultaneously. The honest assessment of AI applies the same nuance.

Source: Bessen — "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth" (2015); Topol — "High-Performance Medicine: The Convergence of Human and Artificial Intelligence," Nature Medicine 2019; Thompson — "The Making of the English Working Class" (1963) — Luddite economic context
Legal Research Automation — The Law Firm Case Study
Documented Outcome
AI legal research tools (Casetext, Harvey AI, LexisNexis Protégé) now perform first-pass research in minutes that previously took junior associates hours. Aggregate law firm employment has not declined. What has declined is entry-level associate hiring — specifically, the volume of first-year associate work that was billed at associate rates to clients. This is a distributional shift within the profession (billing less first-year work, hiring fewer associates) rather than aggregate displacement. Law school enrollment has not declined. The people most affected are graduating law students entering a market with fewer entry-level positions — not the existing legal workforce.

The Power Concentration Question

The employment coverage dominates AI discourse. The power concentration question receives less attention and may be more consequential. The economics of large language models exhibit extreme concentration dynamics: training a frontier foundation model requires compute, data, and talent at a scale that only a handful of organizations can access. This is not an accident — it is the structural result of how the technology works.

Training GPT-4, Claude 3, Gemini Ultra, or comparable frontier models requires on the order of 10,000–25,000+ high-end GPUs running for weeks to months. Nvidia controls approximately 80–90% of the AI training hardware market (H100, A100 GPUs) — a near-monopoly reflected in its $2.2 trillion market capitalization as of early 2024. There is no equivalent competitor at the frontier. AMD's MI300X chips are real but represent a fraction of deployments. The physical infrastructure of AI is bottlenecked through a single company.
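The scale of those training runs can be sketched from the article's own ranges. The numbers below are illustrative midpoints, and the cloud rate is an assumption, not a quoted price:

```python
# Rough scale of one frontier training run, using the 10,000–25,000 GPU
# and weeks-to-months ranges cited above. Midpoints are illustrative only.
gpus = 16_000             # H100-class accelerators (assumed midpoint)
days = 90                 # "weeks to months" (assumed midpoint)
gpu_hours = gpus * days * 24

usd_per_gpu_hour = 2.50   # assumed cloud rate; real contract prices vary widely
cost = gpu_hours * usd_per_gpu_hour
print(f"{gpu_hours / 1e6:.1f}M GPU-hours, ~${cost / 1e6:.0f}M at the assumed rate")
# → 34.6M GPU-hours, ~$86M at the assumed rate
```

Even at these sketch numbers, the entry ticket is tens of millions of GPU-hours, which is why only a handful of entities operate at each layer of the stack.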

Entity | Position in AI Stack | Concentration Level
Nvidia | Training hardware (H100/A100/B100 GPUs) | ~80–90% market share
TSMC | Advanced chip fabrication (≤5nm) | ~90%+ at frontier nodes
OpenAI + Google + Microsoft + Meta + Anthropic | Foundation model development | Most frontier compute
Amazon / Google / Microsoft | Cloud AI infrastructure (AWS, GCP, Azure) | ~65% of global cloud
Common Crawl / internet data | Training data | Relatively open

The FTC's 2024 report on AI markets documented the concentration explicitly: a small number of companies control the dominant share of the compute, data, and distribution infrastructure that determines who can build competitive AI systems. The report raised concerns that this concentration could entrench incumbents and prevent competitive market dynamics — the forces that would normally drive prices down and spread capability improvements to more users.

The Microsoft-OpenAI relationship is the clearest example of the power dynamics at work. Microsoft invested $13 billion in OpenAI and integrated GPT-4 across its product suite — Office, Bing, Azure, GitHub Copilot, Teams. The competitive advantage of this partnership extends across Microsoft's entire product portfolio. OpenAI receives the compute and distribution; Microsoft receives AI integration across the most widely used enterprise software stack in the world. Whether this constitutes an anticompetitive arrangement is the subject of active FTC and EU regulatory review.

Source: FTC — "Generative AI Products and Services" report (2024); Microsoft 10-K (2023) — OpenAI investment disclosure; Nvidia annual report 2023; TSMC annual report 2023

// The employment question asks: will AI take my job?
// The power question asks: will AI concentrate power?
// One of these has a historical precedent of resolution.
// The other does not.

The Training Data Question

Foundation models are trained on internet-scale datasets — Common Crawl, Books3, The Pile, and proprietary scraped data. The legal question of whether training on copyrighted text and images without permission or compensation constitutes infringement is unresolved and being actively litigated in multiple courts simultaneously.

The New York Times filed suit against OpenAI and Microsoft in December 2023, alleging that GPT-4 was trained on millions of Times articles without authorization, and that the model can reproduce Times content in ways that substitute for the original. The Times included in its complaint examples of GPT-4 producing near-verbatim paragraphs from Times articles — demonstrating that the training data is retrievable, not merely learned from. The case is pending as of early 2026.

Getty Images filed suit against Stability AI in 2023, alleging that Stable Diffusion was trained on Getty's licensed image library without permission and generates images that incorporate Getty watermarks — visual evidence that the specific training data is encoded in the model weights. Multiple class action suits by authors, musicians, and visual artists are working through courts in the U.S. and UK simultaneously.

The resolution of these cases will determine whether AI companies must license training data (creating substantial new costs and possibly requiring model retraining) or whether training constitutes "transformative use" under fair use doctrine (permitting continued current practice). Either outcome reshapes the economics of frontier model development substantially.

Source: NYT v. OpenAI et al. — SDNY complaint, December 2023; Getty Images v. Stability AI — D. Del. complaint, February 2023; Andersen v. Stability AI — class action, N.D. Cal.; Authors Guild v. OpenAI — class action

The Capability-Alignment Gap

AI capabilities are advancing on a measurable trajectory: systems that can pass the bar exam, write production code, conduct graduate-level research, and produce professional-quality images and video exist now. The question of whether these systems are aligned — whether they reliably pursue the goals humans actually want rather than proxy goals they were trained toward — is a genuine open problem in technical AI safety, not science fiction.

The alignment problem is not theoretical: LLMs hallucinate — they produce confident, detailed, plausible-sounding false statements. This behavior exists because models are optimized for plausibility, not accuracy. The more capable the model, the more convincing the hallucinations. Deployment at scale means millions of people receive authoritative-sounding incorrect information daily. This is the alignment gap at current capability levels.

The longer-horizon alignment question — whether systems approaching or exceeding human-level reasoning would pursue human-beneficial goals if their training incentives diverged from those goals — is the subject of serious research at Anthropic, DeepMind, and OpenAI's safety teams. It remains unsolved. The concern is not that AI is malicious; it is that optimization processes reliably produce systems that are very good at optimizing for their training objective, which may not be exactly what we intended. At current capability levels this produces hallucinations. At higher capability levels the failure modes scale accordingly.

Source: Anthropic — "Core Views on AI Safety" (2023); Bender et al. — "Stochastic Parrots," FAccT 2021; Russell — "Human Compatible" (2019) — misalignment problem formulation
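The gap between a training objective and the intended goal can be illustrated with a toy selection experiment: a Goodhart-style sketch, not a model of any real training run. Every name and number here is invented for illustration.

```python
import random

random.seed(0)

# Toy illustration of proxy-objective divergence (a Goodhart-style effect),
# not a model of any real system. Each "policy" has a true-value component
# and a spurious component; the proxy rewards both, the true objective
# only the first.
policies = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]

def proxy(p):
    """What the optimizer sees (e.g. 'sounds plausible')."""
    return p[0] + p[1]

def true_objective(p):
    """What we actually wanted (e.g. 'is accurate')."""
    return p[0]

best_by_proxy = max(policies, key=proxy)
best_by_truth = max(policies, key=true_objective)

# Hard selection on the proxy surfaces policies with a large spurious
# component, so the proxy winner underperforms on the true objective.
print(f"true score of proxy winner: {true_objective(best_by_proxy):.2f}")
print(f"true score of truth winner: {true_objective(best_by_truth):.2f}")
```

Selecting hard on the proxy reliably surfaces candidates that exploit the spurious component, the same structural pattern by which a model optimized for plausibility produces confident falsehoods.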
◆ The Honest Summary

Employment forecasts range from 47% of jobs "at risk" to 300M equivalent positions automatable — all with important methodological caveats that survive poorly in headlines. History shows automation transforms jobs more than it eliminates them in aggregate, but harms specific workers and communities in the transition. Nvidia controls 80%+ of AI training hardware. Five labs account for most frontier compute. Training data copyright is actively litigated and unresolved. The alignment gap is real and scales with capability. The power concentration story is getting less coverage than the employment story. It deserves more.

Primary Sources
[1]
Frey & Osborne — "The Future of Employment," Oxford University (2013). 702-occupation analysis, 47% high-risk classification. Most cited AI employment study.
[2]
McKinsey Global Institute — "The Economic Potential of Generative AI" (2023). 15–30% work activities automatable by 2030. $2.6T–$4.4T annual value addition.
[3]
Goldman Sachs Economic Research — "The Potentially Large Effects of AI on Economic Growth" (March 2023). 300M job-equivalent automation; 7pp GDP addition.
[4]
Topol — "High-Performance Medicine: The Convergence of Human and Artificial Intelligence," Nature Medicine (2019). AI in radiology and diagnostics; augmentation vs replacement outcomes.
[5]
Bessen, J. — "Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth" (2015). ATM/bank teller employment data. Automation increases productivity, employment transforms.
[6]
FTC — "Generative AI Products and Services: Report and Order" (2024). Market concentration analysis. Compute, data, distribution bottlenecks documented.
[7]
Microsoft 10-K Annual Report (2023). $13B OpenAI investment disclosure. Azure AI infrastructure revenue reporting.
[8]
Nvidia Annual Report (2023/2024). Data center GPU revenue. Market position disclosure.
[9]
NYT v. OpenAI & Microsoft — SDNY Complaint (December 2023). Copyright infringement allegations. Near-verbatim reproduction examples.
[10]
Getty Images v. Stability AI — D. Delaware (February 2023). Image training data copyright. Watermark evidence of training data encoding.
[11]
Anthropic — "Core Views on AI Safety" (2023). Alignment problem framing. Current research priorities.
[12]
Bender et al. — "On the Dangers of Stochastic Parrots," FAccT 2021. Hallucination mechanism. Training objective vs desired behavior gap.
[13]
Russell, S. — "Human Compatible: Artificial Intelligence and the Problem of Control" (2019). Misalignment problem technical formulation. Oxford University Press.
[14]
TSMC Annual Report (2023). Advanced node fabrication market position. ≤5nm production capacity.