# Deep Analysis: What 580 Years Actually Shows
Five deep-dive analyses extending the headline findings — the spreadsheets counterfactual, two eras of institutional response, incumbent-vs-cohort decomposition, the capability–adoption gap, and the methodological appendix.
## 1. The Spreadsheets Counterfactual

*The one case where massive task exposure produced more of the occupation, not less*
In a sample of 20 disruptions spanning 580 years, a single case breaks the displacement pattern: accountants grew from ~1 million (US, 1985) to ~1.4 million (US, 2020) after Excel substituted the core numerical-calculation task. That’s not a data anomaly — it’s the empirical basis for the entire augmentation narrative.
Three structural conditions converged, and all three had to hold: demand for the occupation's output was elastic, so cheaper output meant more accounting work rather than fewer accountants (C1); the career ladder into the profession stayed intact (C2); and the occupation's core task, judgment over the numbers rather than the calculation itself, remained human (C3).
All three hold → Type 2 (Transformation + Market Expansion) pathway and employment grows. Any condition fails → drift toward Type 3 (absorbed), Type 4 (restructured), or Type 5 (eliminated).
Applied to AI: each occupation can be scored against the three conditions. The scorecard shows which roles have the structural shape to follow the spreadsheets path (Type 2: transformation + growth) and which drift toward absorption or elimination.
| Occupation | C1: Elastic demand | C2: Intact ladder | C3: Core task stays human | Predicted pathway |
|---|---|---|---|---|
| Lawyers | ✓ | ✓ | ✓ | Type 2 candidate — judgment & advocacy remain core |
| Junior developers | ? | ✓ | ✓ | Type 2 or Type 3 — hinges on software-demand elasticity |
| Radiologists | ✓ | ✓ | ✗ | Type 3 absorption — image reading *is* the core task |
| Paralegals | ✗ | ? | ✗ | Type 3 or Type 5 — document review *is* the role |
✓ condition holds · ✗ condition fails · ? uncertain / depends on external factor
This filter replaces vibes-based augmentation claims with empirical scoring. Each ISCO 3-digit group can be scored against the three conditions using existing Layer 1 (AI exposure, education distribution) and Layer 2 (Cedefop growth projections) data. The output is a predicted pattern type per occupation — augmentation candidate, absorption candidate, or displacement candidate.
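The three-condition filter can be sketched as a small scoring function. This is an illustrative sketch, not the project's actual Layer 1/Layer 2 scoring code; the occupation scores below simply transcribe the scorecard table, with `True` for ✓, `False` for ✗, and `None` for ?.

```python
# Sketch of the three-condition filter from the spreadsheets counterfactual.
# True = condition holds, False = condition fails, None = uncertain.

def predict_pathway(elastic_demand, intact_ladder, core_task_human):
    """Map the three structural conditions to a predicted pattern type."""
    conditions = (elastic_demand, intact_ladder, core_task_human)
    if all(c is True for c in conditions):
        return "Type 2 candidate (transformation + market expansion)"
    if any(c is False for c in conditions):
        return "absorption/elimination candidate (Type 3/4/5)"
    return "uncertain: hinges on the unresolved condition"

# Illustrative scores, transcribed from the scorecard above.
scorecard = {
    "Lawyers":           (True,  True,  True),
    "Junior developers": (None,  True,  True),
    "Radiologists":      (True,  True,  False),
    "Paralegals":        (False, None,  False),
}

for occupation, score in scorecard.items():
    print(f"{occupation}: {predict_pathway(*score)}")
```

Ordering matters in the sketch: a single failed condition is decisive even when another condition is uncertain, which is why Paralegals score as an elimination candidate rather than merely uncertain.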
## 2. Two Eras of Institutional Response

*Pre-1930 displacement occurred without an institutional response apparatus*
One of the cross-case metrics in this project is “institutional response lag” — the years between first measurable displacement and first major policy, regulatory, or retraining response. The metric is structurally undefined for pre-1930 disruptions. “Policy response” as an institutional category did not yet exist.
This is not a data gap. It is a structural observation about the sample itself. The 20 cases partition into two eras:
Pre-Bismarck industrial workers (1760–1880) were structurally outside any institutional response perimeter. Guild protections had been rendered obsolete by the mechanised loom and the factory system; welfare-state protections had not yet been invented. The gap closed not by extending existing institutions but by inventing new ones (Bismarck social insurance in 1883, the Factory Acts, labour law). That is the first coverage gap in the historical sample.
The current freelance/platform/cross-border-remote coverage gap is the second. The Lisbon freelancer working for a San Francisco startup is visible to the Portuguese tax system but uninterpretable: no contract-non-renewal event fires, "self-employed IT services" covers thriving and displaced workers identically, and no jurisdiction has reach to the San Francisco employer. The data exists; the capacity to read it doesn't. Historical precedent suggests this gap closes through institutional invention, not extension. But institutional response speed has compressed roughly 4–5x since Bismarck (the Platform Work Directive arrived in 2024, roughly 9–11 years after platform displacement became visible), which implies closure in 15–40 years rather than the 50–120 suggested by the pre-Bismarck baseline alone.
## 3. Incumbent vs Cohort: Aggregate Numbers Hide the Displacement Pattern

*Who actually bears the cost when a technology is institutionally buffered*
Two of the best-documented buffered disruptions in the dataset — German industrial robots (1994–2014) and US telephone operators (1917–1940) — both produce aggregate employment numbers that look benign until you decompose them by cohort. The incumbent workers protected by the institutional buffer do fine. The next cohort does not.
**+11.44 days same-employer retention, −0.13 pp foreclosed youth entry**
Per additional robot per 1,000 workers over 20 years (Dauth et al., IV estimates, p<0.01): incumbent manufacturing workers gain +11.44 additional days employed at the same employer (Table 4, Panel A, col 2), while young workers see a −0.1335 percentage-point drop in manufacturing-entry share (Table 5, col 1, p<0.05). Returnees from unemployment are unaffected. The 276,507-job aggregate manufacturing decline is driven entirely by the foreclosed next cohort, not by displacement of the current one. Mode 3 buffering via co-determination (Betriebsverfassungsgesetz 1972), works councils, and apprenticeship reassigns incumbents in-firm; the bill lands on the kids who never get hired.
**Operator employment held at the aggregate; entry collapsed for the 16–25 cohort**
At the city cutover event: operator employment falls −44.3% and all-industry employment −26.2% per cutover. But the age-gradient decomposition shows the collapse is concentrated in the 16–25 cohort: −53.6% vs −32.6% for 26+. AT&T retained incumbents via residual-task reassignment (clerks +10%, electrical engineers +13%, mechanics +7%) while youth entry into operator roles evaporated. Same pattern as the German robot case: the incumbent cohort is buffered; the next cohort is foreclosed.
Implication for AI exposure. The Brynjolfsson et al. 2024 finding of −14% entry-level employment in US IT occupations post-ChatGPT, paired with the Anthropic Economic Index showing 14% higher AI-use concentration among workers aged 25–34, is the same shape. The aggregate US tech-employment numbers don’t yet show displacement — but the entry pipeline for junior developers is contracting. The historical precedent says this is the leading indicator. Institutional buffering protects the incumbent and shifts the cost to the cohort that hasn’t entered yet. Lens 1 of the companion AI Labour Suite requires a youth-entry-rate metric for exactly this reason.
Cost distribution beyond the cohort. Dauth also finds productivity gains accrue to capital, not wages — labor productivity +0.5365 log-points per robot per 1,000 workers (Table 7, IV, p<0.05), average wages flat-to-negative. Medium-skilled workers (~75% of the German manufacturing workforce) bear the wage cost of the insider-outsider wage-restraint bargain: −€63 to −€800 per year depending on robot-exposure gradient. Mode 3 buffering is not costless. It protects the incumbent at the cost of the cohort and the wage.
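The incumbent-vs-cohort split in both cases reduces to a simple accounting identity: aggregate employment change is the sum of an incumbent component (retention minus exits) and an entry component (actual minus expected entrants). The sketch below illustrates the shape with made-up numbers; none of the figures are taken from the cited studies.

```python
# Illustrative cohort decomposition of an aggregate employment change.
# All numbers are hypothetical; the point is that a benign aggregate can
# conceal a fully buffered incumbent cohort and a collapsed entry pipeline.

def decompose(incumbent_start, incumbent_retained,
              expected_entrants, actual_entrants):
    incumbent_change = incumbent_retained - incumbent_start
    entry_shortfall = actual_entrants - expected_entrants
    aggregate_change = incumbent_change + entry_shortfall
    return incumbent_change, entry_shortfall, aggregate_change

# Buffered disruption: incumbents fully retained, entry pipeline collapses.
inc, entry, agg = decompose(
    incumbent_start=100_000, incumbent_retained=100_000,
    expected_entrants=10_000, actual_entrants=3_000,
)
print(inc, entry, agg)  # 0 -7000 -7000: the whole decline is foreclosed entry
```

Reading only `agg` reproduces the "benign aggregate" illusion; the decomposition is what reveals who bears the cost.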
## 4. The Capability–Adoption Gap: Displacement Velocity Is Bounded by Adoption, Not Capability

*Theoretical exposure doesn't tell you what will actually happen this year*
Two datasets measure AI’s reach into occupations, and they disagree by a factor of three. Eloundou et al. (OpenAI, 2023) estimates task-level capability exposure of ~94% for computer and mathematical occupations — the share of work tasks where an LLM could meaningfully reduce time. The Anthropic Economic Index (2025) measures observed AI-use coverage of ~33% — the share of actual conversations in which workers in those occupations are using Claude. The gap is not noise. It is the installation-phase Perez signal.
Historical precedent matches. Strowger filed the automatic-switching patent in 1889. The first commercial automatic exchange opened in Norfolk, Virginia in 1919 (30-year capability-to-commercial gap). The last manual exchange was retired in 1978, on Catalina Island (89-year capability-to-full-mechanization window). Between 1889 and 1919, operator employment continued to grow. Between 1919 and 1978, adoption expanded at a bounded pace set by installation cost, switching-equipment production capacity, and the organizational-complexity lag documented in Feigenbaum & Gross 2023/2025. Capability was not the binding constraint. Adoption was.
Telephone-operator mechanization windows · 9 countries · verified Apr 14, 2026
| Country | Window (yr) | Period | Role |
|---|---|---|---|
| Switzerland | 36–42 | Zurich 1917 → ~1959 | Scale outlier |
| Germany | 58 | Hildesheim 1908 → Uetze 1966 | State PTT |
| United States | 59 | Norfolk 1919 → Catalina 1978 | Private monopoly |
| Austria | 62 | Vienna 1910 → Fräulein vom Amt retired 14 Dec 1972 | 5 political regimes |
| Poland | ~63 | Łódź 1928 → Mława ~1991 | 3 political regimes |
| Spain | 64 | Balaguer 1924 → 1988 | Franco dictatorship |
| United Kingdom | ~64 | early 1910s → late 1970s | State PTT |
| France | 66 | Nice 1913 → 1979 | State PTT |
| Australia | ~74 | 1912 → mid-1980s | State PTT |
Eight of nine countries land in a 58–74 year envelope despite spanning private monopoly, state PTT, dictatorship, and multiple regime transitions within a single adoption window. Switzerland compresses to 36–42 years because of scale (a small, dense country reaches universal deployment faster), not regulatory regime. Organizational complexity dominates political regime as the adoption-window predictor.
Implication for AI exposure scoring. Use observed-exposure data (Anthropic Economic Index) as the leading signal, not theoretical-capability data (Eloundou). The gap between them is the window in which displacement is possible but not yet realised. Plan around adoption curves, not capability curves. The historical base rate says full mechanization takes decades from commercial availability, even when the technology is already technically capable of replacing the task.
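The capability-adoption gap can be expressed as a ratio. The sketch below uses the two exposure figures quoted above (Eloundou et al.'s ~94% task-level capability exposure vs the Anthropic Economic Index's ~33% observed use for computer and mathematical occupations); treating their difference as the installation-phase window is this document's framing, not a claim from either dataset.

```python
# Capability exposure vs observed adoption for computer/mathematical
# occupations, using the two figures quoted in the text above.

capability_exposure = 0.94  # Eloundou et al.: share of tasks an LLM could speed up
observed_exposure = 0.33    # Anthropic Economic Index: share of observed use

gap_ratio = capability_exposure / observed_exposure
unrealised = capability_exposure - observed_exposure

print(f"gap ratio: {gap_ratio:.2f}x")            # ~2.85x, the "factor of three"
print(f"unrealised exposure: {unrealised:.0%}")  # the installation-phase window
```

The ratio, not either number alone, is the Perez installation-phase signal: it measures how much technically possible displacement adoption has not yet delivered.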
## 5. Methodological Appendix: Cross-Case Feasibility Matrix

*Where the numbers are solid and where they're derived*
Not every case in this sample has equally strong data for every metric. The following matrix shows data confidence per disruption per metric, so readers can see where the quantitative claims are solid and where they rest on estimation. Per BR-21, every derived value in disruptions-data.json ships with derivation_method and uncertainty_band fields; bare point estimates would propagate false precision into downstream analysis.
Metrics: (a) time from commercial viability to 50% task displacement · (b) institutional response lag · (c) reskilling adjacency · (d) geographic/demographic concentration · (e) peak annual displacement rate.
Of the 20 cases:

- **7 high-feasibility** across most metrics: PC/Office Automation, Internet/E-commerce, ATMs, Containerisation, Industrial Robots, GPS/Ride-hailing, Telephone Operators (Spreadsheets and E-commerce/Retail show mixed profiles).
- **8 medium-feasibility**, with per-cell confidence flags.
- **5 low-feasibility**, appearing in case narratives but not in the structured metrics table: Printing Press, Steam/Railways, Electricity, Elevator Operators, LLMs/Knowledge Workers.

That leaves 75% of cases workable: scope matched to data density, not forced quantification where the data doesn't support it.
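A minimal sketch of what a BR-21-compliant derived value might look like, and a check that rejects bare numbers. Only the derivation_method and uncertainty_band field names come from the text above; the record contents and the remaining field names are illustrative mock-ups, not actual entries from disruptions-data.json.

```python
# Hypothetical shape of one derived value under BR-21: every derived number
# carries derivation_method and uncertainty_band, never a bare point estimate.
import json

record = {
    "case": "Telephone Operators (US)",          # illustrative mock record
    "metric": "peak_annual_displacement_rate",
    "value": -0.443,
    "derivation_method": "city-cutover event study",
    "uncertainty_band": [-0.50, -0.38],
}

def check_br21(rec):
    """Reject derived values that ship without the two required fields."""
    required = {"derivation_method", "uncertainty_band"}
    missing = required - rec.keys()
    if missing:
        raise ValueError(f"BR-21 violation: missing {sorted(missing)}")
    lo, hi = rec["uncertainty_band"]
    assert lo <= rec["value"] <= hi, "value outside its own uncertainty band"
    return True

check_br21(record)
print(json.dumps(record, indent=2))
```

A check like this is cheap to run at data-load time and makes the "no clean numbers" rule enforceable rather than aspirational.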
The LLMs/Knowledge Workers case is low-feasibility because the disruption is current and data is still accumulating. This is the case the other 19 are meant to calibrate. The thin-data-on-the-case-we-most-want-to-predict problem is structural, not a flaw in this project.