Executive Summary
The latest data from Yale’s Budget Lab and Brookings (Oct 1-2, 2025) shows no major AI-related job losses yet. But beneath that stability lies a quieter shift: AI agents are starting to take over the tasks that give people their first work experience. As these tools absorb drafting, scheduling, and information-sorting, the path to professional development narrows. This briefing argues that policymakers and employers must act now to safeguard entry-level learning opportunities before they disappear unnoticed.
Key Points
- The Yale/Brookings study finds overall employment steady, even in AI-exposed sectors, suggesting time to plan rather than panic.
- “Assistant-style” agents target the routine tasks (emails, data entry, summaries) that juniors rely on to learn.
- The OECD’s September Outlook signals easing labour markets and shifting skills demand, while the ILO’s late-September workshop highlighted risks to job quality and equitable access.
- If entry-level work dries up, students and newcomers (those without networks) will be hit hardest.
Analysis
The October 2025 Yale/Brookings findings contrast with popular fears of automation-induced unemployment. Instead, they paint a subtler picture: AI integration without large layoffs, but with growing task displacement. For senior employees, AI can enhance productivity. For early-career workers, it erases the very tasks that serve as training grounds.
The “first-rung problem” isn’t about job counts; it’s about skill formation. When generative AI handles first drafts and meeting notes, junior employees lose opportunities to build judgement, communication, and confidence. The OECD and ILO both point to a narrowing window for upskilling, one that risks deepening inequality if entry pathways aren’t protected.
Policy Implications
Set internship standards:
- Require that AI-assisted teams give interns “human-first” drafting opportunities, with supervisors reviewing both AI and human output.
Mandate transparency:
- Employers using AI for task support or performance evaluations should disclose tool use and ensure assessments measure human capability.
Track first-rung health:
- Public or large private employers should report annual data on internships, junior roles, and the percentage of work AI tools perform.
Invest in AI-adjacent training:
- Governments can fund short, supervised modules (e.g. prompt design, information verification, data hygiene) to build readiness without replacing practice.