Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era

Community Article · Published August 20, 2025

Labour market researchers and policymakers have long relied on task-based frameworks to make sense of how technology reshapes work. The most widely used examples are the U.S. O*NET Content Model, the International Standard Classification of Occupations (ISCO) maintained by the ILO, and the European Skills, Competences, Qualifications and Occupations (ESCO) taxonomy. These frameworks break occupations down into their component tasks and activities and link them to the skills and knowledge required.

But in the generative AI era, the fit between these frameworks and reality is shrinking fast. While O*NET is widely used to measure the impact of AI on labour (see Frey and Osborne, Handa and Tamkin et al., Tomlinson et al.), its categories only partially capture the forms of automation now possible. And the issue goes deeper than AI alone. These taxonomies were designed not just before large-scale AI deployment, but before most work itself became digital-first. Over the past two decades, work deliverables have increasingly taken digital forms (spreadsheets, databases, emails, slide decks, code repositories) that fit neatly into analytics pipelines and are therefore easier to automate. At the same time, trillion-dollar firms have emerged whose core business is digital technology and automation. Together, these shifts make it much harder for classical taxonomies to reflect the complexities of today’s world of work.

That leaves us with a fundamental mismatch: we are trying to measure the future of work with tools built for the past. The limitations are visible in the research literature as well, where applications of existing taxonomies to AI are scattered.[1] Workflows today are increasingly granular, hybrid, and mediated through digital systems in ways the old frameworks never anticipated.

Part of the problem lies in the assumptions built into existing frameworks. Most task descriptors were designed around human-centric performance, encoding what people can or cannot do within the limits of human attention, dexterity, or cognitive bandwidth. They rarely anticipate machine-specific constraints, such as which modalities a system can handle or how easily tasks can be parallelised at scale. This becomes even more apparent with generative and multi-modal AI. Taxonomies have categories for “operating machinery” or “preparing documents,” but they lack any meaningful way to capture “synthetic content creation,” where an AI can generate an advertising campaign or replace an entire class of creative roles such as fashion models or voice actors [Thomas]. These are not incremental automations of sub-tasks; they are one-step substitutions of whole professions. Similarly, emerging hybrid human-AI workflows blend judgment, prompting, and oversight in ways that no current classification can describe.

While this means taxonomy-based approaches are insufficient on their own to assess the broader impact of AI on labour, they need not be discarded entirely. They can still serve as a useful tool to describe the use of AI systems in structured ways, generate comparable statistics, and support further analysis, provided they are sufficiently transparent and adapted to the realities of contemporary work.

What We Need in a Taxonomy for AI and Labour Impact Research

Even with their limitations, task-based frameworks can still provide useful building blocks if we are clear about what can be salvaged and what needs to change. Task-level granularity remains valuable for spotting where AI is most relevant, whether automating routine duties or augmenting complex workflows. Their hierarchical structure, from broad categories like “Information Processing” down to specific actions, still enables analysis at multiple levels. And their existing links to occupational classifications remain indispensable for producing comparable labour statistics across countries.
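
To make the structural point concrete, here is a minimal sketch of how a hierarchical, task-level taxonomy with links to occupational classifications might be represented in code. The `Task` and `Category` classes, the field names, and the example codes are illustrative assumptions for this post, not part of any official O*NET or ISCO schema.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A single work activity at the most granular level of the taxonomy."""
    task_id: str
    description: str
    # Links to occupational classifications keep statistics comparable
    # across countries. The codes used below are illustrative placeholders.
    onet_codes: list[str] = field(default_factory=list)
    isco_codes: list[str] = field(default_factory=list)

@dataclass
class Category:
    """A broad category such as 'Information Processing', containing
    narrower subcategories and/or concrete tasks."""
    name: str
    subcategories: list["Category"] = field(default_factory=list)
    tasks: list[Task] = field(default_factory=list)

    def all_tasks(self) -> list[Task]:
        """Flatten the hierarchy so analysis can run at any level."""
        found = list(self.tasks)
        for sub in self.subcategories:
            found.extend(sub.all_tasks())
        return found

# Illustrative entry: the task ID, description, and codes are invented.
info_processing = Category(
    name="Information Processing",
    tasks=[Task("IP-001", "Summarise incoming reports",
                onet_codes=["4.A.2.a.2"], isco_codes=["4131"])],
)
print(len(info_processing.all_tasks()), "task(s) under Information Processing")
```

The hierarchy is what makes the multi-level analysis described above possible: the same structure can be queried at the level of a broad category or flattened down to individual tasks.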

But to make these taxonomies genuinely fit for the AI era, several changes are needed. Categories must become dynamic and digital-first, expanding to include emerging AI-native tasks and synthetic roles that go well beyond the traditional scope of “operating machinery” or “processing paperwork.” The design must be modular, allowing task lists and descriptors to evolve as AI capabilities advance, rather than remaining frozen in a snapshot of past work practices. And perhaps most importantly, they need to embed worker input: mechanisms for workers themselves to signal which tasks they want automated, augmented, or preserved, for example by learning from the AI-use policies workers have already established. As recent OECD surveys (2023) show, workers differ sharply in their preferences, often welcoming automation of repetitive or burdensome tasks while insisting that interpersonal and judgment-heavy work remain human-led.
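
As a rough illustration of what such a modular, worker-informed descriptor could look like, the sketch below extends the earlier structure with versioning, modality fields, and a per-worker preference signal. The `AITaskDescriptor` class, its field names, and the survey values are hypothetical design choices, not an existing standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class WorkerPreference(Enum):
    AUTOMATE = "automate"   # worker wants the task taken over by AI
    AUGMENT = "augment"     # AI assists, human stays in the loop
    PRESERVE = "preserve"   # task should remain human-led

@dataclass
class AITaskDescriptor:
    """A modular, versioned task descriptor that can evolve as AI
    capabilities advance, instead of freezing a snapshot of past work."""
    task_id: str
    description: str
    version: str                        # descriptors are revisable over time
    modalities: list[str] = field(default_factory=list)  # e.g. text, image, audio
    ai_native: bool = False             # task exists only because of AI (e.g. prompting)
    worker_preferences: dict[str, WorkerPreference] = field(default_factory=dict)

    def preference_share(self, pref: WorkerPreference) -> float:
        """Share of surveyed workers expressing a given preference."""
        if not self.worker_preferences:
            return 0.0
        votes = sum(1 for p in self.worker_preferences.values() if p is pref)
        return votes / len(self.worker_preferences)

# Illustrative record: the worker IDs and responses are invented.
summarise = AITaskDescriptor(
    task_id="IP-001",
    description="Summarise incoming reports",
    version="2025.1",
    modalities=["text"],
    worker_preferences={"w1": WorkerPreference.AUTOMATE,
                        "w2": WorkerPreference.AUTOMATE,
                        "w3": WorkerPreference.PRESERVE},
)
print(f"{summarise.preference_share(WorkerPreference.AUTOMATE):.0%} want automation")
```

Storing preferences per task, rather than per occupation, is what would let the kind of heterogeneity the OECD surveys document feed directly into the taxonomy itself.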

The task is clear: keep what works, discard what doesn’t, and build new frameworks that reflect how AI is reshaping work today. That means taxonomies that are transparent, flexible, and give workers a say in what should be automated or preserved. If we don’t act, we’ll keep measuring the wrong things. If we do, we can build the evidence and policies needed to make AI strengthen, not erode, the future of work.

[1]: As pointed out by Brynjolfsson et al. Various applications of O*NET include the work of Eloundou et al., Fenoaltea et al., and Pizzinelli et al.
