Yesterday morning I went to a breakfast seminar about AI and the future of work at Segerstedthuset. It ended up being a great discussion that went off the rails of the original ‘plan’ in a way that felt interesting and productive. My hasty notes are below.
Speaker: Oskar Nordström Skans, Professor of Economics at Uppsala University. UUniCorn, Dec 19, 2025.
AISCAF research cluster
Part of The Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS).
The cluster (now getting up and running) focuses on AI and the labor market. The cluster includes economists, sociologists, and other disciplines.
The grant is 32.5 million SEK, going largely to young researchers - hiring many postdocs, etc. Some postdocs will be sent to the US (MIT). All funded by the Wallenberg Foundation.
AI and the labor market
- “Will I have a job in the future?”
- “Will my boss be an algorithm?”
- “What should my kids study to be employable?”
Everyone is thinking about this: on the bus, business leaders, parliament.
The cluster contains 3 work packages:
WP1: AI and the nature of work
- How does AI affect the nature of work?
- Augmentation/replacement/quality of job tasks
- Unique detailed data from national registers, surveys, job postings
- Interdisciplinary integration: economics, business, sociology
Empirical project at its core - theory will be there, but they’re trying to measure as best they can. But measurement is hard and quickly outdated.
Complemented with surveys.
WP2: AI-based personnel management
- How do AI-based personnel management applications impact inclusion, exclusion, and organizational efficiency?
- AI tools in hiring, matching, HR management
- Efficiency gains/losses and bias/fairness implications
- Experiments and qualitative studies
- Collaboration with public/private sectors
Doing randomized controlled trials at firms that use AI-based personnel management for recruitment.
The Public Employment Service is doing something similar, trying to see how their algorithm changes who gets placed in programs, etc. How matching between workers and vacancies can be improved.
WP3: Technology, Structural Change, and Social Policy
- How does AI affect structural change - and what are the appropriate public policy responses?
- Market-level impact, supply and demand for workers, inequality
- Policy responses (education, retraining, social protection)
- Equitable distribution of technological benefits
Technology and the aggregate labor market
Drawing on a presentation he gave at the Central Bank.
- Technology → productivity growth
- Productivity growth → higher material wealth
- Productivity → higher average wages
- Productivity growth is unrelated to job growth
- Technology as a rising tide lifting all (or most) boats
In the past 200 years we invented things that made us more productive. That generated higher material wealth, which led to higher wages: for the same work, we get paid more now than 200 years ago. And people still have jobs despite all this. You can think of it as tech making things better: still jobs, more money.
But:
- Tech → old jobs replaced by new jobs
- Creative destruction (see my Philippe Aghion lecture notes!)
- Sectoral reallocation
- Agriculture → manufacturing → services
- Labor is reallocated to slow-growing sectors
- Occupational reallocation (about the last 20 years)
- Polarization and the decline of middling-wage jobs
- Middle-wage jobs disappeared, with manual manufacturing jobs being replaced by machines
Structural change is not without cost
Structural winners and losers. Structural change means skill-biased technical change: falling returns to routine and manual skills, and at the same time growing returns to non-cognitive and social skills. Social skills now command higher wage returns, while calculation has been easy to replace with machines.
The labor market needs to churn, and adjustment is costly (people get hit hard). This can mean displacement and unemployment spells, which also affect geographic and occupational mobility.
Policy responses matter: how will educational systems, labor market institutions, and social insurance/support systems adjust?
A lens we used to have, though it isn't clear it's true: tech tends to be good for educated people (because they become more productive) while replacing the less educated. McDonald's can be used as a counterexample: you used to need to know how to count to work there, since you handled money. Now machines mean workers don't have to do that anymore.
Why care about AI?
Every new technology is new in economically relevant ways, and affected segments differ.
- Which sectors and occupations are exposed?
- Which tasks are replaced vs complemented?
Likewise, the winners and losers differ:
- Skill groups, regions, firm types
- Notably, adjustment costs are (very!) heterogeneous
Policy needs differ.
How do we know if AI is like another “new” wave of technology, or something totally different?
Maybe AI is fundamentally different, not just a new wave
Maybe it is so different that the stuff we learn about past tech is no longer true.
“Now we don’t need to work anymore” - hasn’t been true in the past. Who knows if it’s true now? We can make guesses, but at the heart we don’t know.
Some possible differences between the AI technological wave and prior technological waves:
- AI as a potentially general-purpose technology in a way other waves have not been.
- Different groups may be affected more than in previous waves (high-skill cognitive work, professional and creative occupations)
- Dissemination differences:
- Broader reach across sectors
- Faster adoption via low-cost software and cloud services
- Direct impact on innovation
Key concerns (which may or may not be true):
End of the era of steadily rising wages? → inequality. This has been an issue in the past decade in the US, not so much in Europe and not at all in Sweden (Sweden has been growing except for inflation issues in past years)
End of the “new jobs” era? → mass unemployment. Disappearing entry-level jobs → lost generations of labor-market entrants. You can tell AI to do a simple task the way you do it: simple coding, lawyer document sifting, etc. Maybe we find new ways of training people to do more advanced jobs, but there could be a slump in between.
AI and entry-level jobs
A question from the audience (paraphrased):
“I noticed entry-level jobs shrink dramatically. Companies tend to hire senior employees. Without entry-level jobs we can’t become senior. How can we train ourselves? Do you have any advice?”
A: If the economy goes down, entry-level jobs suffer first, and young people get hurt. If it is a structural thing and these entry-level jobs are gone and never coming back, people adjust by finding new routes: going into non-traditional sectors, finding new education paths. They might end up in areas where their human capital can't be put to as good use as it would have been.
Employment by seniority, software engineers
- Early career jobs (22–30) went down drastically. Early career may be considered the “canary in the coal mine.”
- Data from a large part of the US
- Could say: “Tech is going down, who’s surprised?” But occupations more exposed to AI suffer more. Coders, marketing, etc.
Seniors/juniors by AI adoption
In US data, juniors are going down, seniors going up in firms that adopt AI. This could be just a transition phase. Maybe initially you don’t need young people, you need experienced people.
But Denmark sees nothing of this. Are we just slower in Europe than in the US?
Fewer vacancy postings with AI-exposed tasks.
Productivity: Time vs quality in code
There is a quality trade-off with AI, but the loop is much faster. You can get bad code in massive amounts at essentially no cost. Any firm that can find a workflow for dealing with immense amounts of bad code, rather than a small amount of good code, is going to be the winner. The direction of the industry may be exactly this search for such workflows.
Short and long-term impact estimations
With tech we tend to overestimate impact in the short run and underestimate in the long run. It takes time to figure out how to use new tech before productivity peaks. Maybe it’s about finding new workflows to harness new ways of working with it.
Q: Are we assuming AI is going to make us more productive? What does the data show?
A: Good question. One place they looked at this is in call centers. You can measure productivity well there. If you introduce AI into call centers, who becomes more productive? The least productive workers benefit the most because they can learn from the most productive ones.
So we can train models on people who know their work, then even if you’re doing the job you can do it better with AI. It becomes a sort of knowledge transfer from good to bad performers.
If you take Daron Acemoglu, who also got the Nobel Prize recently, he is pessimistic. He'd say a lot of AI adoption produces bad, low-quality output that replaces workers, so the welfare effects may be negative. It's akin to chatbots for insurance claims: everyone hates them, but if they're cheaper, firms will keep using them. For video games, by contrast, if the code is bad the game is bad and nobody buys it. It's fair to assume it's a learning process - where it works, how to work with it. New tech is initially unknown; over time we learn where to use it and where not to. We want tech to produce efficiently, not to produce crap. And we don't want all the rents going to Elon Musk, or to train young people for jobs that don't exist.
Q: How do you measure AI use in research?
A: It’s a mess. Measurement is very hard. Surveys to firms, EU-wide surveys linked to register data. Early questions were simple; later ones are more detailed. Vacancy postings mentioning AI. Surveys to individuals: do you use LLMs at work? At home?
Research suggests women use AI tools less than men at work, especially where it’s unclear if allowed. Ambiguity matters, and there’s a gender difference in risk tolerance.
Also possible, though not yet done: buying CVs posted online globally. People mention AI use in their CVs, though truthfulness is unknown.
Q: What should our children learn?
A: Occupational forecasts are tricky. But we’ve seen that young people are generally flexible. And, training areas vs eventual occupation can differ. Study counsellors often give the advice that in the end you should just study what you want because it’s so hard to predict what will be in demand in the future and things change all the time. But that’s not true! We have the data - certain fields consistently outearn and are more stable than other fields. The truth is, if you value earning potential and job stability, you go into those fields.
It would be interesting to see whether there will be fewer people working as software developers. But should young people train as software developers? That's a different question! If occupations exposed to AI stop hiring young people, does that make the fields of training that were sending people into those occupations irrelevant? The two do not have to be the same thing. Someone can go into computer science without becoming a software developer at the end of it; those skills may still be useful in another job.
Doctors would do well even if medicine changed because decision-making skills transfer. Coders may not code but skills transfer.
Q: Transformative nature of AI decision-making?
Question was about how we have not had such a technology with “decision-making” potential before.
A: This is part of what’s different with this ‘wave’. It’s hard to speculate, measurement is preferred. Automated decision-making may change patterns, but this is unknown.
Q: Will the research cover values and where we want the labor market to go? Or do we consider this to be above our heads?
A: We will study inclusion and inequality. The impact on the work environment will be studied. But we need to measure things and get numbers, because these are hard questions and it's always about assessment. The cluster's focus is on measurement; the values discussion is left to others. We aim for evidence-based grounding of the debate.
Q: Ecology and resources?
A: Not central to the research cluster. AI consumes energy but may enable efficiency elsewhere. The cluster’s focus remains labor markets. But these concerns are there. Data centers may crowd out local industries. Energy costs matter.
One of the things that is different about this tech is that it isn't really local. One could imagine studying how a certain amount of electricity available in a local market gets allocated, but we are all using the same worldwide infrastructure.
One point more directly about the labor market: when you put up data centers consuming all this energy, it drives up costs for other industries in the local market.
Someone commented that maybe we’ll find ourselves with policy-enforced energy budgets for AI use. The speaker comments that this will likely be price-based. Costs guide usage toward high-value applications.
Q: In terms of resource requirements for infrastructure from a labor and economics perspective - what about parasitic relationships between big players like OpenAI, Microsoft, etc, and the concerns that OAI won’t be profitable for a long time? Bubble?
A: Hopefully this will be studied in other clusters, but unsure. Bubbles persist under uncertainty. Business models may change. Costs may rise.
Also, we aren't really satisfied with the quality of the output yet. Nobody wants the second-best model because it's so obviously imperfect, and pretty soon even today's frontier model won't be good enough. This constant push for better also pushes up costs.
The major conflict is between potential productivity gains and potential displacement effects.