Over lunch yesterday I attended a lecture by Amanda Lagerkvist, Professor at the Department for Informatics and Media, on how AI is set to shape and challenge our understanding of “the human.” The talk was organized by Uppsala University’s Centrum för forskning om funktionshinder (the Center for Disability Research).
Below are my rough notes. The lecture was in Swedish, so keep in mind that some of my translation/nuance may be off.
The speaker focused on exploring five main questions:
- “Solutionism”: allt ska lösas med ingenjörsdrivna lösningar? - “Solutionism”: everything will/should be solved with engineering-driven solutions?
- Att mäta människor… Eugenik + AI och biometri = sant? - Measuring people… Eugenics + AI and biometrics = a perfect match?
- Vilken definition av ‘intelligens’ förutsätts i utvecklingen och hur formar den ideal och normer kring det mänskliga? - What definition of ‘intelligence’ is presupposed in the development, and how does it shape ideals and norms around the human?
- Kan det neuropsykiatriska självet bli alltför beroende av algoritmerna och vilka är riskerna när det upptäcks genom självdiagnosticering på videodelningsplattformar? - Can the neuropsychiatric self become too dependent on algorithms and what are the risks when it is discovered through self-diagnosis on video sharing platforms?
- Vilken framtid för funkis ser ni (o)möjliggöras av AI-utvecklingen? - What future for disabled people (“funkis”) do you see being made (im)possible by AI development?
A meeting of minds (and disciplines)
The speaker’s perspective bridges technology and philosophy. She’s been interviewing AI professionals, data linguists, computer scientists, entrepreneurs, and drawing on philosophical and existential viewpoints. She focuses on the deeper human questions around AI: What are we? What do we want to become?
The three dimensions of AI
Referencing Mark Coeckelbergh, the speaker highlighted three angles:
- Technical – The nuts and bolts of machine learning and data processing.
- Political-Economic – AI as part of digital capitalism and big tech infrastructures.
- Narrative – The stories we tell about AI, from utopian visions to dystopian nightmares.
What is a human, anyway?
From biology and sociology to norm critique and existentialism, ideas about “the human” get complicated.
- Philosophical: John Locke emphasized our capacity to feel pain and empathize.
- Posthumanist: Suggests “human” might exceed biology.
- Disability reclamation: Challenges mainstream norms, asking who decides which bodies or minds are “ideal.”
AI developers’ view of humanity
AI developers the speaker has interviewed often emphasize efficiency or “helping humanity.” While that sounds nice, it can risk a narrow, solutionist perspective if we ignore social and emotional complexities. She also cited the UN Convention on the Rights of Persons with Disabilities, which highlights the potential of AI to empower—but also the pitfalls if it’s poorly designed.
The “Solutionism” trap
From a slide:
Sofia: Men en sak som jag ska berätta för dig också Amanda. För att du var nämligen med när det hände, fast jag vet inte om jag sa det i mötet. Jag har kollat upp datumet också … Att jag hade på samma dag ett möte med medicinska, tekniska forskare och möte med dig … Och efter det så blev det en konflikt i mig. För de medicinska, tekniska forskarna är jag och min sjukdom ett problem som ska lösas. Medan för filosoferna är jag ett liv som ska levas. Det har verkligen skavt i mig sedan dess. Och fortsätter att skava. På ett konstruktivt sätt, men det är ändå väldigt intressant att tänka på.
(Sofia: But there’s one thing I should tell you too, Amanda. Because you were actually there when it happened, though I don’t know if I said it in the meeting. I’ve checked the date too … On the same day, I had a meeting with medical and technical researchers, and a meeting with you … And afterwards, a conflict arose in me. To the medical and technical researchers, I and my illness are a problem to be solved. While to the philosophers, I am a life to be lived. That has really chafed at me ever since. And it keeps chafing. In a constructive way, but it’s still very interesting to think about.)
The tension flagged here is “solutionism”: the belief that every problem can be engineered away.
- An interviewee (Sofia) shared how, in medical/technical research contexts, she becomes a “problem to solve,” whereas philosophers see a person living a life.
- This friction can be uncomfortable, yet it also prompts deeper reflection on technology’s role in human experiences.
Biometrics, eugenics, and measuring people
Historically, we’ve seen pseudoscientific attempts to measure people’s worth: physiognomy, eugenics, and anthropometry. Today’s AI-driven biometrics can echo these old ideas.
Eugenic logic can reemerge in the pursuit of “optimizing” humanity.
Defining intelligence
What counts as “intelligent”? Some definitions focus purely on computation, ignoring the embodied, relational, and sometimes contradictory parts of being human. Overly technical language about “optimization” and “exactness” risks overlooking the messy, vital qualities (vulnerability, creativity, contradiction) that make us who we are.
ADHD, algorithms, and online communities
The speaker focused on the prevalence of ADHD self-recognition and self-diagnosis via algorithmic social media platforms like TikTok. These spaces can help those with ADHD discover shared experiences:
- Self-identification: Realizing you’re not alone can be empowering.
- Rabbit holes: Algorithms can also fuel disinformation or narrow perspectives.
There’s tension between the relief of finding your “tribe” and the risk of letting algorithms define your identity.
AI “Apocalypse” vs. “Acceleration”
The speaker referenced the debates around pausing AI research vs. speeding it up: “AI doomers” vs. “AI accelerationists.” Some fear existential threats; others want more rapid progress. This tug-of-war between cataclysmic and messianic narratives also shapes public discourse.
“The idea that this stuff could actually get smarter than people - a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 60 years or even longer away. Obviously, I no longer think like that.
What we want is some way of making sure that even if they’re smarter than us, they’re going to do the things that are beneficial for us. But we need to try to do that in a world where there [are] bad actors who want to build robot soldiers that kill people. And that seems very hard to me.”
- Geoffrey Hinton, Nobel Prize laureate in Physics, “Godfather of AI”
Concluding thoughts: keep planting “weeds”
“Fortsätt att plantera ogräs—ställ svåra, irriterande frågor!” (Keep planting weeds—ask difficult, irritating questions!) Rather than let solutionism or hype take over, we need critical engagement, especially for marginalized groups like those with disabilities, to ensure AI fosters inclusion rather than perpetuating biases.
Ultimately, AI isn’t just a technical puzzle. It’s a mirror reflecting our assumptions about intelligence, worth, and what it means to be human. If we keep questioning and remain aware of the existential stakes, perhaps we can steer these technologies toward a more humane—and truly creative—future.
Impact and usefulness of these questions for a developer working on practical AI applications
Working on AI applications for quite narrow use cases, these bigger questions can feel a little too ethereal for my day-to-day work, or rather, the day-to-day work can feel trivial and inconsequential next to these bigger questions. But in the end, I do believe every grain of sand placed in the wall of AI-related development shapes both its future and our outlook on it. Building AI tools used by others affects how the norms around AI use cases evolve over time. So I think it’s still interesting at worst, and useful at best, for a worker ant excavating a grain of sand to be cognizant of the larger anthill.