Liza Shulyayeva

From chatbots to autonomous agents - teenage me would be mindblown



I’ve been interested in the idea of virtual intelligence ever since I was a teenager in Alabama. I’ve played around with everything from building rudimentary chatbots to creating virtual snails and experimenting with open-ended simulations.

When I was a kid messing with my chatbot, the idea of “strong AI” or “AGI” seemed largely unserious, considered just an Asimov-esque fantasy. Now, we’ve got MIT publishing reports about the road to artificial general intelligence.

And back then, actually working with ‘AI’ in any sense seemed, to my teenage mind, limited to a realm of genius I would never attain.

Today, the AI company I work for launched QA.tech 1.0. I’ve spent the last year and a half helping to build autonomous web app testing agents. I feel like a plumber, in the most fun way possible - wiring models and surrounding infrastructure together to make them do something useful.

It’s been an interesting duality, working with and learning about AI across two sides of my life:

On one side, the practical applications and that playing-in-a-sandbox feel of actually fricking building this stuff! And better still - doing it when it’s still in its early, undiscovered stages! Actually working with the thing I’ve been so interested in since I got my first computer. Discovering how to make agents useful and reliable and robust. Figuring out how to measure their performance and how to integrate them seamlessly into daily workflows. At QA.tech I’ve gotten to help build AI-powered PR review flows, test suggestion and discovery, tracing and logging infrastructure, and so much more.

On the other side, I’ve been exposed to - and devouring - the more academic, policy-based, and philosophical questions you tend to encounter in a university town like Uppsala.

As I wrote in another post after a lecture about how AI will change the norms around the human:

Working with applications of AI for quite narrow use cases, these bigger questions can feel a little too ethereal for my day to day work - or rather, the day-to-day work can feel trivial/inconsequential for these bigger questions. But in the end, I do believe every grain of sand placed in the wall of AI-related development shapes both its future and our outlook on it. Building AI tools used by others impacts how the norms around AI use cases evolve over time. So I think it’s still interesting at worst and useful at best for a worker ant excavating a grain of sand to be cognizant of the larger anthill.

AI realism

Now that I’ve both used and worked with this technology for a while, I feel firmly in the camp of “AI realism”. AI in its current iteration is not the magic solution to anything and everything under the sun. The effects of things like AI-assisted coding and essay writing on productivity are not as clear-cut as we might’ve hoped, and there are real psychological consequences of relying on LLMs for social contact and advice.

On the other hand, I’ve seen first-hand how a purpose-built AI agent can test a complex web application faster and more effectively than traditional alternatives. I’ve seen how a custom GPT with my books as a knowledge base can function as a useful writing assistant. And honestly, I’ve experienced how Claude and ChatGPT can help me think through a difficult personal situation (in conjunction with actual therapy!)… as long as I remember that they are but a flawed mirror of myself.

The most extreme camps seem to be the most visible - AI hype is cringe and AI denial is delusional. I don’t know if the current iteration of AI is how we get to AGI (and I’m leaning toward ‘probably not’). But what we have today isn’t going anywhere - I think we’re just going to keep homing in on useful applications and learning what does and doesn’t work in practice.
