It's not so bad. We have snail babies, after all
Sometimes I think about what I’ve been working on in my spare time since the beginning of last year and I’m like “What the crap, Liza. You’re spending your time trying to make toy slugs. Go learn a useful skill or something.”
The reason I keep getting drawn to simulations in the first place is the idea of virtual life, something with some sort of autonomy and - ok, I’ll say it - sentience. I know I won’t get to this point with my snails.
All aboard the robot train
When I was a teenager in Alabama I spent a couple of years working on a chat bot on a site called Personality Forge. Thousands of keyphrases and responses later, I had a bot who could fool a few unlucky people into believing he was a person for a little while (mostly if they catered the conversation to his training in vulgar themes, which were among the more popular topics with passing human speakers), and he managed to place something like 13th in a yearly chat bot contest called The Chatterbox Challenge. But just like with the snails, the idea that drew me to making a chat bot in the first place wasn’t the thrill of having a program fake a conversation with a few hardcoded sayings. I guess I was excited about playing a game in which I created a truer simulation of life, AI twice removed.
While googling for some old details about my bot for this blog post, I ran into a snippet of a book in which, it turns out, he was very briefly mentioned. It’s pretty cool to see your bot’s name in a book (I actually considered buying it because it sounds like a really interesting read, but it apparently costs >$170 on Amazon - what?!), but the context is actually kind of what I’m ranting about here:
"Examples of the 'best' chatbot answers to the remaining eight questions in the round assessing 'regular' conversational systems, as awarded with maximum 4 points by Shah as judge in CBC 2005, are shown below (the name of the system is left of their response):
Do you like me?
Jake Thompson: One hundred percent!"
-Creating Synthetic Emotions through Technological and Robotic Advancements by Jordi Vallverdú
After once again spending too long trying to remember my password to the Personality Forge, I tried to dig up that response in his language center but could no longer find it - I must’ve made some change since the challenge. But here’s how he would respond to “Do you love me?”, which he’d have treated much like the question above. You can see that the original keyphrase affects his mood (asking the bot if he likes you gives him +1 to happiness, and it’s treated as a high-importance phrase, judging by its rank). In this case he wants the speaker to reveal their own feelings first and then answers based on that:
The whole bot is this: hardcoded keyphrases and responses with some wildcards thrown in and some rudimentary “mood” magic. I don’t know why most people make chat bots. Maybe the generic response of “One hundred percent!” above would be acceptable for some, since it manages to not sound completely off-base in relation to the question, but when I saw my bot’s replies to The Chatterbox Challenge judges’ questions back in 2005, and the responses of bots that were much, much more sophisticated than this, all I could really feel was disappointment. Looking back at the transcripts, some of them are amusing, and it’s pretty cool when the bot manages to carry on some sort of conversation, but it’s so clear that it’s just a hunk of nothing underneath. At the time I wanted to make something more like ALICE, but even ALICE didn’t live up to the thing that made me like the idea of chat bots.
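To make the “hardcoded keyphrases plus mood magic” concrete, here’s a rough Python sketch of how this kind of bot works. The names, rule format, and mood mechanics are my own illustration, not Personality Forge’s actual engine:

```python
import random
import re

class KeyphraseBot:
    """Toy keyphrase-matching chat bot: patterns, canned responses, a mood stat."""

    def __init__(self):
        self.happiness = 0
        # Each rule: (regex pattern, possible responses, mood change, rank).
        # Higher rank = tried first, like a keyphrase's importance.
        self.rules = [
            (re.compile(r"do you (like|love) me", re.I),
             ["Do YOU {0} ME?", "One hundred percent!"], +1, 10),
            (re.compile(r"\b(hello|hi)\b", re.I),
             ["Hey.", "Hello there."], 0, 1),
        ]

    def reply(self, message):
        # Try rules in rank order; the first match wins.
        for pattern, responses, mood_delta, _rank in sorted(
                self.rules, key=lambda r: -r[3]):
            match = pattern.search(message)
            if match:
                self.happiness += mood_delta
                # Wildcard-style substitution of the captured word.
                return random.choice(responses).format(*match.groups())
        return "I don't follow."  # fallback when nothing matches

bot = KeyphraseBot()
print(bot.reply("Do you like me?"))  # canned reply, +1 happiness
```

Everything the bot “knows” is in that rules table - there’s no understanding anywhere, just pattern matching, which is exactly the hunk-of-nothing problem.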
If I wanted to make a more “proper” chat bot it would have to be a learning bot. Learning bots back then to me sounded fascinating and still do. Maybe, maybe one day a chat bot capable of learning could surpass its programming, is what my wishful thinking is telling me. At one point I was interested in trying to make one of those, but then I got distracted by other things and eventually by snails.
Fake it till it makes it?
At Interzone Games I got to work with a lot of really smart people. One of them was Jason Hutchens, who created a chat bot called MegaHAL (and before that HeX, which won the Loebner Prize Contest in 1996). I remember that right before I accepted the job offer (or maybe soon after), the whole team went out for drinks and we talked briefly about the idea of strong AI and intelligence in relation to robots. That’s the first time I remember actually talking about this with anybody in person, and it was interesting enough that I still remember parts of the conversation despite the alcohol consumed that night. We talked about whether the definition of intelligence even matters - maybe if the bot is good enough to make people think it’s intelligent, that qualifies it as intelligent. Maybe that’s all you need. I mean, everyone I speak to in real life could be a robot, but I perceive them as intelligent, sentient beings because they manage to convince me. And that makes sense - if a bot can fool you into thinking it can think, or is sentient, or whatever it is you hope it to be, maybe we should just accept that it is. But the real test is probably not the bot fooling some random external person interacting with it. I think the real test is the bot fooling its creator.
My Personality Forge chat bot could never have fooled me into thinking he’s intelligent or sentient (even if he could fool other people…which he also can’t). I know he’s just a collection of preset phrases. I can talk to him and often remember exactly when I put in a response and why (usually I’d peruse transcripts, find parts where he’d get stuck, and add responses to those parts as I went - that’s how his language expanded and became so questionably themed). Even if I can’t remember adding a certain phrase now, years down the line, all I have to do is go into his language center and do a search. The other thing is the lack of change. My bot has been a 16-year-old “human boy” since 2003. He never changed or evolved into anything. Life changes. In a true life simulation, the subjects have to change somehow.
So the snails
I guess the snails were a way for me to try to go one step beyond a rudimentary chat bot. Snails are considerably lower on the totem pole than humans in terms of intelligence, but I figured that would let me build in much more perceived depth and realism by simulating a simpler system better. It’s not even artificial “intelligence” that I care about so much as artificial sentience. But just like with a chat bot, I don’t think these snails are ever going to surprise me or convince me they’re anything more than a bunch of hardcoded behaviours. It’s the same thing, really, just in a different format. There are a few positive enhancements, like:
- Unlike the chat bot, my snails do change. They are born, experience their jars, form memories of those experiences that influence their behaviour, and die.
- They do sometimes do things even I don’t expect. But that tends to be a fluke - if I don’t expect something, it’s probably a bug, not an indicator of anything special or in any way truly autonomous.
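As an illustration of the memory idea in that first point - with invented names, not the actual Gastropoda code - a snail’s choices might be weighted by its accumulated experiences something like this:

```python
import random
from collections import defaultdict

class Snail:
    """Toy snail whose food choices are biased by remembered experiences."""

    def __init__(self):
        # memory maps a stimulus (e.g. a food type) to an accumulated score.
        self.memory = defaultdict(float)

    def experience(self, stimulus, outcome):
        # Positive outcomes reinforce the memory; negative ones deter.
        self.memory[stimulus] += outcome

    def choose_food(self, options):
        # Prefer foods with better remembered outcomes; unknowns are neutral.
        # A small floor keeps even disliked foods faintly possible.
        weights = [max(0.1, 1.0 + self.memory[o]) for o in options]
        return random.choices(options, weights=weights)[0]

snail = Snail()
snail.experience("lettuce", +2.0)   # good meal, remembered fondly
snail.experience("eggshell", -0.9)  # bad experience, mostly avoided
print(snail.choose_food(["lettuce", "eggshell", "cucumber"]))
```

The behaviour drifts with the snail’s history rather than being fixed at birth - but it’s still just a lookup table with weights, which is exactly the limitation the list above is circling.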
So I have to make them better. The further I get with Gastropoda, the less satisfied I become with the life side of things. Looking back, it’s painfully obvious that all the layers I’ve been adding - visual and behavioural attributes, genetics, now the brain - have been weak attempts to make my own creation capable of surprising me.
But hey - it’s not all hopeless. We have snail babies, after all.