Day 1 of ECAL 2017 - ISAL Summer School

Today was the first day of ECAL (European Conference on Artificial Life) 2017 in Lyon, France and my first year attending. It’s lucky I managed to go this year, because apparently this is the last ECAL! Next time the conference will become the International Conference on Artificial Life and will be held much farther away (I think it is meant to be held in Japan). I’m already getting my hopes up to go again, though, not only for the conference but to visit Japan once more!

Anyway, as a hobbyist hacking together an amateur snail simulation I sort of went in not getting my hopes up - I expected a large portion of the material to go over my head. Looking back, it may not have been a good idea to pay a substantial sum of money to visit a conference I wasn’t sure I’d get anything out of. But I’m glad I did, because the first day was excellent!

Today at ECAL they held the 4th ISAL (International Society for Artificial Life) Summer School. The day consisted of four talks providing what I thought was an excellent introduction to artificial life. I am very sleepy, so please forgive any typos or incoherent rambling, but here is a brief overview of the talks:

All of the talks were great. Artificial Chemistries was a little outside of my primary interest, but I definitely want to learn more about the topic.

Historical and Philosophical Perspectives on Artificial Life was the perfect start to the conference. It focused largely on the arrow-of-complexity hypothesis and how to test for it, went into some philosophical views on the question, and covered a few relevant simulations.

Digital Evolution was inspiring - it was mostly about AVIDA and examples of simulations that they have run. I think Charles Ofria mentioned he was running a tutorial or workshop on building simulations for the web tomorrow and I definitely plan to attend.

Open-Ended Simulations covered a concern that has been nagging at me for a while with my snails - in it Susan Stepney brought forward a definition and criteria for open-endedness and novelty, and explained the necessity of having simulations be self-modifying to truly achieve it. She also drew parallels between the concept of a scientific model of the physical world and how it easily translates into an OOP structure.

She explained a potential structure for an open-ended simulation: in it we have the concept of an instance (well…an instance), a model (a class), and a meta-model (an interface). She gave several examples, like a map. An instance might be the physical city of New York. A model might be a topological map of New York. Another model of New York might be a navigational “map” in the form of driving directions. The same instance can be represented by many models. A meta-model would be a model of a model, e.g. a legend for a map of New York that shows what represents roads, bridges, pathways, etc.

Another example: A meta-model may be an Agent. A model of this meta-model may be an ant; another model of this meta-model may be a bird. And of course the instances would be individual birds and ants running around in the simulated world.
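As I understood it, this hierarchy maps quite directly onto code. Here is my own sketch of the idea (the names and method are mine, not from the talk):

```typescript
// Meta-model: a model of models. Here, an interface describing
// what any agent model must provide.
interface Agent {
  readonly species: string;
  step(): void; // advance this agent by one simulation tick
}

// Models: concrete classes implementing the meta-model.
class Ant implements Agent {
  readonly species = "ant";
  step(): void { /* forage, follow pheromones, ... */ }
}

class Bird implements Agent {
  readonly species = "bird";
  step(): void { /* fly, look for food, ... */ }
}

// Instances: individual ants and birds running around in the world.
const world: Agent[] = [new Ant(), new Ant(), new Bird()];
world.forEach(a => a.step());
```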

The problem, she explained, is making a jump to have the engineering side of the simulation - the actual program - adapt to the data that emerges at runtime.

For example, when building a simulation we may have the concept of an agent in mind - and an agent might be a bird or an ant, so we define an IAgent interface and a Bird and Ant class to implement it. We then run the simulation, have it create our first instances of birds and ants, and let it go. Now, at runtime, some birds might eventually start exhibiting some new behaviour, or develop some new trait. They might “evolve” into a brand new observable model, a “Flock”, with its own meta-model, an “Aggregate”. And a flock of birds may have special properties that didn’t previously exist. But that won’t be represented in our program, because we never anticipated a flock. And in our program a worker ant may never, for example, have the potential to evolve wings, because that wasn’t part of our original model. So there is very little room for novelty or open-endedness.

Susan Stepney proposed that the main problem is making that jump from the “scientific model” we observe at runtime to the “engineering model” that is powering the simulation. For example, maybe there is some sort of monitoring process that sees when something new may be starting to emerge, like a flock of birds, and modifies the program to account for it, writing new classes and interfaces on its own.
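To make the gap concrete, here is a small sketch of my own (hypothetical names; the monitoring idea is my reading of her proposal, not her code). A monitor can *detect* that birds are clustering, but the type universe was fixed when we wrote the program, so there is nothing for the detected flock to *be*:

```typescript
// The engineering model fixes the type universe up front:
// only birds and ants exist, because that's all we anticipated.
type Species = "bird" | "ant";

interface Agent {
  species: Species;
  x: number;
  y: number;
}

// A monitoring process can *detect* that birds are clustering...
function looksLikeFlock(agents: Agent[], radius: number): boolean {
  const birds = agents.filter(a => a.species === "bird");
  if (birds.length < 3) return false;
  // Are all birds within `radius` of their centroid?
  const cx = birds.reduce((s, b) => s + b.x, 0) / birds.length;
  const cy = birds.reduce((s, b) => s + b.y, 0) / birds.length;
  return birds.every(b => Math.hypot(b.x - cx, b.y - cy) <= radius);
}

// ...but it has no way to *represent* what it found: there is no
// Flock class and no "Aggregate" meta-model, and the running program
// cannot write one for itself. The scientific model has moved on;
// the engineering model hasn't.
```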

I took a lot of notes and maybe the answer to this question is already in them, but as I write this I am wondering how truly novel behaviour can emerge if we are trying to power the program by the data that emerges from the program - if the program doesn’t allow for that novelty, how can we even observe it to modify the program in the first place? I guess with a flock I can sort of see it since it is more of an aggregate of existing functionality/organisms, but if we never anticipate an ant growing wings, how is the simulation going to have an ant grow wings and then tell our program that it’s possible? Maybe I’m just too sleepy to really think about this right now.

Anyway, like I said, I took a lot of notes. I think I’ll try to compile them into some sort of readable format and post them here at some point.
