Roee: Self Modifying Go Simulation Experiment - Part 2

Overview

In Part 1 we talked about the general setup of the world/grid and what the Agent and Instructions metamodels do. We left off at the part where the grid sends a copy of itself to whatever observer channels it has (currently just the aggregationObserver). So in this part we can go over my very rudimentary version of the “ORM Intentionaliser”.

Before we continue I feel I should reiterate once more that this isn’t meant to be an example of the architecture presented in the paper, but a bit of a hackaround to play with some of the general concepts described. I’ll admit that by the end I was kind of sick of looking at this, so I just got it to as reasonable a state of “completion” as I could muster.

Originally I had a lot of code snippets here…and I think I still do. But I ripped most of them out because it’s all on Gitlab anyway. Suffice to say the codebase is a bit of a buggy monstrosity, and if you’re a fan of really unhelpful commit messages, this one’s for you!

ORM Intentionaliser

The ORM Intentionaliser lives in a separate package alongside the bootstrapped simulation. This is where aggregationObserver, aggregationReifier, and aggregationModifier live.

aggregationObserver

Here is how we initialize the aggregation observer:

func (e *AggregationObserver) Init(wg *sync.WaitGroup) {
	e.wg = wg
	e.potentialAggregates = make(map[string][]float64)
	e.c = make(chan world.Grid) // Use an unbuffered channel
	e.reifier = &aggregationReifier{}
	e.reifier.Init()

	go func() {
		fmt.Println("\n\nStarting aggregation observer")
		for {
			select {
			case g := <-e.c:
				e.Watch(g)
				if g.RemainingTicks == 1 {
					fmt.Printf("\nLast tick! Ticks simulated in grid: %d, ticks processed by observer: %d", g.Tick, e.ticksProcessed)
					e.analyzePotentials()
					e.wg.Done()
				}
			}
		}
	}()
}

We can see that when we receive a grid on our channel, we run Watch() on it. This is where potential aggregates are detected.

func (e *AggregationObserver) Watch(grid world.Grid) {
	if len(grid.Population) == 0 {
		return
	}

	var fieldNames []string
	// find fields to check
	for _, a := range grid.Population {
		t := reflect.TypeOf(a).Elem()

		for i := 0; i < t.NumField(); i++ {
			fieldNames = lib.UnionStr(fieldNames, []string{t.Field(i).Name})
		}
	}
	for _, fn := range fieldNames {
		firstC := string(fn[0])
		if strings.ToLower(firstC) == firstC {
			continue
		}
		proximities := aggregate.GetProximities(grid.Population, fn)
		var vals []float64
		for _, p := range proximities {
			vals = append(vals, p.Distance)
		}
		sort.Float64s(vals)
		// Try to find "near" for this set of values. It needs to be less than the median
		near := e.findNear(vals)
		if near != -999 {
			e.potentialAggregates[fn] = append(e.potentialAggregates[fn], near)
		}
	}

	e.ticksProcessed++
	fmt.Println("\nDone with watch")
}

First we loop through the population of the grid (i.e. each agent) and reflect on its type. Then we get all fields associated with that type. We then loop through each field name. If it is an unexported field (i.e. starting with a lower case letter), we skip it - we can’t observe unexported fields. If it is an exported field, we get a slice of proximities for that field between every pair of agents. Here is what a proximity looks like:

type proximity struct {
	a1       world.Agent
	a2       world.Agent
	propName string
	Distance float64
}

Each type of field may have its proximity calculated differently. A Pos type would get a distance between two sets of x and y coordinates, an int might just run d = float64(intAbs(x - y)).
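
To give a feel for that dispatch, here’s a minimal sketch of a per-field distance function (assuming world.Pos has integer X and Y fields; the real logic lives in the aggregate package):

// Sketch: compute a per-field distance between two agents via reflection.
func fieldDistance(a1, a2 world.Agent, fieldName string) float64 {
	v1 := reflect.ValueOf(a1).Elem().FieldByName(fieldName).Interface()
	v2 := reflect.ValueOf(a2).Elem().FieldByName(fieldName).Interface()

	switch x := v1.(type) {
	case world.Pos:
		// Euclidean distance between the two coordinate pairs.
		y := v2.(world.Pos)
		dx, dy := float64(x.X-y.X), float64(x.Y-y.Y)
		return math.Sqrt(dx*dx + dy*dy)
	case int:
		// Plain absolute difference for integer fields.
		return float64(intAbs(x - v2.(int)))
	default:
		return 0
	}
}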

We create a new []float64 and copy each distance we find from every proximity into it. Then we sort the distances in ascending order and try to find a value that constitutes being “near” another agent in relation to that field. For now the “near” value is just the smallest observed distance, as long as it is lower than the median of the set. If we find a suitable near value, we add it to a map of potential aggregates under that field’s name.
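
findNear under that rule can be as simple as this sketch (the real version is on Gitlab; -999 is the “nothing suitable” sentinel you can see checked in Watch() above):

// Sketch: "near" is the smallest distance, but only if it beats the median.
func (e *AggregationObserver) findNear(sorted []float64) float64 {
	if len(sorted) == 0 {
		return -999
	}
	median := sorted[len(sorted)/2]
	if sorted[0] < median {
		return sorted[0]
	}
	return -999
}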

If we are on our last tick, we analyze all the potentialAggregates we’ve collected throughout the run. We loop through each potential aggregate (a map of field names and distances) and find the mode of all the distances. The most frequently occurring distance becomes the maximum allowed distance for the new aggregate. So we create a new qualifier that captures these membership requirements and add it to the reifier:

newQualifier := aggregate.AggregateQualifier{
    FieldName: propName,
    MinDist:   -1.00,
    MaxDist:   modeDist,
    MinQuant:  -1,
    MaxQuant:  -1}

e.reifier.AddParam(newQualifier)
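
The mode-finding step itself might look something like this (a sketch; the actual analyzePotentials is on Gitlab):

// Sketch: the most frequently observed distance wins.
func findMode(vals []float64) float64 {
	counts := make(map[float64]int)
	var mode float64
	best := 0
	for _, v := range vals {
		counts[v]++
		if counts[v] > best {
			best = counts[v]
			mode = v
		}
	}
	return mode
}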


aggregationReifier

The next step, which is triggered from the main function, is to run all reifiers that are associated with each observer (in this case we only have aggregationReifier). So we go through each reifier and run Do(). Again we use very generic input parameters: aside from a *sync.WaitGroup, all we have is an arbitrary number of interface{} parameters. In our case this is just the new qualifier we created, but it leaves some room open (very messily) for multiple qualifiers or other kinds of parameters that an emergent reifier might use down the line.

// Currently this takes only one observation and reifies it..
// what if down the line we want to feed it multiple observations?
// Or rather one observation with multiple qualifiers
func (r *aggregationReifier) Do(wg *sync.WaitGroup, params ...interface{}) error {
	for _, p := range r.Params {
		newQualifier := p.(aggregate.AggregateQualifier)
		// Check if an aggregate with this kind of qualifier already exists.
		for _, q := range world.Qualifiers {
			aq, ok := q.(*aggregate.AggregateQualifier)
			if ok {
				if aq.FieldName == newQualifier.FieldName &&
					aq.MinDist == newQualifier.MinDist &&
					aq.MaxDist == newQualifier.MaxDist &&
					aq.MinQuant == newQualifier.MinQuant &&
					aq.MaxQuant == newQualifier.MaxQuant {
					wg.Done()
					msg := fmt.Sprintf("This qualifier already exists under the name of %s", aq.TypeToInstantiate.Name())
					return errors.New(msg)
				}
			}
		}

		n := generateName(newQualifier.FieldName)
		e := reifiedAggregate{name: n, qualifiers: []aggregate.AggregateQualifier{newQualifier}}
		r.reifiedAggregate = e
		r.modifier.AddReifiedEntity(&e)
	}
	// Now we need to send the aggregate to a modifier to actually modify the code...
	wg.Done()
	return nil
}
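
generateName isn’t shown here, but judging by the generated names that show up later (AggregatePos477 and friends), it’s roughly “Aggregate” plus the field name plus a random numeric suffix:

// My guess at generateName, based on the names it produces.
func generateName(fieldName string) string {
	return fmt.Sprintf("Aggregate%s%d", fieldName, rand.Intn(10000))
}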


aggregationModifier

I should note that I did not implement this as per the paper’s definition…the paper specifies that the modifier should modify the simulation to make existing models aware of the new types being created. I’m not really doing that at all, so my poor man’s version of the modifier is basically handling the code generation for the new types only.

Up next we run the modifiers that are associated with each reifier (in this case just aggregationModifier):

func (m *aggregationModifier) Do(wg *sync.WaitGroup) error {
	for _, re := range m.reifiedEntities {
		newEmergentTypeName := re.Name()

		fName := fmt.Sprintf("%s.go", newEmergentTypeName)
		newEmergentTypeFullPath := fmt.Sprintf("%s/%s", m.agentLocationDir, fName)
		
		var aggregate *world.Aggregate
		t := reflect.TypeOf(aggregate).Elem()
		s := m.implementInterface(newEmergentTypeName, "aggregate", t)

		err := ioutil.WriteFile(newEmergentTypeFullPath, []byte(s), 0777)

		if err != nil {
			fmt.Println(err)
		}

		f := lib.OpenFile(newEmergentTypeFullPath)
		implementFields(f)
		m.implementInit(f, re.Name())
		m.implementQualify(f, re.Name())
		m.implementDestroy(f, re.Name())
		m.implementAddElement(f, re.Name())
		m.implementSetElements(f, re.Name())
		m.implementElementKeys(f, re.Name())
		m.implementRemoveElement(f, re.Name())
		m.instantiateQualifier(re.Name(), re.Qualifiers())
		addImports(f, []string{"sync"})
		f.Close()
	}
	wg.Done()
	return nil
}

So first we create a file name for the new type and put it in an appropriate location for that type back in the bootstrapped simulation directory. Then we get the Aggregate type via reflection (remember Aggregate in our case is a metamodel that is implemented as an interface). Here is what the Aggregate interface looks like:

// This is a meta model
type Aggregate interface {
	Init(w *Grid)
	GetId() int
	SetId(id int)
	GetAge() int
	GetName() string
	GetPos() Pos
	Getlock() *sync.RWMutex
	GetSize() (int, error)
	AddElement(e interface{})
	RemoveElement(e interface{})
	GetElements() map[string]*Listener // Calling this "elements" because these do NOT Have to be Agents.
	SetElements(elements []interface{})
	ElementKeys() []interface{}
	GetQualifiers() []Qualifier
	SetQualifiers(q []Qualifier)
	Getworld() *Grid
	Qualify(uid interface{}) bool
	Destroy()
}

Note that everything here is exported, but when we implement this interface not all the fields will be. Anything with a Get prefix will be created as a field in the emergent struct which implements this interface; the field name will be everything following the Get. So methods like GetId() int will create an exported Id int field in the struct, while methods like Getlock() *sync.RWMutex will create an unexported lock *sync.RWMutex field. To be honest, in my haste I ended up using this as a way to have my modifier implement unexported fields that will never actually need their Get method; I knew it did what I wanted and didn’t want to specify the fields to implement in some other way at that point. For example, I’ll never actually use Getworld() anywhere…I will use aggregate.world; specifying Getworld() in the Aggregate interface simply makes my modifier add a world field.

Anything with a Set prefix will result in a field of that name being assigned in the method body. E.g. SetId(id int) will have a body that does something like newAggregateType.Id = param0.
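
Put together, the generated output for a Get/Set pair plus the unexported lock field comes out shaped roughly like this (illustrative; not the generator’s literal output):

// Illustrative output for the Id field and the unexported lock field.
type AggregatePos477 struct {
	Id   int
	lock *sync.RWMutex
}

// Get methods read the field their suffix names...
func (a *AggregatePos477) GetId() int { return a.Id }

func (a *AggregatePos477) Getlock() *sync.RWMutex { return a.lock }

// ...and Set methods assign the generated param0 to it.
func (a *AggregatePos477) SetId(param0 int) { a.Id = param0 }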

All of this is done via that first method we run up above, implementInterface(). It isn’t pretty. From here on we start generating Go source in all kinds of fun ways. Here is a taste of what’s happening, but you can also look at the actual source on Gitlab. There is so much there and so much I’d like to change, but frankly at this point after finishing this summary I just want to go back to working on snails for a while.

func (m *aggregationModifier) implementInterface(name, pkgName string, itype reflect.Type) string {
	var lines []string
	// Package clause first; the struct type gets appended after any imports
	l1 := fmt.Sprintf("package %s", pkgName)
	l2 := fmt.Sprintf("type %s struct {}", name)

	lines = []string{l1}
	imports, s := implementMethods(itype, name)
	if len(imports) > 0 {
		lines = append(lines, "import (\n")
		lines = append(lines, imports)
		lines = append(lines, ")\n")
	}
	lines = append(lines, l2)
	lines = append(lines, s)

	return strings.Join(lines, "\n")
}

func implementMethods(itype reflect.Type, name string) (string, string) {
	var imports, inparams, outparams string
	var s string
	// Find what we need to implement
	for i := 0; i < itype.NumMethod(); i++ {
		var ret, body string
		m := itype.Method(i)
		inparams, imports = processInParams(m, imports)
		outparams, imports, ret, body = processOutParams(m, imports)

		s += fmt.Sprintf(`

	func (a *%s) %s(%v) %v {
		%s
		%s
	}

`, name, m.Name, inparams, outparams, body, ret)

	}
	return imports, s
}
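
processInParams and processOutParams aren’t shown here, but the parameter half presumably goes something like this sketch (the param0, param1, ... naming matches the convention mentioned earlier; the body is my guess):

// Sketch: name the parameters param0, param1, ... and collect the
// imports their types come from.
func processInParams(m reflect.Method, imports string) (string, string) {
	var params []string
	for i := 0; i < m.Type.NumIn(); i++ {
		t := m.Type.In(i)
		// Walk through pointers so *sync.RWMutex still yields "sync".
		base := t
		for base.Kind() == reflect.Ptr {
			base = base.Elem()
		}
		if pkg := base.PkgPath(); pkg != "" && !strings.Contains(imports, pkg) {
			imports += fmt.Sprintf("\t%q\n", pkg)
		}
		params = append(params, fmt.Sprintf("param%d %s", i, t.String()))
	}
	return strings.Join(params, ", "), imports
}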

After the new struct fulfils the requirements of the interface, we refine it some more by implementing specific methods as needed. Like a method to get all member element keys, for example:

func (m *aggregationModifier) implementElementKeys(f *os.File, name string) {
	method := fmt.Sprintf(`func (a *%s) ElementKeys() []interface{}  {
		var keys []interface{}
		a.lock.RLock()
		for k := range a.Elements {
			keys = append(keys, k)
		}
		a.lock.RUnlock()
		return keys
	}
`, name)
	implementMethod(f, "ElementKeys", method)
}

func implementMethod(f *os.File, name, body string) {
	// We will overwrite existing method if it already exists
	// Otherwise add a new method
	var newContents string
	var rewroteExisting bool
	f.Seek(0, 0)
	s := bufio.NewScanner(f)
	var nestingLevel int

	for s.Scan() {
		l := s.Text()
		if err := s.Err(); err != nil {
			fmt.Println(err)
		}
		trimmedL := strings.TrimSpace(l)
		fName := getFuncName(trimmedL)
		if fName == name {
			newContents += "\n" + body
			rewroteExisting = true
			nestingLevel = updateNestingLevel(trimmedL, nestingLevel)
		} else {
			if nestingLevel > 0 {
				nestingLevel = updateNestingLevel(trimmedL, nestingLevel)
			} else {
				newContents += "\n" + l
			}
		}
	}
	if !rewroteExisting {
		newContents += "\n" + body
	}

	f.Truncate(0)
	f.Seek(0, 0)
	f.WriteString(newContents)
	f.Sync()
}
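
updateNestingLevel presumably boils down to naive brace counting, which is enough to tell the scanner when it has skipped past the old method body (though it would be fooled by braces inside string literals). A sketch:

// Sketch: track how deep we are by counting braces per line.
func updateNestingLevel(line string, level int) int {
	level += strings.Count(line, "{")
	level -= strings.Count(line, "}")
	return level
}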

We also need to handle adding new imports and such as needed, all of which is done in a generic modifier.go.
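
The import-merging part has roughly this shape (a sketch assuming the generated file already contains an import ( block, which implementInterface writes whenever any method needs one):

// Sketch: merge missing imports into the generated file in place.
func addImports(f *os.File, pkgs []string) {
	f.Seek(0, 0)
	b, _ := ioutil.ReadAll(f)
	src := string(b)
	for _, p := range pkgs {
		quoted := fmt.Sprintf("%q", p)
		if strings.Contains(src, quoted) {
			continue // already imported
		}
		// Splice the new import into the existing import ( block.
		src = strings.Replace(src, "import (", "import (\n\t"+quoted, 1)
	}
	f.Truncate(0)
	f.Seek(0, 0)
	f.WriteString(src)
	f.Sync()
}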

At the end of this whole process, our qualifier types are added to main.go, into a slice (there goes that extensional definition the paper discourages so much again). When running the first bootstrapped version of this, the qualifiers slice is empty. Now it might contain something like this:

var qualifiers = []world.Qualifier{
	&aggregate.AggregateQualifier{FieldName: "Id", MinDist: -1.000000, MaxDist: 0.000000, MinQuant: -1, MaxQuant: -1, TypeToInstantiate: reflect.TypeOf(aggregate.AggregateId456{})},
	&aggregate.AggregateQualifier{FieldName: "Pos", MinDist: -1.000000, MaxDist: 1.000000, MinQuant: -1, MaxQuant: -1, TypeToInstantiate: reflect.TypeOf(aggregate.AggregatePos477{})},
	&aggregate.AggregateQualifier{FieldName: "Age", MinDist: -1.000000, MaxDist: 0.000000, MinQuant: -1, MaxQuant: -1, TypeToInstantiate: reflect.TypeOf(aggregate.AggregateAge4886{})},
}


The new type

What we end up with in the end is a new source file like this one: AggregatePos7473.go. There’s lots of unsafe stuff here…and if I ever come back to this there are lots of things to fix. But I don’t know if I will, so right now we are where we are.


Qualifier

The Qualifier is an existing metamodel and has an existing AggregateQualifier model. This is currently not modified by the simulation at all - all our ORM Intentionaliser does is instantiate them. The job of the qualifier is to a) instantiate new aggregates as needed and b) qualify agents against each existing aggregate’s membership requirements.
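
Part a) leans on the qualifier’s TypeToInstantiate field: a new aggregate can be conjured up via reflection, roughly like this sketch (the actual instantiation code isn’t shown in this post):

// Sketch: build a fresh aggregate from a qualifier's TypeToInstantiate.
func instantiate(q *aggregate.AggregateQualifier, w *world.Grid) world.Aggregate {
	// reflect.New returns a pointer to a zero value of the generated
	// type, which is what satisfies the Aggregate metamodel.
	a := reflect.New(q.TypeToInstantiate).Interface().(world.Aggregate)
	a.Init(w)
	return a
}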

As for b): when aggregate.Qualify() runs for each aggregate each tick, the aggregate checks whether the provided agent uid is in close enough proximity to any existing element in the aggregate to satisfy membership requirements:

func (q *AggregateQualifier) QualifyAgent(a1uid, a2uid interface{}) bool {
	q.world.Lock.RLock()
	a1 := q.world.Population[a1uid.(string)]
	a2 := q.world.Population[a2uid.(string)]
	q.world.Lock.RUnlock()
	if a1 == nil || a2 == nil {
		return false
	}
	proximity := GetProximity(a1, a2, q.FieldName)

	if q.MaxDist == -1 || proximity.Distance <= q.MaxDist {
		if q.MinDist == -1 || proximity.Distance >= q.MinDist {
			return true
		}
	}
	return false
}
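
For the generated types, Qualify() is then a wrapper around that check: the agent qualifies if it is “near” any current element under any of the aggregate’s qualifiers. Roughly (a sketch; the qualifiers field name is assumed):

// Sketch of a generated Qualify method (field name assumed).
func (a *AggregatePos477) Qualify(uid interface{}) bool {
	for _, q := range a.qualifiers {
		for _, key := range a.ElementKeys() {
			if q.QualifyAgent(uid, key) {
				return true
			}
		}
	}
	return false
}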


Restarting after source modification

I have a buildandrun.sh which runs at the end of the main function if a certain const tells it to:

#!/bin/bash -x

echo "building and running roee"
echo "Current dir" $PWD
go version

echo "running gofmt"
gofmt -s -w lib

echo "compiling"
cd cmd/roeesim;
go build

GOBIN=$GOPATH/bin go install

roeesim & disown
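
The hook at the end of main is conceptually something like this (the const name and the exec details are my assumptions; all the post relies on is that a const gates the restart):

// Sketch of the restart hook at the end of main (names assumed).
const rebuildAfterRun = true

func maybeRebuild() {
	if !rebuildAfterRun {
		return
	}
	// Kick off the rebuild-and-restart script, then let this process exit.
	cmd := exec.Command("./buildandrun.sh")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
	}
}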


Visualizing the results

There’s a small web application that I can run to visualize all of the json data I dump out, called roeeweb. It is a virtually unstyled super ugly page that just lets me go through the ticks in a run and click on a cell to see what agent is there and what aggregates it belongs to. This lets me see agents leave and enter aggregates, new aggregates being created, etc.

(Screenshot of roeeweb)


func main() {
	r := newRouter()

	http.Handle("/", r)
	http.ListenAndServe(":8081", nil)

}

func newRouter() *mux.Router {
	r := mux.NewRouter()
	r.StrictSlash(true)

	r.PathPrefix("/static/").Handler(
		http.StripPrefix("/static", http.FileServer(http.Dir(getProjectRootPath() + "/html/static/"))),
	).Methods("GET")

	r.PathPrefix("/data/").Handler(
		http.StripPrefix("/data", http.FileServer(http.Dir(getProjectRootPath() + "/lib/orm/data/"))),
	).Methods("GET")

	tmpl := template.Must(template.ParseFiles(getProjectRootPath() + "/html/index.html"))

	files := lib.GetDataTickCounts(getProjectRootPath() + "/lib/orm/data")
	r.HandleFunc("/", func(rw http.ResponseWriter, req *http.Request) {
		varmap := map[string]interface{}{
			"datasets": files,
		}
		tmpl.ExecuteTemplate(rw, "index", varmap)
	})

	// The server itself is started back in main().
	return r
}


func getProjectRootPath() string {
	_, b, _, _ := runtime.Caller(0)
	folders := strings.Split(b, "/")
	folders = folders[:len(folders)-3]
	path := strings.Join(folders, "/")
	return path
}


Conclusion

There isn’t really a conclusion. I messed around with a few of the concepts described in the original paper. It was fun and I think it did help me understand the paper better. You know when you doodle on a piece of paper, or maybe fidget with a paperclip, while thinking? That’s kind of what this project was - fidgeting. I kind of thought about adding instruction mutation and a few other things, because if you actually run this you’ll notice a pattern of disappointing non-action after a few ticks as the delete-instruction agents take over and settle in a checkerboard pattern ;) Now that would be a nice thing for an automated observer to recognize!

I didn’t even scratch the surface of the paper in terms of implementation, but that’s ok. I ended up with a bit of a monstrosity that modifies and runs itself. But it was fun and now I’m ready to go back to my snails!

