IJCAI Session Notes: Rebel Agents

What follows are notes I took during two talks at the IJCAI Goal Reasoning workshop, both given by the same speaker and both about rebel agents. The talks were short and fast-paced, so I missed a lot of the information I wanted to record; some parts are obviously omitted, and more than usual was written from memory today, increasing the potential for inaccuracies or misinterpretations. Overall, my impression is that “rebel agent” is, in today’s climate, a somewhat inflammatory term for a very ethical trait: arguably, any agent in a non-trivial domain would have to be a rebel agent, able to go against its instructions, in order to be an ethical agent.

The Ideal Rebellion: Maximizing Task Performance in Rebel Agents

By Dustin Dannenhauer, Postdoctoral Fellow of the National Research Council

What are Rebel Agents?

Rebel agents are agents which may reject, revise, or protest their goals, plans, or actions.

Several articles related to rebel agents were mentioned; their common theme is recognizing the importance of an agent being able to say “no”.

When should an agent say “no”?

Rebellion may be warranted when:

  1. the agent and the human have differential access to information
  2. the agent is over-subscribed: it has been given more goals (by many people) than it can pursue, and must choose among them
  3. the goal creates an ethical conflict
  4. the goal is unachievable, an impasse (e.g. due to resources)
  5. the requested goal falls outside of the agent’s tasks (a task violation)
  6. the goal raises safety concerns

Explainability

In user studies, a robot that could explain to a human, using theory of mind, why it was deviating from expectations (“I think you forgot, we need to go left and not right!”) was deemed to be more intelligent [presumably than a robot that could not explain? I think I missed part of the notes here].

There are a few papers/articles about how a robot can effectively express to a human that it refuses to perform a task.

Speech directives for rebelling.

There are five felicity conditions for rebellion and dialogue response (a sketch of how an agent might check them follows the list):

  1. Knowledge: Do I know how to do x?
  2. Capacity: Am I physically able to do x now? Am I normally physically able to do x?
  3. Goal priority and timing: Am I able to do x right now?
  4. Social role and obligation
  5. [Unfortunately did not catch the last one]
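
A minimal sketch of how an agent might run these checks before accepting a directive. Everything here is an assumption for illustration: the predicate names, the stub agent, and the refusal wording are mine, not from the talk or from any published system.

    from dataclasses import dataclass

    @dataclass
    class Directive:
        task: str
        issuer: str

    class StubAgent:
        """Toy agent whose condition checks are hard-coded for the demo."""
        def knows_how(self, task): return True
        def physically_able(self, task): return True
        def can_schedule(self, task): return task != "patrol now"
        def issuer_authorized(self, issuer): return issuer == "operator"

    def check_felicity(agent, directive):
        """Return (accept, response); the first failed condition yields a refusal."""
        checks = [
            (agent.knows_how(directive.task),                # 1. knowledge
             "I don't know how to do that."),
            (agent.physically_able(directive.task),          # 2. capacity
             "I'm not physically able to do that."),
            (agent.can_schedule(directive.task),             # 3. priority/timing
             "I can't do that right now."),
            (agent.issuer_authorized(directive.issuer),      # 4. social role
             "You aren't authorized to ask that of me."),
            # condition 5 (not captured in my notes) would be checked here too
        ]
        for ok, refusal in checks:
            if not ok:
                return False, refusal
        return True, "OK."

    print(check_felicity(StubAgent(), Directive("patrol now", "operator")))
    # -> (False, "I can't do that right now.")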

Speaker’s work:

  1. Extended the MIDCA (Metacognitive Integrated Dual-Cycle Architecture) to handle rebellion and multi-agent scenarios
  2. Empirically demonstrated when the rate of rebellion improves performance, according to different metrics
  3. Built a simulated multi-agent drone domain: plant protection

Plant Protection Domain

Rebellion in MIDCA

An agent is given a goal to destroy an invasive species, but what if it sees a native plant at the location? The agent recognizes that it may rebel against that goal. In the cycle, the agent observes the world, perceives and processes its observations, interprets any instructions, evaluates its current plans and checks its goals, compiles its intentions, plans to achieve the current goal, acts and speaks, and loops back to perceiving its environment.
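
A rough skeleton of that cycle, assuming hypothetical agent and world interfaces; none of these method names come from MIDCA itself.

    def run_cycle(agent, world):
        """One pass per iteration through the phases described above."""
        while agent.active:
            percepts = world.observe()      # observe the world
            agent.interpret(percepts)       # perceive/process, interpret instructions
            agent.evaluate()                # evaluate current plans, check goals
            # a rebellion check, as described below, would fire here when an
            # adopted goal turns out to conflict with something the agent values
            goal = agent.intend()           # compile intentions into a current goal
            plan = agent.plan(goal)         # plan to achieve the current goal
            agent.act(plan)                 # act and speak, then loop back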

If the agent identifies the need for rebellion (e.g. it spots a native plant that may be killed in the process of spraying an invasive plant), it informs the operator that the goal it was given is undesirable. If the operator disagrees with the agent, the agent may decide to rebel and drop the goal. The agent’s propensity for rebellion can be adjusted, as can the operator’s resolve.
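
One way those two dials might interact, treating both the agent’s propensity and the operator’s resolve as probabilities. That reading is my assumption; the talk did not specify the actual mechanics.

    import random

    def rebellion_exchange(propensity, resolve, goal, rng):
        """One negotiation over a goal the agent has flagged as undesirable."""
        # The agent has detected a problem (e.g. a native plant in the spray
        # area) and informs the operator that the goal is undesirable.
        if rng.random() >= resolve:
            return f"operator withdraws {goal!r}"
        # The operator insists; the agent rebels with probability = propensity.
        if rng.random() < propensity:
            return f"agent rebels and drops {goal!r}"
        return f"agent complies and executes {goal!r}"

    rng = random.Random(0)
    print(rebellion_exchange(propensity=0.8, resolve=0.5,
                             goal="spray plot (3, 4)", rng=rng))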

Results: the more the agent rebels, the more native plants it saves, but the fewer invasive plants it removes. The higher the operator’s resolve, the more invasive plants are removed, but more native plants are destroyed as well.

Perhaps rebellion should be treated as a metacognitive process; we want a metagoal. For example, the goal may be “remove invasive plant” and the metagoal may be “rebel against that goal.” The MIDCA architecture separates the cognitive layer from the metacognitive layer, and rebellion may belong in the metacognitive layer.
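
One way to picture that separation: a metagoal is just a goal whose object is another goal. A toy representation (mine, not MIDCA’s actual one):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Goal:
        predicate: str              # e.g. "removed"
        args: tuple                 # e.g. ("invasive-plant-7",)

    @dataclass(frozen=True)
    class MetaGoal:
        operation: str              # e.g. "rebel-against"
        target: Goal                # the cognitive-level goal it is about

    goal = Goal("removed", ("invasive-plant-7",))
    metagoal = MetaGoal("rebel-against", goal)
    print(metagoal)   # MetaGoal(operation='rebel-against', target=Goal(...))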

Explaining Rebel Behaviour in Goal Reasoning Agents

As discussed earlier, rebellion will be important for agents (both embodied and digital) interacting with humans.

Some reasons for possible rebellion were recapped: differential information access; over-subscription (the agent is given more goals, by many people, than it can pursue, so it must choose only some); ethical conflict; impasse (the goal is not achievable, e.g. due to resources); task violation (the requested goal may be outside of the agent’s tasks); and safety.

The need for explanation

When an agent rebels, its behaviour is likely to be unpredictable.

Any agent that can rebel should be able to explain itself.

Desire from lawmakers: individuals affected by automated decision-making have a right to an explanation; see the “right to explanation” (GDPR).

We need explainable goal reasoning for rebel agents. Goals are explicit structures, and agents can perform operations on them; they can change or refine their goals.
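
As a concrete (and entirely assumed) illustration of goals as explicit structures, here is a toy goal agenda with change and refine operations. Each operation also leaves a record, which is exactly the raw material an explanation needs.

    class GoalAgenda:
        """Toy explicit-goal store; names and record format are assumptions."""
        def __init__(self):
            self.goals = []
            self.log = []           # record of goal operations, for explanation

        def add(self, goal):
            self.goals.append(goal)
            self.log.append(f"added {goal!r}")

        def drop(self, goal, reason):
            self.goals.remove(goal)
            self.log.append(f"dropped {goal!r} because {reason}")

        def refine(self, goal, subgoals):
            self.goals.remove(goal)
            self.goals.extend(subgoals)
            self.log.append(f"refined {goal!r} into {subgoals!r}")

    agenda = GoalAgenda()
    agenda.add("deliver package to loc-9")
    agenda.refine("deliver package to loc-9",
                  ["fly to loc-9 avoiding water", "release package"])
    print(agenda.log)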

Rebel agents will need to explain themselves to develop trust and understanding, because they’ll be unpredictable. Rebel agents motivate the need for explainable goal reasoning, which is an under-explored area, especially in human-agent teaming environments.

Some examples were provided in the form of an autonomous delivery drone. Suppose the operator tells the drone to deliver a package at a certain location. The operator cannot see what the drone sees while it is en route, and does not necessarily know all of the agent’s underlying goals. If the drone encounters a body of water and its instructions are to avoid water where possible, it might take a longer route to the delivery location. The operator might notice that the drone went around the body of water instead of flying over it; they should be able to ask questions to find out why the drone did this, and get an understandable explanation back. The kind of explanation the agent must be capable of depends on the audience: a drone operator may get more use out of a technical explanation than a customer would.
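
A sketch of how such audience-dependent explanations might be produced from a decision trace. The trace format and the wording are assumptions made for illustration.

    # The agent logs why it deviated; the same record is rendered differently
    # for a technical operator and for a customer.
    decision_trace = [
        {"event": "detour",
         "rule": "avoid-water-where-possible",
         "detail": "planned path crossed a lake at (12, 40); replanned, +300 m"},
    ]

    def explain(trace, audience):
        for step in trace:
            if audience == "operator":      # technical audience
                yield f"rule {step['rule']} fired: {step['detail']}"
            else:                           # customer-level answer
                yield "I took a slightly longer route to stay away from water."

    print(list(explain(decision_trace, "operator")))
    print(list(explain(decision_trace, "customer")))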
