Planning and Situation; Goals, Responses, and Purpose
One thing that has emerged in recent thought experiments about the simulation of characters is that the plan-based model of behavior is deeply problematic. Much of the time, characters do not have plans. I would argue, furthermore, that people don't have plans either, and that a plan is generally something interpreted from everyday behavior rather than something at the root of it. AI is big on planning: most AI research on simulated characters focuses heavily on character plans. What makes this even more dubious is that character plans are always rational.
I want to look at an alternative model to planning, which is situation. Then I want to look at something embedded deeper in plans, which is the notion of goals, and take the critique of planning even further by extending it to goals themselves. While much of character (and human) behavior is goal oriented, a great deal of behavior is without goals and is simply reactive. Emotional response tends to be without goals, as does casual conversation. The notion of a goal also fails to account for motivation: a condition might be a character's goal, but that says nothing about the driving motivation behind it. To account for this, I will introduce a subtle variant: purpose.
Plans are fallacious as a general model of behavior because they impose a hierarchical structure on thought and fail to account for human versatility. Furthermore, plans have tremendous difficulty modeling very straightforward situations, especially those involving communication and interaction. Plans do not account for a great deal of contextual or situated behavior, the importance of which has been stressed in recent work in cognitive science.
An alternative to planning incorporates elements of plans into an agent's state. The matter is no longer one of top-down organization but of bottom-up emergent behavior. Characters and people are not entirely reactive, but our behaviors and modes of action are largely context dependent. This is especially true of social interaction: there is a particular code of conduct an agent should abide by in a meeting, as opposed to at dinner or while walking down the street. The situated nature of behavior removes planning's demand for a single root goal that informs all other behaviors. Instead, in my situated model, the task a plan would specify becomes part of the agent's state. Metaphorically, the difference is that instead of action happening at the character's mind, it originates in the character's identity. A character who is a student has the goal "graduate," but this is a long-term goal, not considered at every decision; it is part of the character's being. "Going to the grocery store" is a similar state, one that informs later actions but does not prevent the character from stopping for coffee or having a conversation.
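As a rough sketch of what this situated model might look like in code (every name here, Situation, Agent, propose_actions, is a hypothetical illustration, not an existing system), long-term commitments sit in the agent's state alongside the current situation, and candidate actions emerge from both rather than from a plan tree:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """The agent's current context: a meeting, dinner, the street."""
    name: str
    afforded_actions: list  # behaviors this context makes appropriate

@dataclass
class Agent:
    # Long-term commitments ("graduate", "go to the grocery store") live
    # in the agent's state, as part of its identity, rather than at the
    # root of a plan hierarchy.
    background_states: list
    situation: Situation

    def propose_actions(self):
        """Bottom-up selection: the situation affords actions, and
        background states contribute candidates without preempting them."""
        return list(self.situation.afforded_actions) + list(self.background_states)

street = Situation("street", ["stop for coffee", "chat with a friend"])
student = Agent(["head toward the grocery store"], street)
print(student.propose_actions())
# ['stop for coffee', 'chat with a friend', 'head toward the grocery store']
```

The point of the sketch is only that nothing dictates a single root goal: the errand and the street's affordances coexist as peers in the agent's state, which is why stopping for coffee requires no plan revision at all.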
Planning has a deeper anchor in the notion of goals. The first mistake is to assume that goals are the driving force behind all behavior. This is simply false. Often, people, and characters in particular, act at a purely automatic or emotional level, responsively. Goals implicitly endorse the rationality of human behavior: without rationality, goals could never be met, and without goals, rationality would be meaningless. No character is ever fully rational, though. People may act against their own goals without even knowing that the action is harmful. What motivates the action, then? Good examples are situations in which a character is in an explicitly "irrational" state, such as intoxication or being "overwhelmed" with emotion; but it is hard to imagine that anyone is ever truly rational at any other time, either. One potential response from the AI perspective is to devise different standards of rationality.
Changing the standard of rationality is a step in the right direction, but it does not account for some other situations, particularly the relative ease with which people respond to emotions or carry on conversations. I doubt that what takes place in these situations is rapid revision of goals and intentions; rather, some behaviors and states are induced naturally by circumstance, without the character ever needing to formulate a goal explicitly.
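The simplest way to picture circumstance-induced behavior is as a direct mapping from stimulus to response, with no goal formulated anywhere in between. The tiny table below is purely illustrative (the entries and names are my assumptions, not a proposed architecture):

```python
# Hypothetical stimulus-to-response table; entries are illustrative only.
REACTIVE_RESPONSES = {
    "greeted": "return the greeting",
    "insulted": "flare with anger",
    "asked a question": "answer casually",
}

def respond(stimulus):
    # No goal is formulated and no intentions are revised: the behavior
    # is induced directly by the circumstance, or not at all.
    return REACTIVE_RESPONSES.get(stimulus)

print(respond("greeted"))  # 'return the greeting'
```

A real character would need something richer than a lookup table, but the shape is the point: the circumstance itself selects the behavior, which is why responding to a greeting feels effortless in a way that deliberate planning never could.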
While two characters may have the same goal at a given moment, they may not have the same purpose, and the difference in purpose tells a great deal about how each character's actions will be executed. Consider, for instance, the airplane safety checklist a pilot runs through before takeoff. The immediate goal of the pilot's actions is certainly to perform the check correctly and respond appropriately. However, the deeper implications of that goal are less clear. For whom is the pilot performing the check? Is it out of genuine concern for safety? Is it to satisfy the safety check ritual for the sake of regulations? Many different theories could explain the form of the pilot's actions: ritual, performance, practice, directed action, and so on. The immediate goal is the same, but the purpose, the contextual goal, may differ. Purpose transcends goal, involves meaning, and dismantles the discrete, abstract character of goals. Purpose is intrinsically situated and linked to identity rather than to the symbolic mind alone.
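To make the goal/purpose distinction concrete, here is a minimal sketch of the pilot example, assuming a hypothetical Pilot type of my own invention: both pilots pursue the identical goal (complete the checklist), but purpose colors the manner of execution.

```python
from dataclasses import dataclass

CHECKLIST = ["flaps", "fuel", "instruments"]

@dataclass
class Pilot:
    purpose: str  # e.g. "genuine safety concern" or "satisfy regulations"

    def run_check(self):
        # The goal is the same for every pilot: work through the checklist.
        # The purpose determines how the same actions are carried out.
        style = ("inspect carefully" if self.purpose == "genuine safety concern"
                 else "tick off quickly")
        return [f"{style}: {item}" for item in CHECKLIST]

print(Pilot("genuine safety concern").run_check())
print(Pilot("satisfy regulations").run_check())
```

Note that purpose here is not another goal stacked above the first; it is a property of the pilot's identity that shapes execution, which is exactly why it resists the discrete, symbolic treatment that goals invite.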