
Memory-Based Opportunistic Reasoning


Anthony G. Francis, Jr.

College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280
centaur@cc.gatech.edu

Advisor

Ashwin Ram

Thesis Proposal Committee

Ashwin Ram, Janet Kolodner and Kurt Eiselt

Abstract

The real world is dynamic and independent, presenting agents acting in it with new opportunities on its own schedule, not theirs. This dynamism poses the challenging problem of how to make an intelligent agent responsive to changes in the dynamic world that it lives in, even though those changes may not directly correspond to its current reasoning goals. The solution is to make the agent's reasoning memory-driven, allowing it to be reactive and opportunistic while taking advantage of its past experience. Because retrievals can now occur at any time, the agent's reasoning system needs integration mechanisms to incorporate remindings upon demand; however, because the amount of information provided by memory is potentially unbounded, the agent must also incorporate some form of utility-based control. We propose a theory of memory-based opportunistic reasoning to address the problem of opportunistic multistrategy reasoning in an intelligent agent. Memory-based opportunistic reasoning uses a utility-based metacontroller to integrate a context-sensitive, asynchronous memory system with a flexible, responsive reasoning system. This architecture provides the agent with the ability to generate spontaneous and opportunistic remindings and to take advantage of them when it is useful to do so. To validate this theory, we have developed a comparative algorithmic framework to test the theory under various conditions; this framework can be used not only to improve the reasoner by providing the metacontroller with information about the characteristics of its submodules and their utility, but also to evaluate the entire system to determine under what conditions the memory and reasoning systems as a whole are useful.

1. The Hunter and the Tiger

A cautionary tale about being an opportunistic agent in a dynamic world.


A hunter crests a hill. His rounds have been long and the sun is hot, and the emptiness in his belly is almost as bad as the dryness of his throat. On the way to the stream, he sees a rabbit caught in one of his traps, and decides to detour for a much-needed meal.

The real world is dynamic and independent: real world domains change independently of the actions and reasoning of agents operating within them. Many researchers now believe that the task of building intelligent agents in real-world domains requires that they be situated: the constraints of their reasoning should be tailored to the domains they function in. In particular, we maintain that one of the key requirements of a situated agent is opportunism: the ability of an agent to alter a pre-planned course of action to pursue new goals, based on some change in the environment.


As the hunter steps up to the rabbit, a tiger leaps from the bush, angered that anyone would dare to interrupt its meal. The hunter turns and runs, the tiger gaining on him rapidly.

Opportunities represent changes to the world which require a response on the part of the agent outside of the scope of its current plans. Opportunistic behavior rests on three capabilities. First, agents must be able to recognize opportunities. Second, agents must be able to suspend or modify their current goals in order to pursue the opportunity. While these two capabilities are sufficient to produce a minimal level of opportunism (Birnbaum 1990), a situated agent needs a third capability: it must be able to decide whether or not it should pursue the opportunity, given the context and importance of its reasoning.


Quickly, the hunter reaches the stream and leaps over it; somehow, the opportunity to take a drink seems less enticing.

Traditional opportunistic reasoning systems (e.g., Hammond 1989, Simina & Kolodner 1995) depend on cues from the environment to satisfy the preconditions of suspended goals. In other words, traditional systems use externally generated opportunities relevant to some problem that the agent is pursuing. However, this fails to take advantage of the full potential of opportunistic reasoning: internally generated opportunities may provide the greatest potential improvement in an agent's performance (e.g., Ram 1989, 1991, Ram & Cox 1994, Ram & Hunter 1992).


Suddenly, the hunter remembers a cache of weapons he has hidden nearby, and veers towards a canyon. Sharp teeth and fangs aside, the tiger is doomed now.

Any change to an agent's memory, whether caused by an internal or an external event, may be potentially relevant to any goal or process maintained by the agent. This potentially unconstrained coupling arises because an agent cannot completely specify the conditions under which a piece of knowledge may be relevant; therefore, in contrast to traditional models, we claim that opportunities should be retrieved not on the basis of preconditions for action but instead on remindings of relevant situations.

As an example, consider the hunter and his weapons cache. Previous changes to the world - the trapped rabbit and the irate tiger - provided external cues which were directly relevant to suspended goals held by the hunter. Responding to the opportunity simply required reactivating the blocked goal - in the case of the rabbit, the goal to obtain food; in the case of the tiger, the preservation goal to avoid the jaws of death. Remembering the cache, on the other hand, provides an opportunity to go on the offensive - which is unlikely to be relevant to any suspended goal maintained by the hunter. The tiger is not a common prey animal, and it's unlikely that the hunter ever directly attempted to attack one before (discretion being the better part of valor); therefore, any experiences the hunter has had with the use of his weapons are likely to require a great deal of adaptation. Instead of unblocking a suspended goal, the canyon reminding provides an opportunity to reformulate the problem of avoiding the jaws of death. Unlike the external changes, which enabled actions that the hunter already wished to perform, the internal reminding enabled a reasoning mechanism which radically changed the course of the hunter's activity. We claim that it is precisely this internal relevance to the reasoning mechanisms of an agent that makes reminding a powerful tool for opportunistic reasoning.

We propose a theory of how an agent operating in a complex, changing environment could flexibly and continuously remember the past and use those remindings opportunistically to chart a course of processing. This theory has three components: a theory of memory which allows anytime and asynchronous retrieval based on the agent's context, a theory of case-based problem solving that uses integration mechanisms to incorporate new information at any point during problem solving, and a theory of metacontrol based on a utility-based search through a space of task networks.

The theory is implemented in a system called NICOLE, which is divided into three sub-modules: the memory system MOORE, the case-based reasoner MPA, and the metacontroller TASKSTORM. To evaluate this theory, we propose a methodology based on computational models of artificial intelligence systems which allows both algorithmic and performance analyses of the systems in question. This methodology supports the evaluation of the NICOLE system over a wide space of design decisions, domain characteristics and problem distributions, allowing us to evaluate both the system as a whole and the contributions of its various components to overall performance.

In this proposal, we will outline a theory of memory-based opportunistic reasoning. Section 2 discusses the characteristics of real-world domains that require opportunistic reasoning, explores the requirements of opportunistic reasoning, and emphasizes the importance of internal remindings. Section 3 proposes the theory of memory-based opportunistic reasoning itself and outlines its implementation in the NICOLE system. Section 4 explores the implications of memory-based opportunistic reasoning on a theory of memory and discusses its implementation in MOORE. Section 5 explores the impact of memory-based opportunistic reasoning upon a theory of reasoning itself and discusses an instantiation of that theory in the MPA system. Section 6 explores how the previous theories impact a theory of control, and discusses its implementation in TASKSTORM. Section 7 proposes the computational model methodology and discusses how it can be used to evaluate this theory. Section 8 discusses the future directions of this research in theory, implementation, and evaluation. Finally, Sections 9 and 10 review the contributions of this research and present some conclusions.

And now, we return to our story...


The hunter whips out a can of paint from his weapons cache, deftly drawing a hole in the canyon wall beside him and darting inside. The tiger runs after him, smacking into the painted hole full tilt and flattening out instantly like a giant pancake. After a few seconds, the flattened tiger peels off of the cliff face and flutters away in the afternoon breeze.

What, did you think I was going to kill the tiger?

2. Real-World Domains

An introduction to reasoning opportunistically in the real world.

2.1 The Dynamic World

Question: What are the characteristics of the real world relevant to building an intelligent agent?

Answer: The real world is dynamic.

The real world is rarely as simple as the blocks-world domains early AI systems played in. The real world shrouds its complete state from an agent's sensors; it changes stealthily and independently of an agent's expectations; and even the agent's own actions have no guarantee of success. Real-world domains are dynamic: they develop independently of the actions and reasoning of agents operating in them.

Question: What properties does an agent need to have to deal with a dynamic world?

Answer: Agents must be opportunistic.

Because of this dynamism, and also because agents have finite information and resources, it is not possible for an agent to completely anticipate future courses of events, or even completely anticipate the results of its own actions. Furthermore, because most agents are not omnipotent, it is not possible for an agent to satisfy all of its goals as soon as they arise. Therefore, agents must monitor the world, seeking changes that will permit blocked actions or threaten prior expectations. These changes are opportunities, and any agent that operates in a dynamic world requires the ability to reason and act on those opportunities.

2.2 Requirements for Opportunism

Question: What requirements does opportunism place on an agent?

Answer: Agents must be able to recognize opportunities, be capable of pursuing opportunities, and must be able to decide whether to pursue an opportunity.

Opportunities represent changes to the world which require a response on the part of the agent outside of the scope of its current plans. By definition, opportunities are not relevant to the goals that the system is directly pursuing; also by definition, opportunities must be relevant to some goal that the system maintains. To exploit an opportunity, an agent needs three capabilities:

1. It must be able to recognize the opportunity and the goals it pertains to.
2. It must be able to suspend or modify its current goals in order to pursue the opportunity.
3. It must be able to decide whether or not it should pursue the opportunity, given the context and importance of its reasoning.

All three of these abilities are important. At a bare minimum, an agent must be able to recognize an opportunity and the goals that it pertains to; but without the ability to suspend or modify its current goals to pursue the opportunities it has recognized, the agent will remain rigidly locked into a plan of action, leaving the effort it has spent in detecting the opportunity effectively wasted. A number of opportunistic reasoning systems have been built on these two principles alone (e.g., Hammond 1989, Simina & Kolodner 1995) but, as our hunter and tiger demonstrated earlier, more is required of an agent operating in a real domain: the agent must be able to decide whether or not it should pursue the opportunity, given the context and importance of its reasoning. Where the inability to pursue opportunities leads to rigid behavior on the part of an agent, the inability to control that pursuit leads to an unfocused agent, unable to pursue any goal in the presence of the slightest distraction. These pathological behaviors are sometimes realized in human subjects with frontal lobe damage in the form of perseveration and environmental dependency (Levinson 1994).

All reasoning processes within an agent - planning, memory, perception - suffer from the same limited information, resource bounds and inability to predict the environment. Therefore, all reasoning processes must either be opportunistic themselves or must be able to participate in opportunistic activity on the part of the agent's overall reasoning system. This contrasts with traditional opportunistic reasoning systems, in which problem-solving or reasoning are opportunistic, but memory is not.


                      Internal Opportunities           External Opportunities

    Exogenous         Suspended Reasoning Subgoal      Suspended External Goal
    Generation        IMPROVISER                       TRUCKER
                      (Simina & Kolodner 1995)         (Hammond 1989)

    Endogenous        Pending Reasoning Goal           Pending External Goal
    Generation        MPA                              Blackboard Systems
                      (Francis & Ram 1995)             (Hayes-Roth & Hayes-Roth 1984)



Table 1. Taxonomy of Opportunities

2.3 Where do Opportunities Come From?

Question: What are opportunities, that an agent can recognize and act upon them?

Answer: Opportunities are changes, to the world or to the agent, which are relevant to some internal or external goal the agent holds but is not currently pursuing.

There are many ways of classifying opportunities. One way is to group them by the classes of action they enable: external opportunities enable external actions, while internal opportunities enable reasoning actions. Another way is to group them by their source: an exogenous opportunity is generated by a change in the world, while an endogenous opportunity is generated by a change within the agent. Table 1 lists a taxonomy of opportunistic reasoning systems broken down on the exogenous/endogenous and internal/external distinctions.

Traditional opportunistic reasoning systems suspend reasoning or action goals when their preconditions are not satisfied and wait for some external input to provide an object to satisfy those preconditions. In other words, traditional systems use exogenously generated internal and external opportunities, usually restricted to opportunities related to the problems that the agent is pursuing.

However, changes in the world (or within a reasoner) may provide potentially unbounded opportunities for reasoning or action, not only by unblocking goals but also by providing new goals, new information about the domain, or even new criteria for action. These kinds of internal remindings can be a powerful source of problem-solving or learning knowledge (e.g., Ram 1989, 1991; Ram & Cox 1994) but no overall theory exists of how to integrate systems that exploit opportunities relevant to their internal world with systems that exploit opportunities relevant to the external world.

2.4 Resolving the Open Questions of Opportunistic Reasoning

In summation, real-world domains seem to require opportunistic reasoning, yet existing theories of opportunistic reasoning fall short in two key areas:

1. They rely on exogenously generated opportunities relevant to the problems the agent is already pursuing, and thus fail to exploit internally generated remindings.
2. They provide little principled control over the pursuit of opportunities, defaulting to the policy of pursuing every opportunity that can be pursued.

We propose a theory of opportunism that addresses both of these open questions. Specifically, we claim that opportunistic reasoning is best viewed as a memory-driven but utility-controlled process. As we have noted, all of an agent's reasoning systems need to participate in the opportunistic behavior of the system as a whole; therefore, this theory of opportunistic reasoning impacts theories of memory, reasoning and control. In the next four sections, we outline the theory of memory-based opportunism and discuss its impact on theories of memory, reasoning, and control.

3. Opportunities to Remember

Why memory-based opportunism poses new challenges.

Question: How can we make an agent that takes full advantage of the range of opportunities without losing control to the environment?

Answer: By making reasoning reminding-driven but utility-controlled.

3.1 Living With the Past

Life presents us with many opportunities to be reminded of the past. A friend might mention a movie opening this weekend, bringing to mind bad experiences with similar films; a significant other might mention her flight to a potential graduate school, bringing to mind unplanned details for a joint flight to Crete; an advisor might mention a class that needs to be taught, bringing to mind past experiences with teaching and the joy that it brings.

But bringing these experiences to mind is only part of the story. Once we have retrieved them, we need to act upon them appropriately - even though there is no guarantee that these remindings are relevant to our current problems, or even to any problems that we have faced in the past. Even when the experiences are relevant, we must recognize them as opportunities that we can exploit, and we must alter the way we act to incorporate the advice that these experiences bring.

The tradition of case-based reasoning attempts to tackle many pieces of this problem; various systems have examined how we can be reminded of the past (Domeshek 1992) and how we can use the past in our current situations, either as the foundation for problem solving (Kolodner 1993) or as a guidebook to avoid pitfalls (Hammond 1989). However, these pieces deal with static situations, in which new remindings and opportunities do not arise during the course of reasoning, or, if they do, they remain relevant to our current problem and current situation (but see Ram 1991).

The tradition of opportunistic reasoning attempts to tackle the other side of the coin: what to do when remindings appear outside the scope of our current reasoning context. Dealing with dynamic situations relevant to an agent's current goals is a difficult problem, but it has proven to be a powerful technique for problem solving (Hammond 1989), reasoning (Simina & Kolodner 1995) and learning (Ram 1991). But these systems tend to focus on how the external world affects the agent's reasoning, ignoring the potential benefits of remindings based on the agent's own reasoning activity.

Yet it is internal remindings that provide the greatest potential for leverage in reasoning. A number of systems depend on internal remindings for the basis of their problem solving (Redmond 1990, Hinrichs 1992, Ram & Cox 1994). But these systems depend on explicitly generated remindings through the use of knowledge goals that specify the additional information the systems need to solve their problems.

All of these systems generally solve a single problem. Yet real agents are often presented with opportunities outside the context of their current task, and hence have to make choices about what task to pursue - a choice outside the scope of most current systems. Furthermore, traditional systems tend to punt on the control policy for opportunistic reasoning, choosing the default policy of "If an opportunity can be pursued, then pursue it."

Dealing with the full range of all opportunities - exogenous and endogenous, internal and external, relevant and irrelevant - requires an approach which combines past research in reminding, opportunism, and knowledge goals with new techniques for integrating remindings into problem solving and for controlling the application of remindings in reasoning. An agent needs to take full advantage of the past, so it needs the ability to generate and process remindings; but it also needs the ability to direct that process towards useful action, so it needs the ability to control reasoning based on its utility.

3.2 Memory-Based Opportunistic Reasoning

Question: How can we make an agent's reasoning reminding-driven but utility-controlled?

Answer: By requiring that an agent's memory be capable of generating asynchronous remindings, an agent's reasoning be capable of integrating those remindings on demand, and by making the agent's control sensitive to the utility of reasoning operations.

Memory-based opportunism is based on two fundamental observations: first, processing of any opportunity begins with some change to an agent's working memory; second, any change to an agent's memory from any source can represent the seed for a potential opportunity.

Opportunism depends on changes to the agent's working memory because without those changes, the agent has no reason to alter its course of action. Traditional opportunistic reasoning systems depend on input from the outside world to provide them with this knowledge. But reasoning and memory both make changes to the system's working memory; there is no principled reason that a change caused by reminding or a reasoning step could not lead to an opportunity (while one might suspect that reasoning would already have access to the results it produces, a reasoning step might provide the key results needed by a suspended goal). This suggests that, at least with respect to opportunistic reasoning processes, we should not make a distinction between the internal and external worlds.

But any new information, whether internally or externally provided, can be relevant to any goal or other knowledge the agent might have. This potentially unconstrained coupling arises because an agent cannot completely specify the conditions under which a piece of knowledge may be relevant. Traditional opportunistic reasoning mechanisms use techniques like predictive encoding (Hammond 1989) and active goals (Simina & Kolodner 1995) to detect which changes are relevant, but in the general case an agent cannot predict every change relevant to a goal. Returning to the hunter and tiger, even if we assume that the hunter has a suspended goal to attack and kill predators when weapons are available, simply sighting the canyon does not directly unblock the goal. It is only when remindings about the canyon are retrieved that the goal can become active, and even then the hunter must invoke problem reformulation processes that adapt his current evasion strategy - most of whose constraints must remain in place if the hunter is to continue to avoid the gnawing jaws of death - into a weapon procurement strategy that is a subgoal in a new plan to eliminate the tiger.

Therefore, opportunities should be retrieved not on the basis of preconditions for action but instead on remindings of relevant situations. But a reminding can appear at any time, not necessarily at the point in the reasoning algorithm it was requested - assuming it was requested by or relevant to the currently active algorithms in the first place. Hence, the agent needs integration mechanisms to incorporate remindings dynamically into its reasoning processes. Furthermore, because the remindings that can be drawn from a situation are potentially unbounded, an agent needs a control mechanism that selects reasoning actions not on the basis of what it can do, but instead on what appears useful to do.
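
To preview how these pieces fit together, consider the following sketch (in Python; every name here is hypothetical, and none of it is drawn from NICOLE's actual code) of a reasoning cycle that is reminding-driven but utility-controlled:

    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class Reminding:
        content: Any
        projected_utility: float   # estimated benefit of integrating it now

    @dataclass
    class Reasoner:
        current_task_utility: float
        trace: List[str] = field(default_factory=list)

        def integrate(self, r: Reminding) -> None:
            self.trace.append("integrated: " + str(r.content))

        def step(self) -> None:
            self.trace.append("one bounded reasoning step")

    def control_cycle(reasoner: Reasoner, remindings: List[Reminding]) -> None:
        # Drain whatever the asynchronous memory has delivered this cycle,
        # integrating a reminding only when doing so appears more useful
        # than continuing the current line of reasoning.
        for r in remindings:
            if r.projected_utility > reasoner.current_task_utility:
                reasoner.integrate(r)
        reasoner.step()

    # A fleeing hunter ignores the stream but seizes on the weapons cache:
    hunter = Reasoner(current_task_utility=0.9)   # escaping the tiger
    control_cycle(hunter, [Reminding("drink from the stream", 0.2),
                           Reminding("weapons cache nearby", 0.95)])
    print(hunter.trace)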

So, in sum, an opportunistic reasoner needs to:

1. Generate remindings from changes to its memory, whatever their source.
2. Integrate those remindings into its active reasoning processes on demand.
3. Control that integration on the basis of what appears useful to do, not merely what can be done.

This trio of requirements - the generation of remindings, the integration of remindings, and the control of that integration - forms the theory of memory-based opportunistic reasoning. In more detail, memory-based opportunistic reasoning proposes that in order to take full, useful, opportunistic advantage of its past experience, an agent should have three primary capabilities:

1. A context-sensitive, asynchronous memory capable of generating spontaneous and opportunistic remindings.
2. A flexible, responsive reasoning system with integration mechanisms that can incorporate remindings upon demand.
3. A utility-based metacontroller that decides when taking advantage of a reminding is useful, given the context and importance of the agent's current reasoning.

3.3 Implementation: The NICOLE System

This theory of memory-based opportunistic reasoning is being implemented in the NICOLE system. Each of the three components of the theory - memory, reasoning, and control - is implemented in a separate module - MOORE, MPA, and TASKSTORM, respectively - all of which work together to collectively generate opportunistic behavior.

TASKSTORM controls all processing in NICOLE, from memory and reasoning to perception and action and even to user input and output. MOORE and MPA are realized as supertasks (Moorman & Ram 1994) which define both the structure of their reasoning as well as the communication mechanisms each uses to interact with each other. However, the contributions of the other two systems are just as important for the overall behavior of the system: without the remindings generated by MOORE or the integration mechanisms of MPA, the system would not be able to function in an opportunistic fashion. We will discuss these three modules and their relationships in more detail when we discuss the theories that each of them implement, in sections 4, 5, and 6.

3.4 The Impact of Memory-Based Opportunism

Memory-based opportunistic reasoning impacts almost every component of an agent. Because it depends heavily on remindings, it affects how an agent must remember. Because those remindings may provide new information to reasoning at any time, it affects how an agent must reason. And because pursuing the opportunities those remindings represent may conflict with the goals that the agent is already pursuing, it affects how an agent must be controlled.

In the following three sections, we will explore how memory-based opportunistic reasoning affects memory, reasoning and control, and will propose theories of how each faculty should function in a memory-based opportunistic reasoner. Along the way we will discuss how the various faculties impact each other, and will also detail how each theory is implemented within the NICOLE system.

4. Memory in an Opportunistic Agent

How can we remember what we need when we need it?

Question: What are the requirements of a memory for an opportunistic agent?

Answer: An agent's memory must be capable of generating anytime and asynchronous remindings, based on the context of the agent's reasoning.

4.1 Requirements of an Agent Memory

Bringing the past to mind when we need it is a difficult problem. A simpler version of the problem is to bring the past to mind when we ask for it. If we can construct the specifications and cues that we need, we can retrieve it from memory almost immediately (Kettler et al 1993); if not, we can use what we can retrieve to elaborate our specifications and cues until we find what we want, or until we are convinced that it cannot be found (Kolodner 1983). Generating remindings in a dynamic world requires additional capabilities: the memory must be able to return items not only when asked for them, but also when changes to the world or to the reasoner provide it with an opportunity to remember.

A memory system driven by the direct control of a reasoning module is not sufficient to support the type of opportunistic reasoning we have proposed. A memory for an opportunistic reasoner cannot be simply called like a subroutine, returning results on demand; it must instead act like a coroutine, operating independently from the reasoning system and returning results as it finds them. Since reasoning may not proceed if the memory returns nothing, we may require that the memory return its "best guess" in an anytime fashion; since the information provided to and the resources available to the memory are limited, we should also require that the memory asynchronously update that best guess as soon as a better retrieval is available.
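
As a rough illustration of these anytime, asynchronous semantics (a sketch under our own naming, not MOORE's implementation):

    import threading

    class AnytimeMemory:
        """Coroutine-style memory sketch: the caller may ask for the best
        guess at any time, while background search cycles keep improving it."""

        def __init__(self, candidates):
            self._pending = list(candidates)   # (item, match_score) pairs
            self._best, self._best_score = None, float("-inf")
            self._lock = threading.Lock()

        def search_cycle(self):
            # One bounded unit of memory search: examine one more candidate
            # and asynchronously update the best guess if it is better.
            if self._pending:
                item, score = self._pending.pop(0)
                with self._lock:
                    if score > self._best_score:
                        self._best, self._best_score = item, score

        def best_guess(self):
            # Anytime call: returns immediately with the best item so far,
            # or None if nothing has been examined yet.
            with self._lock:
                return self._best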

This behavior is the kernel of an opportunistic memory, but only the kernel: to serve the reasoning demands of a situated agent, the memory must be sensitive to the context of the agent's reasoning. Without that sensitivity, the agent's reminding behavior would be totally determined by the history of retrieval requests presented to it, unaffected by any new information provided by the outside world, by reasoning, or by memory.

4.2 A Theory of Memory in an Opportunistic Reasoner

Therefore, a memory for a situated agent should be capable of returning remindings as soon as it finds them, and what remindings it returns and when it finds them should be affected by what the agent is reasoning about or experiencing. In sum, a memory for an opportunistic agent should be:

1. Anytime: able to return its best guess whenever the reasoner demands one.
2. Asynchronous: able to update that guess as soon as a better retrieval becomes available, independently of the reasoner's processing.
3. Context-sensitive: what it retrieves, and when, should be shaped by the current contents of the agent's reasoning and experience.

To achieve these goals, our theory of a memory system for an opportunistic reasoner has the following primary characteristics:

1. A queue of reified retrieval requests, each carrying its specifications, cues, and retrieval history.
2. Context focusing, which bounds the effort expended on each retrieval cycle while keeping retrieval sensitive to the current reasoning context.

Retrieval requests are a type of knowledge goal (Ram 1991, Ram & Hunter 1992) that encapsulate the specifications and cues for a memory retrieval, along with a history of the successful and unsuccessful retrieval candidates that the memory system has proposed. A queue of reified retrieval requests allows a memory system to operate asynchronously, performing independent memory search to satisfy the knowledge goals it has been presented with, without being tied to the order of processing of the reasoning modules that posted the requests. The retrieval history stored on a retrieval request allows the system to return the best examined item at any time (if any of the items met the specifications) and then update that guess as new items are examined.

Context focusing simultaneously provides the memory with the ability to bound the amount of effort expended on retrieval during any one retrieval cycle and the ability to make retrieval sensitive to the current reasoning context.

4.3 Implementation: The MOORE System

This theory of memory is implemented in the MOORE system. In addition to implementing our theory of an anytime, asynchronous agent memory, MOORE embodies three fundamental design principles. First, it is designed to be a generic memory system, capable of use in a wide variety of contexts. Second, it is designed to be cognitively plausible, with as much of its design as practical based on psychological evidence or computational constraints (for example, its spreading activation mechanism is designed to produce the fan effect; see Anderson 1983a,b). Third, MOORE's design is based on a functional analysis of memory, rather than the content analysis used in many AI systems such as Abby (Domeshek 1992) or the process analysis used in some psychological models such as CHARM (Eich 1985). The primary constraints on MOORE's functional analysis come from two sources: the utility problem (Holder et al. 1990, Minton 1990, Francis & Ram 1995) and the requirements of an agent memory (Wooldridge & Jennings 1995).

Figure 6. The Life History of a Retrieval in MOORE.

1. A retrieval begins when the cognitive module calls post-request, which adds a new request-node (symbolized by a diamond) to the request buffer of the Memory Blackboard.
2. A request-node may be updated with an update-request call at any time.
3. Each time a request is posted or updated (or when activity occurs in other system blackboards), activation spreads to nodes in long-term memory.
4. On every retrieval cycle, a limited number of active nodes (symbolized by dark circles) are retrieved to the retrieval buffer for consideration.
5. Each pending request in the request buffer is matched against the candidates in the retrieval buffer.
6. Successful matches are posted to the candidate buffer to be copied to the requesting blackboard or process.
7. A retrieval candidate can be accepted or rejected by the accept-candidate and reject-candidate calls, which update the state of the request to allow it to more accurately select future matches.
8. The cognitive module may at any time decide to terminate processing of a request by accepting it through an accept-request call or rejecting it with a cancel-request call.
9. Terminated requests, both successful and unsuccessful, are stored in long-term memory and used by the storage module (not shown) to adjust associative links in long-term memory and retrieval parameters in the matching and candidate retrieval systems.
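
The calls named in this caption suggest the following life-cycle sketch (Python stand-ins, with the hyphenated call names rendered as underscores; the data structures and signatures are our assumptions, not MOORE's code):

    class RequestNode:
        """Hypothetical stand-in for a request-node on the Memory Blackboard."""
        def __init__(self, specification, cues):
            self.specification = specification
            self.cues = dict(cues)
            self.accepted, self.rejected = [], []   # the retrieval history
            self.status = "pending"

    request_buffer = []

    def post_request(specification, cues):          # step 1
        node = RequestNode(specification, cues)
        request_buffer.append(node)
        return node

    def update_request(node, new_cues):             # step 2
        node.cues.update(new_cues)

    def accept_candidate(node, item):               # step 7
        node.accepted.append(item)                  # sharpens future matching

    def reject_candidate(node, item):               # step 7
        node.rejected.append(item)

    def accept_request(node):                       # step 8 (success)
        node.status = "accepted"
        request_buffer.remove(node)

    def cancel_request(node):                       # step 8 (failure)
        node.status = "cancelled"
        request_buffer.remove(node)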

MOORE implements context focusing through the use of a unified system blackboard for all reasoning, memory, perception and action processes in an agent, combined with a context-sensitive spreading activation process which propagates activation from items in the blackboard into long-term memory. A small set of the most highly activated items in memory become retrieval candidates - items that will be considered for potential retrieval.

MOORE implements the retrieval request queue directly, maintaining a list of the memory retrieval requests it has received and keeping track of the retrieval history for each request. On each retrieval cycle, the retrieval request queue is used to search the retrieval candidates that have been selected through spreading activation. If a matching item is found, it is returned; if not, the system waits until the next cycle when new retrieval candidates will be selected.
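
A minimal sketch of context focusing and the retrieval cycle follows (the propagation rule and its parameters are placeholders; MOORE's actual mechanism is modeled on Anderson's 1983 spreading-activation theory):

    import heapq

    def spread_activation(blackboard, links, activation, rate=0.5):
        # Items on the system blackboard push activation along associative
        # links into long-term memory (one propagation step).
        for item in blackboard:
            for neighbor, weight in links.get(item, ()):
                activation[neighbor] = activation.get(neighbor, 0.0) + rate * weight

    def retrieval_cycle(requests, activation, matches, k=5):
        # Context focusing: only the k most active items in long-term memory
        # become retrieval candidates, bounding the effort spent per cycle.
        candidates = heapq.nlargest(k, activation, key=activation.get)
        # Each pending request is matched against the candidate set; matches
        # are returned as soon as they are found.
        return [(req, item) for req in requests
                            for item in candidates if matches(req, item)]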

In its default configuration, MOORE's spreading activation and retrieval search processes proceed independently of other processes in a system. As input, output and reasoning processes change the system blackboard, activation spreads into long-term memory, changing the set of retrieval candidates; this changing set is constantly monitored by the retrieval request queue, which posts matching items to a retrieval buffer as soon as they are found. It is the responsibility of the rest of the reasoning system to respond to these retrievals appropriately.

4.4 Keeping Afloat in A Sea of Remindings

With a memory that can generate almost arbitrary remindings at any stage of processing, how can the rest of the system cope with all of the information it gets back? If the system is assessing a situation and asks for a domain theory, what does it do if it begins solving the problem based on the theory and then remembers a better one halfway through the solution?

Clearly, for a system like MOORE to retrieve anything, a reasoner must be able to specify what it wants to get back; it would be useless if memory returned information completely irrelevant to the reasoner's goals. But taking full advantage of an asynchronous, context-sensitive memory requires more of a reasoner; for that purpose, we need a way of integrating remindings into reasoning, no matter when the remindings are returned. Section 5 explores how these two constraints impact the design of a reasoning system for an opportunistic agent.

5. Reasoning in an Opportunistic Agent

How can we reason if we are constantly bombarded by remindings of the past?

Question: What are the requirements on reasoning in an opportunistic agent?

Answer: Reasoning must be able to generate knowledge goals that specify what information it needs, and must have integration mechanisms that allow that knowledge to be integrated on demand.

5.1 Putting the Pieces Together As You Find Them

In many conditions, reasoning in a situated agent with an asynchronous memory is not very different from reasoning in a situated agent with exogenous events. Both the memory and the world may present the system with new information at any time, information that needs to be incorporated into the reasoner's behavior. However, an asynchronous memory can provide more information for a situated agent than the world can. The world can merely present new facts and information, which need to be interpreted and processed, while the memory can present past problem solving experiences and control knowledge that can be directly used by the reasoner.

As new information is added to an agent's working memory, either as a result of memory retrieval or from input from the world, we would like the system's reasoning mechanism to respond to that information in a productive way. Before that is possible, however, the reasoning system must have the ability to specify what information it needs from memory or perception, and must also have the capability to incorporate knowledge that it receives into its current reasoning context.

5.2 A Theory of Reasoning in an Opportunistic Agent

In sum, a reasoning mechanism for an opportunistic agent needs the following capabilities:

1. The ability to specify what information it needs from memory or perception.
2. The ability to incorporate the knowledge it receives into its current reasoning context, whenever that knowledge arrives.

To achieve this kind of opportunism, we propose a theory of reasoning that has the following components:

1. Knowledge goals, which specify the information the reasoner needs from memory.
2. Integration mechanisms, which incorporate retrieved knowledge into the reasoner's current structures on demand.

Knowledge goals provide a reasoner with the fundamental capability to specify what the system wants from memory. They are the reasoning system's counterpart to the memory's retrieval requests; while they are not identical - the information memory needs for retrieval must be specified in different terms than the information the reasoner needs for processing - they allow a kind of handshaking between an independent reasoner and an independent memory.

Integration mechanisms shake the other hand of memory: where knowledge goals correspond to retrieval requests, integration mechanisms correspond to retrievals. When retrieval of new knowledge is totally under the control of the reasoning mechanism, reasoning can be structured so that various classes of information are retrieved when it is easiest to integrate them into the data structures that the agent has already constructed. However, this programming convenience does not take into account the reality that the information may not arrive when it is easiest to integrate - nor does it take into account the fact that the point at which knowledge is easiest to integrate is not always the point at which it is most useful.
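
This handshake might be sketched as follows (the field names are hypothetical; MPA's actual knowledge-goal vocabulary differs):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class KnowledgeGoal:
        # What the reasoner wants, stated in reasoning terms.
        open_subgoals: List[str]       # preconditions not yet satisfied
        known_conditions: List[str]    # initial conditions known to hold

    def to_retrieval_request(goal: KnowledgeGoal) -> dict:
        # Reformulate the reasoner's knowledge goal in memory's terms:
        # a specification to match against, plus cues to focus retrieval.
        return {"specification": {"achieves-any-of": goal.open_subgoals},
                "cues": goal.known_conditions + goal.open_subgoals}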


    Information   Initial Use         Opportunistic Use   Source of      Type of       Integration
    Type                                                  Opportunity    Opportunity   Mechanism

    Planning      Provide outline     Adapt/extend        Exogenous or   Internal      Extraction of relevant
    Case          for new solution    partial solution    endogenous                   case clipping; merging
                                                                                       of clipping with
                                                                                       partial solution

    Control       Eliminate search    Eliminate search    Endogenous     Internal      Merge into control
    Rule                                                                               rule library

    Critic        Detect potential    Debug current       Exogenous or   Internal      Critic application;
                  threats             plan, eliminate     endogenous                   replanning
                                      plan candidates

    Goal          Define focus of     Unblock suspended   Exogenous or   Internal or   Change current
                  agent's problem     goal                endogenous     external      goal focus
                  solving

    Domain        Problem             Problem             Exogenous or   Internal      Mapping or discarding
    Theory        Formulation:        Reformulation:      endogenous                   of partial solutions
                  defines problem     new problem                                      into new domain
                  and available       definition, new                                  theory
                  operators           operator set

Table 2. Types of Integration Mechanisms

Integration mechanisms essentially provide a means to incorporate "inconvenient" retrievals into the system's current reasoning structures. This may be as simple as adding a control rule to a control rule library or applying a critic to a partial plan, or as complex as clipping a retrieved case to the open subgoals of a partial solution or mapping a partial solution into a new domain theory. In an extreme case, a new retrieval may require abandoning the entire partial plan that the system has been pursuing, or even the goals that the system itself has been pursuing (watch out for that tiger!). Table 2 lists different classes of information that may be retrieved from memory and the integration mechanisms appropriate for each.
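
Computationally, Table 2 amounts to a dispatch table from information types to integration mechanisms, as in this sketch (the handler bodies are trivial placeholders):

    def clip_and_merge_case(case, state):
        state["plan"].append(case)          # clip case, merge with partial plan

    def merge_control_rule(rule, state):
        state["rule_library"].append(rule)  # merge into control rule library

    def apply_critic(critic, state):
        state["critiques"].append(critic)   # critic application; replanning

    def change_goal_focus(goal, state):
        state["current_goal"] = goal        # change current goal focus

    def remap_domain_theory(theory, state):
        state["domain_theory"] = theory     # remap or discard partial solutions

    INTEGRATION_MECHANISMS = {
        "planning-case": clip_and_merge_case,
        "control-rule":  merge_control_rule,
        "critic":        apply_critic,
        "goal":          change_goal_focus,
        "domain-theory": remap_domain_theory,
    }

    def integrate(info_type, item, state):
        # Route an "inconvenient" retrieval to the mechanism appropriate
        # for its information type, whenever it happens to arrive.
        INTEGRATION_MECHANISMS[info_type](item, state)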

5.3 Implementation: The MPA System

Unlike our theory of memory, which specifies the process of retrieval in enough detail for a system like MOORE to practically fall out of the theory, our theory of reasoning is more general. It specifies properties which could be implemented in a wide range of reasoning systems, rather than prescribing a particular reasoning mechanism.

We have implemented our theory of reasoning in the MPA system, a case-based reasoning system which supports multi-plan adaptation. Currently, MPA implements the first row ("planning case") of Table 2. Initially, MPA posts a retrieval request for cases relevant to its current problem; once reasoning has begun, MPA's knowledge goal generator extracts intermediate goal statements from partial plans and adds them to the existing retrieval requests for past cases.

When new information is retrieved, MPA's planning case integration mechanism uses a plan clipper to remove parts of the partial plan not relevant to the current situation and then uses a plan splicer to integrate that clipping with the partial plan that the system has already constructed. These mechanisms allow MPA to incorporate new planning information at any point in the reasoning process, whether the new cases are all retrieved before adaptation or during the course of adaptation itself. Figure 2 illustrates this multi-plan adaptation process.
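
A stripped-down sketch of the clipper and splicer conveys the idea (real MPA operates over partial-order plans with causal links; this toy version uses flat step lists):

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class Step:
        name: str
        effects: Set[str]

    @dataclass
    class PartialPlan:
        steps: List[Step] = field(default_factory=list)
        open_preconditions: Set[str] = field(default_factory=set)

    def clip(case_steps: List[Step], plan: PartialPlan) -> List[Step]:
        # Plan clipper: keep only the retrieved case's steps that are
        # relevant, i.e. that achieve a precondition still open in the plan.
        return [s for s in case_steps if s.effects & plan.open_preconditions]

    def splice(plan: PartialPlan, clipping: List[Step]) -> PartialPlan:
        # Plan splicer: merge the clipping into the partial plan, consuming
        # the open preconditions that the clipped steps satisfy.
        for s in clipping:
            plan.steps.append(s)
            plan.open_preconditions -= s.effects
        return plan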

Figure 2. Multi-Plan Adaptation

Multi-plan adaptation proceeds as follows:
1. A partial plan is obtained, either directly from a goal statement, from an initial case fitting, or from ongoing adaptation processes.
2. An intermediate goal statement is extracted from the plan, consisting of the initial conditions known to be true in the world and the set of preconditions not yet satisfied in the plan.
3. The case library is searched for a matching plan in exactly the same way that it is searched for initial case fittings.
4. The best matching plan is retrieved, and
5. adjusted to have the right set of initial and goal conditions and to remove extraneous plan steps.
6. The steps are recursively spliced into the plan, beginning with the links that match the intermediate goal statement and then moving backwards through the plan along the paths of the causal links.
7. Finally, the successfully spliced plans are returned for further adaptation or splicing.

5.4 Switching Tracks without Jumping Tracks

The disadvantage of being able to retrieve and splice a plan into the partially constructed plan at any point during the adaptation process is that we can retrieve and splice a plan at any point during the adaptation process. Without some constraint, the agent can spend an arbitrary amount of retrieval effort for minimal improvements.

When other integration mechanisms are considered, the problem becomes more difficult. New critics, control rules and other knowledge "local" to solving a particular problem are easy to incorporate; new goals, domain theories and other knowledge "global" to the entire agent's operation are not. Even though adding new goals can be critical (remember the tiger - as well as Hammond 1989) and switching domain theories can make problem solving radically easier (Amarel 1967), they can radically change the course of the agent's activity and hence should be considered carefully.

To avoid stopping for a drink of water with a tiger on our heels, or changing the domain theory on a problem when we are only one step away from completing the solution in the original theory, we need a way of limiting the application of remindings to the conditions under which it appears useful to apply them. Section 6 outlines a theory of how an agent can achieve this control.

6. Control of an Opportunistic Agent

In a storm of remindings, how can the agent remain in control of itself?

Question: What are the requirements on control in an opportunistic agent?

Answer: An agent's control mechanism should be able to schedule reasoning processes based on their projected utility and the reasoner's current context, and should be able to update that schedule based on new information.

6.1 Human Frontal Lobes and Multitasking Operating Systems

In cooperative multitasking systems like Windows 3.1 and the Macintosh, programs are expected to behave. Specifically, pieces of programs are expected to yield gracefully and frequently, allowing the multitasker to pass control to other programs dutifully waiting in the queue. This provides the system with the ability to run a word processor and a communications package simultaneously.

Unfortunately, when an unruly program refuses to yield control, this mechanism can lead to deadlock, leading to lost modem connections or, worse, lost documents. In more advanced, preemptive multitasking systems, every program is guaranteed a minimum timeslice - a tiny slice of reasoning, if you will - but the multitasker reserves the right to snatch control from each task and pass it to another to allow the system to continue to respond to the user in a timely fashion.

Human reasoners and other situated agents show capabilities similar to that of a preemptive multitasking system: many different reasoning activities can be maintained at the same time, and even though extreme loads can degrade the system's ability to perform multiple tasks, no one task can "lock up" the system and prevent the system from functioning. This suggests the presence of a higher-level control mechanism which has the ability to decide what activities the agent is engaged in, as well as the ability to override activities that are not contributing to the functioning of the rest of the system.

The function of the human frontal lobe reinforces our intuitive notions that a higher level of control exists and that this control mechanism is separable from other functions, such as reasoning and memory (Levinson 1994). When the self-regulatory abilities of the human frontal lobe become damaged, humans exhibit environmental dependency - an inability to prevent reactive behavior, leading one subject to attempt to bake cookies every time she saw an oven - or perseveration - the inability to stop behavior once initiated.

This type of behavior - the ability to flexibly maintain many reasoning processes at once, as well as the ability to decide when reasoning processes should be initiated or abandoned - is precisely the type of behavior we need to control a reasoning system capable of generating and integrating spontaneous remindings.

6.2 A Theory of Control in an Opportunistic Agent

The first requirement for control in any agent is that the agent doesn't try to do what it can't. Many different techniques have been applied to this problem, from production rules to task networks to subsumption architectures; one unifying characteristic of all of these approaches is that each operation the agent can perform specifies the conditions for its applicability, and that no action can be executed unless those conditions are satisfied. Given those preconditions and the known state of the world, more complex agents may even devise plans or schedules for activities, allowing them to achieve a remote goal.

In an opportunistic agent, it isn't that simple. Several goals may be active, and actions may contribute to one goal while working against another. Even if goal conflicts can be resolved, compatible actions may compete for the same limited resources. Merely because an action's preconditions have been satisfied does not mean that the agent should blindly select it for execution. Agents need the ability to decide what actions are useful when devising a plan for action.

Worse, even when an agent has resolved goal and resource conflicts and has decided on a plan for action, new opportunities may arise. These may require abandoning some or all of the current plan to pursue the new opportunities, or the opportunities themselves may need to be abandoned in favor of the current plan.

An agent needs all three of these capabilities - deciding what it can do, deciding what it should do, and deciding to change what it is doing - to perform opportunistic reasoning. In more detail:

1. Deciding what it can do: each operation must specify the conditions of its applicability, and no action may be executed unless those conditions are satisfied.
2. Deciding what it should do: among the applicable actions, the agent must select those that appear most useful, resolving goal conflicts and competition for limited resources.
3. Deciding to change what it is doing: when new opportunities arise, the agent must weigh abandoning some or all of its current plan against abandoning the opportunities themselves.

To achieve this behavior, our theory of metacontrol in an opportunistic agent has the following components:

1. Declarative task networks whose operations specify not only their conditions for execution but also their costs and projected utilities.
2. Shared blackboard structures that make the context of all reasoning operations visible to the rest of the system.
3. A metacontroller that performs a utility-based search through the space of task network configurations, re-evaluating its choices as the state of the system changes.

While there are similarities between this theory of control and state-space search, there are important differences. Not all cognitive operations can be conveniently modeled by operators; many cognitive modules have continuous inputs and outputs and produce results that may improve or degrade in quality over time. Hence, unlike pure state-space search, the state of the system may change independently of the operations performed at the meta-level.

For example, the central controller may schedule a reactive component to solve a navigation problem. Once the central controller has entered this state, the entire system's state will continuously change until its goal has been achieved, at which point additional modules of the system may become available.

6.3 Implementation: The TASKSTORM System

This theory of metacontrol has been implemented in the TASKSTORM system. TASKSTORM uses a unified framework called a supertask to specify task networks, utility information, and blackboard structures, and uses a fixed kernel that interprets the supertask networks to decide which low-level reasoning operations to schedule.

Figure 3. The Task Storm

The three declarative levels - supertask, task and task storm - specify the cognitive tasks the system is engaged in, the algorithms that realize those tasks, and the execution modules that implement them. All changes in the state of the task system are mediated by the scheduling system, including the selection of modules for execution. All execution takes place in the execution module, including the scheduler, which can be thought of as a hardwired kernel. The blackboards that each of the supertasks operate on are also shown. Lines of communication between blackboards are omitted for clarity.

Supertasks (Moorman & Ram 1994) represent reasoning tasks viewed at a level too high to specify a single input or output. Instead, supertasks specify a set of lower-level reasoning tasks that operate over a single domain of knowledge relevant to a reasoner, such as vision, planning, or language comprehension. Supertasks are similar to Firby's (1989) RAPs (Reactive Action Packages), with three important differences. First, RAPs are designed to reactively and flexibly schedule physical actions, whereas our supertasks have been extended to handle reasoning tasks as well. Second, the nodes of RAP task networks specify merely their conditions for execution, whereas the execution modules in a supertask's task network specify the cost and projected utility of their operations as well. Finally, supertasks define blackboard planes in the memory system, through which reasoning tasks communicate with memory, each other, and the system's sensors and effectors.

By specifying common blackboard structures through which reasoning modules must communicate, the supertasks make the context of all reasoning operations transparent to the system's opportunistic memory. By providing the projected utility of each reasoning operation, supertasks provide the metacontroller with the ability to decide not to execute a reasoning action, even though the preconditions defined by its task network may be satisfied. Hence, if a reasoner is pursuing a highly important or resource-intensive task, such as running from a tiger, it can elect not to pursue new opportunities immediately, no matter how enticing the sparkling water of the stream seems to be.

The TASKSTORM system draws a distinction between execution modules, which perform some information-processing function, and tasklets, which describe those execution modules to the task scheduling system. In one sense, the tasklets are "all the system knows" about how its implementation actually functions; in another sense, the tasklets go beyond the execution modules because they contain information about the cost and benefit of execution that is determined and used by the scheduler - information to which the execution modules themselves are not privy.

The process of control in the TASKSTORM system is a state-space search for the best configuration of the system, where states are sets of task network configurations and activation states of tasklets. However, because the preconditions of tasklets and task network elements depend on the state of other tasks and other system blackboards, which may be changed at any time through the operation of the execution modules, states of the system are ephemeral and need to be constantly re-evaluated and modified. Thus, TASKSTORM's control layer acts like a hillclimbing, optimizing search process moving reactively through a changing state space.
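
The following sketch conveys the flavor of such a control cycle (the greedy utility-per-cost rule and the resource budget are our simplifications, not TASKSTORM's actual policy):

    from dataclasses import dataclass

    @dataclass
    class Tasklet:
        name: str
        cost: float                  # projected resource cost of execution
        projected_utility: float     # projected benefit of execution
        applicable: bool = True      # task-network preconditions satisfied?

    def control_cycle(tasklets, budget):
        # One hill-climbing step: greedily activate the applicable tasklets
        # with the best utility per unit cost, within a resource budget.
        # Because the state space shifts as execution modules run, this
        # selection must be re-evaluated on every cycle.
        chosen, spent = [], 0.0
        ranked = sorted((t for t in tasklets if t.applicable),
                        key=lambda t: t.projected_utility / max(t.cost, 1e-9),
                        reverse=True)
        for t in ranked:
            if spent + t.cost <= budget:
                chosen.append(t)
                spent += t.cost
        return chosen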

6.4 Implications of Flexible Metacontrol

The theory of metacontrol defines how processing occurs in the overall theory of memory-based opportunistic reasoning. Simply by altering the declarative specifications of the reasoning tasks, the same low-level operations can be composed into a wide variety of processing structures. The theory of metacontrol essentially specifies an architecture in which many different memory-based opportunistic agents may be constructed.

In addition to defining the architecture of NICOLE, the TASKSTORM system provides an additional capability: the ability to monitor reasoning and collect utility and performance data. Since the execution of every module in the system is controlled by TASKSTORM, information on the type of module executed, the cost of its execution, its input and output and even its success or failure can be recorded. Thus, in addition to fulfilling its technical role as the implementation of our theory of metacontrol, TASKSTORM also provides the framework for a larger-scale evaluation of the theory of memory-based opportunistic reasoning. Section 7 discusses the evaluation of the theory and how TASKSTORM may fulfill a role in that evaluation.

7. Evaluating the Theory

How to evaluate an opportunistic agent.

Question: How can we evaluate a theory of opportunistic reasoning?

Answer: By implementing the theory in a domain to demonstrate its competence, by extending the theory to new domains to demonstrate its generality, and by using theoretical and empirical analyses to verify that the properties of the implementation required by the theory are the source of the implementation's power.

7.1 The Proof in the Pudding

Evaluation of an AI theory must take place on several levels. Two important criteria are competence (can the theory generate the behavior that it is claimed to generate?) and generality (how wide is the range of conditions under which the theory applies?). To evaluate a system on either of these criteria requires some combination of implementational, empirical and theoretical analyses. To unify these three analyses, we have developed a comparative algorithmic methodology for the study of utility issues in AI systems. This methodology provides a unifying framework for both theoretical and empirical analyses of AI systems across various domains, as well as a framework for making decisions within an AI system based on their utility.

7.2 A Methodology for Analysis

The methodology involves modeling AI systems in terms of three fundamental components: a fixed cognitive architecture CA, which defines the structure of a system's processing; a changeable knowledge base KB, which defines the information that the cognitive architecture uses in its processing; and a hardware architecture HA, which defines the cost functions of the various operations in the cognitive architecture. The cognitive architecture can be further decomposed into lower-level functional units, or modules, that can be represented by formal algorithmic models. By combining formal algorithmic models with the cost functions defined by the cognitive architecture, our algorithmic approach allows the analysis of both functional-level and implementation-level aspects of AI systems. This multi-level analysis is crucial for the analysis of any AI architecture operating in a real-world domain because there may be critical interactions between the functional level of the system and the way that functional computation is actually implemented.
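
As a toy instance of this decomposition (operation names and costs are arbitrary placeholders), a computational model reduces a run of the cognitive architecture to counts of basic operations, which the hardware architecture then prices:

    def model_cost(operation_counts, hardware_costs):
        # CA level: a trace of basic cognitive operations and their counts.
        # HA level: a cost function assigning a price to each operation.
        return sum(count * hardware_costs[op]
                   for op, count in operation_counts.items())

    trace = {"match": 120, "spread-activation": 40, "splice": 6}
    serial_ha   = {"match": 1.0, "spread-activation": 5.0, "splice": 2.0}
    parallel_ha = {"match": 1.0, "spread-activation": 0.1, "splice": 2.0}

    # The same functional-level trace can have very different costs under
    # different hardware architectures - exactly the interaction between
    # levels that the methodology is designed to expose.
    print(model_cost(trace, serial_ha), model_cost(trace, parallel_ha))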

For a comparative analysis to be successful, the AI systems being studied must be modeled with a uniform vocabulary of basic cognitive operations that is sufficient to describe the architectures of a wide range of systems. Task networks provide such a uniform representational language, allowing us to represent AI systems as computational models whose basic operations are identical, and which are thus suitable for comparative algorithmic complexity analysis. By reconfiguring the task networks, it is possible to directly compare the performance of different algorithms in terms of the costs of basic cognitive operations. This methodology can therefore be used to compare the utility of different learning and reasoning mechanisms or, through ablation studies, to determine how various components of a system contribute to its competence.
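
For instance, an ablation comparison under this methodology reduces to comparing operation-count models under the same cost assignment; all numbers here are invented for illustration:

    # Costs of basic cognitive operations (invented for illustration).
    HA = {"match": 2.0, "retrieve": 5.0, "search": 8.0}

    # Operation counts for the full system (memory-based retrieval) and an
    # ablated variant that must fall back on search from scratch.
    full    = {"match": 100, "retrieve": 1}
    ablated = {"search": 40}

    cost = lambda counts: sum(n * HA[op] for op, n in counts.items())
    print(cost(full), cost(ablated))  # 205.0 vs. 320.0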

7.3 Evaluating the Theory and its Implementation

As we have already mentioned, the theory of memory-based opportunistic reasoning is implemented in the NICOLE system, which is divided into three separate components: the memory module MOORE, the case-based reasoner MPA, and the metacontroller TASKSTORM. The competence of each module has been tested on a range of examples, and the overall system is currently being integrated to allow more sophisticated tests of opportunistic reasoning. This work includes improvements to MOORE, MPA, and TASKSTORM, as well as the addition of other reasoning modules that will give NICOLE the ability to perform multistrategy reasoning and simulated action.

We intend to apply our methodology for utility analysis to the completed NICOLE system. This will allow us to evaluate this implementation of the theory and determine its performance characteristics; it will also allow us to determine where the power of the system lies by performing ablation studies. By deactivating pieces of the system and comparing the new version of the system with the original, we can assess what parts of the system are essential to its performance, and what parts are merely fluff.

To demonstrate the generality of the theory and its implementation, we are also pursuing the use of NICOLE's components in other systems. CRYSTAL, the knowledge representation language on which NICOLE is built, is already used in the ISAAC story understanding system (Moorman & Ram 1994). NICOLE's memory module, MOORE, was designed as a generic memory module suitable for a wide range of AI systems, and is currently being extended to serve ISAAC's retrieval needs. MOORE can also be reconfigured to emulate other memory modules, allowing us to test MOORE's retrieval strategy against competing strategies. And NICOLE's metacontroller, TASKSTORM, will be tested on a robot simulator to demonstrate the generality of the control architecture.

The generic nature of these components is designed to allow us to reconfigure the system to act like a wide range of systems: a reasoner with a synchronous control module, an opportunistic system with no utility control, and so on. So in addition to testing NICOLE against ablated versions of itself, we also hope to test NICOLE against altered versions of itself which use different techniques to (possibly) achieve the same functionality.

TASKSTORM is also being extended to record information about the execution of tasklets in the system in order to gather utility data. Viewed through the lens of the methodology for utility analysis, supertasks in TASKSTORM correspond to computational models and tasklets correspond closely to modules. By extracting information about the actual running time of various tasklets and the utility of their operation, TASKSTORM will allow the empirical verification of the theoretical models. Eventually, this information could be fed back into the supertasks, allowing TASKSTORM to dynamically update its own utility priorities and helping the system adapt to different domains.
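
One simple form such feedback could take is sketched below; the moving-average rule and all names are our own assumptions, not a description of TASKSTORM's actual mechanism:

    def update_priority(supertask, record, alpha=0.1):
        # Blend an observed benefit/cost ratio into the supertask's
        # priority with an exponential moving average.
        observed = record["benefit"] / max(record["cost"], 1e-9)
        supertask["priority"] = ((1 - alpha) * supertask["priority"]
                                 + alpha * observed)

    plan_task = {"name": "adapt-planning-case", "priority": 1.0}
    update_priority(plan_task, {"benefit": 4.0, "cost": 2.0})
    print(plan_task["priority"])  # 0.9 * 1.0 + 0.1 * 2.0, about 1.1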

8. The Road Ahead

Future work, in theory and in implementation.

8.1 Future Work in Memory-Based Opportunistic Reasoning

The theory of memory-based opportunistic reasoning has been sketched out well in the abstract, but considerable work lies ahead to flesh it out fully. The theory of utility-based control currently requires the system designer to partially specify the projected utility of different operations; while these design-time decisions will never go away entirely, the theory should be extended to allow the system to extract utility information from its own operation and to apply that information to scheduling decisions. The implementation of the entire system must be refined, integrated, and extended to include new reasoning mechanisms and to apply them to a wider range of domains.

8.2 Future Work in Memory

The theory of memory also needs to be extended. The mechanics of anytime and asynchronous retrieval are complex, and our theory of how retrieval requests are processed needs to be extended in several ways. First, we must specify how utility information from the metacontrol or reasoning system may be incorporated into the retrieval mechanism to help it decide when to update a best guess. Second, the theory of context focusing needs to be extended to be more sensitive to the structure of the context, as well as to the patterns of storage and retrieval. These changes must then be propagated into the implementation and evaluated.
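
For example, one candidate rule, sketched entirely under our own assumptions, would have the memory report a new best guess only when the improvement in match quality outweighs the cost of interrupting the reasoner:

    def maybe_update_best_guess(best, candidate, match, interrupt_cost):
        # Report a new best guess only when the improvement in match
        # quality outweighs the cost of interrupting the reasoner.
        if match(candidate) - match(best) > interrupt_cost:
            return candidate, True   # worth interrupting with a reminding
        return best, False           # keep accumulating activation quietly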

8.3 Future Work in Reasoning

The theory of integration mechanisms needs to be fleshed out and extended. Currently, only one row ("planning case") in Table 1 has been filled in, implemented, and tested; integration mechanisms corresponding to other knowledge classes need to be developed, implemented, and tested as well. Accordingly, MPA must be extended with additional integration mechanisms to incorporate other types of new information, such as critics, goals, or even domain theories; alternative reasoning mechanisms should also be constructed to handle classes of problems MPA does not currently handle. MPA is also currently implemented on top of the SPA case-based reasoner (Hanks & Weld 1995), which uses a representation incompatible with CRYSTAL; because this mismatch requires translation functions, we are rewriting MPA as a standalone system represented in CRYSTAL, and at the same time extending its STRIPS-like operator format to support hierarchical and conditional actions.
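
To illustrate the difference between the two operator formats, consider the sketches below; these representations are our own illustrations, not MPA's or CRYSTAL's actual syntax:

    # A flat STRIPS-like operator: preconditions plus add and delete lists.
    PICKUP = {
        "name": "pickup",
        "pre":  ["clear(?x)", "handempty"],
        "add":  ["holding(?x)"],
        "del":  ["clear(?x)", "handempty"],
    }

    # A hierarchical, conditional extension: the operator decomposes into
    # subtasks, and some effects depend on run-time conditions.
    FETCH = {
        "name": "fetch",
        "subtasks": ["goto(?loc)", "pickup(?x)", "goto(home)"],
        "conditional_effects": [
            {"when": "fragile(?x)", "add": ["handle-carefully"]},
        ],
    }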

8.4 Future Work in Control

Our theory of supertasks and supertask-based control needs to be fleshed out and extended, particularly in the area of recording cost and utility of various reasoning modules and incorporating that knowledge into scheduling decisions. TASKSTORM will provide a framework for implementing a utility recording system, but it is not yet theoretically clear what information should be recorded and how it could be used.

8.5 Future Work in Evaluation

The system currently has partial functionality for a single example in a single reasoning domain. Obviously, the system must be extended to handle an entire range of similar examples, as well as other classes of examples, to demonstrate the generality of the theory. To fully test the theory, however, the system must be extended to handle more than one task at once in a dynamic domain (whether simulated or real), allowing it to demonstrate the full range of behavior predicted by the theory of opportunistic reasoning.

9. Contributions

What this research contributes to artificial intelligence and cognitive science.

9.1 Contributions of this Research

This research covers a wide range of territory, from broad questions of agent architecture to specific claims about the mechanisms of memory, reasoning, and control. As such, the theory makes a number of separate but interlocking contributions in the areas of agent theory, memory, reasoning, control, and evaluation.

9.2 Contributions in Agent Theory

9.3 Contributions in Memory

9.4 Contributions in Reasoning

9.5 Contributions in Control

9.6 Contributions in Analysis and Evaluation

10. Conclusion

The meaning of this work.

This proposal has advanced a theory of memory-based opportunistic reasoning, along with a theoretical description of opportunistic reasoning itself and a comparative algorithmic methodology for evaluating the theory. Our work stands firmly in the tradition of past research on memory and on opportunistic reasoning, but extends that work by providing a uniform theory of opportunistic reasoning based on memory retrieval, and by tapping a previously neglected class of internally generated opportunities for internal reasoning.

Acknowledgments


I must first thank my thesis advisor, Ashwin Ram, for his support, advice, and guidance. I must also thank the other members of my thesis committee, my mentor Kurt Eiselt and my friend Janet Kolodner, as well as the many other members of the Cognitive Science community who have helped shape my research.

Several people read and commented on drafts of this manuscript: in addition to Ashwin, I had the benefit of the criticism and advice of Kenneth Moorman, Fred Zust, Shannon Duffy, William Morse and Stuart Myerburg.

This research was supported by the United States Air Force Laboratory Graduate Fellowship Program and the Georgia Institute of Technology.

Bibliography


Amarel, S. (1967). On Representations of Problems of Reasoning about Actions. Reprinted in Nilsson & Webber (eds.), Readings in Artificial Intelligence, Tioga Press.

Anderson, John R. (1983a). "A spreading activation theory of memory." Journal of Verbal Learning and Verbal Behavior, 22, 261-295.

Anderson, John R. (1983b). The Architecture of Cognition. Cambridge, Massachusetts: Harvard University Press.

Domeshek, E. (1992) Do the right thing: A component theory for indexing stories as social advice. Technical Report #26, May 1992. Institute for the Learning Sciences, Northwestern University.

Francis, A.G. (1995). "Sibling Rivalry." The Leading Edge: Magazine of Science Fiction and Fantasy, 30, February 1995, 79-102.

Francis, A. and Ram, A. (1995). A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems. In Proceedings, ECML-95, Heraklion, Crete, 1995.

Hammond, K. (1989). Opportunistic Memory. In Proceedings of the 1989 International Joint Conference on Artificial Intelligence.

Hanks, S., & Weld, D. (1995). A Domain-Independent Algorithm for Plan Adaptation. Journal of Artificial Intelligence Research, 2, 319-360.

Hinrichs, T.R. (1992) Problem Solving in Open Worlds: A Case Study in Design. Lawrence Erlbaum, 1992.

Holder, L.B.; Porter, B.W.; and Mooney, R.J. (1990). The general utility problem in machine learning. In Machine Learning: Proceedings of the Seventh International Conference, 1990.

Kettler, B., Hendler, J.A., & Andersen, W.A. (1993). Fast, frequent and flexible retrieval in case-based planning. In Case-Based Reasoning: Papers from the 1993 Workshop, Technical Report WS-93-01, AAAI Press.

Kolodner, J.L. (1993). Case-based Reasoning. Morgan Kaufmann, 1993.

Levinson, R. (1994). Human frontal lobes and AI planning systems. In Proceedings of the Second International Conference on AI Planning Systems, 170-175, June 13-15, 1994, Chicago, Illinois.

Marr, D. (1982). Vision. W.H. Freeman and Co., New York.

Moorman, K. & Ram, A. (1994). Integrating Creativity and Reading: A Functional Approach. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994.

Minton, S. (1990). Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence, 42(2-3), March 1990.

Ram, A. (1989). Question-driven understanding: An integrated theory of story understanding, memory and learning. Ph.D. Thesis YALEU/CSD/RR #710, Department of Computer Science, Yale University. May 1989.

Ram, A. (1991). A Theory of Questions and Question Asking. The Journal of the Learning Sciences, 1(3&4):273--318, 1991.

Ram, A. & Cox, M. (1994). Choosing Learning Strategies to Achieve Learning Goals. In Proceedings of the AAAI Spring Symposium on Goal-Driven Learning, Stanford, CA, 1994.

Ram, A. & Hunter, L. (1992) The Use of Explicit Goals for Knowledge to Guide Inference and Learning. Applied Intelligence, 2(1):47-73.

Redmond, M. (1990). Distributed cases for case-based reasoning: Facilitating use of multiple cases. In Proceedings of AAAI-90. Cambridge, MA: AAAI Press/MIT Press.

Redmond, M. (1992). Learning by observing and understanding expert problem solving. Georgia Institute of Technology, College of Computing Technical Report no. GIT-CC-92/43. Atlanta, Georgia.

Simina, M. & Kolodner, J. (1995). Opportunistic Reasoning: A Design Perspective. In Proceedings of the 17th Annual Cognitive Science Conference, Pittsburgh, PA, July 1995.

Veloso, M. (1995). Planning and Learning by Analogical Reasoning. Springer-Verlag, 1995.

Weld, D. (1994). An introduction to least-commitment planning. AI Magazine, 15(4), 27-61, Winter 1994.

Wooldridge, M. & Jennings, N. (1995) Intelligent Agents: Theory and Practice. Submitted to Knowledge Engineering Review October 1994, Revised January 1995.