An Adaptive Meeting Scheduling Agent, J. William Murdock and Ashok K. Goel. Proceedings of the First Asia-Pacific Conference on Intelligent Agent Technology (IAT'99), Hong Kong, December 15-17, 1999.

An intelligent agent, such as a meeting scheduling system, has a set of constraints which characterizes what that agent can do. However, a dynamic environment may require that a system alter its constraints. If situation-specific feedback is available, a system may be able to adapt by reflecting on its own reasoning processes. Such reflection may be guided not only by explicit representation of the system's constraints but also by explicit representation of the functional role that those constraints play in the reasoning process. We present an operational computer program, SIRRINE2, which uses Task-Method-Knowledge models of a system to reason about traits such as system constraints. We further describe an experiment with SIRRINE2 in the domain of meeting scheduling.

Towards Adaptive Web Agents, J. William Murdock and Ashok K. Goel. Proceedings of the Fourteenth IEEE International Conference on Automated Software Engineering (ASE'99), Cocoa Beach, FL, October 12-15, 1999.

There is an increasingly large demand for software systems which are able to operate effectively in dynamic environments. In such environments, automated software engineering is extremely valuable since a system needs to evolve in order to respond to changing requirements. One way for software to evolve is for it to reflect upon a model of its own design. A key challenge in reflective evolution is credit assignment: given a model representing the design elements of a complex system, how might that system localize, identify, and prioritize candidates for modification? We describe a model-based credit assignment mechanism. We also report on an experiment on evolving the design of Mosaic 2.4, an early network browser.

Introspective Multistrategy Learning: On the Construction of Learning Strategies, Michael T. Cox, Ashwin Ram. Artificial Intelligence, 112:1-55, 1999.

PML: Representing Procedural Domains for Multimedia Presentations, Ashwin Ram, Richard Catrambone, Mark J. Guzdial, Colleen M. Kehoe, D. Scott McCrickard, John T. Stasko. To appear in IEEE Multimedia, 1999. Also available as Technical Report GIT-GVU-98-20, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1998. A central issue in the development of multimedia systems is the presentation of the information to the user of the system and how best to represent that information to the designer of the system. Typically, the designers create a system in which content and presentation are inseparably linked; specific presentations and navigational aids are chosen for each piece of content and hard-coded into the system. We argue that the representation of content should be decoupled from the design of the presentation and navigational structure, both to facilitate modular system design and to permit the construction of dynamic multimedia systems that can determine appropriate presentations in a given situation on the fly. We propose a new markup language called PML (Procedural Markup Language) which allows the content to be represented in a flexible manner by specifying the knowledge structures, the underlying physical media, and the relationships between them using cognitive media roles. The PML description can then be translated into different presentations depending on such factors as the context, goals, presentation preferences, and expertise of the user. Keywords: multimedia, representation, learning, training, cognitive media, design, XML.

Integrating Robotic Technologies with JavaBots, Tucker Balch, Ashwin Ram. AAAI Spring Symposium.

Needles in a Haystack: Plan Recognition in Large Spatial Domains Involving Multiple Agents, Mark Devaney, Ashwin Ram. National Conference on Artificial Intelligence (AAAI-98), Madison, Wisconsin, 1998.
While plan recognition research has been applied to a wide variety of problems, it has largely made identical assumptions about the number of agents participating in the plan, the observability of the plan execution process, and the scale of the domain. We describe a method for plan recognition in a real-world domain involving large numbers of agents performing spatial maneuvers in concert under conditions of limited observability. These assumptions differ radically from those traditionally made in plan recognition and produce a problem which combines aspects of the fields of plan recognition, pattern recognition, and object tracking. We describe our initial solution which borrows and builds upon research from each of these areas, employing a pattern-directed approach to recognize individual movements and generalizing these to produce inferences of large-scale behavior.

Method Specific Knowledge Compilation: Towards Practical Design Support Systems, J. William Murdock, Ashok K. Goel, M. Jeff Donahoo, and Sham Navathe. Proceedings of the Fifth International Conference on Artificial Intelligence and Design (AID'98), Lisbon, Portugal, July 20-23, 1998.

Modern knowledge systems for design typically employ multiple problem-solving methods which in turn use different kinds of knowledge. The construction of a heterogeneous knowledge system that can support practical design thus raises two fundamental questions: how to accumulate huge volumes of design information, and how to support heterogeneous design processing? Fortunately, partial answers to both questions exist separately. Legacy databases already contain huge amounts of general-purpose design information. In addition, modern knowledge systems typically characterize the kinds of knowledge needed by specific problem-solving methods quite precisely. This leads us to hypothesize method-specific data-to-knowledge compilation as a potential mechanism for integrating heterogeneous knowledge systems and legacy databases for design. In this paper, first we outline a general computational architecture called HIPED for this integration. Then, we focus on the specific issue of how to convert data accessed from a legacy database into a form appropriate to the problem-solving method used in a heterogeneous knowledge system. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The limited experiment indicates the computational feasibility of method-specific data-to-knowledge compilation, but also raises additional research issues.

Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces, Juan C. Santamaria, Richard S. Sutton, Ashwin Ram. Adaptive Behavior, 6(2):163-217, 1998.

A key element in the solution of reinforcement learning problems is the value function. The purpose of this function is to measure the long-term utility or value of any given state. The function is important because an agent can use this measure to decide what to do next. A common problem in reinforcement learning when applied to systems having continuous states and action spaces is that the value function must operate with a domain consisting of real-valued variables, which means that it should be able to represent the value of infinitely many state and action pairs. For this reason, function approximators are used to represent the value function when a closed-form solution of the optimal policy is not available. In this paper, we extend a previously proposed reinforcement learning algorithm so that it can be used with function approximators that generalize the value of individual experiences across both state and action spaces. In particular, we discuss the benefits of using sparse coarse-coded function approximators to represent value functions and describe in detail three implementations: CMAC, instance-based, and case-based. Additionally, we discuss how function approximators having different degrees of resolution in different regions of the state and action spaces may influence the performance and learning efficiency of the agent. We propose a simple and modular technique that can be used to implement function approximators with non-uniform degrees of resolution so that it can represent the value function with higher accuracy in important regions of the state and action spaces. We performed extensive experiments in the double integrator and pendulum swing-up systems to demonstrate the proposed ideas.
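The flavor of the sparse coarse coding described above can be sketched in a few lines. The class below is a minimal tile-coding (CMAC-style) value-function approximator; the class name, hashing scheme, and parameter values are illustrative simplifications of ours, not the implementation from the paper.

```python
class TileCodedValueFunction:
    """Minimal sparse coarse-coded (CMAC-style) function approximator.

    Several offset tilings each map a continuous point to one active
    tile; the value estimate is the sum of the active tiles' weights,
    and a learning update touches only those few weights, so nearby
    points generalize while distant points are unaffected.
    """

    def __init__(self, n_tilings=8, tiles_per_dim=10, alpha=0.1):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.alpha = alpha                # overall step size
        self.weights = {}                 # sparse weight table

    def _active_tiles(self, point):
        # point: tuple of floats, each assumed scaled to [0, 1)
        tiles = []
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.tiles_per_dim)
            coords = tuple(int((x + offset) * self.tiles_per_dim)
                           for x in point)
            tiles.append((t, coords))
        return tiles

    def value(self, point):
        return sum(self.weights.get(tile, 0.0)
                   for tile in self._active_tiles(point))

    def update(self, point, target):
        # move the summed estimate a fraction alpha toward the target,
        # splitting the correction evenly across the active tiles
        error = target - self.value(point)
        step = self.alpha * error / self.n_tilings
        for tile in self._active_tiles(point):
            self.weights[tile] = self.weights.get(tile, 0.0) + step
```

Repeated updates at one point drive its estimate toward the target, and because most tilings assign a neighboring point to the same tiles, that point inherits most of the learned value for free.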

Cognitive Media and Hypermedia Learning Environment Design: A GOMS Model Analysis, Terry Shikano, Mimi Recker, Ashwin Ram. International Journal of Artificial Intelligence and Education, in press.

In our research, we have been developing a design framework for educational multimedia, based on the cognitive aspects of the users of that information. Design based on "cognitive media" appeals to the particular cognitive aspects of learners, whereas design based on types of "physical media" appeals to particular sensory modalities. This framework informed the design of AlgoNet, a computer science educational hypermedia system that used cognitive media as its basic building blocks. In this paper, we describe a model of student usage and learning with AlgoNet. This model, using the GOMS methodology, provided a useful description of the procedural knowledge required to interact with the AlgoNet system. In addition, our implemented simulations provided estimates of learning and execution times for several instances of the model. Together, the parameters in the simulations and their resulting estimates help clarify the impact of system design, and hence our design framework, on students' browsing and learning strategies.

Computational Models of Reading and Understanding, Ashwin Ram, Kenneth Moorman (eds.). Forthcoming from MIT Press.

From Data to Knowledge: Method Specific Transformations, M. Jeff Donahoo, J. William Murdock, Ashok K. Goel, Sham Navathe, and Edward Omiecinski. Proceedings of the 1997 International Symposium on Methodologies for Intelligent Systems.

Generality and scale are important but difficult issues in knowledge engineering. At the root of the difficulty lie two hard questions: how to accumulate huge volumes of knowledge, and how to support heterogeneous knowledge and processing? One answer to the first question is to reuse legacy knowledge systems, integrate knowledge systems with legacy databases, and enable sharing of the databases by multiple knowledge systems. We present an architecture called HIPED for realizing this answer. HIPED converts the second question above into a new form: how to convert data accessed from a legacy database into a form appropriate to the processing method used in a legacy knowledge system? One answer to this reformed question is to use method-specific transformation of data into knowledge. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The experiment indicates the computational feasibility of method-specific data-to-knowledge transformations.

Functional Explanations in Design, Andres Gomez de Silva Garza, Nathalie Grue, J. William Murdock, Margaret M. Recker. To appear in IJCAI-97 Workshop on Modeling and Reasoning about Function.

A key step in explaining how something works is explaining what that thing was intended to do. This is equally true of physical devices and of abstract devices such as knowledge systems. In this paper, we consider the problem of providing functionally oriented explanations of a knowledge-based design system. In particular, we analyze the content of explanations of reasoning in the context of the design of physical devices. We describe a language for expressing explanations: task-method-knowledge models. Additionally, we describe the Interactive Kritik system, a computer program that makes use of these representations to visually illustrate the system's reasoning.

Learning Adaptive Reactive Agents, Juan Carlos Santamaria. PhD Thesis, Technical Report GIT-CC-97/08, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1997.

An autonomous agent is an intelligent system that has an ongoing interaction with a dynamic external world. It can perceive and act on the world through a set of limited sensors and effectors. Its most important characteristic is that it is forced to make decisions sequentially, one after another, during its entire "life". The main objective of this dissertation is to study algorithms by which an autonomous agent can learn, using its own experience, to perform sequential decision-making efficiently and autonomously. The dissertation describes a framework for studying autonomous sequential decision-making consisting of three main elements: the agent, the environment, and the task. The agent attempts to control the environment by perceiving the environment and choosing actions in a sequential fashion. The environment is a dynamic system characterized by a state and its dynamics, a function that describes the evolution of the state given the agent's actions. A task is a declarative description of the desired behavior the agent should exhibit as it interacts with the environment. The ultimate goal of the agent is to learn a policy or strategy for selecting actions that maximizes its expected benefit as defined by the task. The dissertation focuses on sequential decision-making when the environment is characterized by continuous states and actions, and the agent has imperfect perception, incomplete knowledge, and limited computational resources. The main characteristic of the approach proposed in this dissertation is that the agent uses its previous experiences to improve estimates of the long-term benefit associated with the execution of specific actions. The agent uses these estimates to evaluate how desirable it is to execute alternative actions and select the one that best balances the short- and long-term consequences, taking special consideration of the expected benefit associated with actions that accomplish new learning while making progress on the task.
The approach is based on novel methods that are specifically designed to address the problems associated with continuous domains, imperfect perception, incomplete knowledge, and limited computational resources. The approach is implemented using case-based techniques and extensively evaluated in simulated and real systems including autonomous mobile robots, pendulum swinging and balancing controllers, and other non-linear dynamic system controllers.

Learning Adaptive Reactive Controllers, Juan Carlos Santamaria, Ashwin Ram. Technical Report GIT-CC-97/05, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1997.

Reactive controllers have been widely used in mobile robots since they are able to achieve successful performance in real-time. However, the configuration of a reactive controller depends highly on the operating conditions of the robot and the environment; thus, a reactive controller configured for one class of environments may not perform adequately in another. This paper presents a formulation of learning adaptive reactive controllers. Adaptive reactive controllers inherit all the advantages of traditional reactive controllers, but in addition they are able to adjust themselves to the current operating conditions of the robot and the environment in order to improve task performance. Furthermore, learning adaptive reactive controllers can learn when and how to adapt the reactive controller so as to achieve effective performance under different conditions. The paper presents an algorithm for a learning adaptive reactive controller that combines ideas from case-based reasoning and reinforcement learning to construct a mapping between the operating conditions of a controller and the appropriate controller configuration; this mapping is in turn used to adapt the controller configuration dynamically. As a case study, the algorithm is implemented in a robotic navigation system that controls a Denning MRV-III mobile robot. The system is extensively evaluated using statistical methods to verify its learning performance and to understand the relevance of different design parameters on the performance of the system.

Continuous Case-Based Reasoning, Ashwin Ram, Juan Carlos Santamaria. Artificial Intelligence, 90(1-2):25-77, 1997.

Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as on-line sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task. This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research.
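The core retrieval step of case-based reasoning over continuous representations can be sketched briefly. The class below is a hypothetical simplification of ours (Euclidean similarity over fixed-length feature vectors, a fixed novelty threshold for acquisition), not the SINS implementation.

```python
import math


class ContinuousCaseBase:
    """Minimal sketch of continuous case retrieval and acquisition.

    Cases are (situation, behavior) pairs where the situation is a
    real-valued vector. Retrieval returns the behavior of the most
    similar stored situation; a new case is acquired only when the
    situation is sufficiently novel, keeping the case library compact.
    """

    def __init__(self, novelty_threshold=0.5):
        self.cases = []                       # list of (situation, behavior)
        self.novelty_threshold = novelty_threshold

    @staticmethod
    def _distance(a, b):
        # Euclidean distance between two equal-length vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve(self, situation):
        if not self.cases:
            return None
        best = min(self.cases, key=lambda c: self._distance(c[0], situation))
        return best[1]

    def store(self, situation, behavior):
        # acquire a case only for situations no existing case covers
        if all(self._distance(s, situation) > self.novelty_threshold
               for s, _ in self.cases):
            self.cases.append((situation, behavior))
```

A navigation system in this style would encode recent sensor readings as the situation vector and behavior parameters as the stored value, retrieving and adapting continuously as the robot moves.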

Design, Analogy, and Creativity, Ashok K. Goel. To appear in IEEE Expert Special Issue on AI in Design.

Analogical reasoning appears to play a key role in creative design. This article provides a brief overview of recent AI research on analogy-based creative design. It begins with an examination of characterizations of creative design. Then it analyzes theories of analogical design in terms of four questions: why, what, how, and when. Next it briefly describes three recent AI theories of analogy-based creative design: SYN [Borner et al 1996], DSSUA [Qian and Gero 1992], and IDEAL [Bhatta 1995]. Finally it enumerates a set of research issues in analogy-based creative design.

Kritik: An Early Case-Based Design System, A. Goel, S. Bhatta, E. Stroulia. In Mary Lou Maher, Pearl Pu (eds.), Issues and Applications of Case-Based Reasoning to Design. Lawrence Erlbaum Associates, 1997.

Situation Development in a Complex Real-World Domain, Mark Devaney and Ashwin Ram. ICML-97 Workshop on Machine Learning Applications in the Real World, Nashville, TN, 1997.

Applying techniques from Machine Learning to real-world domains and problems often requires considerable processing of the input data, both to remove noise and to augment the amount and type of information present. We describe our work in the task of situation assessment in the domain of US Army training exercises involving hundreds of agents interacting in real-time over the course of several days. In particular, we describe techniques we have developed to process this data and draw general conclusions on the types of information required in order to apply various Machine Learning algorithms and how this information may be extracted in real-world situations where it is not directly represented.

Towards Design Learning Environments - I: Exploring How Devices Work, Ashok K. Goel, Andres Gomez de Silva Garza, Nathalie Grue, J. William Murdock, Margaret M. Recker, T. Govindaraj. Third International Conference on Intelligent Tutoring Systems, Universite de Montreal, June 1996.

Knowledge-based support for learning about physical devices is a classical problem in research on intelligent tutoring systems (ITS). The large amount of knowledge engineering needed, however, presents a major difficulty in constructing ITS's for learning how devices work. Many knowledge-based design systems, on the other hand, already contain libraries of device designs and models. This provides an opportunity for reusing the legacy device libraries for supporting the learning of how devices work. We report on an experiment on the computational feasibility of this reuse of device libraries. In particular, we describe how the structure-behavior-function (SBF) device models in an autonomous knowledge-based design system called Kritik enable device explanation and exploration in an interactive design and learning environment called Interactive Kritik.

The Role of Student Tasks in Accessing Cognitive Media Types, M. Byrne, M. Guzdial, P. Ram, R. Catrambone, A. Ram, J. Stasko, G. Shippey, F. Albrecht. Second International Conference on the Learning Sciences, Evanston, IL, 1996.

We believe that identifying media by their cognitive roles (e.g., definition, explanation, pseudo-code, visualization) can improve comprehension and usability in hypermedia systems designed for learning. We refer to media links organized around their cognitive role as cognitive media types [Recker, Ram, Shikano, Li, & Stasko, 1995]. Our hypothesis is that the goals that students bring to the learning task will affect how they will use the hypermedia support system [Ram & Leake, 1995]. We explored student use of a hypermedia system based on cognitive media types where students performed different orienting tasks: undirected, browsing in order to answer specific questions, problem-solving, and problem-solving with prompted self-explanations. We found significant differences in use behavior between problem-solving and browsing students, though no learning differences.

The Role of Ontology in Creative Understanding, Kenneth Moorman, Ashwin Ram. Eighteenth Annual Conference of the Cognitive Science Society, San Diego, CA, 1996.

Successful creative understanding requires that a reasoner be able to manipulate known concepts in order to understand novel ones. A major problem arises, however, when one considers exactly how these manipulations are to be bounded. If a bound is imposed which is too loose, the reasoner is likely to create bizarre understandings rather than useful creative ones. On the other hand, if the bound is too tight, the reasoner will not have the flexibility needed to deal with a wide range of creative understanding experiences. Our approach is to make use of a principled ontology as one source of reasonable bounding. This allows our creative understanding theory to have good explanatory power about the process while allowing the computer implementation of the theory (the ISAAC system) to be flexible without being bizarre in the task domain of reading science fiction short stories.

Introspective multistrategy learning: Constructing a learning strategy under reasoning failure, Michael T. Cox. PhD Thesis, Technical Report GIT-CC-96/06, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996.

The thesis put forth by this dissertation is that introspective analyses facilitate the construction of learning strategies. Furthermore, learning is much like nonlinear planning and problem solving. Like problem solving, it can be specified by a set of explicit learning goals (i.e., desired changes to the reasoner's knowledge); these goals can be achieved by constructing a plan from a set of operators (the learning algorithms) that execute in a knowledge space. However, in order to specify learning goals and to avoid negative interactions between operators, a reasoner requires a model of its reasoning processes and knowledge. With such a model, the reasoner can declaratively represent the events and causal relations of its mental world in the same manner that it represents events and relations in the physical world. This representation enables introspective self-examination, which contributes to learning by providing a basis for identifying what needs to be learned when reasoning fails. A multistrategy system possessing several learning algorithms can decide what to learn, and which algorithm(s) to apply, by analyzing the model of its reasoning. This introspective analysis therefore allows the learner to understand its reasoning failures, to determine the causes of the failures, to identify needed knowledge repairs to avoid such failures in the future, and to build a learning strategy (plan). Thus, the research goal is to develop both a content theory and a process theory of introspective multistrategy learning and to establish the conditions under which such an approach is fruitful. Empirical experiments provide results that support the claims herein. The theory was implemented in a computational model called Meta-AQUA that attempts to understand simple stories. The system uses case-based reasoning to explain reasoning failures and to generate sets of learning goals, and it uses a standard non-linear planner to achieve these goals. 
Evaluations of Meta-AQUA with and without learning goals produced results indicating that computational introspection facilitates the learning process. In particular, the results lead to the conclusion that the stage that posts learning goals is necessary if negative interactions between learning methods are to be avoided and if learning is to remain effective.

Explanatory Interface in Interactive Design Environments, Ashok K. Goel, Andres Gomez de Silva Garza, Nathalie Grue, J. William Murdock, Margaret M. Recker, and T. Govindaraj. Fourth International Conference on AI in Design, Stanford University, June 1996.

Explanation is an important issue in building computer-based interactive design environments in which a human designer and a knowledge system may cooperatively solve a design problem. We consider the two related problems of explaining the system's reasoning and the design generated by the system. In particular, we analyze the content of explanations of design reasoning and design solutions in the domain of physical devices. We describe two complementary languages: task-method-knowledge models for explaining design reasoning, and structure-behavior-function models for explaining device designs. Interactive Kritik is a computer program that uses these representations to visually illustrate the system's reasoning and the result of a design episode. The explanation of design reasoning in Interactive Kritik is in the context of the evolving design solution, and, similarly, the explanation of the design solution is in the context of the design reasoning.

Evaluating the Structural Organization of a Hypermedia Learning Environment using GOMS Model Analysis, Terry Shikano, Mimi Recker, Ashwin Ram. World Conference on Educational Multimedia and Hypermedia, Boston, MA, June 1996.

Network-accessible hypermedia environments offer the potential for radically changing the nature of education by providing students with self-paced access to digital repositories of course information. However, much research is still required to identify ways to best organize, present, and index multimedia information to facilitate use and learning by students. We have been developing a theory of design for educational multimedia, which is based on cognitive aspects of the users of that information. Design based on "cognitive media types" appeals to the particular cognitive aspects of learners. In contrast, design based on physical media types appeals to particular symbol systems or sensory modalities. To evaluate our theory of cognitive media types, we have taken a 3-pronged approach: design, empirical evaluation, and analysis of student models. In this paper, we focus on the third component of our approach: a model of student usage and learning with cognitive media. This model, based on the GOMS methodology, helps us better understand the usability of our system, and how it may support and hinder student learning. Furthermore, our user model provides feedback on our theory of cognitive media, and offers suggestions for the design of effective hypermedia learning environments.

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent, Ashwin Ram, Anthony G. Francis, Jr. In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996.

The real world has many properties that present challenges for the design of intelligent agents: it is dynamic, unpredictable, and independent, poses poorly structured problems, and places bounds on the resources available to agents. Agents that operate in real worlds need a wide range of capabilities to deal with them: memory, situation analysis, situativity, resource-bounded cognition, and opportunism. We propose a theory of experience-based agency which specifies how an agent with the ability to richly represent and store its experiences could remember those experiences with a context-sensitive, asynchronous memory, incorporate those experiences into its reasoning on demand with integration mechanisms, and usefully direct memory and reasoning through the use of a utility-based metacontroller. We have implemented this theory in an architecture called NICOLE and have used it to address the problem of merging multiple plans during the course of case-based adaptation in least-commitment planning.

Exploring Interface Options in Multimedia Educational Environments, G. Shippey, A. Ram, F. Albrecht, J. Roberts, M. Guzdial, R. Catrambone, M. Byrne, J. Stasko. Second International Conference on the Learning Sciences, Evanston, IL, 1996.

Multimedia technology presents several options to the developers of computer-based learning environments. For instance, it is common to organize information by its physical characteristics. However, organizing information based on how users understand the material might improve comprehension. This theory of cognitive media - media organized by cognitive characteristics - was examined in studies using the AlgoNet system, a multimedia learning environment (Recker, Ram, Shikano, Li, & Stasko, 1995). To explore several interface options, AlgoNet2, a second version of AlgoNet, was created with the same domain information, but several new interface concepts. Students in an introductory programming class used AlgoNet2 to solve a problem involving graph theory. Students' performance and comments suggest that many students lack effective learning strategies and those that do employ effective learning strategies are unaware of them.

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems, Juan Carlos Santamaria, Ashwin Ram. In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996.

Two important goals in the evaluation of artificial intelligence systems are to assess the merit of alternative design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. Achieving these objectives enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave when the characteristics of the domain or problem change. In addition, for case-based reasoning and other machine learning systems, it is important to evaluate the improvement in the performance of the system with experience (or with learning), to show that this improvement is statistically significant, to show that the variability in performance decreases with experience (convergence), and to analyze the impact of the design decisions on this improvement in performance. We present a methodology for the evaluation of CBR and other AI systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. We illustrate this methodology with a case study in which we evaluate a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation. In this case study, we evaluate a range of design decisions that are important in CBR systems, including configuration parameters of the system (e.g., overall size of the case library, size or extent of the individual cases), problem characteristics (e.g., problem difficulty), knowledge representation decisions (e.g., choice of representational primitives or vocabulary), algorithmic decisions (e.g., choice of adaptation method), and amount of prior experience (e.g., learning or training). 
We show how our methodology can be used to evaluate the impact of these decisions on the performance of the system and, in turn, to make the appropriate choices for a given problem domain and verify that the system does behave as predicted.
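
As a toy illustration of the kind of systematic sweep the methodology calls for (the `run_trial` stand-in and all of its numbers are invented for this sketch, not taken from the paper), one might evaluate a system across configurations and problem conditions like this:

```python
import random
import statistics

def run_trial(case_library_size, difficulty, seed):
    """Hypothetical stand-in for one run of a CBR system: returns a
    performance score that improves with library size (experience) and
    degrades with problem difficulty. Noise shrinks with experience."""
    rng = random.Random(seed)
    noise = rng.gauss(0.0, 1.0 / case_library_size)
    return case_library_size / (case_library_size + difficulty) + noise

def evaluate(configurations, difficulties, trials=30):
    """Systematic sweep: every configuration x condition combination,
    many trials each, summarized by mean and standard deviation."""
    results = {}
    for size in configurations:
        for diff in difficulties:
            scores = [run_trial(size, diff, seed) for seed in range(trials)]
            results[(size, diff)] = (statistics.mean(scores),
                                     statistics.stdev(scores))
    return results

results = evaluate(configurations=[10, 100], difficulties=[5])
# More experience (larger library): mean performance rises, variance falls.
assert results[(100, 5)][0] > results[(10, 5)][0]
assert results[(100, 5)][1] < results[(10, 5)][1]
```

The two assertions check the paper's two qualitative claims in miniature: performance improves with experience, and its variability decreases (convergence).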

Dynamically Adjusting Concepts to Accommodate Changing Contexts, Mark Devaney, Ashwin Ram. ICML-96 Workshop on Learning in Context Sensitive Domains, Bari, Italy, 1996.

In concept learning, objects in a domain are grouped together based on similarity as determined by the attributes used to describe them. Existing concept learners require that this set of attributes be known in advance and presented in its entirety before learning begins. Additionally, most systems do not possess mechanisms for altering the attribute set after concepts have been learned. Consequently, a veridical attribute set relevant to the task for which the concepts are to be used must be supplied at the onset of learning, and in turn, the usefulness of the concepts is limited to the task for which the attributes were originally selected. In order to efficiently accommodate changing contexts, a concept learner must be able to alter the set of descriptors without discarding its prior knowledge of the domain. We introduce the notion of attribute-incrementation, the dynamic modification of the attribute set used to describe instances in a problem domain. We have implemented this capability in a concept learning system that has been evaluated along several dimensions using an existing concept formation system for comparison.
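
A minimal sketch of attribute-incrementation, with entirely hypothetical class and attribute names (not the paper's system), might look like:

```python
class IncrementalConcept:
    """Toy concept holder: instances are attribute-value dictionaries.
    New attributes can be added after learning without discarding the
    concept's existing members."""
    def __init__(self, attributes):
        self.attributes = list(attributes)
        self.members = []

    def add_instance(self, instance):
        self.members.append(instance)

    def add_attribute(self, name, value_fn):
        """Attribute-incrementation: extend the descriptor set and
        back-fill the new attribute for existing members."""
        self.attributes.append(name)
        for inst in self.members:
            inst[name] = value_fn(inst)

concept = IncrementalConcept(["color", "size"])
concept.add_instance({"color": "red", "size": 3})
concept.add_instance({"color": "blue", "size": 5})
# Later, a new task context needs a "large" descriptor:
concept.add_attribute("large", lambda inst: inst["size"] > 4)
assert concept.members[0]["large"] is False
assert concept.members[1]["large"] is True
```

The point of the sketch is only that prior groupings survive the change: the existing members are re-described under the extended attribute set rather than relearned from scratch.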

Creative Conceptual Change, Ashwin Ram, Kenneth Moorman, Juan Carlos Santamaria. Technical Report GIT-CC-96/07, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996. A shorter version appears in the Fifteenth Annual Conference of the Cognitive Science Society, 17-26, Boulder, CO, June 1993.

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. The first kind of process involves reformulating perceptual, sensorimotor, or other low-level information into higher-level abstractions. The second kind of process involves a temporary suspension of disbelief and the extension or adaptation of existing concepts to create a conceptual model of a new situation which may be very different from previous real-world experience. We discuss these and other types of conceptual change, and present computational models of constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different "everyday" task domains: (a) SINS is an autonomous robotic navigation system that learns to navigate in an obstacle-ridden world by constructing sensorimotor concepts that represent navigational strategies, and (b) ISAAC is a natural language understanding system that reads short stories from the science fiction genre, which requires a deep understanding of concepts that might be very different from the concepts that the system is familiar with.

Interacting Learning-Goals: Treating Learning as a Planning Task, Michael T. Cox, Ashwin Ram. In J.-P. Haton, M. Keane, & M. Manago (eds.), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995.

This research examines the metaphor of goal-driven planning as a tool for performing the integration of multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problems of strategy construction are to choose and order the best set of learning algorithms or strategies that recover from a processing failure and to use those strategies to modify the system's background knowledge so that the failure will not be repeated in similar future situations. A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner, rather than desired states in the external environment of the planner. But as with traditional goal-based planners, management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning-goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems, Anthony Francis, Ashwin Ram. Eighth European Conference on Machine Learning (ECML-95), Crete, Greece, 1995.

The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems.
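
The swamping utility problem can be illustrated with a deliberately simple cost model (all function names and cost values here are invented for illustration, not drawn from the paper's models):

```python
def cbr_cost(n_cases, match_cost=1.0, adapt_cost=20.0):
    """Toy swamping model for CBR: retrieval scans the case library
    (cost linear in its size), then adapts the best match."""
    return match_cost * n_cases + adapt_cost

def from_scratch_cost():
    """Assumed fixed cost of solving without the learned knowledge."""
    return 100.0

def swamping_threshold(match_cost=1.0, adapt_cost=20.0):
    """Library size beyond which learned cases hurt rather than help."""
    return (from_scratch_cost() - adapt_cost) / match_cost

assert cbr_cost(50) < from_scratch_cost()   # learning still pays off
assert cbr_cost(90) > from_scratch_cost()   # swamped: knowledge degrades performance
assert swamping_threshold() == 80.0
```

The threshold condition is the interesting object: below it, learned knowledge speeds problem solving; above it, retrieval overhead swamps the savings, which is exactly the degradation the utility problem names.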

Structuring On-The-Job Troubleshooting Performance to Aid Learning, Brian Minsk, Harinarayanan Balakrishnan, Ashwin Ram. World Conference on Engineering Education, Minneapolis, MN, October 1995.

This paper describes a methodology for aiding the learning of troubleshooting tasks in the course of an engineer's work. The approach supports learning in the context of actual, on-the-job troubleshooting and, in addition, supports performance of the troubleshooting task in tandem. This approach has been implemented in a computer tool called WALTS (Workspace for Aiding and Learning Troubleshooting). This method aids learning by helping the learner structure his or her task into the conceptual components necessary for troubleshooting, giving advice about how to proceed, suggesting candidate hypotheses and solutions, and automatically retrieving cognitively relevant media. WALTS includes three major components: a structured dynamic workspace for representing knowledge about the troubleshooting process and the device being diagnosed; an intelligent agent that facilitates the troubleshooting process by offering advice; and an intelligent media retrieval tool that automatically presents candidate hypotheses and solutions, relevant cases, and various other media. WALTS creates resources for future learning and aiding of troubleshooting by storing completed troubleshooting instances in a self-populating database of troubleshooting cases. The methodology described in this paper is partly based on research in problem-based learning, learning by doing, case-based reasoning, intelligent tutoring systems, and the transition from novice to expert. The tool is currently implemented in the domain of remote computer troubleshooting.

Learning as Goal-Driven Inference, Ryszard Michalski, Ashwin Ram. In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 21, MIT Press/Bradford Books, 1995.

Developing an adequate and general computational model of adaptive, multistrategy, and goal-oriented learning is a fundamental long-term objective for machine learning research for both theoretical and pragmatic reasons. We outline a proposal for developing such a model based on two key ideas. First, we view learning as an active process involving the formulation of learning goals during the performance of a reasoning task, the prioritization of learning goals, and the pursuit of learning goals using multiple learning strategies. The second key idea is to model learning as a kind of inference in which the system augments and reformulates its knowledge using various types of primitive inferential actions, known as knowledge transmutations.

Some Experimental Results in Multistrategy Navigation Planning, Ashok K. Goel, Khaled S. Ali, and Eleni Stroulia. GIT-CC-95-51.

Spatial navigation is a classical problem in AI. In this paper, we examine three specific hypotheses regarding multistrategy navigation planning in visually engineered physical spaces containing discrete pathways: (1) For hybrid robots capable of both deliberative planning and situated action, qualitative representations of topological knowledge are sufficient for enabling effective spatial navigation; (2) For deliberative planning, the case-based strategy of plan reuse generates plans more efficiently than the model-based strategy of search without any loss in the quality of plans or problem-solving coverage; and (3) For the strategy of model-based search, the ``principle of locality'' provides a productive basis for partitioning and organizing topological knowledge. We describe the design of a multistrategy navigation planner called Router that provides an experimental testbed for evaluating the three hypotheses. We also describe the embodiment of Router on a mobile robot called Stimpy for testing the first hypothesis. Experiments with Stimpy indicate that this hypothesis appears to be valid for hybrid robots in visually engineered navigation spaces containing discrete pathways such as office buildings. In addition, two different kinds of simulation experiments with Router indicate that the second and the third hypotheses are only partially correct. Finally, we relate the evaluation methods and experimental designs to the research hypotheses.

Goal-Driven Learning (Chapter 1: Learning, Goals, and Learning Goals), Ashwin Ram, David Leake. MIT Press/Bradford Books, 1995.

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner's goals. Investigators in each of these areas have independently pursued the common issues of how learning goals arise, how they affect learner decisions of when and what to learn, and how they guide the learning process. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. The book begins with a discussion of fundamental questions for goal-driven learning: the motivations for adopting a goal-driven model of learning, the basic goal-driven learning framework, the specific issues raised by the framework that a theory of goal-driven learning must address, the types of goals that can influence learning, the types of influences those goals can have on learning, and the pragmatic implications of the goal-driven learning model. The remaining chapters address issues such as the justification of goal-driven learning models through functional arguments about the role and utility of goals in learning, the justification of such models through cognitive results, goal-based processes for deciding what to learn and for guiding learning and the learning process, and pragmatic implications of goal-driven learning for design of instructional environments.

Structured Light Systems for Dent Recognition: Lessons Learned, Juan Carlos Santamaria, Ronald C. Arkin. SPIE Photonics East 95 - Mobile Robots X, Philadelphia, Pennsylvania, October 1995.

This paper describes the results from a feasibility analysis performed on two different structured light system designs and the image processing algorithms they require for dent detection and localization. The impact of each structured light system is analyzed in terms of its mechanical realization and the complexity of the image processing algorithms required for robust dent detection. The two design alternatives considered consist of projecting vertical or horizontal laser stripes on the drum surface. The first alternative produces straight lines in the image plane and requires scanning the drum surface horizontally, whereas the second alternative produces conic curves on the camera plane and requires scanning the drum surface vertically. That is, the first alternative favors image processing over mechanical realization while the second alternative favors mechanical realization over image processing. The results from simulated and real structured light systems are presented, along with their major advantages and disadvantages for dent detection. The paper concludes with the lessons learned from experiments with real and simulated structured light system prototypes.

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems, Ashwin Ram, Michael T. Cox, S. Narayanan. In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995.

This chapter presents a computational model of introspective multistrategy learning, which is a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner introspects about its own performance on a reasoning task, assigns credit or blame for its performance, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory, one that learns diagnostic knowledge during a troubleshooting task and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.

Functional Representation and Reasoning in Reflective Systems, E. Stroulia and A. Goel. Applied Artificial Intelligence: An International Journal, Special Issue on Functional Reasoning, Vol. 9, No. 1, pp. 101-124.

Functional models have been extensively investigated in the context of several problem-solving tasks such as device diagnosis and design. In this paper, we view problem solvers themselves as devices, and use structure-behavior-function models to represent how they work. The model representing the functioning of a problem solver explicitly specifies how the knowledge and reasoning of the problem solver result in the achievement of its goals. Then, we employ these models for performance-driven reflective learning. We view performance-driven learning as the task of redesigning the knowledge and reasoning of the problem solver to improve its performance. We use the model of the problem solver to monitor its reasoning, assign blame when it fails, and appropriately redesign its knowledge and reasoning. This paper focuses on the model-based redesign of a path planner's task structure. It illustrates model-based reflection using examples from an operational system called Autognostic.

Cognitive Media Types for Multimedia Information Access, Mimi Recker, Ashwin Ram, Terry Shikano, George Li, John Stasko. Journal of Educational Multimedia and Hypermedia, 4(2/3):185-210, 1995.

Multimedia repositories, libraries, and databases offer the potential for providing students with access to a wide variety of interconnected information resources. However, in order to realize this potential, multimedia systems should provide access to information and activities that support effective knowledge construction and learning by students. This article proposes a theoretical framework for organizing information and activities in educational hypermedia systems. We show that such systems should not be characterized primarily in terms of the kinds of physical media types that can be accessed; instead, the important aspect is the content that can be represented within a physical medium, rather than the physical medium itself. We propose a theory of ``cognitive media types'' based on the inferential and learning processes of human users. The theory highlights specific media characteristics that facilitate specific problem solving actions, which in turn are enabled by specific kinds of physical media. We present an implemented computer system, called AlgoNet, that supports hypermedia information access and constructive learning activities for self-paced learning in computer and engineering disciplines. Extensive empirical evaluations with undergraduate students suggest that self-paced interactive learning environments, coupled with multimedia information access and constructive activities organized into cognitive media types, can support and help students develop deep intuitions about important concepts in a given domain.

Goal-Driven Learning, Ashwin Ram, David Leake (eds.). MIT Press/Bradford Books, 1995.

Opportunistic Reasoning: A Design Perspective, Marin D. Simina, Janet L. Kolodner. Seventeenth Annual Conference of the Cognitive Science Society, Pittsburgh, PA, 1995.

An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal "match". We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system.

Model-Based Echolocation of Environmental Objects, Juan Carlos Santamaria, Ronald C. Arkin. IEEE International Conference on Intelligent Robots and Systems, Munich, Germany, October 1994.

This paper presents an algorithm that can recognize and localize objects given a model of their contours using only ultrasonic range data. The algorithm exploits a physical model of the ultrasonic beam and combines several readings to extract outline object segments from the environment. It then detects patterns of outline segments that correspond to predefined models of object contours, performing both object recognition and localization. The algorithm is robust since it can account for noise and inaccurate readings as well as efficient since it uses a relaxation technique that can incorporate new data incrementally without recalculating from scratch.

Multimedia Information Access in Support of Knowledge Construction, Mimi Recker, Ashwin Ram, George Li, Terry Shikano, John Stasko. Annual Meeting of the American Educational Research Association, San Francisco, 1995 (extended abstract).

Subsumed by

Analogical Design: A Model-Based Approach, S. Bhatta, A. Goel, and S. Prabhakar. In Proceedings of the Third International Conference on AI in Design (AID-94), Lausanne, Switzerland, August 1994.

Discovery of Physical Principles from Design Experiences, S. Bhatta and A. Goel. In a special issue on ``Machine Learning in Design'' of the International Journal of AI EDAM (AI for Engineering Design, Analysis, and Manufacturing), 8(2), Spring.

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems (old), Juan Carlos Santamaria, Ashwin Ram. AAAI Workshop on Case-Based Reasoning, Seattle, WA, August 1994.

Subsumed by

Understanding the Creative Mind, Ashwin Ram, Linda Wills, Eric Domeshek, Nancy Nersessian, Janet Kolodner. Artificial Intelligence journal, 79(1):111-128, 1995.

A review of Margaret Boden's "The Creative Mind", discussing creativity and computational models of creativity.

Choosing Learning Strategies to Achieve Learning Goals, Michael T. Cox, Ashwin Ram. AAAI Spring Symposium on Goal-Driven Learning, 12-21, Stanford, CA, 1994.

Subsumed by

Failure-Driven Learning as Input Bias, Michael T. Cox, Ashwin Ram. Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994.

Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has been typically restricted to bias provided by the input language, hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader context as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias. Bias is exhibited by the selection of examples from an input stream that are examples of failure; successful performance is filtered out. We show that the degrees of freedom are less in failure-driven learning than in success-driven learning and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA.
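
The input-bias idea can be sketched in a few lines (the episode stream and predictor below are invented examples, not from Meta-AQUA):

```python
def failure_driven_examples(stream, predictor):
    """Input bias via failure filtering: keep only the episodes where the
    learner's prediction missed; successful performance is filtered out."""
    return [(x, y) for (x, y) in stream if predictor(x) != y]

# Hypothetical episodes (input, true outcome) and a naive predictor
# that always guesses "odd".
stream = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
predictor = lambda x: "odd"
to_learn_from = failure_driven_examples(stream, predictor)
assert to_learn_from == [(2, "even"), (4, "even")]  # only failures survive
```

Because successes never reach the learner, the space of candidate concept changes it must consider is smaller than under success-driven selection, which is the sense in which failure acts as a constraint on learning.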

Cognitive Media Types as Indices for Hypermedia Learning Environments, Mimi Recker, Ashwin Ram. AAAI Workshop on Indexing and Reuse in Multimedia Systems, Seattle, WA, August 1994.

Subsumed by

A Framework for Goal-Driven Learning, Ashwin Ram, David Leake. AAAI Spring Symposium on Goal-Driven Learning, Stanford, CA, 1994. Full version in A. Ram & D. Leake, editors, Goal-Driven Learning, MIT Press/Bradford Books, 1995.

Subsumed by

KA: Integrating natural language understanding with design problem solving, Kavi Mahesh, Justin Peterson, Ashok Goel, Kurt P. Eiselt. In Working Notes from the AAAI Spring Symposium on Active NLP: Natural Language Understanding in Integrated Systems.

In this article, we present our research on the integration of natural language understanding and problem solving capabilities in the context of the design of physical devices. We describe an experimental integrated system called KA [Goel and Eiselt, 1991; Pittges et al., 1993] that illustrates some of the benefits of building an integrated theory of multiple cognitive tasks focusing on language understanding and its interaction with design problem solving. We show, for example, how our work on KA imposed constraints on the target representation of natural language understanding and how the integrated approach redefined classical problems in language processing, such as ambiguity and underspecification, in terms of the overall goals of the KA system. Language understanding imposed constraints, in return, on the task structure of the design problem solver.

A Model of Creative Understanding, Kenneth Moorman, Ashwin Ram. Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle, WA, August 1994.

Although creativity has largely been studied in problem solving contexts, creativity consists of both a generative component and a comprehension component. In particular, creativity is an essential part of reading and understanding of natural language stories. We have formalized the understanding process and have developed an algorithm capable of producing creative understanding behavior. We have also created a novel knowledge organization scheme to assist the process. Our model of creativity is implemented as a portion of the ISAAC (Integrated Story Analysis And Creativity) reading system, a system which models the creative reading of science fiction stories.

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems (old), Anthony Francis, Ashwin Ram. AAAI Workshop on Case-Based Reasoning, Seattle, WA, August 1994.

Subsumed by

Foundations of Foundations of Artificial Intelligence, Ashwin Ram, Eric Jones. Philosophical Psychology, 8(2):193-199, 1995.

Review of D. Kirsh (ed.), Foundations of Artificial Intelligence, MIT Press, 1992, containing papers by Kirsh, Nilsson, Birnbaum, Hewitt, Gasser, Brooks, Lenat & Feigenbaum, Smith, Rosenbloom and the Soar team, and Norman.

Using Genetic Algorithms to Learn Reactive Control Parameters for Autonomous Robotic Navigation, Ashwin Ram, Ronald Arkin, Gary Boone, Michael Pearce. Adaptive Behavior, 2(3):277-305, 1994.

This paper explores the application of genetic algorithms to the learning of local robot navigation behaviors for reactive control systems. Our approach evolves reactive control systems in various environments, thus creating sets of ``ecological niches'' that can be used in similar environments. The use of genetic algorithms as an unsupervised learning method for a reactive control architecture greatly reduces the effort required to configure a navigation system. Unlike standard genetic algorithms, our method uses a floating point gene representation. The system is fully implemented and has been evaluated through extensive computer simulations of robot navigation through various types of environments.
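
A minimal sketch of the approach's core ingredient, a genetic algorithm over floating-point genes, follows; the selection scheme, mutation scale, and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import random

def evolve(fitness, gene_length, pop_size=20, generations=100, seed=0):
    """Minimal GA with a floating-point gene representation (real-valued
    genes rather than bit strings): truncation selection plus Gaussian
    mutation, keeping the fitter half unchanged (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(gene_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # keep the fitter half
        children = [[g + rng.gauss(0.0, 0.05) for g in p]  # mutate float genes
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for a navigation objective: tune three control parameters
# toward a known optimum of 0.5 each.
fitness = lambda genes: -sum((g - 0.5) ** 2 for g in genes)
best = evolve(fitness, gene_length=3)
assert fitness(best) > -0.05
```

The floating-point representation lets mutation perturb control parameters directly in their natural units, rather than flipping bits of an encoding.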

Managing Learning Goals in Strategy Selection Problems, Michael T. Cox, Ashwin Ram. Second European Workshop on Case-Based Reasoning, Chantilly, France, 1994.

Subsumed by

From Design Experiences to Generic Mechanisms: Model-Based Learning in Analogical Design, S. Bhatta and A. Goel. In Proceedings of the AID-94 workshop on Machine Learning in Design, Aug. 1994, Lausanne, Switzerland.

A Functional Theory of Creative Reading, Kenneth Moorman, Ashwin Ram. The Psycgrad Journal. Technical Report GIT-CC-94/01, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1994.

Reading is an area of human cognition which has been studied for decades by psychologists, education researchers, and artificial intelligence researchers. Yet, there still does not exist a theory which accurately describes the complete process. We believe that these past attempts fell short due to an incomplete understanding of the overall task of reading; namely, the complete set of mental tasks a reasoner must perform to read and the mechanisms that carry out these tasks. We present a functional theory of the reading process and argue that it provides full coverage of the task. The theory combines experimental results from psychology, artificial intelligence, education, and linguistics, along with the insights we have gained from our own research. This greater understanding of the mental tasks necessary for reading will enable new natural language understanding systems to be more flexible and more capable than earlier ones. Furthermore, we argue that creativity is a necessary component of the reading process and must be considered in any theory or system attempting to describe it. We present a functional theory of creative reading and a novel knowledge organization scheme that supports the creativity mechanisms. The reading theory is currently being implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a computer system which reads science fiction stories.

Learning Problem-Solving Concepts by Reflecting on Problem Solving, E. Stroulia and A. Goel. Proc. 1994 European Conference on Machine Learning, Catania, Italy, April 1994, pp. 287-306. Also available as Lecture Notes in Artificial Intelligence 784 - Machine Learning, F. Bergadano and L. De Raedt (editors), Berlin: Springer-Verlag, 1994.

Learning and problem solving are intimately related: problem solving determines the knowledge requirements of the reasoner which learning must fulfill, and learning enables improved problem-solving performance. Different models of problem solving, however, recognize different knowledge needs, and, as a result, set up different learning tasks. Some recent models analyze problem solving in terms of generic tasks, methods, and subtasks. These models require the learning of problem-solving concepts such as new tasks and new task decompositions. We view reflection as a core process for learning these problem-solving concepts. In this paper, we identify the learning issues raised by the task-structure framework of problem solving. We view the problem solver as an abstract device, and represent how it works in terms of a structure-behavior-function model which specifies how the knowledge and reasoning of the problem solver results in the accomplishment of its tasks. We describe how this model enables reflection, and how model-based reflection enables the reasoner to adapt its task structure to produce solutions of better quality. The Autognostic system illustrates this reflection process.

Integrating Creativity and Reading: A Functional Approach, Kenneth Moorman, Ashwin Ram. Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994.

Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.

Continuous Case-Based Reasoning (short version), Ashwin Ram, Juan Carlos Santamaria. AAAI Workshop on Case-Based Reasoning, 86-93, Washington DC, July 1993.

Subsumed by

A Multistrategy Case-Based and Reinforcement Learning Approach to Self-Improving Reactive Control Systems for Autonomous Robotic Navigation, Ashwin Ram, Juan Carlos Santamaria. Second International Workshop on Multistrategy Learning, Harpers Ferry, WV, May 1993.

Subsumed by

A Theory of Interaction and Independence in Sentence Understanding, Kavi Mahesh. College of Computing Technical Report, PhD Thesis Proposal.

Developing a complete and well-specified computational model of human language processing is a difficult problem. Natural language understanding requires the application of many different kinds of knowledge such as syntactic, semantic, and conceptual knowledge. To account for the variety of constructs possible in natural languages and to explain the variety of human behavior in sentence understanding, each kind of knowledge must be applicable independently of others. However, in order to efficiently resolve the many kinds of ambiguities that abound in natural languages, the sentence processor must integrate information available from different knowledge sources as soon as it can. Such early commitment in ambiguity resolution calls for an ability to recover from possible errors in commitment. In this work, we propose a unified-process, multiple knowledge-source model of sentence understanding that satisfies all the constraints above. In this model, syntactic, semantic, and conceptual knowledge are represented separately but in the same form. The single unified process utilizes all knowledge sources to process a sentence. The unified process can resolve structural as well as lexical ambiguities and recover from errors it might make. We show that this model can account for a range of human sentence processing behaviors by producing seemingly autonomous behavior at times and interactive behaviors at other times. It is efficient since it supports interaction between syntactic, semantic, and conceptual processing. Moreover, the model aids portability between domains by separating domain-specific knowledge from general linguistic knowledge. We also present an early commitment, expectation-driven, bottom-up theory of syntactic processing that permits us to unify syntactic processing with semantic processing. 
We show several illustrative examples of ambiguity resolution and error recovery processed by our prototype implementation of the theory in a program called COMPERE (Cognitive Model of Parsing and Error Recovery).

Case-Based Reasoning, Janet L. Kolodner. Morgan Kaufmann Publishers, 1993.

AQUA: Questions that Drive the Explanation Process, Ashwin Ram. Inside Case-Based Explanation, R.C. Schank, A. Kass, and C.K. Riesbeck (eds.), 207-261, Lawrence Erlbaum, 1994.

In the doctoral dissertation from which this chapter is drawn, Ashwin Ram presented an alternative perspective on the processes of story understanding, explanation, and learning. The issues that Ram explores in that dissertation are similar to those that are explored by the other authors in this book, but the angle that Ram takes on these issues is somewhat different. Ram's exploration of these processes is organized around the central theme of question asking. For Ram, understanding a story means identifying questions that the story raises, and questions that it answers. Question asking also serves as a lens through which each of the sub-processes of explanation is viewed: the retrieval of stored explanations, for instance, is driven by a library of what Ram calls "XP retrieval questions"; likewise, evaluation is driven by another set of questions, called "hypothesis verification questions". The AQUA program, which is Ram's implementation of this question-based theory of understanding, is a very complex system, probably the most complex among the programs described in this book. AQUA covers a great deal of ground; it implements the entire case-based explanation process in a question-based manner. In this chapter, we have focussed on the high-level description of the questions the program asks, especially the questions it asks when constructing and evaluating explanations of volitional actions.

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task, Ashwin Ram, S. Narayanan, Michael T. Cox. Cognitive Science journal, 19(3):289-340, 1995. Technical Report GIT-CC-93/67, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1993.

This article presents a computational model of the learning of diagnostic knowledge based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system which provides a case study based on observations of troubleshooting operators and protocol analysis of the data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Model-Based Learning of Structural Indices to Design Cases, S. Bhatta and A. Goel. In Proc. of the IJCAI-93 Workshop on "Reuse of Designs: An Interdisciplinary Cognitive Approach".

The Utility Problem in Case-Based Reasoning, Anthony G. Francis, Jr., Ashwin Ram. Abstracted in the AAAI-93 Workshop on Case-Based Reasoning, Washington, DC, July 1993.

Subsumed by

A New Perspective on Story Understanding, Kenneth Moorman, Ashwin Ram. Thirty-First Southeast ACM Conference, Birmingham, AL, April 1993.

Subsumed by

Creative Conceptual Change (short version), Ashwin Ram. Fifteenth Annual Conference of the Cognitive Science Society, 17-26, Boulder, CO, June 1993.

Superseded by

Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing, Kurt P. Eiselt, Kavi Mahesh, Jennifer K. Holbrook. AAAI-93: Proceedings of the Eleventh National Conference on Artificial Intelligence.

Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems.

Learning Generic Mechanisms from Experiences for Analogical Reasoning, S. Bhatta and A. Goel. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society.

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation, Ashwin Ram, Juan Carlos Santamaria. Informatica, 17(4):347-369, 1993.

This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
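The adaptation cycle described above (characterize the environment, retrieve a case, tune the control parameters, refine the case through reinforcement) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all class, function, and parameter names are invented here.

```python
class Case:
    """A case pairs an environment characterization with the
    reactive-control parameters it recommends."""
    def __init__(self, environment, parameters):
        self.environment = environment   # feature vector describing the environment
        self.parameters = parameters     # control parameters recommended by the case

def similarity(a, b):
    """Higher is more similar: negative squared distance between feature vectors."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(library, environment):
    """Case retrieval: the case whose stored environment best matches the current one."""
    return max(library, key=lambda case: similarity(case.environment, environment))

def reinforce(case, reward, rate=0.1):
    """Reinforcement step: adjust the retrieved case's parameters
    in proportion to the observed reward."""
    case.parameters = [p * (1 + rate * reward) for p in case.parameters]

# One cycle of on-line adaptation: characterize, retrieve, apply, refine.
library = [Case([0.0, 1.0], [0.5, 0.2]),   # e.g. a cluttered environment
           Case([1.0, 0.0], [0.1, 0.9])]   # e.g. an open environment
case = retrieve(library, [0.9, 0.1])
reinforce(case, reward=0.5)
```

Over many cycles, the retrieved cases come to encode the environmental regularities the paper describes, while the reinforcement step tunes their recommended parameters.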

Goal-Driven Learning: Fundamental Issues and Symposium Report, David Leake, Ashwin Ram. AI Magazine, 14(4):67-72, Winter 1993. Technical Report #85, Cognitive Science Program, Indiana University, Bloomington, IN, 1993.

In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning.

Computational Models of the Utility Problem and their Application to Utility Analysis of Case-Based Reasoning, Anthony G. Francis, Jr., Ashwin Ram. ML-93 Workshop on Knowledge Compilation and Speedup Learning, Amherst, MA, June 1993.

Subsumed by

Knowledge Compilation and Speedup Learning in Continuous Task Domains, Juan Carlos Santamaria, Ashwin Ram. ML-93 Workshop on Knowledge Compilation and Speedup Learning, Amherst, MA, June 1993.

Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization does not fit well the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define. To improve its performance in continuous problem domains, a problem solver must learn, modify, and use "continuous operators" that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimotor experiences into continuous operators, which can then be used to improve performance of the problem solver. The method speeds up the task performance as well as results in improvements in the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimentation.

Learning to Troubleshoot in Electronics Assembly Manufacturing, S. Narayanan, Ashwin Ram. Machine learning: Ninth International Conference, Workshop on Integrated Learning in Real-world Domains, Aberdeen, Scotland, July 1992.

Subsumed by

Use of Mental Models for Constraining Index Learning in Experience-Based Design, S. Bhatta and A. Goel. In Proc. of the AAAI-92 Workshop on "Constraining Learning with Prior Knowledge".

The Use of Explicit Goals for Knowledge to Guide Inference and Learning, Ashwin Ram, Lawrence Hunter. Journal of Applied Intelligence, 2(1):47-73, 1992.

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.

Knowledge-Based Diagnostic Problem Solving and Learning in the Test Area of Electronics Assembly Manufacturing, S. Narayanan, Ashwin Ram, Sally M. Cohen, Christine M. Mitchell, T. Govindraj. SPIE Symposium on Applications of AI X: Knowledge-Based Systems, Orlando, FL, April 1992.

Subsumed by

Natural Language Understanding for Information-Filtering Systems, Ashwin Ram. Communications of the ACM, 35(12):80-81, December 1992.

Multistrategy Learning with Introspective Meta-Explanations, Michael T. Cox, Ashwin Ram. Machine Learning: Ninth International Conference, Aberdeen, Scotland, July 1992.

Given an arbitrary learning situation, it is difficult to determine the most appropriate learning strategy. The goal of this research is to provide a general representation and processing framework for introspective reasoning for strategy selection. The learning framework for an introspective system is to first perform some reasoning task. As it does, the system also records a trace of the reasoning itself, along with the results of such reasoning. If a reasoning failure occurs, the system must retrieve and apply an introspective explanation of the failure in order to understand the error and repair the knowledge base. A knowledge structure called a Meta-Explanation Pattern is used to both explain how conclusions are derived and why such conclusions fail. If reasoning is represented in an explicit, declarative manner, the system can examine its own reasoning, analyze its reasoning failures, identify what it needs to learn, and select appropriate learning strategies in order to learn the required knowledge without overreliance on the programmer.
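The core of the selection step, associating an introspective explanation of a failure with a learning strategy, can be sketched as a simple lookup. The failure types and strategy names below are illustrative placeholders, not the paper's actual Meta-XP taxonomy.

```python
# Associations between diagnosed reasoning-failure types and learning
# strategies (illustrative names only).
META_XPS = {
    "novel-situation":  "case acquisition",
    "mis-indexed-case": "index learning",
    "incorrect-belief": "knowledge refinement",
}

def select_strategy(failure_type):
    """Retrieve the Meta-Explanation Pattern for a diagnosed failure
    and return the learning strategy it recommends."""
    return META_XPS.get(failure_type, "no strategy: defer to the programmer")
```

In the full framework, of course, the left-hand side is not a symbol but a declarative explanation matched against a trace of the system's own reasoning.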

The Learning of Reactive Control Parameters through Genetic Algorithms, Michael Pearce, Ronald C. Arkin, Ashwin Ram. IEEE/RSJ International Conference on Intelligent Robots and Systems, 130-137, Raleigh, NC, 1992.

Subsumed by

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases, Ashwin Ram. Machine Learning, 10:201-248, 1993.

This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good "lessons" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
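The three learning operations (a)-(c) above can be sketched schematically. This is a minimal sketch under invented names; a real case memory would use structured indices and cases rather than strings and dictionaries.

```python
class CaseMemory:
    """A minimal case library keyed by indices (situation features)."""
    def __init__(self):
        self.index = {}                      # feature -> case

    def retrieve(self, features):
        """Return the first case reachable via an existing index, else None."""
        for feature in features:
            if feature in self.index:
                return self.index[feature]
        return None                          # retrieval failure

    def learn_case(self, features, case):
        """(a) Learn a new case when none covers the situation."""
        for feature in features:
            self.index[feature] = case

    def learn_index(self, feature, case):
        """(b) Learn a new index for an existing case."""
        self.index[feature] = case

    def refine(self, case, lesson):
        """(c) Incrementally refine a case's content as it is reused."""
        case.setdefault("lessons", []).append(lesson)

memory = CaseMemory()
story = {"explanation": "coercion"}
memory.learn_case(["hostage-taking"], story)                                  # (a)
memory.learn_index("hijacking", story)                                        # (b)
memory.refine(memory.retrieve(["hijacking"]), "also applies to hijackings")   # (c)
```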

A Case-Based Approach to Reactive Control for Autonomous Robots, Kenneth Moorman, Ashwin Ram. AAAI Fall Symposium on AI for Real-World Autonomous Robots, Cambridge, MA, October 1992.

Subsumed by

AskJef: Integrating Case-Based and Multimedia Technologies for Interface Design Advising, John Barber, Sambasiva Bhatta, Ashok Goel, Mark Jacobson, Michael Pearce, Louise Penberthy, Murali Shankar, Robert Simpson and Eleni Stroulia. Proc. Second International Conference on Artificial Intelligence in Design, Pittsburgh, June 1992, pp. 457-476.

AskJef is a prototype AI system that helps software engineers in designing human-machine interfaces. It provides a memory of interface design examples, primitive domain objects, and design principles, guidelines, errors and stories. The design examples are represented graphically and decomposed temporally. The different types of knowledge are cross-indexed to enable the designer to navigate through the system's memory. AskJef helps software engineers in (1) understanding interface design problems by illustrating and explaining solutions to similar examples, and (2) comprehending the domain of interface design by illustrating and explaining the use of design guidelines. It uses text, graphics, animation and voice to present relevant information to the designer.

Introspective Reasoning using Meta-Explanations for Multistrategy Learning, Ashwin Ram, Michael T. Cox. Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (eds.), 349-377, Morgan Kaufmann, 1994.

In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This paper presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.

A Theory of Questions and Question Asking, Ashwin Ram. The Journal of the Learning Sciences, 1(3&4):273-318, 1991.

This article focusses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner's model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question-asking, motivated both by cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.

Case-Based Reactive Navigation: A Case-Based Method for On-Line Selection and Adaptation of Reactive Control Parameters in Autonomous Robotic Systems, Ashwin Ram, Ronald C. Arkin, Kenneth Moorman, Russell J. Clark. IEEE Transactions on Systems, Man, and Cybernetics, 27B(3), 1997. Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1992.

This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including "box canyon" environments known to be problematic for reactive control systems in general.

A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding, Jennifer K. Holbrook, Kurt P. Eiselt, Kavi Mahesh. CogSci-92: Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society.

The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word: the semantic processor briefly pursued the different interpretations associated with the different meanings of the word in question until additional text confirmed one of them, or until processing limitations were exceeded. Errors in syntactic ambiguity resolution were assumed to be handled in some other way by a separate syntactic module. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding that two significantly different kinds of errors are handled in the same way is this: the human sentence processor consists of a single unified processing module utilizing multiple independent knowledge sources in parallel. A sentence processor built upon this architecture should at times exhibit behavior associated with modular approaches, and at other times act like an integrated system. In this paper we explore some of these ideas via a prototype computational model of sentence processing called COMPERE, and propose a set of psychological experiments for testing our theories.

An Architecture for Integrated Introspective Learning, Ashwin Ram, Michael T. Cox, S. Narayanan. Machine Learning: Ninth International Conference, Workshop on Computational Architectures, Aberdeen, Scotland, July 1992.

Subsumed by

A Model-Based Approach to Analogical Reasoning and Learning in Design, S. Bhatta. College of Computing Technical Report, Georgia Institute of Technology (PhD thesis proposal).

A Model-Based Approach to Blame Assignment in Design, E. Stroulia, M. Shankar, A. Goel, and L. Penberthy. Proceedings of AID'92.

We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device, the design results in an undesirable behavior, a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment.

An Explicit Representation of Forgetting, Michael T. Cox, Ashwin Ram. Sixth International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, August 1992.

A pervasive, yet much ignored, factor in the analysis of processing failures is the problem of misorganized knowledge. If a system's knowledge is not indexed or organized correctly, it may make an error, not because it does not have either the general capability or specific knowledge to solve a problem, but rather because it does not have the knowledge sufficiently organized so that the appropriate knowledge structures are brought to bear on the problem at the appropriate time. In such cases, the system can be said to have "forgotten" the knowledge, if only in this context. This is the problem of forgetting or retrieval failure. This research presents an analysis along with a declarative representation of a number of types of forgetting errors. Such representations can extend the capability of introspective failure-driven learning systems, allowing them to reduce the likelihood of repeating such errors. Examples are presented from the Meta-AQUA program, which learns to improve its performance on a story understanding task through an introspective meta-analysis of its knowledge, its organization of its knowledge, and its reasoning processes.

Generic Teleological Mechanisms and their Use in Case Adaptation, E. Stroulia and A. Goel. Proceedings of CogSci'92.

In experience-based (or case-based) reasoning, new problems are solved by retrieving and adapting the solutions to similar problems encountered in the past. An important issue in experience-based reasoning is to identify different types of knowledge and reasoning useful for different classes of case-adaptation tasks. In this paper, we examine a class of non-routine case-adaptation tasks that involve patterned insertions of new elements in old solutions. We describe a model-based method for solving this task in the context of the design of physical devices. The method uses knowledge of generic teleological mechanisms (GTMs) such as cascading. Old designs are adapted to meet new functional specifications by accessing and instantiating the appropriate GTM. The Kritik2 system evaluates the computational feasibility and sufficiency of this method for design adaptation.

Using Introspective Reasoning to Select Learning Strategies, Michael Cox, Ashwin Ram. First International Workshop on Multistrategy Learning, 217-230, Harpers Ferry, WV, November 1991.

Subsumed by

A Goal-based Approach to Intelligent Information Retrieval, Ashwin Ram, Lawrence Hunter. Machine Learning: Eighth International Workshop, Chicago, IL, June 1991.

Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems. The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system's inferential abilities and current state. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems, Ashwin Ram. Bellcore Workshop on High-Performance Information Filtering, Morristown, NJ, November 1991.

Given the vast amount of information available to the average person, there is a growing need for mechanisms that can select relevant or useful information based on some specification of the interests of a user. Furthermore, experience with natural language understanding and reasoning programs in artificial intelligence has demonstrated that the combinatorial explosion of possible conclusions that can be drawn from any input is a serious computational bottleneck in the design of computer programs that process information automatically. This paper presents a theory of interestingness that serves as the basis for two story understanding programs, one that can filter and extract information likely to be relevant or interesting to a user, and another that can formulate and pursue its own interests based on an analysis of the information necessary to carry out the tasks it is pursuing. We discuss the basis for our theory of interestingness, heuristics for interest-based processing of information, and the process used to filter and extract relevant information from the input.

Knowledge Compilation: A Symposium, Ashok Goel, Tom Bylander, B. Chandrasekaran, Thomas Dietterich, Richard Keller, and Chris Tong. IEEE Expert, 6(2):71-93, April 1991.

Learning Indices for Schema Selection, Sambasiva Bhatta, Ashwin Ram. Florida Artificial Intelligence Research Symposium, 226-231, Cocoa Beach, FL, April 1991.

In addition to learning new knowledge, a system must be able to learn when that knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system's memory. We discuss the issue of how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods by which the system can identify the relevant schema even if the input does not directly match an existing index, and learn a new index that allows it to retrieve this schema more efficiently in the future.

Learning Momentum: On-line Performance Enhancement for Reactive Systems, Russell J. Clark, Ronald C. Arkin, Ashwin Ram. IEEE International Conference on Robotics and Automation, 111-116, Nice, France, May 1992.

Subsumed by

Evaluation of Explanatory Hypotheses, Ashwin Ram, David Leake. Thirteenth Annual Conference of the Cognitive Science Society, 867-871, Chicago, IL, August 1991.

Abduction is often viewed as inference to the "best" explanation. However, the evaluation of the goodness of candidate hypotheses remains an open problem. Most artificial intelligence research addressing this problem has concentrated on syntactic criteria, applied uniformly regardless of the explainer's intended use for the explanation. We demonstrate that syntactic approaches are insufficient to capture important differences in explanations, and propose instead that choice of the "best" explanation should be based on explanations' utility for the explainer's purpose. We describe two classes of goals motivating explanation: knowledge goals reflecting internal desires for information, and goals to accomplish tasks in the external world. We describe how these goals impose requirements on explanations, and discuss how we apply those requirements to evaluate hypotheses in two computer story understanding systems.

Incremental Learning of Explanation Patterns and their Indices, Ashwin Ram. Seventh International Conference on Machine Learning, 313-320, Austin, TX, June 1990.

Subsumed by

Knowledge Goals: A Theory of Interestingness, Ashwin Ram. Twelfth Annual Conference of the Cognitive Science Society, 206-214, Cambridge, MA, July 1990.

Combinatorial explosion of inferences has always been one of the classic problems in AI. Resources are limited, and inferences potentially infinite; a reasoner needs to be able to determine which inferences are useful to draw from a given piece of text. But unless one considers the goals of the reasoner, it is very difficult to give a principled definition of what it means for an inference to be "useful." This paper presents a theory of inference control based on the notion of interestingness. We introduce knowledge goals, the goals of a reasoner to acquire some piece of knowledge required for a reasoning task, as the focussing criteria for inference control. We argue that knowledge goals correspond to the interests of the reasoner, and present a theory of interestingness that is functionally motivated by consideration of the needs of the reasoner. Although we use story understanding as the reasoning task, many of the arguments carry over to other cognitive tasks as well.

Decision Models: A Theory of Volitional Explanation, Ashwin Ram. Twelfth Annual Conference of the Cognitive Science Society, Cambridge, MA, July 1990.

Subsumed by

Mental Models, Natural Language, and Knowledge Acquisition, Ashok Goel and Kurt Eiselt. ACM SIGART, Special Issue on Integrated Cognitive Architectures, 2(4):75-78, August 1991.

Functional Representation of Designs and Redesign Problem Solving, Ashok Goel and B. Chandrasekaran. Proc. Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, Michigan, August 1989, pp. 1388-1394, Los Altos, California: Morgan Kaufmann Publishers.

Functional Representation as a Basis for Design Rationale, B. Chandrasekaran, Ashok Goel and Yumi Iwasaki. IEEE Computer, 26(1):48-56, January 1993.

Meta-Cases: Explaining Case-Based Reasoning, Ashok Goel and J. William Murdock. Proc. 3rd European Workshop on Case-Based Reasoning, November 1996.

AI research on case-based reasoning has led to the development of many laboratory case-based systems. As we move towards introducing these systems into work environments, explaining the processes of case-based reasoning is becoming an increasingly important issue. In this paper we describe the notion of a meta-case for illustrating, explaining and justifying case-based reasoning. A meta-case contains a trace of the processing in a problem-solving episode, and provides an explanation of the problem-solving decisions and a (partial) justification for the solution. The language for representing the problem-solving trace depends on the model of problem solving. We describe a task-method-knowledge (TMK) model of problem-solving and describe the representation of meta-cases in the TMK language. We illustrate this explanatory scheme with examples from Interactive Kritik, a computer-based design and learning environment presently under development.

Functional Reasoning for Design and Diagnosis, Jon Sticklen, Ashok Goel, B. Chandrasekaran, and William Bond. Proc. Second International Workshop on Model-Based Diagnosis, Paris, France, July 1989, Los Altos, CA: Morgan Kaufmann.

Functional Reasoning about Devices with Fields and Cycles, Ashok Goel, Eleni Stroulia and Kai Yeung Luk. Proc. AAAI-94 Workshop on Representation and Reasoning about Function, Seattle, Washington, July 1994.

Functional Models and Model-Based Diagnosis in Adaptive Design, Ashok Goel and Eleni Stroulia. To appear in Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Special Issue on Functional Representation and Reasoning, 1996.

Model-Based Discovery of Physical Principles from Design Experiences, Sambasiva Bhatta and Ashok Goel. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Special Issue on Machine Learning in Design, 8(2):113-123, May 1994.

From Numbers to Symbols to Knowledge Structures: Artificial Intelligence Perspectives on the Classification Task, B. Chandrasekaran and Ashok Goel. IEEE Transactions on Systems, Man, and Cybernetics, 18(3):415-424, May/June, 1988.

From Models to Cases: where do cases come from and what happens when a case is not available?, Ashok Goel, Andres Gomez, Todd Callantine, Michael Donnellan, and Juan Santamaria. Proc. Fifteenth Annual Conference of the Cognitive Science Society, Boulder, Colorado, July 1993, pp. 474-480, Hillsdale, NJ: Lawrence Erlbaum.

Model-Based Indexing and Index Learning in Analogical Design, Sambasiva Bhatta and Ashok Goel. Proc. Seventeenth Annual Conference of the Cognitive Science Society, Pittsburgh, July 22-25, 1995, Hillsdale, NJ: Erlbaum.

From Design Cases to Generic Mechanisms, Sambasiva Bhatta and Ashok Goel. To appear in Artificial Intelligence in Engineering Design, Analysis and Manufacturing, Special Issue on Machine Learning, Vol. 10, in press.

Model-Based Indexing and Index Learning in Case-Based Design, Sambasiva Bhatta and Ashok Goel. To appear in International Journal of Engineering Applications of Artificial Intelligence, Special Issue on Machine Learning in Engineering.

Model Revision: A Theory of Incremental Model Learning, Ashok Goel. Proc. Eighth International Conference on Machine Learning, Chicago, June 1991, pp. 605-609, Los Altos, CA: Morgan Kaufmann.

Modeling Foreign Policy Decision Making as Knowledge-Based Reasoning, Donald Sylvan, Ashok Goel, and B. Chandrasekaran. Artificial Intelligence and International Politics, pp.245-273, V. Hudson (editor), Boulder, Colorado: Westview Press, 1991.

Multistrategy Adaptive Navigational Path Planning, Ashok Goel, Khaled Ali, Michael Donnellan, Andres Gomez and Todd Callantine. IEEE Expert, 9(6):57-65, December 1994.

Multistrategy Language Understanding for Device Comprehension, Ashok Goel, Kavi Mahesh, Justin Peterson and Kurt Eiselt. Proc. 1996 Cognitive Science Conference, San Diego, July 1996.

Narrow Aisle Mobile Robot Navigation in Hazardous Environments, Thomas Collins, Andrew Henshaw, Ronald C. Arkin, William Wester.

Routine monitoring of stored radioactive materials can be performed safely by mobile robots. In order to accomplish this in an environment consisting of aisles of drums, it is necessary to have a reliable means of aisle-following. This work describes the adaptation of successful road-following methods based on visual recognition of road boundaries to the waste storage problem. Since the effort is targeted for near-term usage in normal operating conditions, special emphasis has been given to the implementation of the visual processing on practical (i.e., small, low-cost, and low-power) hardware platforms. A modular flexible architecture called ANIMA (Architecture for Natural Intelligence in Machine Applications) has been developed at Georgia Tech. The initial versions of this architecture have been based on the Inmos Transputer, a microprocessor designed for parallel applications. To address this application an ANIMA-based real-time visual navigation module has been developed. The system described here has been implemented onboard a robot in our laboratory.

Manufacturing Diagnosis and Control: A Task-Specific AI Approach, William Punch, Ashok Goel, and Jon Sticklen. Intelligent Modeling, Diagnosis and Control of Manufacturing Processes, B. Chu and S. Chen (editors), Singapore: World Scientific Press, 1992, Chapter 1, pp. 1-32.

Practical Abduction: Characterization, Decomposition and Distribution, Ashok Goel, John Josephson, Olivier Fischer and P. Sadayappan. Journal of Experimental and Theoretical Artificial Intelligence, 7(1995):429-450.

Efficient Feature Selection in Conceptual Clustering, Mark Devaney, Ashwin Ram. Fourteenth International Conference on Machine Learning, Nashville, TN, 1997.

Feature selection has proven to be a valuable technique in supervised learning for improving predictive accuracy while reducing the number of attributes considered in a task. We investigate the potential for similar benefits in an unsupervised learning task, conceptual clustering. The issues raised in feature selection by the absence of class labels are discussed and an implementation of a sequential feature selection algorithm based on an existing conceptual clustering system is described. Additionally, we present a second implementation which employs a technique for improving the efficiency of the search for an optimal description and compare the performance of both algorithms.
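The sequential feature selection described in this abstract can be illustrated with a minimal sketch. This is not the authors' system: the greedy forward-selection loop is generic, and the `cluster_quality` function below is a deliberately crude stand-in (it groups identical projections and prefers fewer, larger clusters) rather than a real conceptual clustering metric.

```python
# Illustrative sketch of sequential forward feature selection for an
# unsupervised (clustering) task. Not the paper's implementation; the
# quality metric is a toy stand-in for a real clustering score.

def cluster_quality(points, features):
    """Toy quality score: project points onto the selected features,
    group identical projections, and prefer fewer, larger groups."""
    if not features:
        return float("-inf")  # an empty description explains nothing
    projected = [tuple(p[f] for f in features) for p in points]
    clusters = {}
    for v in projected:
        clusters.setdefault(v, []).append(v)
    return -len(clusters)

def forward_select(points, n_features):
    """Greedily add the feature that most improves clustering quality,
    stopping when no remaining feature yields an improvement."""
    selected, remaining = [], list(range(n_features))
    best = cluster_quality(points, selected)
    while remaining:
        scored = [(cluster_quality(points, selected + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best:
            break  # no feature improves the description further
        selected.append(f)
        remaining.remove(f)
        best = score
    return selected
```

With a real system, `cluster_quality` would rerun the conceptual clustering algorithm on the candidate feature subset; the point of the sketch is only the shape of the search, which is where the efficiency concerns discussed in the abstract arise.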

Efficiency of Essentials-First Strategy for Assembling Composite Explanations, Ashok Goel, Jack Smith, John Svirbely, and Olivier Fischer. Abductive Inference: Computation, Philosophy, and Technology, J. Josephson and S. Josephson (editors), second part of Chapter 6, pp. 142-156, New York: Cambridge University Press, 1994.

Reactive Robotic Systems, Ronald C. Arkin.

Dynamic Scheduling for Obstacle Avoidance in Mobile Robots, Tucker Balch, Harold Forbes, Karsten Schwan.

Recompositional Analogy: A Model-Based Approach to Design Reuse, Ashok Goel. Proc. AAAI Spring Symposium on Computational Support for Incremental Modification and Reuse, Palo Alto, March 1992, pp. 116-121.

Reflective Self-Adaptive Problem Solvers, Eleni Stroulia and Ashok Goel. Proc. 1994 European Conference on Knowledge Acquisition, Germany, September 1994; available in book form as "A Future for Knowledge Acquisition," Luc Steels, Guus Schreiber and Walter Van de Velde (editors), Berlin: Springer-Verlag, 1994.

Learning in Parallel Distributed Processing: Computational Complexity and Information Content, John Kolen and Ashok Goel. IEEE Transactions on Systems, Man, and Cybernetics, 21(2):359-367, March/April 1991.

Grounding Case Adaptation in Causal Models, Ashok Goel. Methodologies for Intelligent Systems --- 5, Z. Ras, M. Zemenkova, and M. Emrich (editors), Amsterdam, Netherlands: North Holland, 1990, pp. 260-267.

Learning About Novel Operating Environments: Designing by Adaptive Modelling, Sattiraju Prabhakar and Ashok Goel. To appear in Artificial Intelligence in Engineering Design, Analysis and Manufacturing, Special Issue on Machine Learning, Vol. 10, in press.

Situating Natural Language Understanding in Experience-Based Design, Justin Peterson, Kavi Mahesh and Ashok Goel. International Journal of Human-Computer Studies, 41: 881-913, 1994.

Specification and Execution of Multiagent Missions, Doug MacKenzie, Jonathan Cameron, Ronald C. Arkin.

Connectionism and Information Processing Abstractions: The Message Still Counts More Than the Medium, B. Chandrasekaran, Ashok Goel, and Dean Allemang. AI Magazine, 9(4):24-34, Winter 1988.

Concurrent Synthesis of Composite Explanatory Hypotheses, Ashok Goel, P. Sadayappan, and John Josephson. Proc. Seventeenth International Conference on Parallel Processing, St. Charles, Illinois, August 1988, Vol. III, pp. 156-160.

Concurrent Assembly of Composite Explanations, Ashok Goel, John Josephson, and P. Sadayappan. Abductive Inference: Computation, Philosophy, and Technology, J. Josephson and S. Josephson (editors), first part of Chapter 6, pp. 142-156, New York: Cambridge University Press, 1994.

Innovation in Analogical Design: A Model-Based Approach, Sambasiva Bhatta, Ashok Goel and Sattiraju Prabhakar. In Proc. Third International Conference on Artificial Intelligence in Design, Lausanne, Switzerland, August 1994, pp. 57-74.

Computational Trade-Offs in Experience-Based Reasoning, Ashok Goel, Khaled Ali and Andres Gomez. Proc. AAAI-94 Workshop on Evaluating Case-Based Reasoning, Seattle, Washington, July 1994.

Task Structures: What to Learn?, Eleni Stroulia and Ashok Goel. Proc. AAAI-94 Spring Symposium on Goal-Directed Learning, Stanford University, March 1994.

Computational Feasibility of Structured Matching, Ashok Goel and Thomas Bylander. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(12):1312-1316, December 1989.

Complexity in Classificatory Reasoning, Ashok Goel, N. Soundararajan, and B. Chandrasekaran. Proc. Sixth National Conference on Artificial Intelligence (AAAI-87), Seattle, Washington, July 1987, pp. 421-425, Los Altos, CA: Morgan Kaufmann.

Combining Navigational Planning and Reactive Control, Khaled Ali and Ashok Goel. Proc. AAAI-96 Workshop on Reasoning About Actions, Planning and Control: Bridging the Gap, Portland, August 1996.

Teaching Introductory Artificial Intelligence: A Design Stance, Ashok Goel. Proc. 1994 AAAI Fall Symposium on Improving Introductory Instruction of Artificial Intelligence, New Orleans, November 1994.

The Multiple Dimensions of Action-Oriented Robotic Perception: Fission, Fusion, and Fashion, Ronald C. Arkin.

The illusion that reactive and hierarchical planning methods are at odds with each other needs to be dropped. By exploiting each method's strengths, a synthesis of hierarchical and reactive paradigms can yield robust, flexible, and generalizable navigation. Psychological and neuroscientific studies support this claim.

The Role of Essential Explanations in Abduction, Olivier Fischer, Ashok Goel, John Svirbely, and Jack Smith. Artificial Intelligence in Medicine, 3(1991):181-191, 1991.

The Role of Generic Models in Conceptual Change, Todd W. Griffith, Nancy J. Nersessian, and Ashok Goel. In Proc. of the Eighteenth Annual Conference of the Cognitive Science Society.

We hypothesize generic models to be central in conceptual change in science. This hypothesis has its origins in two theoretical sources. The first source, constructive modeling, derives from a philosophical theory that synthesizes analyses of historical conceptual changes in science with investigations of reasoning and representation in cognitive psychology. The theory of constructive modeling posits generic mental models as productive in conceptual change. The second source, adaptive modeling, derives from a computational theory of creative design. Both theories posit situation independent domain abstractions, i.e. generic models. Using a constructive modeling interpretation of the reasoning exhibited in protocols collected by John Clement (1989) of a problem solving session involving conceptual change, we employ the resources of the theory of adaptive modeling to develop a new computational model, ToRQUE. Here we describe a piece of our analysis of the protocol to illustrate how our synthesis of the two theories is being used to develop a system for articulating and testing ToRQUE. The results of our research show how generic modeling plays a central role in conceptual change. They also demonstrate how such an interdisciplinary synthesis can provide significant insights into scientific reasoning.

Towards a Neural Architecture for Abductive Reasoning, Ashok Goel, J. Ramanujam, and P. Sadayappan. Proc. Second IEEE International Conference on Neural Networks, San Diego, California, July 1988, Vol. II, pp. 681-688, IEEE Press.

Integrating Artificial Intelligence and Multimedia Technologies for Interface Design Advising, John Barber, Mark Jacobson, Louise Penberthy, Robert Simpson, Sambasiva Bhatta, Ashok Goel, Michael Pearce, Murali Shankar, and Eleni Stroulia. NCR Journal of Research and Development, 6(1):75-85, October 1992.

Case-Based Planning to Learn, Bill Murdock, Gordon Shippey, Ashwin Ram. Second International Conference on Case-Based Reasoning, Providence, RI, 1997.

Learning can be viewed as a problem of planning a series of modifications to memory. We adopt this view of learning and propose the applicability of the case-based planning methodology to the task of planning to learn. We argue that relatively simple, fine-grained primitive inferential operators are needed to support flexible planning. We show that it is possible to obtain the benefits of case-based reasoning within a planning to learn framework.

Case-Based Design: A Task Analysis, Ashok Goel and B. Chandrasekaran. Artificial Intelligence Approaches to Engineering Design, Volume II: Innovative Design, C. Tong and D. Sriram (editors), pp. 165-184, San Diego: Academic Press, 1992.

Case-Based Decision Support: A Case Study in Architectural Design, Michael Pearce, Ashok Goel, Janet Kolodner, Craig Zimring, Lucas Sentosa and Richard Billington. IEEE Expert, 7(5):14-20, October 1992.

Can Your Architecture Do This? A Proposal for Impasse-Driven Asynchronous Memory Retrieval and Integration, Anthony Francis, Ashwin Ram. AAAI-97 Workshop on Robots, Softbots, Immobots: Theories of Action, Planning and Control, Providence, RI, 1997.

Beyond Domain Knowledge: Towards A Computing Environment for the Learning of Design Skills and Strategies, Georgia Tech Cognitive Science Technical Report, 1995.

Integrating Case-Based and Model-Based Reasoning: A Computational Model of Design Problem Solving, Ashok Goel. AI Magazine, 13(2):50-54, Summer 1992.

Analyzing Political Decision Making from an Information Processing Perspective: JESSE, Donald Sylvan, Ashok Goel, and B. Chandrasekaran. American Journal of Political Science, 34(1):74-123, 1990.

Towards the Unification of Navigational Planning and Reactive Control, Ronald C. Arkin.

An Integrated Experience-Based Approach to Navigational Path Planning for Autonomous Mobile Robotics, Ashok Goel, Michael Donnellan, Nancy Vasquez, and Todd Callantine. Proc. IEEE International Conference on Robotics and Automation, Atlanta, Georgia, May 1993, pp. 818-825, IEEE Press.

Integrating Case-Based and Model-Based Reasoning for Creative Design: Constraint Discovery, Model Revision and Case Composition, Sattiraju Prabhakar and Ashok Goel. Proc. Second International Conference on Computational Models of Creative Design, Heron Island, Australia, December 1992, pp. 101-127.

An Experience-Based Approach to Navigational Path Planning, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, North Carolina, July 1992, Volume II, pp. 705-710, IEEE Press.

Intelligent Robotic Systems - Editorial Introduction, Ronald C. Arkin. Special Issue on Intelligent Robotic Systems for IEEE Expert.

Tractable Abduction, John Josephson and Ashok Goel. Abductive Inference: Computation, Philosophy, and Technology, J. Josephson and S. Josephson (editors), Chapter 9, pp. 202-215, New York: Cambridge University Press, 1994.

Adaptive Modeling, Ashok Goel. 1996 International Workshop on Qualitative Reasoning, Monterrey, May 1996.

Abductive Explanation: On Why the Essentials are Essential, Olivier Fischer and Ashok Goel. Methodologies for Intelligent Systems --- 5, Z. Ras, M. Zemenkova, and M. Emrich (editors), pp. 354-361, Amsterdam, Netherlands: North Holland, 1990.

KA: Situating Natural Language Processing in Design Problem Solving, Justin Peterson, Kavi Mahesh, Ashok Goel, and Kurt Eiselt. Proc. Sixteenth Annual Conference of the Cognitive Science Society, August 1994, Atlanta, Georgia, pp. 711-716, Hillsdale, NJ: Lawrence Erlbaum.

JESSE: An Information Processing Model of Policy Decision Making, Ashok Goel, B. Chandrasekaran and Donald Sylvan. Proc. IEEE Third Annual AI Systems in Government Conference, Washington, D.C., October 1987, pp. 178-87, IEEE Computer Society Press.

Trade-Offs in Acquiring Problem-Decomposition Knowledge: Some Experiments with the Principle of Locality, Eleni Stroulia and Ashok Goel. Proc. Eighth Knowledge Acquisition Workshop, Banff, Canada, January 1994, pp. 18(1)-18(20).

A Task Structure for Case-Based Design, Ashok Goel and B. Chandrasekaran. Proc. 1990 IEEE International Conference on Systems, Man, and Cybernetics, Los Angeles, California, November 1990, pp. 587-592, IEEE Systems, Man, and Cybernetics Society Press.

Unification of Language Understanding, Device Comprehension and Knowledge Acquisition, Ashok Goel, Kavi Mahesh, Justin Peterson and Kurt Eiselt. Proc. 1996 Cognitive Science Conference, San Diego, July 1996.

Use of Device Models in Adaptation of Design Cases, Ashok Goel and B. Chandrasekaran. Proc. Second DARPA Case-Based Reasoning Workshop, Pensacola, Florida, May 1989, pp. 100-109, Los Altos, CA: Morgan Kaufmann.

A New Heuristic Approach for Dual Control, Juan Carlos Santamaria, Ashwin Ram. AAAI-97 Workshop on On-Line Search, Providence, RI, 1997.

Autonomous agents engaged in a continuous interaction with an incompletely known environment face the problem of dual control (Feldbaum, 1965). Simply stated, actions are necessary not only for studying the environment, but also for making progress on the task. In other words, actions must bear a ``dual'' character: They must be investigators to some degree, but also directors to some degree. Because the number of variables involved in the solution of the dual control problem increases with the number of decision stages, the exact solution of the dual control problem is computationally intractable except for a few special cases. This paper provides an overview of dual control theory and proposes a heuristic approach towards obtaining a near-optimal dual control method that can be implemented. The proposed algorithm selects control actions taking into account the information contained in past observations as well as the possible information that future observations may reveal. In short, the algorithm anticipates the fact that future learning is possible and selects the control actions accordingly. The algorithm uses memory-based methods to associate long-term benefit estimates to belief states and actions, and selects the actions to execute next according to such estimates. The algorithm uses the outcome of every experience to progressively refine the long-term benefit estimates so that it can make better, improved decisions as it progresses. The algorithm is tested on a classical simulation problem.
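The "dual" character of actions described in this abstract can be sketched in a few lines. This is not the paper's memory-based algorithm: the controller below simply scores each action as an exploitative value estimate plus an exploration bonus that shrinks as the action is tried more often, so every choice is partly director and partly investigator. All names (`DualController`, `bonus`) are invented for illustration.

```python
# Illustrative sketch of dual-control-style action selection, not the
# paper's near-optimal heuristic. Each action's score combines a value
# estimate (director term) with an information bonus (investigator term).

import math

class DualController:
    def __init__(self, actions, bonus=1.0):
        self.actions = list(actions)
        self.bonus = bonus                      # weight on the exploration term
        self.totals = {a: 0.0 for a in self.actions}
        self.counts = {a: 0 for a in self.actions}

    def score(self, action):
        n = self.counts[action]
        if n == 0:
            return float("inf")                 # untried actions are maximally informative
        value = self.totals[action] / n         # director term: estimated benefit
        total = sum(self.counts.values())
        explore = self.bonus * math.sqrt(math.log(1 + total) / n)
        return value + explore                  # investigator term added on top

    def act(self):
        """Pick the action whose combined score is highest."""
        return max(self.actions, key=self.score)

    def observe(self, action, reward):
        """Refine the long-term benefit estimate from the outcome."""
        self.totals[action] += reward
        self.counts[action] += 1
```

The anticipation of future learning in the actual algorithm is far richer (it reasons over belief states and future observations); the sketch only shows the basic trade-off between acting on current estimates and acting to improve them.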

A Neural Architecture for a Class of Abduction Problems, Ashok Goel and J. Ramanujam. To appear in IEEE Transactions on Systems, Man and Cybernetics.

Use of Diagnostic Experiences in Experience-Based Innovative Design, Sattiraju Prabhakar and Ashok Goel. Proc. Tenth SPIE Conference on Applications of AI: Knowledge-Based Systems, Orlando,Florida, April 1992, pp. 420-434, SPIE Press.

Io, Ganymede and Callisto - a Multiagent Robot Janitorial Team, Tucker Balch, Gary Boone, Tom Collins, Harold Forbes, Doug MacKenzie, Juan Carlos Santamaria.

Georgia Tech won the Office Cleanup Event at the 1994 AAAI Mobile Robot Competition with a multi-robot cooperating team. This paper describes the design and implementation of these reactive trash-collecting robots, including details of multiagent cooperation, color vision for the detection of perceptual object classes, temporal sequencing of behaviors for task completion, and a language for specifying motor schema-based robot behaviors.

A Model-Based Theory of Adaptive Design for New Operating Environments, Sattiraju Prabhakar, Ashok Goel and Sambasiva Bhatta. Proc. Third International Conference on Computational Models of Creative Design, Heron Island, Australia, December 1995, pp. 267-301.

A Model-Based Approach to Redesign, Ashok Goel, Andres Gomez, Jeffrey Pittges, Murali Shankar and Eleni Stroulia. Proc. Thirteenth SPIE Knowledge Based Systems Conference, Orlando, April 1994, pp. 164-171, SPIE Press.

A Model-Based Approach to Case Adaptation, Proc. Thirteenth Annual Conference of the Cognitive Science Society, Chicago, August 1991, pp. 143-148, Hillsdale, NJ: Lawrence Erlbaum.

Representation, Organization, and Use of Topographic Models of Physical Spaces for Route Planning, Ashok Goel, Todd Callantine, Murali Shankar, and B. Chandrasekaran. Proc. Seventh IEEE Conference on Artificial Intelligence Applications, Miami Beach, Florida, February 1991, pp. 308-314, IEEE Computer Society Press.

A Model-Based Approach to Blame Assignment: Revising the Reasoning Steps of Problem Solvers, Eleni Stroulia and Ashok Goel. To appear in Proc. National Conference on Artificial Intelligence (AAAI-96), Portland, Oregon, August 1996.

Representation of Design Functions in Experience-Based Design, Ashok Goel. Intelligent Computer Aided Design, D. Brown, M. Waldron, and H. Yoshikawa (editors), pp. 283-308, Amsterdam, Netherlands: North-Holland, 1992.

A Knowledge-based Selection Mechanism for Strategic Control with Application in Design, Diagnosis and Planning, William Punch, Ashok Goel and David Brown. International Journal of Artificial Intelligence Tools, 4(3):323-348, 1996.

Structured Matching: A Task-Specific Technique for Making Decisions, Thomas Bylander, Todd Johnson, and Ashok Goel. Knowledge Acquisition, 3(1):1-20, 1991.

Viewing Nation-States as Cognitive Agents, Ashok Goel, Donald Sylvan and B. Chandrasekaran. Journal of Experimental and Theoretical Artificial Intelligence.

A Functional Approach to Program Understanding, Eleni Stroulia and Ashok Goel. Proc. AAAI-92 workshop on AI and Automated Program Understanding, San Jose, July 1992, pp. 120-124.

Virtual Prototyping for Product Demanufacture and Service Using a Virtual Design Studio Approach, Proc. 1995 ASME Computers in Engineering Conference, Boston, pp. 951-958, 1995.

A Cross-Domain Experiment in Case-Based Design Support: ArchieTutor, Ashok Goel, Michael Pearce, Ali Malkawi, and Kim Liu. Proc. AAAI-93 Workshop on Case-Based Reasoning, July 1993, pp. 111-117.

A Control Architecture for Run-Time Method Selection, Ashok Goel and Todd Callantine. Proc. AAAI 1991 Workshop on Cooperation Among Heterogeneous Intelligent Systems, Anaheim, California, July 1991.

A Control Architecture for Redesign and Design Verification, Ashok Goel and Sattiraju Prabhakar. Proc. 1994 Australian-New Zealand Intelligent Information Systems Conference, Brisbane, Queensland, Australia, Nov. 29 - Dec. 2, 1994, pp. 377-381.

A Control Architecture for Model-Based Redesign Problem Solving, Ashok Goel and Sattiraju Prabhakar. Proc. IJCAI-1991 Workshop on AI in Design, Sydney, Australia, August 1991, pp. 121-136.

What is Abductive Reasoning?, Ashok Goel and Gerard Montgomery. Neural Network Review, 3(4):181-187, June 1990.

What is a Robot Architecture Anyway? Turing Equivalence versus Organizing Principles, Ronald C. Arkin.

Over the years, there has been seemingly endless debate on how robot software architectures differ from each other and how they resemble each other. Often points are made that some architectures can do one thing while another cannot, or that in fact they are equivalent. The question is posed ``Just what does it mean when we say that an architecture is different in some respect from another or that they are in some ways equivalent?'' An effort is made in this paper to answer that question.

A Case-Based Tool for Conceptual Design Problem Solving, Ashok Goel, Janet Kolodner, Michael Pearce, Richard Billington, and Craig Zimring. Proc. Third DARPA Workshop on Case-Based Reasoning, Washington D.C., May 1991, pp. 109-120, Los Altos, CA: Morgan Kaufmann.

Invention as an Opportunistic Enterprise, Marin Simina, Janet Kolodner, Ashwin Ram, Michael Gorman. Abstracted in Nineteenth Annual Conference of the Cognitive Science Society, Stanford, CA, 1997. Technical Report GIT-CogSci-97/04, Cognitive Science Program, Georgia Institute of Technology, Atlanta, GA, 1997.

This paper identifies goal handling processes that begin to account for the kind of processes involved in invention. We identify new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. We focus on invention goals, which address significant enterprises associated with an inventor. Invention goals represent ``seed'' goals of an expert, around which the whole knowledge of an expert gets reorganized and grows more or less opportunistically. Invention goals reflect the idiosyncrasy of thematic goals among experts. They constantly increase the sensitivity of individuals for particular events that might contribute to their satisfaction. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We propose mechanisms to explain: (1) how Bell's early thematic goals gave rise to the new goals to invent the multiple telegraph and the telephone, and (2) how the new goals interacted opportunistically. Finally, we describe our computational model, ALEC, that accounts for the role of goals in invention.