%TI KEYNOTE: Observations from Studying Cognitive Systems in Context
%AU David Woods
%SC Saturday, August 13, 6:00 p.m.

%TI PLENARY: Identifying the Modules of the Mind with fMRI: Imaging the Biological Stages in Visual and Language Processing
%AU Walter Schneider
%AU Steven Small (discussant)
%SC Sunday, August 14, 9-10:30 a.m.

%TI PLENARY: A Picture is Worth a Thousand Words -- but that's the Problem
%AU Lila Gleitman
%AU Paul Smolensky (discussant)
%SC Monday, August 15, 9-10:30 a.m.

%TI PLENARY: The Role of Existing Knowledge in Generalization
%AU Michael J. Pazzani
%AU Mark Keane (discussant)
%SC Tuesday, August 16, 9-10:30 a.m.

%TI PLENARY PANEL: Cognitive Science 2004: The Last 10 Years
%AU Tony Simon (moderator)
%AU Joseph Bates
%AU Dedre Gentner
%AU Jim Greeno
%AU Gil Harman
%AU Michael Pazzani
%AU Walter Schneider
%SC Tuesday, August 16, 2-3:30 p.m.

%TI WORKSHOP: Education in Cognitive Science: Planning for the 21st Century
%AU N. Nersessian (chair)
%AU J.L. Kolodner (chair)
%SC Wednesday, August 17, 9 a.m.-5:30 p.m.

%TI TALK SESSION: Categorization
%SC Sunday, August 14, 11-12:30
%AB Cabrera, "Functional and Conditional Equivalence:  Conceptual Contributions From Behavior Analysis"
    Pevtzow & Goldstone, "Categorization and the Parsing of Objects"
    Kurbat, Smith, & Medin, "Categorization, Typicality, and Shape Similarity"
    Kruschke & Erickson, "Learning of Rules That Have High-Frequency Exceptions:  New Empirical Data and a Hybrid Connectionist Model"
    Miller, "Modeling Inter-Category Typicality within a Symbolic Search Framework"

%TI TALK SESSION: Reasoning
%SC Sunday, August 14, 11-12:30
%AB Byrne & Tasso, "Counterfactual Reasoning:  Inferences From Hypothetical Conditionals"
    Melis, "How Mathematicians Prove Theorems"
    Bush, Johnson, & Seifert, "The Implications of Corrections:  Then Why Did You Mention It?"
    Ohlsson & Robin, "The Power of Negative Thinking:  The Central Role of Modus Tollens in Human Cognition"
    Tabachneck, Koedinger, & Nathan, "Toward a Theoretical Account of Strategy Use and Sense-Making in Mathematics Problem Solving"

%TI TALK SESSION: Collaborative Problem Solving
%SC Sunday, August 14, 2-3:30
%AB Derry & Tookey, "Effects of Collaborative Interaction and Computer Tool Use"
    Engle & Greeno, "Managing Disagreement in Intellectual Conversations:  Coordinating Interpersonal and Conceptual Concerns in the Collaborative Construction of Mathematical Explanations"
    Coulson & Flor, "Rational Choice and Framing Devices:  Argumentation and Computer Programming"
    Liu & Sycara, "Distributed Meeting Scheduling"
    Turner & Eaton, "Handling Unanticipated Events During Collaboration"

%TI TALK SESSION: Representation in Connectionist Networks
%SC Sunday, August 14, 2-3:30
%AB Dennis, "The Null List Strength Effect in Recognition Memory:  Environmental Statistics and Connectionist Accounts"
    Phillips, "Strong Systematicity Within Connectionism:  The Tensor-Recurrent Network"
    Niklasson & van Gelder, "Can Connectionist Models Exhibit Non-Classical Structure Sensitivity?"
    French, "Dynamically Constraining Connectionist Networks to Produce Distributed, Orthogonal Representations to Reduce Catastrophic Interference"
    Tesar & Smolensky, "Synchronous Firing Variable Binding is a Tensor Product Representation With Temporal Role Vectors"

%TI TALK SESSION: Situated Natural Language
%SC Sunday, August 14, 4-5:30
%AB Carpenter & Alterman, "A Taxonomy for Planned Reading"
    Moorman & Ram, "Integrating Creativity and Reading:  A Functional Approach"
    Peterson, Mahesh, Goel & Eiselt, "KA:  Situating Natural Language Understanding in Design Problem Solving"
    Cassell, Stone, Douville, Prevost, Achorn, Steedman, Badler, & Pelachaud, "Modeling the Interaction Between Speech and Gesture"
    Nelson, Lehman, & John, "Integrating Cognitive Capabilities in a Real-Time Task"

%TI TALK SESSION: Foundations
%SC Sunday, August 14, 4-5:30
%AB Tash, "Formal Rationality and Limited Agents"
    Byrne, "Integrating, Not Debating, Situated Action and Computational Models:  Taking the Environment Seriously"
    van Gelder & Niklasson, "Classicalism and Cognitive Architecture"
    Slezak, "Situated Cognition:  Empirical Issue, `Paradigm Shift' or Conceptual Confusion?"

%TI TALK SESSION: Analogical Reasoning
%SC Monday, August 15, 11-12:30
%AB Clausner, "Commonsense Knowledge and Conceptual Structure in Container Metaphors"
    Burstein, "Case Age:  Selecting the Best Exemplars for Plausible Reasoning Using Distance in Time or Space"
    Faries & Shlossberg, "The Effect of Similarity on Memory for Prior Problems"
    Gentner & Bowdle, "The Coherence Imbalance Hypothesis:  A Functional Approach to Asymmetry in Comparison"
    Forbus, Ferguson, & Gentner, "Incremental Structure-Mapping"

%TI TALK SESSION: Sentence Processing
%SC Monday, August 15, 11-12:30
%AB Ferstl, "The Construction-Integration Model:  A Framework for Studying Context Effects in Sentence Processing"
    Stevenson, "A Unified Model of Preference and Recovery Mechanisms in Human Parsing"
    Mahesh & Eiselt, "Uniform Representations for Syntax-Semantics Arbitration"
    Mayberry III & Miikkulainen, "Lexical Disambiguation Based on Distributed Representations of Context Frequency"
    Burgess & Lund, "Multiple Constraints in Syntactic Ambiguity Resolution:  A Connectionist Account of Psycholinguistic Data"

%TI TALK SESSION: Problem Solving
%SC Monday, August 15, 2-3:30
%AB Ahn, Bailenson, & Gordon, "Causal Attribution as Mechanism-Based Story Construction:  An Explanation of the Conjunction Fallacy and the Discounting Principle"
    Recker, Govindaraj, & Vasandani, "Troubleshooting Strategies in a Complex, Dynamical Domain"
    Catrambone, "The Effects of Labels in Examples on Problem Solving Transfer"
    Vollmeyer, Holyoak, & Burns, "Goal Specificity in Hypothesis Testing and Problem Solving"
    Blessing & Ross, "Problem Content Affects the Categorization and Solutions of Problems"

%TI TALK SESSION: Brain Modeling
%SC Monday, August 15, 2-3:30
%AB Grunewald & Grossberg, "Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data"
    Wan, Touretzky, & Redish, "Computing Goal Locations From Place Codes"
    Bullinaria, "Connectionist Modelling of Spelling"
    Braisby, Franks & Hampton, "On the Psychological Basis for Rigid Designation"

%TI TALK SESSION: Visual Perception
%SC Monday, August 15, 4-5:30
%AB Fischer, "Attention Allocation During Movement Preparation"
    Francis & Grossberg, "How Do Representations of Visual Form Organize Our Percepts of Visual Motion?"
    Isaak & Just, "The Curtate Cycloid Illusion:  Cognitive Constraints on the Processing of Rolling Motion"
    Polk & Farah, "A Simple Co-Occurrence Explanation for the Development of Abstract Letter Identities"
    Jin, "Computational Simulation of Depth Perception in the Mammalian Visual System"

%TI TALK SESSION: Mental Models
%SC Monday, August 15, 4-5:30
%AB Glasgow, "Array Representations for Model-Based Spatial Reasoning"
    Bara, Bucciarelli, Johnson-Laird, & Lombardo, "Mental Models in Propositional Reasoning"
    Samarapungavan & Wiers, "Do Children Have Epistemic Constructs About Explanatory Frameworks:  Examples From Naive Ideas About the Origin of Species"
    Johnson-Laird & Barres, "When `Or' Means `And':  A Study in Mental Models"
    Moore & Schwartz, "Mental Models for Proportional Reasoning"

%TI TALK SESSION: Learning
%SC Tuesday, August 16, 11-12:30
%AB Doane, Sohn, Adams, & McNamara, "Learning from Instruction:  A Comprehension-Based Approach"
    Bielaczyc, Pirolli, & Brown, "Collaborative Explanations and Metacognition:  Identifying Successful Learning Activities in the Acquisition of Cognitive Skills"
    Leake, "Towards a Computer Model of Memory Search Strategy Learning"
    Cox, "Machines That Forget:  Learning From Retrieval Failure of Mis-Indexed Explanations"
    Sekaran & Sen, "Learning With Friends and Foes"

%TI TALK SESSION: Belief Modeling
%SC Tuesday, August 16, 11-12:30
%AB Elio & Pelletier, "The Effect of Syntactic Form on Simple Belief Revisions and Updates"
    Barnden, Helmreich, Iverson, & Stein, "Combining Simulative and Metaphor-Based Reasoning About Beliefs"
    Hewson, "Empirical Evidence Regarding the Folk Psychological Concept of Belief"
    Veale & Keane, "Belief Modelling, Intentionality and Perlocution in Metaphor Comprehension"
    Chalupsky & Shapiro, "SL:  A Subjective, Intensional Logic of Belief"

%TI POSTER+DISCUSSANT SESSION: Speech
%SC Sunday, August 14, 11-12:30
%AB 
    Roelofs, "On-Line Versus Off-Line Priming of Word-Form Encoding in Spoken Word Production"
    Meijer, "Towards a New Model of Phonological Encoding"
    Markey, "Acoustic-Based Syllabic Representation and Articulatory Gesture Detection:  Prerequisites for Early Childhood Phonetic and Articulatory Development"
    Content & Sternon, "Modelling Retroactive Context Effects in Spoken Word Recognition With a Simple Recurrent Network"
    Harm, Altmann, & Seidenberg, "Using Connectionist Networks to Examine the Role of Prior Constraints in Human Learning"
    Abu-Bakar & Chater, "Distribution and Frequency:  Modelling the Effects of Speaking Rate on Category Boundaries Using a Recurrent Neural Network"
    Gaskell & Marslen-Wilson, "Inference Processes in Speech Perception"

%TI POSTER+DISCUSSANT SESSION: Analogy
%SC Sunday, August 14, 2-3:30
%AB M. Burstein (discussant)
    Keane, "Adaptation as a Selection Constraint on Analogical Mapping"
    Ohnishi, Suzuki, & Shigemasu, "Similarity by Feature Creation:  Reexamination of the Asymmetry of Similarity"
    Hummel, Melz, Thompson, & Holyoak, "Mapping Hierarchical Structures With Synchrony for Binding:  Preliminary Investigations"
    Wharton & Lange, "Analogical Transfer Through Comprehension and Priming"
    Burns & Holyoak, "Competing Models of Analogy:  ACME Versus Copycat"
    Law, Forbus, & Gentner, "Simulating Similarity-Based Retrieval:  A Comparison of ARCS and MAC/FAC"
    Ferguson, "MAGI:  Analogy-Based Encoding Using Regularity and Symmetry"

%TI POSTER+DISCUSSANT SESSION: Visual Reasoning
%SC Sunday, August 14, 4-5:30
%AB J. Glasgow (discussant)
    Cheng, "An Empirical Investigation of Law Encoding Diagrams for Instruction"
    Cox, Stenning, & Oberlander, "Graphical Effects in Learning Logic:  Reasoning, Representation and Individual Differences"
    Lindsay, "Understanding Diagrammatic Demonstrations"
    Merrill & Reiser, "Scaffolding Effective Problem Solving Strategies in Interactive Learning Environments"
    Gattis & Holyoak, "How Graphs Mediate Analog and Symbolic Representation"
    Tabachneck, Leonardo, & Simon, "How Does an Expert Use a Graph?  A Model of Visual and Verbal Inferencing in Economics"
    Narayanan, Suwa, & Motoda, "A Study of Diagrammatic Reasoning From Verbal and Gestural Data"
    Clement, "Imagistic Simulation and Physical Intuition in Expert Problem Solving"

%TI POSTER+DISCUSSANT SESSION: Perception
%SC Monday, August 15, 11-12:30
%AB H. Narayanan (discussant)
    Thorisson, "Simulated Perceptual Grouping:  An Application to Human-Computer Interaction"
    Gilbert & Richards, "Using Trajectory Mapping to Analyze Musical Intervals"
    Tenenbaum, "Functional Parts"
    McGraw, Rehling, & Goldstone, "Letter Perception:  Toward a Conceptual Approach"
    Schyns & Bulthoff, "Viewpoint Dependence and Face Recognition"
    Olds, "A Connectionist Account of Global Precedence:  Theory and Data"
    McAuley, "Time as Phase:  A Dynamic Model of Time Perception"
    Large, "Models of Metrical Structure in Music"

%TI POSTER+DISCUSSANT SESSION: Learning
%SC Monday, August 15, 2-3:30
%AB G. Collins (discussant)
    Oehlmann, Edwards, & Sleeman, "Changing the Viewpoint:  Re-Indexing by Introspective Questioning"
    Suwa & Motoda, "PCLEARN:  A Model for Learning Perceptual-Chunks"
    Seger, "Multiple Learning Mechanisms Within Implicit Learning"
    Jimenez & Cleeremans, "Direct and Indirect Measures of Implicit Learning"
    Cox & Ram, "Failure-Driven Learning as Input Bias"
    Fox & Leake, "Using Introspective Reasoning to Guide Index Refinement in Case-Based Reasoning"
    Van Dyne & Tsatsoulis, "An Experiment to Determine Improvements in Automated Problem Solving in a Complex Problem Domain"
    Hiraki, "Abstraction of Sensory-Motor Features"

%TI POSTER+DISCUSSANT SESSION: Language Acquisition
%SC Monday, August 15, 4-5:30
%AB L. Gleitman (discussant)
    Taraban & Taraban, "A Lexical Model of Learning to Read Single Words Aloud"
    Ling, "Predicting Irregular Past Tenses:  Comparing Symbolic and Connectionist Models Against Native English Speakers"
    Peterson & Billman, "Correspondences Between Syntactic Form and Meaning:  From Anarchy to Hierarchy"
    Batali, "Artificial Evolution of Syntactic Aptitude"
    Finch & Chater, "Distributional Bootstrapping:  From Word Class to Proto-Sentence"
    Gillis, Daelemans, & Durieux, "Are Children `Lazy Learners'?  A Comparison of Natural and Machine Learning of Stress"
    Hastings & Lytinen, "Objects, Actions, Nouns, and Verbs"
    Lampinen & Faries, "Levels of Semantic Constraint and Learning Novel Words"
    Cartwright & Brent, "Segmenting Speech Without a Lexicon:  Evidence for a Bootstrapping Model of Lexical Acquisition"
    Westermann & Miikkulainen, "Verb Inflections in German Child Language:  A Connectionist Account"

%TI POSTER+DISCUSSANT SESSION: Syntactic Processing
%SC Tuesday, August 16, 11-12:30
%AB J. Holbrook (discussant)
    Ferstl, "Context Effects in Syntactic Ambiguity Resolution:  The Location of Prepositional Phrase Attachment"
    Schutze, "A Connectionist Model of Verb Subcategorization"
    Gibson & Loomis, "A Corpus Analysis of Recency Preference and Predicate Proximity"
    Blackwell & Bates, "Inducing Agrammatic Profiles in Normals"
    Pearlmutter, Daugherty, MacDonald, & Seidenberg, "Modeling the Use of Frequency and Contextual Biases in Sentence Processing"
    Spivey-Knowlton & Tanenhaus, "Immediate Effects of Discourse and Semantic Context in Syntactic Processing:  Evidence from Eye-Tracking"
    Burgess, Tanenhaus, & Hoffman, "Parafoveal and Semantic Effects on Syntactic Ambiguity Resolution"

%TI SYMPOSIUM: Scientific Creativity: Multidisciplinary Perspectives
%AU N. Nersessian (chair)
%AU J. Clement
%AU K. Dunbar
%AU R. Jones
%AU R. Tweney
%SC Sunday, August 14, 11-12:30

%TI SYMPOSIUM: Animal Cognition
%AU A. Francis (chair)
%AU D. Rumbaugh
%AU M. Tomasello
%AU D. Washburn
%SC Sunday, August 14, 2-3:30

%TI SYMPOSIUM: Learning New Features of Representation
%AU R.L. Goldstone (chair)
%AU P. Schyns (chair)
%AU B. French
%AU D.L. Medin
%AU M. Mozer
%AU J.-P. Thibaut
%SC Sunday, August 14, 4-5:30

%TI SYMPOSIUM: Cognitive Science Meets Cognitive Engineering
%AU R. Catrambone (chair)
%AU S.T. Dumais
%AU J. Elkerton
%AU B.E. John
%AU M.G. Shafto
%SC Monday, August 15, 11-12:30

%TI SYMPOSIUM: Visual Reasoning in Discovery, Instruction and Problem Solving
%AU N.H. Narayanan (chair)
%AU M. Hegarty
%AU R. Hall
%AU N. Nersessian
%SC Monday, August 15, 2-3:30

%TI SYMPOSIUM: The Role of Cases in Learning
%AU T. Koschmann (chair)
%AU A. Collins
%AU K. Holyoak
%AU G. Klein
%AU J. Kolodner
%SC Monday, August 15, 4-5:30

%TI SYMPOSIUM: Collaborative Knowledge
%AU P. Thagard (chair)
%AU K. Dunbar
%AU E. Hutchins
%AU G. Olson
%SC Tuesday, August 16, 11-12:30

%TI Causal Attribution As Mechanism-Based Story Construction: An Explanation Of The Conjunction Fallacy And The Discounting Principle
%AU Woo-kyoung Ahn
%AU Jeremy Bailenson
%AU Brian Gordon
%PU Proc. CogSci-94, pp. 9-14
%SC Monday, August 15, 2-3:30
%AB We propose that causal attribution involves constructing a coherent
    story using mechanism information (i.e., the processes underlying
    the relationship between the cause and the effect). This processing
    account can explain both the conjunction effect (i.e., conjunctive
    explanations being rated more probable than their components) and
    the discounting effect (i.e., the effect of one cause being
    discounted when another cause is already known to be true).  In the
    current experiment, both effects occurred with mechanism-based
    explanations but not with covariation-based explanations in which
    the cause-effect relationship was phrased in terms of covariations
    without referring to mechanisms. We discuss why the current results
    pose difficulties for previous attribution models in Psychology and
    Artificial Intelligence.

%TI Distribution and frequency: Modelling the effects of speaking rate on category boundaries using a recurrent neural network 
%AU Mukhlis Abu-Bakar
%AU Nick Chater
%PU Proc. CogSci-94, pp. 3-8
%SC Sunday, August 14, 11-12:30
%AB We describe a recurrent neural network model of rate effects on the
    syllable-initial voicing distinction, specified by voice-onset-time
    (VOT).  The stimuli were stylized /bi/ and /pi/ syllables covarying
    in VOT and syllable duration.  Network performance revealed a
    systematic rate effect: as syllable duration increases, the category
    boundary moves toward longer VOT values, mirroring human
    performance.  Two factors underlie this effect: the range of
    training stimuli with each VOT and syllable duration, and their
    frequency of occurrence.  The latter influence was particularly
    strong, consistent with exemplar-based accounts of human category
    formation.

%TI Mental Models in Propositional Reasoning
%AU B.G. Bara
%AU M. Bucciarelli
%AU P.N. Johnson-Laird
%AU V. Lombardo
%PU Proc. CogSci-94, pp. 15-20
%SC Monday, August 15, 4-5:30
%AB A cognitive account of propositional reasoning must consider both the
    representation of the propositions (premises and states of affairs) and
    the context in which the propositions are used. This paper is concerned
    with reasoning processes involving three different connectives
    (conjunctive, conditional and disjunctive connectives) in three different
    tasks (accomplishing a request for action expressed by a premise, judging
    a state of affairs as true or false with respect to a premise, drawing an
    inference from two premises).  Our claim is that the ability to reason
    with connectives is explained in terms of construction and manipulation
    of mental models. We present a computer model that takes as input the
    modelistic representations of the premises and the specific state of
    affairs, compares such models and gives rise to a series of model
    manipulations in order to produce a result, i.e. an action, a judgement
    or an inference. A computer program reproduces the performances of
    subjects of different age groups, predicting both correct and erroneous
    inferences.

%TI Combining Simulative and Metaphor-Based Reasoning about Beliefs
%AU John A. Barnden 
%AU Stephen Helmreich
%AU Eric Iverson
%AU Gees C. Stein
%PU Proc. CogSci-94, pp. 21-26
%SC Tuesday, August 16, 11-12:30
%AB An unprecedented combination of simulative and metaphor-based
    reasoning about beliefs is achieved in an AI system, ATT-Meta.  Much
    mundane discourse about beliefs uses conceptual metaphors (e.g.,
    MIND AS CONTAINER) productively, and ATT-Meta's metaphor-based
    reasoning accordingly leads to crucial discourse comprehension
    decisions.  ATT-Meta's non-metaphorical mode of belief reasoning
    includes simulative reasoning (SR).  In ATT-Meta, metaphor-based
    reasoning can block and otherwise influence the course of SR.  Also,
    ATT-Meta can nest SR and metaphor-based reasoning within themselves
    and each other.  As well as currently allowing ATT-Meta to
    simulatively reason about beliefs about beliefs ..., the nesting
    will in the near future allow the system to handle chained
    metaphors, ascribe its own metaphor-based reasoning to other agents,
    and apply simulative reasoning to purely metaphorical agents.

%TI Artificial Evolution of Syntactic Aptitude
%AU John Batali
%PU Proc. CogSci-94, pp. 27-32
%SC Monday, August 15, 4-5:30
%AB Populations of simple recurrent neural networks were subject to
    simulations of evolution where the selection criterion was the
    ability of a network to learn to recognize strings from context free
    grammars.  After a number of generations, networks emerged that use
    the activation values of the units feeding their recurrent
    connections to represent the depth of embedding in a string.
    Networks inherited innate biases to accurately learn members of a
    class of related context-free grammars, and, while learning, passed
    through periods during which exposure to spurious input interfered
    with their subsequent ability to learn a grammar.

%TI Interactive Model-Driven Case Adaptation for Instructional Software Design
%AU Benjamin Bell
%AU Smadar Kedar
%AU Ray Bareiss
%PU Proc. CogSci-94, pp. 33-38
%SC Monday, August 15, 7:30-9
%AB Research in case-based design has demonstrated some capability to retrieve
    relevant designs and to adapt them automatically to satisfy new design
    constraints. However, some domains are less amenable to automated
    adaptation, particularly when the cases are very complex and when
    relationships among the design components are difficult to express
    formally. The design of interactive learning environments is one such
    domain. We describe a case-based approach to instructional software design
    which utilizes interactive, model-driven case adaptation. Our model for
    computer-based instruction is Goal-Based Scenarios. We describe a tool,
    Goal-Based Scenario Builder, which supports interactive adaptation of
    instructional software using the model, and illustrate its use in adapting
    an example case of a successful instructional software program, Sickle Cell
    Counselor.

%TI Collaborative Explanations and Metacognition: Identifying Successful Learning Activities in the Acquisition of Cognitive Skills  
%AU K. Bielaczyc
%AU P. Pirolli
%AU A. Brown
%PU Proc. CogSci-94, pp. 39-44
%SC Tuesday, August 16, 11-12:30
%AB Individual differences in collaborative explanations during
    learning were analyzed to determine effects on problem solving.
    Twenty-five university students with no prior programming experience
    worked through a sequence of programming lessons.  For the Target
    lesson, subjects studied instructional texts and examples in either
    mixed performance-level dyads (collaborative dyad group) or
    individually (individual group) prior to individual programming
    activities.  The collaborative dyad subjects were divided into equal
    sized groups of high-benefit and low-benefit dyad subjects based on
    Target lesson programming performance.  Between-group analyses of the
    characteristics of the explanations generated by high-benefit and
    low-benefit dyad subjects were investigated, including (a) explanation
    and metacognitive strategies, (b) content of elaborations, and (c)
    manner of generating elaborations.  High-benefit dyad subjects were
    found to generate both a higher quantity and higher quality of
    elaborations.  These results are compared to findings from prior
    research on the self-explanation processes of solo learners.             

%TI Inducing Agrammatic Profiles in Normals  
%AU Arshavir Blackwell
%AU Elizabeth Bates
%PU Proc. CogSci-94, pp. 45-50
%SC Tuesday, August 16, 11-12:30
%AB The selective vulnerability of morphology in agrammatic aphasia is
    often interpreted as evidence that closed-class items reside in a
    particular part of the brain (i.e., Broca's area); thus, damage to a
    part of the language processor maps onto behavior in a transparent
    fashion.  We propose that the selective vulnerability of grammatical
    morphemes in receptive processing may be the result of decrements in
    overall processing capacity, and not the result of a selective
    lesion.  We demonstrate agrammatic profiles in healthy adults who
    have their processing capacity diminished by engaging in a secondary
    task during testing.  Our results suggest that this selective
    profile does not necessarily indicate the existence of a distinct
    sub-system specialized for the implicated aspects of syntax, but
    rather may be due to the vulnerability of these forms in the face of
    global resource diminution, at least in grammaticality judgment.

%TI Problem Content Affects the Categorization and Solutions of Problems
%AU Stephen B. Blessing
%AU Brian H. Ross
%PU Proc. CogSci-94, pp. 51-55
%SC Monday, August 15, 2-3:30
%AB In many domains, the content of a problem (i.e., its surface cover
    story) provides useful clues as to the type of problem it is and its
    solution. Three experiments examined this role of problem content on
    the problem categorization and solution of algebra word problems
    with experienced subjects, by manipulating only the content of the
    problems.  When a problem's content was highly correlated with its
    deep structure (e.g., a content of cars driving for a
    distance-time-rate problem), people were able to categorize the
    problem after seeing a smaller portion of it compared to a baseline
    with contents uncorrelated to the problem deep structure. In
    addition, for more complex problems in which irrelevant information
    had been added, problem solving performance was higher and people
    showed greater sensitivity to the relevance of the information. When
    a problem's content suggested a different (inappropriate) type of
    problem, people required a greater part of the problem to categorize
    it and were slower and less accurate at solving the problem. These
    results suggest that content may be influential even for experienced
    problem solvers.

%TI On the Psychological Basis for Rigid Designation
%AU Nick Braisby
%AU Bradley Franks
%AU James Hampton
%PU Proc. CogSci-94, pp. 56-60
%SC Sunday, August 14, 4-5:30
%AB Kripke (1972) and Putnam (1975a; 1975b) have argued forcefully for
    the philosophical view of word meaning known as rigid designation.
    While certain psychological studies have appeared to offer this view
    support (Keil, 1986; Rips, 1989), we argue that these have not
    provided an exhaustive evaluation.  In particular, the original
    discussions of Kripke and Putnam reveal that their view rests on an
    explicit appeal to intuition concerning word use in a range of
    different scenarios.  The study reported here investigates word use
    under three such types of scenarios, for a variety of natural kind
    terms, by investigating subjects' judgements of truth or falsity for
    a range of statement types.  We argue that the results obtained
    indicate that the intuition on which rigid designation rests is not
    one which is generally true of agents' language use.  Further, we
    obtain patterns of apparent contradiction which appear strictly
    inconsistent with rigid designation and which require an account of
    word meaning which allows that the sense of words may vary
    systematically with context (Franks & Braisby, 1990).

%TI The Theory-Ladenness of Data: An Experimental Demonstration
%AU William F. Brewer
%AU Clark A. Chinn
%PU Proc. CogSci-94, pp. 61-65
%SC Monday, August 15, 7:30-9
%AB Most philosophers of science now believe that scientific data are
    theory laden, i.e., the evaluation of data is influenced by prior
    theoretical beliefs.  Although there is historical and psychological
    evidence that is consistent with the theory-laden position,
    experimental evidence is needed to directly test whether prior
    beliefs influence the evaluation of scientific data.  In a fully
    counterbalanced design, one group of subjects received evidence that
    dinosaurs were cold-blooded, and another group of subjects received
    evidence that dinosaurs were warm-blooded.  The subjects reported a
    strong belief in whichever theory they had read about.  Then
    subjects were presented with a piece of data that supported one
    theory and contradicted the other theory.  The identical piece of
    data was rated as more believable when it was consistent with the
    subject's theory than when it was inconsistent.  These results
    provide clear support for the position that scientific data are
    theory laden.

%TI Kant and Cognitive Science
%AU Andrew Brook
%PU Proc. CogSci-94, pp. 66-71
%SC Monday, August 15, 7:30-9
%AB Some of Kant's ideas about the mind have had a huge influence on
    cognitive science, in particular his view that sensory input has to be
    worked up using concepts or concept-like states and his conception of the
    mind as a system of cognitive functions. Other ideas of Kant's about the
    mind have not been assimilated into cognitive science, including
    important ideas about synthesis, mental unity and consciousness and
    self-consciousness. Work of P. M. and P. S. Churchland, Dennett,
    Flanagan, Jerry Fodor, Patricia Kitcher, Martindale, Sellars, and
    Treisman is briefly discussed.

%TI A Connectionist Model of the Development of Velocity, Time, and Distance Concepts
%AU David Buckingham
%AU Thomas R. Shultz
%PU Proc. CogSci-94, pp. 72-77
%SC Monday, August 15, 7:30-9
%AB Connectionist simulations of children's acquisition of velocity (v),
    time (t), and distance (d) concepts were conducted using a
    generative algorithm, cascade-correlation (Fahlman & Lebiere, 1990).
    Diagnosis of network rules was consistent with the developmental
    course of children's concepts (Wilkening, 1981, 1982) and predicted
    some new stages as well.  Networks integrated the defining
    dimensions of the concepts first by identity rules (e.g., v = d),
    then additive rules (e.g., v = d-t), and finally multiplicative
    rules (e.g., v = d/t).  Psychological effects of differential memory
    demands were also simulated.  It is argued that cascade-correlation
    implements an explicit mechanism of developmental change involving
    incremental learning and qualitative increases in representational
    power.

%TI Connectionist Modelling of Spelling
%AU John A. Bullinaria
%PU Proc. CogSci-94, pp. 78-83
%SC Monday, August 15, 2-3:30
%AB We present a new connectionist model of human spelling and
    investigate some of its properties.  Although based on Sejnowski &
    Rosenberg's (1987) NETtalk model of reading, it requires no
    pre-processing of the training data to align the phonemes and
    letters.  The model achieves 100% performance on the training data
    (2837 monosyllabic words including many irregular words) and has a
    generalization performance of about 89%.  Under appropriate
    conditions it exhibits symptoms similar to developmental surface
    dyslexia and acquired surface dysgraphia.  However, its inability to
    account for phonological dysgraphia and lexical decision leads us to
    believe that it is a promising candidate for the rule based part of
    a dual route model but not a complete model of spelling on its own.

%TI Internal Representations of a Connectionist Model of Reading Aloud
%AU John A. Bullinaria
%PU Proc. CogSci-94, pp. 84-89
%SC Monday, August 15, 7:30-9 
%AB We use hierarchical cluster analysis, principal component analysis,
    multi-dimensional scaling and discriminant analysis to investigate
    the internal representations learnt by a recent connectionist model
    of reading aloud.  The learning trajectories of these
    representations may help us understand reading development in
    children and the results of naming latency experiments in adults.
    Studying the effects of network damage on these representations
    seems to provide insight into the mechanisms underlying acquired
    surface dyslexia.  The discussion of the various techniques used may
    also prove useful in analysing the functioning of other
    connectionist systems.

%TI Multiple Constraints in Syntactic Ambiguity Resolution: A Connectionist Account of Psycholinguistic Data
%AU Curt Burgess
%AU Kevin Lund
%PU Proc. CogSci-94, pp. 90-95
%SC Monday, August 15, 11-12:30
%AB We implement a constraint satisfaction connectionist style model that
    accounts for data from three psycholinguistics experiments investigating
    the gardenpath effect with reduced relative constructions.  Normative
    data was collected on the stimuli used in experiments by Burgess and
    Tanenhaus (1992) and Ferreira and Clifton (1986) and this data served as
    the input for the simulation.  We have demonstrated with this set of
    simulations that a plausible theoretical framework for a range of these
    results is a hierarchical connectionist network which is sensitive to a
    number of constraints inherent in the input stimuli.  The model accounts
    for the top-down effect of context, the contribution of the bottom-up
    morphological frequency asymmetry of the verb, and the probabilistic
    nature of the disambiguating preposition.  These effects are sensitive to
    the timecourse of processing as well. The pattern of results from the
    psycholinguistic data suggest that syntactic processing is a confluence
    of multiple constraints that represent both bottom-up and top-down
    influences in processing.  These results are incompatible with a
    deterministic parsing model.  The hierarchical connectionist style model
    presented in this paper is sensitive to the range of constraints
    discussed above and is offered as a more adaptive theoretical model that
    can capture the domain of effects found in the literature encompassing
    local syntactic ambiguity resolution.

%TI Parafoveal and Semantic Effects on Syntactic Ambiguity Resolution
%AU Curt Burgess
%AU Michael K. Tanenhaus
%AU Miriam Hoffman
%PU Proc. CogSci-94, pp. 96-99
%SC Tuesday, August 16, 11-12:30
%AB Subjects were presented with strongly past-participle biased sentences,
    The portrait sketched by the tree was very beautiful, in a self-paced
    reading time task. Sentences were displayed two words at a time (e.g.,
    The portrait / sketched by ...) so that the verb and disambiguating
    preposition were read together.  In Experiment 1, a set of materials
    constructed to minimize the past-tense bias with an inanimate NP was
    compared with a less constraining set of sentences.  The syntactic
    gardenpath usually associated with the reduced-relative construction was
    not present with the more constraining materials.  In Experiment 2, using
    the more constraining materials, preposition length was manipulated so
    that subjects read sentences with both short (i.e., by) and long (i.e.,
    underneath) prepositions. No syntactic gardenpaths occurred with
    sentences with the past-participle bias and short prepositions; however,
    when the same sentences were read with the long prepositions, the
    syntactic gardenpath was present. This result is inconsistent with a
    deterministic parser. We expand on our previous proposals that the parser
    must be able to take into account both semantic and verb-form
    information, as well as, parafoveal disambiguating information in the
    form of the preposition.

%TI Competing Models of Analogy: ACME Versus Copycat
%AU Bruce D. Burns
%AU Keith J. Holyoak
%PU Proc. CogSci-94, pp. 100-105
%SC Sunday, August 14, 2-3:30
%AB ACME and Copycat have been viewed as competing models of analogy
    making.  Mitchell (1993) makes three major criticisms of ACME in
    arguing for Copycat's superiority: that because ACME considers all
    syntactically possible mappings it is psychologically implausible
    and computationally infeasible; that its representations are rigid
    and hand-tailored for each problem; and that ACME's representations
    are semantically empty.  To evaluate these criticisms we applied
    ACME to simulating problems in the only domain addressed by Copycat,
    letter-string analogies such as, "If abc is changed into abd, how
    would you change kji in the same way?"  Using representations that
    include only knowledge available to Copycat, ACME generated the most
    common solutions that people and Copycat produce.  In addition, ACME
    was able to generate some solutions produced by people but that are
    impossible for Copycat, demonstrating that in some respects ACME is
    a more flexible analogical reasoner than is Copycat.  These
    simulations answer each of Mitchell's criticisms of ACME.  ACME can
    incorporate domain-relevant knowledge to allow a principled
    reduction in the number of mappings considered; it can generate
    novel representations based on its domain-general constraints; and
    it can incorporate semantic content into its representations.  In
    addition, ACME has the advantage of being applicable to many
    different domains.

%TI Case Age: Selecting the Best Exemplars for Plausible Reasoning Using Distance in Time or Space
%AU Mark H. Burstein
%PU Proc. CogSci-94, pp. 106-111
%SC Monday, August 15, 11-12:30
%AB The age of a case (in the CBR sense) is the amount of time that has
    elapsed between the time that the case originally occurred and the time
    of the current reasoning activity.  People engaged in plausible reasoning
    tasks will, under appropriate circumstances, use the age of retrieved
    prior cases to filter and discard them, or to select among alternatives
    by their recency.  This paper examines how the age of a case (and its
    spatial analog) are used by people in plausible reasoning and case-based
    reasoning tasks. I will argue that (1) the ate of a retrieved case is an
    important factor in relevance judgements for certain kinds of inferences.
    (2) When case age is relevant, more recent cases are usually, but not
    always, preferred to older ones (the "all other things being equal"
    caveat).  Finally, I will argue that, somewhat surprisingly, (3) case age
    cannot be used as an index into memory given some commonly held
    assumptions about the nature of the retrieval process because it varies
    with the time of retrieval.  This limits its use to post-retrieval
    processes, such as the filtering of already retrieved cases.

%TI The Implications of Corrections:  Then Why Did You Mention It?
%AU Julie G. Bush
%AU Hollyn M. Johnson
%AU Colleen M. Seifert
%PU Proc. CogSci-94, pp. 112-117
%SC Sunday, August 14, 11-12:30
%AB How can misreported information be effectively corrected?  Wilkes
    and Leatherbarrow (1988) found that people relied upon invalidated
    information to answer questions despite their awareness of its
    inaccuracy, a phenomenon called the "continued influence effect"
    (Johnson & Seifert, in press).  But corrections in which an
    assertion is made and then denied (e.g., "X is true ... actually, X
    is untrue") may violate important conversational assumptions.  Grice
    (1967/1989) and others have argued that people expect speakers to
    offer only information that is both truthful and conversationally
    relevant; thus, people may seek interpretations for corrections that
    will incorporate both the literal meaning and the conversational
    implications of the contradictory statements.  Our hypothesis was
    that corrections would be more successful when they explained why
    the original information was asserted.  An empirical study showed
    that corrections that accounted for conversational implications
    (e.g., "X, which had originally been believed because of Y, is
    actually untrue") could more effectively reduce the continued use of
    discredited information.  Additionally, the results show that
    reiterating the literal content of a correction may actually be
    perceived as implying that the correction statement should be
    disbelieved.  Since the conversational implications of corrections
    critically shape comprehension, their examination is crucial in
    domains (such as courtrooms, newspapers, and classrooms) where
    informational updates frequently occur.

%TI Integrating, Not Debating, Situated Action and Computational Models: Taking the Environment Seriously
%AU Michael D. Byrne
%PU Proc. CogSci-94, pp. 118-123
%SC Sunday, August 14, 4-5:30
%AB A recent issue of the journal Cognitive Science (1993, vol. 17, no.
    1) centered around a debate between two "camps" within the field,
    the "situated action" (or SA) camp and the "traditional," symbol
    processing camp. Though the debate in that journal suggests that, at
    some levels, symbol processing and SA are incommensurable, this
    paper disputes that view. If the message of the SA community is
    taken to be that traditional approaches neglect the importance of
    the environment, then not only is the message an important one, but
    the typical symbol processing system is guilty as charged. However,
    this does not mean that, in principle, symbol processing systems
    must have this limitation. The two approaches can work hand-in-hand
    to produce more general and more accurate computational models. A
    framework of building models of the environment and having models of
    cognitive agents work with those models is proposed, from which a
    smooth integration of SA and symbol processing is not only possible,
    but desirable. The framework proposed here is instantiated with a
    production system called S-CAPS, and the efficacy of building models
    of both the problem-solver and the problem environment is
    demonstrated.

%TI Counterfactual Reasoning: Inferences from Hypothetical Conditionals
%AU Ruth M.J. Byrne   
%AU Alessandra Tasso
%PU Proc. CogSci-94, pp. 124-129
%SC Sunday, August 14, 11-12:30
%AB Hypothetical reasoning -- thinking about what might happen in the
    future or what might have happened in the past -- enables us to go
    beyond factual reality.  We suggest that human reasoners construct a
    more explicit mental representation of hypothetical conditionals,
    such as, If Linda were in Dublin then Cathy would be in Galway, than
    of factual conditionals, such as, if Linda is in Dublin then Cathy
    is in Galway.  When people think about the factual conditional, they
    keep in mind the affirmative situation -- Linda is in Dublin, Cathy
    is in Galway, and they maintain only an implicit awareness that
    there may be alternatives to this situation.  In contrast, when they
    think about the hypothetical conditional, they keep in mind not only
    the affirmative situation, but also the presupposed negative one
    (Linda is not in Dublin, Cathy is not in Galway). The postulated
    differences in mental representations lead us to expect differences
    in the frequency of inferences that people make from the two sorts
    of conditionals, and we report the results of an experiment that
    corroborates this prediction. The psychological data have
    implications for philosophical and linguistic accounts of
    counterfactual conditionals, and for artificial intelligence
    programs designed to reason hypothetically.

%TI Functional and Conditional Equivalence:  Conceptual Contributions from Behavior Analysis
%AU Angel Cabrera
%PU Proc. CogSci-94, pp. 130-135
%SC Sunday, August 14, 11-12:30
%AB Behavior analysis has recently developed a new paradigm for the
    study of categorization and language based on the mathematical
    notion of equivalence.  Inspired by this paradigm, this paper
    presents a definitional framework that could be relevant for several
    of the phenomena under study in Cognitive Science.  First,
    categories are viewed as classes of functional equivalence.  By
    doing so, results from behavior analysis and cognitive psychology
    seem to converge towards an experience-based interpretation of
    category basicness.  Second, conditional equivalence is proposed as
    the basis for symbol-meaning and symbol-symbol relationships.
    Transfer of function through conditional links is suggested as the
    mechanism of connection between language and other aspects of
    cognition.  The adoption and extension of these functionalist
    formalisms provides us with significant methodological, conceptual
    and even empirical advantages.

%TI Lexical Segmentation: the role of sequential statistics in supervised and un-supervised models
%AU Paul Cairns
%AU Richard Shillcock
%AU Nick Chater
%AU Joe Levy
%PU Proc. CogSci-94, pp. 136-141
%SC Monday, August 15, 7:30-9
%AB The use of transitional probabilities between phonetic segments as a
    cue for segmenting words from English speech is investigated. We
    develop a series of class-based n-gram and feature-based neural
    network models that enable us to quantify the contribution of
    low-level statistics to word boundary prediction. Training data for
    our models is representative of genuine conversational speech: a
    phonological transcription of the London-Lund corpus. These simple
    models can be purely bottom-up and hence valid bootstrapping models
    of infant development. We go on to demonstrate how the boostrapping
    models mimic the Metrical Segmentation Strategy of Cutler and Norris
    (1988), and we discuss the implications of this result.

%TI A Taxonomy for Planned Reading
%AU Tamitha Carpenter
%AU Richard Alterman
%PU Proc. CogSci-94, pp. 142-147
%SC Sunday, August 14, 4-5:30
%AB Early computational models of reading treated reading as a disembodied
    process of examining a piece of text sequentially and in its entirety.
    More recent work has shown that reading does not always occur
    sequentially, and that embodying reading in a larger activity is
    beneficial to the reading process.  This paper will present a
    cognitive model that uses reading plans to read instructional text
    non-sequentially and in the context of an activity.  To support this
    model, we will discuss: 1) a taxonomy of reading plans and their
    functions; 2) a taxonomy of reading sub-plans and their roles; and 3)
    procedures for adapting reading plans.  In addition, the results of a
    protocol study are given which support planned reading as a cognitive
    model.

%TI Modelling the Interaction between Speech and Gesture
%AU Justine Cassell
%AU Matthew Stone
%AU Brett Douville
%AU Scott Prevost
%AU Brett Achorn
%AU Mark Steedman
%AU Norm Badler
%AU Catherine Pelachaud
%PU Proc. CogSci-94, pp. 153-158
%SC Sunday, August 14, 4-5:30
%AB This paper describes an implemented system that generates spoken
    dialogue, including speech, intonation, and gesture, using two
    copies of an identical program that differ only in knowledge of the
    world and which must cooperate to accomplish a goal.  The output of
    the dialogue generation is used to drive a three-dimensional
    interactive animated model -- two graphic figures on a computer
    screen who speak and gesture according to the rules of the system.
    The system is based upon a formal, predictive and explanatory theory
    of the gesture-speech relationship.  A felicitous outcome is a
    working system to realize autonomous animated conversational agents
    for virtual reality and other purposes, and a tool for investigating
    the relationship between speech and gesture.

%TI The Effects of Labels in Examples on Problem Solving Transfer
%AU Richard Catrambone
%PU Proc. CogSci-94, pp. 159-164
%SC Monday, August 15, 2-3:30
%AB It is hypothesized that labels in examples help learners group a set
    of steps and try to explain why those steps belong together.  The
    result of these grouping and self-explanation processes might be the
    formation of a subgoal.  It is conjectured that the meaningfulness
    of the label itself might not be critical in order for the grouping
    and self-explanation processes to occur.  This conjecture is
    supported in an experiment in which subjects studying examples in
    probability that had steps labeled transferred to novel problems
    more successfully than subjects whose examples did not contain
    labels.  Furthermore, subjects who saw less meaningful labels
    transferred as successfully as subjects studying examples with more
    meaningful labels.  Thus, it appears that the meaningfulness of the
    label does not seem to affect subgoal formation as much as the
    presence of a label.  This result supports the interpretation that
    subgoal learning is affected by labels and that labels produce this
    benefit by helping learners group the steps into a purposeful unit,
    perhaps through a self-explanation process.

%TI SL: A Subjective, Intensional Logic of Belief
%AU Hans Chalupsky
%AU Stuart C. Shapiro
%PU Proc. CogSci-94, pp. 165-170
%SC Tuesday, August 16, 11-12:30
%AB Logics of belief are usually either quite complex, unintuitive, make
    overly idealistic assumptions, or all of the above, because they
    have to cope with the unusual characteristics of the belief operator
    (relation, predicate). Some of these problematic characteristics are
    referential opacity, the possible falsehood of objects of belief,
    belief recursion, identification of referents from outside of the
    belief operator in quantification contexts, etc. The difficulties
    faced by traditional logical treatments seem to stem mainly from the
    fact that an essentially subjective, intensional phenomenon gets
    analyzed from an objective, outside observer's point of view in an
    extensional, logical framework.  As an alternative, we propose a
    subjective, intensional logic SL, which takes seriously the usual
    characterization of belief as a propositional attitude, that is, in
    SL belief is treated as a relation between an agent and a
    proposition (an intensional object). As results we gain technical
    simplicity and a simple, intuitive semantics for belief sentences.

%TI An Empirical Investigation Of Law Encoding Diagrams For Instruction
%AU Peter C-H. Cheng
%PU Proc. CogSci-94, pp. 171-176
%SC Sunday, August 14, 4-5:30
%AB Law Encoding Diagrams, LEDs, are knowledge representations that
    correctly encode systems of one or more laws using the geometric
    and/or the topological structure of diagrams.  In an instructional
    role, LEDs aim to focus learning on the formal relations defined by
    the correct laws, whilst using diagrammatic representations to aid
    comprehension.  LEDs can be viewed as intermediate representations
    that aim to bridge the conceptual gulf between abstract laws and the
    behaviour of phenomena.  It is anticipated LEDs will be adopted as
    key models in the foundation of expertise.  This paper describes an
    investigation in which LEDs for momentum and energy conservation
    were used for instruction.  The LEDs were implemented in a computer
    based discovery learning environment and the subjects given only
    minimal instruction on their use in problem solving.  However, half
    the subjects used the LEDs for successful post-test solutions of
    different classes of problem and exhibited strategies that were
    expert-like, in marked contrast to their novice-like pre-test
    performance.

%TI Are Scientific Theories that Predict Data More Believable than Theories that Retrospectively Explain Data? A Psychological Investigation
%AU Clark A. Chinn
%PU Proc. CogSci-94, pp. 177-182
%SC Monday, August 15, 7:30-9
%AB Philosophers have disagreed about whether theories that make
    successful predictions are more believable than theories that merely
    explain data that have already been discovered.  Predictivists
    believe that theories that make successful predictions have an edge
    over theories that offer only retrospective explanations of the same
    data.  Nonpredictivists maintain that whether a theory predicts data
    or explains data retrospectively is irrelevant to the believability
    of the theory.  The purpose of this paper is to report on three
    psychological experiments designed to determine whether
    undergraduates behave as predictivists or nonpredictivists when they
    evaluate theories.  Results indicate that subjects behaved as
    nonpredictivists when one theory predicted a body of data and a
    second theory was devised later to explain the same data
    retrospectively.  However, subjects behaved as predictivists in the
    situation in which a theory retreated in the face of anomalous data
    by adding an auxiliary hypothesis; for instance, theories that
    predicted data by adding the necessary auxiliary hypotheses before
    the data came in were more believable than theories that added the
    auxiliary hypothesis in reaction to the data.  These results suggest
    that cognitive models of theory choice that assume that people are
    nonpredictivists may require modification.

%TI The Architecture of Intuition: Converging Views from Physics Education and Linguistics
%AU Ming Ming Chiu
%AU Joshua Gutwill
%PU Proc. CogSci-94, pp. 183-188
%SC Monday, August 15, 7:30-9
%AB This paper analyzes two converging views of the architecture of
    intuition.  A. diSessa and L. Talmy, working independently in
    different fields (physics education and linguistics), have
    formulated strikingly similar theories of intuition.  Both view
    people's intuitions about forces as simple pieces of knowledge
    organized heterarchically.  However, Talmy's force dynamic patterns
    have more system-wide structure than diSessa's phenomenological
    primitives.  Using these primitives, people generate common sense
    explanations for a wide variety of situations.  Moreover, people may
    build upon these intuitions while studying formal disciplines such
    as physics.  However, several primitives directly conflict with
    physics concepts and may account for resilient misconceptions.
    Finally, intuitions may also provide the basis for understanding
    social and psychological phenomena.

%TI Commonsense Knowledge and Conceptual Structure in Container Metaphors
%AU Timothy C. Clausner
%PU Proc. CogSci-94, pp. 189-194
%SC Monday, August 15, 11-12:30
%AB Cognitive grammar provides an analytic framework in which the
    semantic value of linguistic expressions is characterized relative
    to domains of presupposed knowledge. Cognitive metaphor theory holds
    that metaphorical language involves a mapping of conceptual
    structure from a source domain to a target domain. Containers are
    one such pervasive structure.  This investigation proposes a
    detailed representation for the domain CONTAINER and applies it in
    the analysis of metaphorical expressions mapping CONTAINER onto
    target domains ARGUMENT and LINGUISTIC EXPRESSION. Each source
    domain word is analyzed with respect to which aspects of the
    CONTAINER domain structure it refers, and whether it refers to a 2D
    or 3D bounded region. The pattern of aspects mapped suggests that
    spatial containment, content, and material container object comprise
    major aspects of the 3D CONTAINER domain. The target domains are
    demonstrated to be structured according to this container organization.
    The results demonstrate that cognitive semantic analysis can reveal
    specific structures of commonsense knowledge which are prerequisite
    for language use.

%TI A Descriptive Model of Question Asking During Story Acquisition Interviews
%AU Chip Cleary
%AU Ray Bareiss
%PU Proc. CogSci-94, pp. 195-200
%SC Monday, August 15, 7:30-9
%AB In this paper, we provide a taxonomy of the processes which people
    use to generate questions for a type of interviewing task.
    Specifically, we analyze "story acquisition interviews" in which the
    interviewer is a knowledge engineer who asks questions of a domain
    expert to acquire material for a conversational hypermedia system.
    Such interviews have proven to be surprisingly difficult to conduct
    successfully. We have identified a number of "local" strategies
    which successful interviewers use to develop coherent, interesting
    sequences of questions and we have positioned these strategies
    within a model which describes the global interviewing process. This
    descriptive model is an initial step towards a methodology
    prescribing how to perform these interviews effectively.

%TI Imagistic Simulation and Physical Intuition in Expert Problem Solving
%AU John Clement
%PU Proc. CogSci-94, pp. 201-206
%SC Sunday, August 14, 4-5:30
%AB This paper discusses evidence from thinking aloud case studies indicating
    that part of the knowledge used by expert problem solvers consists of
    concrete physical intuitions rather than abstract verbal principles or
    equations.  One purpose of the paper is to provide empirical
    documentation of behaviors such as spontaneous references to using
    intuition, depictive hand motions, and dynamic imagery reports.  Although
    the role of imagery in lower level tasks is becoming more accepted, we
    currently lack sufficient empirical evidence for its use in higher level
    thinking.  In order to account for cases where subjects appear to be
    "running a simulation" of an event on the basis of a physical intuition,
    a model is presented in which a somewhat general and permanent perceptual
    motor schema controls a more specific and temporary image of a situation.
    This process is termed "imagistic simulation".  The imagery can be
    kinesthetic as well as visual, and dynamic rather than static, suggesting
    the involvement of the motor system.  Although rules for making
    inferences from networks of causal relations have been studied, we lack
    models which analyze the nature of mental simulations underlying a single
    causal relationship.  Such physical intuitions and simulations may
    provide basic building blocks for constructing visualizable models in
    science.

%TI Modeling Retroactive Context Effects in Spoken Word Recognition with a Simple Recurrent Network
%AU Alain Content
%AU Pascal Sternon
%PU Proc. CogSci-94, pp. 207-212
%SC Sunday, August 14, 11-12:30
%AB We present a new variant of a simple recurrent network to model
    auditory word recognition in continuous speech and address the issue
    of lexical segmentation. Simulations based on small word sets show
    that the system provides a near-optimal solution to the opposite
    constraints of speed, which requires that lexical processing be
    immediate, and reliability, which imposes that identification
    decisions be postponed until unambiguous information is available.
    Contrary to an often-heard statement, the simulations show that the
    existence of embedded words is not incompatible with the notion of
    continuous on-line lexical processing.
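
    As a point of reference for readers unfamiliar with the
    architecture, the sketch below is a bare-bones Elman-style simple
    recurrent network in Python; it is illustrative only and is not the
    authors' variant (layer sizes, weights, and the activation function
    are arbitrary assumptions).

        import numpy as np

        # Bare-bones Elman-style simple recurrent network (illustrative
        # only; not the authors' variant).  At each step the hidden layer
        # receives the current input plus its own previous state.
        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 10, 20, 10
        W_ih = rng.normal(0, 0.1, (n_hid, n_in))
        W_hh = rng.normal(0, 0.1, (n_hid, n_hid))
        W_ho = rng.normal(0, 0.1, (n_out, n_hid))

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def run_sequence(inputs):
            """Feed a sequence of input vectors; return the output at each step."""
            h = np.zeros(n_hid)                    # context starts empty
            outputs = []
            for x in inputs:
                h = sigmoid(W_ih @ x + W_hh @ h)   # new hidden state
                outputs.append(sigmoid(W_ho @ h))  # e.g. activation of lexical units
            return outputs

        seq = [rng.integers(0, 2, n_in).astype(float) for _ in range(5)]
        print(run_sequence(seq)[-1].round(2))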

%TI Individual Differences and Predictive Validity in Student Modeling        
%AU Albert T. Corbett
%AU John R. Anderson       
%AU Valerie H. Carver
%AU Scott A. Brancolini
%PU Proc. CogSci-94, pp. 213-218
%SC Monday, August 15, 7:30-9
%AB This paper evaluates the student modeling procedure in the ACT
    Programming Tutor (APT).  APT is a practice environment that
    provides assistance to students as they write short programs.  The
    tutor is constructed around a set of several hundred programming
    rules, called the ideal student model, that allow the program to
    solve exercises along with the student.  As the student works, the
    tutor maintains an estimate of the probability that the student has
    learned the rules in the ideal model, in a process we call knowledge
    tracing.  The cognitive model, and the learning and performance
    assumptions that underlie knowledge tracing are described.  The
    assumptions that underlie knowledge tracing also yield performance
    predictions.  These predictions provide a good fit to students'
    performance in completing tutor exercises, but a more important
    issue is how well the model predicts students' performance outside
    the tutor environment.  A previous study showed that the model
    provides a good fit to average posttest performance across students,
    but is less sensitive to individual differences.  This paper
    describes a method of individualizing learning and performance
    estimates on-line in the tutor and assesses the validity of the
    resulting performance predictions.
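
    The abstract does not spell out the update rule, but the
    knowledge-tracing computation it refers to can be sketched as a
    Bayesian update of the probability that a rule is known; the
    parameter values below (guess, slip, and learning rates) are
    illustrative assumptions, not the tutor's.

        # Minimal sketch of a knowledge-tracing update (parameter values are
        # illustrative, not taken from the paper).
        def update_mastery(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
            """Update the probability that a rule is known after one response."""
            if correct:
                evidence = (p_known * (1 - p_slip)) / (
                    p_known * (1 - p_slip) + (1 - p_known) * p_guess)
            else:
                evidence = (p_known * p_slip) / (
                    p_known * p_slip + (1 - p_known) * (1 - p_guess))
            # The student may also learn the rule at this opportunity.
            return evidence + (1 - evidence) * p_learn

        p = 0.1
        for outcome in [True, False, True, True]:
            p = update_mastery(p, outcome)
            print(round(p, 3))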

%TI Rational choice and framing devices:  Argumentation and computer programming
%AU Seana Coulson
%AU Nick V. Flor
%PU Proc. CogSci-94, pp. 219-224
%SC Sunday, August 14, 2-3:30
%AB The argumentative discourse of computer programmers engaged in a
    collaborative programming task was analyzed as an instance of ecologically
    valid reasoning behavior.  Teams of expert programmers were brought into
    a laboratory setting to work cooperatively on a software maintenance
    task.  Arguments which occurred spontaneously in the course of the task
    were examined with respect to: (a) their effect on task performance; and
    (b) the sorts of inferential machinery programmers use when
    they reason with one another.  Arguments were found to be important in
    the formulation of plans as well as the negotiation of strategic
    priorities with respect to the task.  Pragmatic features of the
    programmers' discourse revealed extensive use of framing devices whose
    efficacy depended upon interpretation in the context of linked pragmatic
    scales.

%TI Graphical effects in learning logic: reasoning, representation and individual differences
%AU Richard Cox
%AU Keith Stenning
%AU Jon Oberlander
%PU Proc. CogSci-94, pp. 237-242
%SC Sunday, August 14, 4-5:30
%AB Hyperproof is a computer program created by Barwise and Etchemendy
    for teaching logic using multimodal graphical and sentential
    methods, inspired by their theories of heterogeneous reasoning
    (Barwise and Etchemendy 1994).  Elsewhere, we have proposed a theory
    of the cognitive impact of assigning information to different
    modalities (Stenning and Oberlander 1992).  Our view is that where
    diagrams are advantageous, it is because they enforce the
    representation of information, leading to *weak* expressiveness,
    thereby facilitating inference.  The present study tests and
    develops these claims by comparing the effects of teaching
    undergraduate logic classes using Hyperproof and a control syntactic
    teaching method.  Results indicate that there is significant
    transfer from the logic courses to logical and analytical reasoning
    problems. There are also significant interactions between
    theoretically motivated pre-course aptitude measures and teaching
    method; the interactions influence post-course reasoning performance
    in transfer domains.  Hyperproof boosts students previously weak on
    items which benefit from diagram use, whereas the syntactic course
    appears to degrade the same group of students' graphical strategies.
    As well as being theoretically interesting, these results provide
    support for the important practical conclusion that individual
    differences in aptitude should be taken into account in choosing
    teaching technique.

%TI Failure-Driven Learning as Input Bias
%AU Michael T. Cox
%AU Ashwin Ram
%PU Proc. CogSci-94, pp. 231-236
%SC Monday, August 15, 2-3:30
%AB Self-selection of input examples on the basis of performance failure
    is a powerful bias for learning systems. The definition of what
    constitutes a learning bias, however, has been typically restricted
    to bias provided by the input language, hypothesis language, and
    preference criteria between competing concept hypotheses. But if
    bias is taken in the broader context as any basis that provides a
    preference for one concept change over another, then the paradigm of
    failure-driven processing indeed provides a bias. Bias is exhibited
    by the selection of examples from an input stream that are examples
    of failure; successful performance is filtered out. We show that the
    degrees of freedom are fewer in failure-driven learning than in
    success-driven learning, and that learning is facilitated because of
    this constraint. We also broaden the definition of failure, provide
    a novel taxonomy of failure causes, and illustrate the interaction
    of both in a multistrategy learning system called Meta-AQUA.
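
    One concrete reading of "bias by selecting failures" is a filter
    that keeps only the episodes on which the current model's prediction
    fails; the sketch below is a minimal illustration with invented
    example data, not Meta-AQUA itself.

        # Illustrative only: treat the input stream as (situation, outcome)
        # pairs and keep just the failures as learning examples.
        def failure_driven_examples(episodes, predict):
            """Yield only episodes on which the current model's prediction fails."""
            for situation, outcome in episodes:
                if predict(situation) != outcome:
                    yield situation, outcome     # successes are filtered out

        episodes = [("bird", "flies"), ("penguin", "walks"), ("sparrow", "flies")]
        naive = lambda _: "flies"                # stand-in model: everything flies
        print(list(failure_driven_examples(episodes, naive)))
        # -> [('penguin', 'walks')]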

%TI Machines that Forget: Learning from retrieval failure of mis-indexed explanations
%AU Michael T. Cox
%PU Proc. CogSci-94, pp. 225-230
%SC Tuesday, August 16, 11-12:30
%AB A reasoner may fail at a cognitive task, not because it does not
    have appropriate knowledge with which to reason, but instead because
    it does not have the proper index or cue with which to retrieve such
    knowledge from memory. The reasoner knows this memory item; it
    simply cannot remember the item. This paper argues that forgetting
    provides an opportunity for learning through memory reorganization.
    A reasoner that takes full advantage of such opportunities, however,
    must be able to reason about its own memory system. To do so, it
    must possess a language for declaratively representing its reasoning
    failures and must reflectively inspect such representations if it is
    to fully explain the reason for its failure. Once such an error is
    understood as a memory failure, the remedy for forgetting is to
    readjust the indexes so that the knowledge is properly retrieved in
    similar future situations.

%TI The Null List Strength Effect in Recognition Memory: Environmental Statistics and Connectionist Accounts
%AU Simon Dennis
%PU Proc. CogSci-94, pp. 243-247
%SC Sunday, August 14, 2-3:30
%AB In recognition paradigms, increasing the number of occurrences or
    presentation time in a study list of some words improves performance
    on these words (the item strength effect), but does not affect the
    performance on other words (null list strength effect). In contrast,
    adding new items results in a deterioration of performance on the
    other words (list length effect). Taken together these results place
    strong constraints on models of recognition memory.  To explain
    these data an account based on optimisation to the environment is
    presented. A summary is given of environmental analyses which
    suggest that (1) the likelihood of recurrence of a word within a
    context increases as the number of occurrences increases; (2) the
    repetition rates of other words in a context have no significant
    effect on the recurrence probability of a word; and (3) the
    recurrence probability of a word drops as a function of the number
    of words since the last occurrence of that word. A training set
    which reflected these constraints was constructed and presented to
    an optimising connectionist network which was designed to extract
    recurrence statistics (the Hebbian Recurrent Network). The resultant
    model is able to model all three of the effects outlined above.
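
    The environmental statistics summarized in point (1) can be
    approximated by simple counting over a corpus divided into contexts;
    the toy sketch below uses an invented two-context corpus and is not
    the authors' analysis.

        from collections import Counter

        # Toy estimate of P(word recurs later in the same context) as a
        # function of how often it has already occurred in that context.
        contexts = [
            "the cat saw the cat and the dog".split(),
            "a dog chased a cat".split(),
        ]
        recurs, totals = Counter(), Counter()
        for ctx in contexts:
            for i, word in enumerate(ctx):
                k = ctx[:i].count(word)              # occurrences so far
                totals[k] += 1
                recurs[k] += word in ctx[i + 1:]     # does it occur again?

        for k in sorted(totals):
            print(k, round(recurs[k] / totals[k], 2))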

%TI Effects of Collaborative Interaction and Computer Tool Use
%AU Sharon Derry
%AU Keith Tookey
%PU Proc. CogSci-94, pp. 248-253
%SC Sunday, August 14, 2-3:30
%AB We compared cognitive processing of two complex arithmetic word
    problems by college students randomly assigned to four different
    situating tool and social contexts: individualized problem solving
    with pen and paper; pair problem solving with pen and paper;
    individualized problem solving on TAPS, a computer-based problem
    solving tool; and collaborative problem solving on TAPS.  TAPS users
    differed from users of conventional tools in that they required
    relatively more time for problem solving, spent more time in
    planning activity, and proportionately less time reading.  With
    respect to the influences of social (versus individual) problem
    solving, collaboration also produced significantly more planning
    behavior, such that the combined use of TAPS and collaboration
    produced a marked increase in planning.  Also, significantly more
    behavior associated with metacognitive monitoring occurred in the
    protocols for pairs.  There was no evidence that use of the TAPS
    tool changed the social nature of the collaboration. However, a
    qualitative analysis yielded interesting information regarding
    negotiation processes underlying pair problem solving.  For example,
    we identified specific reasons why untrained pair problem solving
    does not proceed naturally and smoothly.  Results are interpreted in
    terms of situated cognition theory, although symbolic processing
    theories also can explain much of the data.

%TI Learning from Instruction: A Comprehension-Based Approach
%AU Stephanie M. Doane
%AU Young Woo Sohn
%AU David Adams
%AU Danielle S. McNamara
%PU Proc. CogSci-94, pp. 254-259
%SC Tuesday, August 16, 11-12:30
%AB A comprehension-based approach to learning assumes that incoming
    information and background knowledge are integrated to form a mental
    representation which is subsequently used to incorporate new
    knowledge.  We demonstrate that this approach can indicate when
    people will learn from instructions.  Specifically, we show that a
    computational model based on the construction-integration theory of
    comprehension (Kintsch, 1988) can explain and predict how individual
    users will comprehend help prompts that guide their generation of
    successful complex commands within an operating system. In previous
    empirical studies, we asked users whose UNIX operating system
    experience varied to produce complex UNIX commands, and then
    provided prompts when the commands they produced were erroneous. The
    prompts were designed to assist subjects with both knowledge and
    processes that our previous efforts have suggested are lacking in
    less expert users. The empirical results showed significant
    differences in response to different prompts as a function of
    background knowledge about UNIX.  In the present work, we extended
    our computational model to include comprehension-based learning
    mechanisms.  We modeled a subset of the individuals in the prompting
    study by representing each subject's initial knowledge base, then
    simulating each user's run through the prompting experiment. The
    results show that the modeled performance matches individual
    performance quite well both quantitatively and qualitatively. This
    work has implications for the development of instructional systems,
    and theoretical implications for the construction-integration theory
    of comprehension.

%TI An Experiment to Determine Improvements in Automated Problem Solving in a Complex Problem Domain
%AU M. Van Dyne
%AU C. Tsatsoulis
%PU Proc. CogSci-94, pp. 899-904
%SC Monday, August 15, 2-3:30
%AB A previously constructed prototype expert system was extended to
    include case-based reasoning/learning, in order to determine if the
    automated problem solving behavior could be improved. The initial
    expert system was developed by using an inductive machine learning
    technique on 9,445 data records of pregnant women, providing
    production rules to predict preterm delivery. Its predictive
    accuracy was tested on a separate set of 9,445 data records. Next,
    the capability to reason from both production rules and input test
    cases was added to the system, in addition to the capability to
    internally modify its confidence in each piece of knowledge (rule or
    case) and the relative importance of patient attributes which appear
    to be predictive of preterm delivery. The system was structured such
    that the accuracy of either type of reasoning could be measured
    individually to determine how rule-based and case-based reasoning
    perform alone, and to determine how they perform together.  Results
    show that the predictive accuracy of the system was improved, with
    different trends emerging, dependent on the bias of the learning
    data. Neither system performed as well alone as did both together.
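
    As a rough illustration of combining rule-based and case-based
    predictions under adjustable confidences, the sketch below uses
    invented patient attributes, rules, and numbers; it is not the
    system described above.

        # Toy combination of rule-based and case-based prediction, each
        # weighted by a confidence that is nudged after feedback.  All
        # attributes and values are invented for illustration.
        def rule_predict(patient):
            return patient["contractions"] > 4       # invented production rule

        def case_predict(patient, memory):
            nearest = min(memory, key=lambda m: abs(m["age"] - patient["age"]))
            return nearest["preterm"]

        def combined_predict(patient, memory, conf):
            votes = {True: 0.0, False: 0.0}
            votes[rule_predict(patient)] += conf["rules"]
            votes[case_predict(patient, memory)] += conf["cases"]
            return max(votes, key=votes.get)

        def adjust(conf, source, was_correct, step=0.05):
            change = step if was_correct else -step
            conf[source] = min(1.0, max(0.0, conf[source] + change))

        memory = [{"age": 24, "preterm": True}, {"age": 31, "preterm": False}]
        conf = {"rules": 0.5, "cases": 0.5}
        print(combined_predict({"age": 26, "contractions": 6}, memory, conf))
        adjust(conf, "rules", was_correct=True)      # strengthen the rule source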

%TI Managing Disagreement in Intellectual Conversations: Coordinating Interpersonal and Conceptual Concerns in the Collaborative Construction of Mathematical Explanations
%AU Randi A. Engle
%AU James G. Greeno
%PU Proc. CogSci-94, pp. 266-271
%SC Sunday, August 14, 2-3:30
%AB This paper reports research into how mathematical explanations are
    constructed during conversation based on videotapes of pairs of
    student math teachers collaboratively writing explanations in
    geometry.  In particular, we analyzed how disagreements about parts
    of their explanations were managed in these conversations.  In
    contrast to research on disagreement in everyday conversation,
    explanation disagreements were more likely to overlap with preceding
    turns and to be stated baldly without prefaces, token agreements or
    qualifications.  However, the observed frequencies of different
    kinds of disagreements were not consistent with a model favoring
    explicit substantive disagreement either.  Instead, it is proposed
    that both the interpersonal concerns that would motivate a
    preference for agreement and the conceptual concerns for a quality
    explanation that would motivate a preference for substantive
    disagreement are being managed by participants.  Disagreements are
    co-constructed, and conversants are seen to jointly employ complex
    devices for introducing and managing disagreement across turns that
    can satisfy both kinds of concerns with much less conflict between
    them than might have been expected.

%TI Natural Oculomotor Performance in Looking and Tapping Tasks 
%AU Julie Epelboim
%AU Eileen Kowler 
%AU Mark Edwards
%AU Han Collewijn
%AU Casper J. Erkelens
%AU Zygmunt Pizlo 
%AU Robert M. Steinman
%PU Proc. CogSci-94, pp. 272-277
%SC Monday, August 15, 7:30-9
%AB A unique apparatus recorded eye and head movements of subjects as 
    they tapped or only looked at sequences of 2, 4 or 6 nearby, 3-D 
    targets.  Each sequence was repeated 10 times to allow an 
    opportunity for learning.  A stereotypical pattern of movements 
    was established after 2-3 repetitions.  Subjects almost always 
    looked at each target just before tapping it. Looking-only was more 
    difficult than tapping in that it took more time and, unlike 
    tapping, usually did not benefit from practice. The number of 
    targets in a sequence affected time/target in both tasks.  Sequence 
    length and practice effects show that memory was involved.  The 
    persistent strategy of looking before tapping  and the subjects' 
    inability to tap a well-learned pattern with eyes closed, show that 
    visual cues were also important. We conclude that motor planning
    occurred first at the level of the task and then at the level of
    specific motor programs.  The relative difficulty of the less
    natural, looking-only task, in which the eyes worked without a 
    meaningful cognitive or motor purpose, suggests that efficient eye 
    movement programming requires a natural task of the kind eye 
    movements evolved to serve.

%TI The Effect of Similarity on Memory for Prior Problems 
%AU Jeremiah M. Faries 
%AU Karen R. Schlossberg 
%PU Proc. CogSci-94, pp. 278-282
%SC Monday, August 15, 11-12:30
%AB Students often rely on prior work or previously studied examples to
    help them solve their current problems.  In this paper we
    investigate the relative contributions of easily accessed
    superficial similarity and deep, solution relevant, structural
    similarity to memory for prior problems. Some models of memory for
    analogy suggest that superficial similarity initially selects or
    constrains memory for prior examples, and predict that analogs that
    share both surface and structural similarities are more likely to be
    noticed by novices.  An experiment is reported in which subjects are
    observed as they learn how to program.  We find that people remember
    examples related in terms of structural features alone as frequently
    as those related in terms of both structural and superficial
    features; there is no advantage to having superficial similarities
    as well.  Moreover, even though superficial features are sometimes
    associated with helpful similarities and sometimes with unhelpful
    ones, people still are not misled by superficial similarity when
    that is the only basis for
    similarity.  This finding suggests that models that require
    superficial similarity as a major selection procedure for analogical
    reminding may need to be modified for conditions in which people are
    learning a new skill.

%TI MAGI: Analogy-based Encoding Using Regularity and Symmetry
%AU Ronald W. Ferguson
%PU Proc. CogSci-94, pp. 283-288
%SC Sunday, August 14, 2-3:30
%AB Analogy has always been considered a mechanism for interrelating distinct
    parts of the world, but it is perhaps just as important to consider how
    analogy might be used to break the world into comprehensible parts.  The
    MAGI program uses the Structure-Mapping Engine (SME) to flexibly and
    reliably match a description against itself.  The resulting mapping pulls
    out the two maximally consistent parts of the given description.  MAGI
    then divides out the parts of the mapping and categorizes the mapping as
    symmetrical or regular.  These parts may then be used as the basis for
    new comparisons.  We theorize that MAGI models how people use symmetry
    and regularity to facilitate the encoding task.  We demonstrate this with
    three sets of examples.  First, we show how MAGI can augment traditional
    axis detection and reference frame adjustment in geometric figures.
    Next, we demonstrate how MAGI detects visual and functional symmetry in
    logic circuits, where symmetry of form aids encoding symmetry of
    function.  Finally, to emphasize that regularity and symmetry detection
    is not simply visual, we show how MAGI models some aspects of expectation
    generation in story understanding.  In general, MAGI shows symmetry and
    regularity to be not only pretty, but also cognitively valuable.

%TI Context Effects in Syntactic Ambiguity Resolution: The Location of Prepositional Phrase Attachment
%AU Evelyn Ferstl
%PU Proc. CogSci-94, pp. 295-300
%SC Tuesday, August 16, 11-12:30
%AB Two experiments are reported to test whether the location of
    prepositional phrase attachment can be influenced by syntactic and
    contextual factors. The first experiment tested the hypothesis that
    attachment is delayed until the word after the prepositional phrase.
    Replicating the results of Taraban and McClelland (1988), this
    experiment showed that sentence bias rather than syntactic structure
    determines the ease of processing; attachment effects were observed
    on the words after the noun filler. In addition, using sentences in
    which the noun filler consisted of a compound noun, we also found
    evidence for delayed attachment. Using sentences in which the noun
    filler was modified by an adjective, we found evidence for early
    attachment. In the second experiment, we used context paragraphs to
    induce earlier attachment for the compound noun sentences. When the
    first noun of the compound was mentioned in the prior discourse,
    attachment effects were observed on the disambiguating noun filler.
    When the first noun was not mentioned, attachment effects were
    observed, as in Experiment 1, on the words after the prepositional
    phrase. Thus, the study supports the idea of a context-dependent
    delay strategy for prepositional phrase attachment.

%TI The Construction-Integration Model: A Framework for Studying Context Effects in Sentence Processing
%AU Evelyn Ferstl
%PU Proc. CogSci-94, pp. 289-294
%SC Monday, August 15, 11-12:30
%AB Contextual and pragmatic knowledge facilitates the eventual
    interpretation of a syntactically ambiguous sentence.  However,
    psycholinguistic studies have not provided a clear answer to when
    and how this non-syntactic knowledge is used.  One explanation for
    the discrepancy of the results is that the predictions for parsing
    processes in context cannot be specified unless they are based on a
    theory of text comprehension.  The construction-integration model of
    discourse comprehension (Kintsch, 1988) is proposed as an example
    for such a theory.  The model is parallel and weakly interactive,
    and its psychological validity has been shown in a variety of
    applications.  Three simulations for syntactic ambiguity resolutions
    are presented.  In the first, syntactic constraints are used to
    account for the correct interpretation of a garden-path sentence, as
    well as for common misparses.  In the second example, pragmatic
    knowledge is used to disambiguate a prepositional phrase attachment.
    In the final example, it is shown that the model can also account
    for effects of discourse context in the resolution of prepositional
    phrase attachment ambiguities.

%TI Attention Allocation During Movement Preparation
%AU Martin H. Fischer
%PU Proc. CogSci-94, pp. 307-312
%SC Monday, August 15, 4-5:30
%AB Identification performance was measured for letters which were
    briefly presented at different spatial locations and time delays
    relative to the beginning of manual movement preparation.
    Identification performance depended on the complexity of the
    upcoming movement and decreased prior to movement onset.  Further
    findings of similar identification performance with different
    spatial relations between probe location and manual movement
    direction cast doubt on the generality of a premotor theory of
    attention.

%TI Incremental Structure Mapping
%AU Kenneth D. Forbus
%AU Ronald W. Ferguson
%AU Dedre Gentner
%PU Proc. CogSci-94, pp. 313-318
%SC Monday, August 15, 11-12:30
%AB Many cognitive tasks involving analogy, such as understanding
    metaphors, problem-solving, and learning, require the ability to
    extend mappings as new information is found.  This paper describes a
    new version of SME, called I-SME, that operates incrementally.
    I-SME is inspired by Keane's IAM model and the use of incremental
    mapping in Falkenhainer's PHINEAS learning system.  We describe the
    I-SME algorithm and discuss tradeoffs introduced by incremental
    mapping, including parallel versus serial processing and pragmatic
    influences.  The utility of I-SME is illustrated by two examples.
    First, we show that I-SME can account for the psychological results
    found by Keane on a serial version of the Holyoak & Thagard
    attribute mapping task.  Second, we describe how I-SME is used in
    the Minimal Analogical Reasoning System (MARS), which uses analogy
    to solve engineering thermodynamics problems.

%TI Learning the Arabic Plural: The Case for Minority Default Mappings in Connectionist Networks.
%AU Neil Forrester
%AU Kim Plunkett
%PU Proc. CogSci-94, pp. 319-323
%SC Monday, August 15, 7:30-9
%AB Connectionist accounts of inflectional morphology have focussed on
    the domain of the English Past Tense (e.g.  Rumelhart & McClelland
    1986; Plunkett & Marchman 1993). In this inflectional domain, the
    default mapping process (add /ed/) reflects the process of
    suffixation adopted by the majority of the forms in the language.
    Connectionist models exploit the imbalance between English regular
    and irregular verbs when learning the past tense and when responding
    to novel forms in a default fashion. Not all inflectional systems
    have a default mapping which is characterized by a majority of forms
    in the language. The Arabic Plural System has been cited (Marcus et
    al. 1993) as one such system where a minority default mapping
    process operates. The Sound Plural in Arabic applies to only a
    minority of forms in the lexicon (~10%), yet it appears to adopt the
    role of a default mapping for novel nouns. We describe a
    connectionist model that can learn a minority default mapping
    analogous to the Arabic plural and discuss its performance in
    relation to type and token frequency effects, and their distribution
    within phonetic space.

%TI Using Introspective Reasoning to Guide Index Refinement in Case-Based Reasoning
%AU Susan Fox 
%AU David Leake
%PU Proc. CogSci-94, pp. 324-329
%SC Monday, August 15, 2-3:30
%AB Case-based reasoning research on indexing and retrieval focuses
    primarily on developing specific retrieval criteria, rather than on
    developing mechanisms by which such criteria can be learned as
    needed. This paper presents a framework for learning to refine
    indexing criteria by introspective reasoning.  In our approach, a
    self-model of desired system performance is used to determine when
    and how to refine retrieval criteria.  We describe the advantages of
    this approach for focusing learning on useful information even in
    the absence of explicit processing failures, and support its
    benefits with experimental results on how an implementation of the
    model affects performance of a case-based planning system.

%TI How do representations of visual form organize our percepts of visual motion?
%AU Gregory Francis
%AU Stephen Grossberg
%PU Proc. CogSci-94, pp. 330-334
%SC Monday, August 15, 4-5:30
%AB How does the visual system generate percepts of moving forms? How
    does this happen when the forms are emergent percepts (such as
    illusory contours or segregated textures) and the motion percept is
    apparent motion between the emergent forms?  A neural model of
    form-motion interactions is developed to explain parametric
    properties of psychophysical motion data and to make predictions
    about the parallel cortical processing streams V1 --> MT and V1 -->
    V2 --> MT. The model simulates many parametric psychophysical data
    arising from form-motion interactions. A key linkage between form
    and motion data is articulated in terms of properties of visual
    persistence and properties of apparent motion. The model explains
    how an illusory contour can move in apparent motion to another
    illusory contour or to a luminance-derived contour; how illusory
    contour persistence relates to the upper ISI threshold for apparent
    motion; and how upper and lower ISI thresholds for seeing apparent
    motion between two flashes decrease with stimulus duration and
    narrow with spatial separation (Korte's laws).  Psychophysical data
    are derived from an analysis of how orientationally tuned form
    perception mechanisms and directionally tuned motion perception
    mechanisms interact to generate consistent percepts of moving forms.

%TI Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference
%AU Robert M. French
%PU Proc. CogSci-94, pp. 335-340
%SC Sunday, August 14, 2-3:30
%AB It is well known that when a connectionist network is trained on one
    set of patterns and then attempts to add new patterns to its
    repertoire, catastrophic interference may result.  The use of
    sparse, orthogonal hidden-layer representations has been shown to
    reduce catastrophic interference.  The author demonstrates that the
    use of sparse representations may, in certain cases, actually result
    in greater catastrophic interference.  This paper
    argues for the necessity of maintaining hidden-layer representations
    that are both as highly distributed and as highly orthogonal as
    possible.  The author presents a learning algorithm, called
    context-biasing, that dynamically solves the problem of constraining
    hidden-layer representations to simultaneously produce good
    orthogonality and distributedness.  On the data tested for this
    study, context-biasing is shown to reduce catastrophic interference
    by more than 50% compared to standard backpropagation.
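
    The trade-off described here can be made concrete by measuring how
    orthogonal and how distributed a set of hidden-layer activation
    vectors is; the sketch below computes two crude indices of this kind
    and is not the context-biasing algorithm itself.

        import numpy as np

        def hidden_layer_stats(H):
            """H: rows are hidden-layer activation vectors for different patterns.
            Returns (mean pairwise cosine, mean fraction of strongly active units).
            Lower cosine = more orthogonal; higher fraction = more distributed."""
            U = H / np.linalg.norm(H, axis=1, keepdims=True)
            cos = U @ U.T
            n = len(H)
            mean_cos = (cos.sum() - n) / (n * (n - 1))    # exclude self-pairs
            active = (H > 0.5 * H.max(axis=1, keepdims=True)).mean(axis=1)
            return mean_cos, active.mean()

        H = np.random.default_rng(1).random((5, 8))       # stand-in activations
        print(hidden_layer_stats(H))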

%TI Inference Processes in Speech Perception
%AU Gareth Gaskell
%AU William Marslen-Wilson
%PU Proc. CogSci-94, pp. 341-345
%SC Sunday, August 14, 11-12:30
%AB Cross-modal priming experiments have shown that surface variations
    in speech are perceptually tolerated as long as they occur in
    phonologically viable contexts. For example, [klim] (cleam) gains
    access to the mental representation of clean when in the context of
    [klimpaks] (cleam parks), since the change is a natural one,
    reflecting the phonological process of place assimilation.  This
    implies that speech perception involves processes of phonological
    inference, which recover the underlying form of speech.  Here we
    investigate the locus of these inference processes, using the
    phoneme monitoring task.  A set of stimulus sentences was created
    containing deviations that were either phonologically viable (as in
    cleam parks above) or unviable.  In Experiment 1, subjects monitored
    for the segment underlying the surface change (in the above example,
    /n/) and in Experiment 2 the following segment (/p/) was the target.
    In addition, the lexical status of the carrier word was manipulated
    (e.g., clean vs threan), contrasting lexical and non-lexical
    theories of phonological inference.  Both experiments showed strong
    effects of phonological viability for real words, with weaker
    effects for the non-word stimuli.  These results suggest that
    phonological inference can occur non-lexically, but that it
    interacts strongly with the process of lexical access.

%TI How Graphs Mediate Analog and Symbolic Representation
%AU Merideth Gattis
%AU Keith Holyoak
%PU Proc. CogSci-94, pp. 346-350
%SC Sunday, August 14, 4-5:30
%AB Three experiments are reported that examine the impact of people's
    goals and conceptual understanding on graph interpretation, in order
    to determine how people use graphical representations to evaluate
    functional dependencies between continuous variables.  Subjects made
    inferences about the relative rate of two continuous linear
    variables (altitude and temperature).  We varied the assignments of
    variables to axes, the perceived cause-effect relation between the
    variables, and the causal status of the variable being queried.  The
    most striking finding was that accuracy was greater when the
    Slope-Mapping Constraint was honored, which requires that the
    variable being queried -- usually the effect or dependent variable,
    but potentially the cause instead -- is assigned to the vertical
    axis, so that steeper lines map to faster changes in the queried
    variable.  This constraint dominates when it conflicts with others,
    such as preserving the low-level mapping of altitude onto the
    vertical axis.  Our findings emphasize the basic conclusion that
    graphs are not pictures, but rather symbolic systems for
    representing higher-order relations.  We propose that graphs provide
    external instantiations of intermediate mental representations,
    which enable people to move from pictorial representations to
    abstractions through the use of natural mappings between perceptual
    properties and conceptual relations.

%TI Classicalism and Cognitive Architecture 
%AU Tim van Gelder
%AU Lars Niklasson
%PU Proc. CogSci-94, pp. 905-909
%SC Sunday, August 14, 4-5:30
%AB This paper challenges the widely accepted claim that "classical"
    cognitive architectures can explain the systematicity of cognition
    (Fodor & Pylyshyn, 1988).  There are plausible ways of rendering
    more precise the systematicity hypothesis (as standardly formulated)
    in which it is entailed by classical architectures, and other
    plausible ways in which it is not.  Therefore, it is not a
    determinate issue whether systematicity is entailed, and hence
    explained, by classical architectures. The general argument is
    illustrated in a particular domain, the systematicity of deductive
    inference. In the case of the capacity to carry out the inference
    modus tollens, the systematicity hypothesis can be made precise in
    two ways, one entailed by classical architectures, another which is
    not. Further, the latter, but not the former, accurately describes
    the actual empirical phenomenon. Put another way, the clumps that
    these deductive inference capacities come in are not the clumps that
    are entailed by classical architectures.  Therefore, in this area at
    least, systematicity considerations count against the classical
    conception of cognitive architecture.

%TI The Coherence Imbalance Hypothesis: A Functional Approach to Asymmetry in Comparison
%AU Dedre Gentner
%AU Brian F. Bowdle
%PU Proc. CogSci-94, pp. 351-356
%SC Monday, August 15, 11-12:30
%AB Directional asymmetry is a well-documented phenomenon in research on
    similarity, metaphor, and analogy.  In this paper, we present an
    account of this phenomenon based on structural alignment.  We
    propose that a major source of asymmetry is coherence imbalance:
    that is, a difference in the degree of systematicity of the
    relational structures being compared.  These claims are tested in
    three experiments which examine the relationship between asymmetry,
    informativity, and conceptual coherence.  The results support the
    hypothesis that coherence imbalance is a key factor in directional
    comparison processes.  Further, by incorporating the insights
    offered by structural alignment, coherence imbalance advances a more
    functional account of asymmetry.

%TI A Corpus Analysis of Recency Preference and Predicate Proximity
%AU Edward Gibson
%AU Jacob Loomis
%PU Proc. CogSci-94, pp. 357-362
%SC Tuesday, August 16, 11-12:30
%AB The recent availability of large on-line parsed corpora makes it possible
    to test theories of psycholinguistic complexity by comparing the
    frequency distributions of closely related constructions.  In this paper,
    we use this technique to test the psycholinguistic theory proposed by
    Gibson et al.  (1993), which includes two independently motivated
    attachment principles: Recency Preference and Predicate Proximity.  In
    order to test this theory, we examined two general classes of attachment
    ambiguities from the parsed Wall Street Journal corpus from the Penn
    Treebank: 1) ambiguities which involve three prospective noun phrase
    attachment sites; and 2) ambiguities which involve three prospective verb
    phrase attachment sites.  Given three prospective noun phrase (NP) sites
    in English, the theory most naturally predicts a complexity ordering of
    NP3 (easiest, most recent), NP1, NP2, but a ranking of VP3, VP2, VP1 for
    verb phrase attachments.  Our corpus analyses support both of these
    predictions.

%TI Using Trajectory Mapping to Analyze Musical Intervals
%AU Stephen A. Gilbert
%AU Whitman Richards
%PU Proc. CogSci-94, pp. 363-368
%SC Monday, August 15, 11-12:30
%AB Cognitive scientists have often pondered the question of perceptual
    spaces, that is, the question of how a certain gamut of familiar
    stimuli might be organized in the mind.  We present Trajectory
    Mapping as an alternative clustering method to the traditional
    algorithm of Multi-Dimensional Scaling.  We suggest that given data
    about the relationships among stimuli, Multi-Dimensional Scaling
    provides one type of information (geometric), while Trajectory
    Mapping offers a second type (relational).  As an illustration we
    present the initial results of applying both clustering techniques
    to subjects' perceptions of musical intervals.  While an
    interpretation of the Multi-Dimensional Scaling requires a priori
    knowledge of music theory, Trajectory Mapping directly reveals the
    music theory that has been internalized by subjects.
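
    The geometric half of the comparison (Multi-Dimensional Scaling of
    pairwise dissimilarities) can be run with a standard library, as
    sketched below on an invented dissimilarity matrix; Trajectory
    Mapping itself is not sketched here.

        import numpy as np
        from sklearn.manifold import MDS

        # Toy pairwise dissimilarities among four stimuli (symmetric,
        # zero diagonal); the numbers are invented.
        D = np.array([[0, 1, 4, 5],
                      [1, 0, 3, 4],
                      [4, 3, 0, 2],
                      [5, 4, 2, 0]], dtype=float)

        # Embed the stimuli in 2-D from the precomputed dissimilarities.
        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(D)
        print(coords.round(2))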

%TI Are Children Lazy Learners? A comparison of natural and machine learning of stress
%AU Steven Gillis
%AU Walter Daelemans
%AU Gert Durieux
%PU Proc. CogSci-94, pp. 369-374
%SC Monday, August 15, 4-5:30
%AB Do children acquire rules for main stress assignment or do they
    learn stress in an exemplar-based way? In the language acquisition
    literature, the former approach has been advocated without
    exception: although they hear most words produced with their
    appropriate stress pattern, children are taken to extract rules, and
    do not store stress patterns lexically. The evidence for a
    rule-based approach is investigated, and it will be argued that in
    the literature such an approach is preferred due to an
    oversimplification of exemplar-based models. We will report
    experiments showing that Instance-Based Learning, an exemplar-based
    model, makes the same kinds of stress-related errors in production
    that children make: (i) the number of production errors is related
    to metrical markedness, and (ii) stress shifts and errors with
    respect to the segmental and syllabic structure of words typically
    take the form of a regularization of stress patterns. Instance-Based
    Learning belongs to a class of Lazy Learning algorithms. In these
    algorithms, no explicit abstractions in the form of decision trees
    or rules are derived; abstraction is driven by similarity during
    performance. Our results indicate that at least for this domain,
    this kind of lazy learning is a valid alternative to rule-based
    learning. Moreover, the results plead for a reanalysis of language
    acquisition data in terms of exemplar-based models.
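
    Instance-Based (lazy) learning of stress amounts to storing
    exemplars and classifying new words by similarity at performance
    time; the 1-nearest-neighbour sketch below uses invented
    syllable-weight features and is not the authors' model.

        # Toy exemplar-based ("lazy") stress assignment.  Each word is a tuple
        # of syllable weights (L = light, H = heavy); the class is the stressed
        # syllable position.  Items and features are invented for illustration.
        stored = [
            (("H", "L"), 1),      # stress on the first syllable
            (("L", "H"), 2),      # stress on the second syllable
            (("H", "H"), 1),
            (("L", "L"), 1),
        ]

        def overlap(a, b):
            """Similarity = number of matching syllable-weight slots."""
            return sum(x == y for x, y in zip(a, b))

        def assign_stress(word):
            """Classify by the single most similar stored exemplar (1-NN)."""
            return max(stored, key=lambda ex: overlap(ex[0], word))[1]

        print(assign_stress(("L", "H")))   # -> 2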

%TI Array Representations for Model-Based Spatial Reasoning
%AU Janice Glasgow
%PU Proc. CogSci-94, pp. 375-380
%SC Monday, August 15, 4-5:30
%AB To date, the major focus of research in knowledge representations
    for artificial intelligence has been on sentential or linguistic
    formalisms involving logic and rule-based reasoning.  There is a
    growing body of evidence suggesting, however, that much of human
    problem solving is achieved, not through the application of rules of
    inference, but rather through the manipulation of mental models.
    Such a model is represented by a system with a similar relational
    structure to the reality it represents.  Moreover, spatial reasoning
    with models involves the inspection and transformation of
    representations in ways that are analogous to visually inspecting
    and physically transforming entities in the world.  Since a crucial
    component of knowledge acquisition is to capture an expert's mental
    state and reasoning strategies, it is important to shift some of the
    attention of AI research to the study of representation techniques
    that correspond to the mental models used by humans.  The paper
    begins with a cognitive perspective on model-based reasoning.  A
    knowledge representation scheme for spatial reasoning with models is
    then presented. In this scheme, which has evolved from research in
    computational imagery, spatial models are represented as symbolic
    arrays where dimensions of the array correspond to transitive order
    relations among entities.
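
    A minimal illustration of the array idea: spatial relations such as
    left-of and above are read off array indices by inspection rather
    than derived by rules of inference (the entities and layout below
    are invented).

        # Toy symbolic array: positions preserve left-of / above order
        # relations among entities.  Layout is invented for illustration.
        grid = [
            [None,   "lamp", None   ],
            ["desk", None,   "chair"],
        ]

        def find(entity):
            for r, row in enumerate(grid):
                for c, cell in enumerate(row):
                    if cell == entity:
                        return r, c

        def left_of(a, b):
            """Inspect the array rather than apply rules of inference."""
            return find(a)[1] < find(b)[1]

        def above(a, b):
            return find(a)[0] < find(b)[0]

        print(left_of("desk", "chair"), above("lamp", "chair"))   # True True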

%TI Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data
%AU Alexander Grunewald
%AU Stephen Grossberg
%PU Proc. CogSci-94, pp. 387-391
%SC Monday, August 15, 2-3:30
%AB A key problem in cognitive science concerns how the brain binds
    together parts of an object into a coherent visual object
    representation. One difficulty that this binding process needs to
    overcome is that different parts of an object may be processed by
    the brain at different rates and may thus become desynchronized.
    Perceptual framing is a mechanism that resynchronizes cortical
    activities corresponding to the same retinal object. A neural
    network model based on cooperation between oscillators via feedback
    from a subsequent processing stage is presented that is able to
    rapidly resynchronize desynchronized featural activities.  Model
    properties help to explain perceptual framing data, including
    psychophysical data about temporal order judgments.  These
    cooperative model interactions also simulate data concerning the
    reduction of threshold contrast as a function of stimulus length.
    The model hereby provides a unified explanation of temporal order
    and threshold contrast data as manifestations of a cortical binding
    process that can rapidly resynchronize image parts which belong
    together in visual object representations.

%TI Using Connectionist Networks to Examine the Role of Prior Constraints in Human Learning
%AU Michael Harm
%AU Lori Altmann
%AU Mark S. Seidenberg
%PU Proc. CogSci-94, pp. 392-396
%SC Sunday, August 14, 11-12:30
%AB This research investigated the effects of prior knowledge on
    learning in psychologically-plausible connectionist networks.  This
    issue was examined with respect to the benchmark
    orthography-to-phonology mapping task (Sejnowski & Rosenberg, 1986;
    Seidenberg & McClelland, 1989).  Learning about the correspondences
    between orthography and phonology is a critical step in learning to
    read. Children (unlike the networks mentioned above) bring to this
    task extensive knowledge about the sound-structure of their
    language.  We first describe a simple neural network that acquired
    some of this phonological knowledge. We then summarize simulations
    showing that having this knowledge in place facilitates the
    acquisition of orthographic-phonological correspondences, producing
    a higher level of asymptotic performance with fewer implausible
    errors and better nonword generalization.  The results suggest that
    connectionist networks may provide closer approximations to human
    performance if they incorporate more realistic assumptions about
    relevant sorts of background knowledge.

%TI Objects, actions, nouns, and verbs
%AU Peter M. Hastings
%AU Steven L. Lytinen
%PU Proc. CogSci-94, pp. 397-402
%SC Monday, August 15, 4-5:30
%AB This paper describes a lexical acquisition mechanism that was
    implemented in order to increase the robustness of a Natural
    Language Processing system.  Although the mechanism was not intended
    to be a cognitive model of children's language acquisition, it
    demonstrates many similarities with psycholinguistic findings.  In
    particular, the structure of the domain knowledge representation
    forces the system to take a bipolar approach to learning nouns and
    verbs.  Psycholinguistic studies demonstrate differing treatment of
    nouns and verbs by children and suggest a structural basis for this
    difference.  The knowledge-level similarities between our system and
    human linguistic knowledge make it possible to infer that children
    must adopt a similar strategy to effectively learn word meanings.

%TI Psychological Evidence for Assumptions of Path-Based Inheritance Reasoning 
%AU Claire Hewson
%AU Carl Vogel
%PU Proc. CogSci-94, pp. 409-414
%SC Monday, August 15, 7:30-9
%AB The psychological validity of inheritance reasoners is clarified.
    Elio and Pelletier (1993) presented the first pilot experiment
    exploring some of these issues.  We investigate other foundational
    assumptions of inheritance reasoning with defaults: transitivity,
    blocking of transitivity by negative defaults, preemption in terms
    of structurally defined specificity and structurally defined
    redundancy of information.  Responses were in accord with the
    assumption of at least limited transitivity; however, reasoning with
    negative information and structurally defined specificity conditions
    did not support the predictions of the literature.  `Preemptive'
    links were found to provide additional information leading to
    indeterminacy, rather than providing completely overriding
    information as the literature predicts.  On the other hand, results
    support the structural identification of certain links as redundant.
    Other findings suggest that inheritance proof-theory might be
    excessively guided by its syntax.

%TI Empirical Evidence Regarding the Folk Psychological Concept of Belief
%AU Claire Hewson
%PU Proc. CogSci-94, pp. 403-408
%SC Tuesday, August 16, 11-12:30
%AB This paper presents empirical evidence regarding the nature of our
    commonsense concept of belief. The findings have significant bearing
    upon claims made by authors concerned with the Folk Psychology
    Debate---in particular, they challenge Stephen Stich's (1983) claim
    that folk psychology is committed to a *broad* account of belief
    states.  In contrast it is found that folk psychology favours a
    *narrow* account of belief. This result is important in refuting
    Stich's claim that the folk psychological concept of belief has no
    role to play in a developed cognitive science. The paper also
    presents evidence regarding the influence of several factors on folk
    psychological judgements of belief individuation (emphasised
    similarities/differences between the referents of beliefs, nature of
    past beliefs, goal of classification), and introduces a methodology
    by which to investigate further factors.  It is argued that the
    observed conflict between individual speculations about likely folk
    psychological intuitions within the philosophical literature and
    actual empirical data regarding subjects' responses highlights the
    important contribution of experimental psychology in exploring such
    philosophical issues.

%TI Abstraction of Sensory-Motor Features
%AU Kazuo Hiraki
%PU Proc. CogSci-94, pp. 415-420
%SC Monday, August 15, 2-3:30
%AB This paper presents a way that enables robots to learn abstract
    concepts from sensory/perceptual data. In order to overcome the gap
    between the low-level sensory data and higher-level concept
    description, a method called "feature abstraction" is used.  Feature
    abstraction dynamically defines abstract sensors from primitive
    sensory devices and makes it possible to learn appropriate
    sensory-motor constraints. This method has been implemented on a
    real mobile robot as a learning system called ACORN2. ACORN2 was
    evaluated empirically, and the results show that the system can
    learn some abstract concepts more accurately than other existing
    systems.

%TI WanderECHO: A Connectionist Simulation of Limited Coherence
%AU Christopher M. Hoadley
%AU Michael Ranney
%AU Patricia Schank
%PU Proc. CogSci-94, pp. 421-426
%SC Monday, August 15, 7:30-9
%AB The Theory of Explanatory Coherence, or TEC, (Ranney & Thagard,
    1988; Thagard, 1989, 1992) and ECHO, a connectionist implementation
    of TEC, attempt to model human reasoning about evidence and
    hypotheses. The ECHO model is based on the simultaneous satisfaction
    of multiple constraints.  This yields predicted activations
    ("believabilities") for propositions, which are based on the
    propositions' evidential status, their explanatory relationships,
    and their contradictory relationships.  While ECHO has been
    demonstrated to usefully model human reasoning, it does not model
    processing limitations on the maintenance of coherence.  WanderECHO
    is a variation on the ECHO model that attempts to simulate
    attentional and memorial limitations with a stochastic updating
    algorithm that is based on a traveling focus of attention. Several
    variants of the WanderECHO simulation were applied to Schank and
    Ranney's (1991) data, and were found to generally simulate subjects'
    mean believability ratings better than standard ECHO.
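
    For reference, standard ECHO settles activations by a
    constraint-satisfaction update of the kind published by Thagard
    (1989); the sketch below applies that style of update to a toy
    network and does not model WanderECHO's wandering attentional focus.

        # Connectionist constraint-satisfaction update in the style of ECHO
        # (Thagard, 1989).  The units and links form a toy example only.
        units = ["E", "H1", "H2"]                  # evidence plus two hypotheses
        w = {("E", "H1"): 0.4, ("H1", "E"): 0.4,   # H1 explains E (excitatory)
             ("E", "H2"): 0.3, ("H2", "E"): 0.3,   # H2 explains E (excitatory)
             ("H1", "H2"): -0.2, ("H2", "H1"): -0.2}   # H1 and H2 contradict
        a = {"E": 1.0, "H1": 0.01, "H2": 0.01}
        decay, lo, hi = 0.05, -1.0, 1.0

        for _ in range(100):
            new = {}
            for j in units:
                if j == "E":
                    new[j] = 1.0                   # evidence unit stays clamped
                    continue
                net = sum(w.get((i, j), 0.0) * a[i] for i in units)
                delta = net * (hi - a[j]) if net > 0 else net * (a[j] - lo)
                new[j] = min(hi, max(lo, a[j] * (1 - decay) + delta))
            a = new

        print({k: round(v, 2) for k, v in a.items()})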

%TI PROVERB - A System Explaining Machine-Found Proofs
%AU Xiaorong Huang
%PU Proc. CogSci-94, pp. 427-432
%SC Monday, August 15, 7:30-9
%AB This paper outlines an implemented system called PROVERB that
    explains machine-found natural deduction proofs in natural language.
    Unlike earlier work, we pursue a reconstructive approach.
    Based on the observation that natural deduction proofs are at too
    low a level of abstraction compared with proofs found in mathematical
    textbooks, we first define the concept of so-called assertion level
    inference rules.  Derivations justified by these rules can
    intuitively be understood as the application of a definition or a
    theorem. Then an algorithm is introduced that abstracts
    machine-found ND proofs using the assertion level inference rules.
    Abstracted proofs are then verbalized into natural language by a
    presentation module. The most significant feature of the
    presentation module is that it combines standard hierarchical text
    planning and techniques that locally organize argumentative texts
    based on the derivation relation under the guidance of a focus
    mechanism. The behavior of the system is demonstrated with the help
    of a concrete example throughout the paper.

%TI Mapping Hierarchical Structures with Synchrony for Binding: Preliminary Investigations 
%AU John E. Hummel   
%AU Eric R. Melz   
%AU Jeff Thompson  
%AU Keith J. Holyoak
%PU Proc. CogSci-94, pp. 433-438
%SC Sunday, August 14, 2-3:30
%AB Synchrony of firing has recently become a popular technique for
    dynamic binding in neural networks, and has been applied to numerous
    problem domains.  However, hierarchical structures are difficult to
    represent using synchrony for binding.  This paper presents our
    progress toward a framework for representing hierarchies in a neural
    network using synchrony for dynamic binding.  We illustrate the
    approach with a model of analogical mapping.  The model (IMM2) uses
    synchrony to bind case roles to objects within propositions.
    Hierarchies are established by allowing units representing
    propositions to play a dual role, acting both as the argument of one
    proposition and as a pointer to another.

%TI Lexical Disambiguation Based on Distributed Representations of Context Frequency
%AU Marshall R. Mayberry, III
%AU Risto Miikkulainen
%PU Proc. CogSci-94, pp. 601-606
%SC Monday, August 15, 11-12:30
%AB A model for lexical disambiguation is presented that is based on
    combining the frequencies of past contexts of ambiguous words.  The
    frequencies are encoded in the word representations and define the words'
    semantics. A Simple Recurrent Network (SRN) parser combines the context
    frequencies one word at a time, always producing the most likely
    interpretation of the current sentence at its output. This disambiguation
    process is most striking when the interpretation involves semantic
    flipping, that is, an alternation between two opposing meanings as more
    words are read in. The sense of "throwing a ball" alternates between
    "dance" and "baseball" as indicators such as the agent, location, and
    recipient are input. The SRN parser demonstrates how the context
    frequencies are dynamically combined to determine the interpretation of
    such sentences. We hypothesize that several other aspects of ambiguity
    resolution are based on similar mechanisms, and can be naturally
    approached from the distributed connectionist viewpoint.

%TI The Curtate Cycloid Illusion:  Cognitive Constraints on the Processing of Rolling Motion
%AU Matthew I. Isaak
%AU Marcel Adam Just
%PU Proc. CogSci-94, pp. 439-444
%SC Monday, August 15, 4-5:30
%AB When a wheel rolls along a flat surface, a point on the wheel's
    perimeter follows a cycloid trajectory.  Subjects, however, draw the
    curtate cycloid, characterized by bottom loops, rather than the cycloid
    to depict the path that a point on a static wheel's perimeter would
    trace if the wheel were rolling.  This is the curtate cycloid illusion. 
    In Experiment 1, we show that animating the wheel does not dispel the
    illusion and that subjects high in spatial ability are less susceptible
    to the illusion than are low-spatials.  Experiments 2, 3a, and 3b
    supported the hypothesis that the illusion occurs when subjects
    reallocate cognitive resources from processing a rolling wheel's
    translation to computing its instant centers, the point about which the
    wheel is rotating at a given instant in time.   This reallocation occurs
    only when a reference point on the wheel's perimeter contacts and leaves
    the surface.  We conclude that the illusion does not reflect fundamental
    perceptual biases, but rather stems from transient shortages of
    cognitive resources during the higher-level processing of the wheel's
    translation and rotation.
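
    The trajectories at issue can be written parametrically for a point
    at distance d from the hub of a wheel of radius r rolling on a flat
    surface; the sketch below generates the veridical rim path (d = r)
    and a looped, non-veridical path for comparison (all values are
    arbitrary).

        import numpy as np

        # Path of a point at distance d from the hub of a wheel of radius r.
        # d = r gives the cycloid traced by a rim point; other values flatten
        # the curve (d < r) or add loops near the bottom (d > r).
        def rolling_point(r, d, theta):
            x = r * theta - d * np.sin(theta)
            y = r - d * np.cos(theta)
            return np.stack([x, y], axis=1)

        theta = np.linspace(0, 4 * np.pi, 400)
        rim_path = rolling_point(1.0, 1.0, theta)      # the veridical cycloid
        looped_path = rolling_point(1.0, 1.3, theta)   # a looped, non-veridical path
        print(rim_path[:, 1].min().round(2), looped_path[:, 1].min().round(2))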

%TI Direct and Indirect Measures of Implicit learning
%AU Luis Jimenez 
%AU Axel Cleeremans
%PU Proc. CogSci-94, pp. 445-450
%SC Monday, August 15, 2-3:30
%AB Comparing the relative sensitivity of direct and indirect measures
    of learning is proposed as the best way to provide evidence for
    unconscious learning when both conceptual and operative definitions
    of awareness are lacking. This approach was first proposed by
    Reingold & Merikle (1988) in the context of subliminal perception.
    In this paper, we apply it to a choice reaction task in which the
    material is generated based on a probabilistic finite-state grammar
    (Cleeremans, 1993). We show (1) that subjects progressively learn
    about the statistical structure of the stimulus material over
    training with the choice reaction task, and (2) that they can use
    some of this knowledge to predict the location of the next stimulus
    in a subsequent explicit prediction task. However, detailed partial
    correlational analyses of the correspondence between CRT performance
    and the conditional probabilities of each stimulus showed that large
    effects remained even when controlling for explicit knowledge as
    assessed by the prediction task. Hence we conclude (1) that at least
    some of the knowledge expressed in CRT performance cannot be
    characterized as conscious, and (2) that even when associations are
    found at a global level of analysis, dissociations can still be
    obtained when more detailed analyses are conducted. Finally, we also
    show that subjects are limited in the depth of the contingencies
    they can learn about, and that these limitations are shared by the
    Simple Recurrent Network model of Cleeremans & McClelland (1991).
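
    A hedged illustration (not the authors' analysis code) of the general
    technique behind a partial correlational analysis: both variables are
    residualized on the control variable, and the residuals are
    correlated.  The simulated data, variable names, and effect sizes are
    assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        explicit = rng.normal(size=n)                    # explicit knowledge score
        cond_prob = 0.6 * explicit + rng.normal(size=n)  # conditional probability of stimulus
        rt = -0.5 * cond_prob + 0.2 * explicit + rng.normal(size=n)  # reaction time

        def residuals(y, x):
            """Residuals of y after regressing out x (with an intercept)."""
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r = np.corrcoef(residuals(rt, explicit),
                        residuals(cond_prob, explicit))[0, 1]
        print("partial correlation (RT, cond. prob. | explicit):", round(r, 3))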

%TI Computational Simulation of Depth Perception in the Mammalian Visual System
%AU Jesse S. Jin
%PU Proc. CogSci-94, pp. 451-456
%SC Monday, August 15, 4-5:30
%AB This paper presents a computational model for stereopsis. Laplacian
    of Gaussian filters are used to simulate ganglion and LGN cells, and
    the extracted zero-crossings provide spatial features of the visual
    scene.  A set of one-octave Gabor filters is used to extract
    orientation information, covering the 0 to 60 cycles/degree interval
    of the human visual system.  A Gaussian sphere model is used to map
    a 3D space onto two 2D image planes, which combines monocular cues
    with binocular cues in stereo matching. The determinant of the
    Jacobian of the mapping is derived and matching is performed using
    zero-crossings associated with their orientation information.  The
    possibility of transferring knowledge from the mapping to the matching
    process, such as the probability of occurrence of visual scenes, is
    discussed.  Relaxation labelling is used as a co-operative
    process, which simulates binocular fusion and rivalry in the human
    visual process.
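
    A hypothetical sketch (not the paper's simulation) of
    Laplacian-of-Gaussian filtering and zero-crossing extraction, the kind
    of spatial feature described above; the test image, the sigma value,
    and the use of scipy are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        image = np.zeros((64, 64))
        image[:, 32:] = 1.0                       # a simple vertical edge

        log = gaussian_laplace(image, sigma=2.0)  # ganglion/LGN-like filtering

        # Zero-crossings: sign changes between horizontally adjacent pixels.
        sign = np.sign(log)
        zero_crossings = (sign[:, :-1] * sign[:, 1:]) < 0

        print("zero-crossing columns:", np.unique(np.where(zero_crossings)[1]))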

%TI Bottom-up recognition learning: A compilation based model of limited-lookahead learning
%AU Todd R. Johnson
%AU Jiajie Zhang
%AU Hongbin Wang
%PU Proc. CogSci-94, pp. 469-474
%SC Monday, August 15, 7:30-9
%AB When faced with a novel problem, people can sometimes decide what to
    do by imagining alternative sequences of actions and then taking the
    sequence that solves the problem. In many problems, however, various
    constraints, such as working memory capacity, limit the amount of
    internal lookahead that people can do. This paper describes
    Bottom-Up Recognition Learning (BURL), a model of limited-lookahead
    learning based on final first learning and knowledge compilation. In
    BURL, knowledge compilation of limited-lookahead search over
    successive problem-solving trials transfers knowledge from the leaf
    nodes of a problem space to the top node. Two experiments test
    BURL's predictions. The first compares the Soar implementation of
    BURL to human subjects learning to play two Tic-Tac-Toe isomorphs.
    This experiment shows that BURL can account for learning that occurs
    when subjects can perform a limited lookahead. The second experiment
    studies transfer between two strategy acquisition tasks for one
    isomorph.  This experiment shows that BURL must be used in
    conjunction with other learning methods to fully explain skill
    acquisition on limited-lookahead tasks.

%TI A computational model of human abductive skill and its acquisition
%AU Todd R. Johnson
%AU Josef Krems
%AU Nasir K. Amra
%PU Proc. CogSci-94, pp. 463-468
%SC Monday, August 15, 7:30-9
%AB Abduction is the process of constructing a plausible explanation for
    a set of observations. It is the fundamental type of reasoning in
    many complex tasks such as scientific discovery and diagnosis. This
    paper presents a mental-model theory of human abductive skill and
    its acquisition in which abduction is viewed as the sequential
    comprehension and integration of data into a single situation model.
    Comprehension and integration are accomplished using satisficing
    search of multiple problem spaces. The model has been implemented in
    Soar and has been tested by comparing its predictions to those of
    human subjects. The experimental results show that the model can
    account for several important behavioral regularities, including
    power-law speed-up, how the order of data presentation affects a
    response, deviation of responses from probability theory, and how
    the task and domain characteristics affect a person's response.

%TI Adaptive learning of Gaussian categories leads to decision bounds and response surfaces incompatible with optimal decision making
%AU Michael Kalish
%PU Proc. CogSci-94, pp. 479-484
%SC Monday, August 15, 7:30-9
%AB Two experiments in category learning are used to examine two types
    of categorization models.  In both a two-choice and a four-choice
    experiment, subjects are shown to fail to learn to optimally classify
    two-dimensional stimuli.  The general recognition theory (GRT) of Ashby
    & Maddox (1990) predicts quadratic decision bounds.  The first
    experiment disconfirms this.  The extended GRT predicts that
    learners adopt a bound of complexity equivalent to the optimal one.
    The second experiment disconfirms this as well.  Both experiments
    support the idea that general resources of adaptive systems can
    provide explanations of observed sub-optimal behavior.

%TI Coping with the Complexity of Design: Avoiding Conflicts and Prioritizing Constraints
%AU Irvin R. Katz
%PU Proc. CogSci-94, pp. 485-489
%SC Monday, August 15, 7:30-9
%AB Design is a complex cognitive task that pushes the limits of human
    information processing.  How do expert designers handle this
    complexity?  Professional and student architects solved a real-world
    diagram construction task that required satisfying multiple,
    sometimes conflicting, constraints to achieve an acceptable design.
    Professionals' initial designs were more consistent with task
    constraints and remained more consistent throughout problem
    solution.  Students restructured their designs more often in their
    unsuccessful attempts to satisfy the multiple constraints imposed by
    the task.  Analysis of subjects' verbal and action protocols
    suggests that one aspect of professionals' superior performance is
    their early recognition of the critical constraints on a design.
    Professionals handle these constraints before others to structure
    the remaining, more negotiable, constraints.  By properly ordering
    constraints, professionals effectively minimize constraint
    conflicts.  As conflict resolution has high processing costs,
    constraint prioritization may be one way that professionals cope
    with the complexity of design.

%TI Adaptation as a Selection Constraint on Analogical Mapping
%AU Mark T. Keane
%PU Proc. CogSci-94, pp. 490-495
%SC Sunday, August 14, 2-3:30
%AB In any given analogy, there are potentially a large number of
    possible mapping interpretations. One of the key issues in analogy
    research is how one of these mappings comes to be selected as
    optimal and used as the basis for the analogical comparison.  It is
    well-established that structural factors, notably systematicity, can
    act as selection constraints on mapping. The present work tests if
    pragmatic and adaptation factors can also act as selection
    constraints on mapping.  The selection of a mapping based on
    pragmatic factors proposes that people can exploit the higher-order
    schematic structure of a domain to select one mapping over another.
    With respect to adaptation factors, the proposal is that a mapping
    will be selected if it is evaluated as being more adaptable than
    other competing mappings. Both of these predictions are tested in a
    novel, problem solving paradigm.  The main finding is that
    adaptation factors act as a selection constraint but that pragmatic
    factors do not. The implications of these results for computational
    models of analogy are discussed.

%TI Semantics and Pragmatics of Vague Probability Expressions
%AU Bernhard Kipper
%AU Anthony Jameson
%PU Proc. CogSci-94, pp. 496-501
%SC Monday, August 15, 7:30-9
%AB Two experiments assessed the membership functions that German speakers
    assign to 12 adverb phrases and 17 modal verb forms that express
    probability assessments. These expressions fall largely into three rather
    homogeneous classes. The membership functions are used as part of the
    semantic knowledge base of the natural language dialog system PRACMA, one
    of whose purposes is to model pragmatic and contextual influences on the
    use of vague expressions.  The system's normative model accounts for the
    role, in the selection and interpretation of vague probability
    expressions, of the listener's prior expectations, the speaker's dialog
    motivation, and the expressions that the speaker could have used but did
    not.

%TI Immediate Effects of Discourse and Semantic Context in Syntactic Processing: Evidence from Eye-Tracking
%AU Michael Spivey-Knowlton
%AU Michael Tanenhaus
%PU Proc. CogSci-94, pp. 812-817
%SC Tuesday, August 16, 11-12:30
%AB We monitored readers' eye-movements to examine the time-course of
    discourse and semantic influences in syntactic ambiguity resolution.
    Our results indicate immediate and simultaneous influences of
    referential context and local semantic fit in the reading of reduced
    relative clauses (i.e., The horse raced past the barn fell.).  These
    results support a model of sentence processing in which alternatives
    of a syntactic ambiguity are differentially activated by the
    bottom-up input, and syntactically-relevant contextual constraints
    simultaneously add activation to their supported alternatives.
    Competition between comparably active alternatives may then cause
    slowed reading times in regions of ambiguity.

%TI The Context-Sensitive Cognitive Architecture DUAL
%AU Boicho Kokinov
%PU Proc. CogSci-94, pp. 502-507
%SC Monday, August 15, 7:30-9
%AB Context-sensitivity is an important characteristic feature of every
    cognitive process and therefore should be reflected in every
    architecture that claims to explain human cognition.  In this paper
    some experimental facts demonstrating context effects on various
    cognitive processes are reviewed and an attempt at context modeling
    is described. A hybrid (symbolic/connectionist) cognitive
    architecture, DUAL, is proposed. It consists of a multitude of
    agents having both a symbolic and a connectionist part. The symbolic
    part represents some knowledge structure, while the connectionist
    part represents its relevance to the current context. The
    performance of the cognitive system emerges as a result of the work
    and interaction of the currently active agents, where the set of
    active agents is not predefined for a specific task but is dynamic
    and reflects the specific context. So particular symbolic operations
    and data structures may be supported or suppressed depending on the
    particular activation pattern of the connectionist parts which
    represent the context-dependent relevance of the operations and
    structures. In this way a context-sensitive computation emerges. An
    example of context-sensitive deductive reasoning is described.
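
    A toy, hypothetical sketch (not the DUAL implementation) of the core
    idea that each agent pairs a symbolic part with a connectionist
    activation level, and only sufficiently active agents contribute to
    the current computation; the facts and values are invented for
    illustration.

        ACTIVATION_THRESHOLD = 0.5

        agents = [
            {"fact": ("water", "boils-at", "100C"), "activation": 0.9},
            {"fact": ("water", "freezes-at", "0C"), "activation": 0.3},
            {"fact": ("kettle", "holds", "water"),  "activation": 0.7},
        ]

        def active_facts(agents):
            """Only agents whose connectionist part is active enough
            contribute their symbolic part to the current reasoning
            episode, so the effective knowledge base shifts with context."""
            return [a["fact"] for a in agents
                    if a["activation"] >= ACTIVATION_THRESHOLD]

        print(active_facts(agents))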

%TI Learning of rules that have high-frequency exceptions: New empirical data and a hybrid connectionist model
%AU John K. Kruschke
%AU Michael E. Erickson
%PU Proc. CogSci-94, pp. 514-519
%SC Sunday, August 14, 11-12:30
%AB Theorists of human learning, in domains as various as category
    learning and language acquisition, have grappled with the issue of
    whether learners induce rules or remember exemplars, or both. In
    this article we present new data that reflect both rule induction
    and exemplar encoding, and we present a new connectionist model that
    specifies one way in which rule-based and exemplar-based mechanisms
    might interact. Our empirical study was motivated by analogy to past
    tense acquisition, and specifically by the previous work of Palermo
    & Howe (1970).  Human subjects learned to categorize items, most of
    which could be classified by a simple rule, except for a few
    frequently recurring exceptions. The modeling was motivated by the
    idea of combining an exemplar-based module (ALCOVE, Kruschke 1992)
    and a rule-based module in a connectionist architecture, and
    allowing the system to learn which module should be responsible for
    which instances, using the competitive gating mechanism introduced
    by Jacobs, Jordan, Hinton & Nowlan (1991).  We report quantitative
    fits of the model to the learning data.
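
    A hypothetical sketch (not the authors' model) of combining a rule
    module and an exemplar module through a competitive (softmax) gate, in
    the spirit of the Jacobs et al. gating mechanism cited above; the toy
    rule, the single stored exception, and all parameter values are
    assumptions.

        import numpy as np

        def rule_module(x):
            # Toy rule: category A if the first dimension exceeds 0.5.
            return np.array([1.0, 0.0]) if x[0] > 0.5 else np.array([0.0, 1.0])

        exemplars = {(0.9, 0.1): np.array([0.0, 1.0])}  # a high-frequency exception

        def exemplar_module(x):
            # Similarity-weighted vote over stored exceptions.
            out = np.zeros(2)
            for ex, label in exemplars.items():
                out += np.exp(-5.0 * np.linalg.norm(x - np.array(ex))) * label
            return out

        def gated_response(x, gate_logits):
            g = np.exp(gate_logits) / np.exp(gate_logits).sum()  # competitive gate
            return g[0] * rule_module(x) + g[1] * exemplar_module(x)

        # Near the stored exception, a gate favoring the exemplar module
        # overrides the rule's answer.
        print(gated_response(np.array([0.88, 0.12]), gate_logits=np.array([0.0, 2.0])))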

%TI Recurrent Natural Language Parsing
%AU Stan C. Kwasny
%AU Sahnny Johnson
%AU Barry L. Kalman
%PU Proc. CogSci-94, pp. 525-530
%SC Monday, August 15, 7:30-9
%AB A recurrent network was trained from sentence examples to construct
    symbolic parses of sentence forms.  Hundreds of sentences,
    representing significant syntactic complexity, were formulated and
    then divided into training and testing sets to evaluate the ability
    of a recurrent network to learn their structure.  The network is
    shown to generalize well over test sentences and the errors that do
    remain are found to be of a single type and related to human
    limitations of sentence processing.

%TI When 'Or' Means 'And': A Study in Mental Models
%AU Philip N. Johnson-Laird
%AU Patricia E. Barres
%PU Proc. CogSci-94, pp. 475-478
%SC Monday, August 15, 4-5:30
%AB We describe an algorithm that constructs mental models of assertions
    containing sentential connectives, such as and, if, and or.  It
    performs at three levels of expertise depending on the completeness
    of the models it constructs.  At a rudimentary level of performance,
    it constructs models that make explicit as little as possible.  One
    unexpected consequence is that it produces the same explicit models
    for assertions of the form:
        if p then q, and if r then s
        if p then q, or if r then s
        p and q, or r and s.
    We initially suspected that there was a bug in the algorithm (or
    theory), but there was not.  We therefore carried out two
    experiments with logically-untrained subjects.  Their results
    confirmed the phenomena: for many individuals, a conjunction of
    conditionals is equivalent to their disjunction, which in turn is
    equivalent to a disjunction of conjunctions.

%TI Levels of Semantic Constraint and Learning Novel Words
%AU James M. Lampinen
%AU Jeremiah M. Faries
%PU Proc. CogSci-94, pp. 531-536
%SC Monday, August 15, 4-5:30
%AB A common method of teaching vocabulary involves presenting students
    with new words in context and having the students derive the meaning
    of these words based on contextual cues.  Beck, McKeown and McCaslin
    (1983) have argued that the contexts used to teach new words should
    be highly constraining.  Although highly constraining contexts avoid
    ambiguity, they do not require the learner to combine contextual and
    word-specific information and thus to practice the skills needed for
    general comprehension.  We suggest that
    a superior method of teaching is to relax the amount of contextual
    constraint because, to optimize learning from the presentation of a
    sentence, the student must use both top-down and bottom-up processes
    to discover the meaning of the sentence, thus integrating two sources
    of knowledge about the word.  The present research
    compares knowledge and use of newly learned words between students
    who learned the new words using three encounters with highly
    constraining contexts, three encounters with moderately constraining
    contexts, or three progressively less constraining contexts.
    Students were given definitional and comprehension tests both
    immediately after study and at a one week delay.  The results
    suggest that repeated encounters with moderately constraining
    contexts are superior to repeated encounters with highly
    constraining contexts.

%TI Models of Metrical Structure in Music
%AU Edward W. Large
%PU Proc. CogSci-94, pp. 537-542
%SC Monday, August 15, 11-12:30
%AB Recent models of metrical structure in music rely upon notions of
    oscillation and synchronization. Such resonance models treat the
    perception of metrical structure as a dynamic process in which the
    temporal organization of musical events synchronizes, or entrains, a
    listener's internal processing mechanisms. The entrainment of a
    network of oscillators to an afferent rhythmic pattern models the
    perception of metrical structure. In this paper, I compare one
    resonance model with several previously proposed models of meter
    perception. Although the resonance model is consistent with previous
    models in a number of ways, mathematical analysis reveals properties
    that are quite distinct from properties of the previously proposed
    models.

%TI Simulating Similarity-Based Retrieval:  A Comparison of ARCS and MAC/FAC
%AU Keith Law
%AU Kenneth D. Forbus
%AU Dedre Gentner
%PU Proc. CogSci-94, pp. 543-548
%SC Sunday, August 14, 2-3:30
%AB Current theories and supporting simulations of similarity-based
    retrieval disagree in their process model of semantic similarity
    decisions.  We compare two current computational simulations of
    similarity-based retrieval, MAC/FAC and ARCS, with particular
    attention to the semantic similarity models used in each.  Four
    experiments are presented comparing the performance of these
    simulations on a common set of representations.  The results suggest
    that MAC/FAC, with its identicality-based constraint on semantic
    similarity, provides a better account of retrieval than ARCS, with
    its similarity-table-based model.

%TI Towards A Computer Model of Memory Search Strategy Learning
%AU David B. Leake
%PU Proc. CogSci-94, pp. 549-554
%SC Tuesday, August 16, 11-12:30
%AB Much recent research on modeling memory processes has focused on
    identifying useful indices and retrieval strategies to support
    particular memory tasks.  Another important question concerning
    memory processes, however, is how retrieval criteria are learned.
    This paper examines the issues involved in modeling the learning of
    memory search strategies.  It discusses the general requirements for
    appropriate strategy learning and presents a model of memory search
    strategy learning applied to the problem of retrieving relevant
    information for adapting cases in case-based reasoning.  It
    discusses an implementation of that model, and, based on the lessons
    learned from that implementation, points towards issues and
    directions in refining the model.

%TI Error Modeling in the ACT-R Production System
%AU Christian Lebiere
%AU John R. Anderson
%AU Lynne M. Reder
%PU Proc. CogSci-94, pp. 555-559
%SC Monday, August 15, 7:30-9
%AB We describe how to extend the ACT-R production system to model human
    errors in the performance of a high-level cognitive task: to solve
    simple linear algebra problems while memorizing a digit span.
    Errors of omission are produced by introducing a cutoff on the
    latency of memory retrievals.  If a memory chunk cannot gather
    enough activation to be retrieved before the threshold is reached,
    retrieval fails.  Adding Gaussian noise to chunk activation produces
    a pattern quantitatively similar to subject errors.  Errors of
    commission are introduced by allowing imperfect matching in the
    condition side of productions.  The wrong memory chunk can be
    retrieved if its activation is large enough to allow it to overcome
    the mismatch penalty.  This mechanism provides a qualitative and
    quantitative fit to subject errors.  In conclusion, this paper
    demonstrates that human-like errors, sometimes thought of as the
    exclusive domain of connectionist models, can be successfully
    duplicated in production system models.
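
    A hypothetical sketch (not the ACT-R code from the paper) of how a
    retrieval threshold plus Gaussian activation noise can yield errors of
    omission, and how a mismatch penalty on partial matching can yield
    errors of commission; the chunks and all numeric values are
    assumptions.

        import random

        THRESHOLD = 0.4         # activation needed for retrieval to succeed
        NOISE_SD = 0.25         # Gaussian noise added to chunk activation
        MISMATCH_PENALTY = 0.5  # penalty per mismatching condition slot

        chunks = [
            {"name": "3+4=7", "addend1": 3, "addend2": 4, "base_activation": 0.9},
            {"name": "3+5=8", "addend1": 3, "addend2": 5, "base_activation": 0.7},
        ]

        def retrieve(addend1, addend2):
            best, best_activation = None, THRESHOLD  # below threshold: omission
            for chunk in chunks:
                mismatches = ((chunk["addend1"] != addend1)
                              + (chunk["addend2"] != addend2))
                activation = (chunk["base_activation"]
                              + random.gauss(0.0, NOISE_SD)
                              - MISMATCH_PENALTY * mismatches)
                if activation > best_activation:
                    best, best_activation = chunk, activation
            return best  # None models an error of omission

        random.seed(1)
        for _ in range(5):
            result = retrieve(3, 4)
            print("retrieved:", result["name"] if result else "nothing (omission)")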

%TI Priming, Perceptual Reversal, and Circular Reaction in a Neural Network Model of Schema-Based Vision
%AU Wee Kheng Leow
%AU Risto Miikkulainen
%PU Proc. CogSci-94, pp. 560-565
%SC Monday, August 15, 7:30-9
%AB VISOR is a neural network system for object recognition and scene
    analysis that learns visual schemas from examples. Processing in VISOR is
    based on cooperation, competition, and parallel bottom-up and top-down
    activation of schema representations.  Similar principles appear to
    underlie much of human visual processing, and VISOR can therefore be used
    to model various perceptual phenomena. This paper focuses on analyzing
    three phenomena through simulation with VISOR: (1) priming and mental
    imagery, (2) perceptual reversal, and (3) circular reaction.  The results
    illustrate similarity and subtle differences between the mechanisms
    mediating priming and mental imagery, show how the two opposing accounts
    of perceptual reversal (neural satiation and cognitive factors) may both
    contribute to the phenomenon, and demonstrate how intentional actions can
    be gradually learned from reflex actions.  Successful simulation of such
    effects suggests that similar mechanisms may govern human visual
    perception and learning of visual schemas.

%TI Understanding Diagrammatic Demonstrations
%AU Robert K. Lindsay
%PU Proc. CogSci-94, pp. 572-576
%SC Sunday, August 14, 4-5:30
%AB In this paper I examine the question of how a diagrammatic
    demonstration (a "proof without words") could be understood by a
    computational model.  The computational model (a) has a means of
    representing geometric diagrams composed exclusively of points, line
    segments, triangles, and quadrilaterals, including the special cases
    of parallelograms, rhombuses, rectangles, and squares; (b) accepts
    step-by-step descriptions of specific diagrams, and constructs in
    computer memory a representation of the diagram as it is described;
    (c) includes the ability to make modifications to the diagram by
    construction steps that specify movement of previously constructed
    components; (d) after each construction step notices any new objects
    (line segments, triangles, etc.) that are created by the step; (e)
    accepts a goal statement that the construction sequence is allegedly
    demonstrating; and (f) attempts to find a justification that
    confirms the goal statement.

%TI Predicting Irregular Past Tenses: Comparing Symbolic and Connectionist Models Against Native English Speakers 
%AU Charles X. Ling
%PU Proc. CogSci-94, pp. 577-582
%SC Monday, August 15, 4-5:30
%AB Learning the past tense of English verbs has become a landmark
    task for testing the adequacy of cognitive modeling.
    We review a set of intriguing psychological phenomena
    that any modeling of past-tense acquisition has to account for.
    Traditional grammatical theories fail to explain phenomena
    of irregular verbs, while connectionist models, which require no
    symbols or explicit rules, fail on regular verbs.
    We present a general-purpose symbolic pattern associator (SPA)
    which learns a set of sufficient and necessary symbolic rules for 
    both distinguishing and predicting regular and irregular verbs.
    Our all-rule theory is similar in spirit to Pinker's (1991, 1993)
    modular hypothesis, and is able to account for most
    psychological phenomena in past-tense acquisition.
    Even for the task of irregular past-tense generalization,
    the SPA is judged by adult native English speakers to be more
    psychologically plausible than the connectionist model.
    Our results support the view that language acquisition and processing 
    should be better modeled by symbolic, rather than connectionist, systems.

%TI Distributed Meeting Scheduling
%AU JyiShane Liu 
%AU Katia Sycara
%PU Proc. CogSci-94, pp. 583-588
%SC Sunday, August 14, 2-3:30
%AB Meeting scheduling takes place when a group of people intend to meet
    with each other. Since each person has individual availability
    constraints and preferences, meeting scheduling is naturally
    distributed and there is a need to schedule the meeting in such a
    way as to consider the preferences of the set of meeting
    participants.  In addition, individual meeting constraints and
    preferences may change as a result of an agent's own situation or of
    other agents' scheduling decisions.  Therefore, there
    is a need for distributed reactive schedule revision in response to
    changing requirements and constraints.  We present an approach to
    distributed meeting scheduling based on modeling and communication
    of constraints and preferences among the agents.  When a feasible
    global schedule cannot be found, agents enter a negotiation and
    relax their constraints. The approach enables the agents to find and
    reach agreement on the schedule with the highest joint utility and
    to reactively revise the schedule in response to new information.

%TI Uniform Representations for Syntax-Semantics Arbitration
%AU Kavi Mahesh
%AU Kurt P. Eiselt
%PU Proc. CogSci-94, pp. 589-594
%SC Monday, August 15, 11-12:30
%AB Psychological investigations have led to considerable insight into the
    working of the human language comprehension system. In this article, we
    look at a set of principles derived from psychological findings to argue
    for a particular organization of linguistic knowledge along with a
    particular processing strategy and present a computational model of
    sentence processing based on those principles. Many studies have shown
    that human sentence comprehension is an incremental and interactive
    process in which semantic and other higher-level information interacts
    with syntactic information to make informed commitments as early as
    possible at a local ambiguity.  Early commitments may be made by using
    top-down guidance from knowledge of different types, each of which must
    be applicable independently of others.  Further evidence from studies of
    error recovery and delayed decisions points toward an arbitration
    mechanism for combining syntactic and semantic information in resolving
    ambiguities. In order to account for all of the above, we propose that
    all types of linguistic knowledge must be represented in a common form
    but must be separable so that they can be applied independently of each
    other and integrated at processing time by the arbitrator. We present
    such a uniform representation and a computational model called COMPERE
    based on the representation and the processing strategy.

%TI Acoustic-based syllabic representation and articulatory gesture detection: Prerequisites for early childhood phonetic and articulatory development
%AU Kevin L. Markey
%PU Proc. CogSci-94, pp. 595-600
%SC Sunday, August 14, 11-12:30
%AB We describe the perceptual foundations of a sensorimotor model of
    early childhood phonetic and articulatory development.  The model's
    auditory perception is sensitive to prosodic and syllabic structure
    and simulates the categorical phonetic perception of late infancy.
    Importantly, the model relies on exclusively acoustic cues and their
    statistical distribution in the linguistic environment, avoiding
    prior assumptions of articulatory-acoustic correlations or
    linguistic contrasts which are inappropriate for a model of
    perceptual development.  The model detects and categorizes speech
    segments, which, despite their acoustic basis, correlate with
    linguistic events and articulatory gestures.  The resulting
    representation supports not only word recognition but also the
    unique demands of articulatory motor control and its development.
    In simulations examining the distinctiveness and faithfulness of the
    representation, we find that it preserves and makes explicit
    information about the phonetic properties of the acoustic signal.

%TI Time as Phase: A Dynamic Model of Time Perception
%AU J. Devin McAuley
%PU Proc. CogSci-94, pp. 607-612
%SC Monday, August 15, 11-12:30
%AB In this paper, a dynamic model of human time perception is presented
    which treats time as phase, relative to the period of an oscillator
    that adapts its oscillation rate in response to an input rhythm.
    The adaptive oscillator mechanism is characterized by four
    fundamental properties: (1) a preferred oscillation rate which
    captures the notion of a preferred tempo, (2) a fast-acting
    synchronization procedure which models our ability to perceptually
    lock onto salient aspects of a rhythm, (3) a decay process to oppose
    synchronization, and (4) a drift process which causes the preferred
    rate to gradually drift towards the adapted rate, thereby modeling
    the context effects of long-term pattern exposure. By assuming that
    sensitivity to duration is a function of oscillator entrainment to
    the contextual rhythm, the model provides a qualitative match to
    data on tempo discrimination, and predicts the types of errors
    subjects would make on such tasks.  These predictions are in
    agreement with data showing that subjects overestimate short
    intervals and underestimate long intervals.
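
    A hypothetical sketch (not the model itself) of an oscillator whose
    period synchronizes to the inter-onset intervals of an input rhythm,
    decays back toward a preferred period, and whose preferred period
    slowly drifts toward the adapted rate; the rate constants and the
    rhythm are assumptions.

        PREFERRED_PERIOD = 600.0  # ms, the "preferred tempo"
        SYNC_RATE = 0.5           # fast-acting synchronization
        DECAY_RATE = 0.1          # pull back toward the preferred period
        DRIFT_RATE = 0.02         # slow drift of the preferred period

        def entrain(onsets_ms):
            period = PREFERRED_PERIOD
            preferred = PREFERRED_PERIOD
            for prev, nxt in zip(onsets_ms, onsets_ms[1:]):
                interval = nxt - prev
                period += SYNC_RATE * (interval - period)       # synchronize
                period += DECAY_RATE * (preferred - period)     # decay
                preferred += DRIFT_RATE * (period - preferred)  # drift
                print(f"interval={interval:.0f}  period={period:.1f}  "
                      f"preferred={preferred:.1f}")

        # A rhythm slightly faster than the preferred tempo.
        entrain([0, 550, 1100, 1650, 2200])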

%TI Letter Perception: Toward a conceptual approach
%AU Gary McGraw
%AU John Rehling
%AU Rob Goldstone
%PU Proc. CogSci-94, pp. 613-618
%SC Monday, August 15, 11-12:30
%AB We present the results of a simple experiment in lowercase letter
    recognition.  Unlike most psychology studies of letter recognition,
    we include in our data set letters at the extremes of their
    categories and investigate the recognition of letters of multiple
    typefaces.  We are interested in the relationship between the
    recognition of normal letters and the recognition of non-standard
    letters.  Results provide empirical evidence for top-down conceptual
    constraints on letter perception in the form of roles and relations
    between perceptually-based structural subcomponents.  A process
    model based on the hypothesis developed below is currently being
    implemented.

%TI Towards a New Model of Phonological Encoding
%AU Drs. Paul J. A. Meijer
%PU Proc. CogSci-94, pp. 619-623
%SC Sunday, August 14, 11-12:30
%AB The sound-form generation of a word in speech production involves
    the retrieval of segmental and suprasegmental information from the
    mental lexicon.  A translation task experiment showed that the
    naming latencies of target items can be reduced when prime words are
    presented that have the same placement of the lexical stress as the
    target.  However, this reduction will only occur when primes and
    targets have the same word onset.  A second experiment showed that
    primes that have the same number of segments as the targets will
    cause naming facilitation compared to primes that have different
    numbers of segments.  I have developed a new model of phonological
    encoding that incorporates ordered selection of the various
    elements.  Lexical stress is chosen first, followed by information
    about the number of slots, the word onset, the second segment, and
    the other segments, until all segments have been selected.  The
    model further employs mechanisms that allow for the retrieval of the
    initial segment to influence the retrieval of lexical stress.
    Various simulations show that the model can replicate the findings
    of the two experiments. Other models of phonological encoding
    largely neglect suprasegmental retrieval and cannot explain these
    results.

%TI How Mathematicians Prove Theorems
%AU Erica Melis
%PU Proc. CogSci-94, pp. 624-628
%SC Sunday, August 14, 11-12:30
%AB This paper analyzes how mathematicians prove theorems. The analysis
    is based upon several empirical sources such as reports of
    mathematicians and mathematical proofs by analogy. In order to
    combine the strength of traditional automated theorem provers with
    human-like capabilities, the questions arise: Which problem solving
    strategies are appropriate?  Which representations have to be
    employed?  As a result of our analysis, the following reasoning
    strategies are recognized: proof planning with partially
    instantiated methods, structuring of proofs, the transfer of
    subproofs and of reformulated subproofs. We discuss the
    representation of a component of these reasoning strategies, as well
    as its properties.  We find that some mechanisms needed for theorem
    proving by analogy are not provided by previous approaches to
    analogy.  This leads us to a computational representation of new
    components and procedures for automated theorem proving systems.

%TI Scaffolding Effective Problem Solving Strategies in Interactive Learning Environments
%AU Douglas C. Merrill
%AU Brian J. Reiser
%PU Proc. CogSci-94, pp. 629-634
%SC Sunday, August 14, 4-5:30
%AB Novices often experience great difficulty learning new domains.
    Thus, understanding how best to scaffold novice problem solving has
    potentially tremendous importance for learning in formal domains.
    In this paper, we present results from an experimental study that
    compared learning outcomes of students solving introductory
    programming problems in three different learning environments.  This
    range of environments varies in two ways.  First, the notations used
    in the environments vary between diagrammatic and textual.  More
    importantly, the environments differ in the cognitive activities
    students are led to perform while solving problems, such as
    prediction of intermediate results and noting future goals to
    achieve.  This experiment demonstrated that environments that
    scaffold more of the important cognitive activities lead to superior
    performance, regardless of whether the environments are textual or
    diagrammatic.

%TI Modeling Inter-Category Typicality within a Symbolic Search Framework
%AU Craig S. Miller
%PU Proc. CogSci-94, pp. 635-639
%SC Sunday, August 14, 11-12:30
%AB This paper addresses category typicality in the context of a
    category naming task.  In contrast to the predominant effort with
    gradient models, a symbolic search framework is taken.  Within this
    framework, the SCA (Symbolic Concept Acquisition) model demonstrates
    varying response times as a function of an instance's intra-category
    typicality.  Here its coverage is expanded to inter-category
    typicality.  A functionally motivated extension for SCA is advanced
    that pursues search backtracking under ambiguous cases.  I explain
    how the backtracking extension accounts for inter-category
    typicality effects, and support it with some empirical evidence.  I
    discuss how the effect generalizes to a larger class of symbolic
    search models.

%TI Mental models for proportional reasoning
%AU Joyce L. Moore
%AU Daniel L. Schwartz
%PU Proc. CogSci-94, pp. 640-645
%SC Monday, August 15, 4-5:30
%AB Three studies investigated the role of perceptual and quantitative
    situational factors on the structure of 5th- and 6th-graders' mental
    models.  A task involved a carton of orange juice made from
    concentrate and water, and two glasses of different sizes filled
    from the carton.  The children had to predict whether the two
    glasses would taste the same.  We manipulated whether students were
    presented with physical, diagrammatic, photographic, or textual
    information. We also manipulated the type of relationship specified
    between quantities: qualitative, easy numerical, or difficult
    numerical.  We found that for the diagram condition, difficult
    numerical relationships yielded poor performance, whereas the easy
    numerical and qualitative relationships yielded excellent
    performance.  In contrast, in the physical condition, the easy
    numerical relationships yielded poor performance, whereas the
    difficult numerical and qualitative relationships yielded excellent
    performance.  These and other results are interpreted by developing
    a sketch of the mental models pre-proportional children construct to
    reason about this quantitative situation, and describing how
    situational factors influence the construction of the models.  For
    example, physical features led to models that captured the identity
    relationship between the juice in the glasses (e.g., the juice came
    from the same carton) whereas numerical features led to models that
    captured the relationship between the constituents of concentrate
    and water in each glass (e.g., within a glass there is more water
    than concentrate).

%TI Integrating Creativity and Reading:  A Functional Approach 
%AU Kenneth Moorman
%AU Ashwin Ram 
%PU Proc. CogSci-94, pp. 646-651
%SC Sunday, August 14, 4-5:30
%AB Reading has been studied for decades by a variety of cognitive
    disciplines, yet no theories exist which sufficiently describe and
    explain how people accomplish the complete task of reading
    real-world texts.  In particular, a type of knowledge intensive
    reading known as creative reading has been largely ignored by the
    past research.  We argue that creative reading is an aspect of
    practically all reading experiences; as a result, any theory which
    overlooks this will be insufficient.  We have built on results from
    psychology, artificial intelligence, and education in order to
    produce a functional theory of the complete reading process.  The
    overall framework describes the set of tasks necessary for reading
    to be performed.  Within this framework, we have developed a theory
    of creative reading.  The theory is implemented in the ISAAC
    (Integrated Story Analysis And Creativity) system, a reading system
    which reads science fiction stories.

%TI A Study of Diagrammatic Reasoning from Verbal and Gestural Data
%AU N. Hari Narayanan
%AU Masaki Suwa
%AU Hiroshi Motoda
%PU Proc. CogSci-94, pp. 652-657
%SC Tuesday, August 16, 11-12:30
%AB This paper reports on an exploratory study of diagrammatic
    reasoning.  Concurrent think-aloud protocols and gestures of
    subjects solving a set of device behavior hypothesis problems
    presented as labeled diagrams were collected. In addition to
    analyzing verbal protocols, the gestures and marks made by the
    subjects were examined and used to annotate encoded verbal data.  A
    model of diagrammatic reasoning in this task is proposed and
    compared with results of analyzing the protocols.  Besides lending
    support to results of previous experimental studies, this study also
    revealed some interesting aspects of diagrammatic reasoning that
    merit further investigation.

%TI Integrating Cognitive Capabilities in a Real-Time Task
%AU Greg Nelson
%AU Jill Fain Lehman
%AU Bonnie E. John
%PU Proc. CogSci-94, pp. 658-663
%SC Sunday, August 14, 4-5:30
%AB NTD-Soar is a model of the perceptual, cognitive, and motor actions
    performed by the NASA Test Director as he utilizes the materials in
    his surroundings and communicates with others to prepare for a Space
    Shuttle Launch.  The model, built within the framework of a serial
    symbolic architecture, is based on a number of independently
    designed general cognitive capabilities as well as a cognitive
    analysis of a particular task.  This paper presents a detailed
    description of the model and an assessment of its performance when
    compared to human data.  NTD-Soar's ability to display human-like
    real-time performance demonstrates that symbolic models with a
    serial bottleneck can account for complex behaviors which appear to
    happen in parallel, simply by opportunistically interleaving small
    elements of the different subtasks.

%TI Can Connectionist Models Exhibit Non-Classical Structure Sensitivity?
%AU Lars Niklasson
%AU Tim van Gelder
%PU Proc. CogSci-94, pp. 664-669
%SC Sunday, August 14, 2-3:30
%AB Several connectionist models have offered non-classical responses to
    the challenge of explaining systematicity, i.e., structure-sensitive
    processes, without merely being implementations
    of classical architectures. However, lately the challenge has been
    extended to include learning related issues. It has been claimed
    that when these issues are taken into account, only a restricted
    form of systematicity could be claimed by the connectionist models
    put forward so far. In this paper we investigate this issue further,
    and supply a model and results that satisfy even the revised
    challenge.

%TI Cognitive Development and Infinity in the Small: Paradoxes and Consensus
%AU Rafael Nunez
%PU Proc. CogSci-94, pp. 670-674
%SC Monday, August 15, 7:30-9
%AB Throughout history the concept of infinity has played an important
    role in almost every branch of human knowledge. Paradoxically, very
    little effort has been made by the various theoretical schools in
    Cognitive Science to study this fascinating aspect of human mental
    activity. The study of subdivision offers an interesting subject
    matter to address the question of how the idea of infinity in the
    small emerges in our minds.  Thirty-two students, aged 8, 10, 12, and
    14 (high and low intellectual/academic performers), participated in this
    study, in which a version of one of Zeno's paradoxes was analyzed by
    means of individual interviews. Results suggest that between ages 10
    and 12, a certain intuition of the entailments of subdivision
    emerges, remaining very labile afterwards and being very influenced
    by the context. 66% of the 12- and 14-year-old children said that
    the process involved in the paradox comes to an end. Less than 25%
    considered (with deep hesitations) the possibility that the process
    might continue endlessly.  This suggests that the classic Piagetian
    view that indefinite subdivision is mastered in the period of formal
    operations must be reassessed.  Some epistemological
    consequences based on an embodied-cognition-oriented perspective
    are discussed.

%TI Changing the Viewpoint: Re-Indexing by Introspective Questioning
%AU Ruediger Oehlmann
%AU Pete Edwards
%AU Derek Sleeman
%PU Proc. CogSci-94, pp. 675-680
%SC Monday, August 15, 2-3:30
%AB Various cognitive and computational models have addressed the use of
    previous experience to understand a new domain. In particular,
    research in case-based reasoning has explored the ideas of
    retrieving and adapting previous experience in the form of cases,
    which can only be retrieved when they are appropriately indexed.  In
    contrast to learning new indexes, re-indexing of existing cases has
    received little attention. The need for re-indexing a case arises
    when a previous situation has been incorrectly or incompletely
    understood. We describe a novel approach to re-indexing which
    integrates results from two different areas: multiple viewpoints
    used in intelligent tutoring systems and introspective questioning
    used in metacognitive activities.  Furthermore, we apply ideas from
    Case-Based Reasoning to the re-indexing process itself. The revised
    index can be tested by active interaction with the agent's
    environment. An example of our implementation, IULIAN, will
    illustrate the re-indexing process.

%TI The Power of Negative Thinking: The Central Role of Modus Tollens in Human Cognition
%AU Stellan Ohlsson
%AU Nina Robin
%PU Proc. CogSci-94, pp. 681-686
%SC Sunday, August 14, 11-12:30
%AB Thinking is governed by abstract schemas. Verbal protocols
    illustrate spontaneous use, by logically unsophisticated subjects,
    of the schema known as modus tollens. The tollens inference schema
    appeared embedded within two reasoning strategies, the classical
    reductio ad absurdum and reasoning by elimination. The psychological
    reality of modus tollens is implicitly assumed by many theories in
    cognitive science and the hypothesis that it is a basic component of
    human cognition cannot be dismissed.

%TI Similarity by feature creation: Reexamination of the asymmetry of similarity
%AU Hitoshi Ohnishi
%AU Hiroaki Suzuki
%AU Kazuo Shigemasu
%PU Proc. CogSci-94, pp. 687-692
%SC Sunday, August 14, 2-3:30
%AB We developed a computational model of similarity judgment in
    problem-solving contexts. The model first attempts to transform an
    object to another using the knowledge of the domain, the strategy,
    and the goal.  If the transformation succeeds, a new feature about
    transformability is created.  The similarity of one object to another is
    then computed based on the created features.  If the model fails to
    create a new feature, it computes a similarity by feature comparison
    in the same way as the contrast model. An important prediction of
    the model is that the asymmetry of similarity judgments is caused by
    the directionality of the problem-solving skills. We examined the
    model's prediction. The material was the Tower of Hanoi puzzle.
    Subjects were required to rate the similarities of one state to the
    goal as well as those of the goal to a state.  In Experiment 1, we
    taught one group of subjects the `move-pattern strategy' that
    induced learners to acquire highly directional skills, and compared
    their judgments with those by naive subjects.  The asymmetry was
    observed only in the judgments by the trained subjects. The second
    experiment showed that the results of Experiment 1 could not be
    attributed to the `prototypicality' of the goal.

%TI A connectionist account of Global Precedence: Theory and data
%AU Elizabeth M. Olds
%PU Proc. CogSci-94, pp. 693-698
%SC Monday, August 15, 11-12:30
%AB A connectionist model was developed to investigate the relationship
    between global and local information in visual perception, and an
    experiment tested a prediction generated by the model.  The research
    focused on the fact that processing of global information is found
    to dominate processing of local information in many tasks ("global
    precedence").  The connectionist model demonstrated that global
    precedence can arise out of simple parallel processing.  The
    experiment demonstrated that rotating global elements eliminates
    Global Precedence.  This empirical result supports the possibility,
    raised by the model, that Global Precedence is due in part to
    simplicity of input-output mapping.

%TI Modeling the Use of Frequency and Contextual Biases in Sentence Processing
%AU Neal J. Pearlmutter
%AU Kim G. Daugherty
%AU Maryellen C. MacDonald
%AU Mark S. Seidenberg
%PU Proc. CogSci-94, pp. 699-704
%SC Tuesday, August 16, 11-12:30
%AB MacDonald, Pearlmutter, and Seidenberg (1993) propose an alternative
    to the dominant view in sentence processing that syntactic
    ambiguities are resolved by heuristics based on structural
    simplicity.  MacDonald et al.  argue that such ambiguities can be
    defined in terms of alternatives associated with information in
    individual lexical items, and thus that syntactic ambiguities can be
    resolved by lexical disambiguation mechanisms relying on access to
    the relative frequencies of alternatives and to biases created by
    contextual constraints.  We present evidence from a computer
    simulation of the use of frequency-based and contextual constraints
    in the processing of the main verb/reduced relative syntactic
    ambiguity, showing that frequency and relatively limited contextual
    information from a sample of natural language can interact
    sufficiently to model basic results in the literature.

%TI KA: Situating Natural Language Understanding in Design Problem Solving
%AU Justin Peterson
%AU Kavi Mahesh
%AU Ashok Goel
%AU Kurt Eiselt
%PU Proc. CogSci-94, pp. 711-716
%SC Sunday, August 14, 4-5:30
%AB In this paper, we investigate the interaction between linguistic and
    non-linguistic processes by considering the role of functional reasoning
    in understanding design specifications written in natural language.  We
    describe KA, an experimental model-based interpretation and design
    system which understands English language descriptions of the design
    problems it solves, and examine whether KA's problem-solving
    capabilities help i) ascertain the relevance of ambiguous design
    specifications and ii) identify unspecified relations between design
    requirements.  Our results demonstrate that augmenting language
    processing with the ability to reason about function along the lines
    suggested in KA provides effective solutions to these problems in
    particular as well as to other problems in natural language
    understanding.

%TI Correspondences between Syntactic Form and Meaning: From Anarchy to Hierarchy
%AU Justin Peterson
%AU Dorrit Billman
%PU Proc. CogSci-94, pp. 705-710
%SC Monday, August 15, 4-5:30
%AB If we are to develop language processing systems that model human
    capabilities and performance, we must identify correspondences between
    the grammatical features and meaning of language and employ them in our
    computational models of sentence interpretation.  In this paper, we
    present a computational model of sentence interpretation and a theory of
    compositional semantics.  Our model provides a method for addressing a
    range of lexical novelty (e.g., novel verbs, novel uses of known
    verbs), relying on a semantic representation that maintains principled
    correspondences with syntactic form.  In our approach, syntactic
    structure preserves critical information about the hierarchical
    structure of semantic interpretations.  This property of the semantic
    representation along with restrictions on semantic interpretations
    enable the model to infer the semantics of novel verbs, disambiguate the
    semantics of known verbs, and determine the contributions that verb
    arguments make to sentence interpretation in a constrained and
    principled manner.  This research offers a fruitful approach for using
    linguistic analysis to address the recovery of meaning in natural language
    processing systems.

%TI Categorization and the Parsing of Objects
%AU Rachel Pevtzow
%AU Robert L. Goldstone
%PU Proc. CogSci-94, pp. 717-722
%SC Sunday, August 14, 11-12:30
%AB Several models of categorization suggest that fixed inputs
    (features) are combined together to create categorization rules.  It
    is also possible that categorization influences what features are
    perceived and used.  This experiment explored the possibility that
    categorization training influences how an object is decomposed into
    parts.  In the first part of this experiment, subjects learned to
    categorize objects based on particular sets of line segments.
    Following categorization training, subjects were tested in a
    whole-part decomposition task, making speeded judgments of "does
    whole X contain probe Y".  All diagnostic and nondiagnostic category
    parts were used as parts within the whole objects, and as probes.
    Categorization training in the first part of the experiment affected
    performance on the second task.  In particular, subjects were faster
    to respond when the whole object contained a part that was
    diagnostic for categorization than when it contained a nondiagnostic
    part.  When the probe was a diagnostic category part, subjects were
    faster to respond that it was present than absent, and when the
    probe was a nondiagnostic part, subjects were faster to respond that
    it was absent than that it was present.  These results are discussed
    in terms of perceptual sensitivity, response bias, and the
    modulating influence of experience.

%TI Strong Systematicity within Connectionism: The Tensor-Recurrent Network
%AU Steven Phillips
%PU Proc. CogSci-94, pp. 723-727
%SC Sunday, August 14, 2-3:30
%AB Systematicity, the ability to represent and process structurally
    related objects, is a significant and pervasive property of
    cognitive behaviour, and clearly evident in language. In the case of
    Connectionist models that learn from examples, systematicity is
    generalization over examples sharing a common structure.  Although
    Connectionist models (e.g., the recurrent network and its variants)
    have demonstrated generalization over structured domains, there has
    not been a clear demonstration of strong systematicity (i.e.,
    generalization across syntactic position). The tensor has been
    proposed as a way of representing structured objects; however, there
    has not been an effective learning mechanism (in the strongly
    systematic sense) to explain how these representations may be
    acquired. I address this issue through an analysis of tensor
    learning dynamics. These ideas are then implemented as the
    tensor-recurrent network which is shown to exhibit strong
    systematicity on a simple language task. Finally, it is suggested
    that the properties of the tensor-recurrent network that give rise
    to strong systematicity are analogous to the concepts of variables
    and types in the Classical paradigm.
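
    A hedged illustration (not the tensor-recurrent network itself) of the
    tensor-product binding on which the approach builds: each filler is
    bound to a role by an outer product, the bindings are superimposed,
    and an orthonormal role vector unbinds its filler exactly.  The role
    and filler vectors are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)

        roles = np.eye(3)  # agent, action, patient (orthonormal role vectors)
        fillers = {name: rng.normal(size=4) for name in ["John", "loves", "Mary"]}

        # Bind each filler to its role with an outer product and superimpose.
        sentence = sum(np.outer(roles[i], f)
                       for i, f in enumerate([fillers["John"],
                                              fillers["loves"],
                                              fillers["Mary"]]))

        # Unbinding the patient role recovers the "Mary" vector.
        patient = roles[2] @ sentence
        print("recovered patient matches Mary:",
              np.allclose(patient, fillers["Mary"]))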

%TI A Simple Co-Occurrence Explanation for the Development of Abstract Letter Identities
%AU Thad A. Polk
%AU Martha J. Farah
%PU Proc. CogSci-94, pp. 728-732
%SC Monday, August 15, 4-5:30
%AB Evidence suggests that an early representation in the visual
    processing of orthography is neither visual nor phonological, but
    codes abstract letter identities (ALIs) independent of case, font,
    size, etc.  How could the visual system come to develop such a
    representation?  We propose that, because many letters look similar
    regardless of case, font, etc., different visual forms of the same
    letter tend to appear in visually similar contexts (e.g., in the
    same words written in different ways) and that correlation-based
    learning in visual cortex picks up on this similarity among contexts
    to produce ALIs.  We present a simple self-organizing Hebbian neural
    network that illustrates how this idea could work and that produces
    ALIs when presented with appropriate input.
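
    A toy, hypothetical sketch (not the network from the paper) of the
    co-occurrence idea: because case-similar letters share visual
    features, the upper- and lowercase forms of a letter accumulate
    similar context statistics when the same words are seen in different
    cases.  The mini-lexicon and the feature assignments are assumptions.

        import numpy as np

        # Case-similar letters (c/C, o/O, s/S) share a visual feature;
        # case-dissimilar letters (a/A, t/T) get distinct features.
        features = {
            "c": "round_c", "C": "round_c",
            "o": "round_o", "O": "round_o",
            "s": "curve_s", "S": "curve_s",
            "a": "a_lower", "A": "a_upper",
            "t": "t_lower", "T": "t_upper",
        }
        feature_names = sorted(set(features.values()))
        fidx = {f: i for i, f in enumerate(feature_names)}

        words = ["cas", "sot", "cost", "as", "cat"]  # toy strings, not real words

        context = {form: np.zeros(len(feature_names)) for form in features}

        for word in words:
            for variant in (word, word.upper()):  # the same word in both cases
                for ch in variant:
                    for other in variant:
                        if other != ch:           # Hebbian-style co-occurrence:
                            context[ch][fidx[features[other]]] += 1.0

        def sim(a, b):
            va, vb = context[a], context[b]
            return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

        print("sim('a', 'A') =", round(sim("a", "A"), 2))  # same letter: high
        print("sim('a', 'T') =", round(sim("a", "T"), 2))  # different letters: lower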

%TI Probabilistic Reasoning under Ignorance 
%AU Marco Ramoni 
%AU Alberto Riva 
%AU Vimla L. Patel 
%PU Proc. CogSci-94, pp. 733-738
%SC Monday, August 15, 7:30-9
%AB The representation of ignorance is a long standing challenge for
    researchers in probability and decision theory. During the past
    decade, Artificial Intelligence researchers have developed a class
    of reasoning systems, called Truth Maintenance Systems, which are
    able to reason on the basis of incomplete information. In this paper
    we will describe a new method for dealing with partially specified
    probabilistic models, by extending a logic-based truth maintenance
    method from Boolean truth-values to probability intervals. Then we
    will illustrate how this method can be used to represent Bayesian
    Belief Networks - one of the best known formalisms to reason under
    uncertainty - thus producing a new class of Bayesian Belief
    Networks, called Ignorant Belief Networks, able to reason on the
    basis of partially specified prior and conditional probabilities.
    Finally, we will discuss how this new method relates to some
    theoretical intuitions and empirical findings in decision theory and
    cognitive science.

%TI Troubleshooting Strategies in a Complex, Dynamical Domain
%AU Margaret M. Recker
%AU T. Govindaraj
%AU Vijay Vasandani
%PU Proc. CogSci-94, pp. 739-744
%SC Monday, August 15, 2-3:30
%AB In this paper, we present results from two empirical studies in
    which subjects diagnosed faults that occurred in a computer-based,
    dynamical simulation of an oil-fired marine power plant, called
    Turbinia.  Our results were analyzed in the framework of dual
    problem space search (DPSS), in which non-routine diagnosis was
    characterized as a process of generating hypotheses to explain the
    observed faults, and testing these hypotheses by conducting
    experiments.  In the first study, we found that the less-efficient
    subjects conducted significantly more experiments, indicating a
    strong bottom-up bias in their diagnostic strategy.  In the second
    study, we examined the effects of imposing external resource bounds
    on subjects' diagnostic strategies.  Results indicated that
    constraints on diagnosis time led to a reduction in the number of
    actions performed and components viewed, without appearing to affect
    diagnostic performance.  Constraints on the number of diagnostic
    tests reduced search in the experiment space, which appeared to
    negatively affect performance.  Taken together, these results
    suggest that subjects' diagnostic strategies were sensitive to
    constraints in the external task environment.  We close with a
    sketch of how DPSS might be augmented to include effects due to
    external resource bounds.

%TI The Guessing Game: A Paradigm for Artificial Grammar Learning
%AU Martin Redington
%AU Nick Chater
%PU Proc. CogSci-94, pp. 745-749
%SC Monday, August 15, 7:30-9
%AB In a guessing game, subjects reconstruct a sequence by guessing each
    successive element of the sequence from a finite set of
    alternatives, receiving feedback after each guess.  An upper bound
    on subjects' knowledge of the sequence is given by H, the estimated
    entropy of the guess counts.  The method provides a measure of
    learning independent of material type and distractors, and the
    resulting data set is very rich.  Here, the method is applied to
    artificial grammar learning; subjects were exposed to strings from a
    finite state grammar and subsequently distinguished between strings
    that followed or violated the grammar reliably better than subjects
    who had not seen the learning strings (but who themselves performed
    at above chance levels).  Subjects' knowledge of the strings, H,
    reflected both grammaticality and exposure to learning strings, and
    was correlated with overall judgement performance.  For
    non-grammatical strings, the strings that subjects knew most about were
    those they found most difficult to classify correctly.  These
    results support the hypothesis that fragment knowledge plays an
    important part in artificial grammar learning, and we suggest that
    the guessing game paradigm is a useful tool for studies of learning
    and memory in general.
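
    The entropy measure H can be estimated directly from the distribution
    of guess counts; the counts below are made-up illustrative data, not
    the paper's results:

        # Estimate H from how many guesses each element of a sequence took.
        import math
        from collections import Counter

        guess_counts = [1, 1, 2, 1, 3, 1, 2, 4, 1, 2]   # guesses per element

        freq = Counter(guess_counts)
        n = len(guess_counts)
        H = -sum((c / n) * math.log2(c / n) for c in freq.values())
        print(f"estimated entropy H = {H:.3f} bits per element")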

%TI Educational Implications of CELIA: Learning by Observing and Explaining
%AU Michael Redmond
%PU Proc. CogSci-94, pp. 750-755
%SC Monday, August 15, 7:30-9
%AB CELIA is a computational model of how a novice student can quickly
    become competent at a procedural task through observing and
    understanding an expert's problem solving. This model was inspired
    by protocol studies, and implemented in a computer program.  This
    model of a student's effective learning suggests some implications
    for teaching novices in a new domain. These may be relevant for both
    human teaching and intelligent tutoring. The implications include:
    encourage the student to predict, interactive step-by-step
    presentation of example steps, encourage self-explanation by the
    student, order example steps to match their logical order, give a
    variety of examples in early instruction, allow flexible interaction
    with the student, and present basic background concepts prior to
    examples. These implications represent hypotheses that follow from
    the learning model; they suggest further research.

%TI Improving Design with Artifact History
%AU Brent Neal Reeves
%PU Proc. CogSci-94, pp. 756-761
%SC Monday, August 15, 7:30-9
%AB History tools play an important part in supporting human computer
    interaction.  Most research in history tools has focussed on user
    interaction histories.  In contrast, this paper presents a
    theoretical framework for artifact history and describes a computer
    based design environment which implements embedded artifact history.
    The most promising area for history tools is in collaborative
    design, helping users to understand others' as well as one's own
    previous work.

%TI Explanatory AI, Indexical Reference, and Perception
%AU Lawrence D. Roberts
%PU Proc. CogSci-94, pp. 762-765
%SC Monday, August 15, 7:30-9
%AB Researchers in AI often say that certain types of reference are
    based on perception.  Their models, however, do not reflect
    perceptual functioning, but instead represent denotation, an
    intellectually modeled relation, by using exact feature matching
    in a serial device as the basic mechanism for reference.  I point
    out four problems in this use of denotation: substitution of an
    intellectual model for a perceptual one; unclarity about the nature
    of referential identification; relative neglect of the role of
    contrast in reference; and inexact matches.  I then suggest an
    alternative theoretical account for perceptually based indexical
    reference, the figure-ground model, and I explain how this model
    handles the four problems.

%TI Learning Features of Representation in Conceptual Context
%AU Luc Rodet
%AU Philippe G. Schyns
%PU Proc. CogSci-94, pp. 766-771
%SC Monday, August 15, 7:30-9
%AB When people categorize an object, they often encode a certain number of
    its properties for later classification. In Schyns and Murphy (in
    press), we suggested that the way people group objects into categories
    could induce the learning of new dimensions of categorization--i.e.,
    dimensions that did not exist prior to the experience with the
    categorization system. In this research, we examine whether the context
    of known concepts can influence feature extraction. The first experiment
    simply tested whether the context of different object categories could
    change the perception of the same target stimuli. The second experiment
    examined whether learning category B given the concept of category A may
    result in different features being learned than learning A given B. The
    results showed that the context of known concepts influences the features
    people learn to represent object categories.

%TI On-line versus Off-line Priming of Word-Form Encoding in Spoken Word Production
%AU Ardi Roelofs
%PU Proc. CogSci-94, pp. 772-777
%SC Sunday, August 14, 11-12:30
%AB The production of a disyllabic word is speeded up by advance
    (off-line) knowledge of the first syllable, but not by knowledge
    about the second syllable (Meyer, 1990). By contrast, when first
    syllable or second-syllable primes are presented during the
    production of a disyllabic word (on-line), both primes yield a
    facilitatory effect (Meyer & Schriefers, 1991). In this paper, the
    computational model of word-form encoding in speaking developed in
    Roelofs (1992b, submitted) is applied to these contradictory
    findings. Central to the model is the proposal by Levelt (1992) that
    morphemic representations are mapped onto stored syllable programs
    by serially grouping the morphemes' segments into phonological
    syllables, which are then used to address the programs in a
    syllabary. Results of computer simulations reported in this paper
    show that the model resolves the empirical discrepancy.

%TI Do Children have Epistemic Constructs about Explanatory Frameworks: Examples from Naive Ideas about the Origin of Species
%AU Ala Samarapungavan
%AU Reinout Wiers
%PU Proc. CogSci-94, pp. 778-783
%SC Monday, August 15, 4-5:30
%AB This paper presents the results of a study which examined children's
    ideas about the origin and differentiation of species.  The focus of
    this paper is on the epistemic constructs associated with children's
    explanatory frameworks. Two groups of elementary school students,
    9-year-olds and 12-year-olds, were interviewed using a
    semi-structured questionnaire. The results indicate that most
    children explain the phenomena of speciation in terms of a
    conceptual framework that strongly resembles either early Greek or
    later renaissance variants of Essentialist theories in biology.
    Children also demonstrate a spontaneous understanding of important
    epistemic constructs associated with theoretical frameworks.  For
    example, most children show an explicit awareness of the boundaries
    of their theoretical frameworks and have some idea of the phenomena
    that such a framework can and should explain.  Many children treat
    questions about the origins of the first animal and plant species as
    "first questions," or questions which are in principle unanswerable.
    The children appear to distinguish between facts that they as
    individuals lack but that are probably known by experts, domain
    problems that are unsolved but could in principle be answered by
    biological theories, and problems that are beyond the explanatory
    scope of biological theories.

%TI A Connectionist Model of Verb Subcategorization
%AU Hinrich Schutze
%PU Proc. CogSci-94, pp. 784-788
%SC Tuesday, August 16, 11-12:30
%AB Much of the debate on rule-based vs. connectionist models in
    language acquisition has focussed on the English past tense. This
    paper investigates a new area, the acquisition of verb
    subcategorization.  Verbs differ in how they express their arguments
    or subcategorize for them. For example, ``She gave him a book.''  is
    good, but ``She donated him a book.'' sounds odd.  The paper
    describes a connectionist model for the acquisition of verb
    subcategorization and how it accounts for overgeneralization and
    learning in the absence of explicit negative evidence. It is argued
    that the model presents a better explanation for the transition from
    the initial rule-less state to final rule-like behavior for some
    verb classes than the symbolic account proposed by Pinker (1989).

%TI Viewpoint dependence and face recognition 
%AU Philippe G. Schyns
%AU Heinrich H. Bulthoff
%PU Proc. CogSci-94, pp. 789-793
%SC Monday, August 15, 11-12:30
%AB Face recognition stands out as a singular case of object recognition:
    although most faces are very much alike, people discriminate between
    many different faces with outstanding efficiency.  Even though little
    is known about the mechanisms of face recognition, viewpoint
    dependence, a recurrent finding in face recognition research,
    could inform algorithms and representations.  Poggio and Vetter's symmetry
    argument predicts that learning only one view of a face may be sufficient
    for recognition, if this view allows the computation of a symmetric,
    ``virtual,'' view.  More specifically, as faces are roughly bilaterally
    symmetric objects, learning a side-view--which always has a symmetric 
    view--should give rise to better generalization performances than
    learning the frontal view.  It is also predicted that among all
    new views, a virtual view should be best recognized.  We ran two
    psychophysical experiments to test these predictions.  Stimuli
    were views of 3D models of laser-scanned faces.  Only shape was available
    for recognition; all other face cues--texture, color, hair, etc.--were
    removed from the stimuli.  The first experiment tested whether a particular 
    view of a face was canonical.  The second experiment tested which single
    views of a face give rise to best generalization performances.  The
    results were compatible with the symmetry argument:  face recognition
    from a single view is always better when the learned view allows the
    computation of a symmetric view.  

%TI Multiple Learning Mechanisms Within Implicit Learning
%AU Carol Augart Seger
%PU Proc. CogSci-94, pp. 794-799
%SC Monday, August 15, 2-3:30
%AB The experiment reported in this paper provides evidence that there
    are at least two independent implicit learning mechanisms in
    implicit learning: an efficiency mechanism, which underlies changes
    in reaction time to patterned stimuli, and a conceptual fluency
    mechanism, which underlies the ability to make judgments about
    stimuli based on implicit knowledge.  Each of these implicit
    mechanisms is independent of explicit learning.  Subjects performed
    a serial reaction time task under one of three learning conditions
    (nonattentional, attentional and observational) for one of three
    study lengths (2, 6 or 12 blocks). Subjects then completed five
    tests of their knowledge: attentional and nonattentional reaction
    time tasks (measuring two kinds of efficiency learning), an awareness
    questionnaire (measuring explicit knowledge), a generation task,
    and a conceptual fluency task.  Correlation analyses and criterion
    analyses found no dependencies between the measures in low awareness
    subjects. In addition, the measures were influenced differently by
    the independent variables of learning condition and study length;
    these dissociations indicate separate underlying mechanisms.
    Implications of the existence of multiple implicit mechanisms for
    connectionist modeling of implicit learning are drawn.

%TI Learning with friends and foes
%AU Mahendra Sekaran
%AU Sandip Sen
%PU Proc. CogSci-94, pp. 800-805
%SC Tuesday, August 16, 11-12:30
%AB Social agents, both human and computational, inhabiting a world
    containing multiple active agents, need to coordinate their
    activities.  This is because agents share resources, and without
    proper coordination or ``rules of the road'', everybody will be
    interfering with the plans of others.  As such, we need coordination
    schemes that allow agents to effectively achieve local goals without
    adversely affecting the problem-solving capabilities of other
    agents.  Researchers in the field of Distributed Artificial
    Intelligence (DAI) have developed a variety of coordination schemes
    under different assumptions about agent capabilities and
    relationships.  Whereas some of this research has been motivated
    by human cognitive biases, other efforts have approached it as an
    engineering problem of designing the most effective coordination
    architecture or protocol.  We propose reinforcement learning as a
    coordination mechanism that imposes little cognitive burden on
    agents.  More interestingly, we show that a uniform learning
    mechanism suffices as a coordination mechanism in both cooperative
    and adversarial situations.  Using an example block-pushing problem
    domain, we demonstrate that agents can use reinforcement learning
    algorithms, without explicit information sharing, to develop
    effective policies to coordinate their actions both with agents
    acting in unison and with agents acting in opposition.
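
    A generic tabular reinforcement-learning update of the sort such
    agents could use is sketched below; the states, actions, and
    parameters are assumptions, not the paper's block-pushing setup:

        # Q-learning with an epsilon-greedy policy and no information sharing.
        import random
        from collections import defaultdict

        actions = ["push_left", "push_right", "wait"]
        Q = defaultdict(float)                  # Q[(state, action)]
        alpha, gamma, epsilon = 0.1, 0.9, 0.1

        def choose_action(state):
            if random.random() < epsilon:
                return random.choice(actions)   # explore
            return max(actions, key=lambda a: Q[(state, a)])

        def update(state, action, reward, next_state):
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])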

%TI Tractable Learning of Probability Distributions Using the Contrastive Hebbian Algorithm
%AU Craig E. L. Stark
%AU James L. McClelland
%PU Proc. CogSci-94, pp. 818-823
%SC Monday, August 15, 7:30-9
%AB In some tasks (e.g., assigning meanings to ambiguous words) humans
    produce multiple distinct alternatives in response to a particular
    stimulus, apparently mirroring the environmental probabilities
    associated with each alternative. For this purpose, a network
    architecture is needed that can produce a distribution of outcomes,
    and a learning algorithm is needed that can lead to the discovery of
    ensembles of connection weights that reproduce the environmentally
    specified probabilities.  Stochastic symmetric networks such as
    Boltzmann machines and networks that use graded activations
    perturbed with Gaussian noise can exhibit such distributions at
    equilibrium, and they can be trained to match environmentally
    specified probabilities using Contrastive Hebbian Learning, the
    generalized form of the Boltzmann Learning algorithm. Learning
    distributions exacts a considerable computational cost as processing
    time is used both in settling to equilibrium and in sampling
    equilibrium statistics. The work presented here examines the extent
    of this cost and how it may be minimized, and produces speedups of
    roughly a factor of 5 compared to previously published results.
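
    The Contrastive Hebbian weight change can be stated compactly: each
    weight moves in proportion to the difference between unit co-products
    in the clamped (plus) and free-running (minus) phases.  The sketch
    below assumes equilibrium activations are already available and is
    not the full settling procedure:

        # Contrastive Hebbian update for a symmetric network.
        import numpy as np

        def chl_update(W, act_plus, act_minus, lr=0.05):
            dW = lr * (np.outer(act_plus, act_plus)
                       - np.outer(act_minus, act_minus))
            np.fill_diagonal(dW, 0.0)           # no self-connections
            return W + dW

        W = np.zeros((4, 4))
        act_plus = np.array([1.0, 0.0, 1.0, 1.0])    # clamped phase (assumed)
        act_minus = np.array([0.8, 0.2, 0.6, 0.9])   # free phase (assumed)
        print(np.round(chl_update(W, act_plus, act_minus), 3))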

%TI A Unified Model of Preference and Recovery Mechanisms in Human Parsing
%AU Suzanne Stevenson
%PU Proc. CogSci-94, pp. 824-829
%SC Monday, August 15, 11-12:30
%AB Models of human parsing typically focus on explaining syntactic
    preferences and garden-path phenomena.  This paper explores another
    aspect of the processing of syntactic ambiguity---the successful
    revision of previously preferred structure.  In the competitive
    attachment model of parsing, a hybrid connectionist network directly
    represents the attachment structure among phrasal nodes in a parse
    tree.  A syntactic ambiguity leads to a network of alternative
    attachments that compete for numeric activation.  The winning
    attachments are determined within a parallel operation that
    simultaneously revises earlier attachments as needed when initially
    attaching a new phrase to the developing parse tree.  Because of the
    unique parallel structuring operation, the competitive attachment
    model provides a unified explanation of human preference and
    recovery mechanisms in parsing.  The paper demonstrates this ability
    by showing how the model accounts for recency effects in human
    syntactic processing.  In the parsing network, a mechanism of decay,
    which is independently needed to manage the finite pool of
    processing nodes, allows more recent phrases to compete more
    effectively than less recent phrases for new attachments.  The
    effect of decay on the attachment competition underlies a unified
    account of psycholinguistic observations of recency, both in initial
    syntactic preferences and in the revision of erroneous attachments.

%TI PCLEARN: A model for learning perceptual-chunks
%AU Masaki Suwa
%AU Hiroshi Motoda
%PU Proc. CogSci-94, pp. 830-835
%SC Monday, August 15, 2-3:30
%AB Past research in cognitive science reveals that prototypical
    configurations of domain objects, called perceptual-chunks, underlie
    the abilities of experts to solve problems efficiently.  Little
    research, however, has been carried out on the mechanism used for
    learning perceptual-chunks from solving problems.  The present paper
    addresses this issue in the domain of geometry proof
    problem-solving.  We have developed a computational model that
    chunks, from problem diagrams, configurations of elements that
    are visually grouped together, based on a perceptual chunking
    criterion.  This criterion, expressed as a set of recognition rules,
    reflects how people see problem diagrams and thus works effectively
    to determine which portions of a problem diagram are more likely to
    be grouped as a chunk.  This distinguishes the proposed method from
    the goal-oriented chunking techniques used in the machine-learning
    community.  Experiments on solving geometry problems show that our
    technique can detect essential diagram configurations common to many
    problems.  Additionally, implications of the recognition rules are
    discussed from a cognitive point of view.

%TI Toward A Theoretical Account of Strategy Use and Sense-Making in Mathematics Problem Solving
%AU Hermina J.M. Tabachneck
%AU Kenneth R. Koedinger
%AU Mitchell J. Nathan
%PU Proc. CogSci-94, pp. 836-841
%SC Sunday, August 14, 11-12:30
%AB Much problem solving and learning research in math and science has
    focused on formal representations.  Recently researchers have documented
    the use of unschooled strategies for solving daily problems -- informal
    strategies which can be as effective, and sometimes as sophisticated, as
    school-taught formalisms.  Our research focuses on how formal and
    informal strategies interact in the process of doing and learning
    mathematics.  We found that combining informal and formal strategies is
    more effective than single strategies.  We provide a theoretical account
    of this multiple strategy effect and have begun to formulate this theory
    in an ACT-R computer model.  We show why students may reach common
    impasses in the use of written algebra, and how subsequent or concurrent
    use of informal strategies leads to better problem-solving performance.
    Formal strategies facilitate computation because of their abstract and
    syntactic nature; however, abstraction can lead to nonsensical
    interpretations and conceptual errors.  Reapplying the formal strategy
    will not repair such errors; switching to an informal one may.  We
    explain the multiple strategy effect as a complementary relationship
    between the computational efficiency of formal strategies and the
    sense-making function of informal strategies.

%TI How Does an Expert Use a Graph? A Model of Visual and Verbal Inferencing in Economics
%AU Hermina J.M. Tabachneck
%AU Anthony M. Leonardo
%AU Herbert A. Simon
%PU Proc. CogSci-94, pp. 842-847
%SC Sunday, August 14, 4-5:30
%AB This research aims to clarify, by constructing and testing a computer
    simulation, the use of multiple representations in problem solving,
    focusing on the role of visual representations. We model the behavior of
    an economics expert as he teaches some economics principles while drawing
    a graph on a blackboard. Concurrent verbal protocols are used to guide
    construction of a production system. The model employs
    representation-specific data structures and rules. The graph on the
    blackboard is represented by a bit map; the pictorial working memory (WM)
    and long term memory (LTM) representations are node-link structures of a
    pictorial nature; the auditory WM and LTM representations are node-link
    structures of a verbal-semantic nature. Pieces from the different
    representations are linked together on a sequential and temporary basis
    to form a reasoning and inferencing chain, using cues from LTM and from
    the external graph. The expert used two representations so as to exploit
    the unique advantages of each. The graphical representation served as a
    place holder during reasoning, as well as a summary. The verbal-semantic
    representation served to give semantic meaning and causal background.
    Both could initiate reasoning chains. We compare the expert behavior
    with novices trying to learn the same principles.

%TI A Lexical Model of Learning to Read Single Words Aloud
%AU Roman Taraban
%AU Carolyn Beth Taraban
%PU Proc. CogSci-94, pp. 848-853
%SC Monday, August 15, 4-5:30
%AB Three principles governing the operation of the lexical pathway in a
    model of reading single words aloud were applied to the question of
    learning, as measured by times to initiate correct pronunciations.
    I. At the lexical level, a target word activates a neighborhood of
    orthographically similar entries in the lexicon. II. At the phoneme
    level, the correct phonemes in the phonemic spelling of the word
    compete with the other active phonemes. III. At the naming level,
    the pronunciation is composed of a conjunction of phonemes.  These
    principles were tested using the data from a 4-year-old beginning
    reader (LT), resulting in a goodness-of-fit R2 = .44. When a rule
    pathway using grapheme-phoneme correspondences was added to the
    lexical pathway, the goodness-of-fit was comparable (R2 = .46). When
    single entries were accessed along the lexical pathway, instead of
    word neighborhoods, and grapheme-phoneme correspondences were
    accessed along the rule pathway, as in standard dual-route models,
    the goodness-of-fit R2 fell to .27.  Although the model-fitting
    supported the importance of neighborhood activation and failed to
    support the importance of rules, grapheme-phoneme correspondences
    were overtly used by LT in the initial trials with words and when
    feedback indicated an errorful pronunciation.  Thus, rule
    application may be relatively slow in normal fluent word naming, but
    may still play a strategic role in attempts to initially decode
    letter strings or to correct errors.
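
    One common way to compute the goodness-of-fit statistic reported
    above is sketched below; the latencies are made-up values, not LT's
    data:

        # R^2 between model-predicted and observed naming latencies.
        import numpy as np

        observed = np.array([820.0, 950.0, 700.0, 1100.0, 880.0])   # ms (assumed)
        predicted = np.array([800.0, 900.0, 760.0, 1050.0, 860.0])  # ms (assumed)

        ss_res = np.sum((observed - predicted) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        print(f"R^2 = {1.0 - ss_res / ss_tot:.2f}")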

%TI Formal Rationality and Limited Agents
%AU Jonathan King Tash
%PU Proc. CogSci-94, pp. 854-857
%SC Sunday, August 14, 4-5:30
%AB Many efforts have been made to use normative theories of rational
    decision-making, such as Bayesian decision theory, to construct and
    model agents exhibiting intelligent behavior.  In order to
    accommodate agents possessing only limited computational resources
    to apply to their decision making, however, a significant change is
    required in how the role of formal rationality is to be viewed.
    This paper argues that rationality is best seen as a property of the
    relationship between the agent and a designer.  Such a perspective
    has several consequences for the design and modelling of agents,
    bearing on assessment of rationality, induction, reactivity, and
    metalevel control.  It also illuminates several concerns put forth
    by critics of the work of the artificial intelligence community.

%TI Limiting nested beliefs in cooperative dialogue
%AU Jasper Taylor
%AU Jean Carletta
%PU Proc. CogSci-94, pp. 858-863
%SC Monday, August 15, 7:30-9
%AB Models of rationality typically rely on underlying logics that allow
    simulated agents to entertain beliefs about one another to any depth
    of nesting.  We argue that representations of individual deeply
    nested beliefs are in principle unnecessary for any cooperative
    dialogue. We describe a simulation of such dialogues in a simple
    domain, and attempt to generalize the principles of this simulation,
    first to explain features of human dialogue in this domain, then
    those of cooperative dialogues in general.  We propose that for the
    purposes of cooperative interaction, the status of all deeply-nested
    beliefs about each concept can be conjoined into a single
    represented value, which will be affected by reasoning that might be
    expected to lead to conclusions in terms of deeply-nested beliefs.
    We concede that people are capable of using individual deeply-nested
    beliefs to some degree, but such beliefs need only be handled
    explicitly in dialogues involving secrecy or deception.

%TI Functional Parts
%AU Joshua Tenenbaum 
%PU Proc. CogSci-94, pp. 864-869
%SC Monday, August 15, 7:30-9
%AB Previous work in visual cognition has extensively explored the power
    of parts-based representations of objects for recognition,
    categorization, and functional reasoning.  We propose a novel,
    parts-based representation of objects, where the parts of an object
    are found by grouping together object elements that move together
    over a set of images.  The distribution of object configurations is
    then succinctly described in terms of these functional parts and an
    orthogonal set of modal transformations of these parts.  If the
    distribution has a natural set of principal axes, the computed modes
    are stable and functionally significant.  Moreover, the
    representation is always unique and robustly computable because it
    does not rely critically on the properties of any particular element
    in any particular instance of the object.  Most importantly, the
    representation provides a set of direct cues to object functionality
    without making any assumptions about object geometry or invoking any
    high-level domain knowledge.  This robustness and functional
    transparency may be contrasted with standard representations based
    on geometric parts, such as generalized cylinders (Marr and
    Nishihara, 1978) or geons (Biederman, 1987), which are sensitive to
    accidental alignments and occlusions (Biederman, 1987), and which
    only support functional reasoning in conjunction with high-level
    domain knowledge (Tversky and Hemenway, 1984).
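
    The two steps described above can be caricatured in a few lines:
    group elements whose positions are correlated over images, then take
    the principal modes of the configuration distribution.  The toy
    trajectories below are assumptions, and PCA stands in for the modal
    analysis:

        # Functional parts from common motion, plus modes of variation.
        import numpy as np

        # positions of 4 elements across 5 images (one coordinate each)
        positions = np.array([
            [0.0, 0.1, 5.0, 5.1],
            [0.2, 0.3, 5.4, 5.5],
            [0.4, 0.5, 5.0, 5.1],
            [0.1, 0.2, 5.6, 5.7],
            [0.3, 0.4, 5.2, 5.3],
        ])

        # (1) Elements that move together are candidates for the same part.
        print(np.round(np.corrcoef(positions.T), 2))

        # (2) Principal modes of the configuration distribution.
        centered = positions - positions.mean(axis=0)
        _, s, modes = np.linalg.svd(centered, full_matrices=False)
        print(np.round(modes[0], 2))   # dominant mode of variation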

%TI Simulated Perceptual Grouping: An Application to Human-Computer Interaction
%AU Kristinn R. Thorisson
%PU Proc. CogSci-94, pp. 876-881
%SC Monday, August 15, 11-12:30
%AB The perceptual principles that allow people to group visually
    similar objects into entities, or groups, have been called the
    Gestalt Laws of perception.  Two well known principles of perceptual
    grouping are proximity and similarity: objects that lie close
    together are perceived to fall into groups; objects of similar
    shape, size or color are more likely to form groups than objects
    differing along these dimensions.  While the primary function of
    these "laws" is to help us perceive the world, they also enter into
    our communications.  People can build on assumptions about each
    other's perception of the world as a basis for simplifying
    discourse: for example, we invariably refer to collections of
    objects simply by gesturing in their direction and uttering "those."
    The current work describes an algorithm that simulates parts of the
    visual grouping mechanism at the object level.  The system uses
    feature spaces and simple ranking methods to produce object
    groupings.  Computational aspects of this system are described in
    detail and its uses for enhancing multi-modal interfaces are
    explained.
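
    A toy version of grouping by proximity in a feature space follows;
    the coordinates and threshold are assumptions, and the actual system
    uses richer feature spaces and ranking methods:

        # Greedy single-link grouping of objects that lie close together.
        import math

        objects = {"a": (0.0, 0.0), "b": (0.5, 0.2),
                   "c": (5.0, 5.1), "d": (5.3, 4.9)}
        threshold = 1.0

        def proximity_groups(objs, threshold):
            groups = []
            for name, pos in objs.items():
                for group in groups:
                    if any(math.dist(pos, objs[m]) < threshold for m in group):
                        group.append(name)
                        break
                else:
                    groups.append([name])
            return groups

        print(proximity_groups(objects, threshold))   # [['a', 'b'], ['c', 'd']]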

%TI Handling Unanticipated Events During Collaboration
%AU Roy M. Turner
%AU Peggy S. Eaton
%PU Proc. CogSci-94, pp. 887-892
%SC Sunday, August 14, 2-3:30
%AB Handling unanticipated events during problem solving is difficult
    enough when an agent is operating by itself.  When the agent is part
    of a cooperative distributed problem solving (CDPS) system, the
    task's difficulty increases dramatically.  Now the agent is forced
    to consider the effect of the event not only on itself, but also on
    others and the group as a whole.  It must also consider who should
    handle the event and the likely impact that actions taken to
    diagnose the event or respond to it may have on other agents.  In
    this paper, we discuss preliminary work aimed at developing a
    process for handling events during multiagent cooperative problem
    solving.  The domain in which the work is being done is cooperating
    multiple autonomous underwater vehicles (AUVs).  However, the
    approach should have broader applicability to almost any real-world
    cooperative problem solving task involving autonomous or nearly
    autonomous agents.

%TI Exploiting Problem Solving to Select Information to Include in Dialogues between Cooperating Agents
%AU Elise H. Turner
%PU Proc. CogSci-94, pp. 882-886
%SC Monday, August 15, 7:30-9
%AB When agents cooperate to solve complex problems in the real-world,
    they must choose which information to communicate from the mass of
    information that might affect the problem.  A speaker should
    communicate the information that will be most helpful to the other
    agent.  However, the speaker may not have a great deal of knowledge
    about the other.  In addition, the speaker is also involved in
    reasoning about the collaborative problem solving task.  So,
    processing that is done solely to select information will be taken
    from the resources available to work on the primary problem.  In
    this paper, we present preliminary work on a new approach to
    selecting information that should be included in a dialogue.  Our
    approach uses the speaker's knowledge of its own problem solving to
    determine how useful some piece of information might be to other
    agents.  Consequently, the speaker can make its decision to include
    information in the dialogue using no additional knowledge and few
    additional computational resources beyond those required to reason
    about the primary problem solving task.  We suggest heuristics which
    translate problem solving into estimates of how useful information
    will be for others.

%TI STEPS: A Preliminary Model of Learning from a Tutor
%AU Sigalit Ur
%AU Kurt VanLehn
%PU Proc. CogSci-94, pp. 893-898
%SC Monday, August 15, 7:30-9
%AB This paper describes a prototype of a simulated physics student that
    learns by interacting with a human tutor. The system solves physics
    problems while showing its work on a workstation screen, and the
    tutor can intervene at certain points during problem-solving to
    advise the simulated student.  This prototype constitutes an initial
    cognitive task analysis of the skill of learning from a tutor, which
    prescribes several tutoring practices that appear to be plausible
    for both human and computer tutors.

%TI Belief Modelling, Intentionality and Perlocution in Metaphor Comprehension
%AU Tony Veale
%AU Mark T. Keane
%PU Proc. CogSci-94, pp. 910-915
%SC Tuesday, August 16, 11-12:30
%AB Metaphor is an elegant, concise, often startling communicative form
    which is employed by a speaker as a means of conveying a state of
    affairs to a hearer; as such, it deserves to be analysed as a
    speech-act, with a particular illocutionary intent and
    perlocutionary effect. This paper describes a hybrid
    symbolic/connectionist model of metaphor (SAPPER by Veale & Keane,
    1993), which incorporates elements of the belief ascription model of
    Wilks, Barnden & Wang (1991). This extended framework provides a
    suitable computational environment for analysing the illocutionary
    intent of the speaker, and perlocutionary effect upon the hearer's
    belief space, of a broad class of metaphors with an observable
    ameliorative/pejorative connotation.

%TI Goal Specificity in Hypothesis Testing and Problem Solving  
%AU Regina Vollmeyer 
%AU Keith J. Holyoak 
%AU Bruce D. Burns
%PU Proc. CogSci-94, pp. 916-921
%SC Monday, August 15, 2-3:30
%AB Theories of skill acquisition have made radically different
    predictions about the role of means-ends analysis in acquiring
    general rules that promote effective transfer to new problems.
    Under one view, means-ends analysis is assumed to provide the basis
    for efficient knowledge compilation (Anderson, 1987), whereas under
    the alternative view means-ends analysis is believed to disrupt rule
    induction (Sweller, 1988).  We suggest that in the absence of a
    specific goal people are more likely to use a rule-induction
    learning strategy, whereas providing a specific goal fosters use of
    means-ends analysis, which is a non-rule-induction strategy.  We
    performed an experiment to investigate the impact of goal
    specificity and systematicity of rule-induction strategies in
    learning and transfer within a complex dynamic system.  Subjects who
    were provided with a specific goal were able to solve the initial
    problem, but were impaired on a transfer test using a similar
    problem with a different goal, relative to subjects who were
    encouraged to use a systematic rule-induction strategy to freely
    explore the problem space. Our results support Sweller's proposal
    that means-ends analysis leads to specific knowledge of an isolated
    solution path, but does not provide an effective method for learning
    the overall structure of a problem space.

%TI Computing Goal Locations from Place Codes
%AU Hank S. Wan
%AU David S. Touretzky
%AU A. David Redish
%PU Proc. CogSci-94, pp. 922-927
%SC Monday, August 15, 2-3:30
%AB A model based on coupled mechanisms for place recognition, path
    integration, and maintenance of head direction in rodents replicates
    a variety of neurophysiological and behavioral data.  Here we
    consider a task described in [Collett et al. 86] in which gerbils
    were trained to find food equidistant from three identical landmarks
    arranged in an equilateral triangle.  In probe trials with various
    manipulations of the landmark array, the model produces behaviors
    similar to those of the animals.  We discuss computer simulations
    and an implementation of portions of the model on a mobile robot.

%TI Verb Inflections in German Child Language: A Connectionist Account
%AU Gert Westermann
%AU Risto Miikkulainen                
%PU Proc. CogSci-94, pp. 928-933
%SC Monday, August 15, 4-5:30
%AB The emerging function of verb inflections in German language acquisition
    is modeled with a connectionist network. A network that is initially
    presented only with a semantic representation of sentences uses the
    inflectional verb ending -t to mark those sentences that are low in
    transitivity, whereas all other verb endings occur randomly.  This
    behavior matches an early stage in German language acquisition where verb
    endings encode a similar semantic rather than a grammatical function.
    When information about the surface structure of the sentence is added to
    the input data, the network learns to use the correct verb inflections in
    a process very similar to children's learning.  This second phase is
    facilitated by the semantic phase, suggesting that there is no shift from
    semantic to grammatical encoding, but rather an extension of the initial
    semantic encoding to include grammatical information.  This can be seen
    as evidence for the strong version of the functionalist hypothesis of
    language acquisition.

%TI Analogical Transfer Through Comprehension and Priming
%AU Charles M. Wharton
%AU Trent E. Lange
%PU Proc. CogSci-94, pp. 934-939
%SC Sunday, August 14, 2-3:30
%AB An unexplored means by which analogical transfer might take place is
    through indirect priming through the interaction of text
    comprehension and memory retrieval processes. REMIND is a structured
    spreading- activation model of language understanding and reminding
    in which simple transfer can result from indirect priming from
    previously processed source analogs. This paper describes two
    experiments based on REMIND's priming-based transfer framework. In
    Experiment 1, subjects (1) summarized analogous source stories'
    common plot; (2) rated the comprehensibility of targets related to
    sources by similar themes, contexts, or themes and contexts; then
    (3) described any sources incidentally recalled during target
    rating. Source/target similarity influenced comprehensibility and
    reminding without any explicit mapping or problem-solving. In
    Experiment 2, subjects (1) rated each story's comprehensibility in
    source/target pairs having similar relationships to each other as in
    Experiment 1; then (2) rated source/target similarity. Analogous
    targets were rated as more comprehensible than non-analogous
    targets.  Both experiments imply that transfer can be caused by
    activation of abstract knowledge representations without explicit
    mapping.

%TI Explaining Serendipitous Recognition in Design
%AU Linda M. Wills 
%AU Janet L. Kolodner
%PU Proc. CogSci-94, pp. 940 
%SC Monday, August 15, 7:30-9
%AB Creative designers often see solutions to pending design problems in the
    everyday objects surrounding them.  This can often lead to innovation and
    insight, sometimes revealing new functions and purposes for common design
    pieces in the process.  We are interested in modeling serendipitous
    recognition of solutions to pending problems in the context of creative
    mechanical design.  This paper characterizes this ability, analyzing
    observations we have made of it, and placing it in the context of other
    forms of recognition.  We propose a computational model to capture and
    explore serendipitous recognition which is based on ideas from
    reconstructive dynamic memory and situation assessment in case-based
    reasoning.

%TI Towards a Principled Representation of Discourse Plans
%AU R. Michael Young
%AU Johanna D. Moore
%AU Martha E. Pollack
%PU Proc. CogSci-94
%SC Monday, August 15, 7:30-9
%AB We argue that discourse plans must capture the intended causal and
    decompositional relations between communicative actions.  We present
    a planning algorithm, DPOCL, that builds plan structures that
    properly capture these relations, and show how these structures are
    used to solve the problems that plagued previous discourse planners,
    and allow a system to participate effectively and flexibly in an
    ongoing dialogue.

%TI The Representation of Relational Information
%AU Jiajie Zhang
%AU Donald A. Norman
%PU Proc. CogSci-94
%SC Monday, August 15, 7:30-9
%AB Most graphic and tabular displays are relational information
    displays--displays that represent relational information, which is a
    relation on a set of dimensions.  In this paper, we argue that
    relational information displays are distributed representations --
    representations that are distributed across the internal mind and the
    external environment, and display-based tasks are distributed
    cognitive tasks--tasks that require the interwoven processing of
    internal and external information.  The basic components of
    relational information displays are dimensions.  Through a
    theoretical analysis of dimensional representations, we identified
    four major factors that affect the representational efficiencies of
    relational information displays: the distributed representation of
    scale information, the relation between psychological and physical
    measurements, the interaction between dimensions, and the visual and
    spatial properties of dimensions.  Based on the representational
    analysis of relational information displays, we proposed a
    representational taxonomy of relational information displays.  This
    taxonomy can be used to classify most types of relational
    information displays.  In addition, it can be used as a theoretical
    framework to study the empirical issues of relational information
    displays in a systematic way.

%TI Segmenting Speech without a Lexicon: Evidence for a Bootstrapping Model of Lexical Acquisition
%AU Timothy A. Cartwright
%AU Michael R. Brent
%PU Proc. CogSci-94, pp. 148-152
%SC Monday, August 15, 4-5:30
%AB Infants face the difficult problem of segmenting continuous speech
    into words without the benefit of a fully developed lexicon.
    Several information sources in speech---prosody, semantic
    correlations, phonotactics, and so on---might help infants solve
    this problem. Research to date has focused on determining to which
    of these information sources infants might be sensitive, but little
    work has been done to determine the usefulness of each source. The
    computer simulations reported here are a first attempt to measure
    the usefulness of distributional and phonotactic information in
    adult- and child-directed speech. The simulations hypothesize
    segmentations of speech into words; the best segmentation hypothesis
    is selected using the Minimum Description Length paradigm. Our
    results indicate that while there is some useful information in both
    phoneme distributions and phonotactic rules, the combination of both
    sources is most useful. Further, this combination of information
    sources is more useful for segmenting child-directed speech than
    adult-directed speech. The implications of these results for
    theories of lexical acquisition are discussed.
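
    The Minimum Description Length idea invoked above can be caricatured
    as follows: a segmentation is scored by the cost of spelling out its
    lexicon plus the cost of encoding every word token with that lexicon.
    The corpus and the cost function below are deliberate simplifications
    of the paper's formulation:

        # Compare candidate segmentations of a toy unsegmented corpus.
        import math

        corpus = ["thedogsawthecat", "thecatsawthedog", "thedogsawthebird",
                  "thebirdsawthecat", "thecatsawthebird", "thebirdsawthedog"]

        def description_length(segmented):
            tokens = [w for utt in segmented for w in utt]
            lexicon = set(tokens)
            lexicon_cost = sum(len(w) + 1 for w in lexicon)        # spell lexicon
            code_cost = len(tokens) * math.log2(len(lexicon) + 1)  # index tokens
            return lexicon_cost + code_cost

        words = [["the", "dog", "saw", "the", "cat"],
                 ["the", "cat", "saw", "the", "dog"],
                 ["the", "dog", "saw", "the", "bird"],
                 ["the", "bird", "saw", "the", "cat"],
                 ["the", "cat", "saw", "the", "bird"],
                 ["the", "bird", "saw", "the", "dog"]]
        whole_utterances = [[utt] for utt in corpus]
        letters = [list(utt) for utt in corpus]

        for name, seg in [("words", words), ("utterances", whole_utterances),
                          ("letters", letters)]:
            assert ["".join(u) for u in seg] == corpus
            print(name, round(description_length(seg), 1))
        # The word-level segmentation gets the shortest description.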

%TI The Effect of Syntactic Form on Simple Belief Revisions and Updates  
%AU Renee Elio
%AU Francis Jeffry Pelletier
%PU Proc. CogSci-94, pp. 260-265
%SC Tuesday, August 16, 11-12:30
%AB In this paper we report preliminary results on how people revise or
    update a previously held set of beliefs.  When intelligent agents
    learn new things which conflict with their current belief set, they
    must revise their belief set.  When the new information does not
    conflict, they merely must update their belief set.  Various AI
    theories have been proposed to achieve these processes.  There are
    two general dimensions along which these theories differ: whether
    they are syntactic-based or model-based, and what constitutes a
    minimal change of beliefs.  This study investigates how people
    update and revise semantically equivalent but syntactically distinct
    belief sets, both in symbolic-logic problems and in quasi-real-world
    problems.  Results indicate that syntactic form affects belief
    revision choices.  In addition, for the symbolic problems, subjects
    update and revise semantically-equivalent belief sets identically,
    whereas for the quasi-real-world problems they both update and
    revise differently.  Further, contrary to earlier studies, subjects
    are sometimes reluctant to accept that a sentence changes from false
    to true, but they are willing to accept that it would change from
    true to false.

%TI Distributional Bootstrapping: From Word Class to Proto-Sentence
%AU S. Finch
%AU N. Chater 
%PU Proc. CogSci-94, pp. 301-306
%SC Monday, August 15, 4-5:30

%TI Scientific Discovery in a Space of Structural Models: An Example from the History of Solution Chemistry
%AU Adrian Gordon
%AU Peter Edwards
%AU Derek Sleeman
%AU Yves Kodratoff
%PU Proc. CogSci-94, pp. 381-386
%SC Monday, August 15, 7:30-9 
%AB Much previous work in developing computational models of scientific
    discovery has concentrated on the formation of basic laws. The
    important role played by additional assumptions in this process is a
    neglected research topic. We argue that hypotheses about structure
    are an important source of such additional assumptions, and that
    knowledge of this type can be embodied in the notion of Informal
    Qualitative Models (IQMs). In this paper, we demonstrate that such
    models can be synthesised by applying a set of operators to the most
    fundamental model in a domain. Heuristics are employed to control
    this process, which forms the basis of an architecture for
    model-driven scientific discovery. Conventional data-driven
    discovery techniques can be integrated into this architecture,
    resulting in laws which depend crucially on the model that is
    applied to a problem. This approach is illustrated by an historical
    survey of eighteenth and nineteenth century solution chemistry,
    which focuses on the evolution of the models employed by
    scientists. A series of models are synthesised which reflect these
    historical developments, showing the importance of structural models
    both in understanding certain aspects of the scientific discovery
    process, and as a basis for practical discovery systems.

%TI The Origin of Clusters in Recurrent Neural Network State Space
%AU J.F. Kolen
%PU Proc. CogSci-94, pp. 508-513
%SC Monday, August 15, 7:30-9

%TI Categorization, Typicality, and Shape Similarity
%AU M.A. Kurbat
%AU E.E. Smith
%AU D.L. Medin
%PU Proc. CogSci-94, pp. 520-524
%SC Sunday, August 14, 11-12:30

%TI Variation in Unconscious Lexical Processing: Education and Experience Make a Difference
%AU G. Libben
%AU L. Sveinson
%PU Proc. CogSci-94, pp. 566-571
%SC Monday, August 15, 7:30-9

%TI Situated Cognition:  Empirical Issue, "Paradigm Shift" or Conceptual Confusion?
%AU P. Slezak
%PU Proc. CogSci-94, pp. 806-811
%SC Sunday, August 14, 4-5:30