Classroom 2000 Project Proposal

Can Personal Interfaces Enhance The Classroom?

Gregory Abowd, Chris Atkeson, and Terry Harpold

INTRODUCTION

What will be the impact on the classroom when every student brings a personal digital assistant (PDA) or notebook computer to class? These computers will have wired and wireless connections to the campus network, allowing them to be used for communication as well as computation. Georgia Tech is well positioned to bring the classroom to the student, rather than the other way around, and it is important that we do so now in order to determine the impact of this technology on education. Will these enviable resources be used to entertain students during dull lectures, to enhance current teaching approaches, or to drive an evolution of new approaches to learning? Will we have to ask the students to put the machines away in order to preserve group interaction? Can we use these resources to enhance interaction? Will we still lecture to students, or will new forms of pedagogy evolve? Will we have formal class meetings at all, or can we use enhanced email and newsgroups to mediate some of the classroom interaction?

GOALS

Assuming each student has a personal networked interface during and after class, our goal is to address the following questions:
  1. CAN WE USE THE PERSONAL INTERFACE TO ALLOW REAL TIME ACCESS TO COURSE MATERIALS?

    How can we use rich hypertext representations before, during and after class? Can a student usefully access readings, lecture notes generated by the instructor, lecture notes taken by an individual student, lecture notes taken by all the students, and/or video of classroom lectures and interactions during a discussion? Can the student get the desired information efficiently? To what extent can the classroom interaction be extended beyond formal class hours by such a rich representation? What types of interaction can be facilitated if a rich representation of the course exists?

  2. CAN WE USE THE PERSONAL INTERFACE TO ALLOW REAL TIME GROUP COMMUNICATION AND CONTROL?

    Current interactions often have a control token (a piece of chalk or a pen) that indicates who can update a representation owned by the group, such as a blackboard. What happens when an electronic blackboard is distributed across large displays and the screens of the personal interfaces, and each student can update the group representation at any time? How can group input be combined to form meaningful action? What group input should be sought? What will happen to traditional (Socratic) methods of teacher-student conversation when those conversations can always be interrupted by (variably long) periods of delay? How can (and should) the teacher maintain meaningful direction over the discussion among her students, when anyone in the group might be free to take over the control token?

METHODS/OVERALL APPROACH

Our first step is to develop an experimental facility. We will take advantage of the OIT electronic classrooms being provided through FutureNet, if possible. There are also plans within LCC for a significant expansion of undergraduate digital classrooms, and we will use these if and when appropriate. Our experimental facility will have several large displays and an electronic whiteboard (a Xerox PARC LiveBoard, for example). We will purchase PDAs or notebook computers and network them using a wired network to each chair (we may replace the wired network with a wireless network in the future). These resources will allow us to explore question 2, on group communication and control. How should we distribute the display across the multiple hardware devices? What arbitration is useful for the multiple input devices? What paradigms or models will help the users deal with this rich interaction medium?
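One candidate arbitration policy for the shared whiteboard is a single floor-control token with a first-come, first-served request queue, which the teacher can preempt to steer the discussion. The sketch below is purely illustrative (all class and method names are our own, not part of any existing system), and is only one of the many policies we intend to explore:

```python
from collections import deque

class FloorControl:
    """Single control token for a shared whiteboard, with a FIFO
    request queue and a teacher override (one illustrative policy)."""

    def __init__(self, teacher):
        self.teacher = teacher
        self.holder = None       # who may currently write
        self.queue = deque()     # pending requests, in arrival order

    def request(self, user):
        """Grant the token if free; otherwise queue the request."""
        if self.holder is None:
            self.holder = user
        elif user != self.holder and user not in self.queue:
            self.queue.append(user)
        return self.holder

    def release(self, user):
        """Holder gives up the token; pass it to the next in line."""
        if user == self.holder:
            self.holder = self.queue.popleft() if self.queue else None
        return self.holder

    def preempt(self):
        """Teacher reclaims the token; the displaced holder rejoins
        the front of the queue."""
        if self.holder is not None and self.holder != self.teacher:
            self.queue.appendleft(self.holder)
        self.holder = self.teacher
        return self.holder
```

Even this trivial policy raises the pedagogic questions posed above: a strict queue preserves fairness but may stall a fast-moving discussion, while teacher preemption preserves direction at the cost of student spontaneity.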

We will also need electronic course materials. Abowd already has all the materials for an introductory course on Human-Computer Interaction available on the Web, as part of the teacher's packet for a book he coauthored on HCI for Prentice Hall. Through the multimedia courseware project, Guzdial, Stasko and Foley are producing other electronic repositories for various classes in the College of Computing. A recent project supervised by Abowd has produced a syllabus generation tool for Web-based course materials, which provides a more flexible interface to this repository of teaching materials. This summer, that tool will be rebuilt and significantly enhanced to handle other teaching materials, specifically indexed videotapes of lectures. We plan to test the HCI material in Winter 1996, when Abowd next teaches a graduate introductory course on Human-Computer Interaction. We want to see how an active browser in the hands of every student will facilitate notetaking and communication among project groups. Since it will be a class on HCI, the experience will also teach the students a great deal about the effect that an interface has on the work we perform. They can read about that in a book, but they will understand it better when they experience it first hand. This body of hypertext will be the beginning of an exploration of rich representations (question 1) supporting asynchronous interaction. We will also examine how the availability of this rich representation helps or hinders the actual pedagogic style for such a lecture- and project-based course.

We will instrument the classroom with multiple video cameras that can track and film a lecturer, capture what is written on a board, and also film student questions and comments. Building on Colin Potts's Mercury project, we will implement sufficient voice recognition to create a transcript of the lecture, along with illustrations based on what was drawn or written on the board. This transcript would index into the video stream, so that selecting a point in the transcript could select the corresponding video sequence (and vice versa). We also intend to link the transcript with previous lectures and readings, to produce an "instant" multimedia textbook. These resources will be incorporated in our explorations of question 1.
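The transcript-to-video indexing described above can be sketched as a time-ordered list of transcript segments, each stamped with the video time at which it begins, searched with binary search in both directions. This is a minimal illustration under our own assumptions (segment granularity, a single video stream); all names are hypothetical:

```python
import bisect

class TranscriptIndex:
    """Bidirectional index between a lecture transcript and its video:
    each transcript segment records the video time at which it begins."""

    def __init__(self):
        self.times = []     # segment start times (seconds), sorted
        self.segments = []  # transcript text for each segment

    def add_segment(self, start_time, text):
        """Append a segment; recording proceeds in time order."""
        self.times.append(start_time)
        self.segments.append(text)

    def video_time_for(self, segment_index):
        """Selecting a point in the transcript -> video position."""
        return self.times[segment_index]

    def segment_at(self, video_time):
        """Selecting a point in the video -> transcript segment."""
        i = bisect.bisect_right(self.times, video_time) - 1
        return self.segments[max(i, 0)]
```

The same time-stamped structure generalizes to the board captures: anything recorded with a lecture-clock timestamp can be cross-linked through it.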

As part of the instrumented classroom, students will use their personal interfaces to take notes that are synchronized to the multimedia record described above. These notes could also be linked, in a content-based way, to previous lectures, other courses the student has taken, and readings. To what extent should these individual representations be shared, or merged into a common group representation? How should the group representations be incorporated in the individual representation? These issues are relevant to question 1.
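Synchronization could work by stamping each note with its offset on a shared lecture clock, so a note later resolves to the corresponding point in the multimedia record, and two students' notes can be interleaved into one group view. A minimal sketch, assuming a shared clock origin (all names here are illustrative):

```python
import time

class SyncedNotebook:
    """Student notes stamped with lecture-clock offsets, so each note
    can later be resolved against the shared multimedia record."""

    def __init__(self, lecture_start):
        self.lecture_start = lecture_start   # shared clock origin (epoch seconds)
        self.notes = []                      # (offset_seconds, text)

    def take_note(self, text, now=None):
        """Record a note at the current (or a supplied) wall-clock time."""
        now = time.time() if now is None else now
        self.notes.append((now - self.lecture_start, text))

    def merge(self, other):
        """One possible group representation: interleave two students'
        notes in lecture-time order."""
        return sorted(self.notes + other.notes)
```

Whether such a merged view should be the default, an opt-in, or mediated by the instructor is exactly the sharing question raised above.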

PLANS FOR FUTURE FUNDING

CURRENT ACTIVITIES

Abowd has produced a large electronic repository of teaching material for an introductory course on Human-Computer Interaction together with tools to generate and manipulate Web-based access to this information. This summer, students will work on revamping this suite of tools to run under HotJava to enable increased interaction.

Abowd has recently started a long-term interaction with the Satellite Communications Division of Motorola investigating software architectures for global information systems that will integrate wireless paging and communications technology with Internet-like information infrastructures.

Atkeson and Abowd are organizing a workshop sponsored by the NSF to explore the overlap between robotics, sensing, and ubiquitous/embedded/mobile computing.

Abowd and Atkeson have organized a discussion group on campus to focus on the development of future computing environments, with particular interest in education, intelligent mobile devices and extensions of Web technology. This group has already resulted in stronger connections to the FutureNet project and advanced interdisciplinary design activities with the College of Architecture and the Manufacturing Research Center. A recent grant from the College of Computing has funded a project this summer involving intelligent mobile devices, and the result of that work will enable us to provide the correct kind of device to students in Abowd's HCI class this winter.

Harpold is developing undergraduate and graduate courses in digital fiction that could draw on this digital classroom environment to explore fictional texts through collaborative reading and writing (asynchronously and synchronously). Students will weave their critical contributions into a corpus (centered on canonical digital fictions and interactive games) that evolves to reflect the shifting intentions of the course over multiple successive quarters.

PLANS

We plan to apply to the NSF for funding on the basis of innovative approaches to education.

We plan to apply to ARPA, ONR, and other military funding agencies interested in more effective military training.

Harpold is working with a group of colleagues in LCC specializing in performance theory, drama and film, to develop curricula and research projects that focus on the opportunities that asynchronous, distributed environments present for the study of aesthetic performance in multiple media.

We intend to pursue equipment donations from IBM (notebook computers), Motorola (PDAs and communication equipment), Apple (PDAs), Hewlett-Packard (digital tablets) and other relevant companies.

BUDGET

We will require a Graduate Research Assistant from either the College of Computing or LCC through the Interactive Design Technology program. The GRA will be responsible for assisting us in developing the technology for the experimental HCI class, as well as for determining how we will assess the digital classroom's effectiveness. Harpold has already identified several potential candidates from LCC, and a number of graduate students attending the Future Computing Environments discussion group have expressed an interest in this work. Finding a suitable student to work on this project will not be difficult.

We would like to fund a Graduate Research Assistant 1/2 time for the period July 1, 1995 to June 30, 1996. In the College of Computing, this would cost $16,000 for salary and $1670 for computing charges, a total of $17,670 for direct costs.