Teaching and Learning as
Multimedia Authoring:
The Classroom 2000 Project

Gregory D. Abowd, Christopher G. Atkeson, Ami Feinstein, Cindy Hmelo,
Rob Kooper, Sue Long, Nitin "Nick" Sawhney & Mikiya Tani

GVU Center, College of Computing,
EduTech Institute & Office of Information Technology
Georgia Institute of Technology, Atlanta, GA, USA
NEC Kansai C&C Research Laboratory, Osaka, JAPAN


Abstract:

We view college classroom teaching and learning as a multimedia authoring activity. The classroom provides a rich setting in which a number of different forms of communication co-exist, such as speech, writing and projected images. Currently, much of the information in a lecture is poorly recorded or lost. Our hypothesis is that tools to aid in the capture and subsequent access of classroom information will enhance both the learning and teaching experience. To test that hypothesis, we initiated the Classroom 2000 project at Georgia Tech. The purpose of the project is to apply ubiquitous computing technology to facilitate automatic capture, integration and access of multimedia information in the educational setting of the university classroom. In this paper, we discuss various prototype tools we have created and used in a variety of courses and provide an initial evaluation of the acceptance and effectiveness of the technology. We also share some lessons learned in applying ubiquitous computing technology in a real setting.



Keywords:

Educational technology, ubiquitous computing, pen-based computing, audio/video capture, media integration.


1 Introduction:

One way to view classroom teaching and learning is as a group multimedia authoring activity. Before class, teachers prepare outlines, slides, or notes and students read textbooks or other assigned readings. During the lecture, the words and actions of the teacher and students expound and clarify the lessons underlying the prepared materials. It is common practice to annotate the prepared material during the lecture and to create new material as notes on a whiteboard or in a student notebook. These different forms of material -- printed, written and spoken -- are all related to the learning experience that defines a particular course, and yet there are virtually no facilities provided to automatically record and preserve the relationships between them. Applying computing technology in the classroom setting to support the classroom's group multimedia authoring and review experience should lead to an enhanced teaching and learning experience.

To test this hypothesis, we initiated the Classroom 2000 project at Georgia Tech. The Classroom 2000 project is applying a variety of ubiquitous computing technologies -- electronic whiteboards, personal pen-based interfaces, and the World-Wide Web -- together with software to facilitate automatic capture and content-based access of multimedia information in the educational setting of the university classroom. The goal of the project is to evaluate and understand the effect of ubiquitous computing on the educational experience, both in terms of how it improves current practice and suggests new forms of education. This paper reports on our initial experiences developing and using a number of classroom prototypes.

Successful uses of ubiquitous technology will fundamentally alter some forms of education, but in ways that are hard to predict. To begin to explore the effects of such technology, our approach is to introduce novel technologies gradually into the traditional classroom lecture setting and see what happens. We do not intend to replace the traditional lecture-based style of pedagogy, at least not initially. However, much of the information in a lecture is inefficiently recorded or lost, and we can use simple capturing techniques to improve the situation. The result is a more complete record of what occurred during the lecture in a form that can be reviewed more easily.


Overview:

In Section 2, we provide a brief overview of related research in classroom technology, ubiquitous computing, and automated capture to support access and recall. In Section 3, we outline four different teaching and learning styles that different prototypes of Classroom 2000 are designed to support. Section 4 describes the common architectural framework used to develop various Classroom 2000 prototypes, with several examples from the actual prototypes provided. In Section 5, we summarize some initial studies of student reactions to the use of this technology in the classroom and we provide some insights on lessons we have learned in applying ubiquitous technology to a real life situation. We conclude in Section 6 with a discussion of how we have progressed toward our goal of introducing ubiquitous computing into the classroom and observing its effect on the educational experience.


2 Related Work:

Shneiderman et al. [13] discuss the effects of introducing technology into the classroom in terms of the paradigm shifts that result. All of the existing systems they discuss, and all of the attempts we know of, have one common feature that we are trying to avoid. Technology in the hands of the student usually translates into a workstation at each desk. This approach is fine, even necessary, for classes that involve computer-based activities (such as programming). We want to investigate the usefulness of alternative interfaces that are less intrusive and allow natural handwritten note-taking, such as pen-based laptops, PDAs, tablets, or palmtop PCs.

Our work has been greatly influenced by the work at Xerox PARC in ubiquitous computing [18, 19] and tools to support electronic capture and access of collaborative activities [9, 10]. We wanted to capture information provided by the teacher during a lecture, so electronic whiteboard capabilities provided by the Xerox LiveWorks LiveBoard [4] are inviting. We also wanted to provide the students with an electronic notebook with the capability to take notes during the class that could be the basis for review after class. The Marquee note-taking prototype developed at PARC [17] and the Filochat prototype developed at Hewlett-Packard Labs [20] both came close to what we wanted to have in the hands of the students. Marquee provided a simple mechanism for producing notes with a pen-based interface that also created automatic indexing into a video stream. Filochat used a pen-based PC to capture electronic annotations that served as indices into a digital audio stream. We have also investigated paper-based solutions to note-taking, similar to the work done by Stifelman [14]. The implicit connection between the note-taking device and alternate information streams (audio and/or video) is a common theme that has also been explored at MIT's Media Lab [7] and at Apple [3].

With the availability of ubiquitous information technologies, such as the World-Wide Web, most universities are able to provide students with access to vast repositories of educational materials. It is quickly becoming the norm for individual courses at many universities to have their own Web page that serves as a central clearinghouse for all course documentation. While this use of the Web has some obvious advantages for both instructor and student, it does not take an active role in assisting learning and teaching. We wanted to view the whole classroom experience as a multimedia Web authoring task and provide ways to capture and relate information before, during and after an actual classroom session. A serious design decision for the Classroom 2000 prototypes, and one that prevented the use of some existing solutions, was to make all information accessible via the Web. The hardships incurred by this decision were far outweighed by the ease of cross-platform distribution, a must for our student population. This more active use of the Web infrastructure is in tune with some recent applications of WWW technology in education [8, 12, 5]. Our major contribution beyond this existing work is the concentration on in-class capture of information that is to be augmented through coordination with other classroom information via the Web.

Several of the research prototypes cited above have been subjected to some form of evaluation to determine both usability and usefulness. The two most substantial evaluation studies have been conducted at PARC and Hewlett-Packard. For a two-year period at PARC, a suite of tools for capture, automated indexing and integration, and access [9] was used to support a process of intellectual property management [10]. At Hewlett-Packard, the Filochat system was evaluated in field and laboratory studies, generating both qualitative information on the reaction of users to the technology and also quantitative information comparing the accuracy, efficiency and confidence of Filochat with paper-based and Dictaphone systems [20]. The initial evaluation we report in this paper is based on a 10-week experiment in a live classroom and provides both quantitative and qualitative evaluation of student reaction to the technology. This evaluation is more of a feasibility study than a proper usability or utility study as was done at PARC.

Many researchers investigate the effect of technology in education. There is an important distinction for research in this area, based on whether the research is focused on education or on technology. We have taken a technology focus in our work so far, as evidenced by the way we describe our work and evaluate its impact. Before we can honestly assess the educational impact of ubiquitous computing technology, we must first develop robust, though not necessarily perfect, prototypes that have been tested in real environments. This is a serious challenge, especially when dealing with off-the-shelf ubiquitous computing technology that is anything but robust. Our technology-driven approach to educational technology contrasts with a more education-driven focus, as demonstrated by Wan and Johnson [16] or by Guzdial et al. [6], in which the purpose of the research is to understand and inform theories on learning. In the wider arena of educational technology, there must be both forms of research.


3 Supporting Multiple Teaching and Learning Styles:

A long-range goal of Classroom 2000 is to provide augmented classroom support for all courses at a university such as Georgia Tech, so we must be able to support many different teaching and learning styles.


3.1 Teaching styles:

We have examined several common teaching styles with respect to the challenges they present for multimedia capture and access. One of the main distinguishing features of a teaching style is the form of materials, if any, that are made available to students, either before or after the lecture. If material is made public, then we can augment it (using audio or video links, for example) with information captured during the lecture. If no material is made public, then we can only augment information that is produced as part of the lecture itself. The styles we have so far identified are:

Presentation
The teacher prepares a set of slides (the presentation) before class, and the lecture proceeds similar to a prepared talk at a conference. The slides are displayed during the lecture, and copies of the slides are available to students before or after the lecture. During the lecture, the teacher may make annotations on the slides to emphasize or clarify certain points.
Public notes
The teacher prepares the content of the lecture before class, but in the form of a paper or set of organized notes. These notes are available to the students before or after the lecture. The teacher lectures to the class, using the notes as a guide, and may also use a whiteboard to write down certain points. It is the difference in the format between the discrete slides of the presentation style and the continuous notes in this style that impacts the capture and access problem.
Private notes
The teacher prepares only a private set of notes as a means to prompt the lecture, but this material is not made available to the students at any point.
Discussion
The three previous styles emphasize a didactic approach to the lecture in which the teacher is the principal speaker, interrupted occasionally by questions or comments from the students. In this style, the classroom session is more of a discussion in which all participants contribute more or less equally to the speaking. There may be a publicly available agenda for the class discussion that serves to highlight the topics that will be discussed.

We do not suggest that this is a complete categorization of teaching styles, and we also recognize that some teachers may choose to combine teaching styles within a course or even within a single class session. Attempting to provide support for each of these teaching styles in simultaneously developed prototypes allows us the opportunity to identify general features of an ideal system that can support all classes.


3.2 Learning styles:

Just as teachers have different styles for teaching, so too students have different styles for learning. Rather than address these different general learning styles, we focused on one of the primary student activities -- recording information. We identified several different recording, or note-taking, styles that a student could employ, each distinguished by the amount of recording that goes on.

Verbatim
The student acts like a court stenographer, busily writing down as much of what they experience from the class as possible.

Highlighting
The student writes down only key parts of what is said in class.

None
The student writes nothing, relying on memory or provided materials as the only written record of the classroom experience.

Different teaching styles provide more or less support for the various recording styles. For example, the highlighting student probably receives good support in a presentation style lecture in which they can annotate their own copy of the slides during the lecture.


4 A Common Framework:

With this crude understanding of the expected population of users for Classroom 2000 prototypes, we attempted to construct prototypes of systems to support different teaching styles for classes that we were currently teaching. To date, we have supported three different courses within the College of Computing at Georgia Tech. Details on each of these classes are summarized in Table 1.

We built three separate prototypes to suit different teaching styles and to allow us to experiment with different technology in the hands of the students and teacher. One prototype used Apple MessagePads for note-taking and another used pen-based PCs. One class used an electronic whiteboard (the LiveBoard) while others simply used a projector attached to a workstation that the instructor used to display slides or notes.

To control the engineering problem of designing and maintaining distinct prototypes, we devised an overall common architecture or organization that each prototype would obey. The inspiration for this common architecture came from the movie industry, in which the development of a single movie is divided into three distinct phases -- pre-production, live recording and post-production. Table 1 summarizes the main differences between the prototypes by the activities and technology used in the various phases of production. We will now describe what each of these phases means in the context of our development.

 


HCI (Human-Computer Interaction)
  Teaching style: presentation
  Enrollment: 25 grad students
  Live recording (teacher): ClassPad on LiveBoard captured navigation and annotation (Fig. 1)
  Live recording (students): ClassPad on pen-based PC captured navigation and annotation (Fig. 1)
  Live recording (classroom): single digital audio stream recording
  Post-production: log file, annotated slides and keyword text used by PERL script to create audio-enhanced, searchable Web notes (Fig. 2)

AI (Artificial Intelligence)
  Teaching style: public notes
  Enrollment: 60 undergrad CS majors
  Live recording (teacher): LCD projector to display Web notes; no capture
  Live recording (students): paper notes; no capture
  Live recording (classroom): single analog audio-video stream recording
  Post-production: audio and video links added to HTML notes manually (Fig. 3); video digitized to QuickTime segments

FCE (Future Computing Environments)
  Teaching style: discussion
  Enrollment: 15 grad students
  Live recording (teacher): LCD projector to display outline and Web pages; no capture
  Live recording (students): outline annotator on MessagePad to capture outline entry notes (right side of Fig. 1)
  Live recording (classroom): single analog audio-video stream recording
  Post-production: PERL script transforms Newton data into audio-enhanced outline with notes (Fig. 3)

Table 1: Summary of technology used by phase (pre-production, live recording and post-production) in three Classroom 2000 prototypes. In all cases, the products of post-production were a collection of Web pages with various external helper applications to hear audio and play video.


4.1 Pre-production phase:

We assume that the teacher does some preparation for each lecture. The purpose of the pre-production phase is to transform this prepared material into the desired form for use during the lecture. Any prepared materials that the teacher wishes to display during the lecture or make available to students in class must be transformed into a format that can reside on the available technology (e.g., the electronic whiteboard or the student's electronic notebook).

We currently support only the presentation of static information within the lecture; that is, we support lectures built around writing on a whiteboard or displaying overhead transparencies or slides. We do not support the presentation of videos or other dynamic information, such as a computer simulation. Support for dynamic information is a future consideration.

Teachers are already overworked, so we were very concerned with minimizing the impact the technology had on the preparation of lecture materials. Most teachers already have some form of lecture material that they will want to reuse. The more we required a lecturer to recreate that information, the less likely they were to adopt the Classroom 2000 technology. To minimize preparation effort, we adopted several strategies.

For one class, Human-Computer Interaction (HCI), a presentation-style class, we adopted PostScript as the universal representation for material prepared by the teacher. We used public domain filtering programs to transform the PostScript file into whatever image format was necessary for use in the class. In another class, an introduction to Artificial Intelligence (AI), a public-notes-style class, the notes were in the form of a LaTeX document and were to be presented in class and made available to students via a Web browser. We used existing filters to convert the LaTeX source to HTML. The third class was a seminar on Future Computing Environments (FCE). Since it was a discussion-style class, a template file was provided for discussion leaders to prepare an agenda for the class. The completed template was then automatically converted into a format (Newton Book) for our own Apple MessagePad note-taking program.
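As an illustration, the dispatch between these three conversion routes can be sketched as follows. The pipeline names and the ".agenda" extension are hypothetical placeholders for this sketch, not the actual filters we invoked:

```python
import os

def choose_pipeline(source_path):
    """Map a teacher's source material to a conversion pipeline.

    The three routes mirror the courses above: PostScript slides (HCI),
    LaTeX notes (AI), and an agenda template converted to Newton Book
    format (FCE). The pipeline names and the ".agenda" extension are
    illustrative inventions, not the tools actually used.
    """
    ext = os.path.splitext(source_path)[1].lower()
    if ext == ".ps":
        return "postscript-to-slide-images"  # filtered into per-slide images
    if ext == ".tex":
        return "latex-to-html"               # converted for Web display
    if ext == ".agenda":
        return "template-to-newton-book"     # built into a MessagePad outline
    raise ValueError("unsupported lecture material: %s" % source_path)
```

The key design point is that each course keeps its native authoring format; only a filtering step changes per course.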

 

 



Figure 1: Two versions of the student note-taking interface. On the left side is the interface for ClassPad, a Visual Basic note-taking prototype. This same interface served as the interface for the teacher's electronic whiteboard. On the right side is the interface for a student-only outline-oriented note-taking application that runs on the Apple MessagePad.


4.2 Live recording phase:

The live recording phase consists of the actual classroom lecture. The most important task to support in this phase is the capture of all relevant activity that will be useful for later review. We initially identified several relevant streams of activity in the classroom: the prepared material displayed by the teacher, public annotations made during the lecture, private notes taken by the students, and the audio and video of the lecture itself. To make the classroom an easy-to-use multimedia authoring environment, it must be equipped to capture these different streams of information without imposing extraneous work on the part of the participants; that is, the capture of information must result as a natural by-product of normal classroom interaction.

We used cameras and microphones placed at various locations around the room to capture one or more video and audio streams, all of which were eventually digitized. This recording required no extra effort on the part of the participants.


4.2.1 Tools for the teacher:

Most classrooms provide a blackboard, whiteboard or overhead projector for public viewing of information provided by the teacher. Replacing these presentation technologies with a computerized display makes capture relatively simple. We used a high-quality Xerox LiveWorks LiveBoard, a PC with a pen-sensitive, 67-inch diagonal screen. There are other less costly solutions, such as a pen-based computer attached to an LCD projector, or even projecting onto an upright digitizing tablet surface. Once computerized, the public display can be instrumented via software to log the time at which significant interactions occur.

None of the existing applications we had available for the LiveBoard allowed us to easily log pen events and convert the resulting annotated slides into a form (e.g., GIF) that was easily displayed on all Web browsers. Other similar commercial products suffered the same limitation. As a result, we had to write our own electronic whiteboard application, a Visual Basic prototype called ClassPad, whose interface is shown on the left side of Figure 1. We used ClassPad in the presentation-style HCI course to present prepared slides and allow for public annotation by the teacher. ClassPad preserves all annotations made to a series of prepared slides. In addition, ClassPad creates a time-stamped log of when the user navigates between slides and when each slide was annotated with the pen (defined as a pen-down followed by a pen-up sequence). This captured information is used in the post-production phase described in the next section.
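A minimal sketch of this logging behavior, written in Python rather than Visual Basic and using an assumed in-memory event format, might look like:

```python
import time

class ClassPadLog:
    """Sketch of ClassPad-style event logging (assumed format).

    Two kinds of events are recorded: navigating to a slide and
    completing an annotation, where one pen-down/pen-up pair counts as
    a single annotation stamped with its pen-down time.
    """

    def __init__(self, clock=time.time):
        self.clock = clock          # injectable clock, e.g. for testing
        self.events = []            # (kind, slide_no, timestamp) tuples
        self._pen_down_at = None

    def goto_slide(self, slide_no):
        self.events.append(("goto", slide_no, self.clock()))

    def pen_down(self):
        self._pen_down_at = self.clock()

    def pen_up(self, slide_no):
        # a pen-down followed by this pen-up is one annotation event
        if self._pen_down_at is not None:
            self.events.append(("annotate", slide_no, self._pen_down_at))
            self._pen_down_at = None
```

Because every event carries a timestamp, the post-production phase can later align each slide visit or annotation with the recorded audio stream.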

We developed ClassPad for a presentation teaching style, but it is also appropriate for private notes or discussion-style classes in which the teacher simply wants a blank surface upon which to write. In fact, several lectures in the HCI class used the private notes style. However, ClassPad was not appropriate for the public notes teaching style of the AI class, because the notes were displayed as Web pages. There is a serious registration problem that must be solved by the developer of the capture tool in order to provide persistent pen annotations for a markup language such as HTML. The rendered image (including the HTML text and any associated pen annotations) depends on characteristics of the bounding window, so it is much more difficult to register the pen annotation with the underlying text. Because of this difficulty, we did not capture pen annotations in the AI class. The only captured data in that class were the audio and video streams.

 

 



Figure 2: Frame-based Web presentation of lecture notes with annotations and audio links. This figure shows the notes made by the teacher for an actual lecture in the HCI class. Student notes are similar in appearance. The top frame is a sequence of thumbnail images of all slides for that lecture. The user selects one thumbnail image, and the full-sized image is shown in the lower right frame. The lower left frame contains a list of keywords associated with the slide (none shown here), the automatically-generated audio links representing each time the slide was entered during the lecture (one time in this example), and a link to a form that allows keyword searching across all slides for the entire course.


4.2.2 Tools for the student:

Support for the capture of student notes is also important. We made a conscious decision in all prototypes not to provide a keyboard interface for note-taking. Despite the advantages of keyboard input, such as faster input rates, increased legibility and easier search capabilities, we did not want the distraction in class of a cacophony of keyclicks. All note-taking, therefore, was done either with traditional pen and paper or with pen-based computing technology.

We also used ClassPad on pen-based PCs, resulting in the electronic student notebook. This version of the electronic notebook was suitable for both the verbatim and highlighting modes of note-taking, but it was designed with the highlighting student note-taker in mind. The student would see the same information that the teacher put up on the electronic whiteboard and could annotate it with personal comments to make certain points clearer, as shown on the left side of Figure 1. Students could flip through the class notes the same way the lecturer did (though the units were not synchronized, so the student was free to browse the slides at their own pace) and write whatever notes they wanted on top of the slides. The navigation between slides and student annotations were logged by ClassPad.

We also investigated the use of smaller PDA-style electronic notebooks, such as the Apple MessagePad. The MessagePad's resolution made it infeasible to use the same philosophy of note-taking used in ClassPad. The prepared slide images would not have been legible, and there would have been little space on the screen for taking notes. Instead, the MessagePad note-taking application, shown on the right side of Figure 1, provides an outline of the lecture. A time-stamped note (called a "slide" in the actual interface in Figure 1) is associated with each entry in the outline. Touching the entry with the pen causes a note to appear (there is one note available per outline entry) and the student then writes in the note. This outline note-taker logs the first time each note was revealed.
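The first-reveal logging of the outline note-taker can be sketched as follows; the data structures are assumed for illustration, not taken from the actual Newton application:

```python
import time

class OutlineLog:
    """Sketch of the MessagePad outline note-taker's log (assumed format).

    Only the first time each outline entry's note is revealed is
    recorded; later visits to the same note leave the log unchanged.
    """

    def __init__(self, clock=time.time):
        self.clock = clock              # injectable clock, e.g. for testing
        self.first_reveal = {}          # outline entry id -> first reveal time

    def reveal(self, entry_id):
        # record only the first reveal of each outline entry
        if entry_id not in self.first_reveal:
            self.first_reveal[entry_id] = self.clock()
```

Keeping only the first reveal time suits the outline style of note-taking: the moment a student first opens a note is the best available index into the surrounding discussion.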

 

 



Figure 3: Two further examples of audio- and/or video-enhanced Web pages. On the left side is an example of automatically-generated audio links for a discussion-style class using the Apple MessagePad outline annotating application. On the right side is an example of manually-generated audio and video links for a public-notes-style lecture.


4.3 Post-production phase:

Once the lecture is completed, we enter the post-production phase. The purpose of post-production is to support the student and teacher in reviewing material across all lectures for a given course. Our goal for large-scale content generation is to automate the creation or augmentation of content. Audio and video records of what happened in the classroom should be automatically linked to the appropriate points in the prepared material. Public annotations by the teacher and private notes of the student should also be automatically integrated with the other streams of information to facilitate later review. All of this integration of various media streams should be enabled by the captured information from the live recording phase.

Recall that the ClassPad application generates a log of when the teacher or student advances from one slide to the next and when an annotation was made. When reconstructing the annotated views for later review, these logged events are used to tie the static information (prepared slides with student/teacher annotations) to the audio or video stream associated with that class. Figure 2 shows an example of an automatically-generated Web presentation from a single lecture in the HCI class with audio-enhanced links. The top frame shows thumbnail sketches of all slides from the lecture. The selected thumbnail image is magnified in the lower right frame. The lower left frame is divided into three main sections: a keywords section shows words associated with the slide to facilitate a content-based search; an audio section lists automatically-generated audio links indicating times in the lecture when that slide was visited; and a search link provides access to a search form for simple keyword search across all lecture notes. When an audio link is selected, an audio client is launched and begins playing the recorded lecture from that point in the lecture. We built our own streaming, indexable audio server and client players for this purpose.
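As an illustrative sketch, assuming a log of (event, slide, timestamp) tuples as a hypothetical stand-in for the ClassPad log file, the core of this linking step reduces each logged slide entry to an offset into the audio recording:

```python
def audio_links_for_slide(log_events, slide_no, recording_start):
    """Compute audio-link offsets for one slide from a navigation log.

    Assumes log_events is a list of (kind, slide_no, unix_time) tuples.
    Every "goto" event for the slide yields one link, whose offset is
    the entry time relative to the start of the audio recording, so
    the player can begin playback at the moment the slide appeared.
    """
    return [t - recording_start
            for kind, slide, t in log_events
            if kind == "goto" and slide == slide_no]
```

Each resulting offset becomes one audio link in the generated Web page; selecting it starts the streaming audio client at that position in the lecture.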

The static nature of slides in the presentation teaching style makes it easy to automatically generate audio links. For other teaching styles, it is not always a simple matter to attach the audio links to parts of the prepared material (see the discussion of the registration problem in Section 4.2). On the left side of Figure 3 is another example of an automatically generated Web page containing audio links, generated from output in the discussion-style FCE seminar using the Apple MessagePad as the note-taking device. The Web-accessible notes show the prepared outline augmented with notes inserted at the right location. Selecting a note launches the audio player at the point in the discussion at which the note was initially generated. It is possible to hide and reveal these annotations, so that the original discussion outline can be seen alone, if desired.

We did not have tools to automatically generate audio- or video-enhanced review material for the public-notes-style AI course. Instead, audio and video links were generated manually from the videotaped lecture and the analog video was digitized into a single audio file and segments of QuickTime video. On the right side of Figure 3 is an example of a lecture with audio (marked with an "A") and video (marked with a "V") links manually added. It is an interesting research question to ask how recorded information from the lecture (e.g., gestures gleaned from the video recording, segmenting the audio) can be processed to determine when audio links should be created and how they can meaningfully be attached to the material [2].


5 Experience and Initial Evaluation:

One of the major contributions of this work is its application in a large-scale educational setting. Table 1 summarizes the variety of experiences we have had so far in applying Classroom 2000 technology. Up to this point in the paper, we have focused exclusively on what technology we introduced and how we built the various prototypes. The rest of the paper now focuses on an evaluation of our experience with using this technology.


5.1 Objective evaluation:

We have had the most experience operating the Classroom 2000 prototype that was used in a graduate HCI course, and the objective and qualitative results of the next two subsections refer to that class. The 10-week class met twice a week for 90-minute lectures [1]. There were 25 graduate students in the class representing a wide variety of disciplines across the Georgia Tech campus. We were unable to supply enough units to provide each student with a pen-based electronic notebook. The number of units varied throughout the course from a minimum of 6 to a maximum of 12 working units. The technology in the class was phased in incrementally, beginning with the LiveBoard, followed by audio recording and, finally, student electronic notebooks. By the third week of class, students were taking electronic notes. Four students were selected (from among 8 volunteers) to take notes using ClassPad on the electronic notebook for the remainder of the quarter, which consisted of 10 lectures. Four students chose not to take notes electronically for the entire class. The remaining students used the other units on a first-come, first-served basis, averaging 2.9 times each.

Students kept a journal of their notes and reactions to the technology throughout the course. At the end of the course, 24 of the 25 students filled out a questionnaire that investigated their reactions to the use of the technology in the class. The objective questions asked about overall impressions of the use of technology in the class and then specifically about the ClassPad note-taking application and the use of the LiveBoard with Web-based review notes. These questions were rated on the following scale: 1 (strongly disagree); 2 (disagree); 3 (neutral); 4 (agree); and 5 (strongly agree). We asked questions about overall impression, ease of use, whether the technology made aspects of the class more effective, how the technology affected class participation and whether the technology contributed to learning the particular subject matter of the course (in this case HCI). Table 2 summarizes the results of the objective portion of the questionnaire.

  

Topic (# of questions)                        Avg. (sigma)
O  Was desirable technology (11)              3.67 (0.98)
   Was easy to use (2)                        3.02 (1.23)
   Increased effectiveness of class (9)       3.62 (0.99)
   Improved class participation (2)           3.40 (0.88)
   Contributed to learning subject (2)        3.94 (0.86)
N  Was desirable technology (1)               3.13 (1.03)
   Was easy to use (3)                        3.13 (1.14)
   Increased effectiveness of class (1)       2.88 (0.90)
   Helped me take fewer notes (2)             2.85 (0.87)
L  Was desirable technology (2)               3.87 (0.82)
   Was easy to use (3)                        3.68 (1.09)
   Increased effectiveness of class (1)       3.29 (1.04)
   Helped me take fewer notes (2)             2.88 (1.00)

Table 2: Reaction of students to overall technology (O), electronic notebook (N) and LiveBoard with Web notes (L).

The results show an overall positive reaction to the prototype. The strongest positive reaction is in how the prototype was perceived to contribute to learning the particular subject matter, and this is not surprising. The course was on HCI and the students were themselves experiencing a new interface in the classroom. In addition, the project work was based on developing and evaluating ideas for new Classroom 2000 prototypes, and the students appreciated the authenticity of redesigning a system they were currently using.

One of the initial goals of Classroom 2000 was to examine the effect of personal interfaces in the classroom. Our initial observations show that the students were most negative toward the personal electronic notebooks (see the next section for qualitative justification), while the LiveBoard and Web notes together were the technology students found most desirable.


5.2 Qualitative evaluation:

The post-course questionnaire also provided an opportunity for the students to provide more detailed reactions. Students provided details on their positive and negative reactions to all aspects of the technology.

Use of the LiveBoard for several in-class usability evaluation exercises and for the group presentations at the end of the class was very popular. Both of these activities involved more than just the teacher interacting with the LiveBoard. In contrast, many students did not feel the LiveBoard was any better than an overhead projector or chalkboard when used exclusively by the teacher in lecture mode.

The majority of the electronic notebooks used in the class were palmtop PCs (specifically, Dauphin DTR-1s with a 7-inch diagonal screen) and while the students found them good for drawing pictures, in general the screens were too small. In addition, the response time of the units was slow relative to the LiveBoard. This made it difficult for students to navigate between slides as easily as the teacher. Also, a number of the palmtops were unreliable machines that would crash during lectures.

Students found the Web-based review notes interesting in their novelty and useful for examining their own notes and the teacher's notes. Several students found the notes very useful on the occasions when they missed class or did not pay close enough attention to some point during class. Despite the relative ubiquity of Web browsers on campus and in student rooms, several students still desired to have a printed copy of their notes because then they would be easier to carry around and easier to review. The Web notes were not always quickly accessible (especially over telephone lines) and sometimes hard to read.

We were unable to keep logs of use of the audio server, so we cannot give a quantitative indication of its use, but we have determined that audio annotations were not used very much. Only 4 of the students in the class noted in their journals that they had made consistent use of the audio features in more than just full playback mode. There were two reasons for the overall lack of use of the audio. First, students did not have regular access to the correct platform for listening to the audio, and we were unable at that time to provide a cross-platform audio player. Second, setting up the audio service was too cumbersome for some students, so they did not bother. Despite this minimal usage, several students who did manage to use the audio found it particularly useful for clarifying their own notes.

Of particular interest is how electronic notebooks promote different and possibly more effective note-taking strategies. An overview of the electronic notes taken by students reveals that initially most students would write quite a bit on the electronic slide, even if what was written was exactly what the teacher was writing on the LiveBoard. When questioned about this afterwards, several students who used the electronic notebook throughout the class noted that they felt their note-taking became more economical as the course progressed. This is in spite of an apparent lack of use of the audio features. Upon further investigation, these students revealed that even without audio services, merely having the teacher's notes available after class saved them from the sometimes mundane task of copying. One student in the class, however, stated a preference for writing down everything himself, even if what he wrote was identical to the teacher's notes that he could obtain later. This again points out the importance of recognizing different learning styles and supporting as many as possible. We are in the process of completing a more thorough quantitative analysis of the student notes and will report on those findings later.

All Web pages produced for this class (shown in Figure 2) were publicly viewable, a conscious choice of the students in the class. Students were unaware that they could edit their own Web notes, and we had not provided any easy way to do the editing. Several students commented on this shortcoming of the static review notes, remarking that it was their habit to revise and rewrite their notes. This represents both a technical and social failure of the prototype to support long-term use of class notes. We are now concentrating on providing a better interface for revising notes.

From the instructor's perspective, there were several advantages and disadvantages. The ClassPad application running on the LiveBoard was easy to use and was responsive enough to allow for a natural level of interaction. All of the lecture material for this class was available from a previous section of the course, but as the quarter progressed, it was judged necessary to modify the format of the slides. At the request of several people, the slides were redone to increase the amount of whitespace available for making annotations. The ClassPad logging was effective, even though we had intended to provide per-annotation audio links instead of per-slide links. One drawback of our system, however, was the requirement that the teacher load all slides to be visited during one lecture prior to the beginning of the lecture. This seemingly simple requirement caused a problem twice in the course when the teacher wanted to refer to a slide from a previous lecture but was unable to do so because the capture semantics of ClassPad would not have correctly logged the remainder of the class.
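The per-slide audio linking that ClassPad's logging supported can be illustrated with a short sketch. The log format, file names, and URL scheme below are our own illustrative assumptions, not the actual ClassPad implementation:

```python
# Sketch of per-slide audio linking in post-production.  The log format,
# file names and URL scheme here are illustrative assumptions, not the
# actual ClassPad implementation.

def read_slide_log(log_lines):
    """Parse 'slide_id <TAB> seconds-into-lecture' visit records."""
    visits = []
    for line in log_lines:
        slide_id, offset = line.strip().split("\t")
        visits.append((slide_id, float(offset)))
    return visits

def make_web_notes(visits, audio_url):
    """Emit one HTML fragment per slide visit, linking into the audio."""
    parts = []
    for slide_id, offset in visits:
        parts.append(
            '<p><img src="%s.gif">'
            ' <a href="%s?start=%d">[hear this slide]</a></p>'
            % (slide_id, audio_url, int(offset)))
    return "\n".join(parts)

log = ["slide01\t0.0", "slide02\t312.5", "slide03\t845.0"]
print(make_web_notes(read_slide_log(log), "lecture.au"))
```

Because each slide visit carries only a single timestamp, the generated links are necessarily per-slide rather than per-annotation, which is exactly the granularity limitation noted above.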

One pleasant surprise came the first time the ClassPad application running on the LiveBoard misbehaved in class and would not load the slides for the lecture. The lecture proceeded more in the private notes style. In time, we began to appreciate ClassPad as being well-suited to the private notes style of teaching and we plan to take advantage of that in the future to attract other teachers into our experiments.


5.3 Evaluation of AI and FCE classes:

The prototype used in the HCI class was relatively stable before that course began. This was not the case for the other two courses (AI and FCE); both prototypes changed quite a bit during the quarter, and we were unable to collect much quantitative evaluation information. In the AI class, the only capture that occurred during the live recording phase was the video and audio recording. The manual post-production activity of augmenting the HTML notes with audio and video links was so time-consuming that after the fourth lecture we were no longer able to devote the resources to continue the service. We now see the advantage of having tools that use natural actions of the teacher to automate the audio and video augmentation of Web pages, as described in [2, 9]. Such actions could include long pauses in the teacher's speech or mouse movements that draw attention to some area on the page.

In the FCE seminar, we both videotaped and used the MessagePad outline-annotator. We provided a template file to prepare the outline of the class discussion; this was a useful service for the presenters. The student note-takers found it easy to understand how the outliner application worked, but did not find it all that useful to attach notes to the outline. The main problem was that discussion in the class did not follow the outline very closely. When the note-taker wanted to jot down a thought, it was hard to determine which entry in the outline to choose for annotation. Sometimes, the choice of entry was entirely arbitrary, and the resulting enhanced Web page looked somewhat confusing. As the quarter progressed, we altered the application by removing the outline and allowing the user to bring up blank, time-stamped note pages for writing whatever came to mind. This simpler interface, similar in spirit to Stifelman's audio notebook [14] was much less confusing to the user, but was only used in the class a few times.


5.4 Developer insights:

Our year of experience developing a number of Classroom 2000 prototypes has resulted in many valuable lessons and insights into ubiquitous computing and its application in an educational setting. We share some of the more significant ones here.


5.4.1 Go live:

As we mentioned in the introduction, the best way to understand the effect of ubiquitous technology in our everyday lives is to experience it. This is similar to the Moran et al. notion of evolutionary engagement, in which the evolution of a tool is informed by its early adoption in a real-life context [10]. Our constant drive to go live with various prototypes was the single greatest challenge we faced, and it comes with some risk given our educational focus. It is not an easy decision to experiment with education, as failure can have dire consequences. We are a relatively low-budget project, which forced us to purchase affordable pen-based technology that was neither ideal nor robust. The development of pre-production, live recording and post-production tools was not difficult work when compared to the maintenance and everyday operation of the student notebooks. With the advent of a second generation of pen-based computing, we hope this situation will improve without significant cost increases.

One great advantage of our prototyping approach has been suggestions from the users. We received many suggestions from students for possible extensions of Classroom 2000 that we did not initially consider. Some of the more promising suggestions are:


5.4.2 Importance of architecture:

It might appear that we are trying to take on too much in this project in our attempts to support such a wide variety of teaching and note-taking styles, but critical architectural decisions made very early on in the project have allowed us to experiment more broadly. The simple division into phases of pre-production, live recording and post-production has been critical to our ability to field several very different prototypes. This has allowed us to support a variety of preparation tools used by teachers (ranging from LaTeX to more WYSIWYG document processing tools), different in-class presentation tools (Web browsers, PostScript previewers, and the ClassPad application) and different post-production tools to automatically or manually generate audio- and video-enhanced notes. Developing tools that served activities in different phases of the project enabled concurrent development and is now allowing us to modify and enhance certain features of the system with minimal impact elsewhere.

This architectural division is not ideal, however. One drawback has been a limited interpretation of what occurs in post-production. Up to now, post-production has simply meant the generation of media-integrated notes based on multiple streams of information captured during the live recording phase. It has not included support for the user in accessing and modifying those notes. Work at Xerox PARC has identified tools to support this separate access phase [9] and we would be wise in the future to focus more effort there as well.


5.4.3 Access is critical:

In order to evaluate the effectiveness of Classroom 2000 technology, we have to have some guarantee that it is being used. Students and teachers all need to be able to access the products of post-production. Since students have access to a variety of platforms in university labs, offices, and homes, cross-platform support is critical. This issue has not been addressed by other researchers working in this area, either because they were able to control the computing environment of the users or because the users had a limited computing platform. The emergence of the Web has made it possible to realize a limited cross-platform distribution for some media types. Students noted that access to the Web notes was easy, the only problem being some delays over traditional phone connections. Unfortunately, at the time we began development of our prototypes, this cross-platform support did not include streaming, indexable audio. We built our own UNIX-based audio service, including a client player and central server. This worked extremely well on laboratory workstations, but was virtually inaccessible to students working at home on non-UNIX platforms. As a result, we had relatively limited use of the audio. Again, we are fortunate that cross-platform support for indexed streaming audio is now commercially available; we are transitioning away from our own tools for that service.


5.4.4 Note-taking as image annotation is limited:

In many classroom settings, teachers provide prepared slides or notes to the students before a lecture, and many students consider this to be an advantage. In addition, many of the other tasks we perform in our daily lives are annotation tasks. Drafts of a paper are frequently annotated by reviewers, and a teacher corrects a student's report by annotating it. Support for annotation is useful for the classroom and other settings.

Annotation is simplified when the underlying image -- such as a slide used in a presentation-style lecture -- does not change. This approach does not work for all lecture styles, however. It would be better to have prepared material for presentations that differed in form or content from the material used for review. The material for presentation must be readable when projected and fairly terse, as reading lots of text off a wall display is not effective. The material for review should be more like a user-modifiable textbook, suitable for a personal display and containing more explanations.


5.4.5 Using more than audio:

We believe that video provides added value in addition to audio. Our presenters use pronouns and point to the class a lot, saying ``you'' do this and ``they'' do that, for example. The audio track alone is difficult to interpret if the gestures are not visible. It is also much easier during review to follow the flow of the lecture (understand that the presenter is suddenly responding to a question, for example) if a view of the teacher is present.

We are aiming to be able to replay the entire lecture experience, including multiple video views and all student interactions with their computers in class. We need to handle richer media sources at a finer level of granularity. For example, the student should be able to ask during review, ``What was the lecturer saying when I wrote this?'' while pointing to some arbitrary annotation, as in [10, 14, 20]. Or the student might want to find the notes associated with a live demonstration that occurred at some point in the class. The solution we have produced for indexing and reviewing an audio stream for the class is immediately transferable to video, keyboard and mouse events, and pen strokes. A constraint at the moment is efficient storage and delivery of the richer media types.
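The timestamp lookup this kind of question requires can be sketched simply: every captured pen stroke carries its offset into the lecture, and review playback becomes a search plus a small rewind margin. The function names and data layout here are hypothetical:

```python
# Sketch of "What was the lecturer saying when I wrote this?"  Each pen
# stroke is stamped with its offset (seconds into the lecture), so review
# is a lookup plus a rewind margin.  Names and formats are hypothetical.

import bisect

def audio_position(stroke_offsets, stroke_index, rewind=10.0):
    """Start playback a few seconds before the chosen stroke was written."""
    return max(0.0, stroke_offsets[stroke_index] - rewind)

def stroke_near(stroke_offsets, t):
    """Index of the stroke written closest to lecture time t."""
    i = bisect.bisect_left(stroke_offsets, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stroke_offsets)]
    return min(candidates, key=lambda j: abs(stroke_offsets[j] - t))

offsets = [12.0, 95.5, 340.0, 1210.0]      # stroke times, sorted
print(audio_position(offsets, 2))          # play from 330.0
print(stroke_near(offsets, 100.0))         # stroke 1 is nearest to t=100
```

The same sorted-offset lookup applies unchanged to video frames, keyboard and mouse events, or any other timestamped stream, which is why the audio indexing solution transfers so directly to the richer media types.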


5.4.6 The value of pen interfaces:

We have debated the merits of pen interfaces, and this paper has focused on the use of pen-based interfaces to the exclusion of other input mechanisms. There are advantages to using paper and pen, or even a keyboard, for student notes instead of an electronic device. Paper is familiar, cheaper, more robust, and has much higher resolution than the current generation of pen-based computers. Conventions taught to note-takers can support time-stamping, but not without introducing an added burden and the chance for errors in the capture. This shortcoming could be removed through vision technology, such as that demonstrated by the DigitalDesk [11], or by anchoring the paper on a digitizing tablet, similar to the prototype developed by Stifelman [14].

Some students noted that they can type faster than they can write and that typed-in information is immediately available to content-based search mechanisms. We stayed away from keyboards because we felt the constant tapping of keys would be a distraction in class. We also feel that the purpose of Classroom 2000 is not to enable students to take more notes, but rather to help them become more efficient note-takers.

The use of an electronic whiteboard was universally favored in the classroom, and the LiveBoard provided an excellent, albeit expensive, solution. It is still too small, both in physical size and in screen resolution (VGA), to consider replacing existing whiteboards. It is roughly the size of a whiteboard in an office or small meeting room, offering about a third of the writing area of even the smallest classroom whiteboards. Our experience shows that the computational capability of the LiveBoard is useful both for encouraging group exercises in a class and for creating an accurate record of some class activity. We recommend that researchers investigate ways to provide larger-scale interactive surfaces, in terms of both display size and resolution.


6 Conclusions:

In this paper, we described the initial work in the Classroom 2000 project. We are exploring how ubiquitous computing technology can influence the way we teach and learn by empowering both students and teachers. The main theme of Classroom 2000 is to instrument the classroom and put new technology in the hands of students and teachers so that we can capture as much of the rich exchange of information as possible. The contribution of our work so far has been the development of three separate prototypes to exercise the ideas of Classroom 2000 in support of different teaching and learning styles.

Though each Classroom 2000 prototype system is different in terms of technology provided, information captured and media-integrated review materials produced, they all follow a common organizational theme. We separate the functionality of the system into three distinct phases. Pre-production activity prepares all information leading up to the classroom interaction. Live recording captures various streams of information and actions during a lecture. Post-production activity generates multimedia-enhanced Web pages for a summary of the classroom activity.

Along with developing a variety of prototypes to support different teaching and learning styles, we have had the opportunity in one case to conduct an extended evaluation of the effect of the technology on the teaching and learning experience. Though we are not yet able to provide an assessment of how Classroom 2000 enhances learning, our preliminary evaluation does reveal a favorable student impression. Most encouraging was the response toward the use of the electronic whiteboard and Web notes. Least encouraging was the response toward the personalized electronic notebooks. We understand a lot of the misgivings with our initial prototype notebooks. They were too small and too slow and left the students feeling that their notes were unavailable for revision after class. Armed with these insights, we will continue to explore this valuable avenue of research in future computing environments for education.


Acknowledgements:

Classroom 2000 has been a group development effort. The authors would like to acknowledge the support of the several members of the Future Computing Environments Group, College of Computing and the Office of Information Technology at Georgia Tech. Specifically, we thank Savita Chandran, Yusuf Goolamabbas, Dietmar Aust, Peter Freeman, and Ron Hutchins. Abowd and Atkeson would like to thank the wonderfully patient and insightful students in their respective courses for their help in providing an honest evaluation of the Classroom 2000 concept. Finally, the authors would like to thank the referees, particularly Polle Zellweger, for many constructive comments that have resulted in a much better paper.


References:

1
G. D. Abowd. CS 6751: Human-computer interaction. Home page for College of Computing, Georgia Tech introductory level graduate course. URL http://www.cc.gatech.edu/computing/classes/cs6751_96_winter, 1996.

2
B. Arons. SpeechSkimmer: Interactively skimming recorded speech. In Proceedings of the ACM UIST'93 Symposium, pages 187-196, 1993.

3
L. Degen, R. Mander, and G. Salomon. Working with audio: Integrating personal tape recorders and desktop computers. In Proceedings of ACM CHI'92 Conference, pages 413-418, May 1992.

4
S. Elrod, R. Bruce, R. Gold, D. Goldberg, F. Halasz, W. Janssen, D. Lee, K. McCall, E. Pedersen, K. Pier, J. Tang, and B. Welch. Liveboard: A large interactive display supporting group meetings, presentations and remote collaboration. In Proceedings of ACM CHI'92 Conference, pages 599-607, May 1992.

5
J. Fowler, D. Baker, R. Dargahi, V. Kouramajian, H. Gilson, K. Brook Long, C. Petermann, and G. Gorry. Experience with the virtual notebook system: Abstraction in hypertext. In Proceedings of ACM CSCW'94 Conference, pages 133-143, 1994.

6
M. Guzdial, J. Kolodner, C. Hmelo, H. Narayanan, D. Carlson, N. Rappin, R. Hübscher, and J. Turns. Computer support for learning through complex problem solving. Communications of the ACM, 39(4):43-45, April 1996.

7
D. Hindus and C. Schmandt. Ubiquitous audio: Capturing spontaneous collaboration. In Proceedings of ACM CSCW'92 Conference, pages 210-217, 1992.

8
M.-C. Lai, B.-H. Chen, and S.-M. Yuan. Toward a new educational environment. In Proceedings of WWW'4 International World Wide Web Conference, December 1995. URL http://www.w3.org/pub/Conferences/WWW4/Papers/238.

9
S. Minneman, S. Harrison, W. Janssen, G. Kurtenbach, T. Moran, I. Smith, and W. van Melle. A confederation of tools for capturing and accessing collaborative activity. In Proceedings of the ACM Multimedia'95 Conference, pages 523-534, November 1995.

10
T. Moran, P. Chiu, S. Harrison, G. Kurtenbach, S. Minneman, and W. van Melle. Evolutionary engagement in an ongoing collaborative work process: A case study. In Proceedings of ACM CSCW'96 Conference, 1996.

11
W. Newman and P. Wellner. A desk supporting computer-based interaction with paper documents. In Proceedings of ACM CHI'92 Conference, pages 587-592, May 1992.

12
U. Schroeder, B. Tritsch, and A. Knierriem-Jasnoch. A modular training system for education in the WWW environment. In Proceedings of WWW'4 International World Wide Web Conference, December 1995. URL http://www.w3.org/pub/Conferences/WWW4/Papers/306.

13
B. Shneiderman, M. Alavi, K. Norman, and E. Y. Borkowski. Windows of opportunities in electronic classrooms. Communications of the ACM, 38(11):19-24, November 1995.

14
L. J. Stifelman. Augmenting real-world objects: A paper-based audio notebook. In Proceedings of ACM CHI'96 Conference, pages 199-200, April 1996. Short paper.

15
N. Streitz, J. Geissler, J. Haake, and J. Hol. DOLPHIN: Integrated meeting support across local and remote desktop environments and LiveBoards. In Proceedings of ACM CSCW'94 Conference, pages 345-358, 1994.

16
D. Wan and P. Johnson. Computer supported collaborative learning using CLARE: the approach and experimental findings. In Proceedings of ACM CSCW'94 Conference, pages 187-198, 1994.

17
K. Weber and A. Poon. A tool for real-time video logging. In Proceedings of ACM CHI'94 Conference, pages 58-64, April 1994.

18
M. Weiser. The computer of the 21st century. Scientific American, 265(3):66-75, September 1991.

19
M. Weiser. Some computer science issues in ubiquitous computing. Communications of the ACM, 36(7):75-84, July 1993.

20
S. Whittaker, P. Hyland, and M. Wiley. Filochat: Handwritten notes provide access to recorded conversations. In Proceedings of ACM CHI'94 Conference, pages 271-277, April 1994.


About this document:

Published in the Proceedings of Multimedia '96.