
Using more than audio

We believe that video adds value beyond audio alone. Our presenters gesture toward the class frequently and rely on pronouns, saying, for example, that ``you'' do this and ``they'' do that. The audio track alone is difficult to interpret when these gestures are not visible. It is also much easier during review to follow the flow of the lecture (to understand, for example, that the presenter is suddenly responding to a question) when a view of the teacher is available.

We aim to replay the entire lecture experience, including multiple video views and all student interactions with their computers in class. This requires handling richer media sources at a finer level of granularity. For example, the student should be able to ask during review, ``What was the lecturer saying when I wrote this?'' while pointing to some arbitrary annotation, as in [10, 14, 20]. Or the student might want to find the notes associated with a live demonstration that occurred at some point in the class. The solution we have produced for indexing and reviewing an audio stream for the class is immediately transferable to video, keyboard and mouse events, and pen strokes. The current constraint is the efficient storage and delivery of these richer media types.
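The core of such an index is that every captured event, whether a pen stroke, a keystroke, or a media segment, carries a timestamp on a shared lecture clock, so the time of an annotation can be used to look up the corresponding portion of any other stream. The sketch below illustrates this idea under assumed names (MediaSegment, find_segment_at) and an assumed storage layout of timestamped segments; it is not the system's actual implementation.

  import bisect
  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class MediaSegment:
      start_ms: int   # segment start, in milliseconds from the start of the lecture
      end_ms: int     # segment end
      uri: str        # location of the stored audio, video, or pen data

  def find_segment_at(stream: List[MediaSegment], t_ms: int) -> Optional[MediaSegment]:
      """Return the segment of a timestamped stream that covers time t_ms, if any."""
      starts = [seg.start_ms for seg in stream]
      i = bisect.bisect_right(starts, t_ms) - 1
      if i >= 0 and stream[i].start_ms <= t_ms < stream[i].end_ms:
          return stream[i]
      return None

  # The timestamp of a pen annotation indexes into any stream recorded on the
  # same clock: audio, video, or keyboard and mouse events.
  audio = [MediaSegment(0, 60_000, "audio/part1.au"),
           MediaSegment(60_000, 120_000, "audio/part2.au")]
  annotation_time_ms = 75_300   # hypothetical moment when the student wrote the note
  print(find_segment_at(audio, annotation_time_ms))

Because the lookup depends only on timestamps and not on the media type, the same query works unchanged for any new stream that is captured against the shared clock.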


