Recently, there has been growing interest in using computing technologies to build systems that support our daily activities. Examples of such systems are smart rooms and homes that allow controlled access to the premises, make homes energy efficient, monitor children, and allow the elderly to remain self-sufficient. In educational settings, such intelligent systems can recognize whether students are interested or bored and respond accordingly. These systems need not be fixed to the environment; they can be mobile, even worn as part of our daily clothing. They can provide documentation and support for on-site repair technicians, memory augmentation and navigation for businesspeople, and supplement the capabilities of challenged individuals through lip-reading, sign-language recognition, translation, and visual assistance.
Research and development efforts for building such intelligent and interactive human-centric systems that support and augment our daily lives rely on the concepts of ubiquitous and aware computing. We will briefly outline these two concepts, followed by a description of our attempts to build futuristic systems at Georgia Tech.
The defining characteristic of ubiquitous computing is the attempt to break away from the traditional desktop interaction paradigm and move computational power into the environment that surrounds the user. The challenge of ubiquitous computing, however, involves not only distributing computation and networking capabilities, but also providing a natural interface to the user. Ubiquitous computing advocates a complete shift from the traditional model, where the user is forced to seek out the computer interface, to an interaction where the system itself takes on the responsibility of locating and serving the user.
Aware computing aims to provide the system with knowledge about the user and the environment that surrounds the user. Such awareness can be achieved by incorporating perceptual abilities into the environment. This form of computational perception can be used to identify users, locate them, determine their focus of attention, and attempt to ascertain their intentions, i.e., be aware.
We are interested in combining these ideas of ubiquitous and aware computing to achieve computational augmentation of our everyday activities. This coupling can be achieved by instrumenting the environment with computational power, networking capabilities, and sensor technologies. Such instrumentation can be used to capture and process audio, video, and other sensory data, and to control the input, output, and information flow in an environment. The sensor technologies, combined with distributed computation, will give the system the ability to perceive the environment. This computational perception will help identify users, determine what they are doing, and aid in predicting their needs and interests. Present developments in computational hardware, input/output devices, and sensor technologies suggest that building such environments will be a major focus of research and development in the upcoming years.
The Future Computing Environments (FCE) Group at Georgia Tech is working to build interactive environments that augment daily activity. The research method is application-oriented, meaning that we identify the everyday activity to support before considering how to augment the environment. Our mission is to identify, investigate, and invent technologies and environments that can be prototyped quickly and evaluated in real-life situations. In the past two years, the FCE group has developed a number of applications that rely on the concepts of ubiquitous and aware computing. These applications have involved three different domains:
At present Classroom 2000 provides the ability to integrate different
streams of activities together. For example, words that are written on
an electronic whiteboard are automatically linked to a digital recording
of the audio and video in the class. Further analysis of the audio and
video recordings provides for content-based understanding of the lecture.
Spreading computational services around the physical classroom environment
results in a room that is more aware of what is going on within it. When
a student wishes to review a lecture, the captured experience serves as
a more effective reminder and memory cue. Future work on the classroom
aims to build more awareness into the environment, tracking the
professor's and the students' gestures, expressions, and audio interactions.
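The stroke-to-recording linkage described above can be sketched as a simple timestamp index: each whiteboard stroke is stored with its offset into the lecture recording, so reviewing a stroke later can seek the audio/video to that moment. This is an illustrative sketch under assumed names (`CapturedLecture`, `record_stroke`, `seek_time_for`), not the actual Classroom 2000 implementation:

```python
import bisect


class CapturedLecture:
    """Links whiteboard strokes to offsets in a lecture's audio/video recording.

    Hypothetical sketch: strokes are kept sorted by the time (in seconds
    from the start of the recording) at which they were written.
    """

    def __init__(self):
        self.strokes = []  # list of (timestamp, text), kept sorted

    def record_stroke(self, text, timestamp):
        """Store a stroke together with when it was drawn."""
        bisect.insort(self.strokes, (timestamp, text))

    def seek_time_for(self, text):
        """Return the recording offset at which the given words were written,
        or None if no such stroke was captured."""
        for timestamp, stroke_text in self.strokes:
            if stroke_text == text:
                return timestamp
        return None


# Example: capture two strokes during a lecture, then review one later.
lecture = CapturedLecture()
lecture.record_stroke("Bayes' rule", 312.5)
lecture.record_stroke("prior vs. posterior", 340.0)
print(lecture.seek_time_for("Bayes' rule"))  # 312.5
```

In a real capture system the same index would also carry the stroke's position on the board and a pointer into the media stream, so clicking the ink replays the surrounding discussion.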
In the future, we are interested in developing more automatic ways to
communicate between the virtual and the real worlds. This will be achieved
by adding sensors to the environment that identify the user and the
activity in the environment and update the virtual representation of this
activity accordingly.
Irfan Essa is an Assistant Professor in the College of Computing at Georgia Tech. He joined Georgia Tech one year ago and since then has been a member of the FCE group. He is also setting up a Computational Perception Lab under the GVU Center.