Ubiquitous and Aware Computing

Gregory Abowd and Irfan Essa

GVU Center and College of Computing
Georgia Institute of Technology

Recently, there has been growing interest in using computing technologies to build systems that support our daily activities. Examples of such systems are smart rooms and homes that allow controlled access to the premises, make homes energy efficient, monitor children, and allow the elderly to remain self-sufficient. In educational settings, such intelligent systems can be used to recognize whether students are interested or bored and respond accordingly. These systems don't have to be fixed to the environment; they can be mobile, even worn as part of our daily clothing. They can provide documentation and support for on-site repair technicians, memory augmentation and navigation for businesspeople, and supplement the capabilities of challenged individuals through lip-reading, sign-language recognition, translation, and visual assistance.

Research and development efforts for building such intelligent, interactive, human-centric systems that support and augment our daily lives rely on the concepts of ubiquitous and aware computing. We briefly outline these two concepts, followed by a description of our attempts to build such futuristic systems at Georgia Tech.

The defining characteristic of ubiquitous computing is the attempt to break away from the traditional desktop interaction paradigm and move computational power into the environment that surrounds the user. The challenge of ubiquitous computing, however, involves not only distributing computation and networking capabilities, but also providing a natural interface to the user. Ubiquitous computing advocates a complete shift away from the traditional interaction, where the user is forced to seek out the computer interface, toward an interaction where the system itself takes on the responsibility of locating and serving the user.

Aware computing aims to provide the system with knowledge about the user and the environment that surrounds the user. Such awareness can be achieved by incorporating perceptual abilities into the environment. This form of computational perception can be used to identify users, locate them, determine their focus of attention, and attempt to ascertain their intentions, i.e., be aware.

We are interested in combining these ideas of ubiquitous and aware computing to achieve computational augmentation of our everyday activities. This coupling can be achieved by instrumenting the environment with computational power, networking capabilities, and sensor technologies. Such instrumentation can be used to capture and process audio, video, and other sensory data, and to control the input, output, and information flow in an environment. The sensor technologies, combined with the distributed computation, will give the system the ability to perceive the environment. This computational perception will help identify users, determine what they are doing, and aid in predicting their needs and interests. Present developments in computational hardware, input/output devices, and sensor technologies suggest that building such environments will be a major focus of research and development in the upcoming years.

The Future Computing Environments (FCE) Group at Georgia Tech is working to build interactive environments to augment daily activity. The research method is application-oriented, meaning that we identify the everyday activity to support before considering how to augment the environment. Our mission is to identify, investigate, and invent technologies and environments that can be prototyped quickly and evaluated in real-life situations. In the past two years, the FCE group has developed a number of applications that rely on the concepts of ubiquitous and aware computing. These applications have involved three different domains: the classroom, the home, and the personal space of a mobile user.

We describe these different examples below in the context of ongoing FCE projects. More information can be found at http://www.cc.gatech.edu/fce.

The Classroom

The Classroom 2000 project is investigating the educational experience in the classroom. Typically, a classroom session generates a number of different streams of information: people talking and demonstrating, presentations on a whiteboard, software simulations, the lecturer's gesticulations, and the like. Students can spend a lot of time frantically taking notes to capture their understanding of all this information. Classroom 2000 enables the environment to assist in class record-keeping, freeing students to engage in understanding and participating in the experience rather than slavishly scribing. The environment also provides the lecturer with richer modes of presentation, improving the content and spontaneity of the educational experience. We have built a special classroom that can easily capture the activities of a lecture and have been using this environment on a regular basis for nine months.

At present, Classroom 2000 provides the ability to integrate different streams of activity. For example, words written on an electronic whiteboard are automatically linked to a digital recording of the audio and video in the class. Further analysis of the audio and video recordings provides for content-based understanding of the lecture. Spreading computational services around the physical classroom results in a room that is more aware of what is going on within it. When a student wishes to review a lecture, the captured experience serves as a more effective reminder and memory cue. Our future interest with the classroom is to build more awareness into the environment to track the professor's and the students' gestures, expressions, and audio interactions.
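The core of such stream integration is simple to sketch: each whiteboard annotation is stamped with the time at which it was written, so reviewing a note can cue audio playback from that moment. The following Python sketch illustrates the idea only; the class and method names are hypothetical and not taken from the actual Classroom 2000 implementation.

```python
import bisect

class CapturedLecture:
    """Links timestamped whiteboard annotations to an audio recording
    of the same session, so notes become indexes into the audio."""

    def __init__(self):
        # (timestamp_in_seconds, annotation_text) pairs, kept sorted by time
        self.strokes = []

    def record_stroke(self, timestamp, text):
        """Store an annotation with the time it was written on the board."""
        bisect.insort(self.strokes, (timestamp, text))

    def audio_position_for(self, query_text):
        """Return the audio timestamp at which a given annotation was
        written, so playback can jump to that moment of the lecture."""
        for timestamp, text in self.strokes:
            if text == query_text:
                return timestamp
        return None

# Usage: review a lecture by jumping to the moment a note was written.
lecture = CapturedLecture()
lecture.record_stroke(125.0, "problem statement")
lecture.record_stroke(310.5, "worked example")
print(lecture.audio_position_for("worked example"))  # 310.5
```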

The Home

The Domisilica project is aimed at producing a virtual community that mirrors and supports a real physical community. Our initial efforts are targeted toward the home and the extended family. We have built a prototype virtual home environment that is tied to the home setting of a number of researchers in FCE. We are making the two worlds, physical and virtual, work in concert with each other. So, for example, when produce is placed inside the physical refrigerator in the kitchen, the contents of a virtual refrigerator, CyberFridge, are automatically updated as well. We are also experimenting with how activity in the virtual world can affect the physical world. For example, when multiple people virtually visit a room in Domisilica that is associated with a physical room, say a living room, the physical environment produces more ambient noise to inform the physical occupants of the room of the presence of the virtual visitors.
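The physical-to-virtual mirroring described above can be pictured as an event-notification pattern: a sensed event in the physical appliance is broadcast to any virtual mirrors that have registered interest. The sketch below is a hypothetical illustration of that pattern, not Domisilica's actual code; the class names and methods are assumptions.

```python
class CyberFridge:
    """Virtual mirror of the physical refrigerator's contents."""

    def __init__(self):
        self.contents = set()

    def on_item_added(self, item):
        self.contents.add(item)

    def on_item_removed(self, item):
        self.contents.discard(item)

class PhysicalFridge:
    """Stand-in for the sensed physical appliance; in the real system a
    sensor would detect items, here we call the methods by hand."""

    def __init__(self):
        self.mirrors = []

    def attach(self, mirror):
        """Register a virtual mirror to be notified of physical events."""
        self.mirrors.append(mirror)

    def item_placed(self, item):
        for mirror in self.mirrors:
            mirror.on_item_added(item)

    def item_taken(self, item):
        for mirror in self.mirrors:
            mirror.on_item_removed(item)

# Usage: placing produce in the physical fridge updates the virtual one.
fridge = PhysicalFridge()
cyber = CyberFridge()
fridge.attach(cyber)
fridge.item_placed("milk")
print(cyber.contents)  # {'milk'}
```

The same pattern runs in the other direction for virtual-to-physical effects, with the virtual room notifying actuators (such as the ambient-noise generator) instead of mirrors.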

In the future, we are interested in developing more automatic ways to communicate between the virtual and the real worlds. This will be achieved by adding sensors to the environment that will identify the user and the activity in the environment and update the virtual representation of this environment.

Personal Space

The previous two examples dealt with fairly well-defined physical spaces, the classroom and the home. We are also interested in pursuing the concepts of ubiquitous and aware computing in environments where the physical space is the unfamiliar territory that surrounds a mobile user. The Cyberguide project is aimed at developing mobile assistants, more specifically tour guides, that are aware of the location and orientation of their user and provide information about the surrounding space. Our initial work in this area has relied on using different forms of hand-held computers with position sensors. So far we are concentrating on the software development issues, with the goal of keeping the system platform-independent. We are also pursuing research on wearable computers within this context.
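The basic query such a location-aware guide answers is: given the user's sensed position, which points of interest are close by? A minimal sketch, assuming a flat two-dimensional coordinate system and made-up site names; this is an illustration of the lookup, not Cyberguide's actual implementation.

```python
import math

def nearby_sites(position, sites, radius):
    """Return the names of sites within `radius` of the user's (x, y)
    position, sorted nearest first -- the core lookup a location-aware
    tour guide performs each time the position sensor updates."""
    px, py = position
    hits = []
    for name, (sx, sy) in sites.items():
        distance = math.hypot(sx - px, sy - py)
        if distance <= radius:
            hits.append((distance, name))
    return [name for _, name in sorted(hits)]

# Usage: hypothetical campus map with positions in arbitrary units.
campus = {
    "GVU Center": (0.0, 0.0),
    "Library": (3.0, 4.0),    # distance 5 from the origin
    "Stadium": (20.0, 0.0),   # outside the query radius below
}
print(nearby_sites((0.0, 0.0), campus, 10.0))  # ['GVU Center', 'Library']
```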


Ubiquitous and aware computing will play a dominant role in defining how we think about computing in the upcoming years. It represents a complete shift from thinking of computers as extensions of the desktop to thinking of them as extensions of the environment we live in. Applications of aware and ubiquitous computing are many and include systems that support our daily activities, monitor children, help the elderly remain self-sufficient, and augment challenged individuals. It is our intention to pursue these applications and extensions of the projects described above in the upcoming years.

About the Authors:

Gregory Abowd is an Assistant Professor in the College of Computing and an Associate Director of External Affairs of the GVU Center at Georgia Tech. He founded the Future Computing Environments group at Georgia Tech in 1995 along with Professor Chris Atkeson.

Irfan Essa is an Assistant Professor in the College of Computing at Georgia Tech. He joined Georgia Tech one year ago and since then has been a member of the FCE group. He is also setting up a Computational Perception Lab under the GVU Center. 

This article appeared in the GVU Newsletter, Fall 1997