
The Smart Carpet: A Mechanism for User Identification and Location Tracking

Robert J. Orr, College of Computing
Gregory Abowd, College of Computing
Chris Atkeson, College of Computing
Irfan Essa, College of Computing
Robert Gregor, Department of Health and Performance Sciences


INTRODUCTION

Intelligent Environments are systems in which a room or environment can change and react to individual users or provide users with personalized information. In intelligent environment systems, user identity and location are usually basic pieces of knowledge that the system needs in order to provide these augmented services to the user. A number of different techniques and forms of instrumentation have been used in these systems to establish user identity and location: Active Badges (wearable infrared beacons), face recognition, and ID cards have all been employed in intelligent environments. However, no one has yet instrumented the part of an intelligent environment with which everyone interacts: the floor. This project aims to instrument a section of flooring and to use a user's footfall force data, together with video of the person walking, to reliably determine identity. We also aim to track the user's location as she moves through an occupied space.

Each method of determining identity has its advantages and disadvantages. In our system, the user's identity will be determined from data collected passively by the system while the user participates in everyday activities. The user will not have to actively identify herself, other than by stepping on a force plate or by walking in front of a conveniently placed camera. The data used are unique biometrics: footfall force profiles, weight, stride length and period, limb lengths, and joint angles are as uniquely a part of who we are as fingerprints. No artificial devices or identifiers need be employed directly by the user. Our approach also avoids the occlusion problems that hamper many video-only systems.

GOALS & OPEN QUESTIONS

The goals and questions of this project can be divided into three areas: techniques, privacy, and applications.

1) Techniques

How can we robustly establish identity from gait signals? To what extent can we enhance reliability by combining different recognition methods (e.g., methods using footfall data and gait video)? How can we install this system in a working environment while keeping the user's interaction with it transparent? How can the price of components be reduced to the point that room-sized systems can be deployed, thereby adding position-tracking capability? Can we reliably track multiple people with this system? If so, how many people can we track?

2) Privacy

In an installation, how can we give people the option not to use the system if they prefer not to have their identity known? For example, can we design an installation that has an "identification path" and a parallel "non-identification path"? How can we establish intermediate levels of identity? For instance, if users prefer not to have their names associated with their footfall data, we should be able to give them an anonymous ID. Can we provide users with benefits compelling enough that they are willing to establish their identity in the system? How can we establish confidence in users that their privacy is safeguarded?

3) Applications

In what situations will this technology be most useful? Can we successfully integrate the Smart Carpet into Classroom 2000, using student identity to enhance the services available to students? Student identification will require reliably distinguishing one individual from a class of approximately 50. While the basic, unenhanced components of this system cannot provide such accuracy alone, we will investigate ways to combine them (and add enhancements) to yield higher accuracy. How can we use this technology in the Smart Space in the Computational Perception Lab? In addition, we have held discussions with students in the Interactive Design and Technology program about using this technology in entertainment spaces; for example, an interactive video installation could react differently based on the identity or location of the user.

METHODS

The first step in constructing the proposed system is to acquire footfall force data and train the system to recognize individual users. We have already purchased load cells and data-acquisition electronics and are currently constructing a force plate. In the meantime, in collaboration with the Center for Human Movement Studies in the Department of Health and Performance Sciences at Georgia Tech, we have gathered high-accuracy footfall force data. We are currently working on training a Hidden Markov Model (HMM) system to differentiate between the footfall force profiles of different users. (The feasibility of this approach has been demonstrated in recent work [Addlesee et al., 1997].)
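
To make the HMM step concrete, the sketch below shows one way such a classifier could be structured. It is written in Python with the hmmlearn library purely for illustration; the data layout (one single-channel force profile per footfall), the model sizes, and the function names are assumptions made for the example, not a description of the system as built.

    # Illustrative sketch: one Gaussian HMM per user, trained on that user's
    # footfall force profiles; a new profile is attributed to the user whose
    # model assigns it the highest log-likelihood. Data format is assumed.
    import numpy as np
    from hmmlearn import hmm

    def train_user_models(profiles_by_user, n_states=5):
        # profiles_by_user: dict mapping user name -> list of force profiles,
        # each profile an array of shape (timesteps, 1) holding the vertical
        # ground-reaction force sampled over one footfall.
        models = {}
        for user, profiles in profiles_by_user.items():
            X = np.vstack(profiles)               # concatenate sequences
            lengths = [len(p) for p in profiles]  # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[user] = m
        return models

    def identify(models, profile):
        # Return the user whose HMM best explains the new force profile.
        scores = {user: m.score(profile) for user, m in models.items()}
        return max(scores, key=scores.get)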

In parallel with the HMM work, we are developing software to perform recognition based on video of a person walking. We have already completed the foundation of this system: it can detect whether a walking person is in a scene and can fit a rough stick-figure model to the walker. We are basing this system on other recent work [Niyogi & Adelson, 1994]. Ultimately, this system will fit an accurate stick figure to the walker, and we will be able to extract the limb lengths and the time-varying joint angles from this model. These features will be used to recognize the identity of the user.
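
As an illustration of the kinds of features such a model would yield, the following Python sketch computes limb lengths and a time-varying joint angle from tracked 2-D joint positions. The joint names and the per-frame input format are assumptions made for the example, not the output format of our tracker.

    # Illustrative sketch: derive gait features (limb lengths, joint angles)
    # from a sequence of stick-figure joint positions. Input format assumed.
    import numpy as np

    def limb_length(joints, a, b):
        # Euclidean distance between two named joints, e.g. "hip" and "knee".
        return np.linalg.norm(joints[a] - joints[b])

    def joint_angle(joints, a, b, c):
        # Angle at joint b formed by the segments b->a and b->c, in radians.
        u = joints[a] - joints[b]
        v = joints[c] - joints[b]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def gait_features(frames):
        # frames: list of dicts mapping joint name -> np.array([x, y]).
        knee_angles = [joint_angle(f, "hip", "knee", "ankle") for f in frames]
        thigh = np.mean([limb_length(f, "hip", "knee") for f in frames])
        shank = np.mean([limb_length(f, "knee", "ankle") for f in frames])
        return np.array(knee_angles), thigh, shank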

When these two systems are functioning, the next step will be to combine them into a multimodal recognition system. It is our hope that this multimodal system will provide higher accuracy than either system alone. Extensions to the system will include weight, stride length, and stride period as additional features for the HMM system to use in training and recognition. Another approach we will consider is the use of HMMs on the video data itself; a recent attempt at this approach, applied to tennis swings, was moderately successful [Yamato, Ohya, & Ishii, 1992]. It is our intention to investigate which combination of these techniques will yield the most reliable recognition results.
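
One simple way to realize such a combination is score-level fusion, sketched below in Python. The softmax normalization and the weighting parameter are illustrative assumptions, not a commitment to a particular fusion scheme.

    # Illustrative sketch: combine per-user scores from the force-based HMM
    # recognizer and the video-based recognizer into a single ranking.
    import numpy as np

    def fuse_scores(force_scores, video_scores, w_force=0.5):
        # force_scores, video_scores: dicts mapping user -> raw score
        # (e.g. log-likelihoods); both must cover the same set of users.
        def normalize(scores):
            users = sorted(scores)
            vals = np.array([scores[u] for u in users], dtype=float)
            vals = np.exp(vals - vals.max())      # softmax for comparability
            return dict(zip(users, vals / vals.sum()))

        f = normalize(force_scores)
        v = normalize(video_scores)
        combined = {u: w_force * f[u] + (1 - w_force) * v[u] for u in f}
        return max(combined, key=combined.get), combined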

Finally, we have already developed a system that uses force-sensitive resistors (FSRs) to track the location of a single person over a small area. FSRs are a cheap force-sensing technology, but they are not well suited to measuring the relatively high forces of footfalls (they were designed for laptop trackpads). We will investigate ways of tracking multiple people across a space and of reducing the cost of the system so that deploying it in a useful space will not be prohibitively expensive. We will also explore how to combine the identification and tracking aspects so that we can track particular people accurately in a space that contains multiple people.
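
For the single-person case, the position estimate can be as simple as a pressure-weighted centroid over the FSR grid, as in the Python sketch below; the grid geometry, cell size, and activation threshold are assumptions for the example.

    # Illustrative sketch: estimate a single walker's position from a 2-D
    # grid of FSR readings by taking the pressure-weighted centroid.
    import numpy as np

    def locate(fsr_readings, cell_size_m=0.3, threshold=0.1):
        # fsr_readings: 2-D array, one value per sensor cell (higher = more
        # force). Returns (x, y) in meters, or None if no cell is active.
        grid = np.asarray(fsr_readings, dtype=float)
        active = np.where(grid > threshold, grid, 0.0)
        total = active.sum()
        if total == 0:
            return None
        rows, cols = np.indices(grid.shape)
        y = (rows * active).sum() / total * cell_size_m
        x = (cols * active).sum() / total * cell_size_m
        return (x, y)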

In parallel with the development of the project's techniques, we will be investigating the privacy concerns it raises. We believe that in most situations it will be important that users have the option not to use the system if they so desire. This option must be designed into the system from the start, not added as an afterthought. There may also be various levels of identity that can be established in the system. For example, if a user does not wish his or her name to be associated with his or her biometric digital identity, we may be able to establish an anonymous ID, simply noting that "we have seen this person before" but not providing the person with the full range of customized services. Furthermore, we will investigate how privacy-preserving techniques, such as those presented in [Smith & Hudson, 1996], may be applied to our system.
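
As a sketch of what graded identity could look like in software (illustrative only; the levels, record fields, and service names are invented for the example), a user record might carry a full name, an anonymous ID only, or nothing, with the available services degrading accordingly:

    # Illustrative sketch: three levels of identity, from fully named to none.
    import uuid

    class IdentityRecord:
        def __init__(self, level="anonymous", name=None):
            # level is one of "named", "anonymous", or "none".
            self.level = level
            self.name = name if level == "named" else None
            # An opaque ID lets the system note "we have seen this person
            # before" without ever storing who the person is.
            self.anon_id = uuid.uuid4().hex if level != "none" else None

        def services(self):
            if self.level == "named":
                return ["personalized notes", "location-based reminders"]
            if self.level == "anonymous":
                return ["remembered preferences for this anonymous visitor"]
            return []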

PLANS FOR FUTURE FUNDING

Future funding for this project may come from a number of sources. The National Science Foundation has two programs under which this work falls: the Robotics and Human Augmentation program and the Interactive and Intermedia Technology program. The Future Computing Environments group here in the GVU has recently submitted several grant proposals to the NSF for funding of the next-generation intelligent environment. DARPA has also recently accepted white papers for its Smart Spaces and Interactive Environments program; again, the FCE group has submitted a white paper to this program. In the industrial arena, many companies are becoming increasingly interested in intelligent environments as a possible future computing paradigm: Microsoft, Xerox PARC, IBM, DEC, and Hewlett-Packard all have active research programs in this area. As this project matures, we will actively solicit these organizations for further support. GVU seed grant funding will go a long way toward helping us develop a prototype application that can then be used to acquire further funding in this area.
