Thad Starner   

Professor | Contextual Computing Group | College of Computing | Georgia Institute of Technology
Interfaces for Augmenting Face-to-Face Conversation
  • Mobile Text Entry
  • Dual-Purpose Speech
  • Augmenting Conversation between the Deaf and Hearing Community

Gesture Recognition & Activity Discovery
  • Sign Language
  • Activity
  • Gesture

Previous Work
  • Face & Handwriting Recognition
  • Augmented Reality & Virtual Reality
  • Power & Heat
  • Agents & Ubiquitous Computing
  • Miscellaneous

    Thad Starner is the director of the Contextual Computing Group and is also a Technical Lead/Manager on Google's Project Glass. In general, our academic research creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). Recently, we have been designing [assistive technology with the Deaf community]. One of our main projects is [CopyCat], a game that uses American Sign Language recognition to help young deaf children acquire language skills. We continually develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture. Currently, we are exploring mobile interfaces that are fast to access, like wristwatches.

    Our members are among the longest-standing and most active supporters of the academic wearable computing community, helping to establish, and contributing to, the annual International Symposium on Wearable Computers, the IEEE Wearable Information Systems Technical Committee (TCWEAR), IEEE Pervasive Computing magazine, various workshops and mailing lists, and hardware and software resources for industry and research.

    Notices

    Press Inquiries

    To reach me about a time-sensitive press or media matter, or to obtain images or video of my work, please contact Jason Maderer, Media Relations (maderer@gatech.edu), 404-385-2966.

    Animal Computer Interaction Lab

    The May 2015 issue of National Geographic covers our work with the [Wild Dolphin Project], for which we are raising funds to create the next iteration of our hardware and analysis tools. Our research on wearable computers that enable two-way communication with dogs (FIDO), dolphins (CHAT), and other species is part of our [Animal Computer Interaction Lab], where I serve as Technical Director.

    Passive Haptic Learning and Passive Haptic Rehabilitation

    Our discovery and development of [Passive Haptic Learning] allows wearable computer users to learn complex manual skills, such as playing the piano or typing Braille, with little or no attention devoted to the learning. Our preliminary studies with people with partial spinal cord injury suggest that the same system might be used for hand rehabilitation.
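
    As a rough sketch of the mechanism: a Passive Haptic Learning system repeatedly "plays" the target finger sequence through vibration motors worn on the hand while the user attends to other tasks. The toy loop below illustrates that idea only; the finger-to-note mapping, timings, and the print stand-in for a tactor driver are invented for illustration and are not our actual glove system.

        # Toy sketch of a Passive Haptic Learning stimulation loop.
        # The passage, timings, and tactor "driver" below are invented
        # for illustration; they are not our glove hardware or protocol.
        import time

        # Passage to teach, as (finger index, hold time in seconds) pairs;
        # finger 0 = thumb ... 4 = pinky.
        PASSAGE = [(1, 0.3), (2, 0.3), (3, 0.3), (1, 0.3), (4, 0.6)]

        def pulse_tactor(finger: int, duration: float) -> None:
            """Stand-in for driving one vibration motor on the glove."""
            print(f"buzz finger {finger} for {duration:.1f}s")
            time.sleep(duration)

        def passive_session(repetitions: int = 3, gap: float = 0.2) -> None:
            """Replay the passage repeatedly while the wearer does other tasks."""
            for _ in range(repetitions):
                for finger, duration in PASSAGE:
                    pulse_tactor(finger, duration)
                    time.sleep(gap)   # brief pause between stimuli
                time.sleep(1.0)       # rest between repetitions

        if __name__ == "__main__":
            passive_session()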

    Brainsign

    We have shown that we can read American Sign Language signs directly from the signer's motor cortex using fMRI. Here is an early version of the [paper], published at ICPR. One potential application is an interface for people who are "locked in" due to Amyotrophic Lateral Sclerosis (ALS): attempted movements by people with ALS produce brain signals similar to those of actual movements by neurotypical people. The hope is to teach sign to people with ALS before they are fully locked in and then recognize their attempted movements for communication using more mobile sensors (such as fNIR).
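
    The decoding step can be framed as a standard pattern recognition problem: given a vector of voxel activations from motor cortex, predict which sign was attempted. The toy sketch below illustrates only that framing, on synthetic data with a linear SVM; the features, classifier, and accuracy are stand-ins, not the method or results of the ICPR paper.

        # Toy framing of "Brainsign" as pattern recognition: predict which
        # sign was attempted from a vector of motor-cortex voxel activations.
        # Synthetic data and a linear SVM are illustrative stand-ins for the
        # real fMRI features and the classifier used in the paper.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_trials, n_voxels, n_signs = 120, 500, 4

        # Fake voxel activations: each sign has its own mean activation pattern.
        labels = rng.integers(0, n_signs, size=n_trials)
        prototypes = rng.normal(size=(n_signs, n_voxels))
        X = prototypes[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

        # Cross-validated accuracy of decoding sign identity from activations.
        clf = LinearSVC(C=0.1, max_iter=5000)
        scores = cross_val_score(clf, X, labels, cv=5)
        print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")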

    Center for Accessible Technology in Sign

    Through our [Center for Accessible Technology in Sign], we develop a computer-based automatic sign language recognition system and use it to create [CopyCat], a sign language game that helps young deaf children of hearing parents acquire language skills. We are also creating the [SMARTSign app] for Android, iOS, and Google Glass, which allows hearing parents to learn sign in a convenient fashion.
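
    A common scheme for this kind of recognizer, and one our group has long used, is to train one hidden Markov model (HMM) per vocabulary sign and label an incoming feature sequence with the model that scores the highest likelihood. The sketch below illustrates that scheme with hmmlearn on synthetic sequences; the features, vocabulary, and model sizes are invented stand-ins for the real vision-based features in CopyCat.

        # Minimal sketch of HMM-based sign recognition: train one hidden
        # Markov model per vocabulary sign, then classify a new feature
        # sequence by maximum log-likelihood over the models. hmmlearn and
        # the synthetic "hand-tracking" features are illustrative stand-ins.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(1)
        FEATURE_DIM = 6  # e.g., hand position, velocity, and shape features

        def fake_sequences(offset, n_seqs=10, length=30):
            """Toy feature sequences; real input would come from vision tracking."""
            return [rng.normal(loc=offset, size=(length, FEATURE_DIM))
                    for _ in range(n_seqs)]

        def train_sign_model(seqs):
            """Fit one HMM on all training examples of a single sign."""
            X = np.concatenate(seqs)
            lengths = [len(s) for s in seqs]
            model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=25)
            model.fit(X, lengths)
            return model

        # One HMM per sign in a toy two-sign vocabulary.
        models = {"CAT": train_sign_model(fake_sequences(0.0)),
                  "DOG": train_sign_model(fake_sequences(3.0))}

        # Classify an unseen sequence by maximum likelihood over the models.
        test = rng.normal(loc=3.0, size=(30, FEATURE_DIM))
        best = max(models, key=lambda sign: models[sign].score(test))
        print("recognized sign:", best)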

    Google Glass

    For the past five years, I have been serving as a Technical Lead/Manager on [Google's Glass], which has been promoted from a Google[x] experimental project to a Google product under Tony Fadell, famous for his work on another wearable computer, Apple's iPod.

    Teaching Artificial Intelligence (6601) Spring 2015

    I am teaching AI this semester. See the [website].

    IEEE STC on Wearable and Ubiquitous Technology

    I am also the Chair of the IEEE STC on Wearable and Ubiquitous Technology. Please consider participating in the [International Symposium on Wearable Computers (ISWC)] and joining the [Wearable Computing Google+ community].

    Gartner interview

    I've been [interviewed for the Gartner Fellows program], which explores my view of the wearable computer as a helper in the user's everyday life and what has changed over the past 20 years. I've also recently given a couple of Google Tech Talks, ["Reading Your Mind: Interfaces for Wearable Computing"], which look at our recent and upcoming research from a perspective suggested by one of my colleagues, Beth Mynatt.

    Potential Students

    If you are a Georgia Tech graduate or undergraduate student interested in working with me, please review our publications and web pages to see what interests you, and send an ASCII resume to both me and the lead graduate students listed on the project.