Thad Starner   

Professor | Contextual Computing Group | College of Computing | Georgia Institute of Technology
Interfaces for Augmenting
Face-to-Face Conversation >>
  • Mobile Text Entry
  • Dual-Purpose Speech
  • Augmenting Conversation between
    the Deaf and Hearing Community

  • Gesture Recognition &
    Activity Discovery >>
  • Sign Language
  • Activity
  • Gesture

  • Previous Work >>
  • Face & Handwriting Recognition
  • Augmented Reality & Virtual Reality
  • Power & Heat
  • Agents & Ubiquitous Computing
  • Miscellaneous

  • Thad Starner is the director of the Contextual Computing Group and is also a Technical Lead/Manager on Google's Project Glass. In general, our academic research creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). Recently, we have been designing [assistive technology with the deaf community]. One of our main projects is [CopyCat], a game that uses American Sign Language recognition to help young deaf children acquire language skills. We continually develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture. Currently, we are exploring mobile interfaces that are fast to access, like wristwatches.

    Our members are among the longest-standing and most active supporters of the wearable computing academic community, helping to establish and contributing to the annual International Symposium on Wearable Computers, the IEEE Wearable Information Systems Technical Committee (TCWEAR), IEEE Pervasive Computing magazine, various workshops and mailing lists, and hardware and software resources for industry and research.


    Press Inquiries

    To reach me about a time-sensitive press or media matter, or to get images or video of my work, please contact Jason Maderer, Media Relations (404-385-2966).

    [Animal Computer Interaction Lab]

    [CNN just released a story] on our Facilitating Interactions for Dogs with Occupations ([FIDO]) research. Our team has invented several wearable devices that a dog can use to communicate what it perceives.

    [National Geographic has a story in their print magazine] covering our work with the [Wild Dolphin Project]. We invented a wearable computer system called [CHAT] that marine mammalogists can use for two-way communication experiments with dolphins. We are raising funds for a new version that operates at the higher frequencies dolphins use to communicate.

    Passive Haptic Learning and Passive Haptic Rehabilitation

    Our discovery and development of [Passive Haptic Learning] allows wearable computer users to learn complex manual skills, like playing the piano or typing Braille, with little or no attention devoted to the learning. Our preliminary studies with people with partial spinal cord injury suggest that the same system might be used for hand rehabilitation.


    We have shown that we can read American Sign Language signs directly from the signer's motor cortex using fMRI. Here is an early version of the [paper] published at ICPR. One potential application is an interface for people who are "locked in" due to Amyotrophic Lateral Sclerosis (ALS). Attempted movements by people with ALS produce brain signals similar to those of actual movements by neurotypical people. The hope is to teach sign to people with ALS before they are fully locked in and then recognize their attempted movements for communication using more mobile sensors (such as fNIR).

    Center for Accessible Technology in Sign

    Through our [Center for Accessible Technology in Sign], we are developing a computer-based automatic sign language recognition system and using it to create the [CopyCat] sign language game, which helps young deaf children of hearing parents acquire language skills. We are also creating the [SMARTSign app] for Android, iOS, and Google Glass, which allows hearing parents to learn sign in a convenient fashion.

    Google Glass

    For the past five years, I have been serving as a Technical Lead/Manager on [Google's Glass], which has been promoted from a Google[x] experimental project to a Google product under Tony Fadell, famous for his work on another wearable computer, Apple's iPod.


    Mobile and Ubiquitous Computing (CS7470/CS4605/ID8900)

    Mobile and Ubiquitous Computing (affectionately known as MUC) was created by Gregory Abowd (CS), Clint Zeagler (ID), and me as a way for industrial design and computer science students to work together (as they do at companies like Google, Apple, etc.). Officially, Clint and I are teaching, but students will see guest lectures by Gregory, Zane Cochrane, and Betsy DiSalvo.

    Artificial Intelligence (OMSCS6601) Spring 2016

    I am teaching AI for the OMSCS program this semester. See the [website], or check out the Fall 2015 on-campus [version].

    Office hours: Monday 4:30-5:30pm & Wednesdays 9:30-10:30am, TSRB Room 239

    IEEE STC on Wearable and Ubiquitous Technology

    I am also the Chair of the IEEE STC on Wearable and Ubiquitous Technology. Please consider participating in the [International Symposium on Wearable Computers (ISWC)] and joining the [Wearable Computing Google+ community].

    Potential Students

    If you are a Georgia Tech graduate or undergraduate student interested in working with me, please review our publications on [Google Scholar] and send an ASCII resume to both me and the lead graduate students listed on the project.