Thad Starner   

Professor | Contextual Computing Group | College of Computing | Georgia Institute of Technology

    Thad Starner is the director of the [Contextual Computing Group (CCG)] and is also the longest-serving Technical Lead/Manager on [Google's Glass]. He is a founder of the [Animal Computer Interaction Lab] and works with the [Georgia Tech Ubicomp Group] and the [Brainlab].

    For various biographies, see my [CV section].

    CCG creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). Currently, we are renewing our efforts on [assistive technology with the Deaf community]. One of our main projects is [CopyCat], a game that uses computer vision-based sign recognition to help young deaf children acquire language skills in American Sign Language. We develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture, and we explore mobile interfaces that are fast to access, such as wristwatches.

    Our members and graduates are among the longest-standing and most active supporters of the academic wearable computing community, helping to establish and contributing to the annual [International Symposium on Wearable Computers] and the [IEEE Pervasive Computing magazine].

    Notices

    Former Google+ Wearable Computing community members:

    Tony Havelka of Tekgear is putting up a service at [http://www.wearhard.info] for us to use.

    Press Inquiries

    For time-sensitive press or media inquiries, or to obtain images or video of my work, please contact Ann Claycombe in Media Relations (ann.claycombe@cc.gatech.edu).

    Spring 2022 office hours:

    Due to travel, I will not be holding office hours on the following upcoming dates:

    General (priority to faculty, class students, and visiting OMSCS students): Weds. 5-7pm TSRB Room 239.

    CS3651 office hours: Tues & Thurs 1:30-2pm; 3:15-4:30pm, CCB 337

    OMSCS office hours: announced on Ed

    Potential Students

    If you are a Georgia Tech graduate or undergraduate student interested in working with me, please review our publications on [Google Scholar] and send an ASCII resume to both me and the lead graduate students listed for the project that interests you.

    Teaching

    Artificial Intelligence (OMSCS6601) Spring 2018

    I am teaching the online AI class this semester. See the [syllabus], or check out the [schedule]. Canvas access is required.

    The materials and projects from this course are now part of [Udacity's nanodegree in AI]!

    The Art of Prototyping Intelligent Appliances (CS3651) Spring 2019

    CS3651 is a "maker" class that teaches enough electronics and shop skills to allow prototyping wearable and ubiquitous computing devices from scratch. See last year's [ syllabus ]. Or check out last year's [ schedule ].

    Wearable Computing [inventions] and [Google Glass]

    For the past 8 years I have been at Google as either a Staff Research Scientist or a Technical Lead/Manager working on wearable computers. One example is [Google's Glass], which was the highest-grossing head-worn display up to that time. While the experimental Glass Explorer Edition (XE) was sold publicly for nine months until January 2015, [Glass Enterprise] started sales in 2015 and continues to be sold today. I am one of Google's most prolific inventors, with 80 United States utility patents (102 worldwide) issued to date.

    I have been prototyping wearable computers in academia since 1990 and own one of the [largest private collections of head-worn displays], which has been shown in museums worldwide.

    [Learning manual tasks and manual rehabilitation without effort]

    We discovered and developed Passive Haptic Learning and Passive Haptic Rehabilitation, [best described in this short documentary by FreeThink], which allow wearable computer users to learn complex manual skills such as playing the piano or typing Braille with little or no attention devoted to the learning. Our preliminary studies with people with partial spinal cord injury and stroke suggest that the same system might be used for hand rehabilitation.

    October 20, 2016: We just released a [GVU technical report called "Perception in Hand-Worn Haptics: Placement, Simultaneous Stimuli, and Vibration Motor Comparisons"], which describes the best places to put vibration motors in tactile gloves, based on a series of quantitative user studies.

    June 17, 2016: Our technique for teaching Morse code with passive haptic learning using Google Glass will be presented at ISWC2016 in the paper "Tactile Taps Teach Rhythmic Text Entry: Passive Haptic Learning of Morse Code."

    June 2015: Our method of passively teaching two-handed, chorded piano songs like Mozart's Turkish March was presented at IEEE World Haptics in the published paper ["Towards Passive Haptic Learning of Piano Songs."]

    [Phrase-level communication with brain signals]

    We can distinguish individual signs and phrases of American Sign Language directly from the motor cortex using fMRI. Details are in our [Brainsign paper] published at ICPR. One potential application is an interface for people who are "locked-in" due to Amyotrophic Lateral Sclerosis (ALS). Movements attempted by individuals with ALS generate brain signals similar to those produced by actual movements in neurotypical people. Our hope is to teach sign language to people with ALS before they are fully locked in and then recognize their attempted movements for communication using more mobile sensors (such as fNIR).

    May 30, 2016: Our earbud with a brain-computer interface is described in ["Towards Mobile and Wearable Brain-Computer Interfaces"] and presented at the biannual BCI Meeting.

    [Communicating with animals using wearables]

    Our [Animal Computer Interaction] lab is one of the leaders in this new field. Our current projects include [Facilitating Interactions for Dogs with Occupations (FIDO)] and [Cetacean Hearing and Telemetry (CHAT)]. The CHAT project involves a collaboration with Dr. Denise Herzing at the [Wild Dolphin Project].

    September 11, 2016: We are presenting our paper "Feature Learning and Automatic Segmentation for Dolphin Communication Analysis" at INTERSPEECH 2016.

    June 17, 2016: We are creating a method for working dogs to communicate with their handlers through gesture. Our progress to date will be presented at ISWC2016 in the paper "Creating Collar-Sensed Motion Gestures for Dog-Human Communication."

    June 14, 2016: [PBS News Hour covered our work with wild dolphins, describing our underwater CHAT wearable computers and UHURA pattern discovery methods.]

    June 10, 2016: Our paper on "Feature Learning and Automatic Segmentation for Dolphin Communication Analysis" has been accepted to INTERSPEECH 2016.

    May 18, 2016: [CNN released a story] on our Facilitating Interactions for Dogs with Occupations ([FIDO]) research. Our team has invented several wearable and IoT devices that a dog can use to communicate what it perceives.

    April 13, 2016: The International Journal of Human-Computer Studies just published our article ["A method to evaluate haptic interfaces for working dogs."]

    May 2015: [National Geographic has a story in their print magazine] covering our work with the [Wild Dolphin Project]. We invented an underwater wearable computer system called [CHAT that marine mammalogists can use for two-way communication experiments with dolphins.] We are [raising funds for a new version that operates at the higher frequencies dolphins use to communicate.]

    [Center for Accessible Technology in Sign (CATS)]

    Through our [Center for Accessible Technology in Sign], we develop a computer-based automatic sign language recognition system and use it to create the [CopyCat] sign language game, which helps young deaf children of hearing parents acquire language skills. We are also creating the [SMARTSign app] for Android, iOS, and Google Glass, which allows hearing parents to learn sign in a convenient fashion.

    IEEE STC on Wearable and Ubiquitous Technology

    I am also the Chair of the IEEE STC on Wearable and Ubiquitous Technology. Please consider participating in the [International Symposium on Wearable Computers (ISWC)] and joining the [Wearable Computing Google+ community].