Thad Starner   

Professor | Contextual Computing Group | College of Computing | Georgia Institute of Technology
Interfaces for Augmenting Face-to-Face Conversation >>
  • Mobile Text Entry
  • Dual-Purpose Speech
  • Augmenting Conversation between the Deaf and Hearing Communities

Gesture Recognition & Activity Discovery >>
  • Sign Language
  • Activity
  • Gesture

Previous Work >>
  • Face & Handwriting Recognition
  • Augmented Reality & Virtual Reality
  • Power & Heat
  • Agents & Ubiquitous Computing
  • Miscellaneous

  • Thad Starner is the director of the [Contextual Computing Group (CCG)] and is also the longest-serving Technical Lead/Manager on [Google's Glass]. He is a founder of the [Animal Computer Interaction Lab] and works with the [Georgia Tech Ubicomp Group] and the [Brainlab].

    For various biographies, see my [CV section].

    CCG creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). Currently, we are renewing our efforts in [assistive technology with the Deaf community]. One of our main projects is [CopyCat], a game that uses computer vision-based sign recognition to help young deaf children acquire language skills in American Sign Language. We develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture, and we explore mobile interfaces that are fast to access, like wristwatches.

    Our members and graduates are among the longest-standing and most active supporters of the academic wearable computing community, helping to establish and contributing to the annual [International Symposium on Wearable Computers] and the [IEEE Pervasive Computing magazine.]


    Press Inquiries

    To reach me about a time-sensitive press or media matter, or to request images or video of my work, please contact Jason Maderer, Media Relations, at 404-385-2966.

    Fall 2016 office hours: Tuesdays 4:00-6:00pm, TSRB Room 239.

    OMSCS office hours: TBD via Google Hangout On Air

    I will not be holding office hours on Tuesday, October 4, due to travel.

    Potential Students

    If you are a Georgia Tech graduate or undergraduate student interested in working with me, please review our publications on [Google Scholar] and send an ASCII resume to both me and the lead graduate students listed on the project.


    [Artificial Intelligence (OMSCS6601) Fall 2016]

    I am teaching AI this semester for both on-campus students and the on-line Master of Science program. See the [syllabus] or check out the [schedule].

    Wearable Computing [inventions] and [Google Glass]

    For the past 6 years I've been serving as a Technical Lead/Manager on [Google's Glass], which has been promoted from a Google[x] experimental project to a Google product. I've been prototyping wearable computers in academia since 1990 and own one of the largest private collections of head-worn displays, parts of which are currently on display at Clint Zeagler's [Museum of Design Atlanta exhibit "On You"] until October 2.

    Sept 10, 2016: We are finishing the camera-ready version of our [ICMI] paper on a method to reliably select between choices in a wearable interface with eye movement sensed by three electrodes in a commercially available device.

    August 30, 2016: My 63rd U.S. utility patent issued: #9,429,990 Point-of-view object selection.

    August 16, 2016: My 62nd U.S. utility patent issued: #9,418,617 Methods and systems for receiving input controls.

    August 2, 2016: My 61st U.S. utility patent issued: [#9,405,977 Using visual layers to aid in initiating a visual search.]

    July 22, 2016: It was quite an honor when the National Academy of Sciences asked me to give a talk on the [History of Wearables]. Just discovered that the talk is available on-line.

    July 7, 2016: My 60th U.S. utility patent issued: [#9,383,919 Touch-based text entry using hidden Markov modeling.]

    June 26, 2016: [On You: Wearing Technology], a revised version of our traveling wearable computing exhibit visits the Museum of Design Atlanta. The exhibit presents the challenges in creating a consumer wearable computer and the inventions and devices that have addressed these challenges over the past five decades. A guide helps teachers relate the exhibit to Georgia's grade school curricula. Previous installations were shown at the [Deutsches Museum in Munich], the [Computer History Museum in California], and the [World Economic Forum in Tianjin.]

    June 17, 2016: Our method of assisting order picking with a combination of a head-up display (Glass), pick-by-light, and scales for error correction will be presented at ISWC2016 in "A Comparison of Order Picking Methods Augmented with Weight Checking Error Detection."

    June 17, 2016: Our method of controlling a smartwatch by blowing on it will be presented at ISWC2016 in "Whoosh: Non-Voice Acoustics for Low-Cost, Hands-Free, and Rapid Input on Smart Devices."

    June 17, 2016: One of our most recent inventions "MoodLens: Restoring Non-Verbal Communication with an In-Lens Fiber Optic Display," which is aimed at helping people with ALS better communicate, will be presented at ISWC2016.

    June 17, 2016: "WatchOut: Extending Interactions on a Smartwatch with Inertial Sensing" will present our new work on using taps and swipes on a watch’s case, bezel, and band at ISWC2016.

    May 31, 2016: My 59th U.S. utility patent issued: [#9,354,445 Information processing on a head-mountable device.]

    May 6, 2016: We submitted a paper to [ICMI] on a method to reliably select between choices in a wearable interface with eye movement sensed by three electrodes in a commercially available device.

    May 3, 2016: [Time magazine named Google Glass one of the "The 50 Most Influential Gadgets of All Time."]

    May 1, 2016: [Google Scholar] shows >20,000 citations to my work, mostly involving wearables.

    April 23, 2016: We submitted 9 papers to the [International Symposium on Wearable Computers], involving 8 interface inventions and 7 user studies.

    April 19, 2016: My 58th U.S. utility patent issued: [#9,316,481 Sensor for measuring tilt angle based on electronic textile and method thereof.]

    March 29, 2016: My 57th U.S. utility patent issued: [#9,298,256 Visual completion.]

    March 22, 2016: My 56th U.S. utility patent issued: [#9,292,082 Text-entry for a computing device.]

    March 1, 2016: Two new patents issued today: [#9,274,599 Input detection] and [#9,277,334 Wearable computing device authentication using bone conduction.]

    [Communicating with animals using wearables]

    September 11, 2016: Presenting our paper "Feature Learning and Automatic Segmentation for Dolphin Communication Analysis" at Interspeech 2016.

    June 17, 2016: We are creating a method for working dogs to communicate with their handlers through gesture. Our progress to date will be presented at ISWC2016 in the paper "Creating Collar-Sensed Motion Gestures for Dog-Human Communication."

    June 14, 2016: [PBS News Hour covered our work with wild dolphins, describing our underwater CHAT wearable computers and UHURA pattern discovery methods.]

    June 10, 2016: Our paper on "Feature Learning and Automatic Segmentation for Dolphin Communication Analysis" has been accepted to INTERSPEECH 2016.

    May 18, 2016: [CNN released a story] on our Facilitating Interactions for Dogs with Occupations ([FIDO]) research. Our team has invented several wearable and IoT devices that a dog can use to communicate what it perceives.

    April 13, 2016: The International Journal of Human-Computer Studies just published our article ["A method to evaluate haptic interfaces for working dogs."]

    May 2015: [National Geographic has a story in their print magazine] covering our work with the [Wild Dolphin Project]. We invented an underwater wearable computer system called [CHAT that marine mammalogists can use for two-way communication experiments with dolphins.] We are [raising funds for a new version that operates at the higher frequencies dolphins use to communicate.]

    [Learning manual tasks like playing piano without attention]

    Our discovery and development of [Passive Haptic Learning, best described in this talk at TEDxSalford] allows wearable computer users to learn complex manual skills like playing the piano or typing Braille with little or no attention devoted to the learning. Our preliminary studies with people with partial spinal cord injury suggest that the same system might be used for hand rehabilitation.
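
    As a rough illustration of the idea (not our study hardware or protocol), the sketch below assumes a glove with one vibration motor per finger and simply replays a song's fingering as timed pulses in the background while the wearer attends to something else; the pulse_finger driver and the note-to-finger mapping are hypothetical placeholders.

      import threading
      import time

      # Hypothetical mapping from notes to fingers of the right hand (thumb = 1).
      NOTE_TO_FINGER = {"C4": 1, "D4": 2, "E4": 3, "F4": 4, "G4": 5}

      def pulse_finger(finger, duration_s=0.2):
          # Placeholder motor driver: a real glove would switch a vibration
          # motor on and off here (e.g., via a GPIO pin or a BLE command).
          print(f"buzz finger {finger} for {duration_s:.1f}s")
          time.sleep(duration_s)

      def play_passively(note_sequence, gap_s=0.5, repetitions=3):
          # Repeat the fingering pattern so it can be absorbed passively.
          for _ in range(repetitions):
              for note in note_sequence:
                  pulse_finger(NOTE_TO_FINGER[note])
                  time.sleep(gap_s)

      # Run in a background thread so the stimulation does not demand attention.
      opening = ["E4", "E4", "F4", "G4", "G4", "F4", "E4", "D4"]  # "Ode to Joy" opening
      threading.Thread(target=play_passively, args=(opening,), daemon=True).start()
      time.sleep(20)  # keep the demo process alive while the pattern plays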

    October 20, 2016: Just released a [GVU technical report called "Perception in Hand-Worn Haptics: Placement, Simultaneous Stimuli, and Vibration Motor Comparisons"] which describes the best places to put vibration motors in tactile gloves based on a series of quantitative user studies.

    June 17, 2016: Our technique for teaching Morse code with passive haptic learning using Google Glass will be presented at ISWC2016 in the paper "Tactile Taps Teach Rhythmic Text Entry: Passive Haptic Learning of Morse Code."

    June, 2015: Our method of passively teaching two-handed, chorded piano songs like Mozart's Turkish March was presented at IEEE World Haptics in the published paper ["Towards Passive Haptic Learning of Piano Songs."]

    [Phrase-level communication with brain signals]

    We can distinguish individual signs and phrases of American Sign Language directly from the motor cortex using fMRI. Details are in our [Brainsign paper] published at ICPR. One potential application is to create an interface for people who are "locked-in" due to Amyotrophic Lateral Sclerosis (ALS). Movements attempted by individuals with ALS generate brain signals similar to actual movements by neurotypical people. Our hope is to teach sign language to people with ALS before they are fully locked-in and then recognize their attempted movements for communication using more mobile sensors (like fNIR).
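
    At its core, the recognition step is supervised classification over motor cortex activity patterns. The sketch below is not the Brainsign pipeline; it only illustrates the shape of the problem with a generic linear classifier, assuming sign-labeled feature vectors (for example, motor-cortex voxel activations extracted per trial) are available. The data here are random stand-ins.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      # Hypothetical data: one feature vector of motor-cortex voxel activations
      # per trial, labeled with the sign attempted on that trial.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 500))                       # 120 trials x 500 voxels
      y = rng.choice(["BOOK", "CAT", "WHITE"], size=120)    # attempted signs

      # Scaling plus a linear classifier; few trials and many voxels usually
      # favor simple linear models.
      clf = make_pipeline(StandardScaler(), LinearSVC(C=0.1, max_iter=5000))

      # Cross-validated accuracy estimates how separable the signs are.
      scores = cross_val_score(clf, X, y, cv=5)
      print(f"mean accuracy over 5 folds: {scores.mean():.2f}")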

    May 30, 2016: Our earbud with a brain-computer interface is described in ["Towards Mobile and Wearable Brain-Computer Interfaces"] and presented at the biannual BCI Meeting.

    [Center for Accessible Technology in Sign (CATS)]

    Through our [Center for Accessible Technology in Sign], we develop a computer-based automatic sign language recognition system and use it to create the [CopyCat] sign language game, which helps young deaf children of hearing parents acquire language skills. We are also creating the [SMARTSign app] for Android, iOS, and Google Glass, which allows hearing parents to learn sign in a convenient fashion.
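
    For those curious about the recognition machinery, isolated-sign recognition is commonly framed as training one hidden Markov model per sign over a sequence of per-frame hand features and then labeling a new sequence with the best-scoring model. The sketch below is a minimal illustration of that framing using the third-party hmmlearn package and randomly generated stand-in features; it is not the CopyCat feature pipeline.

      import numpy as np
      from hmmlearn import hmm  # pip install hmmlearn

      rng = np.random.default_rng(1)

      def fake_sequences(n, mean, length=30, dim=6):
          # Stand-in for per-frame hand features (e.g., position, shape, movement).
          return [rng.normal(loc=mean, size=(length, dim)) for _ in range(n)]

      # Hypothetical training data: several example sequences per sign.
      train = {"WHITE": fake_sequences(10, mean=0.0),
               "CAT": fake_sequences(10, mean=2.0)}

      # Train one Gaussian-emission HMM per sign.
      models = {}
      for sign, seqs in train.items():
          X = np.vstack(seqs)               # frames concatenated across sequences
          lengths = [len(s) for s in seqs]  # frame count of each sequence
          m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=25)
          m.fit(X, lengths)
          models[sign] = m

      # Classify a new sequence by the model with the highest log-likelihood.
      test_seq = fake_sequences(1, mean=2.0)[0]
      best = max(models, key=lambda sign: models[sign].score(test_seq))
      print("recognized sign:", best)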

    IEEE STC on Wearable and Ubiquitous Technology

    I am also the Chair of the IEEE Special Technical Community (STC) on Wearable and Ubiquitous Technology. Please consider participating in the [International Symposium on Wearable Computers (ISWC)] and joining the [Wearable Computing Google+ community].