Thad Starner is the director of the Contextual Computing Group and a Technical Lead/Manager on Google's Project Glass. In general, our academic research creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). Recently, we have been designing [assistive technology with the deaf community]. One of our main projects is [CopyCat], a game that uses American Sign Language recognition to help young deaf children acquire language skills. We continually develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture. Currently, we are exploring mobile interfaces that are fast to access, such as wristwatches.
Our members are some of the oldest and most active supporters of the wearable computing academic community, helping to establish and contributing to the annual International Symposium on Wearable Computers, the IEEE Wearable Information Systems Technical Committee (TCWEAR), IEEE Pervasive Computing magazine, various workshops and mailing lists, and hardware and software resources for industry and research.
For time-sensitive press or media inquiries, or to obtain images or video of my work, please contact Jason Maderer, Media Relations (email@example.com), 404-385-2966.
The May 2015 issue of National Geographic covers our work with the [Wild Dolphin Project], for which we are raising funds to create the next iteration of our hardware and analysis tools. Our research on wearable computers that enable two-way communication with dogs (FIDO), dolphins (CHAT), and other species is part of our [Animal Computer Interaction Lab], where I serve as Technical Director.
For the past five years I have been serving as a Technical Lead/Manager on [Google's Glass], which has been promoted from a Google[x] experimental project to a Google product under Tony Fadell, famous for his work on another wearable computer, Apple's iPod.
Our discovery and development of [Passive Haptic Learning] allows wearable computer users to learn complex manual skills, such as playing the piano or typing Braille, with little or no attention devoted to the learning. Our preliminary studies with people with partial spinal cord injury suggest that the same system might be used for hand rehabilitation.
At the [Center for Accessible Technology in Sign], we develop a computer-based automatic sign language recognition system and use it to create the [CopyCat] sign language game, which helps young deaf children of hearing parents acquire language skills. We are also creating the [SMARTSign app] for Android, iOS, and Google Glass, which allows hearing parents to learn sign in a convenient fashion.
I am teaching Mobile and Ubiquitous Computing this semester. See the [website].
I am also the Chair of the IEEE STC on Wearable and Ubiquitous Technology. Please consider participating in the [International Symposium on Wearable Computers (ISWC)] and joining the [Wearable Computing Google+ community].
I've been [interviewed for the Gartner Fellows program], which investigates my view of the wearable computer as a helper in the user's everyday life and what has changed over the past 20 years. I've also recently given a couple of Google Tech Talks, ["Reading Your Mind: Interfaces for Wearable Computing"], which look at our recent and upcoming research from a particular perspective suggested by one of my colleagues, Beth Mynatt.
If you are a Georgia Tech graduate or undergraduate student interested in working with me, please review our publications and web pages to see what interests you, and send an ASCII resume to both me and the lead graduate students listed on the project.