I have moved to a new position as assistant professor in EECS at Vanderbilt University. Please visit my new website here.
I was first inspired to study visual thinking in autism when I read Thinking in Pictures, an autobiography by Temple Grandin, a woman with autism who feels that she is a visual thinker. Temple describes how her predisposition towards visual thinking gives her significant benefits in her work as an animal scientist and designer of humane livestock equipment, but has also caused problems for her in other areas, such as in understanding abstract concepts that are hard to visualize. One goal of this project is to better understand how individuals who are visual thinkers process information and experience the world around them, towards the twin goals of advancing cognitive theories of autism and designing better day-to-day supports for individuals on the spectrum and their caregivers.
For more information, please visit the project website.
Visual attention impacts virtually every aspect of intelligent behavior in humans, from perception and learning to communication and social interaction; in addition, atypical patterns of visual attention are hallmarks of many neuropsychological conditions, including autism. Only very recently has technology been able to provide detailed, objective measurements of human visual attention in naturalistic settings, through the development of head-mounted cameras and eye-trackers such as the SMI Glasses, Pivothead, GoPro, Looxcie, and Google Glass. This project leverages these technologies to open new avenues for measuring and modeling human visual attention in a variety of situations.
Some biologically-inspired approaches to the problem of artificial intelligence (AI) aim to emulate the neural structure of the brain, since there is considerable evidence that the neural systems of humans (and of many other organisms as well) are fine-tuned for certain kinds of thinking and acting. However, the human brain does not learn to think and act in a vacuum. In addition to the constraints imposed by neural architecture, there are substantial external constraints on the processes of human learning and development. The physical environment, the emergence and maturation of motor and attentional skills, and interactions with social actors all play an enormous role in defining the learning scenarios that humans experience. This project uses AI models to investigate how these developmental aspects of visual learning shape cognition.
- Robust Robotics Group at MIT: path planning algorithms, user interface design, techniques for human-machine collaboration.
- Aerospace industry: design, implementation, and flight-testing of integrated architectures for autonomous sensing and guidance in small UAVs.
- Environmental Sciences Division at Oak Ridge National Lab: mathematical modeling of carbon and energy cycle impacts on climate change, image processing for studying plant biomechanics.