Areas: Interpretability of Deep Models, Vision & Language.
I am currently working on generating discriminative visual explanations for image classification models, i.e., answering the question "Where does an image classification model look in an image to predict class c1 instead of class c2, and how would the image have to change for the model to predict class c2 instead?".
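To make the flavor of such contrastive explanations concrete, below is a minimal sketch (not the method referred to above) of one way a "why c1 rather than c2?" heat map could be computed, by applying a Grad-CAM-style procedure to the score margin between the two classes. It assumes a PyTorch ResNet classifier; the image path and class indices are placeholders chosen only for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()

# Save the activations of the last convolutional block during the forward pass.
activations = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: activations.update(maps=out)
)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical input image

c1, c2 = 207, 208                          # hypothetical class indices to contrast
logits = model(img)
margin = logits[0, c1] - logits[0, c2]     # how strongly the model prefers c1 over c2

# Gradient of the margin w.r.t. the conv activations gives channel-wise importances.
grads = torch.autograd.grad(margin, activations["maps"])[0]
weights = grads.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
# 'cam' highlights regions that push the prediction toward c1 and away from c2.
```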
Previously, I worked on problems at the intersection of vision and language. Specifically, I tackled language biases in existing Visual Question Answering (VQA) datasets and introduced larger, more balanced VQA datasets for real images and abstract scenes (cartoon images) to help drive progress in VQA.
In parallel, I have worked on the interpretability of VQA models -- specifically, we studied which parts of the inputs (image and question) VQA models focus on while making a prediction, and introduced a new counter-example explanation modality for understanding VQA models better.
We counter the language priors present in the popular Visual Question Answering (VQA) dataset (Antol et al., ICCV 2015) and make vision (the V in VQA) matter! Specifically, we balance the VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset will be publicly released as part of the 2nd iteration of the Visual Question Answering Challenge (VQA v2.0).
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016
We present an approach to simultaneously perform semantic segmentation and prepositional phrase attachment resolution for captioned images. We show that our vision and language modules have complementary strengths, and that joint reasoning produces more accurate results than any module operating in isolation.
International Conference on Machine Learning (ICML) Workshop on Visualization for Deep Learning, 2016
[Best Student Paper]
Interactive Visualizations: Question and Image
In this paper, we experiment with two visualization methods -- guided backpropagation and occlusion -- to interpret deep learning models for the task of Visual Question Answering. Specifically, we find which parts of the input (pixels in the image or words in the question) the VQA model focuses on while answering a question about an image.
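As a rough illustration of the occlusion idea mentioned above (not the paper's exact procedure), the sketch below slides a gray patch over the image and records how much the probability of a given answer drops; large drops mark regions the model relies on. Here `vqa_model` is a hypothetical callable returning answer probabilities for an (image, question) pair, and the patch size and stride are illustrative.

```python
import numpy as np

def occlusion_map(vqa_model, image, question, answer, patch=16, stride=8, fill=0.5):
    """Occlusion-based importance map: importance of a region is the drop in
    the probability of `answer` when that region is masked out."""
    H, W, _ = image.shape
    base = vqa_model(image, question)[answer]          # unoccluded probability
    heat = np.zeros(((H - patch) // stride + 1,
                     (W - patch) // stride + 1))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill  # mask this region
            p = vqa_model(occluded, question)[answer]
            heat[i, j] = base - p                         # importance = probability drop
    return heat
```

Word importance in the question can be estimated analogously, by masking or dropping one word at a time and measuring the change in the answer probability.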
We balance the existing VQA dataset so that VQA models are forced to understand the image in order to perform well. We propose an approach that focuses heavily on vision and answers questions by visual verification. The dataset and code will be available soon!