Aishwarya Agrawal

Assistant Professor
Department of Computer Science and Operations Research
University of Montreal

Core Member and Canada CIFAR AI Chair
Mila -- Quebec Artificial Intelligence Institute

Research Scientist
DeepMind

Email: aishwarya -dot- agrawal -at- mila -dot- quebec


Highlights and News


Dec 2020:
Joined University of Montreal and Mila as an Assistant Professor.
Nov 2020:
Selected as a runner-up for the 2019 AAAI/ACM SIGAI Dissertation Award.
May 2020:
Selected as a recipient of the Georgia Tech 2020 College of Computing Dissertation Award.
Apr 2020:
Selected as a recipient of the Georgia Tech 2020 Sigma Xi Best Ph.D. Thesis Award.
Older items.

About Me

I am an Assistant Professor in the Department of Computer Science and Operations Research at the University of Montreal. I am also a Canada CIFAR AI Chair and a core academic member of Mila -- Quebec AI Institute. In addition, I spend one day a week at DeepMind (Montreal office) as a Research Scientist.

I am recruiting graduate students. Please submit your application via the Mila application process. Unfortunately, I will not be able to respond to individual emails.

From Aug 2019 to Dec 2020, I was a full-time Research Scientist at DeepMind (London office). I completed my PhD at Georgia Tech in Aug 2019, advised by Dhruv Batra and in close collaboration with Devi Parikh.

My research interests lie at the intersection of Computer Vision, Deep Learning, and Natural Language Processing, with a focus on developing Artificial Intelligence (AI) systems that can 'see' (i.e., understand the contents of an image: who, what, where, doing what?) and 'talk' (i.e., communicate that understanding to humans in free-form natural language).

I co-organize the annual VQA challenge and workshop.

In my spare time, I also consult informally for a startup in the pre-employment testing space.

Short Bio.

Publications

Visual Question Answering and Beyond
Aishwarya Agrawal
PhD Dissertation, 2019
[PDF]
Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning
Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni
Visually-Grounded Interaction and Language workshop (spotlight), NIPS 2018
Learning by Instruction workshop, NIPS 2018
[ArXiv]
Overcoming Language Priors in Visual Question Answering with Adversarial Regularization
Sainandan Ramakrishnan, Aishwarya Agrawal, Stefan Lee
Neural Information Processing Systems (NIPS), 2018
[ArXiv]
Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering
Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018
[ArXiv | Project Page]
Resolving Language and Vision Ambiguities Together: Joint Segmentation & Prepositional Attachment Resolution in Captioned Scenes
Gordon Christie*, Ankit Laddha*, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, Dhruv Batra

*equal contribution

Computer Vision and Image Understanding (CVIU), 2017
[ArXiv | Project Page]
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset
Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, Devi Parikh
CoRR, abs/1704.08243, 2017
[ArXiv]
VQA: Visual Question Answering
Aishwarya Agrawal*, Jiasen Lu*, Stanislaw Antol*, Margaret Mitchell, Larry Zitnick, Devi Parikh, Dhruv Batra

*equal contribution

Special Issue on Combined Image and Language Understanding, International Journal of Computer Vision (IJCV), 2017
[ArXiv | visualqa.org (data, code, challenge) | slides | talk at GPU Technology Conference (GTC) 2016]
Analyzing the Behavior of Visual Question Answering Models
Aishwarya Agrawal, Dhruv Batra, Devi Parikh
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016
[ArXiv | slides | talk at Deep Learning Summer School, Montreal, 2016]
Resolving Language and Vision Ambiguities Together: Joint Segmentation & Prepositional Attachment Resolution in Captioned Scenes
Gordon Christie*, Ankit Laddha*, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, Dhruv Batra

*equal contribution

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016
[ArXiv | Project Page]
Measuring Machine Intelligence Through Visual Question Answering
Larry Zitnick, Aishwarya Agrawal, Stanislaw Antol, Margaret Mitchell, Dhruv Batra, Devi Parikh
AI Magazine, 2016
[Paper | ArXiv]
Visual Storytelling
Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, Larry Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, Margaret Mitchell
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), 2016
[ArXiv | Project Page]
VQA: Visual Question Answering
Stanislaw Antol*, Aishwarya Agrawal*, Jiasen Lu, Margaret Mitchell, Dhruv Batra, Larry Zitnick, Devi Parikh

*equal contribution

International Conference on Computer Vision (ICCV), 2015
[ ICCV Camera Ready Paper | ArXiv | ICCV Spotlight | visualqa.org (data, code, challenge) | slides | talk at GPU Technology Conference (GTC) 2016]
A Novel LBP Based Operator for Tone Mapping HDR Images
Aishwarya Agrawal, Shanmuganathan Raman
International Conference on Signal Processing and Communications (SPCOM), 2014
[Paper | Poster]
Optically clearing tissue as an initial step for 3D imaging of core biopsies to diagnose pancreatic cancer
Ronnie Das, Aishwarya Agrawal, Melissa P. Upton, Eric J. Seibel
SPIE BiOS, International Society for Optics and Photonics, 2014
[Paper]

Videos and Talks