James Hays
Associate Professor, School of Interactive Computing, College of Computing, Georgia Institute of Technology
Principal Scientist, Argo AI

My research interests span computer vision, robotics, and machine learning. I work on problems such as object detection, tracking, and localization. My research often involves finding new data sources to exploit (e.g. geotagged imagery) or creating new data sets where none existed (e.g. sketches or grasp contact maps).

Before joining Georgia Tech, I was the Manning Assistant Professor of Computer Science at Brown University. I was a postdoc at MIT with Antonio Torralba, completed my Ph.D. at Carnegie Mellon University with Alexei Efros, and received my B.S. from Georgia Tech. I am the recipient of the Alfred P. Sloan Fellowship and the NSF CAREER award.

Contact
email: hays@gatech.edu
office: CODA 11th floor
office hours: TBD
mail: 756 West Peachtree St NW, Suite 12E
Atlanta, GA 30308

Teaching

Students and Collaborators

Ph.D. Students

Graduated Ph.D. Students

Previous Postdoc

Visiting Students

Master's Student Researchers

  • Akash Kumar, Shenhao Jiang, Kapilan Baskar, Vishwas Uppoor
  • alumni: Nitin Kodialbail, Jianan Gao, Govin Vatsan, Vasavi Gajarla, Laura Jeyaseelen, Varun Agrawal, Nate Burnell, Xiaofeng Tao, Chao Qian, Chen Xu, Yipin Zhou, Hang Su, Vibhu Ramani, Paul Sastrasinh, Vazheh Moussavi, Yun Zhang, David Dufresne, Sirion Vittayakorn

Undergraduate Researchers

  • alumni: Wenqi Xian, Cusuh Ham, Lawrence Moore, Sonia Phene, Eric Jang, Hari Narayanan, Sam Birch, Leela Nathan, Eli Bosworth, Jung Uk Kang, Reese Kuppig, Fuyi Huang, Travis Webb

Recorded Talks

Highlighted Recent Papers

PVA: Pixel-aligned Volumetric Avatars.
Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, and Stephen Lombardi.
arXiv preprint arXiv:2101.02697, January 2021.

Project page, arXiv
Scene Flow from Point Clouds with or without Learning.
Jhony Kaesemodel Pontes, James Hays, and Simon Lucey.
3DV 2020 oral.

Project page, arXiv
3D for Free: Crossmodal Transfer Learning using HD Maps.
Benjamin Wilson, Zsolt Kira, and James Hays.
arXiv preprint arXiv:2008.10592, August 2020.

arXiv
TIDE: A General Toolbox for Identifying Object Detection Errors.
Daniel Bolya, Sean Foley, James Hays, and Judy Hoffman.
ECCV 2020 spotlight.

Project page, arXiv
ANR: Articulated Neural Rendering for Virtual Avatars.
Amit Raj, Julian Tanke, James Hays, Minh Vo, Carsten Stoll, and Christoph Lassner.
arXiv preprint arXiv:2012.12890, December 2020.

Project page, arXiv
Computational discrimination between natural images based on gaze during mental imagery.
Xi Wang, Andreas Ley, Sebastian Koch, James Hays, Kenneth Holmqvist, and Marc Alexa.
Scientific Reports, August 2020.
Open Access Article

Related earlier conference paper:

The Mental Image Revealed by Gaze Tracking.
Xi Wang, Andreas Ley, Sebastian Koch, David Lindlbauer, James Hays, Kenneth Holmqvist, and Marc Alexa.
CHI 2019.

Project Page, ML@GT blog post
ContactPose: A Dataset of Grasps with Object Contact and Hand Pose.
Samarth Brahmbhatt, Chengcheng Tang, Chris Twigg, Charlie Kemp, and James Hays.
ECCV 2020.

Project page, arXiv
MSeg: A Composite Dataset for Multi-domain Semantic Segmentation.
John Lambert, Zhuang Liu, Ozan Sener, James Hays, and Vladlen Koltun.
CVPR 2020.

Project page, Paper
ContactGrasp: Functional Multi-finger Grasp Synthesis from Contact.
Samarth Brahmbhatt, Ankur Handa, James Hays, and Dieter Fox.
IROS 2019.

Project page, arXiv paper
Towards Markerless Grasp Capture.
Samarth Brahmbhatt, Charlie Kemp, and James Hays.
CVPR 2019 CV for AR/VR Workshop.

Project page, arXiv
Argoverse: 3D Tracking and Forecasting With Rich Maps.
Ming-Fang Chang*, John Lambert*, Patsorn Sangkloy*, Jagjeet Singh*,
Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, and James Hays.
*co-first authors
CVPR 2019 oral.

Paper, Argoverse project page and data, API code (Github)
ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging.
Samarth Brahmbhatt, Cusuh Ham, Charlie Kemp, and James Hays.
CVPR 2019 oral and best paper finalist.

Project page, Blog post
Composing Text and Image for Image Retrieval - An Empirical Odyssey.
Nam Vo, Lu Jiang, Chen Sun, Kevin Murphy, Li-Jia Li, Li Fei-Fei, and James Hays.
CVPR 2019 oral.

Paper (arXiv), Code (Github)
Generalization in Metric Learning: Should the Embedding Layer be the Embedding Layer?
Nam Vo and James Hays.
WACV 2019.

Paper (arXiv), Code (Github)
Revisiting IM2GPS in the Deep Learning Era.
Nam Vo, Nathan Jacobs, and James Hays.
ICCV 2017.

Project Page, Paper (arXiv)
Scribbler: Controlling Deep Image Synthesis with Sketch and Color.
Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays.
CVPR 2017.

Project Page, Paper (arXiv), Adobe Max Demo

The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies.
Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays.
SIGGRAPH 2016.

Project Page, Paper

Earlier Papers

Support

My research has been funded by a Sloan Fellowship, an NSF CAREER award (1149853), an NSF Medium grant (1563727), IARPA's Finder program (FA8650-12-C-7212), and gifts from Intel, Google, Microsoft, Pixar, Adobe, and Argo AI.