Advancing Computer Vision with Humans in the Loop (ACVHL)

in conjunction with CVPR 2010

Motivation

There are several ways in which the development of machine algorithms for analyzing images can benefit from human involvement. Understanding the human visual recognition system can provide valuable insights for advancing machine vision. Humans can collect and label data, a task made easier by the increasing popularity of tools such as LabelMe and Amazon Mechanical Turk. Human studies can evaluate algorithms for super-resolution, segmentation, matting, and related tasks. Other avenues for exploiting human involvement likely remain unexplored. This workshop provides a forum for work that actively keeps humans in the loop to advance computer vision.

Call for Papers

Papers must describe high-quality, original, and novel research. Topics of interest span all areas in which human interaction is exploited to advance computer vision, including improved algorithm design, data collection, and evaluation.

Some specific areas of interest include, but are not limited to:

Submissions must be electronic, in PDF, and compliant with the standard CVPR format, with a maximum of eight (8) pages. Paper submission is now open!

Important Dates

March 24, 2010 Deadline for paper submission (11:59 pm EDT)
April 8, 2010 Notification of decision
April 21, 2010 Camera-ready copies due (5:00 pm EDT)
June 14, 2010 Workshop (Full Day)

Workshop Organizers
Devi Parikh dparikh@ttic.edu Toyota Technological Institute at Chicago
Andrew Gallagher andrew.c.gallagher@gmail.com Eastman Kodak Company
Tsuhan Chen tsuhan@ece.cornell.edu Cornell University

Program Committee

Serge Belongie    
Piotr Dollar    
Rob Fergus    
Kristen Grauman    
Derek Hoiem    
Gang Hua    
Ashish Kapoor    
Svetlana Lazebnik    
Jiebo Luo    
Pietro Perona    
Bryan Russell    
Greg Shakhnarovich    
Noah Snavely    
Alexander Sorokin    
Rahul Sukthankar    
Sinisa Todorovic    
Antonio Torralba    
Larry Zitnick    


Program


08:30 am to 08:35 am: Opening remarks

08:35 am to 09:20 am: Keynote: Mine is Bigger than Yours: Big Datasets in Computer Vision (David Forsyth)

09:20 am to 09:45 am: The Benefits and Challenges of Collecting Richer Object Annotations (Ian Endres, Ali Farhadi, Derek Hoiem and David Forsyth)

09:45 am to 10:10 am: The HPU (James Davis, Joan Arderiu, Henry Lin, Zeb Nevins, Sebastian Schuon, Orazio Gallo and Ming-Hsuan Yang)


10:10 am to 10:30 am: Coffee break


10:30 am to 11:15 am: Keynote: What do you See and Remember when you Glance at a Scene? (Aude Oliva)

11:15 am to 11:40 am: Hands by Hand: Crowd-sourced Motion Tracking for Gesture Annotation (Ian Spiro, Graham Taylor, George Williams and Christoph Bregler)

11:40 am to 12:05 pm: Online Crowdsourcing: Rating Annotators and Obtaining Cost-effective Labels (Peter Welinder and Pietro Perona)


12:05 pm to 02:20 pm: Lunch


02:20 pm to 03:05 pm: Keynote: Title TBD (Fei-Fei Li)

03:05 pm to 03:30 pm: Interactive Semantic Camera Coverage Determination Using 3D Floorplans (Ish Rishabh and Ramesh Jain)


03:30 pm to 04:00 pm: Coffee break


04:00 pm to 04:45 pm: Keynote: Beware of the Human in the Loop (Antonio Torralba)

04:45 pm to 05:15 pm: Indoor-Outdoor Classification with Human Accuracy: Image or Edge Gist? (Christina Pavlopoulou and Stella Yu)

05:15 pm to 06:00 pm: Keynote: Visual Recognition with Humans in the Loop (Serge Belongie)