
Zhaoyang Lv  


Research Assistant, Ph.D. in Robotics, Georgia Institute of Technology
Wall Lab & BORG Lab, Institute for Robotics and Intelligent Machines

Previous Education:
M.Sc., Computing (Artificial Intelligence), Imperial College London
B.Sc., Electrical Engineering in Aeronautics, Northwestern Polytechnical University


I am a Ph.D. student in Robotics at the Georgia Tech School of Interactive Computing, jointly advised by Prof. James Rehg and Prof. Frank Dellaert. Before I started my Ph.D. at Georgia Tech, I completed my Master's thesis under the supervision of Prof. Andrew Davison at Imperial College London.

Currently, I am a Ph.D. intern working with Prof. Andreas Geiger in the Autonomous Vision Group (AVG) at the Max Planck Institute for Intelligent Systems.

My research interests cover computer vision as perception for robot systems, particularly 3D scene understanding that can bridge perception to planning and control. My current focus is on efficient approaches for estimating dense 3D motion (scene flow) from video, which is the core of my thesis. My strengths are in the following fields:

  • 3D Scene Flow, Optical Flow and Stereo
  • Semantic Scene Understanding
  • Structure from Motion, Simultaneous Localization and Mapping

  • Office Address:
    College of Computing Building /
    Robotics & Intelligent Machines Center
    801 Atlantic Drive, Rm. 273B
    Atlanta, Georgia, U.S., 30308

    Email: lvzhaoyang1990 at gmail dot com
    zhaoyang dot lv at gatech dot edu

    Mobile: 404-345-8841

    Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation

    Zhaoyang Lv, Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M. Rehg, Jan Kautz
    European Conference on Computer Vision 2018, arXiv:1804.04259
    A blog post on Machine Learning @ Georgia Tech
    Project Page, Video, Code: Rigidity and scene flow; RefRESH dataset creation toolkit
    Nvidia GTC 2018 Presentation Slides (Credit to Kihwan)

    A Continuous Optimization Approach for Efficient and Accurate Scene Flow

    Zhaoyang Lv, Chris Beall, Pablo F. Alcantarilla, Fuxin Li, Zsolt Kira, Frank Dellaert
    European Conference on Computer Vision 2016, arXiv:1607.07983
    Project Page

    Learning to Cluster in Order to Transfer across Domains and Tasks

    Yen-Chang Hsu, Zhaoyang Lv, Zsolt Kira
    International Conference on Learning Representations 2018 (arXiv:1711.10125)
    Code, A blog post on Machine Learning @ Georgia Tech

    Deep Image Category Discovery using a Transferred Similarity Function

    Yen-Chang Hsu, Zhaoyang Lv, Zsolt Kira
    arXiv:1612.01253


    Large-Scale Collaborative Semantic Mapping using 3D Structure from Motion Data

    Advisors: Prof. James Rehg, Prof. Frank Dellaert, Dr. Zsolt Kira

    An NSF project I am currently working on. My focus is dense 3D scene flow and dynamic 3D mapping.



    KinfuSeg: A Dynamic SLAM Approach Based on KinectFusion

    Master Thesis, Imperial College London
    Distinguished Thesis in the Department of Computing (3 of 71, top 5%)
    Advisor: Prof. Andrew Davison

    Traditional SLAM methods work under the assumption that the environment is entirely static; when the scene is dynamic, both tracking and mapping fail. This system achieves:

    • Tracking of the static scene while segmenting out dynamic objects.
    • The first real-time solution for fusing dense 3D maps of both static and dynamic scenes.




    Quadrotor Design and its Navigation

    Bachelor Thesis, Northwestern Polytechnical University
    Advisor: Prof. Zhenbao Liu, Prof. Shuhui Bu, Prof. Xiaojun Tang

    The goal of this project was to build a quadrotor with a basic navigation and flight control system. The quadrotor achieves stable flight under joystick control and can hover autonomously.



    Autonomous Vision Group, Max Planck Institute for Intelligent Systems, Tübingen, Present

    Advisor: Prof. Andreas Geiger

    Nvidia Research, Santa Clara, May 2017 - Aug. 2017

    Learning Dense Scene Flow Fields from RGB-D Images
    Director: Jan Kautz
    Mentors: Kihwan Kim, Deqing Sun, Alejandro Troccoli
    I worked in the Visual Computing and Machine Learning Research group on the problem of learning scene flow from RGB-D images. Our learning-based approach outperforms all existing approaches in accuracy and robustness, and generalizes well to different challenging scenes.

    Qualcomm Research, Greater San Diego, May 2016 - Aug. 2016

    Sensor Fusion & Planning in Autonomous Vehicle
    Manager: Dr. Ali Agha
    I worked in the Sensor Fusion and Motion Planning group, where I proposed a factor graph representation for a joint intention prediction and motion planning algorithm.
  • Submitted two first-author patents on joint intention prediction and motion planning, both of which have been approved.
  • Participated in a 16-hour Qualcomm Hack-Mobile event, where our proposed disaster-rescue on-site system was a finalist (3 of 50 teams).

    Zhejiang University, Hangzhou, Dec. 2013 - July 2014

    RGB-D Real-time Reconstruction at Extended Scale
    Mentor: Prof. Guofeng Zhang

    Teaching assistant for CS 7643 Deep Learning, Georgia Tech, Fall 2017

    Instructor: Prof. Dhruv Batra

    Teaching assistant for CS 4476 / 6476 Computer Vision, Georgia Tech, Fall 2016

    Instructor: Prof. James Hays

    Vice President of Public Relations for RoboGrads, Georgia Tech, Fall 2016 - Spring 2017


    Organizer for GT Computer Vision Reading Group, Georgia Tech, Spring 2015 - Present

    I have organized the CPL reading group, a computer vision research discussion group based in the Computational Perception Lab (CPL), since 2015; it now draws active participation from students working on computer vision research in labs across campus. If you are interested in joining or receiving future notifications, please join our Google group (it is open access; you can enroll yourself with your Gmail account).


