[NEW] Scene spatial disorder detection and reduction
I am a graduate student at the Georgia Institute of Technology in Atlanta, GA, where I am earning my PhD in Robotics from the College of Computing. My research focuses on developing integrated perception for robots that leverages their interactions. Previously, I was a Robotics Engineer at Intelligent Automation, Inc. in Rockville, MD, where I developed autonomous vehicle and smart device solutions under DoD research grants, specializing in sliding autonomy systems that let teleoperators interactively teach robots tasks, and in computer vision methods that improve teleoperation of field robots.
Before that, I earned my Master's degree in Robotics from the University of Pennsylvania in Philadelphia, PA, and my Bachelor of Science degree in Physics and Mathematics from Georgetown University in Washington, D.C. For more details, check out my C.V.!
Just a few things that I have been up to:
Here are a few projects I have worked on!
The ability of a robot to reason about the geometry and semantics of its environment is fundamental to interactive robot behaviors, but it is often hampered by perception frameworks trained on too little data, or on data not representative of the robot's environment. In this work, we investigate the potential gains of using synthetic data to augment the training of convolutional neural networks designed to enable real-time semantic segmentation for robots with limited real-world training data. We investigate the degree to which larger amounts of synthetic data improve performance when training such a model, examine how pretraining on multiple sources of synthetic segmentation data affects fine-tuning on standard segmentation datasets relevant to robotics and autonomous driving, and show that our method outperforms both training from scratch and standard data augmentation practices such as pretraining on ImageNet. We show that synthetic data continues to improve these models even though real-time model architectures have many fewer parameters than typical deep neural networks, and therefore hypothetically less representational power. Finally, we show how this approach generalizes to small, purpose-built robot vision datasets, using data acquired with an HRI robot.
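Semantic segmentation models like the ones described above are typically compared using mean intersection-over-union (mIoU) over the class set. As a rough illustration (not the paper's actual evaluation code, and the function name and toy label maps are my own), here is a minimal NumPy sketch of per-class IoU averaged into mIoU, skipping classes absent from both prediction and ground truth so they do not distort the score:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for integer label maps.

    pred, target: arrays of class indices with the same shape.
    Classes absent from both prediction and ground truth are
    skipped so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears nowhere: skip it
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 "images": a perfect prediction scores an mIoU of 1.0
target = np.array([[0, 1], [1, 2]])
print(mean_iou(target, target, num_classes=3))  # → 1.0
```

A practical detail is the `union == 0` guard: when comparing a model fine-tuned on a large dataset against one evaluated on a small robot-specific dataset, many classes may never appear, and averaging in undefined (0/0) scores for them would make the comparison meaningless.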
If you have any questions for me please feel free to e-mail me at firstname.lastname@example.org.