Research Topics

Below I provide details on different topics in my research:

  • Multi-Robot Robust Inference

    Distributed Multi-Robot Robust Inference and Data Association
    We consider multi-robot inference over variables of interest given unknown initial robot poses and undetermined data association. This problem is relevant to different multi-robot collaborative applications, such as cooperative mapping, localization, tracking, and surveillance. Collaboration, however, requires the robots to share a common world model and to correctly interpret the information they communicate to each other. We show that establishing this collaboration first requires concurrently inferring a common reference frame between the robots and resolving data association, and we formulate this problem within an Expectation-Maximization (EM) framework.

    Below is a demonstration of this approach for two quadrotors (colored blue and red) sharing informative laser scans. As the robots have not established a common reference frame, their initial poses are set to arbitrary values, while in practice both robots start operating from the same location. Multi-robot candidate correspondences are generated by ICP-matching the shared laser scans. After some time (36 seconds, and again at 1 minute in the movie below), our approach successfully estimates the initial relative pose between the robots and determines the data association (in terms of inlier and outlier multi-robot correspondences). As seen, the estimated initial relative pose correctly positions the robot trajectories, which now indeed start from the same location. From that moment, the robots can robustly infer the variables of interest, in this case each other's trajectories, and identify the inlier correspondences in newly arriving data (denoted in black). The movie shows the process from the perspective of each robot (first the red and then the blue robot).
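
    As an illustration of the EM idea, here is a minimal sketch (not the paper's algorithm) that alternates between classifying candidate correspondences as inliers or outliers (E-step) and re-estimating a 2D relative translation between the two robots (M-step); the function name, noise model, and outlier model are illustrative assumptions:

```python
import numpy as np

def em_relative_pose(candidates, sigma=0.5, p_outlier=0.2, iters=20):
    """Toy EM: jointly estimate a 2D relative translation between two
    robots and classify candidate correspondences as inliers/outliers.
    `candidates` is an (N, 2) array of relative-translation measurements
    (e.g. from ICP matches). Illustrative sketch, not the paper's method."""
    t = np.median(candidates, axis=0)          # robust initialization
    outlier_density = 1e-3                     # broad uniform outlier model
    for _ in range(iters):
        # E-step: posterior inlier probability for each candidate
        r2 = np.sum((candidates - t) ** 2, axis=1)
        lik_in = (1 - p_outlier) * np.exp(-0.5 * r2 / sigma**2)
        lik_out = p_outlier * outlier_density
        w = lik_in / (lik_in + lik_out)
        # M-step: weighted re-estimate of the relative translation
        t = (w[:, None] * candidates).sum(axis=0) / w.sum()
    return t, w > 0.5                          # estimate + inlier mask
```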

    Related publications:
    [ICRA 2014]

    Back to top

  • GBS

    Planning and Control in the Generalized Belief Space
    Planning is an important component in robot navigation and manipulation, and it is crucial in applications in which the robot operates with full or partial autonomy, e.g., multi-robot exploration, autonomous surveillance, and robotic surgery.
    The complexity of the planning problem stems from the fact that (i) the robot dynamics are stochastic; and (ii) in most practical applications, the state of the robot is not directly observable and can only be inferred from observations. Planning under these sources of uncertainty is known as a partially observable Markov decision process (POMDP).

    Here we introduce the concept of the generalized belief space (GBS), which represents both the state of the robot and the state of the surrounding environment in which the robot operates. Such a representation makes it possible to relax the typical assumption of a known environment (e.g., a given map) and facilitates planning in the absence of sources of absolute information (e.g., GPS) in uncertain and partially unknown environments.

    Below is a demonstration of this concept in a robotic scenario where the robot has to visit a number of goals (denoted in green) while operating in an unknown environment and in the absence of absolute information (e.g., no GPS). The planning algorithm trades off reaching the goals, control effort, and uncertainty. Whenever the uncertainty exceeds a threshold, the robot is guided towards previously observed regions to perform loop closure, thereby reducing uncertainty.
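
    To make the trade-off concrete, here is a minimal sketch of such an objective; the specific cost terms, weights, and the threshold-triggered loop-closure behavior are assumptions for illustration, not the exact formulation of the papers:

```python
import numpy as np

def gbs_cost(mean, cov, goal, u, w_goal=1.0, w_ctrl=0.1, w_unc=0.5):
    """Illustrative single-step cost in the spirit of the GBS objective:
    trade off distance-to-goal, control effort, and state uncertainty
    (trace of the covariance). Weights and terms are assumptions."""
    return (w_goal * np.linalg.norm(mean + u - goal)
            + w_ctrl * np.linalg.norm(u)
            + w_unc * np.trace(cov))

def choose_control(mean, cov, goal, candidates, unc_threshold=2.0,
                   anchor=None):
    """If uncertainty exceeds a threshold, steer toward a previously
    observed region (`anchor`) to enable loop closure; otherwise pick
    the candidate control minimizing the cost."""
    target = anchor if (anchor is not None
                        and np.trace(cov) > unc_threshold) else goal
    return min(candidates, key=lambda u: gbs_cost(mean, cov, target, u))
```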

    Related publications:
    [ICRA 2014, ISRR 2013]

    Back to top

  • Incremental Light Bundle Adjustment

    In incremental light bundle adjustment (iLBA) we combine two key ideas: a) 3D points are algebraically eliminated using multi-view constraints, which greatly reduces the number of variables in the optimization; if required, all or some of the 3D points can be reconstructed from the optimized cameras. b) Incremental smoothing is applied to efficiently incorporate new incoming images; typically, only a small subset of the camera poses needs to be updated. The iLBA framework [BMVC 2012] greatly reduces computational complexity while maintaining accuracy comparable to conventional bundle adjustment.
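
    The structureless idea behind eliminating 3D points can be illustrated with a simple two-view (epipolar) residual that constrains two camera poses directly through a matched image point, without instantiating the landmark; this is a sketch of the principle, not the exact iLBA multi-view constraints:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_residual(R1, t1, R2, t2, x1, x2):
    """Structureless two-view residual: instead of adding the 3D point
    as a variable, constrain the two camera poses via x2' E x1 = 0,
    with the essential matrix E built from the relative pose.
    Poses are world-to-camera (x_c = R x_w + t); x1, x2 are normalized
    (calibrated) homogeneous image points."""
    R_rel = R2 @ R1.T                  # rotation from camera-1 to camera-2
    t_rel = t2 - R_rel @ t1            # relative translation
    E = skew(t_rel) @ R_rel            # essential matrix
    return x2 @ E @ x1                 # zero for a true correspondence
```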

    Probabilistic aspects of iLBA are analyzed in [WORV 2013], which demonstrates that the first two moments of the iLBA probability distribution are very similar to those of the true distribution, calculated from conventional bundle adjustment. In other words, in addition to high-accuracy results, iLBA also produces reliable covariance estimates.

    Code and datasets are publicly available (Software page).

    Demonstration: Indoor and outdoor datasets

    iLBA can also be used in robot navigation applications where other sensors, in addition to a monocular camera, are available. In our recent paper [IROS 2013] we discuss a method to integrate iLBA with high-rate IMU measurements. Using an equivalent IMU factor (as discussed here) allows adding variables to the optimization only at camera rate, while the LBA framework eliminates the need for 3D points as variables to support loop closures. These two elements lead to improved computational complexity compared to state-of-the-art methods.

    Related publications: [IROS 2013, WORV 2013, BMVC 2012, PLANS 2012]

    Back to top

  • Factor Graph based Incremental Smoothing in Navigation Systems

    In this DARPA-funded project, we collaborate with SRI International to develop a plug-and-play framework for navigation. The goal is to produce the best possible solution in real time based on different multi-rate and asynchronous sensors that may become inactive and/or be resurrected at any time. A factor graph formulation is used to represent the joint probability function, and an efficient inference algorithm is used to calculate the MAP estimate given measurements from the different sensors.

    Following a recently-developed IMU pre-integration theory, an equivalent IMU factor is introduced that summarizes consecutive IMU measurements into a single non-linear factor, which can be re-linearized if required. This factor is incorporated into the optimization whenever measurements from other sensors are received, while a high-rate navigation solution is continuously obtained by composing the last navigation state in the factor graph with the current summarized IMU measurements. This is in contrast to the commonly used navigation-aiding approach, where IMU measurements are processed outside of the estimator, without the ability to re-linearize past IMU measurements. The figure below illustrates a factor graph that accommodates factors from different sensors, and in particular equivalent IMU factors. See [RAS 2013, Fusion 2012a] for further details.
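
    The pre-integration idea can be sketched as follows: a run of high-rate IMU samples is summarized into a single relative motion increment (rotation, velocity, and position deltas) that does not depend on the unknown initial navigation state, so one equivalent factor can be added at camera rate. Gravity, noise, and bias handling are omitted here for brevity:

```python
import numpy as np

def expm_so3(phi):
    """Rodrigues formula for the SO(3) exponential map."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(accels, gyros, dt):
    """Summarize consecutive IMU samples into a relative motion
    increment (dR, dv, dp) in the frame of the first sample.
    Simplified sketch: gravity and biases are not modeled."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(accels, gyros):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ expm_so3(w * dt)
    return dR, dv, dp
```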

    Code is publicly available (Software page).

    Factor Graph

    To produce high-frequency results also in the presence of loop-closure observations, concurrent filtering and smoothing has been developed [IJRR 2014, Fusion 2012b]: high-rate measurements are processed by a filter, while expensive measurements (e.g., loop closures) are processed by the smoother. The two processes represent a single optimization problem and are kept consistently synchronized.

    Related publications: [IJRR 2014, RAS 2013, Fusion 2012a, Fusion 2012b]

    Back to top

  • Distributed Consistent Cooperative Localization and Navigation

    A multi-agent scenario is considered, in which the different robots share information to improve localization/navigation and extend sensing. A graph-based approach was developed to guarantee consistent information fusion between the different robots, assuming a general multi-robot measurement model. Using the graph structure, separately maintained by each robot, the appropriate correlation terms are calculated on demand and used within the update step of the filter. The method [IJRR 2012] is also applicable to implicit measurement models, and in particular to three-view geometry constraints (more details). Such an approach was developed in [RAS 2012], where the three-view constraints are applied whenever the robots observe a common scene. Notably, the scene does not necessarily have to be observed by the robots at the same time.

    Related publications: [IJRR 2012, RAS 2012, PLANS 2012, ICRA 2011]

    Consistent decentralized data fusion (DDF) is also studied within the smoothing and mapping framework [ICRA 2013]. Here, the robots share certain variables of choice, such as observed 3D points, to both extend the sensing horizon and improve localization and mapping. Consistent information fusion is guaranteed by explicitly avoiding using the same observation more than once (i.e., double counting), via information down-dating, which is expressed in graphical models by anti-factors. Information summarization techniques are developed to efficiently retrieve the probabilistic information to be shared from the local factorized joint probability distribution, represented by a Bayes net.
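
    The double-counting issue can be illustrated in the information (canonical) form of the Gaussian: fusing two robots' estimates requires subtracting the information they both already hold, which is the role the anti-factor plays. Below is a minimal sketch under the assumption of a shared variable ordering; it is an illustration of the principle, not the paper's algorithm:

```python
import numpy as np

def ddf_fuse(Lam_a, eta_a, Lam_b, eta_b, Lam_common, eta_common):
    """Fuse two canonical-form Gaussians (information matrix Lam,
    information vector eta) while down-dating the information both
    robots already share, so no observation is counted twice.
    The subtracted term plays the role of an 'anti-factor'."""
    Lam = Lam_a + Lam_b - Lam_common
    eta = eta_a + eta_b - eta_common
    return Lam, eta
```

    In the test below, two robots hold the same prior plus one private measurement each; naive addition would count the prior twice (overconfidence), while the down-dated fusion matches the centralized result.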

    Related publications: [ICRA 2013]

    Back to top

  • Three-View Geometry for Navigation-Aiding

    In this research, we introduced the use of three-view constraints for navigation-aiding in a monocular camera configuration. A new formulation of the three-view constraints was developed that includes, in addition to the well-known epipolar constraints, a new constraint that maintains a consistent scale even in co-linear camera configurations (which is not possible with epipolar constraints alone). Given three overlapping images and the associated navigation solutions, the three-view constraints allow position errors in all axes to be reduced to the levels present when the first two images were captured. The developed approach [TAES 2012] eliminates the need for the intermediate step of estimating 3D points and can be used for navigation-aiding based on nearby overlapping imagery as well as loop closures. In the latter case, only three images are required to incorporate loop-closure information into the navigation system, as opposed to processing all images in the loop chain, as conventionally done in other methods.
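
    Assuming the line-of-sight vectors to a common landmark are expressed in a single frame (i.e., the rotations are known), the constraints can be sketched as two epipolar residuals plus a third residual that ties the magnitudes of the two translations together, which is what preserves scale in co-linear configurations. This is an illustrative sketch in the spirit of the formulation, with hypothetical variable names:

```python
import numpy as np

def three_view_residuals(q1, q2, q3, t12, t23):
    """Residuals of three-view constraints for one landmark.
    q1, q2, q3: line-of-sight vectors to the same landmark from the
    three camera positions, in a common frame (arbitrary scale).
    t12, t23: camera translations between views 1-2 and 2-3.
    r1, r2 are epipolar constraints; r3 couples the translation
    magnitudes, fixing the relative scale."""
    r1 = q1 @ np.cross(t12, q2)
    r2 = q2 @ np.cross(t23, q3)
    r3 = (np.cross(q2, q1) @ np.cross(q3, t23)
          - np.cross(q1, t12) @ np.cross(q3, q2))
    return np.array([r1, r2, r3])
```

    Note that the epipolar residuals are invariant to scaling the translations, while the third residual is not; in the co-linear test below, doubling t23 leaves r1 and r2 at zero but makes r3 non-zero.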

    Three-view geometry constraints are also applied in distributed cooperative navigation (more details) and in incremental light bundle adjustment (iLBA) (more details).

    Related publications: [TAES 2012, Aerospace 2011]

    Back to top

  • Online Mosaicking & Navigation-Aiding

    This research investigated how online mosaicking, based on imagery captured by an onboard camera, can be used for navigation aiding. In particular, introducing a coupling between a gimballed camera, which scans ground regions in the flight vicinity, and the mosaicking process resulted in improved image-based motion estimates when operating in low-texture environments. These estimates were fused with an inertial navigation system, thereby leading to improved performance.

    One of the generated mosaic images in [JGCD 2010]:

    Related publications: [JGCD 2010, PLANS 2010, GNC 2009a, GNC 2009b]

    Back to top