Geometric
Computing Challenges in Micro and Nano Manipulation Using Optical
Tweezers
Presenter: S.K. Gupta,
University of Maryland at College Park
Collaborators: Tao Peng, University of Maryland at College Park
Tom LeBrun and Arvind Balijepalli, National Institute of Standards and
Technology
Our group is currently investigating the use of laser-based optical
tweezers to perform nano and micro manipulation. Our plan is to use
optical tweezers to perform autonomous assembly. Basically, we are
trying to trap very small components using lasers and move them into
certain geometric configurations autonomously. The eventual goal is to
use this method to prototype nanoscale electronic and photonic
components. This method can also be used to study cell and drug
interactions.
People have successfully operated optical tweezers manually to
manipulate a wide variety of objects. However, manual operation requires
considerable expertise and limits the complexity of the devices that can
be successfully assembled. Therefore, our main emphasis is on achieving
a very high degree of autonomy in manipulation tasks: the human operator
defines the tasks, and the system performs them autonomously. In our
pursuit of autonomous assembly, we have
identified a number of challenges in the areas of shape modeling and
representation that need to be addressed.
Assembly manipulation tasks are performed in a fluidic medium, and due
to Brownian motion everything moves constantly. Therefore, to
achieve autonomy we first need to build an on-line monitoring system
that can construct and update a 3D model of the workspace at least at
video rate (we would like to achieve update rates of 50 Hz). This on-line
monitoring system will need to (1) construct the 3D shapes of components,
(2) identify them, and (3) track their 3D positions and orientations
in the workspace. Optical section microscopy is a promising
technique for accomplishing this. However, it may not be possible to
image the entire cross section of the workspace simultaneously due to
limitations of camera resolution. Therefore, many challenging questions
need to be answered. What is the optimal way to image the workspace?
How can out-of-focus images be used to recover depth with an optical
model? How should the imaging system be calibrated? How can the
information be processed fast enough to achieve updates at 50 Hz? How
can highly compliant and translucent structures be recognized?
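To make the depth-recovery question concrete, one common approach selects, for each image region, the slice in an optical-section stack where a sharpness metric peaks. The sketch below is a minimal illustration only, not the actual system described here; the stack layout, slice spacing, and choice of focus metric (variance of a discrete Laplacian) are assumptions:

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian: higher values mean sharper focus."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def estimate_depth(stack, z_spacing_um):
    """Pick the slice with the highest focus measure in an optical-section
    stack (shape: num_slices x H x W) and return its depth in micrometers."""
    scores = np.array([focus_measure(s) for s in stack])
    return int(scores.argmax()) * z_spacing_um

# Synthetic example: slice 3 of 8 carries a high-frequency checkerboard
# pattern (in focus); the other slices are uniform (out of focus).
stack = np.zeros((8, 32, 32))
yy, xx = np.indices((32, 32))
stack[3] = (yy + xx) % 2
print(estimate_depth(stack, z_spacing_um=0.5))  # -> 1.5
```

A real pipeline would apply this per image patch rather than per frame and would need to meet the 50 Hz budget, which is precisely where the open questions above arise.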
Due to the constant motion of components, the inherent compliance of
components (e.g., cells, viruses), the limited resolution of imaging
techniques, and optical effects observed at small scales, the model of
the workspace constructed by the 3D imaging system contains significant
uncertainties. We will need to determine how to model uncertainties in
shapes and locations. Constructing a very detailed and accurate model of
uncertainties may be very time consuming. On the other hand, a highly
simplified model of uncertainties may prove to be of very little use in
autonomous path planning. Therefore, we will need to identify
appropriate models of uncertainty. We will also need to model the
compliance of component shapes to support geometric reasoning. A
very detailed mechanics-based model of shape compliance will not be
suitable for real-time path planning. Therefore, we will need to
develop reduced-order models to represent compliance, along with
methods for determining the error bounds on these reduced-order models.
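One simple way positional uncertainty could feed into planning is to treat each component's position as an isotropic Gaussian and inflate its collision radius by a few standard deviations, giving a conservative clearance test. This is a sketch under those assumptions, not the uncertainty model the project would ultimately adopt:

```python
import math

def inflated_radius(nominal_radius_um, sigma_um, k=3.0):
    """Inflate a component's collision radius by k standard deviations of
    its (assumed isotropic Gaussian) position uncertainty."""
    return nominal_radius_um + k * sigma_um

def in_collision(p_um, q_um, r_p, r_q, sigma_p, sigma_q, k=3.0):
    """Conservative collision test between two spherical components whose
    center positions p and q are uncertain."""
    dist = math.dist(p_um, q_um)
    return dist < inflated_radius(r_p, sigma_p, k) + inflated_radius(r_q, sigma_q, k)

# Two 1-um-radius beads with centers 3 um apart: safe when the position
# uncertainty is small, conservatively flagged when it grows.
print(in_collision((0, 0, 0), (3, 0, 0), 1.0, 1.0, 0.1, 0.1))  # -> False
print(in_collision((0, 0, 0), (3, 0, 0), 1.0, 1.0, 0.3, 0.3))  # -> True
```

The trade-off discussed above shows up directly here: a larger k makes planning safer but shrinks the free space available to the planner.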
Components move continuously in the workspace, so collisions are
unavoidable, and it is highly likely that during a manipulation task the
manipulator (i.e., the optical tweezers) will lose the component. Path
planning will therefore have to include recovery strategies as a part of
the basic plan. The trapping laser can also be time-shared to move
multiple components; hence the laser can be used to move components that
block the path to the target component and thereby clear the path. The
physics of trapping imposes constraints on the speed at which the laser
can move a trapped particle through the workspace. Moreover, there are
also constraints on the shape of the trap and the clearance that must be
maintained between the trap and other components in the workspace. In
order to perform planning, we will need to identify and model the
relevant constraints in a geometric framework. This will require us to
reformulate the path-planning problem with suitable modifications to its
goals and constraints. The new problem formulation may require new
geometric algorithms as well.
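As one concrete instance of such a constraint, a maximum trap speed can be folded into the time-parameterization of a planned path. The following sketch is illustrative only: the waypoints and speed cap are assumptions, and in practice the cap would be derived from the trap stiffness and the viscous drag on the particle.

```python
import math

def min_traversal_time(waypoints_um, v_max_um_s):
    """Minimum time to move a trapped particle along a piecewise-linear
    path without exceeding the maximum trap speed (um/s)."""
    total = sum(math.dist(a, b) for a, b in zip(waypoints_um, waypoints_um[1:]))
    return total / v_max_um_s

# A two-segment path of 5 um + 5 um at a 2 um/s trap-speed cap.
path = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 5.0)]
print(min_traversal_time(path, v_max_um_s=2.0))  # -> 5.0
```

A full planner would layer the clearance and trap-shape constraints on top of this speed limit, which is why the standard path-planning formulation needs to be modified.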