The goal of this project is to map 3D world coordinates to image coordinates. The next step is to compute the fundamental matrix, which relates points in one view to epipolar lines in the other. Both the camera projection matrix and the fundamental matrix are estimated from point correspondences.
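Concretely, projection multiplies a homogeneous world point by the 3×4 projection matrix and divides by the last coordinate. A minimal NumPy sketch (the matrix `M` below is a made-up example, not the one estimated in this project):

```python
import numpy as np

def project_points(M, points_3d):
    """Project Nx3 world points to Nx2 image points with a 3x4 matrix M."""
    n = points_3d.shape[0]
    homog = np.hstack([points_3d, np.ones((n, 1))])  # to homogeneous coords
    proj = (M @ homog.T).T                           # Nx3 homogeneous image points
    return proj[:, :2] / proj[:, 2:3]                # divide out the scale

# Hypothetical projection matrix: identity rotation, no translation
M = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[1.0, 2.0, 4.0]])
print(project_points(M, pts))  # [[0.25 0.5 ]]
```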
Fig1: Actual and Projected Points
The estimated camera location is: <-1.5127, -2.3517, 0.2826>
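The camera center can be recovered as the right null vector of the estimated 3×4 projection matrix, e.g. via SVD. A NumPy sketch with a hypothetical projection matrix:

```python
import numpy as np

def camera_center(M):
    """The camera center is the right null vector of the 3x4 matrix M."""
    _, _, Vt = np.linalg.svd(M)
    C = Vt[-1]            # homogeneous 4-vector with M @ C ~ 0
    return C[:3] / C[3]   # dehomogenize

# Hypothetical M = [I | t]; its center is -t
M = np.array([[1.0, 0.0, 0.0, 2.0],
              [0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 4.0]])
print(camera_center(M))  # [-2. -3. -4.]
```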
Fig2: Camera Center
Fig3: Epipolar Lines
Fig4: Epipolar Lines
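Each epipolar line shown above is obtained by multiplying F with a homogeneous point from the other image. A NumPy sketch (the matrix `F` here is an illustrative skew-symmetric example, not one estimated from these image pairs):

```python
import numpy as np

def epipolar_line(F, x):
    """Given F mapping points in image 1 to lines in image 2, return
    line coefficients (a, b, c) satisfying a*u + b*v + c = 0."""
    return F @ np.append(x, 1.0)

# Illustrative rank-2 (skew-symmetric) fundamental matrix
F = np.array([[0.0,  0.0, -1.0],
              [0.0,  0.0,  2.0],
              [1.0, -2.0,  0.0]])
a, b, c = epipolar_line(F, np.array([3.0, 4.0]))
# Points (u, v) in image 2 with a*u + b*v + c = 0 lie on the epipolar line
print(a, b, c)  # -1.0 2.0 -5.0
```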
%Find the mean of the points
mean_u = mean(points_norm(1,finite_indices));
mean_v = mean(points_norm(2,finite_indices));
%Shift the origin to the centroid
new_u = points_norm(1,finite_indices) - mean_u;
new_v = points_norm(2,finite_indices) - mean_v;
%Standard deviation of the centered coordinates
dist_u = std(new_u,0,2);
dist_v = std(new_v,0,2);
%Average the two standard deviations into a single scale
std_dist = mean([dist_u; dist_v]);
scale = 1/std_dist;
%Build the transformation matrix and normalize the points
T_matrix = [scale 0 -scale*mean_u;0 scale -scale*mean_v;0 0 1];
normalized_points = T_matrix*points_norm;
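For reference, the same normalization can be sketched in NumPy (using a single pooled standard deviation over both coordinates, a close variant of the MATLAB above). A fundamental matrix estimated from normalized points must afterwards be denormalized as F = T2' * F_norm * T1:

```python
import numpy as np

def normalize_points(points):
    """points: 3xN homogeneous image coordinates. Returns (T, T @ points)."""
    mean_u, mean_v = points[0].mean(), points[1].mean()
    centered = points[:2] - [[mean_u], [mean_v]]
    scale = 1.0 / np.std(centered)          # single scale from both coordinates
    T = np.array([[scale, 0.0, -scale * mean_u],
                  [0.0, scale, -scale * mean_v],
                  [0.0, 0.0, 1.0]])
    return T, T @ points

pts = np.vstack([np.random.default_rng(0).uniform(0, 500, (2, 50)), np.ones(50)])
T, norm = normalize_points(pts)
print(np.allclose(norm[:2].mean(axis=1), 0.0))  # True: centroid moves to the origin
```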
Threshold value: 0.005 - Rushmore
Threshold value: 0.05 - Notre Dame
Threshold value: 0.005 - Gaudi
Threshold value: 0.05 - Dorm
A few matches are incorrect, as can be seen in the Results section. Most of the matches are right, however, because RANSAC keeps only the inliers. This is a drastic improvement over matching interest points with SIFT features alone.
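The inlier selection described here follows the usual RANSAC pattern: repeatedly fit F to a random 8-point sample and keep the model for which the most correspondences fall within the threshold distance of their epipolar line. A self-contained NumPy sketch (not the project's exact implementation), checked on synthetic two-camera data:

```python
import numpy as np

def fit_fundamental(p1, p2):
    """Linear 8-point fit: p1, p2 are Nx2 corresponding points."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # enforce rank 2
    return U @ np.diag(S) @ Vt

def line_distance(F, p1, p2):
    """Distance from each point in image 2 to its epipolar line F*x1."""
    h1 = np.hstack([p1, np.ones((len(p1), 1))])
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    lines = h1 @ F.T                # epipolar lines in image 2
    norms = np.linalg.norm(lines[:, :2], axis=1) + 1e-12  # avoid divide-by-zero
    return np.abs(np.sum(h2 * lines, axis=1)) / norms

def ransac_fundamental(p1, p2, threshold, iters=500):
    rng = np.random.default_rng(0)
    best_F, best_inliers = None, np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(p1), size=8, replace=False)
        F = fit_fundamental(p1[sample], p2[sample])
        inliers = line_distance(F, p1, p2) < threshold
        if inliers.sum() > best_inliers.sum():
            best_F, best_inliers = F, inliers
    return best_F, best_inliers

# Synthetic check: two cameras viewing random 3D points, plus bogus matches
rng = np.random.default_rng(1)
X = np.hstack([rng.uniform(-1, 1, (100, 2)), rng.uniform(4, 6, (100, 1))])
p1 = X[:, :2] / X[:, 2:3]                          # camera at the origin
p2 = (X[:, :2] + [1.0, 0.0]) / X[:, 2:3]           # camera translated along x
p1 = np.vstack([p1, rng.uniform(-1, 1, (10, 2))])  # 10 outlier matches
p2 = np.vstack([p2, rng.uniform(-1, 1, (10, 2))])
F, inliers = ransac_fundamental(p1, p2, threshold=0.01)
print(inliers.sum())   # the 100 true correspondences survive
```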
Observing the difference before and after normalizing the points
The threshold value used for both cases is 0.05. Before normalization, many matches are off by a small distance, and a couple are way off. After normalization, the matches come from the exact corresponding spots rather than merely the same "locality". Hence, normalizing the points gives a better representation of the matches.
Note: When the code is run with the given threshold values, it may occasionally crash because it cannot find the required number of inliers for the sampled points. Although this happens rarely, running the code once more usually resolves it. The threshold values I have mentioned are strict; relaxing them may also help, especially for the Dorm image, whose results can be obtained with threshold values between 0.05 and 0.5.
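One defensive option for the occasional failure, sketched below with a hypothetical `estimate_fn` standing in for the RANSAC routine: retry with a progressively relaxed threshold instead of crashing.

```python
import numpy as np

def estimate_with_retries(pts1, pts2, estimate_fn, threshold,
                          min_inliers=30, max_attempts=5, relax=2.0):
    """Retry the (hypothetical) RANSAC routine estimate_fn, relaxing the
    threshold each time too few inliers are found, instead of crashing."""
    for _ in range(max_attempts):
        F, inliers = estimate_fn(pts1, pts2, threshold)
        if inliers.sum() >= min_inliers:
            return F, inliers
        threshold *= relax       # relax the threshold and try again
    raise RuntimeError('no model with enough inliers found')
```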