Single-camera stereoaerophotogrammetry using UAV sensors

11514597 · 2022-11-29

    Abstract

    The disclosure presents novel methods to conduct aerial surveying, inspection and measurements with higher accuracy in a fast and easy way, comprising: (1) flying a drone with an accelerometer, gyro, and camera sensors over a target object; (2) capturing a first aerial image at a first position; (3) capturing a second aerial image at a second position, wherein the second position has a horizontal and vertical displacement from the first position; (4) calculating the relative altitude and horizontal displacement between the first and second locations using a sensor fusion estimation algorithm on the position sensors' data; (5) solving for the pixel depth information of the aerial images by using a single-camera stereophotogrammetry algorithm with the relative altitude and horizontal distance; (6) deriving the ground sample distance (GSD) of each pixel from the calculated depth information; (7) using the image pixel GSD values to compute any geometric properties of the target objects.

    Claims

    1. A method of conducting aerial survey, comprising: flying an unmanned aerial vehicle over a target object, wherein the unmanned aerial vehicle has a camera, gyro, and accelerometer; using the camera to take a first aerial image at a first location; and a second aerial image at a second location, wherein the second location has a non-zero vertical and horizontal distance from the first location; calculating the relative vertical and horizontal distance using a fusion estimation algorithm on the gyro and accelerometer readings, wherein the fusion estimation algorithm comprises: collecting real-time sensor readings from the gyro and accelerometer; filtering the real-time sensor readings with a pre-processing filter; calculating the integration of the filtered accelerometer readings regulated by minimizing the mean squared error between the filtered gyro readings and the gyro readings estimated from the filtered accelerometer readings; solving the pixel depth information of the aerial images by using a stereophotogrammetry algorithm and the calculated relative vertical and horizontal distances; deriving the ground sample distance of each pixel from the pixel depth information; using the ground sample distance values to compute a geometric property of the target object.

    2. The method of claim 1, wherein the unmanned aerial vehicle is a drone.

    3. The method of claim 1, wherein the pre-processing filter is a Kalman filter.

    4. The method of claim 1, wherein the stereophotogrammetry algorithm includes a motion refinement step, image rectification step, stereo matching step, and triangulation step.

    5. The method of claim 4, wherein the image rectification step involves using a homography matrix.

    6. The method of claim 4, wherein the motion refinement step includes a feature selection step, feature matching step, outlier rejection step, and nonlinear motion estimation step.

    7. The method of claim 6, wherein the feature is the result of a Harris Corner, SIFT, or Forstner interest operator.

    8. The method of claim 6, wherein the feature matching step involves using a hierarchical multi-resolution search.

    9. The method of claim 6, wherein the nonlinear motion estimation step involves using the Levenberg-Marquardt method.

    10. The method of claim 6, wherein a disparity is calculated for each pixel from the rectified image.

    11. The method of claim 1, wherein the pixel depth information is solved by finding the difference of the space vectors of the first and the second locations.

    12. The method of claim 1, wherein the altitudes of the first and second locations are estimated from the horizontal and vertical distances between the first and second locations.

    13. The method of claim 1, wherein the ground sample distance is derived from the camera's focal distance and sensor size.

    14. The method of claim 1, wherein the method of computing the geometric property using the ground sample distance values, comprising: selecting a path between a first and second pixel on one of the aerial images; collecting all the pixels along the path; estimating the path distance by calculating the integration of the ground sample distance values of all the pixels along the path.

    15. An apparatus of conducting aerial survey, comprising: an unmanned aerial vehicle that has a processor and a plurality of sensors; wherein the plurality of sensors comprises a camera, gyro, and accelerometer; wherein the unmanned aerial vehicle can fly over a target object from a first location to a second location, wherein the second location has a non-zero vertical and horizontal distance from the first location; wherein the camera takes a first aerial image at the first location and a second aerial image at the second location; wherein the processor first calculates the relative vertical and horizontal distance using a fusion estimation algorithm on the gyro and accelerometer readings, wherein the sensor fusion estimation algorithm comprises: collecting real-time sensor readings from the gyro and accelerometer; filtering the real-time sensor readings with a pre-processing filter; calculating the integration of the filtered accelerometer readings regulated by minimizing the mean squared error between the filtered gyro readings and the gyro readings estimated from the filtered accelerometer readings; wherein the processor then solves the pixel depth information of the aerial images by using a stereophotogrammetry algorithm and the calculated relative vertical and horizontal distances; wherein the processor then derives the ground sample distance of each pixel from the pixel depth information; wherein the processor lastly uses the ground sample distance values to compute a geometric property of the target object.

    16. The apparatus of claim 15, wherein the unmanned aerial vehicle is a drone and the pre-processing filter is a Kalman filter.

    17. The apparatus of claim 15, wherein the stereophotogrammetry algorithm includes a motion refinement step, image rectification step, stereo matching step, and triangulation step.

    18. The apparatus of claim 17, wherein the image rectification step involves using a homography matrix.

    19. The apparatus of claim 17, wherein the motion refinement step includes a feature selection step, feature matching step, outlier rejection step, and nonlinear motion estimation step.

    20. The apparatus of claim 15, wherein the processor is a remote processor wirelessly connected to the unmanned aerial vehicle.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    (1) FIG. 1 illustrates a preferred embodiment of the overall system design of the present disclosure.

    (2) FIG. 2 illustrates a preferred embodiment of the accurate sensor fusion altitude estimation method and apparatus of the present disclosure.

    (3) FIG. 3 illustrates an exemplary embodiment of the camera ground sample distance (GSD) estimation method of the present disclosure.

    (4) FIG. 4 illustrates an exemplary embodiment of the fast two-image wide-baseline stereo depth estimation method of the present disclosure.

    (5) FIG. 5 illustrates a preferred embodiment of the single-camera stereo rectification method using 3D projective transformation of the present disclosure.

    (6) FIG. 6 illustrates an exemplary workflow diagram of the preferred stereoaerophotogrammetry methods of the present disclosure for aerial surveying.

    DETAILED DESCRIPTION

    (7) The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well as the singular forms, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one having ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. In describing the invention, it will be understood that a number of techniques and steps are disclosed. Each of these has individual benefit and each can also be used in conjunction with one or more, or in some cases all, of the other disclosed techniques. Accordingly, for the sake of clarity, this description will refrain from repeating every possible combination of the individual steps in an unnecessary fashion. Nevertheless, the specification and claims should be read with the understanding that such combinations are entirely within the scope of the invention and the claims.

    (8) In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one ordinarily skilled in the art that the present invention may be practiced without these specific details. The present disclosure is to be considered as an exemplification of the invention, and is not intended to limit the invention to the specific embodiments illustrated by the figures or description below. The present invention will now be described by referencing the appended figures representing preferred or alternative embodiments.

    (9) The disclosure presents several novel methods to conduct aerial surveying, inspection, and measurements with higher accuracy in a fast and easy way, comprising: (1) flying a drone with an accelerometer sensor, gyro sensor, and camera over a target object; (2) capturing a first aerial image at a first position with a first altitude; (3) capturing a second aerial image at a second position with a second altitude, wherein the second position has a horizontal displacement from the first position; (4) calculating the relative altitude and horizontal distance between the first and second location using a sensor fusion estimation algorithm from the position sensors' data; (5) solving for the pixel depth information of the aerial images by using a single-camera stereophotogrammetry algorithm and the relative altitude and horizontal distance; (6) deriving the ground sample distance (GSD) of each pixel from the calculated depth information; (7) using the image pixel GSD values to compute any geometric properties of the target objects.

    (10) The disclosure also presents novel apparatus to conduct aerial surveying, inspection, and measurements with higher accuracy in a fast and easy way, comprising: (1) a drone with an accelerometer sensor and camera that can fly over the target object; (2) an onboard or offline sensor data processor that computes any geometric properties of the target objects from the captured image and sensor data according to the methods above.

    (11) FIG. 1 illustrates a preferred embodiment of the overall system design of the present disclosure. A drone (102), also referred to as a UAV, is equipped with at minimum a gyro, an accelerometer, and a digital camera (104). The camera (104) may be installed on a gimbal and can be adjusted to face down to shoot the ground target object at a certain image resolution. A digital image resolution achievable today is 4K, that is, 4096-by-2160 pixels, with three 8-bit RGB channels per pixel. The gyro provides real-time rotation speed information of the drone (102) around the three axes of 3D space; the accelerometer provides real-time acceleration information of the drone (102) along the three axes. Together the two sensors cover six degrees of freedom.

    (12) A ground target object (108) is typically a house or building located on ground level (100). Each part of the target object (108) has a relative height or altitude from the ground level (100) if the height of the ground level (100) is assumed to be zero.

    (13) The drone (102) is controlled to fly to a first location (120) and stays there until the camera (104) takes a first image of the ground target object (108). The first location (120) is above the ground target object (108). The height, or altitude, from the first location (120) to the ground level (100) is (114). Then the drone (102) flies to a second location (122) and stays there until the camera (104) takes a second image of the ground target object (108). The second location (122) is above the ground target object (108). The height, or altitude, from the second location (122) to the ground level (100) is (116). The relative height, or relative altitude, from the first location (120) to the second location (122) is (110). The relative horizontal displacement from the first location (120) to the second location (122) is (112).

    (14) In a preferred apparatus of the present disclosure, an onboard processor (106) with a software implementation of the said methods and algorithms of the present disclosure is embedded in the UAV or drone (102). Yet another embodiment of the apparatus of the present disclosure uses a cloud server processor, or a client-side PC software processor, implemented with the said methods and algorithms of the present disclosure. Simply by directing the drone (102) from one location (120) to another (122) and taking one image at each location, an inspection measurement collection operation is completed. The two aerial images and all the gyro and accelerometer sensor data recorded during the movement from the first location (120) to the second location (122) are saved for onboard real-time or offline processing. The sensor data may be saved locally inside the drone (102) for onboard CPU processing (106), and the computation results may be saved locally or sent over the wireless channel to a mobile device, cloud storage, a cloud server, or a client-side PC. In the case of offline processing, the sensor data may be transferred to a mobile device, cloud server, or client-side PC, where the mobile device or the remote processor processes the sensor data and computes the measurement results.

    (15) A new single-camera stereophotogrammetry algorithm combined with a high-accuracy, sensor fusion-based relative altitude estimation from the position sensors data will be applied to measure or estimate any geometric properties in the captured image, such as the distance between any two points or the area of an arbitrary closed shape.

    (16) The accuracy of the said measurement can be to the centimeter or better. The two-step measurement collection is fast and simple, and may be automated by a program in the onboard software or in the software on the mobile device or cloud server. In one of the exemplary embodiments of the present disclosure, a user only needs to press one button on the software UI to finish the complete measurement data collection. After that, the user only needs to define the desired measurements on the captured image, and the system will provide immediate measurement estimations with high accuracy.

    (17) FIG. 2 illustrates the method and apparatus of a preferred embodiment of a more accurate sensor fusion-based relative altitude estimation of the present disclosure. Sensor fusion methods combine data from disparate sources of information in a way that should give better performance than can be achieved when each source of information is used alone.

    (18) The drone (102) is controlled to fly to and stay at a first location (120) until the camera (104) takes a first image. The first location (120) is above the ground level (100). The height, or altitude, from the ground level (100) to the first location (120) is (114). The optical axis (208) of the camera (104) at the first location (120) forms an angle (210) with the height (114). Then the drone (102) flies to and stays at a second location (122) until the camera (104) takes a second image of the ground target object (108). The second location (122) is above the ground target object (108). The height, or altitude, from the second location (122) to the ground level (100) is (116). The optical axis (206) of the camera (104) at this second location (122) forms an angle (204) with the height (110). The relative height, or relative altitude, from the first location (120) to the second location (122) is (110). The relative horizontal displacement from the first location (120) to the second location (122) is (112).

    (19) As the drone (102) travels from the first location (120) to the second location (122), the gyro and accelerometer sensor readings are read out at a real-time frequency 1/T, a.k.a. the refresh rate, and saved into storage. Let us assume that all the readings from the gyro are represented as G.sub.k, where k=1, 2, 3, . . . , N; N being the last reading index when the drone (102) arrives at the second location (122). All the readings from the accelerometer are represented as A.sub.k, where k=1, 2, 3, . . . , N. The read-out rate can range from 1 Hz to 1000 Hz, with a typical rate of 100 Hz.

    (20) The gyro reading measures the drone's (102) rotation speeds about the X, Y, and Z axes. G.sub.x, G.sub.y, and G.sub.z are the three output components representing the rotation speeds in radians per second. The G.sub.x,y,z readings may be different at any moment. During the time interval dt.sub.k,k+1 between the two consecutive readings G.sub.k and G.sub.k+1, the angle of rotation R.sub.k can be estimated by c*G.sub.k*dt.sub.k,k+1. The total rotation R from t.sub.k1 to t.sub.k2 can be estimated by summing the angles of rotation R.sub.k=c*G.sub.k*dt.sub.k,k+1 over all time intervals, where k=1, 2, 3, . . . , N and c is a constant.

    (21) The accelerometer reading measures the acceleration of the drone's (102) movement along the X, Y, and Z axes. A.sub.x, A.sub.y, and A.sub.z are the three output components representing the acceleration in meters per second squared. The A.sub.x,y,z readings may be different at any moment. During the time interval dt.sub.k,k+1 between the two consecutive readings A.sub.k and A.sub.k+1, the speed S.sub.k can be estimated by c*A.sub.k*dt.sub.k,k+1. The total speed S from t.sub.k1 to t.sub.k2 can be estimated by summing the speeds S.sub.k=c*A.sub.k*dt.sub.k,k+1 over all time intervals, where k=1, 2, 3, . . . , N and c is a constant. Similarly, during the time interval dt.sub.k,k+1 between the two consecutive speeds S.sub.k and S.sub.k+1, the displacement D.sub.k can be estimated by c*S.sub.k*dt.sub.k,k+1=c*A.sub.k*(dt.sub.k,k+1).sup.2. The total displacement D from t.sub.k1 to t.sub.k2 can be estimated by summing the displacements D.sub.k=c*S.sub.k*dt.sub.k,k+1=c*A.sub.k*(dt.sub.k,k+1).sup.2 over all time intervals, where k=1, 2, 3, . . . , N and c is a constant.
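    The summations above amount to a double integration of the acceleration samples. The following is an illustrative Python sketch, not the disclosure's implementation: it assumes a fixed sampling interval dt, a NumPy array of already-filtered accelerometer samples, and c=1; the function name and array layout are hypothetical.

```python
import numpy as np

def integrate_displacement(accel, dt):
    """Double-integrate accelerometer samples (N x 3, m/s^2) sampled at a
    fixed interval dt (seconds) to get the per-axis displacement (meters)."""
    accel = np.asarray(accel, dtype=float)
    # First integration: speed S_k as the running sum of A_k * dt.
    speed = np.cumsum(accel * dt, axis=0)
    # Second integration: total displacement D as the sum of S_k * dt.
    displacement = np.sum(speed * dt, axis=0)
    return displacement
```

    With real sensor data, a trapezoidal rule and the regulated integration described later in the disclosure would reduce the accumulated error of this simple Riemann sum.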

    (22) When the drone (102) is at the first location (120), the center axis of the camera lens points downward to the ground with a space vector (208) V.sub.L1=(V.sub.1x, V.sub.1y, V.sub.1z). The space vector (114) V.sub.g1=(0, 0, g), where g is the earth's gravitational acceleration of 9.8 meters per second squared, points straight toward the center of the earth. Therefore we can derive the space angle (210) B.sub.L1 between the two vectors (114 and 208):
    B.sub.L1=arccos((V.sub.L1*V.sub.g1)/(∥V.sub.L1∥*∥V.sub.g1∥))
    Similarly, the space angle (204) B.sub.L2 between the camera lens optical axis vector (206) V.sub.L2=(V.sub.Nx, V.sub.Ny, V.sub.Nz) and the ground-facing vector (110) V.sub.g2=(0, 0, g) at the second location (122) is given by
    B.sub.L2=arccos((V.sub.L2*V.sub.g2)/(∥V.sub.L2∥*∥V.sub.g2∥))
    The space angle B.sub.L12 between V.sub.L1 and V.sub.L2 is arccos((V.sub.L1*V.sub.L2)/(∥V.sub.L1∥*∥V.sub.L2∥)). So ∥B.sub.L12∥ will be the total rotation R, as previously calculated and measured using the gyro sensor's data.
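    The three arccos expressions above reduce to one angle-between-vectors computation. A minimal sketch, with illustrative names and a clamp to guard against floating-point dot products slightly outside [-1, 1]:

```python
import math

def space_angle(v1, v2):
    """Angle (radians) between two 3-D vectors:
    arccos((v1 . v2) / (||v1|| * ||v2||))."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to the valid arccos domain before evaluating.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```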

    (23) The vector V.sub.L1 is also related to another vector from the accelerometer sensor at the first location (120). The vector is formed from the readings of the accelerometer in all three (X, Y, Z) axis, namely A.sub.x1, A.sub.y1, and A.sub.z1, at the first location (120). So A.sub.L1=(A.sub.x1, A.sub.y1, A.sub.z1). Similarly, V.sub.L2 is related to vector A.sub.L2=(A.sub.x2, A.sub.y2, A.sub.z2) based on the accelerometer readings at the second location (122). In one of the embodiments of the present disclosure, V.sub.L1=A.sub.L1 and V.sub.L2=A.sub.L2. In yet another embodiment of the present disclosure, V.sub.L1=c*A.sub.L1 and V.sub.L2=c*A.sub.L2, where c is a constant.

    (24) However, both gyro and accelerometer signals and readings are prone to noise and drift, to the extent that good tracking of both orientation and position is difficult to achieve. For instance, a tracking system relying on stand-alone measurements from the gyro or accelerometer suffers from integration drift that may lead to positioning errors of 1 meter after 10 seconds. The present disclosure uses three approaches to increase accuracy: the first is to apply an extended Kalman filter (EKF) to each sensor's data to reduce noise; the second is to use sensor fusion algorithms to combine the multiple sensors; the last is to greatly reduce the overall operation time, and thus the time available for drift, for example by completing the measurement operation in under one second.

    (25) From above, a more accurate sensor fusion method to measure or estimate the relative altitude between two locations of the present disclosure can be: (1) collect the high-refresh rate readings from the accelerometer and gyro sensors; (2) integrate the filtered accelerometer data with the regulation of the integration of the filtered gyro data. The algorithm combines both the accelerometer and gyro sensor measurements to improve measurement accuracy. One exemplary embodiment of such an algorithm is described in detail below.

    (26) The higher the sensors' refresh rate, or read-out rate, the more accurate the measurement will be. The typical read-out rate is 100 Hz, meaning the sensor data is read out 100 times per second. There are high-end sensors with read-out rates of 1000 Hz or even 4000 Hz. For example, if a 4000 Hz refresh-rate accelerometer is used in the UAV algorithm of the present disclosure, the altitude measurement can, in theory, be as accurate as within 2 mm.

    (27) Assume the drone (102) starts from the first location (120) with the camera optical axis vector V.sub.L1 (V.sub.1x, V.sub.1y, V.sub.1z). Before the drone (102) arrives at the second location (122) with the camera optical axis vector V.sub.L2 (V.sub.2x, V.sub.2y, V.sub.2z), we collect all readings from the onboard gyro and accelerometer in the device memory. At any read-out time point t.sub.k, vector G.sub.k.sup.0=(G.sub.kx.sup.0, G.sub.ky.sup.0, G.sub.kz.sup.0) is the gyro reading; vector A.sub.k.sup.0=(A.sub.kx.sup.0, A.sub.ky.sup.0, A.sub.kz.sup.0) is the accelerometer reading.

    (28) Now construct an Extended Kalman filter to filter the gyro and accelerometer readings, which are generally represented as x.sub.k as in the following formulas:

    (29) (1) Model the input reading data x.sub.k in a nonlinear system f, h, with additive noise w.sub.k and measurement error v.sub.k.
    x.sub.k=f(x.sub.k−1)+w.sub.k−1
    z.sub.k=h(x.sub.k)+v.sub.k
    Q.sub.k=E[w.sub.kw.sub.k′],R.sub.k=E[v.sub.kv.sub.k′]
    The initial optimal estimate x.sub.0.sup.a=μ.sub.0=E[x.sub.0] with error covariance P.sub.0=E[(x.sub.0−μ.sub.0)(x.sub.0−μ.sub.0).sup.T].
    (2) From the Taylor expansion and the Jacobian of f

    (30) $$J_f = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}$$
    The prediction part of x.sub.k will be:
    x.sub.k.sup.f≈f(x.sub.k−1.sup.a)
    The prediction error is
    e.sub.k.sup.f=x.sub.k−x.sub.k.sup.f=f(x.sub.k−1)+w.sub.k−1−f(x.sub.k−1.sup.a)≈J.sub.f(x.sub.k−1.sup.a)e.sub.k−1+w.sub.k−1
    P.sub.k.sup.f=E[e.sub.k.sup.f(e.sub.k.sup.f)′]=J.sub.f(x.sub.k−1.sup.a)P.sub.k−1J.sub.f(x.sub.k−1.sup.a)′+Q.sub.k−1
    (3) The new optimal estimation of the sample x.sub.k with Kalman gain is
    x.sub.k.sup.a≈x.sub.k.sup.f+K.sub.k(z.sub.k−h(x.sub.k.sup.f))
    where
    K.sub.k=P.sub.k.sup.fJ.sub.h′(x.sub.k.sup.f)(J.sub.h(x.sub.k.sup.f)P.sub.k.sup.fJ.sub.h′(x.sub.k.sup.f)+R.sub.k).sup.−1
    P.sub.k=(I−K.sub.kJ.sub.h(x.sub.k.sup.f))P.sub.k.sup.f
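    One predict/update cycle of the EKF equations above can be sketched generically. The sketch below follows the standard EKF form, with the measurement-noise covariance R in the gain term; the function signature and the model callables f, h and their Jacobians Jf, Jh are illustrative assumptions, not the disclosure's code.

```python
import numpy as np

def ekf_step(x_prev, P_prev, z, f, h, Jf, Jh, Q, R):
    """One extended-Kalman-filter cycle: predict with the process model f,
    then correct with the measurement z through the observation model h."""
    # Prediction: x_k^f = f(x_{k-1}^a), P_k^f = Jf P Jf' + Q
    x_pred = f(x_prev)
    F = Jf(x_prev)
    P_pred = F @ P_prev @ F.T + Q
    # Kalman gain: K = P^f Jh' (Jh P^f Jh' + R)^-1
    H = Jh(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update: x_k^a = x_k^f + K (z - h(x_k^f)), P_k = (I - K Jh) P^f
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new
```

    For the linear scalar case (f and h the identity), this reduces to the ordinary Kalman filter, which makes the gain easy to check by hand.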

    (31) After the Kalman filtering, the filtered sensor readings are G.sub.k=(G.sub.kx, G.sub.ky, G.sub.kz) for the gyro and A.sub.k=(A.sub.kx, A.sub.ky, A.sub.kz) for the accelerometer.

    (32) The new space vector of the drone (102), with the drone camera optical axis as the vector's direction, will be calculated as follows:

    (33) $$V_{Lk}(V_{kx}, V_{ky}, V_{kz}) = V_{L1}(V_{1x}, V_{1y}, V_{1z}) + \sum_{j=1}^{k} c\,A_j(A_{jx}, A_{jy}, A_{jz})\,(dt_{j,j+1})^2$$
    where c is a constant. Let V.sub.Lk=V.sub.Lk(V.sub.kx, V.sub.ky, V.sub.kz), V.sub.L1=V.sub.L1(V.sub.1x, V.sub.1y, V.sub.1z), and A.sub.j=A.sub.j(A.sub.jx, A.sub.jy, A.sub.jz). The calculation will be further regulated by the gyro reading data R.sub.k as follows:

    (34) $$\overline{V}_{Lk} = \overline{V}_{L1} + \sum_{j=1}^{k} \alpha_j\,\overline{A}_j\,(dt_{j,j+1})^2$$

    (35) $$\mathrm{Err}_k = \operatorname*{arg\,min}_{\alpha_j} \left( \sum_{j=1}^{k} \left[ \overline{R}_j - \arccos\!\left( \alpha_j\,\overline{A}_j\,(dt_{j,j+1})^2 \right) \right]^2 \right)$$
    where the error is minimized in terms of the mean squared error between the two sensors' measurements.

    (36) From the algorithm above, one can compute the space vector of the drone (102) at the second location (122) as below:

    (37) $$\overline{V}_{L2} = \overline{V}_{L1} + \sum_{j=1}^{N} \alpha_j\,\overline{A}_j\,(dt_{j,j+1})^2$$
    So the relative altitude (110) from the first location (120) to the second location (122) is the vertical difference of the two space vectors:
    $$\Delta\mathrm{Altitude} = \left( \overline{V}_{L2} - \overline{V}_{L1} \right)_z$$
    The relative horizontal displacement (112) from the first location (120) to the second location (122) is the horizontal distance between the two space vectors:

    (38) $$\Delta\mathrm{Placement} = \left\| \left( \overline{V}_{L2} - \overline{V}_{L1} \right)_{xy} \right\| = \sqrt{ \left( \overline{V}_{L2} - \overline{V}_{L1} \right)_x^2 + \left( \overline{V}_{L2} - \overline{V}_{L1} \right)_y^2 }$$
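    Once the space vectors at the two locations are available, the relative altitude and horizontal displacement defined above are a direct vector difference. A minimal sketch with illustrative names:

```python
import math

def relative_displacement(vL1, vL2):
    """Split the difference of the two location vectors into the vertical
    relative altitude and the horizontal displacement magnitude."""
    dx = vL2[0] - vL1[0]
    dy = vL2[1] - vL1[1]
    dz = vL2[2] - vL1[2]
    delta_altitude = dz                   # (V_L2 - V_L1)_z
    delta_placement = math.hypot(dx, dy)  # ||(V_L2 - V_L1)_xy||
    return delta_altitude, delta_placement
```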

    (39) FIG. 3 illustrates an exemplary embodiment of the camera ground sample distance (GSD) estimation method of the present disclosure. The drone (102) has a digital camera (104). Assume the camera sensor (302) size is D.sub.c, the camera lens focal distance (306) is f.sub.c, and the camera image resolution is W×H pixels. The altitude (114) is measured from the camera lens (308) to the ground level (100). The GSD at altitude (114) is then calculated as below:

    (40) $$\mathrm{GSD}_x = \frac{D_c}{f_c \cdot W} \cdot \mathrm{Altitude} \qquad \mathrm{GSD}_y = \frac{D_c}{f_c \cdot H} \cdot \mathrm{Altitude}$$
    So the GSD is proportional to the height of the drone (102) from the ground (100). If the camera (104) is not perfectly perpendicular to the ground level (100), the GSD will vary slightly from one side of the image to the other according to the distance from the object location to the camera lens (308).
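    The GSD formula above translates directly into code. The sketch below is illustrative; the parameter names and the choice of meters for sensor size, focal distance, and altitude are assumptions, not values from the disclosure.

```python
def ground_sample_distance(sensor_width_m, sensor_height_m,
                           focal_length_m, width_px, height_px, altitude_m):
    """Per-pixel ground sample distance (meters/pixel):
    GSD_x = D_c * Altitude / (f_c * W), and likewise for GSD_y."""
    gsd_x = sensor_width_m * altitude_m / (focal_length_m * width_px)
    gsd_y = sensor_height_m * altitude_m / (focal_length_m * height_px)
    return gsd_x, gsd_y
```

    As the text notes, the result scales linearly with altitude, so halving the flight height halves the GSD (doubles the ground resolution).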

    (41) FIG. 4 illustrates an exemplary embodiment of the fast two-image wide-baseline stereo depth estimation method of the present disclosure. Once an accurate estimation of the relative altitude difference DA (110), and the horizontal displacement DP (112) between the two drone locations (120) and (122) has been made, one can try to estimate the depth information of a point on the target object (408). In the exemplary embodiment of the present disclosure, the depth estimation can be carried out using a fast two-image wide-baseline stereo matching algorithm discussed in the present disclosure.

    (42) The first image (410) of the object point (408) is captured by the drone camera at the first location (120); the second image (412) of the object point (408) is captured by the drone camera at the second location (122). The pixel (404) corresponds to the point (408) in the first image (410); the pixel (406) corresponds to the point (408) in the second image (412). When the pixel (406) is mapped onto the first image (410), the difference (402) between the pixel location (404) and the mapped pixel location (406) in the first image (410) is called the disparity P in stereopsis. The depth information Z of the point (408) can then be estimated from the estimation of the disparity (402) P as below:

    (43) $$Z = \frac{DP \cdot f_c}{P}$$
    FIG. 4 illustrates the case when the drone camera (104) is perfectly aligned between the two shootings at the first location (120) and the second location (122) respectively. So the two captured images (410) and (412) are considered to be rectified and need no further modification to calculate the 3D pixel depth information.
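    The triangulation formula above can be sketched as follows, assuming the focal length is expressed in pixels so that the baseline's units carry through to the depth; the names are illustrative.

```python
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Z = DP * f_c / P: triangulated depth from the horizontal baseline DP
    (meters), the focal length f_c (pixels), and the disparity P (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_length_px / disparity_px
```

    The guard reflects a physical constraint: a zero disparity would place the point at infinite depth.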

    (44) To use the formula above to calculate the depth of each pixel in the image, the disparity of each pixel in the image pair must be calculated first. The disparity estimation starts from the stereo matching process. The whole concept of stereo matching is based on finding correspondences between the two input images. In one embodiment of the present disclosure, the correspondence between two points is determined by inspecting an n×n block of the neighborhood N of pixels around both points. The pairing that has the lowest sum of absolute differences,

    (45) $$\mathrm{SAD} = \sum_{x \in N} \left| L(x) - R(x) \right|$$
    is selected as a corresponding point pair. In practice, a matching block is located for each pixel in an image. The relative difference in the location of the points on the image planes is the disparity of that point. Due to the assumption of being constrained within a 1-dimensional search space, these disparities can be represented as a 2D disparity map that is the same size as the image. This is essentially a block-matching scheme familiar from video compression (a.k.a. motion estimation), although in this application the search space can be constrained to be horizontal only (with certain assumptions). The matching block size is one of the most important parameters that affects the outcome of the estimation. Smaller blocks can match finer detail, but are more prone to errors, while large blocks are more robust, but destroy detail. In the preferred embodiment of the present disclosure, a symmetrical square block of radius r has (2r+1).sup.2 pixels.
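    The SAD block matching described above can be sketched as the horizontal 1-D search the text mentions. This is an illustrative sketch, not the disclosure's implementation: it assumes rectified grayscale images as 2-D NumPy arrays, a symmetric square block of radius r with (2r+1)^2 pixels, and a bounded disparity search range.

```python
import numpy as np

def sad_disparity(left, right, row, col, r, max_disp):
    """Disparity of pixel (row, col): slide the (2r+1)x(2r+1) block from
    the left image along the same row of the right image and keep the
    offset with the lowest sum of absolute differences."""
    block = left[row - r:row + r + 1, col - r:col + r + 1].astype(float)
    best_d, best_sad = 0, float("inf")
    for d in range(max_disp + 1):
        c = col - d                      # candidate column in the right image
        if c - r < 0:                    # block would leave the image
            break
        cand = right[row - r:row + r + 1, c - r:c + r + 1].astype(float)
        sad = np.abs(block - cand).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d
```

    Repeating this for every pixel yields the 2-D disparity map described in the text; the block radius r trades fine detail (small r) against robustness (large r).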

    (46) FIG. 5 illustrates a preferred embodiment of the new single-camera stereo rectification method using 3D projective transformation of the present disclosure. In this case, the two camera locations (120) and (122) are far from each other (wide baseline, at different distances) and the two camera directions (angular positions) are not aligned at all (unrectified). Therefore, the first image plane (410) and the second image plane (412) are not coplanar, and they are separated vertically by a relative altitude DA. The real object point (506) is captured in the first image (410) as the pixel (516), and captured in the second image (412) as the pixel (518). The neighborhood block (510) around the pixel (516) is matched against the neighborhood block (512) around the pixel (518) for stereo matching. They both correspond to the real-world neighborhood object block (504) around the point (506).

    (47) In one embodiment of the present disclosure, the first step of the stereo rectification is to transform the second image plane (412) to be at the same altitude as the first image plane (410) by a projective transformation. This is normally done by multiplying the pixels' 3D homogeneous coordinates by a 3-by-3 homography matrix. In one simplified exemplary embodiment of the present disclosure, this projective transformation can be done by simply translating the estimated GSD of the second image plane (412) into the GSD of the first image plane (410). So, the second image is resized by the ratio below:

    (48) r.sub.scale=(DA+Z.sub.0)/Z.sub.0=1+DA/Z.sub.0
    where DA is the accurate sensor fusion estimation of the relative altitude difference from the first drone location (120) to the second drone location (122). Z.sub.0 is the depth estimation (114) at the first location (120).

    (49) Since Z.sub.0 is not known at this time, in one of the embodiments of the present disclosure, a rough estimation of Z.sub.0 is used for the image rescaling. First, calculate all the feature points in the first and second images using the Harris Corner Detector. Then select the 10 strongest feature points from each image, and calculate the SIFT feature descriptor values for each of the 10 feature points. Locate two pairs of matched points—P11 in the first image matches P21 in the second image, and P12 in the first image matches P22 in the second image. Calculate the Euclidean distance ED1 between P11 and P12, and the Euclidean distance ED2 between P21 and P22. So

    (50) Z.sub.0=(DA·ED2)/(ED1−ED2)
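The rough depth estimate of paragraphs (49)–(50), together with the rescale ratio of paragraph (48), can be sketched as scalar arithmetic once the two matched feature pairs are known. The function name and point format are illustrative assumptions:

```python
import math

def rough_depth_and_scale(p11, p12, p21, p22, da):
    """Rough Z0 from two matched feature pairs:
    ED1 is measured between the pair in the first image, ED2 between the
    corresponding pair in the second (higher-altitude) image.
    Also returns the resize ratio r_scale = 1 + DA/Z0 of paragraph (48)."""
    ed1 = math.dist(p11, p12)    # Euclidean distance in image 1
    ed2 = math.dist(p21, p22)    # Euclidean distance in image 2
    z0 = da * ed2 / (ed1 - ed2)  # rough depth below the first camera
    r_scale = 1.0 + da / z0      # ratio used to resize the second image
    return z0, r_scale
```

Note that ED1 > ED2 when the second image is captured from a higher altitude, so the denominator is positive.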

    (51) Stereo error generally increases with the square of the distance to the target object, but decreases with the baseline distance (the distance between the cameras). Wide-baseline stereo is based on the idea that an arbitrarily large baseline can be achieved using two images from the same camera, but at different positions. While this improves the quality of the stereo range estimates for a distant target object, it introduces two new problems. First, stereo calibration can no longer be performed in advance, since the relative positions and orientations of the cameras are not fixed. Second, performing stereo matching between the images is more difficult, since the target object is seen from different viewpoints.

    (52) In the preferred embodiment of the present disclosure, the methodology is able to address these issues by performing the following steps:

    (53) 1. Motion refinement. Start from an estimate of the relative camera positions computed from the gyro and accelerometer readings, then refine this estimate using matches detected between the images. This process uses several sub-steps:

    (54) (a) Feature selection. Features in one of the images are selected using the Forstner interest operator.

    (55) (b) Feature matching. Matches for the selected features are detected using a hierarchical search over the entire image. Note that both images are high-pass filtered to remove some illumination effects.

    (56) (c) Outlier rejection. Outliers are rejected by finding matches where the vertical and/or horizontal disparity is not in agreement with nearby points.

    (57) (d) Nonlinear motion estimation. Optimize the motion parameters using the Levenberg-Marquardt method with a robust objective function that minimizes the distances between the matched point locations and the locations back-projected using the estimated motion parameters.

    (58) 2. Image rectification. The images are rectified so that the stereo match for each point lies on the same scan line. This reduces the search space for each point to one dimension.

    (59) 3. Stereo matching. Dense stereo matches are detected using a maximum-likelihood image-matching formulation. Efficient stereo search techniques are used to reduce the search time. Finally, sub-pixel disparities are computed and outliers are rejected using a technique that eliminates small regions of nonconformity.

    (60) 4. Triangulation. The position of each pixel relative to the camera is computed from the disparity using standard triangulation techniques.
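Step 4 can be sketched with the standard triangulation relation Z = DP·f.sub.c/P, the same relation given later in paragraph (71). The function and parameter names below are illustrative assumptions:

```python
import numpy as np

def depth_from_disparity(disp, baseline, focal_px, min_disp=1e-3):
    """Standard stereo triangulation: Z = baseline * focal / disparity.
    `baseline` is the displacement DP between the two camera positions and
    `focal_px` the focal length in pixels. Pixels with (near-)zero disparity
    are marked invalid (NaN) instead of producing infinite depth."""
    disp = np.asarray(disp, dtype=np.float64)
    z = np.full(disp.shape, np.nan)
    valid = disp > min_disp
    z[valid] = baseline * focal_px / disp[valid]
    return z
```

Applied to the full disparity map, this yields the per-pixel depth of the scene relative to the camera.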

    (61) Motion estimation requires a set of feature matches between the images. To accomplish this, first select up to 256 features that appear to be matchable from one of the images using the Forstner interest operator. For each image window, this operator considers both the strength and the isotropy of the gradients in the window. The features are selected in sub-regions of the image, to ensure that the features are well distributed in the image. They are also subject to a threshold, so that completely featureless regions do not contribute.

    (62) For each selected feature, we use a hierarchical multi-resolution search to locate the feature in the second image. This search first selects candidate matches at a low resolution. Each candidate match is then refined at a higher resolution. Optionally, affine matching is performed to further improve the quality of the matches. Finally, the best candidate is selected according to the scoring function (sum-of-absolute-differences in our implementation).

    (63) After matching has been performed, prune some matches from consideration. Two tests are used for pruning. First, if there is another candidate with a similar score to the top candidate, prune the match, since either could be correct. Second, estimate the standard deviation of the match position by examining how quickly the score changes as the match is moved. If the standard deviation is large, prune the match. Also remove outliers by comparing the disparities of the nearby points.
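The pruning tests above can be sketched as follows. This is a simplified, hedged illustration: the thresholds are assumptions, and the neighborhood-disparity test is collapsed to a comparison against the median of the surviving matches rather than a true local comparison:

```python
import numpy as np

def prune_matches(best_costs, second_costs, disparities, ratio=0.8, tol=2.0):
    """Keep a match only if (1) its best cost is clearly better than the
    runner-up candidate (otherwise either could be correct), and (2) its
    disparity agrees with the median disparity of the surviving set."""
    best = np.asarray(best_costs, dtype=float)
    second = np.asarray(second_costs, dtype=float)
    disp = np.asarray(disparities, dtype=float)
    # Test 1: ambiguity - a runner-up with a similar score means the match
    # cannot be trusted, so prune it.
    keep = best < ratio * second
    # Test 2: outlier rejection - disparity must agree with nearby points
    # (approximated here by the median of the kept matches).
    med = np.median(disp[keep])
    keep &= np.abs(disp - med) <= tol
    return keep
```

The standard-deviation test mentioned above (examining how quickly the score changes as the match is moved) would be a third, analogous filter.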

    (64) Once matches have been determined between the images, refine the estimated motion between the camera positions by enforcing geometrical constraints. Create a state vector consisting of the five recoverable motion parameters (the scale is not recoverable from this information) and the depth to each of the feature points with respect to the first camera position. The objective function uses the distance from each matched feature position to the back-projected position using the motion parameters and feature depth. The distances are combined in an M-estimator, such as described by Forsyth and Ponce, although a larger estimate for the scale yields convergence to the correct result more often. Given the state vector and objective function, use a variation of Levenberg-Marquardt optimization to refine the input motion estimate.

    (65) Subsequent to the motion estimation, rectify the images using the method of Fusiello et al. After rectification, the correspondences between the images will lie on the same scan line, so that the search for each match can be performed in one dimension. Find dense matches between the images using a robust template matching method. This method has two advantages over typical matching methods, such as normalized correlation or the sum-of-squared-differences (SSD). Given a potentially matching location, many template matching methods assume that the mapping between pixels in the template and the search image is simply a translation, with no warping or nonlinearity, such as that caused by occlusion. Our method is more general and allows pixels in the template to match the search image at locations other than the implicit translation given by the relative position of the template. Use a measure that combines distance in the image with distance in intensity through a linear relationship.

    (66) The second improvement is that this measure explicitly accounts for the possibility of a pixel in the template with no corresponding match in the other image (for example, due to occlusion, which is common in wide-baseline stereo pairs). Accomplish this using a maximum-likelihood measure that incorporates the probability distribution function of both pixels with matches and pixels without matches.

    (67) Matching is performed efficiently using this measure by adapting a common stereo search strategy. This strategy prevents redundant computations from being performed for adjacent templates at the same displacement in the search image by saving information. The cost is a small amount of memory.

    (68) Estimate the sub-pixel disparity and standard deviation by fitting the scores near the maximum in the most likely match. Pixels are pruned from the output if the likelihood of the best match is low, the standard deviation is too large, or if the surrounding pixels do not form a large enough coherent block.
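Sub-pixel disparity refinement is commonly done by fitting a parabola through the matching costs at the integer displacements d−1, d, d+1; the disclosure does not fix the exact interpolator, so the following is one conventional choice:

```python
def subpixel_disparity(score_m1, score_0, score_p1):
    """Fractional disparity offset in [-0.5, 0.5] from a parabola fit through
    the matching costs at d-1, d, d+1. Assumes `score_0` is the minimum of
    the three (a cost, so lower is better)."""
    denom = score_m1 - 2.0 * score_0 + score_p1
    if denom <= 0:  # degenerate or flat cost surface: no refinement possible
        return 0.0
    return 0.5 * (score_m1 - score_p1) / denom
```

The final sub-pixel disparity is the integer disparity plus this offset; a flat cost surface (zero curvature) is one of the conditions under which the pixel would be pruned.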

    (69) The techniques described above are able to handle inaccurate motion estimates and images captured from different viewpoints.

    (70) Once the disparity map P is estimated from the stereo-matching methods above, the relative depth estimation Z of the scene can be derived as below:

    (71) Z=(DP·f.sub.c)/P
    where DP is the horizontal displacement between the two drone locations and f.sub.c is the camera focal length.
    From the depth estimation Z, the accurate GSD value of each image pixel can be calculated as follows:

    (72) GSD.sub.x=(D.sub.c·Z)/(f.sub.c·W) GSD.sub.y=(D.sub.c·Z)/(f.sub.c·H)
    where D.sub.c is the camera sensor dimension and W and H are the image width and height in pixels. With the dense GSD map, any measurement of the geometric properties in the captured survey image can be derived. The geometric properties include, but are not limited to, the distance between any two points, the perimeter or area of any convex or concave shape, etc.
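The GSD relation of paragraph (72) and a simple two-point distance measurement can be sketched as below. For brevity this sketch assumes a single depth Z (flat ground); the disclosure's dense per-pixel GSD map generalizes the same arithmetic to every pixel. Function names and the sample camera parameters are illustrative:

```python
import math

def gsd(sensor_dim, focal, pixels, z):
    """GSD along one image axis (paragraph (72)): D_c * Z / (f_c * N),
    with the sensor dimension and focal length in the same physical units
    and N the pixel count along that axis."""
    return sensor_dim * z / (focal * pixels)

def ground_distance(p1, p2, gsd_x, gsd_y):
    """Metric distance between two image points, scaling each pixel axis by
    its ground sample distance."""
    dx = (p2[0] - p1[0]) * gsd_x
    dy = (p2[1] - p1[1]) * gsd_y
    return math.hypot(dx, dy)
```

For example, a 13.2 mm-wide sensor, 8.8 mm focal length, 4000-pixel-wide image, and 100 m depth give a GSD of 3.75 cm/pixel, so two points 50 pixels apart lie 1.875 m apart on the ground. Perimeters and areas follow by summing edge lengths or accumulating GSD.sub.x·GSD.sub.y over enclosed pixels.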

    (73) FIG. 6 illustrates an exemplary work flow diagram of the preferred stereoaerophotogrammetry methods of the present disclosure for aerial surveying. In one exemplary workflow of the present disclosure, the following steps cover most of the operations of the method one needs to perform to conduct the stereo-aero-photogrammetric measurements. It includes the data collection and algorithm calculation.

    (74) As the first step in (602), fly a UAV or drone above a target object to be measured, for example a house or building. Normally the UAV will fly at several times the height of the building. The UAV's camera faces down towards the building, and its gyro and accelerometer sensors are turned on, recording their data into memory or a database in the firmware. When the UAV reaches the first location at a first height (604), control the UAV to take a first image. Fly the UAV to the second location at a second height (606), and control the UAV to take a second image. In the preferred embodiment of the present disclosure, the second location is higher than the first location. However, it is obvious to one of ordinary skill in the art that the second location can be anywhere relative to the first location. The step (608) actually takes place during steps (604) and (606): controlling the UAV to record the position sensors' (gyro and accelerometer) readings between the first and second locations. All the data is saved in step (608). By this step, the data collection phase is complete. One can see the operations are simple and fast—the whole process can be completed in a couple of seconds.

    (75) Next, based on the sensor data, calculate the vertical and horizontal displacements in the step (610) using the methods described in the present disclosure. Calculate the projective projection between the two captured images and get the rectified stereo image pair from the previously collected data in step (612). In (614) one can use the preferred method described in the present disclosure to estimate the dense depth map from the stereo images. Equipped with the derived dense depth map, solve for the ground sample distance (GSD) of each pixel in the image based on both the depth and the relative heights to the camera in (616). Finally, measure any geometric properties of the target object in (618). In the end, as an option, land the UAV on the ground in (622) and a cycle of the single-camera stereo-aero-photogrammetry using UAV sensors is completed.