IMPROVEMENTS IN OR RELATING TO PHOTOGRAMMETRY
20220335733 · 2022-10-20
Assignee
Inventors
CPC classification
G01C11/02
PHYSICS
International classification
G01C11/02
PHYSICS
Abstract
Photogrammetric analysis of an object requires the capture of multiple overlapping images of the object from various camera positions. These images are then processed to generate a three-dimensional (3D) point cloud representing the object in 3D space. A 3D model of the object is used to generate a model 3D point cloud. Based on the modelled point cloud and camera optics, the visibility of each point in the 3D point cloud is determined for a range of possible camera positions. The radial component of the camera is fixed by defining a shell of suitable camera positions around the part, and for each position on the defined shell the quality as a function of camera position is calculated. This defines a density function over the potential camera positions. Initial camera positions are selected based on the density function.
Claims
1. A method of determining optimum camera positions for capturing images of an object for photogrammetry analysis, the method comprising the steps of: providing a three-dimensional (3D) model of the object; generating a model 3D point cloud from the 3D model; defining a radial shell of possible camera positions around the model 3D point cloud; calculating a quality parameter for each possible camera position; and selecting a number of camera positions from the possible camera positions in response to the calculated quality parameter.
2. A method as claimed in claim 1 wherein the method includes the step of providing a model of the camera optics and/or a model of the achievable range of motion of the camera.
3. A method as claimed in claim 1 wherein the shell radius is selected so as to maximise the number of points on the object surface that are in focus.
4. A method as claimed in claim 1 wherein the method utilises more than one shell.
5. A method as claimed in claim 1 wherein the quality parameter is calculated by a quality function comprising a weighted summation of one or more calculated terms.
6. A method as claimed in claim 5 wherein one or more of the calculated terms involves calculation of an uncertainty parameter uncert(θ, α, n) for a point n from
uncert(θ, α, n) = vis(θ, α, n) × (ray(θ, α, n) · norm(θ, α, n)), where θ is the azimuth, α is the elevation, ray(θ, α, n) is a normalised ray vector from the camera to point n, norm(θ, α, n) is the surface normal of point n and vis(θ, α, n) is a logical visibility matrix.
7. A method as claimed in claim 6 wherein one calculated term is an accuracy parameter calculated from
8. A method as claimed in claim 6 wherein one calculated term is an occlusion parameter calculated from
9. A method as claimed in claim 6 wherein the quality function used for calculating the quality parameter (Quality(θ, α)) is defined by
10. A method as claimed in claim 6 wherein the method includes the step of selecting initial camera positions using the quality function and subsequently carrying out optimisation analysis to select optimum camera positions.
11. A method as claimed in claim 10 wherein the optimisation analysis is carried out by optimising a cost function, the cost function comprising a weighted sum of one or more calculated terms.
12. A method as claimed in claim 11 wherein one calculated term is a quality term calculated from
13. A method as claimed in claim 11 wherein one calculated term is a minimum visibility term calculated from
14. A method as claimed in claim 11 wherein one calculated term is a triangulation term calculated from
15. A method as claimed in claim 14 wherein the triangular logical matrix is calculated from
common(k, l, n) = vis(θ_k, α_k, n) · vis(θ_l, α_l, n)
16. A method as claimed in claim 11 wherein the cost function is defined by
17. A method as claimed in claim 11 wherein the optimisation analysis is carried out with respect to a temporal budget defined by the maximum scan time or reconstruction time available for photogrammetric analysis.
18. A method as claimed in claim 17 wherein the temporal budget optimisation analysis includes the step of selecting a preset number of initial camera positions and refining the initial camera positions by minimisation of the cost function.
19. A method as claimed in claim 11 wherein the optimisation analysis is carried out with respect to a geometric budget defined by the level of accuracy required from the photogrammetric analysis.
20. A method as claimed in claim 19 wherein the geometric budget optimisation analysis includes the step of determining the proportion of points n that meet or exceed accuracy criteria.
21. A method as claimed in claim 20 wherein the accuracy criteria are satisfied by points where
22. A method as claimed in claim 20 wherein the accuracy criteria are satisfied by points where
23. A method as claimed in claim 22 wherein the threshold value ω is determined by a computational model of the photogrammetry system or by a model of the camera optics.
24. A photogrammetry apparatus comprising: one or more cameras, each camera provided on a movable mounting bracket; an analysis engine operable to conduct photographic analysis of images captured by the or each camera; and a camera position engine operable to calculate optimum camera positions for capturing images of an object for photogrammetry analysis, the camera position engine operable according to the method of claim 1.
Description
DETAILED DESCRIPTION OF THE INVENTION
[0041] In order that the invention may be more clearly understood one or more embodiments thereof will now be described, by way of example only, with reference to the accompanying drawings, of which:
[0044] Turning now to
[0045] The present invention is directed to a method of determining optimum camera positions for capturing images of an object for generation of a 3D point cloud for photogrammetry analysis. This allows automation of the camera position determination. This is more efficient and accurate than manual position selection by an operator. It can also be tailored to ensure an appropriate level of accuracy and/or to conform to a desired maximum scan time.
[0046] In the method, the first step is to provide a 3D model of the object. Typically, this can be achieved by using STL/CAD data. The STL/CAD information is used to generate a model 3D point cloud in order to accelerate the calculations discussed in the coming steps. The density of the 3D point cloud will affect the speed and accuracy of the algorithm (higher density is more accurate but slower).
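By way of illustration only, this step might be sketched in Python/NumPy as follows. The function name and the area-weighted sampling scheme are illustrative assumptions; the method does not prescribe any particular sampling algorithm. The density of the sampled cloud is controlled by n_points, reflecting the accuracy/speed trade-off noted above.

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points, rng=None):
    """Sample n_points uniformly over a triangle mesh (e.g. loaded from STL).

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) array of surface points (the model 3D point cloud).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    tri = vertices[faces]                                    # (F, 3, 3)
    # Triangle areas weight the face selection so density is uniform per unit area.
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates on each chosen triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])
```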
[0047] In addition, the properties of the camera optics and the permitted range of motion of the cameras are provided. For the sake of simplicity, the subsequent steps assume a single photogrammetry system able to move through a range of radii on a spherical polar coordinate system, the positions defined by radius, elevation (α) and azimuth (θ) as shown in
[0048] Based on the modelled point cloud and camera optics, the visibility of each point in the 3D point cloud is determined for a range of possible camera positions. This calculation can take into account both line-of-sight visibility and depth-of-field, requiring each point to be both visible and in focus.
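A minimal sketch of such a per-camera visibility test follows. A back-face test stands in here for full line-of-sight occlusion testing (which in practice would require ray casting against the mesh), and the depth-of-field check is a simple distance band around the focus distance; both simplifications are illustrative assumptions.

```python
import numpy as np

def visibility(points, normals, cam_pos, focus_dist, depth_of_field):
    """Logical visibility of each point from one camera position.

    A point counts as visible when it faces the camera (back-face test,
    a cheap stand-in for full occlusion testing) AND its distance lies
    within the depth-of-field band around the focus distance.
    """
    rays = cam_pos - points                      # vectors point -> camera
    dist = np.linalg.norm(rays, axis=1)
    rays = rays / dist[:, None]
    facing = np.einsum('ij,ij->i', rays, normals) > 0.0
    in_focus = np.abs(dist - focus_dist) <= depth_of_field / 2.0
    return facing & in_focus
```

Stacking this result over all candidate positions yields the logical visibility matrix vis(θ, α, n) used below.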
[0049] In order to substantially reduce the complexity of the problem, the radial component of the camera is fixed by defining a shell 20 of suitable camera positions around the part that will maximise the visible number of points according to the focal plane of the camera. In other words, for each elevation and azimuth, this process will define a radius in which the maximum number of points on the object surface are in focus. For systems with a particularly low depth-of-field, multiple shells may be defined in order to cover the entirety of the object.
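The radius selection for a single viewing direction can be sketched as a simple search over candidate radii, counting the points that fall within the depth of field at each (an illustrative brute-force approach; function names and the distance-band focus model are assumptions):

```python
import numpy as np

def shell_radius(points, direction, radii, focus_dist, depth_of_field):
    """For one (elevation, azimuth) viewing direction, pick the shell radius
    that maximises the number of surface points inside the depth of field."""
    best_r, best_count = radii[0], -1
    for r in radii:
        cam = r * direction                              # camera on the candidate shell
        dist = np.linalg.norm(points - cam, axis=1)
        count = int(np.sum(np.abs(dist - focus_dist) <= depth_of_field / 2.0))
        if count > best_count:
            best_r, best_count = r, count
    return best_r, best_count
```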
[0050] For each elevation and azimuth on the defined shell, a quality parameter is calculated from a quality function in order to quantify how important the camera position is. The quality parameter relates to the likely quality of images captured at each potential camera position. The quality is higher when the image will contribute more to accurate photogrammetric reconstruction of the object. The quality as a function of camera position is calculated and this defines a density function over the potential camera positions.
[0051] The density function is used as a probability distribution to select initial camera positions. The number of camera positions required is set by the chosen scan time (lower scan time requires fewer camera positions) or based on the required accuracy (greater accuracy requires more camera positions). The initial camera positions are selected based on the density function. The initial positions are then subject to refinement by optimisation analysis. This analysis is carried out by minimising a cost function which is based on the overall expected quality of the photogrammetric analysis from images captured from the selected camera positions. The process can be repeated for different numbers of camera positions so that the optimum balance of time and accuracy is achieved.
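The sampling of initial positions from the density function might be sketched as follows, treating the quality values over a (θ, α) grid as an unnormalised probability distribution (the grid representation and function names are illustrative assumptions):

```python
import numpy as np

def sample_initial_positions(quality_grid, thetas, alphas, k, rng=None):
    """Draw k initial camera positions from the quality density.

    quality_grid: (len(thetas), len(alphas)) array of quality values;
    positions with higher quality are more likely to be selected.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    q = np.maximum(quality_grid, 0.0).ravel()
    p = q / q.sum()                                  # normalise to a distribution
    flat_idx = rng.choice(q.size, size=k, replace=False, p=p)
    ti, ai = np.unravel_index(flat_idx, quality_grid.shape)
    return list(zip(thetas[ti], alphas[ai]))
```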
[0052] Turning to the quality function, this comprises a weighted sum of calculated terms defining an accuracy parameter and an occlusion parameter. In the particular example discussed herein, the quality function takes the form:
[0053] where N is the number of points, w₁ and w₂ are weighting variables, uncert(θ, α, n) is a parameter quantifying uncertainty, and Gaussian(γ, σ) is a Gaussian function at radius γ with width σ.
[0054] The uncertainty parameter is calculated from
uncert(θ, α, n) = vis(θ, α, n) × (ray(θ, α, n) · norm(θ, α, n))
[0055] where ray(θ, α, n) is a normalised ray vector from the camera to point n, norm(θ, α, n) is the surface normal of point n, and vis(θ, α, n) is a logical visibility matrix.
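The uncertainty parameter above can be computed directly from the point cloud, the surface normals and a camera position. In this sketch the absolute value of the dot product is taken so that the term is largest for head-on viewing regardless of normal orientation; this sign convention is an illustrative choice, as the formula in the text uses the raw dot product.

```python
import numpy as np

def uncertainty(points, normals, cam_pos, vis):
    """Per-point uncertainty term uncert = vis x (ray . norm) for one camera
    position. rays run from the camera to each point; |dot| is used here so
    the term peaks when the surface is normal to the viewing direction."""
    rays = points - cam_pos
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    dots = np.einsum('ij,ij->i', rays, normals)
    return vis * np.abs(dots)
```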
[0056] The first term in the weighted sum defining the quality function is the accuracy parameter. This term quantifies the accuracy to which the visible points can be localised. This term is at a minimum when the object surface is parallel to the optical axis of a particular camera position and is at a maximum when the object surface is normal to the optical axis of a camera position.
[0057] The second term in the weighted sum defining the quality function is the occlusion parameter. This term quantifies the rate at which particular points become occluded from particular camera positions and thus ensures such points are captured. The radius of the Gaussian donut convolution determines the angle from the point of disappearance at which the point should be captured, with the width of the Gaussian determining the spread of angles that are acceptable. The skilled person will appreciate that additional parameters can be added to the quality function in order to better represent the real measurement or to accelerate the process.
[0058] Turning now to optimisation analysis, this can be carried out by minimising a cost function, the cost function comprising a weighted sum of calculated terms related to quality, minimum visibility and triangulation. In the particular example discussed herein, the cost function takes the form:
[0059] where c₁, c₂ and c₃ are weighting variables, and K is the number of camera positions. The first term in the cost function is a quality term calculated from a sum of the quality function over the selected camera positions.
[0060] The second term in the cost function is a minimum visibility term. This term relates to ensuring that each point n is visible from at least a minimum number of camera positions. This term is calculated from
[0061] where reqView is the minimum number of camera positions.
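The exact expression for this term is not reproduced in the text; one plausible form is a shortfall penalty that counts, over all points, how far each point falls below reqView visible camera positions:

```python
import numpy as np

def min_visibility_term(vis, req_view):
    """Shortfall penalty for minimum visibility (an assumed form).

    vis: (K, N) logical visibility matrix over K camera positions and N points.
    Returns the total number of missing views across all under-seen points.
    """
    views = vis.sum(axis=0)                          # cameras seeing each point
    return float(np.sum(np.maximum(req_view - views, 0)))
```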
[0062] The third term in the cost function is a triangulation term. This term quantifies how accurately each point n will be triangulated. Effectively, this term maximises the angle between the camera positions in which a point n is visible. This term can be calculated from:
[0063] where common(k, l, n) is a triangular logical matrix which is 1 when point n is visible in both cameras k and l. common(k, l, n) is defined as:
common(k, l, n) = vis(θ_k, α_k, n) · vis(θ_l, α_l, n)
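The common matrix and a triangulation score built on it can be sketched as follows. Scoring each camera pair by 1 − cos(angle between viewing directions) is an assumed form, since the exact expression for the triangulation term is not reproduced in the text; wider baselines between cameras that both see a point then score higher, as described.

```python
import numpy as np

def triangulation_term(cam_dirs, vis):
    """Per-point triangulation score favouring wide angles between camera
    pairs that both see the point.

    cam_dirs: (K, 3) unit vectors from the object centre to each camera;
    vis: (K, N) logical visibility matrix.
    common(k, l, n) = vis[k, n] AND vis[l, n], as defined above.
    """
    K, N = vis.shape
    score = np.zeros(N)
    for k in range(K):
        for l in range(k + 1, K):
            common = vis[k] & vis[l]                     # triangular logical matrix
            angle_score = 1.0 - float(cam_dirs[k] @ cam_dirs[l])
            score += common * angle_score
    return score
```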
[0064] The optimisation analysis may be carried out with respect to a temporal budget or a geometric budget. This can allow an operator to optimise camera positions either for desired maximum scan time or for a desired level of accuracy.
[0065] If optimisation analysis is carried out with respect to a temporal budget, an operator can specify a desired maximum scan time or reconstruction time available for photogrammetric analysis. The temporal budget will effectively define the maximum number of images of the object that can be captured during the analysis. In such instances, refinement of initial camera positions determined in relation to the quality function by minimisation of the cost function allows maximisation of the scan quality for the permitted number of images.
[0066] If optimisation analysis is carried out with respect to a geometric budget, the operator may define criteria relating to the level of accuracy required from the photogrammetric analysis. Geometric budget optimisation analysis then includes the step of determining the proportion of points n that meet or exceed accuracy criteria. In such an example, the accuracy criteria are calculated using the minimum visibility parameter and/or the triangulation parameter. In particular, the accuracy criteria are satisfied by points where
[0067] where Threshold % is the threshold for non-visible points.
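A geometric-budget check over both criteria might be sketched as follows. Combining the minimum-visibility and triangulation criteria with a logical AND is an illustrative assumption, as the text states the two criteria separately:

```python
import numpy as np

def meets_geometric_budget(views, triang, req_view, omega, threshold_pct):
    """True when a sufficient proportion of points meets the accuracy criteria.

    views: (N,) count of camera positions seeing each point;
    triang: (N,) triangulation score Triang(n) for each point;
    threshold_pct: required percentage of points satisfying both criteria.
    """
    ok = (views >= req_view) & (triang >= omega)
    return 100.0 * ok.mean() >= threshold_pct
```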
[0068] In another implementation, the accuracy criteria may be satisfied by points where
[0069] where ω is a threshold value. The threshold value ω may be determined by a computational model of the photogrammetry system and may correspond to the Triang(n) value required to reach the predetermined level of accuracy.
[0070] The one or more embodiments are described above by way of example only. Many variations are possible without departing from the scope of protection afforded by the appended claims.