PURE POSE SOLUTION METHOD AND SYSTEM FOR MULTI-VIEW CAMERA POSE AND SCENE

20230041433 · 2023-02-09

Abstract

A pure pose solution method and system for a multi-view camera pose and scene are provided. The method includes: a pure rotation recognition (PRR) step: performing PRR on all views, and marking views having a pure rotation abnormality, to obtain marked views and non-marked views; a global translation linear (GTL) calculation step: selecting one of the non-marked views as a reference view, constructing a constraint t.sub.r=0, constructing a GTL constraint, solving a global translation {circumflex over (t)}, reconstructing a global translation of the marked views according to t.sub.r and {circumflex over (t)}, and screening out a correct solution of the global translation; and a structure analytical reconstruction (SAR) step: performing analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose. The method and system can greatly improve the computational efficiency and robustness of multi-view camera pose and scene structure reconstruction.

Claims

1. A pure pose solution method for a multi-view camera pose and scene, comprising: a pure rotation recognition (PRR) step, wherein the PRR step comprises performing a PRR on views, and marking views having a pure rotation abnormality of the views to obtain marked views and non-marked views; a global translation linear (GTL) calculation step, wherein the GTL calculation step comprises selecting one of the non-marked views as a reference view, constructing a constraint t.sub.r=0, constructing a GTL constraint, solving a global translation {circumflex over (t)}, reconstructing a global translation of the marked views according to t.sub.r and {circumflex over (t)}, and screening out a correct solution of the global translation; and a structure analytical reconstruction (SAR) step, wherein the SAR step comprises performing an analytical reconstruction on coordinates of 3D points according to a correct solution of a global pose.

2. The pure pose solution method according to claim 1, wherein the PRR step further comprises: step 11: for a view i (1≤i≤N) and a view j (j∈V.sub.i), calculating θ.sub.i,j=∥[X.sub.j].sub.xR.sub.i,jX.sub.i∥ by using image matching point pairs (X.sub.i,X.sub.j) and a relative attitude R.sub.i,j of dual views (i,j), and constructing a set Θ.sub.i,j and a set Θ.sub.i=∪.sub.j∈V.sub.i Θ.sub.i,j, wherein a proportion of elements, greater than δ.sub.1, in Θ.sub.i is denoted by γ.sub.i; step 12: when γ.sub.i<δ.sub.2, marking the view i as a pure rotation abnormality view, recording a mean value of elements in the set Θ.sub.i,j as {overscore (θ)}.sub.i,j, letting l=arg min.sub.j∈V.sub.i {overscore (θ)}.sub.i,j, and constructing a constraint t.sub.i=t.sub.l; wherein when a 3D point X.sup.W=(x.sup.W,y.sup.W,z.sup.W).sup.T is visible in n (≤N) views, for i=1, 2, . . . , n, V.sub.i is a set configured with co-views of the view i; X.sub.i and X.sub.j represent a normalized image coordinate of the 3D point X.sup.W on the view i and a normalized image coordinate of the 3D point X.sup.W on the view j, respectively; δ.sub.1 and δ.sub.2 are specified thresholds; R.sub.i and t.sub.i represent a global attitude of the view i and a global translation of the view i, respectively; R.sub.i,j (=R.sub.jR.sub.i.sup.T) and t.sub.i,j represent a relative attitude of the dual views (i,j) and a relative translation of the dual views (i,j), respectively; and [X.sub.j].sub.x represents an antisymmetric matrix formed by vectors X.sub.j; and step 13: repeating step 11 to step 12 for the views.

3. The pure pose solution method according to claim 2, wherein the GTL calculation step comprises: step 21: for a current 3D point, selecting views (ς,η)=arg max.sub.1≤i,j≤n{θ.sub.i,j}, wherein ς is a left baseline view, and η is a right baseline view; step 22: for the non-marked views, constructing the GTL constraint according to a form of Bt.sub.η+Ct.sub.i+Dt.sub.ς=0; wherein the normalized image coordinate of the 3D point X.sup.W on the view i satisfies X.sub.i˜R.sub.ς,iX.sub.ςa.sup.Tt.sub.ς,η+θ.sub.ς,η.sup.2t.sub.ς,i≜Y.sub.i, ˜ represents an equation under homogeneous coordinates, a.sup.T=−([X.sub.η].sub.xR.sub.ς,ηX.sub.ς).sup.T[X.sub.η].sub.x, and a superscript T represents a transposition of a matrix or a transposition of a vector; and different target function forms are defined to solve the global translation linearly; wherein the relative translation t.sub.i,j has different forms corresponding to the global translation, and a matrix B, a matrix C, and a matrix D have correspondingly different forms: (1) for a target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2[X.sub.i].sub.xR.sub.i, D=−(B+C); (2) for a target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.i, D=−(B+C); (3) for the target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2[X.sub.i].sub.x, D=−(BR.sub.ς,η+CR.sub.ς,i); and (4) for the target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T), D=−(BR.sub.ς,η+CR.sub.ς,i); step 23: repeating step 21 to step 22 for other 3D points, constructing a linear equation, and solving the global translation {circumflex over (t)}; step 24: reconstructing the global translation of the marked views according to t.sub.i=t.sub.l by using {circumflex over (t)} and t.sub.r; and step 25: screening out the correct solution of the global translation t according to a.sup.Tt.sub.ς,η≥0.

4. The pure pose solution method according to claim 3, further comprising a camera pose optimization step between the GTL calculation step and the SAR step, wherein the camera pose optimization step comprises: expressing image homogeneous coordinates f.sub.i of the 3D point X.sup.W on the view i, wherein f.sub.i˜z.sub.ς.sup.WR.sub.ς,iX.sub.ς+t.sub.ς,i, wherein ˜ represents the equation under homogeneous coordinates, z.sub.ς.sup.W=∥[X.sub.η].sub.xt.sub.ς,η∥/θ.sub.ς,η, and a re-projection error is defined as ε.sub.i=f.sub.i/(e.sub.3.sup.Tf.sub.i)−{tilde over (f)}.sub.i, wherein e.sub.3.sup.T=(0,0,1), {tilde over (f)}.sub.i represents image coordinates of the 3D point on the view i, and a third element of {tilde over (f)}.sub.i is 1; for views of the 3D point, a re-projection error vector ε is formed; for the 3D points, an error vector Σ is formed; a target function of a global pose optimization is described as arg min Σ.sup.TΣ, and an optimization solution of the global pose is calculated; and the camera pose optimization step is further replaceable with a classic bundle adjustment (BA) algorithm, wherein 3D scene point coordinates are configured by an output result of the classic BA algorithm, or the 3D scene point coordinates are obtained by using the SAR step.

5. The pure pose solution method according to claim 3, wherein the SAR step further comprises: performing an analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose; for the current 3D point, calculating a depth of field in the left baseline view ς, wherein {circumflex over (z)}.sub.ς.sup.W=Σ.sub.1≤j≤n,j≠ς ω.sub.ς,jd.sub.ς.sup.(ς,j); calculating a depth of field in the right baseline view η, wherein {circumflex over (z)}.sub.η.sup.W=Σ.sub.1≤j≤n,j≠η ω.sub.j,ηd.sub.η.sup.(j,η), wherein d.sub.η.sup.(j,η)=∥[R.sub.j,ηX.sub.j].sub.xt.sub.j,η∥/θ.sub.j,η, and ω.sub.ς,j and ω.sub.j,η represent weighting coefficients; and performing the analytical reconstruction to obtain a first category of the coordinates of the 3D points by using the depth of field in the left baseline view; or performing the analytical reconstruction to obtain a second category of the coordinates of the 3D points by using the depth of field in the right baseline view, or calculating an arithmetic mean of the first and second categories of the coordinate values of the 3D points.

6. A pure pose solution system for a multi-view camera pose and scene, comprising: a pure rotation recognition (PRR) module, wherein the PRR module is configured to perform a PRR on views, and mark views having a pure rotation abnormality of the views to obtain marked views and non-marked views; a global translation linear (GTL) calculation module, wherein the GTL calculation module is configured to select one of the non-marked views as a reference view, construct a constraint t.sub.r=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to t.sub.r and {circumflex over (t)}, and screen out a correct solution of the global translation; and a structure analytical reconstruction (SAR) module, wherein the SAR module is configured to perform an analytical reconstruction on coordinates of 3D points according to a correct solution of a global pose.

7. The pure pose solution system according to claim 6, wherein the PRR module further comprises: a module M11, wherein the module M11 is configured to: for a view i (1≤i≤N) and a view j (j∈V.sub.i), calculate θ.sub.i,j=∥[X.sub.j].sub.xR.sub.i,jX.sub.i∥ by using image matching point pairs (X.sub.i,X.sub.j) and a relative attitude R.sub.i,j of dual views (i,j), and construct a set Θ.sub.i,j and a set Θ.sub.i=∪.sub.j∈V.sub.i Θ.sub.i,j, wherein a proportion of elements, greater than δ.sub.1, in Θ.sub.i is denoted by γ.sub.i; a module M12, wherein the module M12 is configured to: when γ.sub.i<δ.sub.2, mark the view i as a pure rotation abnormality view, record a mean value of elements in the set Θ.sub.i,j as {overscore (θ)}.sub.i,j, let l=arg min.sub.j∈V.sub.i {overscore (θ)}.sub.i,j, and construct a constraint t.sub.i=t.sub.l; wherein when a 3D point X.sup.W=(x.sup.W,y.sup.W,z.sup.W).sup.T is visible in n (≤N) views, for i=1, 2, . . . , n, V.sub.i is a set configured with co-views of the view i; X.sub.i and X.sub.j represent a normalized image coordinate of the 3D point X.sup.W on the view i and a normalized image coordinate of the 3D point X.sup.W on the view j, respectively; δ.sub.1 and δ.sub.2 are specified thresholds; R.sub.i and t.sub.i represent a global attitude of the view i and a global translation of the view i, respectively; R.sub.i,j (=R.sub.jR.sub.i.sup.T) and t.sub.i,j represent a relative attitude of the dual views (i,j) and a relative translation of the dual views (i,j), respectively; and [X.sub.j].sub.x represents an antisymmetric matrix formed by vectors X.sub.j; and a module M13, wherein the module M13 is configured to repeat operations of the module M11 to the module M12 for the views.

8. The pure pose solution system according to claim 7, wherein the GTL calculation module comprises: a module M21, wherein the module M21 is configured to: for a current 3D point, select views (ς,η)=arg max.sub.1≤i,j≤n{θ.sub.i,j}, wherein ς is a left baseline view, and η is a right baseline view; a module M22, wherein the module M22 is configured to: for the non-marked views, construct the GTL constraint according to a form of Bt.sub.η+Ct.sub.i+Dt.sub.ς=0; wherein the normalized image coordinate of the 3D point X.sup.W on the view i satisfies X.sub.i˜R.sub.ς,iX.sub.ςa.sup.Tt.sub.ς,η+θ.sub.ς,η.sup.2t.sub.ς,i≜Y.sub.i, ˜ represents an equation under homogeneous coordinates, a.sup.T=−([X.sub.η].sub.xR.sub.ς,ηX.sub.ς).sup.T[X.sub.η].sub.x, and a superscript T represents a transposition of a matrix or a transposition of a vector; wherein the relative translation t.sub.i,j has different forms corresponding to the global translation, and a matrix B, a matrix C, and a matrix D have correspondingly different forms: (1) for a target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2[X.sub.i].sub.xR.sub.i, D=−(B+C); (2) for a target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.i, D=−(B+C); (3) for the target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2[X.sub.i].sub.x, D=−(BR.sub.ς,η+CR.sub.ς,i); and (4) for the target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T), D=−(BR.sub.ς,η+CR.sub.ς,i); a module M23, wherein the module M23 is configured to repeat operations of the module M21 to the module M22 for other 3D points, construct a linear equation, and solve the global translation {circumflex over (t)}; a module M24, wherein the module M24 is configured to reconstruct the global translation of the marked views according to t.sub.i=t.sub.l by using {circumflex over (t)} and t.sub.r; and a module M25, wherein the module M25 is configured to screen out the correct solution of the global translation t according to a.sup.Tt.sub.ς,η≥0.

9. The pure pose solution system according to claim 8, further comprising a camera pose optimization module, wherein the camera pose optimization module is configured to: express image homogeneous coordinates f.sub.i of the 3D point X.sup.W on the view i, wherein f.sub.i˜z.sub.ς.sup.WR.sub.ς,iX.sub.ς+t.sub.ς,i, wherein ˜ represents the equation under homogeneous coordinates, z.sub.ς.sup.W=∥[X.sub.η].sub.xt.sub.ς,η∥/θ.sub.ς,η, and a re-projection error is defined as ε.sub.i=f.sub.i/(e.sub.3.sup.Tf.sub.i)−{tilde over (f)}.sub.i, wherein e.sub.3.sup.T=(0,0,1), {tilde over (f)}.sub.i represents image coordinates of the 3D point on the view i, and a third element of {tilde over (f)}.sub.i is 1; for views of the 3D point, a re-projection error vector ε is formed; for the 3D points, an error vector Σ is formed; a target function of a global pose optimization is described as arg min Σ.sup.TΣ, and an optimization solution of the global pose is calculated; and the camera pose optimization module is further replaceable with a classic bundle adjustment (BA) algorithm, wherein 3D scene point coordinates are configured by an output result of the classic BA algorithm, or the 3D scene point coordinates are obtained by using the SAR module.

10. The pure pose solution system according to claim 8, wherein the SAR module is further configured to: perform an analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose; for the current 3D point, calculate a depth of field in the left baseline view ς, wherein {circumflex over (z)}.sub.ς.sup.W=Σ.sub.1≤j≤n,j≠ς ω.sub.ς,jd.sub.ς.sup.(ς,j); calculate a depth of field in the right baseline view η, wherein {circumflex over (z)}.sub.η.sup.W=Σ.sub.1≤j≤n,j≠η ω.sub.j,ηd.sub.η.sup.(j,η), wherein d.sub.η.sup.(j,η)=∥[R.sub.j,ηX.sub.j].sub.xt.sub.j,η∥/θ.sub.j,η, and ω.sub.ς,j and ω.sub.j,η represent weighting coefficients; and perform the analytical reconstruction to obtain a first category of the coordinates of the 3D points by using the depth of field in the left baseline view; or perform the analytical reconstruction to obtain a second category of the coordinates of the 3D points by using the depth of field in the right baseline view, or calculate an arithmetic mean of the first and second categories of the coordinate values of the 3D points.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0069] Other features, objectives, and advantages of the present disclosure will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following accompanying drawings.

[0070] FIGURE is a flowchart of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0071] The present disclosure is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present disclosure, but they do not limit the present disclosure in any way. It should be noted that several variations and improvements can also be made by a person of ordinary skill in the art without departing from the ideas of the present disclosure. These all fall within the protection scope of the present disclosure.

[0072] As shown in FIGURE, the present disclosure provides a pure pose solution method for a multi-view camera pose and scene, where the method uses initial attitude values of views as an input and includes the following steps:

[0073] PRR step: Perform PRR on all views and mark views having a pure rotation abnormality to obtain marked views and non-marked views.

[0074] GTL calculation step: Select one of the non-marked views as a reference view, construct a constraint t.sub.r=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to t.sub.r and {circumflex over (t)}, and screen out a correct solution of the global translation.

[0075] SAR step: Perform analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.

[0076] The PRR step includes the following steps:

[0077] Step 1: For a view i (1≤i≤N) and a view j∈V.sub.i, calculate θ.sub.i,j=∥[X.sub.j].sub.xR.sub.i,jX.sub.i∥ by using all image matching point pairs (X.sub.i,X.sub.j) and a relative attitude R.sub.i,j of dual views (i,j) and construct sets Θ.sub.i,j and

[00013] Θ.sub.i=∪.sub.j∈V.sub.i Θ.sub.i,j,

where a proportion of elements in Θ.sub.i that are greater than δ.sub.1 is denoted by γ.sub.i.

[0078] Step 2: If γ.sub.i<δ.sub.2, mark the view i as a pure rotation abnormality view, record the mean value of the elements in the set Θ.sub.i,j as {overscore (θ)}.sub.i,j, set

[00014] l=arg min.sub.j∈V.sub.i {overscore (θ)}.sub.i,j,

and construct a constraint t.sub.i=t.sub.l.

[0079] If a 3D point X.sup.W=(x.sup.W,y.sup.W,z.sup.W).sup.T is visible in n (≤N) views, for i=1, 2, . . . , n, V.sub.i is a set composed of all co-views of the view i; X.sub.i and X.sub.j represent normalized image coordinates of a point X.sup.W on the view i and the view j, respectively; δ.sub.1 and δ.sub.2 are specified thresholds; R.sub.i and t.sub.i represent a global attitude and a global translation of the view i, respectively; R.sub.i,j (=R.sub.jR.sub.i.sup.T) and t.sub.i,j represent a relative attitude and a relative translation of the dual views (i,j), respectively; and [X.sub.j].sub.x represents an antisymmetric matrix formed by the vector X.sub.j.

[0080] Step 3: Repeat step 1 to step 2 for all the views.
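The PRR test above can be illustrated with a short numerical sketch. The patent specifies no code; the implementation below is an illustrative numpy version, and the function names, data layout, and threshold values δ.sub.1 and δ.sub.2 are chosen for the example only:

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def pure_rotation_recognition(matches, R_rel, delta1=1e-3, delta2=0.1):
    """For each view i, pool theta_{i,j} = ||[X_j]_x R_{i,j} X_i|| over its co-views.
    If the proportion gamma_i of values above delta1 falls below delta2, view i is
    marked as a pure rotation abnormality and tied to the co-view l with the
    smallest mean theta (constraint t_i = t_l).  Returns {marked view: l}.

    matches[(i, j)] : list of (X_i, X_j) normalized homogeneous point pairs
    R_rel[(i, j)]   : relative attitude R_{i,j} of the view pair (i, j)
    """
    views = sorted({v for pair in matches for v in pair})
    marked = {}
    for i in views:
        pooled, mean_by_j = [], {}
        for (a, b), pts in matches.items():
            if a != i:
                continue
            th = [np.linalg.norm(skew(Xj) @ R_rel[(i, b)] @ Xi) for Xi, Xj in pts]
            pooled.extend(th)
            mean_by_j[b] = float(np.mean(th))
        if not pooled:
            continue
        gamma_i = float(np.mean(np.asarray(pooled) > delta1))
        if gamma_i < delta2:                       # almost no parallax anywhere
            marked[i] = min(mean_by_j, key=mean_by_j.get)   # l = argmin_j mean theta
    return marked
```

A view whose matches all yield θ near zero has γ below δ.sub.2 and is marked; a translated view produces nonzero parallax values and stays unmarked.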

[0081] The GTL calculation step includes the following steps:

[0082] Step 1: For a current 3D point, select views

[00015] (ς,η)=arg max.sub.1≤i,j≤n{θ.sub.i,j},

where ς is a left baseline view and η is a right baseline view.

[0083] Step 2: For all the non-marked views (excluding the reference view), construct a GTL constraint according to the form of Bt.sub.η+Ct.sub.i+Dt.sub.ς=0.

[0084] Normalized image coordinates of a 3D point X.sup.W on the view i satisfy X.sub.i˜R.sub.ς,iX.sub.ςa.sup.Tt.sub.ς,η+θ.sub.ς,η.sup.2t.sub.ς,i≜Y.sub.i, where ˜ represents an equation under homogeneous coordinates, a.sup.T=−([X.sub.η].sub.xR.sub.ς,ηX.sub.ς).sup.T[X.sub.η].sub.x, and the superscript T represents transposition of a matrix or vector. In order to solve the global translation linearly, different target function forms are defined, for example, (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and [X.sub.i].sub.xY.sub.i=0, where I.sub.3 represents a 3D unit matrix, and e.sub.3=(0,0,1).sup.T represents the third column vector of the unit matrix. In addition, because the relative translation t.sub.i,j has different forms with respect to the global translation, for example, t.sub.i,j=R.sub.j(t.sub.i−t.sub.j) and t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i, the matrices B, C, and D also have different forms:

[0085] (1) for the target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2[X.sub.i].sub.xR.sub.i, D=−(B+C);

[0086] (2) for the target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=R.sub.j(t.sub.i−t.sub.j): B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.TR.sub.η, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.i, D=−(B+C);

[0087] (3) for the target function [X.sub.i].sub.xY.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=[X.sub.i].sub.xR.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2[X.sub.i].sub.x, D=−(BR.sub.ς,η+CR.sub.ς,i); and

[0088] (4) for the target function (I.sub.3−X.sub.ie.sub.3.sup.T)Y.sub.i=0 and the relative translation t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i: B=(I.sub.3−X.sub.ie.sub.3.sup.T)R.sub.ς,iX.sub.ςa.sup.T, C=θ.sub.ς,η.sup.2(I.sub.3−X.sub.ie.sub.3.sup.T), D=−(BR.sub.ς,η+CR.sub.ς,i).
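The B, C, D forms can be checked numerically against ground-truth poses. The numpy sketch below (illustrative, not part of the patent text) builds case (1); the squared factor θ.sub.ς,η.sup.2 in C is the choice consistent with the depth identity a.sup.Tt.sub.ς,η=z.sub.ς.sup.Wθ.sub.ς,η.sup.2, which makes Bt.sub.η+Ct.sub.i+Dt.sub.ς vanish exactly for the true translations:

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gtl_matrices(x_s, x_e, x_i, R_s, R_e, R_i):
    """Case (1): target [X_i]_x Y_i = 0, relative translation t_{i,j} = R_j(t_i - t_j).
    Returns B, C, D with B t_eta + C t_i + D t_sigma = 0 for the true translations.
    x_s, x_e, x_i: normalized image points in views sigma, eta, i."""
    R_se = R_e @ R_s.T                        # R_{sigma,eta} = R_eta R_sigma^T
    R_si = R_i @ R_s.T                        # R_{sigma,i}
    u = skew(x_e) @ R_se @ x_s
    a = -(u @ skew(x_e))                      # a^T = -([X_eta]_x R_{s,e} X_s)^T [X_eta]_x
    theta2 = u @ u                            # theta_{sigma,eta}^2
    B = skew(x_i) @ np.outer(R_si @ x_s, a) @ R_e
    C = theta2 * (skew(x_i) @ R_i)
    D = -(B + C)
    return B, C, D
```

Because D=−(B+C), the constraint only involves translation differences, which is why one reference translation must be pinned to zero.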

[0089] Step 3: Repeat step 1 to step 2 for other 3D points, construct a linear equation, and solve the global translation {circumflex over (t)}.

[0090] Step 4: Reconstruct the global translation of the marked views according to t.sub.i=t.sub.l by using {circumflex over (t)} and t.sub.r.

[0091] Step 5: Screen out the correct solution of the global translation t according to a.sup.Tt.sub.ς,η≥0.
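Steps 3 to 5 amount to stacking all per-point constraints into one homogeneous linear system, fixing the reference view by deleting its columns (t.sub.r=0), taking the null vector by SVD, and flipping its sign when a.sup.Tt.sub.ς,η<0. The following self-contained numpy sketch runs the whole loop on a synthetic four-view scene; the scene, names, and the convention t.sub.i,j=R.sub.j(t.sub.i−t.sub.j) are illustrative assumptions:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Synthetic ground truth; projection p = R (X - t), i.e. t_{i,j} = R_j (t_i - t_j).
rng = np.random.default_rng(0)
R_gt = [rot_y(0.15 * k) for k in range(4)]
t_gt = [np.zeros(3), np.array([1.0, 0.0, 0.0]),
        np.array([0.8, 0.5, -0.2]), np.array([-0.3, 1.2, 0.1])]
N = len(R_gt)
points = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(8, 3))

def proj(R, t, X):
    p = R @ (X - t)
    return p / p[2]

rows = []
for X in points:
    x = [proj(R_gt[k], t_gt[k], X) for k in range(N)]
    theta = lambda i, j: np.linalg.norm(skew(x[j]) @ R_gt[j] @ R_gt[i].T @ x[i])
    # Step 1: baseline pair (s, e) with maximal parallax theta_{i,j}
    s, e = max(((i, j) for i in range(N) for j in range(N) if i != j),
               key=lambda ij: theta(*ij))
    u = skew(x[e]) @ (R_gt[e] @ R_gt[s].T) @ x[s]
    a = -(u @ skew(x[e]))                 # a^T for this point
    th2 = u @ u                           # theta_{s,e}^2
    # Step 2: one constraint B t_e + C t_i + D t_s = 0 per remaining view
    for i in range(N):
        if i == s:
            continue
        B = skew(x[i]) @ np.outer(R_gt[i] @ R_gt[s].T @ x[s], a) @ R_gt[e]
        C = th2 * (skew(x[i]) @ R_gt[i])
        row = np.zeros((3, 3 * N))
        row[:, 3*e:3*e+3] += B
        row[:, 3*i:3*i+3] += C
        row[:, 3*s:3*s+3] += -(B + C)
        rows.append(row)

# Step 3: stack, fix the reference view (t_0 = 0) by dropping its columns,
# and take the null vector of the stacked system.
A = np.vstack(rows)[:, 3:]
t_hat = np.linalg.svd(A)[2][-1]
# Step 5: screen the sign so depths are positive (a^T t_{s,e} >= 0),
# probing with the last point's baseline pair.
t_full = np.concatenate([np.zeros(3), t_hat])
t_se = R_gt[e] @ (t_full[3*s:3*s+3] - t_full[3*e:3*e+3])
if a @ t_se < 0:
    t_hat = -t_hat
```

In the noise-free case the nullspace is one-dimensional after fixing t.sub.r, so the recovered translations agree with the ground truth up to a single positive scale.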

[0092] An optional camera pose optimization step is added between the GTL calculation step and the SAR step:

[0093] Express image homogeneous coordinates f.sub.i of the 3D point X.sup.W on the view i as follows:


f.sub.i˜z.sub.ς.sup.WR.sub.ς,iX.sub.ς+t.sub.ς,i

[0094] where ˜ represents an equation under homogeneous coordinates, z.sub.ς.sup.W=∥[X.sub.η].sub.xt.sub.ς,η∥/θ.sub.ς,η, and a re-projection error is defined as follows:

[00016] ε.sub.i=f.sub.i/(e.sub.3.sup.Tf.sub.i)−{tilde over (f)}.sub.i

[0095] where {tilde over (f)}.sub.i represents image coordinates of the 3D point on the view i, whose third element is 1. For all views of the 3D point, a re-projection error vector ε is formed. For all 3D points, an error vector Σ is formed. A target function of global pose optimization is described as arg min Σ.sup.TΣ, and an optimization solution of the global pose is calculated accordingly. It should be noted that the camera pose optimization step may be replaced with another optimization algorithm, such as a classic BA algorithm; in this case, the 3D scene point coordinates may adopt an output result of the classic BA algorithm or may be obtained by using the SAR step.
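The re-projection error above is computable from poses and the two baseline observations alone, without explicit 3D point coordinates. A minimal numpy sketch, assuming the translation form t.sub.i,j=R.sub.j(t.sub.i−t.sub.j) (function and variable names are illustrative):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def reproj_error(x_s, x_e, x_obs_i, R_s, R_e, R_i, t_s, t_e, t_i):
    """epsilon_i = f_i/(e_3^T f_i) - f~_i, with f_i ~ z_s^W R_{s,i} X_s + t_{s,i}
    and z_s^W = ||[X_e]_x t_{s,e}|| / theta_{s,e} (s = left, e = right baseline)."""
    t_se = R_e @ (t_s - t_e)                               # t_{s,e}
    theta_se = np.linalg.norm(skew(x_e) @ (R_e @ R_s.T) @ x_s)
    z_s = np.linalg.norm(skew(x_e) @ t_se) / theta_se      # depth in the left view
    f = z_s * (R_i @ R_s.T @ x_s) + R_i @ (t_s - t_i)      # predicted f_i
    return f / f[2] - x_obs_i                              # normalized minus observed
```

With exact poses and noise-free observations the residual is zero; the pose optimization minimizes the stacked squared residuals arg min Σ.sup.TΣ over the global poses.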

[0096] The SAR step includes:

[0097] performing analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose.

[0098] For a current 3D point, a depth of field in the left baseline view ς is calculated as follows:

[00017] z ˆ ϛ W = .Math. 1 j n j ϛ ω ϛ , j d ϛ ( ϛ , j )

[0099] A depth of field in the right baseline view is calculated as follows:

[00018] {circumflex over (z)}.sub.η.sup.W=Σ.sub.1≤j≤n,j≠η ω.sub.j,ηd.sub.η.sup.(j,η)

[0100] where d.sub.ς.sup.(ς,j)=∥[X.sub.j].sub.xt.sub.ς,j∥/θ.sub.ς,j, d.sub.η.sup.(j,η)=∥[R.sub.j,ηX.sub.j].sub.xt.sub.j,η∥/θ.sub.j,η, and ω.sub.ς,j and ω.sub.j,η represent weighting coefficients. For example, in analytical reconstruction of a 3D point based on the depth of field in the left baseline view, it is specified that

[00019] ω.sub.ς,j=θ.sub.ς,j/Σ.sub.1≤j≤n,j≠ς θ.sub.ς,j,

and in this case, coordinates of the current 3D feature point are as follows:


X.sup.W=R.sub.ς.sup.T{circumflex over (z)}.sub.ς.sup.WX.sub.ς+t.sub.ς (for the relative translation form t.sub.i,j=R.sub.j(t.sub.i−t.sub.j); for the form t.sub.i,j=t.sub.j−R.sub.i,jt.sub.i, X.sup.W=R.sub.ς.sup.T({circumflex over (z)}.sub.ς.sup.WX.sub.ς−t.sub.ς))

[0101] Coordinates of all the 3D points can be obtained through analytical reconstruction. Similarly, the coordinates of the 3D points can be obtained through analytical reconstruction based on the depth of field in the right baseline view. An arithmetic mean of the foregoing two categories of coordinate values of the 3D points can be calculated.
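A compact numpy sketch of the weighted analytical reconstruction for one point follows. It is illustrative only: it assumes the translation form t.sub.i,j=R.sub.j(t.sub.i−t.sub.j), the left-view depth d.sub.ς.sup.(ς,j)=∥[X.sub.j].sub.xt.sub.ς,j∥/θ.sub.ς,j, and the back-projection X.sup.W=R.sub.ς.sup.T{circumflex over (z)}.sub.ς.sup.WX.sub.ς+t.sub.ς:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sar_point(x, R, t, s):
    """Weighted analytical depth of the current point in left baseline view s,
    then back-projection to world coordinates (projection model p = R (X - t)).
    x[k]: normalized image coordinates in view k; R, t: global poses."""
    thetas, depths = [], []
    for j in range(len(x)):
        if j == s:
            continue
        R_sj = R[j] @ R[s].T                                   # R_{s,j}
        t_sj = R[j] @ (t[s] - t[j])                            # t_{s,j}
        th = np.linalg.norm(skew(x[j]) @ R_sj @ x[s])          # theta_{s,j}
        thetas.append(th)
        depths.append(np.linalg.norm(skew(x[j]) @ t_sj) / th)  # d_s^{(s,j)}
    w = np.asarray(thetas) / np.sum(thetas)   # omega_{s,j}: parallax-proportional weights
    z_s = float(w @ np.asarray(depths))       # weighted depth z-hat_s^W
    return R[s].T @ (z_s * x[s]) + t[s]       # back-projected world point X^W
```

The parallax-proportional weights down-weight view pairs with near-degenerate (small-parallax) geometry; the right-baseline variant and the arithmetic mean of the two results follow the same pattern.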

[0102] Based on the foregoing pure pose solution method for a multi-view camera pose and scene, the present disclosure further provides a pure pose solution system for a multi-view camera pose and scene, including:

[0103] a PRR module configured to perform PRR on all views, and mark views having a pure rotation abnormality, to obtain marked views and non-marked views;

[0104] a GTL calculation module configured to select one of the non-marked views as a reference view, construct a constraint t.sub.r=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to t.sub.r and {circumflex over (t)}, and screen out a correct solution of the global translation; and

[0105] an SAR module configured to perform analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.

[0106] Those skilled in the art are aware that, in addition to being implemented with pure computer-readable program code, the system and each apparatus, module, and unit thereof provided in the present disclosure can implement the same functions in the form of a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, or an embedded microcontroller by performing logic programming on the method steps. Therefore, the system and each apparatus, module, and unit thereof provided in the present disclosure can be regarded as a hardware component, and the apparatuses, modules, and units included therein for realizing various functions can be regarded as structures in the hardware component; the apparatuses, modules, and units for realizing the functions can also be regarded as software programs for implementing the method or as structures in the hardware component.

[0107] The specific embodiments of the present disclosure are described above. It should be understood that the present disclosure is not limited to the above specific implementations, and a person skilled in the art can make various variations or modifications within the scope of the claims without affecting the essence of the present disclosure. The embodiments in the present disclosure and features in the embodiments may be arbitrarily combined with each other in a non-conflicting manner.