SEGMENTED APERTURE IMAGING AND POSITIONING METHOD OF MULTI-ROTOR UNMANNED AERIAL VEHICLE-BORNE SYNTHETIC APERTURE RADAR
20240319364 · 2024-09-26
CPC classification: G01S13/90 (PHYSICS); G01S13/9011 (PHYSICS)
Abstract
A segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. A target echo is acquired by the unmanned aerial vehicle-borne synthetic aperture radar system. The echo signal is segmented according to the estimated motion state of the manoeuvring platform, and motion compensation is performed on each echo signal segment. A two-dimensional spectrum is obtained by performing a two-dimensional Fourier transform on each compensated echo signal segment, and a series inversion method is used to decompose the two-dimensional spectrum and obtain a phase filter for each segment. The two-dimensional spectrum of each segment is multiplied by its phase filter, and an image of each segment is obtained by performing a two-dimensional inverse Fourier transform. A full-aperture imaging result is obtained by applying geometric corrections to the segment images and splicing them. The trajectories of the segments are likewise spliced to obtain the complete trajectory coordinates of the platform.
Claims
1. (canceled)
11. A segmented aperture imaging method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, for segmented aperture imaging based on a raw echo signal of the multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, comprising: performing a range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal s.sub.RC(t, η), where t is a fast time in a range dimension and η is a slow time in an azimuth dimension; calculating, based on a phase history φ(η) of scattering points in the range pulse compression signal s.sub.RC(t, η), an estimated velocity {circumflex over (v)} and an estimated squint angle of the beam center {circumflex over (θ)} of a manoeuvring platform; segmenting the range pulse compression signal s.sub.RC(t, η) into N segments based on a direction of the estimated velocity {circumflex over (v)}, each segment corresponding to a segmented pulse compression signal s.sub.RC,i(t, η), where i=1 . . . N; calculating a phase compensation amount φ.sub.m,i of each segmented pulse compression signal s.sub.RC,i(t, η) corresponding to said each segment based on the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}; multiplying said each segmented pulse compression signal s.sub.RC,i(t, η) by a motion error compensation filter H.sub.MC,i=exp(−jφ.sub.m,i), where an imaginary unit j=√{square root over (−1)}, to obtain N compensated echo signals, the compensated echo signal of said each segment denoted as s.sub.MC,i(t, η); performing a two-dimensional Fourier transform on each compensated echo signal s.sub.MC,i(t, η) to obtain a two-dimensional spectrum s.sub.MC,i(f, f.sub.d); decomposing the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of said each segment utilizing a series inversion method to construct an azimuth compression filter H.sub.AC,i, where f represents a frequency corresponding to the fast time in the range dimension and f.sub.d represents a Doppler frequency corresponding to the slow time in the azimuth dimension; multiplying the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) by the azimuth compression filter H.sub.AC,i, and then performing a two-dimensional inverse Fourier transform to obtain N imaging results, represented as s.sub.IMG,i(t, η) for said each segment; and sequentially, for overlapping areas in the imaging results s.sub.IMG,i(t, η) corresponding to adjacent segments, aligning and coherently integrating envelopes in the range dimension where focus points are located, and for non-overlapping areas, obtaining a final imaging result, denoted as S.sub.all, by splicing.
12. The segmented aperture imaging method of claim 11, wherein for the scattering points in the range pulse compression signal s.sub.RC(t, η), performing a second-order fitting on the phase history φ(η) to obtain a phase history of the scattering points in the slow time in the azimuth dimension as φ(η)=αη.sup.2+βη+φ.sub.0+o(η), where o(η) represents a higher-order phase error and φ.sub.0 is a constant phase term; based on coefficients of a second-order term α and a first-order term β, the estimated velocity of the manoeuvring platform is calculated as
13. The segmented aperture imaging method of claim 11, wherein based on the direction of the estimated velocity {circumflex over (v)}, the range pulse compression signal s.sub.RC(t, η) is sequentially divided into N segments with consistent velocity directions; and determining whether a length of said each segment is less than one synthetic aperture length, and if the length of said each segment is determined to be less than one synthetic aperture length, extending said each segment on both sides to one synthetic aperture length, to obtain N segmented pulse compression signals.
14. The segmented aperture imaging method of claim 11, wherein the phase compensation amount of said each segment is defined as
15. The segmented aperture imaging method of claim 11, wherein the azimuth compression filter of said each segment is defined as
16. The segmented aperture imaging method of claim 11, further comprising: applying a geometric correction to the imaging result s.sub.IMG,i(t, η) corresponding to the adjacent segments to obtain a corrected imaging result s.sub.IMG,i.sup.GC; rotating the corrected imaging result s.sub.IMG,i.sup.GC by {circumflex over (θ)}−θ.sub.0 degrees to obtain a corrected imaging result perpendicular to a trajectory of the manoeuvring platform in a slant range, where θ.sub.0 is a mean value of the estimated squint angle of the beam center of said each segment; and aligning consecutive overlapping areas of the corrected imaging results s.sub.IMG,i.sup.GC corresponding to the adjacent segments in the envelopes.
17. The segmented aperture imaging method of claim 16, wherein applying the geometric correction to the imaging result s.sub.IMG,i(t, η) comprises: performing a Fourier transform in the azimuth dimension on the imaging result s.sub.IMG,i(t, η) to obtain a range-Doppler domain image s.sub.IMG,i(t, f.sub.d) of said each segment; and based on characteristics of the Fourier transform and a geometric structure of a target space, constructing an expression for a tilt correction filter to
18. The segmented aperture imaging method of claim 11, wherein the calculations of the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)} comprise: calculating a Doppler frequency modulation slope K.sub.a and a Doppler center f.sub.dc, as follows:
19. The segmented aperture imaging method of claim 11, wherein decomposing the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of said each segment comprises: based on a stationary phase method, obtaining an expression of the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of said each segment as
20. A segmented aperture positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, for calculating a flight trajectory of an unmanned aerial vehicle based on a raw echo signal from the multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, comprising: performing a range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal s.sub.RC(t, η), where t is a fast time in a range dimension and η is a slow time in an azimuth dimension; calculating, based on a phase history φ(η) of scattering points in the s.sub.RC(t, η), an estimated velocity {circumflex over (v)} and an estimated squint angle of the beam center {circumflex over (θ)} of a manoeuvring platform; segmenting the range pulse compression signal s.sub.RC(t, η) into N segments based on a direction of the estimated velocity {circumflex over (v)}, each segment corresponding to a segmented pulse compression signal s.sub.RC,i(t, η), where i=1 . . . N; and calculating platform trajectory coordinates [X.sub.k.sup.i, Y.sub.k.sup.i, Z.sub.k.sup.i] for the i-th segment based on the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}, where k=1 . . . M and M is a length of said each segment in an azimuth direction, as follows:
Description
DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0031] In order to make the technical means, creative features, objectives, and effects of this invention easy to understand, the following provides a detailed explanation of a segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, in conjunction with examples and accompanying figures.
[0032] In the present invention, after the unmanned aerial vehicle takes off, the unmanned aerial vehicle-borne synthetic aperture radar transmits a linear frequency-modulated signal with a carrier frequency of f.sub.c through a transmitting antenna. The transmitted signal, after being scattered by a target, is received by the radar through a receiving antenna, resulting in the raw echo signal s(t, η), where t is the fast time related to the range dimension, and η is the slow time related to the azimuth dimension. The synthetic aperture length L.sub.s is given by L.sub.s=R·θ.sub.BW, where R is the reference range, and θ.sub.BW is the azimuth beam width. The invention provides a segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. This method is used for segmented aperture imaging based on the raw echo signal of an unmanned aerial vehicle-borne synthetic aperture radar, and includes the following steps:
[0033] Step 1, perform range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal s.sub.RC(t, η), where t is the fast time in the range dimension, and η is the slow time in the azimuth dimension. On the basis of the phase history φ(η) of strong scattering points in the range pulse compression signal s.sub.RC(t, η), the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)} are calculated;
[0034] Step 2, on the basis of the direction of the estimated velocity {circumflex over (v)}, segment the range pulse compression signal s.sub.RC(t, η) into N segments, with each segment corresponding to the echo signal s.sub.RC,i(t, η), where i=1 . . . N;
[0035] Step 3, based on the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}, calculate the phase compensation amount φ.sub.m,i for the echo signal s.sub.RC,i(t, η) of each segment. Multiply the echo signal s.sub.RC,i(t, η) of each segment by the motion error compensation filter H.sub.MC,i=exp(−jφ.sub.m,i), where the imaginary unit j=√{square root over (−1)}, obtaining N compensated echo signals, denoted as s.sub.MC,i(t, η) for each segment;
[0036] Step 4, perform a two-dimensional Fourier transform on the compensated echo signal s.sub.MC,i(t, η) of each segment to obtain the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of each segment. Utilize the series inversion method to decompose the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of each segment, constructing the azimuth compression filter H.sub.AC,i of each segment, where f represents the frequency corresponding to the fast time t in the range dimension and f.sub.d represents the Doppler frequency corresponding to the slow time η in the azimuth dimension;
[0037] Step 5, multiply the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) of each segment by the azimuth compression filter H.sub.AC,i of each segment, and then perform a two-dimensional inverse Fourier transform to obtain N imaging results, represented as s.sub.IMG,i(t, η) for each segment;
[0038] Step 6, sequentially, for the overlapping areas in the imaging results s.sub.IMG,i(t, η) corresponding to adjacent segments, align and coherently integrate the envelopes in the range dimension where the strong focus points are located. For the non-overlapping areas, perform splicing to obtain the final full-aperture imaging result, denoted as S.sub.all.
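Steps 1 through 6 can be sketched end to end in NumPy. The skeleton below is a toy illustration only: the range compression, phase compensation amount, and azimuth compression filter are identity placeholders (the real filters are derived in the later steps of this text), and all helper names and array shapes are assumptions.

```python
# Toy end-to-end sketch of the six-step segmented-aperture imaging flow.
# Placeholders: identity range compression, zero phase compensation, and a
# unity azimuth filter, so the skeleton is self-contained and checkable.
import numpy as np

def segmented_aperture_image(s_raw, n_segments):
    """s_raw: complex echo, axis 0 = fast time t, axis 1 = slow time eta."""
    # Step 1: range pulse compression (placeholder: identity).
    s_rc = s_raw.copy()
    # Step 2: segment along the azimuth (slow-time) axis.
    segments = np.array_split(s_rc, n_segments, axis=1)
    images = []
    for seg in segments:
        phi_m = np.zeros(seg.shape)          # Step 3: phase compensation amount
        s_mc = seg * np.exp(-1j * phi_m)     # motion error compensation filter
        spec = np.fft.fft2(s_mc)             # Step 4: two-dimensional spectrum
        h_ac = np.ones_like(spec)            # azimuth compression filter
        img = np.fft.ifft2(spec * h_ac)      # Step 5: per-segment image
        images.append(img)
    # Step 6: splice (no overlap handling in this toy version).
    return np.concatenate(images, axis=1)

rng = np.random.default_rng(0)
echo = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
img = segmented_aperture_image(echo, n_segments=4)
```

With identity filters the pipeline must return the input unchanged, which makes the data flow easy to verify before real filters are substituted.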
[0039] Within Step 1, for the strong scattering points in the range pulse compression signal s.sub.RC(t, η), perform a second-order fitting on the phase history φ(η) to obtain the phase history of the strong scattering points in the slow time in the azimuth dimension as φ(η)=αη.sup.2+βη+φ.sub.0+o(η), where t is the fast time in the range dimension, η is the slow time in the azimuth dimension, o(η) represents higher-order phase errors and φ.sub.0 is a constant phase term. On the basis of the coefficients of the second-order term α and the first-order term β, the estimated velocity of the manoeuvring platform is calculated as
and the estimated squint angle of the beam center is calculated as
where λ is the wavelength of the transmitted signal and {circumflex over (R)} is the estimated value of the reference range.
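The Step 1 estimation can be sketched numerically. The mapping below from the fit coefficients (α, β) to (v̂, θ̂) assumes the standard hyperbolic range model with phase φ(η) = −(4π/λ)R(η) expanded to second order; the patent's own equations are images in the source and may use a different sign convention, so treat this as an illustrative sketch.

```python
# Hedged sketch: estimate platform velocity and beam-center squint angle from
# a second-order fit of a scatterer's phase history. Parameter values are
# only loosely modeled on Table 1 of this text.
import numpy as np

lam, R0 = 0.0197, 709.86                 # wavelength [m], reference range [m]
v_true, theta_true = 40.0, np.deg2rad(3.0)

eta = np.linspace(-0.5, 0.5, 1001)       # slow time [s]
R = (R0 - v_true * np.sin(theta_true) * eta
     + (v_true**2 * np.cos(theta_true)**2 / (2 * R0)) * eta**2)
phi = -4 * np.pi / lam * R               # phase history of a strong scatterer

# Second-order fit: phi(eta) ~ alpha*eta^2 + beta*eta + phi0
alpha, beta, _ = np.polyfit(eta, phi, 2)

# Invert the assumed model (the reference range estimate is taken as known):
v_sin = lam * beta / (4 * np.pi)          # v * sin(theta)
v_cos2 = -alpha * lam * R0 / (2 * np.pi)  # v^2 * cos^2(theta)
v_hat = np.sqrt(v_sin**2 + v_cos2)
theta_hat = np.arcsin(v_sin / v_hat)
```

On this noise-free quadratic phase the fit recovers the simulated velocity and squint angle essentially exactly.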
[0040] Within Step 2, on the basis of the direction of the estimated velocity {circumflex over (v)}, the range pulse compression signal s.sub.RC(t, η) is sequentially divided into N segments with consistent velocity directions. Then, it is determined whether the length of each segment is less than one synthetic aperture length. If it is less, the segment is extended on both sides to one synthetic aperture length. Finally, N segmented pulse compression signals are obtained, denoted as s.sub.RC,i(t, η) for each segment, where i=1 . . . N.
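The Step 2 segmentation rule can be sketched as index bookkeeping: split the azimuth axis into runs of consistent velocity direction, then widen any run shorter than one synthetic aperture. The symmetric-padding and clamping details below are assumptions, since the patent only states that short segments are extended on both sides.

```python
# Hedged sketch of direction-consistent segmentation with extension of short
# segments to one synthetic aperture length (L_s, in pulses).
import numpy as np

def segment_by_direction(v_sign, L_s):
    """v_sign: per-pulse sign of the estimated along-track velocity.
    Returns a list of (start, stop) index pairs, stop exclusive."""
    n = len(v_sign)
    # Boundaries wherever the velocity direction changes.
    cuts = [0] + [k for k in range(1, n) if v_sign[k] != v_sign[k - 1]] + [n]
    segs = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        if b - a < L_s:                        # extend on both sides to L_s
            pad = (L_s - (b - a) + 1) // 2
            a, b = max(0, a - pad), min(n, b + pad)
            b = min(n, a + max(L_s, b - a))    # keep at least L_s if room
        segs.append((a, b))
    return segs

# Demo: forward flight, a short reversal, then forward again.
signs = np.sign(np.concatenate([np.ones(300), -np.ones(40), np.ones(300)]))
segs = segment_by_direction(signs, L_s=100)
```

Here the 40-pulse reversal is widened to a full 100-pulse segment, which then deliberately overlaps its neighbours; the overlap is what Step 6 later integrates coherently.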
[0041] Within Step 3, the phase compensation amount of each segment is defined as
where R.sub.0 is the mean value of the estimated reference range {circumflex over (R)} of each segment, and θ.sub.0 is the mean value of the estimated squint angle of the beam center {circumflex over (θ)} of each segment.
[0042] Within Step 4, the azimuth compression filter of each segment is defined as
where f corresponds to the frequency associated with the fast time t in the range dimension, f.sub.d is the Doppler frequency corresponding to the slow time η in the azimuth dimension, f.sub.c is the carrier frequency of the system transmitted signal and c is the speed of light.
[0043] Within Step 6, firstly, geometric corrections are applied to the imaging results s.sub.IMG,i(t, η) corresponding to the adjacent segments, obtaining the corrected imaging result s.sub.IMG,i.sup.GC. Subsequently, the corrected imaging result s.sub.IMG,i.sup.GC is rotated by {circumflex over (θ)}−θ.sub.0 degrees, obtaining a corrected imaging result perpendicular to the trajectory of the manoeuvring platform in slant range. Then, consecutive overlapping areas of the corrected imaging results s.sub.IMG,i.sup.GC corresponding to adjacent segments are aligned in the range-dimension envelopes where the strong focus points are located, and coherent integration is performed. For the non-overlapping areas, splicing is executed, obtaining the final full-aperture imaging result S.sub.all.
[0044] The invention also provides a segmented aperture positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. This method is used to calculate the flight trajectory of the unmanned aerial vehicle platform based on the echo signal from the synthetic aperture radar. It is characterized by the following steps:
[0045] Step S1, perform range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal s.sub.RC(t, η), and based on the phase history φ(η) of strong scattering points in the s.sub.RC(t, η), calculate the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)};
[0046] Step S2, on the basis of the direction of the estimated velocity {circumflex over (v)}, segment the range pulse compression signal s.sub.RC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal s.sub.RC,i(t, η), where i=1 . . . N. Then calculate the platform trajectory coordinates [X.sub.k.sup.i, Y.sub.k.sup.i, Z.sub.k.sup.i] for the i-th segment based on the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}, where k=1 . . . M and M is the length of each segment in the azimuth direction. The coordinates are calculated as follows:
where θ.sub.in represents the angle between the beam direction of the synthetic aperture radar and the normal direction of the ground plane;
[0047] Step S3, in the adjacent regions between the i-th and (i−1)-th segments, extract three strong scattering points. The coordinates of the strong scattering points in the i-th segment are denoted as [Q.sub.1.sup.i, Q.sub.2.sup.i, Q.sub.3.sup.i], and the coordinates of the strong scattering points in the (i−1)-th segment are denoted as [Q.sub.1.sup.i−1, Q.sub.2.sup.i−1, Q.sub.3.sup.i−1];
[0048] Step S4, calculate the rotation matrix γ for the i-th and (i−1)-th segments based on the coordinates of the strong scattering points [Q.sub.1.sup.i−1, Q.sub.2.sup.i−1, Q.sub.3.sup.i−1] and [Q.sub.1.sup.i, Q.sub.2.sup.i, Q.sub.3.sup.i]. The rotation matrix γ is computed as follows:
[0049] Step S5, use the platform trajectory coordinates [X.sub.k.sup.i−1, Y.sub.k.sup.i−1, Z.sub.k.sup.i−1] of the (i−1)-th segment as the reference to rotate the platform trajectory coordinates of the i-th segment. This rotation aligns the platform trajectory coordinates of adjacent segments, which are given by [X.sub.k.sup.i−1, Y.sub.k.sup.i−1, Z.sub.k.sup.i−1]=γ.Math.[X.sub.k.sup.i, Y.sub.k.sup.i, Z.sub.k.sup.i];
[0050] Step S6, perform coherent integration of the platform trajectory coordinates in the overlapping region of the i-th and (i−1)-th segments, and concatenate the platform trajectory coordinates in the non-overlapping region to obtain the concatenated trajectory coordinates, which are calculated by [P.sub.x, P.sub.y, P.sub.z]=[X.sub.k.sup.i, Y.sub.k.sup.i, Z.sub.k.sup.i]+[X.sub.k.sup.i−1, Y.sub.k.sup.i−1, Z.sub.k.sup.i−1];
[0051] Step S7, repeat steps S3 through S6 until the spliced trajectory coordinates [P.sub.x, P.sub.y, P.sub.z] for all segments are obtained, so as to obtain the final trajectory coordinates [P.sub.x.sup.all, P.sub.y.sup.all, P.sub.z.sup.all] of the platform.
EMBODIMENTS
[0052] In this embodiment, the manoeuvring platform refers to a multi-rotor unmanned aerial vehicle, and the onboard synthetic aperture radar operates in the Ku band as a linear frequency-modulated continuous-wave radar. The target refers to the ground area that requires synthetic aperture imaging.
[0053] As shown in
[0054] Step 1, after the unmanned aerial vehicle takes off, the unmanned aerial vehicle-borne synthetic aperture radar transmits a linear frequency-modulated signal with a carrier frequency of f.sub.c through a transmitting antenna. The transmitted signal, after being scattered by a target, is received by the radar through a receiving antenna, resulting in the raw echo signal s(t, η), where t is the fast time in the range dimension, and η is the slow time in the azimuth dimension. Perform range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal s.sub.RC(t, η). On the basis of the phase history φ(η) of strong scattering points in the range pulse compression signal s.sub.RC(t, η), the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)} are calculated.
[0055] Specifically perform as follows in sub-steps:
[0056] Step 1-1, based on the definitions of the Doppler frequency modulation slope K.sub.a and the Doppler center f.sub.dc, the expressions for calculating K.sub.a and f.sub.dc are obtained as follows:
where
denotes the derivative of φ(η) with respect to the slow time η in azimuth, η is the slow time in the azimuth dimension and φ(η) is the phase history of any strong scattering point in the ground area;
[0057] Step 1-2, designate the space constituted by the echo signal as the signal space.
[0058] For the range pulse compression signal s.sub.RC(t, η), where t is the fast time in the range dimension, perform a second-order fitting on the phase history φ(η) of the strong scattering points, obtaining the expression of such strong scatterers in the signal space as follows:
where α is the coefficient of the second-order term, β is the coefficient of the first-order term, φ.sub.0 is the constant phase term and o(η) represents the higher-order phase error;
[0059] Step 1-3, substitute the above Eq. <3> into Eqs. <1> and <2>, obtaining the expressions of K.sub.a and f.sub.dc in the signal space as follows:
[0060] Step 1-4, define the space constructed by the actual positions of the target and the manoeuvring platform as the target space. On the basis of spatial geometric relationships and law of cosines, the calculation formula for the range R between the target and the platform is obtained as follows:
where v represents the velocity of the manoeuvring platform, θ represents the squint angle of the beam center caused by the movement of the manoeuvring platform and R.sub.0 represents the closest range between the target and the manoeuvring platform.
[0061] Perform a Taylor expansion of the above Eq. <6> at η=0, retaining terms up to the second order, and the expression of R is obtained:
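Eqs. <6> and <7> themselves are images in the source. A reconstruction consistent with the surrounding derivation (law of cosines for the range history, Taylor expansion at η=0, truncation at second order) would take the standard forms below; this is a hedged sketch, not the patent's exact equations.

```latex
% Hedged reconstruction of Eqs. <6>-<7>: law-of-cosines range history and
% its second-order Taylor expansion about eta = 0.
R(\eta) = \sqrt{R_0^2 - 2 R_0 v \eta \sin\theta + v^2 \eta^2}
\;\approx\; R_0 - v\eta\sin\theta + \frac{v^2\cos^2\theta}{2R_0}\,\eta^2 .
```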
[0062] On the basis of the phase calculation formula and definition, the expression for the phase history ?(?) in the target space is obtained as follows:
where λ represents the wavelength of the system transmitted signal; Step 1-5, substitute the above Eq. <8> into Eqs. <1> and <2>, respectively, obtaining the expressions of K.sub.a and f.sub.dc in the target space as follows:
[0063] Step 1-6, by comparing the right sides of the above Eqs. <4> and <9>, and the right sides of the above Eqs. <5> and <10>, the estimated velocity, the estimated squint angle of the beam center, and the estimated range are obtained as follows:
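The comparison in Step 1-6 is not reproduced in the source text (the equations are images). Under one common convention, f.sub.d = (1/2π)·dφ/dη and K.sub.a = (1/2π)·d²φ/dη², matching the signal-space fit φ(η)=αη²+βη+φ.sub.0 to the target-space phase φ(η) = −4πR(η)/λ gives the hedged sketch below; the patent's own sign conventions may differ.

```latex
% Hedged sketch of the Step 1-6 identities and the resulting estimators.
K_a = \frac{\alpha}{\pi} = -\frac{2 v^2 \cos^2\theta}{\lambda R_0},
\qquad
f_{dc} = \frac{\beta}{2\pi} = \frac{2 v \sin\theta}{\lambda},
\\[4pt]
\hat{v} = \sqrt{\left(\frac{\lambda f_{dc}}{2}\right)^{2}
               - \frac{K_a \lambda \hat{R}}{2}},
\qquad
\hat{\theta} = \arcsin\!\left(\frac{\lambda f_{dc}}{2\hat{v}}\right).
```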
[0064] Step 2, on the basis of the direction of the estimated velocity {circumflex over (v)}, segment the range pulse compression signal s.sub.RC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal s.sub.RC,i(t, η), where i=1 . . . N. Specifically perform as follows:
[0065] On the basis of the positive or negative direction of the estimated velocity {circumflex over (v)}, the range pulse compression signal s.sub.RC(t, η) is preliminarily divided into segments consistent with the velocity direction. Next, check whether the length of each segment is less than one synthetic aperture length, which is given by L.sub.s=R·θ.sub.BW. If it is less, extend the segment on both sides to one synthetic aperture length; otherwise, do not process it. Finally, the range pulse compression signal s.sub.RC(t, η) is divided into N segments, and the echo signal corresponding to each segment is represented as s.sub.RC,i(t, η), where i=1 . . . N. Step 3, on the basis of the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}, the phase compensation amount φ.sub.m,i for the segmented pulse compression signal s.sub.RC,i(t, η) is calculated. Multiply the segmented pulse compression signal s.sub.RC,i(t, η) of each segment by the motion error compensation filter H.sub.MC,i=exp(−jφ.sub.m,i), where j=√{square root over (−1)}, obtaining N compensated echo signals s.sub.MC,i(t, η) of each segment. This realizes the motion compensation of the segmented pulse compression signal s.sub.RC,i(t, η). Specifically perform as follows:
[0066] Step 3-1, the influence on the phase of the slant range change ΔR, caused by the change in the platform motion state, is calculated as follows:
where v.sub.0 is the mean value of the estimated velocity {circumflex over (v)} and θ.sub.0 is the mean value of the estimated squint angle of the beam center {circumflex over (θ)};
[0067] Step 3-2, the expression for the phase compensation amount corresponding to the segmented pulse compression signal s.sub.RC,i(t, η) of each segment is derived as follows:
[0068] Step 3-3, the expression for the motion error compensation filter is derived as follows:
where j represents the imaginary unit, given by j=√{square root over (−1)};
[0069] Step 3-4, multiply the segmented pulse compression signal s.sub.RC,i(t, η) of each segment by the motion error compensation filter as described in Eq. <14>, obtaining the compensated echo signal s.sub.MC,i(t, η) of each segment.
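The Step 3 compensation can be demonstrated on a toy azimuth signal: inject a known slant-range error, then remove it with a filter exp(−jφ.sub.m). The phase expression φ.sub.m = 4πΔR/λ used here is the usual two-way round-trip phase; the patent's exact compensation expression is an image in the source, so this is a sketch under that assumption.

```python
# Hedged sketch of per-segment motion error compensation: a known slant-range
# deviation dR(eta) injected into a toy signal is cancelled by multiplying
# with H_MC = exp(-1j * phi_m), phi_m = 4*pi*dR/lambda.
import numpy as np

lam = 0.0197                                   # wavelength [m], illustrative
eta = np.linspace(-0.1, 0.1, 256)              # slow time within one segment
dR = 0.01 * np.sin(2 * np.pi * 5 * eta)        # simulated range deviation [m]

s_ideal = np.exp(1j * 2 * np.pi * 50 * eta)    # toy signal of one range bin
s_err = s_ideal * np.exp(1j * 4 * np.pi * dR / lam)  # with motion phase error

phi_m = 4 * np.pi * dR / lam                   # phase compensation amount
s_mc = s_err * np.exp(-1j * phi_m)             # compensated segment
```

After compensation the signal matches the error-free reference exactly, since the injected and compensated phases cancel sample by sample.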
[0070] Step 4, perform a two-dimensional Fourier transform on the compensated echo signal s.sub.MC,i(t, η) to obtain the two-dimensional spectrum s.sub.MC,i(f, f.sub.d). Utilize the series inversion method to decompose the two-dimensional spectrum s.sub.MC,i(f, f.sub.d), constructing the azimuth compression filter H.sub.AC,i, where f represents the frequency corresponding to the fast time t in the range dimension and f.sub.d represents the Doppler frequency corresponding to the slow time η in the azimuth dimension. Specifically perform as follows:
[0071] Step 4-1, perform a two-dimensional Fourier transform on the compensated echo signal s.sub.MC,i(t, η) to obtain the two-dimensional spectrum s.sub.MC,i(f, f.sub.d), where f represents the frequency corresponding to the fast time t in the range dimension and f.sub.d represents the Doppler frequency corresponding to the slow time η in the azimuth dimension;
[0072] Step 4-2-1, on the basis of the stationary phase method, the expression of the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) corresponding to the segment is obtained as:
where f.sub.c represents the carrier frequency of the system transmitted signal and c represents the speed of light;
[0073] Step 4-2-2, using the series inversion method, decompose the above Eq. <17> to obtain the two-dimensional spectrum expression of each segment that eliminates the coupling terms between f and f.sub.d:
[0074] Step 4-2-3, on the basis of the above Eq. <18>, derive the expression of each segment for the ideal phase filter:
[0075] Step 4-2-4, substitute the estimated velocity {circumflex over (v)}, the estimated squint angle of the beam center {circumflex over (θ)} and the estimated reference range {circumflex over (R)} into the above Eq. <19> to obtain the azimuth compression filter expression of each segment:
[0076] Step 5, multiply the two-dimensional spectrum s.sub.MC,i(f, f.sub.d) by the azimuth compression filter H.sub.AC,i of each segment, and then perform a two-dimensional inverse Fourier transform to obtain N imaging results, represented as s.sub.IMG,i(t, η) for each segment.
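The frequency-domain compression in Steps 4-5 can be illustrated in one dimension: a 1-D azimuth chirp stands in for one range bin, and an ideal quadratic-phase matched filter stands in for the patent's series-inversion filter H.sub.AC,i. The chirp rate, duration, and sampling rate are made-up values.

```python
# Hedged 1-D sketch of azimuth compression: multiply the spectrum by a
# quadratic-phase filter, inverse-transform, and check that the point
# target focuses at the chirp center.
import numpy as np

Ka, T, fs = -800.0, 0.5, 2048.0           # chirp rate [Hz/s], duration, rate
eta = np.arange(-T / 2, T / 2, 1 / fs)    # slow time, target at eta = 0
s = np.exp(1j * np.pi * Ka * eta**2)      # azimuth chirp of a point target

S = np.fft.fft(s)
fd = np.fft.fftfreq(len(s), 1 / fs)       # Doppler frequency axis
H_ac = np.exp(1j * np.pi * fd**2 / Ka)    # matched filter: cancels -pi*f^2/Ka
img = np.fft.ifft(S * H_ac)               # focused azimuth response

peak = int(np.argmax(np.abs(img)))        # expected near the array center
```

The compressed response is a narrow peak at the sample corresponding to η=0, with amplitude well above the sidelobe floor, which is the behaviour the per-segment filter H.sub.AC,i is designed to produce.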
[0077] Step 6, refer to
[0078] Step 6-2, on the basis of the characteristics of the Fourier transform and the geometric structure of the target space, construct the expression for a tilt correction filter to correct the tilt of the image:
where i represents the range scale of a typical building.
[0079] Step 6-3, multiply the range-Doppler domain image s.sub.IMG,i(t, f.sub.d) by the tilt correction filter H.sub.GC−1, obtaining the tilt-corrected frequency domain image s.sub.IMG,i.sup.GC−1(t, f.sub.d);
[0080] Step 6-4, perform the inverse Fourier transform in the azimuth dimension on the tilt-corrected frequency domain image s.sub.IMG,i.sup.GC−1(t, f.sub.d), obtaining the tilt-corrected time domain image s.sub.IMG,i.sup.GC−1(t, η);
[0081] Step 6-5, on the basis of the geometric structure of the target space, obtain the expression for the stretch/compression factor:
[0082] Step 6-6, substitute the aforementioned Eq. <22> into the tilt-corrected time domain image s.sub.IMG,i.sup.GC−1(t, η), obtaining the deformation-corrected time domain image s.sub.IMG,i.sup.GC−2(t, η);
[0083] Step 6-7, perform the Fourier transform in the range dimension on the deformation-corrected time domain image s.sub.IMG,i.sup.GC−2(t, η), obtaining the deformation-corrected frequency domain image s.sub.IMG,i.sup.GC−2(f, η);
[0084] Step 6-8, on the basis of the characteristics of the Fourier transform and the geometric structure of the target space, construct the expression for a secondary position correction filter to correct image translation:
[0085] Step 6-9, multiply the deformation-corrected frequency domain image s.sub.IMG,i.sup.GC−2(f, η) by the position correction filter H.sub.GC−3, as described in Eq. <23>, to obtain the geometrically corrected frequency domain image s.sub.IMG,i.sup.GC−3(f, η);
[0086] Step 6-10, perform the inverse Fourier transform in the range dimension on the geometrically corrected frequency domain image s.sub.IMG,i.sup.GC−3(f, η), to obtain the geometrically corrected imaging result of each segment, denoted as s.sub.IMG,i.sup.GC(t, η);
[0087] Step 6-11, rotate the geometrically corrected time-domain image s.sub.IMG,i.sup.GC(t, η) counterclockwise by {circumflex over (θ)}−θ.sub.0 degrees to obtain the time-domain image to be spliced, s.sub.IMG,i.sup.GC−p(t, η), which is perpendicular to the trajectory of the manoeuvring platform;
[0088] Step 6-12, sequentially align the envelopes in the range dimension where the strong focus points are located in the overlap regions of the adjacent to-be-spliced time-domain images s.sub.IMG,i.sup.GC−p(t, η);
[0089] Step 6-13, perform coherent integration in the overlapping regions and sequentially connect the non-overlapping regions, completing segments splicing and obtaining the full-aperture imaging result S.sub.all.
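Steps 6-12 and 6-13 can be sketched on 1-D profiles: align two overlapping sub-images by the envelope peak of a shared strong scatterer, coherently integrate the overlap, and concatenate the rest. The peak-based alignment and the averaging used for coherent integration are illustrative assumptions; the patent does not specify these details in the extracted text.

```python
# Hedged 1-D sketch of envelope alignment, coherent integration of the
# overlap, and splicing of adjacent segment images.
import numpy as np

def splice(a, b, overlap):
    """a, b: complex azimuth profiles; the last `overlap` samples of `a`
    cover the same ground as the first `overlap` samples of `b`."""
    # Envelope alignment: shift b so its overlap peak matches a's.
    ea, eb = np.abs(a[-overlap:]), np.abs(b[:overlap])
    shift = int(np.argmax(ea)) - int(np.argmax(eb))
    b = np.roll(b, shift)
    # Coherent integration over the overlap (averaged to keep amplitude).
    mid = 0.5 * (a[-overlap:] + b[:overlap])
    return np.concatenate([a[:-overlap], mid, b[overlap:]])

# Demo: one strong point in the overlap, misregistered by 3 samples in b.
a = np.zeros(200, complex); a[180] = 1.0
b = np.zeros(200, complex); b[23] = 1.0
spliced = splice(a, b, overlap=40)
```

After alignment the shared scatterer lands on a single sample of the spliced profile with its amplitude preserved, rather than appearing twice or being smeared.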
[0090] The positioning part of the segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar includes the following steps:
[0091] Step S1, perform range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal s.sub.RC(t, η), and based on the phase history φ(η) of strong scattering points in the s.sub.RC(t, η), calculate the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)};
[0092] This step is identical to Step 1 in the part concerning the imaging method, and thus, it is not reiterated here.
[0093] Step S2, on the basis of the direction of the estimated velocity {circumflex over (v)}, segment the range pulse compression signal s.sub.RC(t, η) into N segments, with each segment corresponding to the echo signal s.sub.RC,i(t, η), where i=1 . . . N. Additionally, calculate the platform trajectory coordinates [X.sub.k.sup.i, Y.sub.k.sup.i, Z.sub.k.sup.i] for the i-th segment based on the estimated velocity {circumflex over (v)} and the estimated squint angle of the beam center {circumflex over (θ)}, where k=1 . . . M and M is the length of each segment in the azimuth direction. The coordinates are calculated as follows:
where θ.sub.in represents the angle between the beam direction of the synthetic aperture radar and the normal direction of the ground plane;
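The coordinate equations of Step S2 are images in the source. One plausible model, offered purely as an assumption, is a locally straight constant-velocity track: along-track motion at v̂ rotated by the squint estimate, with the platform height fixed by the incidence angle. Every geometric choice in this sketch (axis assignment, the height formula, the zero starting point) is hypothetical.

```python
# Hedged sketch of a per-segment trajectory model. All geometry here is an
# illustrative assumption, not the patent's actual Eq. for [X, Y, Z].
import numpy as np

def segment_trajectory(v_hat, theta_hat, R_hat, theta_in, M, prf):
    k = np.arange(M)
    eta = k / prf                                # slow time within the segment
    X = v_hat * np.cos(theta_hat) * eta          # along-track position
    Y = v_hat * np.sin(theta_hat) * eta          # cross-track drift from squint
    Z = np.full(M, R_hat * np.cos(theta_in))     # height from incidence angle
    return np.stack([X, Y, Z], axis=1)

# Values loosely modeled on Table 1 of this text.
traj = segment_trajectory(40.0, np.deg2rad(3.0), 709.86,
                          np.deg2rad(65.0), M=100, prf=2000.0)
```

One sanity check: with Table 1's numbers (R.sub.0 = 709.86 m, θ.sub.in = 65°), the modeled height R̂·cos θ.sub.in evaluates to about 300 m, matching the listed flight height of the platform.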
[0094] Step S3, in the adjacent regions between the i-th and (i−1)-th segments, extract three strong scattering points. The coordinates of the strong scattering points in the i-th segment are denoted as [Q.sub.1.sup.i, Q.sub.2.sup.i, Q.sub.3.sup.i], and the coordinates of the strong scattering points in the (i−1)-th segment are denoted as [Q.sub.1.sup.i−1, Q.sub.2.sup.i−1, Q.sub.3.sup.i−1];
[0095] Step S4, calculate the rotation matrix γ for the i-th and (i−1)-th segments based on the coordinates of the strong scattering points [Q.sub.1.sup.i−1, Q.sub.2.sup.i−1, Q.sub.3.sup.i−1] and [Q.sub.1.sup.i, Q.sub.2.sup.i, Q.sub.3.sup.i]. The rotation matrix γ is computed as follows:
[0096] Step S5, use the platform trajectory coordinates [X.sub.k.sup.i−1, Y.sub.k.sup.i−1, Z.sub.k.sup.i−1] of the (i−1)-th segment as the reference to rotate the platform trajectory coordinates of the i-th segment. This rotation aligns the platform trajectory coordinates of adjacent segments, which are as follows:
[0097] Step S6, perform coherent integration of the platform trajectory coordinates in the overlapping region of the i-th and (i-1)-th segments, and concatenate the platform trajectory coordinates in the non-overlapping regions to obtain the concatenated trajectory coordinates [P.sub.x.sup.i, P.sub.y.sup.i, P.sub.z.sup.i], as follows:
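A minimal sketch of this step, assuming the adjacent segments share `overlap` azimuth samples and that the coherent integration of the overlapping coordinates is realised as a simple average (an assumption; the patent's exact combination rule is not reproduced here):

```python
import numpy as np

def splice_trajectories(traj_prev, traj_curr, overlap):
    """Fuse two (M, 3) trajectory-coordinate arrays sharing `overlap` samples.

    traj_prev : trajectory of segment i-1 (already in the reference frame)
    traj_curr : trajectory of segment i, rotated into the same frame (Step S5)
    overlap   : number of azimuth samples shared by the two segments
    """
    # Average the coordinates where both segments observe the same pulses.
    fused = 0.5 * (traj_prev[-overlap:] + traj_curr[:overlap])
    # Concatenate the non-overlapping parts around the fused region.
    return np.concatenate([traj_prev[:-overlap], fused, traj_curr[overlap:]])
```

Repeating this pairwise fusion for i = 2 . . . N, as in Step S7, yields the full spliced trajectory.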
[0098] Step S7, repeat steps S3 through S6 until the spliced trajectory coordinates [P.sub.x, P.sub.y, P.sub.z] of all segments are obtained, so as to obtain the final trajectory coordinates [P.sub.x.sup.all, P.sub.y.sup.all, P.sub.z.sup.all] of the platform.
[0099] The advantages of this method are further illustrated below in conjunction with specific comparative verification results.
[0100] 1. Simulation Conditions:
[0101] As listed in Table 1:
TABLE 1  The parameters of simulation

Symbol      Name                                          Value
f.sub.c     Carrier frequency of the transmitted signal   15.2 GHz
B           Bandwidth of the transmitted signal           2.5 GHz
H           Flight height of the platform                 300 m
PRF         Pulse Repetition Frequency                    2000 Hz
θ.sub.a     Beam width in azimuth                         6°
θ.sub.r     Beam width in range                           10°
θ.sub.in    Angle of incidence                            65°
v           Velocity of the platform                      40 m/s
F.sub.s     Sampling frequency                            25 MHz
ρ.sub.a     Resolution in azimuth                         9.42 cm
ρ.sub.r     Resolution in range                           6 cm
S           Width of the swath                            691.17 m
L.sub.sar   Synthetic aperture length                     123.89 m
R.sub.0     Reference range                               709.86 m

[0102] 2. Simulation and Experimentation
[0103] Case 1: Under the conditions listed in Table 1, the grid-points target range pulse compression signal s.sub.r(t, η) is obtained from the simulated and measured trajectories. The measured trajectory comes from actual data recorded by the inertial navigation unit in an experiment, and the trajectory is shown in
[0104] The grid-points target consists of 7×7 points with range and azimuth intervals of 25 m. The spatial relationship between the grid-points target and the platform trajectory is shown in
[0105] After range pulse compression of s.sub.r(t, η), s.sub.RC(t, η) is obtained. The segmented aperture imaging (SAI) algorithm and the traditional imaging method are respectively used for imaging comparison on s.sub.RC(t, η), and the results are shown in
TABLE 2  Comparison of Image Quality Metrics
(a "-" entry indicates that no value is reported for the target column in that segment)

A) Comparison of main lobe widening coefficients
Target column  Segment 1  Segment 2  Segment 3  Full-aperture  Traditional method
1              1.01       -          -          1.01           1.75
2              1.12       -          -          1.12           1.79
3              2.65       1.07       1.13       1.00           1.75
4              -          1.93       1.71       1.04           1.72
5              -          1.63       1.62       1.25           1.78
6              -          1.65       1.52       1.06           1.73
7              -          -          1.67       1.08           1.75

B) Main-to-sidelobe ratio (dB)
Target column  Segment 1  Segment 2  Segment 3  Full-aperture  Traditional method
1              12.40      -          -          12.56          11.11
2              13.41      -          -          12.05          10.52
3              13.12      12.82      12.54      11.98          11.12
4              -          12.56      13.20      12.26          11.12
5              -          13.10      13.08      13.25          11.16
6              -          12.05      13.11      12.26          10.47
7              -          2.20       13.35      12.21          12.45

C) Peak amplitude loss (dB)
Target column  Segment 1  Segment 2  Segment 3  Full-aperture  Traditional method
1              0.00       -          -          0.41           0.81
2              0.87       -          -          0.87           1.04
3              0.21       0.91       1.27       0.42           0.32
4              -          0.00       0.00       0.61           0.24
5              -          0.64       1.02       0.31           0.01
6              -          0.72       0.76       1.06           0.21
7              -          11.97      0.61       0.50           0.63

D) Image entropy
               Segment 1  Segment 2  Segment 3  Full-aperture  Traditional method
               6.98       7.00       6.96       6.92           6.95
[0106] Case 2: Under the conditions listed in Table 1, the velocity of the platform was changed to 10 m/s and the flight height of the platform to 350 m, matching the conditions of the field experiments. The platform was a multi-rotor unmanned aerial vehicle, the KWT-65, equipped with a Ku-band miniaturized frequency-modulated continuous-wave synthetic aperture radar. The platform was flown approximately 100 times, each flight covering a route of about 1 km, collecting multiple sets of raw echo data from a specific ground area. Imaging was performed on these data sets using both the traditional imaging method and the SAI method of this invention. The comparison of focusing effects is shown in
[0107]
[0108] Case 3: By processing the experimental data of Case 2, images were obtained and, concurrently, the trajectory of the platform was estimated. The comparison between the trajectory estimated using the SAI method of this invention and the trajectory recorded by the inertial navigation equipment is shown in
The Function and Effect of the Embodiment
[0109] The segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar primarily comprises: 1) on the basis of an unmanned aerial vehicle-borne synthetic aperture radar system, acquiring a target echo; 2) on the basis of the echo signal, estimating the motion state of the manoeuvring platform; 3) on the basis of the motion state of the platform, segmenting the echo signal; 4) on the basis of the motion state of the platform, performing motion compensation on each echo signal segment; 5) performing a two-dimensional Fourier transform on each compensated echo signal segment to obtain a two-dimensional spectrum, and using a series inversion method to decompose the two-dimensional spectrum to obtain a phase filter for each segment; 6) multiplying the two-dimensional spectrum of each segment by the phase filter corresponding to that segment, and then performing a two-dimensional inverse Fourier transform to obtain an image of each segment; 7) performing geometric correction on the image of each segment, and then splicing the images to obtain a full-aperture imaging result; and 8) splicing the trajectory of each segment of the platform to obtain the complete trajectory coordinates of the platform. In this embodiment, the platform motion parameters are estimated from the phase history of the echo signal and the echo is segmented accordingly. This allows precise compensation of platform motion errors, resulting in enhanced imaging focus and a high success rate of image acquisition. It improves the efficiency of data collection for multi-rotor unmanned aerial vehicle platforms, enabling effective high-resolution imaging for synthetic aperture radar systems on such platforms.
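Steps 5) and 6) of this summary reduce, per segment, to a frequency-domain focusing pass. The sketch below shows only that skeleton; the phase filter is taken as a given input, since its series-inversion derivation is specific to the method and is not reproduced here:

```python
import numpy as np

def focus_segment(s_rc, phase_filter):
    """Focus one echo segment by phase multiplication in the 2-D spectrum.

    s_rc         : (range, azimuth) compensated pulse-compressed echo segment
    phase_filter : (range, azimuth) complex unit-magnitude filter for this
                   segment (an assumed input; in the method it is derived by
                   series inversion of the two-dimensional spectrum)
    """
    spectrum = np.fft.fft2(s_rc)                      # step 5: 2-D Fourier transform
    focused = np.fft.ifft2(spectrum * phase_filter)   # step 6: filter, then invert
    return focused
```

Because focusing is a pointwise multiplication in the spectrum, no Stolt-style interpolation is needed, which underlies the speed advantage discussed below.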
Simultaneously, this method can calculate the three-dimensional coordinates of the platform trajectory during the imaging process, achieving effective platform positioning. This has applications in drone navigation, providing possibilities for the future development of intelligent, integrated detection systems.
Compared to Prior Art, the Following Advantages are Evident:
[0110] Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the relationship between platform motion and signal phase. As a result, it is suitable for synthetic aperture radar imaging systems on multi-rotor unmanned aerial vehicle platforms that lack inertial navigation systems or are equipped only with low-precision inertial navigation devices.
[0111] Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the slant effects caused by variations in the attitude angles of the unmanned aerial vehicle platform and compensates for the coupled phase between range and azimuth, thereby improving image focusing quality. Consequently, this method is suitable for synthetic aperture radar imaging systems on unmanned aerial vehicle platforms that do not employ antenna servo mechanisms.
[0112] Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention utilizes the method of phase filter multiplication instead of interpolation. This approach enhances the imaging speed for each segment, thus improving the efficiency of single-pass imaging within each segment.
[0113] Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention images each segment in parallel and then splices the segments into a complete image, which improves the imaging speed for the complete image.
[0114] Furthermore, experimental validation has demonstrated that the embodiment of the present invention, which proposes a segmented aperture imaging (SAI) method for multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, achieves high imaging resolution and fast computational speeds. Concurrently, it enables the estimation of platform trajectory, reducing the algorithm's dependence on hardware equipment, thus indicating that this invention can be effectively applied to synthetic aperture radar imaging systems on small and maneuverable platforms.
[0115] In this embodiment, a detailed derivation of the relationship between platform motion parameters and echo phase has been carried out. This allows motion compensation and imaging without reliance on inertial navigation equipment by considering the second-order expansion of the two-dimensional spectrum under squint. Through simulations, the point spread function curves of each segment, of the complete image, and of the traditional imaging algorithm were compared in this embodiment. Experimental validations contrasting the imaging results of the proposed method and traditional algorithms have been conducted, proving that this embodiment can effectively achieve high-resolution imaging for synthetic aperture radar systems on multi-rotor unmanned aerial vehicle platforms.
[0116] The aforementioned embodiments are preferred examples of the present invention and are not intended to limit the scope of protection of the invention.