METHOD AND DEVICE FOR GENERATING AN OPTIMIZED 3D POINT CLOUD OF AN ELONGATE OBJECT FROM IMAGES GENERATED BY A MULTIPATH SYNTHETIC-APERTURE RADAR

20230384421 · 2023-11-30

    Abstract

    The device (1) comprises a thresholding unit (6) for performing adaptive thresholding so as to generate a segmentation mask for images generated by a synthetic-aperture radar (2) and subjected beforehand to interferometry processing, a processing unit (7) for accumulating measurements for each of the segmentation masks so as to generate at least one accumulator and one energy profile, an alignment unit (8) for calibrating the accumulators and the energy profiles so as to obtain calibrated accumulators and calibrated energy profiles, a computing unit (9) for computing a unitary cloud for each of the segmentation masks, from the calibrated accumulators and the calibrated energy profiles, and a fusion unit (10) for fusing the unitary clouds so as to obtain said optimized 3D cloud.

    Claims

    1. A method for generating an optimized 3D point cloud illustrating an elongate object, in particular a ship, from a sequence of images of the environment of the elongate object generated by a synthetic aperture radar provided with a plurality of paths, each of the images referred to as multipath generated by the synthetic aperture radar comprising one synthetic aperture image per path, the multipath images being subjected to an interferometric processing allowing to obtain, for each multipath image, a sum path image and angular maps in azimuth and in elevation, characterised in that it comprises at least the following steps: a thresholding step (E1) consisting in carrying out an adaptive thresholding so as to generate a segmentation mask for each of the sum path images; a processing step (E2) consisting in carrying out, for each of said segmentation masks, an accumulation of measurements so as to generate, for each of said segmentation masks, at least one accumulator and one or more energy profiles; an alignment step (E3) consisting in calibrating the accumulators and the energy profiles so as to obtain calibrated accumulators and calibrated energy profiles; a computing step (E4) consisting in computing, for each of said segmentation masks, from the calibrated accumulators and the calibrated energy profiles obtained in the alignment step, a unitary cloud via a unitary merging; and a merging step (E5) consisting in merging the unitary clouds, so as to obtain said optimised 3D cloud.

    2. The method according to claim 1, characterised in that the thresholding step (E1) comprises: a sub-step (E1A) consisting in comparing the level of the intensity of each pixel to at least one minimum intensity threshold; and a sub-step (E1B) consisting in retaining only the pixels whose intensity is greater than this minimum intensity threshold in the segmentation mask which is a binary map of the same size as the sum path image in which the retained pixels are at 1 and the non-retained pixels are at 0.

    3. The method according to claim 2, characterised in that, in the thresholding step (E1), a sub-step (E1C) is also carried out, consisting in carrying out a morphological filtering.

    4. The method according to claim 1, characterised in that the processing step (E2) comprises the following sequence of successive sub-steps (E2A, E2B, E2C), which are implemented for each segmentation mask: a sub-step (E2A) consisting in implementing a principal component analysis to estimate a length axis of the elongate object, representing a principal axis; a sub-step (E2B) consisting in computing at least one accumulator sampled along the principal axis, the accumulator representing a one-dimensional grid comprising a plurality of cells, each of said cells containing the pixels of the segmentation mask that are located at the level of the cell; and a sub-step (E2C) consisting in computing at least one energy profile from the accumulator, the energy profile representing a one-dimensional vector whose values depend on the intensities of the pixels of each of the cells of the accumulator.

    5. The method according to claim 1, characterised in that the alignment step (E3) comprises the following sequence of successive sub-steps (E3A, E3B), which are implemented for each segmentation mask: a sub-step (E3A) consisting in making a correlation of the profile or profiles to estimate potential translations and optimal sampling; and a sub-step (E3B) consisting in making a completion with empty cells and zero energy components of previous profiles or of the next profile.

    6. The method according to claim 1, characterised in that the computing step (E4) consists, for each of the segmentation masks, in defining a unitary cloud whose number of points is equal to the number of cells of the calibrated accumulator, and comprises the following sequence of successive sub-steps (E4A, E4B, E4C), which are implemented for each cell of the calibrated accumulator: a sub-step (E4A) consisting in computing an assembly of components (Xi, Yi, Zi) referred to as individual of each of the pixels in the cell; a sub-step (E4B) consisting in computing an assembly of components (X, Y, Z) referred to as global for each of the cells, from the assembly of the individual components (Xi, Yi, Zi) of the cell; and a sub-step (E4C) consisting in computing a level component, based on at least the value of the energy profile at the cell.

    7. The method according to claim 6, characterised in that the computing of the level component takes into account, in addition to the value of the energy profile at the cell, the quadratic sum of the standard deviations computed on the 3D relocated pixels contained in the cell.

    8. The method according to claim 1, characterised in that the merging step (E5) comprises: a sub-step (E5A) consisting, for each unitary cloud, in carrying out the following operations: centring the global components of the unitary cloud; implementing a principal component analysis to estimate the longitudinal axis of the unitary cloud; and generating a rotation of the unitary cloud to orient it along a predefined axis; and a sub-step (E5B) consisting in computing the statistical average, point by point, of the global components (X, Y, Z) and of the level component of the assembly of the unitary clouds to obtain said optimised 3D cloud.

    9. The method according to claim 8, characterised in that the merging step (E5) comprises a sub-step (E5C) of filtering outliers.

    10. A device for generating an optimized 3D point cloud illustrating an elongate object, in particular a ship, from a sequence of images of the environment of the elongate object generated by a synthetic aperture radar provided with a plurality of paths, each of the images referred to as multipath generated by the synthetic aperture radar comprising a synthetic aperture image per path, the multipath images being subjected to an interferometric processing allowing to obtain, for each multipath image, a sum path image and angular maps in azimuth and in elevation, characterised in that it comprises at least: a thresholding unit configured to carry out an adaptive thresholding so as to generate a segmentation mask for each of the images referred to as multipath, each of the multipath images comprising a sum path image and angular maps in azimuth and in elevation; a processing unit configured to carry out, for each of said segmentation masks, an accumulation of measurements so as to generate, for each of said segmentation masks, at least one accumulator and one or more energy profiles; an alignment unit configured to calibrate the accumulators and the energy profiles so as to obtain calibrated accumulators and calibrated energy profiles; a computing unit configured to compute, for each of said segmentation masks, from the calibrated accumulators and the calibrated energy profiles, a unitary cloud (Nk) via a unitary merging; and a merging unit configured to merge the unitary clouds, so as to obtain said optimised 3D cloud.

    11. A system for recognising and identifying a target representing an elongate object, in particular a ship, said system comprising at least: a synthetic aperture radar provided with a plurality of paths and capable of generating images of the environment of the elongate target; a processing unit configured to process the images generated by the synthetic aperture radar so as to derive data referred to as detection; a database containing data referred to as target reference; and a comparison unit configured to compare the detection data with the reference data in the database so as to be able to recognise and identify an elongate target, characterised in that the processing unit comprises a device as specified in claim 10 and a unit carrying out an interferometric processing.

    12. The system according to claim 11, characterised in that it comprises a decision unit using the identification data of an elongate target transmitted by the comparison unit and additional data to make a target designation decision.

    13. A system for generating a trained metric and a reference base related to at least one type of elongate object, in particular a ship, said system comprising at least: a base of object models, and at least of elongate objects; a multipath synthetic aperture radar scene generator linked to the object model base and capable of simulating multipath SAR images; a processing unit configured to process the images generated by the scene generator to create a point cloud depicting an elongate object and provide data; a creation unit for creating a reference base, linked to the object model base and adapted to create a reference base; and a learning unit configured to carry out a learning from the data received from the processing unit and from the reference base and to provide the trained metric and the reference base, characterised in that the processing unit comprises a device as specified in claim 10 and a unit carrying out an interferometric processing beforehand.

    Description

    BRIEF DESCRIPTION OF FIGURES

    [0072] Further advantages and characteristics will become apparent from the following description of several embodiments of the invention, given as non-limiting examples, with particular reference to the attached figures. In these figures, identical references designate similar elements.

    [0073] FIG. 1 is a block diagram of a device according to a particular embodiment of the invention.

    [0074] FIG. 2 illustrates a particular application of the invention.

    [0075] FIGS. 3A, 3B and 3C show schematically different 3D clouds.

    [0076] FIGS. 4A, 4B and 4C show schematically different accumulators.

    [0077] FIG. 5 is a block diagram of a method according to a particular embodiment of the invention.

    [0078] FIGS. 6A and 6B show schematically an alignment of pixels in the image and the creation of an accumulator.

    [0079] FIGS. 7A and 7B illustrate energy profiles before and after a calibration, respectively.

    [0080] FIGS. 8A, 8B, 8C and 8D illustrate a unitary cloud computing step.

    [0081] FIGS. 9A, 9B, 9C, 9D, 9E and 9F illustrate a unitary cloud merging step to obtain an optimised 3D cloud.

    [0082] FIG. 10 is a block diagram of a system for recognising and identifying a target representing an elongate object.

    [0083] FIG. 11 is a block diagram of a system for generating a trained metric and a reference base.

    DETAILED DESCRIPTION

    [0084] The device 1 shown schematically in FIG. 1 and allowing to illustrate the invention is intended to generate a 3D (three-dimensional, i.e. in space) cloud of points illustrating an elongate (or elongated or oblong) object 3 along a longitudinal axis, i.e. an object that is longer than it is wide.

    [0085] The device 1 is designed to generate the optimised 3D cloud from a sequence of radar-generated images of the environment of the elongate object. This radar (hereinafter “SAR radar 2”) is a multipath synthetic aperture radar (SAR) (i.e. with a plurality of paths). Preferably, the SAR radar 2 has one transmission path and several reception paths (to implement a radar interferometry). By way of illustration, FIG. 2 shows a very schematic representation of the electromagnetic waves OE emitted by the transmission path of the SAR radar 2.

    [0086] In general, a SAR image generated by a SAR radar (and thus an image generated by the SAR radar 2 in particular) has the following advantages: [0087] it enhances the signal relative to the noise; [0088] it separates the different possible contributors.

    [0089] More specifically, as the integration time is increased, the resolution and the SNR are increased.

    [0090] In addition, a multipath SAR radar allows, through an interferometric processing, a relocation in 3D by means of: [0091] a distance measurement; and [0092] two angular maps in elevation and in azimuth (from the interferometric processing).

    [0093] The angular information is, to first order, insensitive to the motion of the elongate object 3, and is noisy in proportion to the thermal noise of the image. It is also generally sensitive to the presence of several contributors in the pixel (fluctuation of the effect referred to as “glint”). The device 1 will in particular allow these two disadvantages (the angular uncertainty linked to the “glint” effect and to the thermal noise) to be remedied.

    [0094] Of course, within the scope of the present invention, the device 1 can be used to process images of other types of elongate objects, whether mobile or immobile, for example military land or sea craft.

    [0095] In addition, the SAR radar 2 can be mounted on other flying machines, for example on an observation aircraft.

    [0096] In the example described below, the elongate object 3 is a ship 33 travelling on a sea M (or other body of water). Furthermore, in the particular example shown in FIG. 2, the device 1 and the radar 2 are mounted on a missile 4 which is heading towards the ship 33, in this case an enemy ship, to neutralise it.

    [0097] In the context of the present invention, each of the images generated by the SAR radar 2 (of multipath type) comprises one SAR image per path.

    [0098] In the example shown in FIG. 1, the SAR radar 2 is considered to transmit a sequence of N images, i.e. images 1 to N, transmitted via links l2-1 to l2-N respectively.

    [0099] To facilitate the understanding of the processing implemented by the units 5, 6, 7 and 9 specified below, the operations implemented on each of the N multipath images are shown separately in FIG. 1, as N modules per unit, namely 5-1 to 5-N, 6-1 to 6-N, 7-1 to 7-N and 9-1 to 9-N, although each unit actually carries out the same processing for all N images.

    [0100] The device 1 comprises the following units, as shown in FIG. 1: [0101] a thresholding unit 6 configured to carry out an adaptive thresholding so as to generate a segmentation mask for each of N sum path images received via links l5-1 to l5-N; [0102] a processing unit 7 configured to carry out, for each of the segmentation masks, received via links l6-1 to l6-N from the thresholding unit 6, an accumulation of measurements so as to generate, for each of these segmentation masks, at least one accumulator and an energy profile; [0103] an alignment unit 8 configured to calibrate the accumulators and the energy profiles (received via links l7-1 to l7-N from the processing unit 7) so as to obtain accumulators referred to as calibrated and energy profiles referred to as calibrated; [0104] a computing unit 9 configured to compute, for each of the segmentation masks, a unitary cloud via a unitary merging, from the calibrated accumulators and calibrated energy profiles generated by the alignment unit 8 and received via links l8-1 to l8-N from the alignment unit 8, as well as thresholded angular (in elevation and in azimuth) maps and distance indexes (received via links l1 to lN from the thresholding unit 6); and [0105] a merging unit 10 configured to merge the unitary clouds, received via links l9-1 to l9-N from the computing unit 9, so as to generate said optimised 3D cloud. This optimised 3D cloud can be transmitted via a link l10 to a user device or system (not shown).

    [0106] A unit 5 is also provided for subjecting the images generated by the SAR radar 2 (received via the links l2-1 to l2-N) to an interferometric processing, prior to their transmission (via links l5-1 to l5-N) to the thresholding unit 6. This interferometric processing forms, for each multipath image, a sum path image and two associated angular maps, in azimuth and in elevation.

    [0107] This unit 5 may for example be part of an assembly or module that also comprises the SAR radar 2.

    [0108] The characteristics and the processing carried out by the different units of the device 1 are specified below when describing a method PR implemented by the device 1.

    [0109] The device 1, as described above, implements the method PR shown in FIG. 5, to generate an optimised 3D point cloud illustrating an elongate object 3 (i.e. the ship 33 in the following description). The method PR allows to form the optimised 3D cloud from a sequence of SAR images of the environment of the elongate object 3 generated by the SAR radar 2 and processed by the interferometry (unit 5), these images comprising pixels relating to the elongate object 3.

    [0110] As explained below, the method PR carries out the merging of several 3D clouds referred to as unitary, in order to smooth out instabilities and reduce the 3D noise.

    [0111] By way of illustration, three different clouds N1, N2 and N3 of points P are shown in FIGS. 3A, 3B and 3C in a three-dimensional space (illustrated by an X, Y and Z axis reference frame).

    [0112] In general, the method PR involves mapping assemblies of pixels between the SAR images, for example the point (or pixel) Pi shown in FIGS. 4A, 4B and 4C, and merging these assemblies in 3D, as shown below. It is not possible to map each point Pi individually, only assemblies of pixels. In 3D space, only the merging of the assemblies is kept, each assembly being summarised in a single point. By way of illustration, FIGS. 4A, 4B and 4C show the mapping carried out by the device 1 (in particular via the generation of accumulators A1, A2 and A3, as specified below) in an image space (illustrated by a reference frame comprising a distance axis D and a Doppler frequency axis FD).

    [0113] The images generated by the SAR radar 2 are subjected to an interferometric processing, in a step prior to the implementation of the method PR, before being used in the thresholding step E1 (FIG. 5) of the method PR. The interferometric processing is carried out by the unit 5 (FIG. 1). It allows a sum path image and angular maps in azimuth and in elevation to be obtained from the multipath images.

    [0114] The multipath images (SAR) are generated by the SAR radar 2 in a given time sequence. The interferometric processing is implemented by the unit 5, depending on the reception architecture (number of paths, antenna geometry) of the SAR radar 2.

    [0115] For example, in the case of a SAR radar architecture with four reception quadrants (or paths), it can be envisaged that the signal is transmitted by one path while the four reception paths acquire the return signal. The four per-path images are then formed by means of the SAR processing. To carry out the interferometry, the “Monopulse” algorithm can be used. Originally, the “Monopulse” algorithm owed its name to its use of a single transmitted pulse and its return echo. In the case of images, the algorithm is applied to each of the distance and Doppler pixels. Different signals (sum and difference) are created from the different paths. In this case, the ratio of the difference path to the sum path (two ratios for the two axes) allows the angular deviation to be determined, which is the basis of the angular measurement information.

    [0116] After this interferometric processing, we obtain, for each (measurement) time of the time sequence: [0117] a sum path image, representing a coherent (complex-valued) sum of the SAR images from each of the reception paths of the SAR radar 2; and [0118] two angular maps, in azimuth and in elevation. Each angular map is of the same size as the sum path image, with pixel levels corresponding to the elevation angles El and to the azimuth angles Az respectively.
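    By way of a non-limiting illustration, the interferometric processing of unit 5 can be sketched as follows for a four-quadrant architecture. This is a simplified sketch, not the patented processing: the function name, the quadrant ordering and the linear monopulse slopes k_az and k_el are assumptions, and a real system would calibrate these slopes against the antenna geometry.

```python
import numpy as np

def interferometry(paths, k_az=1.0, k_el=1.0):
    """Sketch of the interferometric processing of unit 5.

    `paths` is a (4, H, W) complex array: one SAR image per reception
    quadrant, assumed ordered [top-left, top-right, bottom-left,
    bottom-right].  `k_az` / `k_el` are hypothetical monopulse slopes.
    Returns the sum path image and the azimuth/elevation angular maps.
    """
    tl, tr, bl, br = paths
    s = tl + tr + bl + br                 # coherent sum path
    d_az = (tr + br) - (tl + bl)          # left/right difference path
    d_el = (tl + tr) - (bl + br)          # up/down difference path
    eps = 1e-12                           # avoid division by zero
    # Monopulse ratio: the real part of delta/sum is taken as
    # proportional to the off-axis angle (small-angle, single-
    # contributor model), giving one angle estimate per pixel.
    az = k_az * np.real(d_az / (s + eps))
    el = k_el * np.real(d_el / (s + eps))
    return s, az, el
```

    The only property the method PR relies on is that this returns two estimated angles per pixel of the SAR image, whatever the actual multipath architecture.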

    [0119] This implementation has several advantages, in particular: [0120] the only condition imposed by the method PR on the interferometric processing is to provide two estimated angles (azimuth and elevation) per pixel of the SAR image, regardless of the architecture of the multipath SAR radar used to obtain these data; [0121] the image interferometry is robust, to first order, to distortions induced in the SAR image by the mobile objects; and [0122] the time sequence can be carried out in a short time interval (with a compact acquisition in time) for optimality of the future mapping, as specified below.

    [0123] The method PR comprises, as shown in FIG. 5, a sequence of steps E1 to E5 comprising: [0124] a thresholding step E1, implemented by the thresholding unit 6 (FIG. 1), consisting in carrying out an adaptive thresholding so as to generate a segmentation mask for each of the sum path images (per multipath image); [0125] a processing step E2, implemented by the processing unit 7, consisting in carrying out, for each of the segmentation masks generated in the thresholding step E1, an accumulation of measurements so as to generate, for each of these segmentation masks, at least one accumulator and an energy profile; [0126] an alignment step E3, implemented by the alignment unit 8, consisting in aligning (or calibrating or re-phasing) the accumulators and the energy profiles (generated in the processing step E2) so as to obtain accumulators referred to as calibrated and energy profiles referred to as calibrated; [0127] a computing step E4, implemented by the computing unit 9, consisting in computing, for each of the segmentation masks, from the calibrated accumulators and the calibrated energy profiles obtained in the alignment step E3, a unitary cloud via a first merging referred to as unitary; and [0128] a merging step E5, implemented by the merging unit 10, consisting in merging (via a terminal merge) the unitary clouds obtained in the computing step E4, so as to generate said optimised 3D cloud.

    [0129] In a preferred embodiment, the thresholding step E1 comprises, as shown in FIG. 5: [0130] a sub-step E1A consisting in comparing the level of the intensity of each pixel to at least one minimum intensity threshold, and preferably to both a minimum intensity threshold and a maximum intensity threshold; and [0131] a sub-step E1B consisting in retaining in the segmentation mask only those pixels whose intensity is greater than this minimum intensity threshold, or in the case of a comparison at both a minimum intensity threshold and a maximum intensity threshold, retaining only those pixels whose intensity is between these thresholds (i.e. less than the maximum intensity threshold and greater than the minimum intensity threshold).

    [0132] The resulting segmentation mask is a binary map of the same size as the sum path image, in which the pixels retained in sub-step E1B (following the comparison implemented in sub-step E1A) are 1 and the pixels not retained (following the comparison) are 0.

    [0133] Sub-step E1B consists in redefining the angular maps by keeping only the points present in the segmentation mask.

    [0134] In a particular embodiment, sub-step E1A consists in: [0135] computing an upper quantile on the intensity of the sum path SAR image (squared modulus of the image) to define a maximum threshold to be retained in the segmentation (for example, a 100% quantile for a thresholding at the maximum peak in the image); [0136] computing a low threshold via a chosen dynamic range (in dB), for example a 40 dB dynamic range; and [0137] carrying out a thresholding by comparing each (pixel) level with the high and low thresholds in order to obtain a binary segmentation mask of the selected points (value 0 for the pixels below the low threshold and value 1 for the pixels between the low threshold and the high threshold).

    [0138] Furthermore, in a particular embodiment, the thresholding step E1 also comprises a sub-step E1C, implemented after sub-step E1B, consisting in carrying out a morphological filtering (morphological opening operation), to eliminate the small isolated elements in the signature (corresponding for example to false alarms on thermal noise).
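    By way of a non-limiting illustration, sub-steps E1A to E1C can be sketched as follows. The function names are hypothetical, and the morphological opening is implemented here with a minimal fixed 3x3 structuring element as an assumption; the patent leaves the filter shape open.

```python
import numpy as np

def segmentation_mask(sum_image, quantile=1.0, dynamic_db=40.0):
    """Sketch of sub-steps E1A/E1B: the intensity is the squared
    modulus of the sum path image, the high threshold is an upper
    quantile of it (quantile=1.0 keeps the maximum peak) and the low
    threshold sits `dynamic_db` decibels below the high one."""
    intensity = np.abs(sum_image) ** 2
    high = np.quantile(intensity, quantile)
    low = high * 10.0 ** (-dynamic_db / 10.0)
    return (intensity >= low) & (intensity <= high)

def morphological_opening(mask):
    """Minimal 3x3 binary opening (sub-step E1C): erosion then
    dilation, which removes small isolated elements such as
    thermal-noise false alarms."""
    h, w = mask.shape
    padded = np.pad(mask, 1)
    neigh = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    eroded = neigh.all(axis=0)            # erosion: all 9 neighbours set
    padded = np.pad(eroded, 1)
    neigh = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return neigh.any(axis=0)              # dilation: any neighbour set
```

    A single isolated pixel is removed by the opening, while a compact 3x3 patch of signature survives it, which is exactly the behaviour sought in sub-step E1C.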

    [0139] This thresholding step E1 has the following advantages in particular: [0140] the high end of the signature's dynamic range comprises the most stable and most numerous points in the radiometry of the object; and [0141] this high end (the points with the highest SNR) is the least affected by thermal noise.

    [0142] The thresholding step E1 thus allows only the strongest contributors of each SAR signature to be retained, i.e. those presenting the best signal-to-noise ratio (SNR). The thresholding step E1 provides a segmentation mask for each of the N instants of the image sequence.

    [0143] Furthermore, the processing step E2, which follows the thresholding step E1, aims to cut the signature of the object along a grid on its length axis (or longitudinal axis), known as the principal axis AP. The cross-sectional dimension is lost in favour of an accumulation of measurements allowing the 3D noise to be reduced downstream, as detailed below.

    [0144] FIG. 6A illustrates in the image space (distance D and Doppler frequency FD) a segmentation mask (comprising the pixels P) and its principal axis AP.

    [0145] The processing step E2 generates, for each segmentation mask: [0146] an accumulator AC (or accumulation dictionary or pixel dictionary) representing a one-dimensional grid comprising a plurality of cells (longitudinal and transverse size parameter) such as the cells C1, C2, C3, C4 and C5 of the accumulator AC of FIG. 6B. Each of the cells C1 to C5 contains the pixels P of the segmentation mask which are located at the level of the cell as illustrated for the pixels Pa, Pb and Pc of the cell C1 in FIG. 6B; and [0147] an energy profile representing a one-dimensional vector whose values L1 to L5 depend on the intensities of the pixels of each of the cells C1 to C5 of the accumulator AC, as specified below.

    [0148] To this end, in a preferred embodiment, the processing step E2 comprises the following sequence of successive sub-steps E2A to E2C, which are implemented for each segmentation mask: [0149] the sub-step E2A consisting in implementing a principal component analysis (hereafter referred to as “PCA analysis”) to estimate a length axis of the object, representing the principal axis AP (FIG. 6A). This PCA analysis is carried out on the image coordinates of the points selected in the segmentation mask. A weighting of the PCA analysis can be considered, in particular by the levels of the points, for example via an average of the moduli or squared moduli (choice of amplitude or choice of intensity) of the reflectivities of the segmented pixels; [0150] the sub-step E2B consisting in computing one or more accumulators AC (or accumulation dictionaries or pixel dictionaries) depending on the number of samplings considered. The accumulator or the accumulators AC are sampled along the principal axis AP (FIG. 6B), the accumulator AC thus representing a one-dimensional grid comprising a plurality of cells C1 to C5. Each of the cells C1 to C5 contains the pixels P of the segmentation mask which are located at the level of the cell. The closer together the SAR images of the time sequence are, the more similar they become, and the more slowly the presentation angle of the elongate object changes. This allows directional errors in the definition of the accumulators to be avoided, by checking that the orientation of the principal axis does not fluctuate abruptly by an angle close to 180°; and [0151] the sub-step E2C consisting in determining one or more energy profiles from the accumulator or the accumulators AC, the energy profile thus representing a one-dimensional vector whose values depend on the pixel intensities of each of the cells C1 to C5 of the accumulator AC.

    [0152] To compute the accumulator or the accumulators (in sub-step E2B), a common assembly of cell size values (in pixels) is set. The minimum and maximum coordinates of the points are computed on the principal axis AP. The cells are then defined by a range (maximum point minus minimum point) and the cell size. In each cell, the pixels of the mask are referenced in the SAR signature. This means that there can be several accumulators with different cell sizes.
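    By way of a non-limiting illustration, sub-steps E2A and E2B can be sketched as follows. This is a sketch under assumptions: the function name is hypothetical, the PCA is unweighted (the weighting by pixel levels mentioned above is omitted), and a single cell size is used.

```python
import numpy as np

def build_accumulator(mask, cell_size=2.0):
    """Sketch of sub-steps E2A/E2B: estimate the principal axis AP of
    the segmented pixels by PCA, then bin the pixels into cells
    sampled along that axis.  Returns the cells (arrays of pixel
    coordinates) and the 1D positions of the cell edges."""
    pts = np.argwhere(mask).astype(float)     # (n, 2) image coordinates
    centred = pts - pts.mean(axis=0)
    # PCA: the eigenvector of the covariance matrix with the largest
    # eigenvalue is the length axis AP of the elongate object.
    cov = centred.T @ centred / len(pts)
    vals, vecs = np.linalg.eigh(cov)
    axis = vecs[:, np.argmax(vals)]
    # Project each pixel on AP and bin by the common cell size over
    # the range (maximum point minus minimum point).
    proj = centred @ axis
    edges = np.arange(proj.min(), proj.max() + cell_size, cell_size)
    idx = np.clip(np.digitize(proj, edges) - 1, 0, len(edges) - 2)
    cells = [pts[idx == c] for c in range(len(edges) - 1)]
    return cells, edges
```

    Running this over several cell sizes would give the several accumulators per segmentation mask mentioned above.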

    [0153] In addition, different computing modes are possible to determine (in sub-step E2C) the energy profile of a cell from the intensities of the pixels of the cell.

    [0154] In particular, in a first embodiment, the energy value of the energy profile, assigned to a cell, is equal to the average of the intensities (or reflectivities) of the pixels of the cell considered. Furthermore, in a second embodiment, the energy value of the energy profile, assigned to a cell, is equal to the sum of the intensities of the pixels in that cell. Other computing modes are also possible.
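    The two computing modes above can be sketched as follows; the function name and the zero-energy convention for empty cells are assumptions made for the illustration.

```python
import numpy as np

def energy_profile(cells, intensity, mode="mean"):
    """Sketch of sub-step E2C: one energy value per accumulator cell,
    either the average (first embodiment) or the sum (second
    embodiment) of the intensities of the pixels of the cell."""
    values = []
    for cell in cells:
        if len(cell) == 0:
            values.append(0.0)            # empty cell: zero energy
            continue
        levels = intensity[cell[:, 0].astype(int), cell[:, 1].astype(int)]
        values.append(levels.mean() if mode == "mean" else levels.sum())
    return np.array(values)
```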

    [0155] The processing step E2 thus has the following advantages in particular: [0156] a maximisation of the longitudinal information by sacrificing the transverse axis, which contains little information for elongate objects (in particular ships, which are often symmetrical in the transverse axis). In particular, this will allow a first merge to be implemented in computing step E4, as specified below; [0157] the energy profile (determined in sub-step E2C) will allow the accumulators from each segmentation mask to be calibrated. This will allow for a mapping of the pixels of each measurement in the sequence (implemented in the alignment step E3) and finally for temporal merging of the 3D estimates in merging step E5.

    [0158] The alignment step E3, which follows the processing step E2, carries out the mapping of the N measurements of the time sequence. It aims to calibrate the energy profiles for all the measurements in the time sequence, and is fundamental for mapping the accumulators and the energy profiles of the instants of the time sequence.

    [0159] In the example shown in FIG. 7A, four energy profiles F1, F2, F3 and F4 (in this example N=4) are shown. These energy profiles F1 to F4 have been calibrated in the representation in FIG. 7B.

    [0160] This alignment step E3 provides for a mapping in the image space to obtain a normalisation of the accumulators and energy profiles.

    [0161] The alignment step E3 provides as many accumulators and energy profiles as there are segmentation masks (and thus measurement points in the time sequence). All the accumulators and the energy profiles provided have the same number of cells and all the cells of a given index correspond to each other (e.g. the cell C3 of an accumulator AC2 (time point 2 of the sequence) describes as closely as possible the pixels of the cell C3 of the accumulator AC1).

    [0162] Assuming that the longitudinal axis is potentially stretched (or compressed) between the segmentation masks, several samplings and therefore several accumulators and energy profiles per segmentation mask are preferably taken into account.

    [0163] In a preferred embodiment, the alignment step E3 comprises the following successive sub-steps E3A and E3B, which are implemented for each segmentation mask: [0164] the sub-step E3A consisting in carrying out a correlation of the profile or the profiles to estimate potential translations and optimal samplings; and [0165] the sub-step E3B consisting in carrying out a completion with empty cells and zero energy components of the previous profiles or of the next selected profile, according to the sign of the estimated translation.

    [0166] In sub-step E3A, the profile or the profiles (one or more profiles depending on the number of samplings) are correlated with each other in order to estimate the potential translations and the optimal samplings (if several samplings) between each of them.

    [0167] An example of processing with a fixed sampling value is as follows: cross-correlation of profile n with profile n+1. The translation index is estimated by looking for the location of the peak in the correlation.

    [0168] In addition, an example of processing with several sampling values is as follows: cross-correlation of profile n with the M variable-sampling profiles n+1. Among the M correlations, the selected profile Mj (and thus the sampling) is the one with the strongest correlation peak among the M correlation peaks. The translation index is again estimated by the location of the peak in the correlation Mj.

    [0169] The alignment step E3 has, in particular, the following characteristics and advantages: [0170] the more closely spaced in time the acquisition sequence, the more similar the SAR images and the smaller the differences between the profiles, which ensures a good mapping quality. A time-compact multipath SAR image sequence acquisition is therefore favourable for the mapping. A multipath SAR image sequence with overlapping image integrations can be used to enhance the smooth transition of the profiles between the points in the sequence. In addition, as previously stated for the processing E2B, this prevents accidental reversals of direction between the accumulators; [0171] the processing carried out in the alignment step E3 is fundamental to map the accumulators and the energy profiles of the instants of the sequence; [0172] the choice of a variable sampling allows to compensate for possible compression/dilation of the SAR signatures (segmentation masks) during the sequence. The number of samplings to be considered is a parameter.
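The correlation-based mapping of sub-steps E3A and E3B can be sketched as follows in Python (not part of the patent; the function name, the use of NumPy and the choice of a single fixed sampling per profile are assumptions for illustration):

```python
import numpy as np

def align_profiles(profiles):
    """Illustrative sketch of sub-steps E3A/E3B with a fixed sampling:
    each profile is aligned onto the first profile of the sequence."""
    ref = profiles[0]
    aligned = [ref.copy()]
    for prof in profiles[1:]:
        # E3A: estimate the translation as the location of the correlation peak
        corr = np.correlate(prof, ref, mode="full")
        shift = np.argmax(corr) - (len(ref) - 1)
        # E3B: complete with zero-energy cells on the side given by the
        # sign of the estimated translation
        if shift > 0:
            prof = np.concatenate([prof[shift:], np.zeros(shift)])
        elif shift < 0:
            prof = np.concatenate([np.zeros(-shift), prof[:shift]])
        aligned.append(prof)
    return aligned
```

With several candidate samplings (M resampled versions of profile n+1), the same peak search would simply be repeated and the sampling giving the strongest correlation peak retained, as described in paragraph [0168].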

    [0173] Furthermore, the computing step E4 which follows the alignment step E3 consists, for each of the segmentation masks, in defining a 3D unitary cloud whose number of points is equal to the number of cells of the calibrated accumulator. This processing is the first phase of the merging in 3D space, the second phase being implemented in the merging step E5.

    [0174] The output consists of N unitary clouds where N is the number of segmentation masks or points in the time sequence (number of multipath images).

    [0175] The computing step E4 provides, for each segmentation mask, the definition of a 3D cloud whose number of points is equal to the number of cells of the calibrated accumulator.

    [0176] Furthermore, the computing step E4 comprises the following sequence of successive sub-steps E4A to E4C, which are implemented for each cell (or cloud point) of the calibrated accumulator: [0177] the sub-step E4A consisting in computing the components Xi, Yi and Zi referred to as individual of each of the pixels in the cell; [0178] the sub-step E4B consisting in computing the components X, Y and Z referred to as final of each of the cells, from the assembly of the individual components Xi, Yi and Zi of the cell; and [0179] the sub-step E4C consisting in computing a level component L, based at least on the value of the energy profile F1 to F4 (FIG. 7B) at the cell.

    [0180] In a preferred embodiment, to compute the level component, the sub-step E4C takes into account, in addition to the value of the energy profile at the cell, the quadratic sum of the standard deviations computed over the 3D relocated pixels contained in the cell.

    [0181] The sub-step E4A consists in computing the individual components Xi, Yi and Zi of each of the pixels Pi in the cell Cj of the calibrated accumulator, using the distance indexes from the sum path SAR image and the angles estimated in the angular maps (geometric relocation). If the cell of the accumulator is empty, the components Xi, Yi and Zi are assigned a value referred to as “undefined”.

    [0182] The sub-step E4B consists in defining the final components X, Y, Z of the cloud by computing respective statistical averages over the components Xi, Yi and Zi of the points determined in sub-step E4A.

    [0183] The sub-step E4C consists in computing the level component L, which is preferably a function of the energy of the cell (value of the energy profile at the cell Cj) and of the quadratic sum of the standard deviations σ_x, σ_y and σ_z computed on the 3D relocated pixels contained in the cell: L = f(E, √(σ_x² + σ_y² + σ_z²)). For example, the level component L can be computed using the following expression: L = E·√(σ_x² + σ_y² + σ_z²).
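A minimal Python sketch of the sub-steps E4A to E4C for one calibrated accumulator (hypothetical function and data layout, not from the patent; the geometric relocation of sub-step E4A is assumed to have already produced the pixel coordinates, and the choice f(E, s) = E·s follows the example given for the level component):

```python
import numpy as np

def unitary_cloud(cells, profile):
    """Build the unitary 3D cloud of one calibrated accumulator.
    `cells` is a list of (n_pixels, 3) arrays of relocated pixel
    coordinates (Xi, Yi, Zi); `profile` gives the energy E of each cell."""
    points = []
    for pixels, energy in zip(cells, profile):
        if len(pixels) == 0:
            # empty cell: components marked "undefined" (NaN here)
            points.append([np.nan, np.nan, np.nan, np.nan])
            continue
        # E4B: final components X, Y, Z = statistical means of Xi, Yi, Zi
        x, y, z = pixels.mean(axis=0)
        # E4C: level L = f(E, quadratic sum of the standard deviations)
        sigma = np.sqrt(np.sum(pixels.std(axis=0) ** 2))
        points.append([x, y, z, energy * sigma])
    return np.array(points)
```

The output has one point per accumulator cell, as stated in paragraph [0175], with undefined components propagated so that the later merging can ignore them.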

    [0184] To illustrate the computing step E4, a particular case of a SAR radar 2 moving at a speed V along an axis Y and taking images of an elongate object 3 corresponding to a ship is shown in FIGS. 8A and 8B. FIGS. 8A and 8B show schematic views, respectively, in a horizontal plane XY and in a vertical plane XZ. FIGS. 8C and 8D correspond to FIGS. 8A and 8B respectively and show a unitary 3D cloud, referenced Nk, of points Pk, which is projected onto the elongate object 3 in the XY plane and the XZ plane respectively.

    [0185] In addition, a scale 11 relative to a level component L is also shown in these FIGS. 8C and 8D.

    [0186] The grey level of the representation of this scale 11 varies with the value of the level component L. As the value of the level component L increases, so does the corresponding grey level in the representation in the figures. The grey level of the points Pk, as represented in FIGS. 8C and 8D, thus corresponds to the value of their level component.

    [0187] This computing step E4, which carries out merging referred to as unitary, allows: [0188] smoothing the radiometric content (energy profile averaged by construction over the cells of the accumulator) over a neighbourhood of the signature, thereby generating a spatial radiometric smoothing; and [0189] smoothing the geometric content (average of the 3D positions in each accumulator cell) over a neighbourhood of the signature, which generates a reduction of the 3D spatial noise induced by the angular noise of the interferometric measurement (itself induced by the thermal noise and the glint effect).

    [0190] In addition, the choice of the level component L (through a suitable function f) allows radiometric information to be included, supplemented by information related to the “glint” effect, which is an intrinsic property of the object in the measurement (as opposed to the thermal noise, which is an extrinsic property of the object).

    [0191] Furthermore, the purpose of the merging step E5, which follows the computing step E4, is to carry out a terminal merging of the unitary clouds to create the final cloud (or optimised 3D cloud). It involves centring and rotating the unitary clouds and computing an average of the components X, Y, Z and L point by point, as described below.

    [0192] The final cloud (or optimised 3D cloud) is communicated to a user device or system at the end of the merging step E5.

    [0193] In a preferred embodiment, the merging step E5 comprises in particular successive sub-steps E5A and E5B.

    [0194] The sub-step E5A, which is implemented for each unitary cloud, comprises the successive sub-steps E5Aa, E5Ab and E5Ac: [0195] the sub-step E5Aa consisting in centring the components X, Y and Z of the unitary cloud (each component having its statistical mean subtracted), for example X′ = X − m(X); [0196] the sub-step E5Ab consisting in implementing a principal component analysis to estimate the longitudinal axis of the unitary cloud (without taking into account the undefined components of the cloud); [0197] the sub-step E5Ac consisting in generating a rotation of the 3D unitary cloud to orient it along a predefined axis (for example, the orientation of the object with respect to the radar in the middle of the sequence). Any 180° ambiguity is easily resolved by knowing the first cell and the last cell of the accumulator. The orientation of the 3D cloud is thus known and controlled.
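The centring and orientation of sub-steps E5Aa to E5Ac can be sketched as follows (illustrative Python, not from the patent; the PCA via an eigendecomposition of the covariance matrix and the representation of undefined values by NaN are assumptions, and the 180° ambiguity resolution of paragraph [0197] is not reproduced here):

```python
import numpy as np

def centre_and_orient(cloud):
    """Centre a unitary cloud and rotate it so that its principal
    (longitudinal) axis, estimated by PCA, lies along X.
    `cloud` is an (n, 3) array; NaN rows (undefined points) are
    ignored in the statistics, as stated for sub-step E5Ab."""
    valid = ~np.isnan(cloud).any(axis=1)
    # E5Aa: subtract the statistical mean of each component
    centred = cloud - np.nanmean(cloud, axis=0)
    # E5Ab: principal component analysis on the valid points
    cov = np.cov(centred[valid].T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs[:, ::-1]          # columns sorted by decreasing variance
    if np.linalg.det(axes) < 0:      # keep a proper rotation
        axes[:, -1] *= -1
    # E5Ac: rotate the cloud so the longitudinal axis becomes the X axis
    return centred @ axes
```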

    [0198] In addition, the sub-step E5B consists in carrying out the terminal merging which is implemented by computing the statistical average, point by point, of the components X, Y and Z and of the level component L of the assembly of the unitary clouds to obtain said optimised 3D cloud.

    [0199] The sub-step E5B computes the point-by-point statistical average of the respective components X, Y, Z and L of the clouds, for example for the value of index i of the component X:

    [00001] X_merge[i] = (1/N_clouds) Σ_{n=1}^{N_clouds} X_n[i]

    [0200] This merging step E5 allows: [0201] to smooth the radiometric content over a temporal neighbourhood of the signature. Thus, the radiometric content is less fluctuating depending on the aspect angle (as the object is acquired at several aspect angles during the sequence); and [0202] to smooth the geometric content over a temporal neighbourhood of the signature. In this way, a further reduction of the 3D spatial noise induced by the angular noise of the interferometric measurement (itself induced by the thermal noise and the glint effect) is achieved.

    [0203] In a particular embodiment, the merging step E5 comprises a sub-step E5C of filtering outliers, i.e. points for which the smoothing is not sufficient, thus allowing the method PR to be optimised.

    [0204] A point is considered an outlier when the frequency of occurrence of undefined values among the components contributing to the merging exceeds a fixed threshold. For example, for a threshold of 0.7, if the frequency of occurrence of undefined values in the assembly {[X_n[i], Y_n[i], Z_n[i], L_n[i]], n ∈ ⟦1, N_clouds⟧} exceeds 0.7, the coordinates X(i), Y(i), Z(i) and L(i) of the terminal cloud are deleted (in other words, the point i of the cloud is deleted).

    [0205] The filtering of the outliers thus allows the elimination of the points for which the smoothing is not sufficient (because they are obtained with a low rate of occurrence).
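Sub-steps E5B and E5C together can be sketched as follows (illustrative Python, not from the patent; representing “undefined” values by NaN and stacking the aligned unitary clouds into a single array are assumptions of this sketch):

```python
import numpy as np

def merge_clouds(clouds, threshold=0.7):
    """E5B: point-by-point average of the components X, Y, Z, L over the
    aligned unitary clouds; E5C: removal of the points whose rate of
    undefined (NaN) values exceeds the threshold (0.7 in the text's example).
    `clouds` is an (N_clouds, n_points, 4) array."""
    clouds = np.asarray(clouds, dtype=float)
    # E5B: statistical average point by point, ignoring undefined values
    merged = np.nanmean(clouds, axis=0)
    # E5C: frequency of undefined values per point, across the clouds
    nan_rate = np.isnan(clouds).any(axis=2).mean(axis=0)
    return merged[nan_rate <= threshold]
```

With the example threshold of 0.7, a point of the terminal cloud is kept as long as it is defined in at least about 30% of the unitary clouds.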

    [0206] To illustrate the merging step E5, a centred unitary cloud Nk (obtained in sub-step E5A) is shown in FIGS. 9A and 9B. FIGS. 9A and 9B show schematic views, respectively, in a horizontal plane X1Y1 and in a vertical plane X1Z1 positioned with respect to the elongate object 3. FIGS. 9C and 9D correspond to FIGS. 9A and 9B, respectively, and show the optimised 3D cloud Nopt (obtained after the terminal merging carried out in the sub-step E5B). Furthermore, FIGS. 9E and 9F correspond to FIGS. 9C and 9D respectively and show the optimised 3D cloud Nopt after the filtering of the outliers (represented by white circles in FIGS. 9C and 9D). In addition, a scale 11 related to the level component L is also shown in these FIGS. 9A to 9F.

    [0207] The method PR and/or the device 1, as described above, allow the computing of an optimised 3D cloud of an elongate object 3 from a sequence of SAR images generated by a multipath SAR radar 2. In particular, they have the following characteristics and advantages: [0208] they carry out a radiometric smoothing: the smoothing and the radiometric stability of the 3D cloud, as opposed to the SAR images alone (which are sensitive to the aspect angle), are achieved through: [0209] the merging on a neighbourhood of the signature (average over the cells of the accumulators), taking into account the elongate-object hypothesis, which allows the transverse axis of the signature to be sacrificed; and [0210] the merging on a temporal neighbourhood of the signature (average over several unitary clouds created at different moments of the acquisition sequence); [0211] they are robust to motion, through the use of the interferometry, as opposed to the SAR images alone (which are blurred and distorted); [0212] they carry out a geometric smoothing: a reduction of the 3D spatial noise caused by the thermal noise and the glint effect is achieved by: [0213] the use of multipath SAR images, which have a better SNR than the native mode of an antenna (distance profiles); [0214] the segmentation of the upper portion of the signature (segmentation mask) to retain only the points with the best SNR; [0215] the merging on a neighbourhood of the signature (average over the cells of the accumulators); and [0216] the merging on a temporal neighbourhood of the signature (average over several unitary clouds created at different times in a temporally contiguous acquisition sequence); and [0217] they can be carried out in a wide variety of different implementations. 
In particular, they can be implemented: [0218] with any multipath SAR radar, provided that two azimuth and elevation angles are estimated; [0219] on any elongate object, and thus in particular on ships; and [0220] in any type of situation concerning the aiming of the radio axis of the radar, the speed and/or the height of the radar in relation to the object.

    [0221] The device 1 and/or the method PR, as described above, can be implemented in many applications, in particular (although not exclusively) in the military field.

    [0222] Two examples of different applications are presented below.

    [0223] In a first application, said device 1 is part of a system 12, as shown in FIG. 10, for recognising and identifying an elongate target, in particular a ship. Preferably, this system 12 is mounted on board a flying machine, for example a reconnaissance aircraft or a missile such as the missile 4 in FIG. 2, and its processing results are used on board the flying machine.

    [0224] In this first application, the optimised 3D cloud is to be used as a descriptor in a recognition and identification chain based on comparison with elongate reference objects (in particular ships).

    [0225] As shown in FIG. 10, said system 12 comprises: [0226] a SAR radar 2 provided with a plurality of paths and capable of generating images I of the environment of an elongate target; [0227] a database 13 containing data referred to as reference of potential targets; [0228] a recognition unit 14 comprising a processing unit 31 comprising said device 1 and incorporating a unit (not shown) such as the unit 5 for carrying out an interferometric processing. The recognition unit 14 is configured to process the images I generated by the SAR radar 2 and received via a link 15, so as to deduce data referred to as detection representing an optimised 3D cloud as described above; [0229] a comparison unit 16 configured to compare this detection data (received via a link 17) with the reference data received from the database 13 via a link 18 so that an elongate target can be recognised and identified.

    [0230] The processing unit 31 therefore comprises a device 1, as described above, for generating an optimised 3D point cloud depicting the elongate target. This elongate target may correspond to an object to be designated, which is transmitted to a user device (preferably on-board) via a link 19.

    [0231] Furthermore, in a preferred embodiment, as shown in FIG. 10, the system 12 also comprises a decision unit 20 using the data transmitted by the comparison unit 16 and additional data received via links 21, in particular other data relating to a designation, to make a final decision relating to the designation of a target. The decision unit 20 can transmit the result of its processing via a link 22.

    [0232] Furthermore, in a second application, said device 1 is part of a system 23, as shown in FIG. 11, for generating a database of targets representing a same type of elongate object, in particular a ship.

    [0233] In this second application, which is related in particular to a mission preparation, a descriptor computation is used for the creation of the mission preparation entries.

    [0234] The system 23 comprises, as shown in FIG. 11: [0235] a database 24 comprising CAD models of objects; [0236] a multipath synthetic aperture radar scene generator 25, linked to the database 24 and capable of simulating multipath SAR images (images I); [0237] a processing unit 30 incorporating a unit (not shown) such as the unit 5 for carrying out an interferometric processing and comprising a device 1 as described above. The processing unit 30 is configured to process the images I generated by the generator 25 to generate an optimised 3D point cloud illustrating an elongate object and to provide corresponding data; [0238] a unit 26 for creating a reference base, linked to the database 24 and adapted to create a reference base 27 (e.g. CAD silhouettes); and [0239] a learning unit 28 configured to carry out learning from the data (e.g. the optimised cloud, or data derived from the latter, such as an image recomputed from this cloud) received from the device 1 as well as from the reference base 27, and to provide a trained metric and the reference base. The learning unit 28 carries out the learning of a comparison metric (by Artificial Intelligence) using the descriptors on synthetic data.

    [0240] This trained metric (learning) and the reference base can be provided to a unit 29 as mission preparation inputs.