METHOD OF CALCULATING POWER LEVEL REFLECTANCE OF OBJECT ON GROUND USING SAR IMAGE
20230184929 · 2023-06-15
Inventors
- Dong Hyun Kim (Daejeon, KR)
- Do Chul YANG (Daejeon, KR)
- Ho Ryung JEONG (Sejong-si, KR)
- Doo Chun SEO (Daejeon, KR)
CPC classification
G01S13/90
PHYSICS
International classification
Abstract
A method of calculating power level reflectance σ.sub.0 of an object on the ground using synthetic aperture radar (SAR) image includes receiving the SAR image composed of pixels each having a complex value (I.sub.DN+jQ.sub.DN), local incidence angle data including local incidence angle values respectively corresponding to the pixels of the SAR image and a reflection coefficient K.sub.2 of the SAR image, calculating power level reflectance β.sub.0 on a slant range domain of a first object corresponding to a first pixel based on the complex value (I.sub.DN+jQ.sub.DN) of the first pixel in the SAR image and the reflection coefficient K.sub.2, and calculating, using an equation that σ.sub.0=β.sub.0.Math.(sin θ.sub.i).sup.2, power level reflectance σ.sub.0 of the first object on the ground based on the power level reflectance β.sub.0 of the first object on the slant range domain and the local incidence angle value θ.sub.i corresponding to the first pixel.
Claims
1. A method of calculating, by a computing device, power level reflectance of an object on the ground using a synthetic aperture radar (SAR) image, the method comprising: receiving the SAR image composed of pixels arranged in two dimensions and each having a complex value (I.sub.DN+jQ.sub.DN), local incidence angle data including local incidence angle values respectively corresponding to the pixels of the SAR image, and a reflection coefficient K.sub.2 of the SAR image; based on the complex value (I.sub.DN+jQ.sub.DN) of a first pixel in the SAR image and the reflection coefficient K.sub.2, calculating power level reflectance β.sub.0 on a slant range domain of a first object corresponding to the first pixel; and based on the power level reflectance β.sub.0 on the slant range domain of the first object and a local incidence angle value θ.sub.i corresponding to the first pixel, calculating power level reflectance σ.sub.0 of the first object on the ground using an equation that σ.sub.0=β.sub.0.Math.(sin θ.sub.i).sup.2.
2. The method of claim 1, wherein the power level reflectance β.sub.0 on the slant range domain of the first object is calculated using an equation that β.sub.0=K.sub.2.Math.(I.sub.DN.sup.2+Q.sub.DN.sup.2).
3. The method of claim 1, further comprising, based on the power level reflectance β.sub.0 on the slant range domain of the first object and the local incidence angle value θ.sub.i corresponding to the first pixel, calculating power level reflectance γ.sub.0 of the first object on a vertical domain in a beam radiation direction using an equation that γ.sub.0=β.sub.0.Math.(tan θ.sub.i).sup.2.
4. A method of generating, by a computing device, a synthetic aperture radar (SAR) image and a reflection coefficient K.sub.2 thereof, the method comprising: determining a rescaling factor (RF) of the SAR image generated by capturing an area to be photographed with an SAR device; determining a calibration constant (Calco) of an SAR processing system which generates the SAR image; determining an operation mode factor β.sub.3 for a point target of the SAR image; determining an operation mode factor β.sub.4 for a distributed target of the SAR image; determining an SAR image processing coefficient K.sub.0 of the SAR image; and based on the rescaling factor (RF), the operation mode factor β.sub.3 for the point target, the operation mode factor β.sub.4 for the distributed target, the calibration constant (Calco), and the SAR image processing coefficient K.sub.0, calculating a reflection coefficient K.sub.2 using an equation that K.sub.2=RF.sup.2.Math.β.sub.3.sup.2.Math.Calco/(β.sub.4.sup.2.Math.K.sub.0.sup.2).
5. The method of claim 4, wherein the determining of the rescaling factor (RF) of the SAR image comprises: determining a reference slant range R.sub.slant_ref from the SAR device to the area to be photographed; determining a quantization coefficient (Qs) used to determine a digital number DN of each pixel of the SAR image; and calculating the rescaling factor (RF) using an equation that RF=4πR.sub.slant_ref.sup.2.Math.Qs.
6. The method of claim 4, wherein the determining of the calibration constant (Calco) of the SAR processing system comprises generating SAR raw data by observing a point target of which a radar cross section RCS.sub.point_target, a location, and a position are known, by using the SAR device operating in a strip map mode; generating a first raw SAR image by compensating for an antenna pattern according to a positional relationship between the position of the point target and the SAR device using RCS profile data according to the antenna pattern for the point target, without applying windowing to the SAR raw data; generating a first SAR image by removing a clutter level from the first raw SAR image; and calculating the calibration constant (Calco) using an equation that DN.sub.1.Math.RF.Math.(Calco).sup.1/2=(RCS.sub.point_target).sup.1/2, based on the radar cross section of the point target RCS.sub.point_target, a digital number DN.sub.1 of pixels corresponding to the location of the point target of the first SAR image and the rescaling factor (RF).
7. The method of claim 6, wherein the generating of the first SAR image by removing a clutter level from the first raw SAR image comprises: calculating clutter complex values corresponding to the clutter level based on complex values of an area independent of the point target in the first raw SAR image; generating a first SAR complex image by subtracting the clutter complex values from complex values of all pixels of the first raw SAR image; and generating the first SAR image by calculating the magnitude of each complex value of all pixels of the first SAR complex image.
8. The method of claim 6, wherein the determining of the operation mode factor β.sub.3 for a point target of the SAR image comprises: generating a second SAR image by observing the point target using the SAR device operating in a same operation mode as the SAR image, and performing SAR image processing; and calculating, using DN.sub.1=DN.sub.2.Math.β.sub.3, the operation mode factor β.sub.3 for the point target based on digital number DN.sub.1 of pixels corresponding to the location of the point target of the first SAR image and the digital number DN.sub.2 of pixels corresponding to the location of the point target of the second SAR image.
9. The method of claim 6, wherein the determining of the operation mode factor β.sub.3 for a point target of the SAR image comprises determining a first rescaling factor RF.sub.1 of the first SAR image; generating a second SAR image by observing the point target using the SAR device operating in a same operation mode as the SAR image, and performing SAR image processing; determining a second rescaling factor RF.sub.2 of the second SAR image; and calculating, using an equation that DN.sub.1.Math.RF.sub.1=DN.sub.2.Math.RF.sub.2.Math.β.sub.3, the operation mode factor β.sub.3 for the point target based on the digital number DN.sub.1 of pixels corresponding to the location of the point target of the first SAR image, the first rescaling factor RF.sub.1, a digital number DN.sub.2 of pixels corresponding to the location of the point target of the second SAR image, and the second rescaling factor RF.sub.2.
10. The method of claim 6, wherein the determining of an operation mode factor β.sub.4 for a distributed target of the SAR image comprises: generating a third SAR image by observing a homogeneous area of which location is known using the SAR device operating in the strip map mode, and performing image processing; generating a fourth SAR image by observing the homogeneous area at a same observation location using the SAR device operating in a same operation mode as the SAR image, and performing SAR image processing; determining a resolution ratio ρ.sub.slrrfocd4 in the slant range direction of the fourth SAR image to the third SAR image; determining a resolution ratio ρ.sub.slrafocd4 in the azimuth direction of the fourth SAR image to the third SAR image; and calculating, using an equation that DN.sub.3=DN.sub.4.Math.β.sub.3/(ρ.sub.slrrfocd4.Math.ρ.sub.slrafocd4.Math.β.sub.4), the operation mode factor β.sub.4 for the distributed target of the SAR image based on a digital number DN.sub.3 of pixels corresponding to the location of the homogeneous area of the third SAR image, a digital number DN.sub.4 of pixels corresponding to the location of the homogeneous area of the fourth SAR image, and the operation mode factor β.sub.3 for the point target.
11. The method of claim 6, wherein the determining of the SAR image processing coefficient K.sub.0 of the SAR image comprises: determining a resolution ratio ρ.sub.slrrfocd in a slant range direction of the SAR image to the first SAR image; determining a resolution ratio ρ.sub.slrafocd in an azimuth direction of the SAR image to the first SAR image; determining a peak reduction rate αβ.sub.1 depending on whether windowing is applied during the SAR image processing; determining an amplification factor f.sub.broadf_ra_az according to windowing application during SAR image processing; and calculating, using an equation that K.sub.0=αβ.sub.1.Math.ρ.sub.slrrfocd.Math.ρ.sub.slrafocd.Math.f.sub.broadf_ra_az, the SAR image processing coefficient K.sub.0 based on the peak reduction rate αβ.sub.1, the resolution ratio ρ.sub.slrrfocd in the slant range direction, the resolution ratio ρ.sub.slrafocd in the azimuth direction, and the amplification factor f.sub.broadf_ra_az.
12. The method of claim 11, wherein the resolution ratio ρ.sub.slrrfocd in the slant range direction is calculated by a ratio of the resolution in the slant range direction of the SAR image to the resolution in the slant range direction of the first SAR image.
13. The method of claim 11, wherein the resolution ratio ρ.sub.slrafocd in the azimuth direction is calculated by a ratio of the resolution in the azimuth direction of the SAR image to the resolution in the azimuth direction of the first SAR image.
14. The method of claim 11, wherein the determining of a peak reduction rate αβ.sub.1 depending on whether windowing is applied during the SAR image processing comprises: generating a second raw SAR image by applying windowing to the SAR raw data and compensating for an antenna pattern in accordance with positional relation between the point target and the SAR device using RCS profile data according to the antenna pattern for the point target; generating a fifth SAR image by removing a clutter level from the second raw SAR image; and calculating, using an equation that DN.sub.f5=DN.sub.f1.Math.αβ.sub.1, the peak reduction rate αβ.sub.1 based on a peak digital number DN.sub.f1 of pixels corresponding to the location of the point target of the first SAR image and a peak digital number DN.sub.f5 of pixels corresponding to the location of the point target of the fifth SAR image.
15. The method of claim 14, wherein the generating of the first SAR image by removing a clutter level from the first raw SAR image comprises: calculating a first clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the first raw SAR image; and generating a first SAR complex image by subtracting the first clutter complex value from complex values of all pixels of the first raw SAR image; and wherein when windowing is not applied during SAR image processing for generating the SAR image, the amplification factor f.sub.broadf_ra_az according to windowing application during the SAR image processing may be determined by dividing the magnitude of the sum of the complex values of all pixels of the first SAR complex image by the peak digital number DN.sub.f1 of pixels corresponding to the location of the point target of the first SAR image.
16. The method of claim 14, wherein the generating of a fifth SAR image by removing a clutter level from the second raw SAR image comprises: calculating a second clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the second raw SAR image; and generating a second SAR complex image by subtracting the second clutter complex value from complex values of all pixels of the second raw SAR image, and wherein when windowing is applied during SAR image processing for generating the SAR image, the amplification factor f.sub.broadf_ra_az according to windowing application during the SAR image processing is determined by dividing the magnitude of the sum of the complex values of all pixels of the second SAR complex image by a peak digital number DN.sub.f5 of pixels corresponding to the location of the point target of the fifth SAR image.
17. The method of claim 16, wherein the determining of the operation mode factor β.sub.4 for a distributed target of the SAR image comprises: generating a third SAR image by observing a homogeneous area of which location is known using the SAR device operating in the strip map mode and performing image processing; determining a third rescaling factor RF.sub.3 of the third SAR image; determining a local incidence angle θ.sub.3 corresponding to the homogeneous area in the third SAR image based on the location of the homogeneous area and the observation location of the SAR device; generating a fourth SAR image by observing the homogeneous area using the SAR device operating in the same operation mode as the SAR image and performing SAR image processing; determining a fourth rescaling factor RF.sub.4 of the fourth SAR image; determining a resolution ratio ρ.sub.slrrfocd4 in the slant range direction of the fourth SAR image to the third SAR image; determining a resolution ratio ρ.sub.slrafocd4 in the azimuth direction of the fourth SAR image to the third SAR image; determining a local incidence angle θ.sub.4 corresponding to the homogeneous area in the fourth SAR image based on location of the homogeneous area and observation location of the SAR device; and calculating, using an equation that DN.sub.3.Math.RF.sub.3.Math.sin θ.sub.3=DN.sub.4.Math.RF.sub.4.Math.sin θ.sub.4.Math.β.sub.3/(ρ.sub.slrrfocd4.Math.ρ.sub.slrafocd4.Math.β.sub.4), the operation mode factor β.sub.4 for the distributed target of the SAR image based on a digital number DN.sub.3 of pixels corresponding to the location of the homogeneous area of the third SAR image, a digital number DN.sub.4 of the pixels corresponding to the location of the homogeneous area of the fourth SAR image, and the operation mode factor β.sub.3 for the point target.
18. The method of claim 17, wherein the power level reflectance σ.sub.0_3 of the homogeneous area on the ground calculated based on the third SAR image is identical to the power level reflectance σ.sub.0_4 of the homogeneous area on the ground calculated based on the fourth SAR image.
19. A computer program stored in a medium to execute the method of claim 1 using a computing device.
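As a concrete illustration of the arithmetic in claims 4 and 5, the following Python sketch computes the rescaling factor RF and the reflection coefficient K.sub.2. The function names and all numeric inputs are hypothetical placeholders, not calibration data from any real SAR system.

```python
import math

def rescaling_factor(r_slant_ref, qs):
    # Claim 5: RF = 4 * pi * R_slant_ref^2 * Qs
    return 4.0 * math.pi * r_slant_ref**2 * qs

def reflection_coefficient(rf, beta3, beta4, calco, k0):
    # Claim 4: K2 = RF^2 * beta3^2 * Calco / (beta4^2 * K0^2)
    return rf**2 * beta3**2 * calco / (beta4**2 * k0**2)

# Placeholder inputs: strip map mode (beta3 = beta4 = 1), unit
# calibration constant and unit image processing coefficient.
rf_example = rescaling_factor(r_slant_ref=600_000.0, qs=1e-12)
k2_example = reflection_coefficient(rf_example, beta3=1.0, beta4=1.0,
                                    calco=1.0, k0=1.0)
```

In strip map mode both operation mode factors are 1 per claims 4 and 10, so K.sub.2 reduces to RF.sup.2.Math.Calco/K.sub.0.sup.2.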
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
[0040] Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the present description. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
[0041] Hereinafter, various embodiments will be described in detail with reference to the accompanying drawings so that a person skilled in the art to which the disclosure pertains may easily implement them. However, the technical ideas of the disclosure are not limited to the embodiments described herein since they may be modified and implemented in various forms. In describing the embodiments disclosed herein, when it is determined that a detailed description of related art may obscure the subject matter of the disclosure, detailed description of the related art will be omitted. The same or similar components will be given the same reference numbers, and overlapping descriptions thereof will be omitted.
[0042] When an element is referred to as being “connected” with another element in the present description, this includes not only when they are “directly connected” but also “indirectly connected” with another element interposed in between. When an element is referred to as “including” any other elements, this means that the element may include other elements further, without excluding other elements, unless specifically stated otherwise.
[0043] Some embodiments may be described in terms of functional block configurations and various processing steps. Some or all of these functional blocks may be implemented in various numbers of hardware and/or software configurations that perform specific functions. For example, functional blocks of the disclosure may be implemented by one or more microprocessors or by circuit configurations for a certain function. Functional blocks of the disclosure may be implemented in various programming or scripting languages. Functional blocks of the disclosure may be implemented as an algorithm running on one or more processors. A function performed by a functional block of the disclosure may be performed by a plurality of functional groups, or functions performed by a plurality of functional blocks of the disclosure may be performed by one functional block. In addition, the disclosure may employ related art for electronic configuration, signal processing, and/or data processing.
[0044] Synthetic aperture radar (SAR) raw data acquired through the operation of an SAR device and hardware includes complex values acquired by generating voltage signals using the satellite's power device and oscillator, converting the generated voltage signals into electromagnetic waves for transmission, receiving echo signals, which are the returned electromagnetic waves, and converting the received echo signals into voltage levels for storage. The radar equation for the power level of the obtained signals is as follows:
P.sub.r=λ.sup.2/(4π).Math.P.sub.t.Math.A.sub.e.sup.t.Math.G.sup.t.Math.σ.Math.A.sub.e.sup.r.Math.G.sup.r/(4πR.sub.slant.sup.2).sup.2.Math.eff
[0045] where P.sub.r is reception power, λ is wavelength, P.sub.t is transmission power, A.sub.e.sup.t is antenna efficiency in transmission, G.sup.t is absolute antenna gain in transmission, σ is radar cross section from which radar beam is reflected on the ground, A.sub.e.sup.r is antenna efficiency in reception, G.sup.r is absolute antenna gain in reception, R.sub.slant is slant range, and eff.sup.−1 is calibration constant which is gain offset of SAR system.
[0046] The signal of received voltage level is as follows:
V.sub.r=(R.Math.P.sub.r).sup.1/2
[0047] where R is resistance value of an electronic circuit.
[0048] By inputting raw data of this voltage level, an SAR image forming and processing process is performed to obtain digital number (DN) data of a single look complex (SLC) image. The relation between digital number and radar cross section (σ) is as follows:
σ.sup.1/2=DN.Math.RF/eff.sup.1/2
[0049] where RF is a rescaling factor of SLC image and may be defined as follows:
RF=4πR.sub.slant_ref.sup.2.Math.Quantization.sub.step/(RaProcScale.Math.AzProcScale)
[0050] where R.sub.slant_ref is a reference slant range, Quantization.sub.step is a quantization step value, and RaProcScale and AzProcScale are scaling factors applied to image processing in a slant range direction and in an azimuth direction, respectively. The initial calibration constant (eff) is 1. Quantization.sub.step/(RaProcScale.Math.AzProcScale) may be expressed as a quantization coefficient (Qs) used to determine the digital number (DN) of each pixel of the SAR image.
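The DN-to-radar-cross-section relation above, σ.sup.1/2=DN.Math.RF/eff.sup.1/2 with RF=4πR.sub.slant_ref.sup.2.Math.Qs, can be sketched as follows. The helper names and numeric inputs are illustrative assumptions, not values from a real SAR processor.

```python
import math

def quantization_coefficient(q_step, ra_proc_scale, az_proc_scale):
    # Qs = Quantization_step / (RaProcScale * AzProcScale)
    return q_step / (ra_proc_scale * az_proc_scale)

def rcs_from_dn(dn, r_slant_ref, qs, eff=1.0):
    # sigma^(1/2) = DN * RF / eff^(1/2)  =>  sigma = DN^2 * RF^2 / eff
    rf = 4.0 * math.pi * r_slant_ref**2 * qs
    return (dn * rf)**2 / eff
```

With the initial calibration constant eff = 1, σ is simply (DN·RF)².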
[0051] In the above equations, the magnitude information of the data is a result of the overlap and interference of 2D sinc functions, which are the responses of the SAR system, and the magnitude of the signals is adjusted by compression so that there is no dependence on bandwidth. The following assumptions are made about the brightness values of SAR image data:
[0052] Assumption 1) The unit of an object is a small dot and has unique structural and organizational reflective properties.
[0053] Assumption 2) The brightness value of image for distributed target is a result of constructive interference of response signals to point targets constituting the object.
[0054] Assumption 3) The spatial transformation of brightness value information of distributed target is based on density of voltage level signals.
[0055] Assumption 4) The noise level or antenna distortion in image data is already compensated.
[0056] From the above equations, a relationship between a brightness value and a physical quantity on SAR image data is as follows:
β.sup.1/2=DN.Math.RF.Math.β.sub.3.Math.Calco.sup.1/2
[0057] where β.sub.3 is an operation mode factor for a point target, which is 1 when operating in a strip map mode and is set through calculation in other operation modes. β is the power level reflectance of an object on the slant range, zero-Doppler image domain. Calco is a calibration constant of the SAR processing system.
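The brightness-to-reflectance relation above, β.sup.1/2=DN.Math.RF.Math.β.sub.3.Math.Calco.sup.1/2, squares to β=(DN.Math.RF.Math.β.sub.3).sup.2.Math.Calco. A minimal sketch, with placeholder inputs only:

```python
def beta_from_dn(dn, rf, beta3, calco):
    # beta^(1/2) = DN * RF * beta3 * Calco^(1/2)
    # =>  beta = (DN * RF * beta3)^2 * Calco
    return (dn * rf * beta3)**2 * calco
```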
[0058] The result of analyzing theoretical brightness value information on an object on SAR image as inherent reflection characteristics of material and image processing-related parameters is as follows:
β.sup.1/2=β.sub.0.sup.1/2.Math.α.sub.1.Math.ρ.sub.slrrfocd.Math.f.sub.broadf_ra[slrwc].Math.β.sub.1.Math.ρ.sub.slrafocd.Math.f.sub.broadf_az[azwc].Math.β.sub.4
[0059] where β.sub.0 is a power level reflectance of an object on a slant range domain. α.sub.1 is a sinc peak reduction rate in a slant range direction due to windowing in a slant range direction, and β.sub.1 is a sinc peak reduction rate in an azimuth direction due to windowing in an azimuth direction.
[0060] ρ.sub.slrrfocd is a ratio of the resolution of the present image in a slant range direction to the resolution of a certain reference mode/beam in a slant range direction, and may be determined by using the slant range resolution information considering only slant range bandwidth information. ρ.sub.slrafocd is a ratio of the resolution of the present image in an azimuth direction to the resolution of a certain reference mode/beam in an azimuth direction, and may be determined by using the azimuth direction resolution information considering only azimuth direction bandwidth information.
[0061] f.sub.broadf_ra is a ratio of constructive interference in a slant range direction by broadening due to windowing in a slant range direction, and slrwc is a windowing coefficient in a slant range direction. f.sub.broadf_az is a ratio of constructive interference in an azimuth direction by broadening due to windowing in an azimuth direction, and azwc is a windowing coefficient in an azimuth direction. β.sub.4 is an operation mode factor for a distributed target, which is 1 in strip mode and is set through calculation in other operation modes.
[0062] ρ.sub.slrrfocd, ρ.sub.slrafocd, f.sub.broadf_ra, f.sub.broadf_az, and β.sub.4 are for a distributed target and have a value of 1 for a point target.
[0063] By applying the above equation to a point target such as a corner reflector or an active transponder, the theoretical brightness value on the SAR image may be analyzed.
β.sup.1/2=RCS.sub.pt.sup.1/2.Math.α.sub.1.Math.β.sub.1
[0064] where RCS.sub.pt replaces β.sub.0 in consideration of structural and functional characteristics of artificial point target.
[0065] Parameters of the above equations may be calculated using SLC data generated by observing a corner reflector or an active transponder using the strip map operation mode and a standard beam, and performing image processing. When windowing is not applied, both α.sub.1 and β.sub.1 may be taken as 1.
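Under the point-target conditions above (strip map mode, no windowing, so α.sub.1=β.sub.1=1), combining β.sup.1/2=RCS.sub.pt.sup.1/2 with β.sup.1/2=DN.Math.RF.Math.Calco.sup.1/2 gives DN.sub.1.Math.RF.Math.Calco.sup.1/2=RCS.sub.pt.sup.1/2, from which the calibration constant can be isolated. A sketch with hypothetical inputs:

```python
def calibration_constant(dn1, rf, rcs_pt):
    # From DN_1 * RF * Calco^(1/2) = RCS_pt^(1/2):
    #   Calco = RCS_pt / (DN_1 * RF)^2
    return rcs_pt / (dn1 * rf)**2
```

In practice RCS.sub.pt is the known radar cross section of a corner reflector or active transponder, and DN.sub.1 is read from the pixels at its location.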
[0066] The power level reflectance of an object may be calculated as follows:
β.sub.0=σ.sub.0/(sin θ.sub.i).sup.2
γ.sub.0=σ.sub.0/(cos θ.sub.i).sup.2
[0067] where σ.sub.0 is power level reflectance of an object on the ground, and γ.sub.0 is power level reflectance of an object on a vertical domain in a beam radiation direction. θ.sub.i is a local incidence angle.
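The two relations above are mutually consistent with the claimed forms σ.sub.0=β.sub.0.Math.(sin θ.sub.i).sup.2 and γ.sub.0=β.sub.0.Math.(tan θ.sub.i).sup.2, since tan.sup.2 θ=sin.sup.2 θ/cos.sup.2 θ. A small numerical check, with arbitrary placeholder values for β.sub.0 and θ.sub.i:

```python
import math

theta_i = math.radians(35.0)   # arbitrary local incidence angle
beta_0 = 0.2                   # arbitrary slant range reflectance

sigma_0 = beta_0 * math.sin(theta_i)**2    # ground domain
gamma_0_a = sigma_0 / math.cos(theta_i)**2  # gamma_0 = sigma_0 / cos^2
gamma_0_b = beta_0 * math.tan(theta_i)**2   # gamma_0 = beta_0 * tan^2
```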
[0068] In the disclosure, the relation between the bright value of image and the reflectance of an object is analyzed and derived in consideration of image processing of voltage level data.
[0070] Referring to
[0071] The processor 110 may be configured to process commands of a computer program by performing basic arithmetic, logic, and input/output operations. The commands are stored in the memory 120, and the processor 110 may read out commands stored in the memory 120. For example, the processor 110 may be configured to execute received commands according to program code stored in the memory 120.
[0072] The memory 120 is a computer-readable recording medium and may include a random-access memory (RAM), a read-only memory (ROM), and a permanent mass storage device such as disk drive. In addition, program code for controlling the computing device 100 may be temporarily or permanently stored in the memory 120.
[0073] The input unit 130 may receive an SAR image product 10 from an external device. The input unit 130 may be a communication module capable of receiving the SAR image product 10 through wired or wireless network. The input unit 130 may be an input device capable of reading a storage medium in which the SAR image product 10 is stored.
[0074] The SAR image product 10 may include the SAR image 11, local incidence angle data 12 and a reflection coefficient K.sub.2 13. The SAR image 11 is composed of pixels arranged in two dimensions, each having a complex value (I.sub.DN+jQ.sub.DN), and may be, for example, a single look complex (SLC) image. The SAR image 11 has a resolution in a slant range direction and a resolution in an azimuth direction, and pixels corresponding to the product of these resolutions are arranged in two dimensions.
[0075] The local incidence angle data 12 includes local incidence angle values θ.sub.i respectively corresponding to the pixels of the SAR image 11. The local incidence angle means an angle between a line perpendicular to a certain location, or to the surface of an object at the location, and a radar beam incident on the location. According to an embodiment, the local incidence angle data 12 may have the same number of local incidence angle values θ.sub.i as the number of pixels in the SAR image 11. Local incidence angle values corresponding to pixels in the SAR image 11 may be included in the local incidence angle data 12. According to another embodiment, the local incidence angle data 12 may have more or fewer local incidence angle values θ.sub.i than the number of pixels in the SAR image 11, and the local incidence angle values θ.sub.i respectively corresponding to the pixels in the SAR image 11 may be calculated using the local incidence angle values of the local incidence angle data 12, for example, using interpolation.
[0076] The reflection coefficient K.sub.2 13 is a value calculated in consideration of hardware and software processes for generating the SAR image 11, and will be described in more detail below.
[0077] The SAR image product 10 including the SAR image 11, the local incidence angle data 12 and the reflection coefficient K.sub.2 13 may be received and stored in the memory 120 by the input unit 130. The processor 110 may calculate power level reflectance σ.sub.0 of an object on the ground displayed in the SAR image 11 using the SAR image 11, the local incidence angle data 12 and the reflection coefficient K.sub.2 13.
[0079] Referring to
[0080] The processor 110 may calculate the power level reflectance β.sub.0 on a slant range domain of a first object corresponding to a first pixel based on the complex value (I.sub.DN+jQ.sub.DN) of the first pixel in the SAR image 11 and the reflection coefficient K.sub.2 13 (S120). The processor 110 may calculate the power level reflectance β.sub.0 of an object on the slant range domain using β.sub.0=K.sub.2.Math.(I.sub.DN.sup.2+Q.sub.DN.sup.2) or β.sub.0=K.sub.2.Math.DN.sup.2, where I.sub.DN and Q.sub.DN are values included in the SAR image 11, and K.sub.2 is the reflection coefficient K.sub.2 13 included in the SAR image product 10 received in S110. DN is the digital number corresponding to the magnitude of the first pixel, i.e., the absolute value of the complex value (I.sub.DN+jQ.sub.DN) of the first pixel.
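Step S120 can be sketched per pixel as β.sub.0=K.sub.2.Math.(I.sub.DN.sup.2+Q.sub.DN.sup.2), the squared magnitude of the complex pixel value scaled by K.sub.2. The 2x2 "image" and the K.sub.2 value below are illustrative placeholders only.

```python
k2 = 0.5  # hypothetical reflection coefficient

# Tiny placeholder SLC image: each pixel is I_DN + j*Q_DN.
slc_image = [
    [3.0 + 4.0j, 1.0 + 0.0j],
    [0.0 + 2.0j, 6.0 - 8.0j],
]

# beta_0 = K2 * (I_DN^2 + Q_DN^2) = K2 * |pixel|^2, per pixel
beta0_image = [
    [k2 * (px.real**2 + px.imag**2) for px in row]
    for row in slc_image
]
```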
[0081] The processor 110 may calculate the power level reflectance σ.sub.0 of the first object on the ground based on the power level reflectance β.sub.0 of the first object on the slant range domain calculated in S120 and the local incidence angle value θ.sub.i corresponding to the first pixel received in S110 (S130). According to the disclosure, the processor 110 may calculate the power level reflectance σ.sub.0 of the first object on the ground by using σ.sub.0=β.sub.0.Math.(sin θ.sub.i).sup.2.
[0082] The processor 110 may calculate the power level reflectance γ.sub.0 of the first object on the vertical domain in the beam radiation direction based on the power level reflectance β.sub.0 of the first object on the slant range domain calculated in S120 and the local incidence angle value θ.sub.i corresponding to the first pixel received in S110 (S140). According to the disclosure, the processor 110 may calculate the power level reflectance γ.sub.0 of the first object on the vertical domain in the beam radiation direction using γ.sub.0=β.sub.0.Math.(tan θ.sub.i).sup.2.
[0083] This is because spatial transformation of image data must be performed at voltage level since power flux density of transmission beam and reception beam is compensated and normalized during SAR signal processing, each target point is compressed on the slant range domain at voltage level, and signals for multiple points are superimposed.
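The single-pixel computation of steps S120 through S140 can be combined into one function. This is a sketch under placeholder inputs, not values from any real SAR product.

```python
import math

def reflectances(i_dn, q_dn, k2, theta_i):
    beta0 = k2 * (i_dn**2 + q_dn**2)        # S120: slant range domain
    sigma0 = beta0 * math.sin(theta_i)**2   # S130: ground domain
    gamma0 = beta0 * math.tan(theta_i)**2   # S140: vertical domain
    return beta0, sigma0, gamma0
```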
[0085] Referring to
[0086] The SAR device 20 may be mounted on air vehicles such as an artificial satellite, aircraft, unmanned aerial vehicle or the like. SAR is a radar system that creates a ground topographic map or observes the surface by sequentially firing radar waves from the air toward the ground or the sea and processing the minute time differences with which the radar waves are reflected back from the surface. The SAR device 20 includes a transmission/reception module for transmitting and receiving radar waves, and a control module for controlling the transmission/reception module. The SAR device 20 may observe an area to be photographed (PR) including an object (Ob) in side looking mode.
[0087] The computing device 200 may be installed, for example, in a ground station that communicates with a vehicle equipped with the SAR device 20.
[0088] The processor 210 may be configured to process commands of a computer program by performing basic arithmetic, logic and input/output operations. The commands are stored in the memory 220 and the processor 210 may read the commands stored in the memory 220. For example, the processor 210 may be configured to execute received commands according to program code stored in the memory 220.
[0089] The memory 220 is a computer-readable recording medium and may include random access memory (RAM), read only memory (ROM) and a permanent mass storage device such as a disk drive. In addition, program code for controlling the computing device 200 may be temporarily or permanently stored in the memory 220.
[0090] The communication unit 230 may receive SAR raw data from the SAR device 20 through wireless communication. The SAR raw data is image data generated by the SAR device 20 and composed of pixels arranged in two dimensions.
[0091] Each of the pixels may have a complex value. The SAR raw data may be stored in the memory 220, and the processor 210 may image process the SAR raw data to generate a SAR image and a reflection coefficient K.sub.2 thereof. The generated SAR image and its reflection coefficient K.sub.2 correspond to the SAR image 11 and the reflection coefficient K.sub.2 thereof in the SAR image product 10 and may be provided to the computing device 100 of
[0092]
[0093] Referring to
[0094] The processor 210 may determine a calibration constant (Calco) of the SAR processing system that has generated the SAR image 11 (S220). The SAR processing system is implemented in the processor 210 of the computing device 200 and generates the SAR image 11 by image processing SAR raw data. The calibration constant (Calco) is one of the parameters applied when generating the SAR image 11 and may be a value that varies depending on the SAR processing system.
[0095] The processor 210 may determine the operation mode factor β.sub.3 for the point target of the SAR image 11 (S230). The operation mode in which the SAR device 20 photographs an area to be photographed (PR) to generate the SAR image 11 is one of a plurality of preset operation modes. The operation mode may include, for example, strip map mode, and the operation mode factor β.sub.3 in strip map mode is 1. When the SAR device 20 captures an area to be photographed (PR) in a different operation mode, the operation mode factor β.sub.3 for the point target is other than 1.
[0096] The processor 210 may determine the operation mode factor β.sub.4 for the distributed target of the SAR image 11 (S240). When the SAR device 20 captures an area to be photographed (PR) in strip map mode, the operation mode factor β.sub.4 for the distributed target is 1. When the SAR device 20 captures an area to be photographed (PR) in a different operation mode, the operation mode factor β.sub.4 for the distributed target is other than 1.
[0097] The processor 210 may determine an SAR image processing coefficient K.sub.0 of the SAR image 11 (S250). The SAR image processing coefficient K.sub.0 of the SAR image 11 is a coefficient for calculating the reflection coefficient K.sub.2, which will be described in more detail below.
[0098] The processor 210 may calculate the reflection coefficient K.sub.2 of the SAR image 11 based on the rescaling factor (RF), the operation mode factor β.sub.3 for the point target, the operation mode factor β.sub.4 for the distributed target, the calibration constant (Calco) and the SAR image processing coefficient K.sub.0. The processor 210 may calculate the reflection coefficient K.sub.2 of the SAR image 11 using K.sub.2=RF.sup.2.Math.β.sub.3.sup.2.Math.Calco/(β.sub.4.sup.2.Math.K.sub.0.sup.2).
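As an illustrative sketch, the relation of [0098] may be written directly; all inputs are the factors defined in S210 through S250.

```python
def reflection_coefficient_k2(rf, beta3, calco, beta4, k0):
    """K2 = RF^2 * beta3^2 * Calco / (beta4^2 * K0^2), per [0098].

    rf:    rescaling factor RF
    beta3: operation mode factor for the point target
    calco: calibration constant of the SAR processing system
    beta4: operation mode factor for the distributed target
    k0:    SAR image processing coefficient
    """
    return (rf ** 2) * (beta3 ** 2) * calco / ((beta4 ** 2) * (k0 ** 2))
```

In strip map mode, where β.sub.3 and β.sub.4 are both 1, K.sub.2 reduces to RF.sup.2.Math.Calco/K.sub.0.sup.2.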
[0099] In order to determine the rescaling factor (RF) of SAR image 11, the processor 210 may determine a reference slant range R.sub.slant_ref from the SAR device 20 to an area to be photographed (PR). Since the area to be photographed (PR) is observed in side looking mode, its slant range, i.e., a distance, from the SAR device 20 varies according to its location. The reference slant range R.sub.slant_ref is used to correct the distance difference within the area to be photographed (PR) since even with respect to the same object, the object is displayed brighter as the object gets closer to the SAR device 20 and the object is displayed darker as the object gets farther therefrom. That is, the signal may be corrected to indicate that an arbitrary position within an area to be photographed (PR) is spaced apart from the SAR device 20 by the reference slant range R.sub.slant_ref.
[0100]
[0101] Referring to
[0102] Referring back to
[0103] The processor 210 may calculate the rescaling factor (RF) using RF=4πR.sub.slant_ref.sup.2.Math.Qs. The calculated rescaling factor (RF) may be stored in the memory 220.
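As a sketch of [0103], the rescaling factor follows the spherical-spreading form 4πR.sup.2 applied to the reference slant range; the meaning of the scaling term Qs is not defined in this section and is treated here as an opaque input.

```python
import math

def rescaling_factor(r_slant_ref, qs):
    """RF = 4*pi*R_slant_ref^2 * Qs, per [0103].

    r_slant_ref: reference slant range from the SAR device to the
                 area to be photographed (PR)
    qs:          scaling term determined elsewhere in the processing
                 chain (assumption: passed in as given)
    """
    return 4.0 * math.pi * r_slant_ref ** 2 * qs
```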
[0104] In order to determine the calibration constant (Calco) of the SAR processing system, the SAR device 20 operating in strip map mode may generate first SAR raw data by observing a point target. The point target is a target of which the radar cross section RCS.sub.point_target, location and position are known, and may be installed in a preset position at a preset location. The point target has a preset radar cross section RCS.sub.point_target according to its size or performance. The point target may be a corner reflector or an active transponder. The point target is a device that reflects the radar waves transmitted from the SAR device 20 entirely back to the SAR device 20. The computing device 200 may acquire the first SAR raw data.
[0105] The processor 210 may generate a first raw SAR image based on the first SAR raw data. The first raw SAR image is generated by compensating for the antenna pattern according to positional relation between the point target and the SAR device 20 using RCS profile data according to the antenna pattern for the point target without applying windowing to the first SAR raw data. The RCS profile data according to the antenna pattern for the point target may be previously stored in the processor 210. The gain of antenna pattern may vary according to positional relation between the point target and the SAR device 20. The first raw SAR image may be an image processed to compensate for the antenna pattern according to positional relation between the point target and the SAR device 20 using RCS profile data of the point target stored in advance with respect to the first SAR raw data.
[0106] The processor 210 may generate the first SAR image by removing a clutter level from the first raw SAR image. According to an embodiment, the processor 210 may calculate a first clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the first raw SAR image. The clutter level means a complex value corresponding to an area without the point target, that is, the background, and may be calculated as the first clutter complex value by averaging the complex values of all pixels in four areas in a diagonal direction of the point target. The processor 210 may generate the first SAR complex image by subtracting the first clutter complex value from the complex values of all pixels of the first raw SAR image. The processor 210 may generate the first SAR image by calculating the size of each complex value of all pixels of the first SAR complex image. The processor 210 may determine the size of a complex value by calculating its absolute value.
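The clutter-removal steps of [0106] may be sketched as follows. The explicit list of clutter pixel indices is a hypothetical illustration; this section specifies only that the clutter estimate is averaged over areas diagonal to the point target.

```python
def remove_clutter(raw_image, clutter_indices):
    """Sketch of [0106]: estimate a complex clutter level, subtract it
    from every pixel, then take magnitudes to form the SAR image.

    raw_image:       2-D list of complex pixel values (the raw SAR image)
    clutter_indices: iterable of (row, col) pairs belonging to the
                     background areas diagonal to the point target
                     (assumption: indices are supplied by the caller)
    """
    samples = [raw_image[r][c] for r, c in clutter_indices]
    clutter = sum(samples) / len(samples)          # clutter complex value
    # SAR complex image: clutter subtracted from all pixels
    complex_image = [[px - clutter for px in row] for row in raw_image]
    # SAR image: magnitude (absolute value) of each complex pixel
    return [[abs(px) for px in row] for row in complex_image]
```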
[0107]
[0108] Referring to
[0109] Referring back to
[0110] According to an embodiment, in order to determine an operation mode factor β.sub.3 for the point target of the SAR image 11, the SAR device 20 operating in the same operation mode at the same observation location as when generating the SAR image 11 may generate second SAR raw data by observing the same point target as when generating the first SAR raw data. The computing device 200 may acquire second SAR raw data. The processor 210 may generate a second SAR image by performing SAR image processing on the second SAR raw data.
[0111] The processor 210 may generate a second raw SAR image by compensating for the antenna pattern according to the positional relation between the point target and the SAR device 20 using RCS profile data according to the antenna pattern for the point target without applying windowing to the second SAR raw data. The processor 210 may generate the second SAR image by removing the clutter level from the second raw SAR image. The processor 210 may calculate a second clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the second raw SAR image, and generate the second SAR complex image by subtracting the second clutter complex value from complex values of all pixels of the second raw SAR image. The processor 210 may generate the second SAR image by calculating the size of each complex value of all pixels of the second SAR complex image.
[0112] The processor 210 may calculate the operation mode factor β.sub.3 for the point target of SAR image 11 based on digital number DN.sub.1 of pixels corresponding to the location of the point target in the first SAR image and digital number DN.sub.2 of pixels corresponding to the location of point target in the second SAR image. The operation mode factor β.sub.3 for the point target of SAR image 11 may be calculated using DN.sub.1=DN.sub.2.Math.β.sub.3.
[0113] According to another embodiment, in order to determine the operation mode factor β.sub.3 for the point target of SAR image 11, the processor 210 may determine the first rescaling factor RF.sub.1 of the first SAR image. The processor 210 operating in the same operation mode as when generating the SAR image 11 may generate second SAR raw data by observing the same point target as when generating the first SAR raw data. The computing device 200 may acquire the second SAR raw data. The processor 210 may generate a second SAR image by performing SAR image processing on the second SAR raw data.
[0114] The processor 210 may generate a second raw SAR image by compensating for the antenna pattern according to the positional relation between the point target and the SAR device 20 using RCS profile data according to the antenna pattern for the point target without applying windowing to the second SAR raw data. The processor 210 may generate the second SAR image by removing the clutter level from the second raw SAR image. The processor 210 may calculate a second clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the second raw SAR image, and generate the second SAR image by subtracting the second clutter complex value from complex values of all pixels of the second raw SAR image to generate the second SAR complex image and calculating the size of each complex value of all pixels of the second SAR complex image. The processor 210 may determine a second rescaling factor RF.sub.2 of the second SAR image.
[0115] The processor 210 may calculate an operation mode factor β.sub.3 for the point target of SAR image 11 based on digital number DN.sub.1 of pixels corresponding to the location of the point target in the first SAR image, the first rescaling factor RF.sub.1, digital number DN.sub.2 of pixels corresponding to the location of point target in the second SAR image and the second rescaling factor RF.sub.2. The operation mode factor β.sub.3 may be calculated using DN.sub.1.Math.RF.sub.1=DN.sub.2.Math.RF.sub.2.Math.β.sub.3.
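The relation of [0115] may be solved for β.sub.3 as a one-line sketch; with RF.sub.1=RF.sub.2=1 it reduces to the simpler relation DN.sub.1=DN.sub.2.Math.β.sub.3 of [0112].

```python
def operation_mode_factor_beta3(dn1, rf1, dn2, rf2):
    """Solve DN1*RF1 = DN2*RF2*beta3 ([0115]) for beta3.

    dn1, dn2: digital numbers of the pixels at the point target location
              in the first and second SAR images
    rf1, rf2: rescaling factors of the first and second SAR images
    """
    return (dn1 * rf1) / (dn2 * rf2)
```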
[0116] According to an embodiment, in order to determine an operation mode factor β.sub.4 for the distributed target of the SAR image 11, the SAR device 20 operating in strip map mode may generate a third SAR image by observing a homogeneous area of which the location is known and performing SAR image processing. For example, the SAR device 20 operating in strip map mode may generate third SAR raw data by observing a homogeneous area of which the location is known. The computing device 200 may acquire the third SAR raw data. The processor 210 may generate a third SAR image by performing SAR image processing on the third SAR raw data. The processor 210 may generate the third raw SAR image without applying windowing to the third SAR raw data. The processor 210 may generate a third SAR complex image by removing the clutter level from the third raw SAR image, and generate a third SAR image by calculating the size of each complex value of all pixels of the third SAR complex image.
[0117] The SAR device 20 operating in the same operation mode at the same observation location as when generating the SAR image 11 may generate the fourth SAR raw data by observing the same homogeneous area as when generating the third SAR raw data. The computing device 200 may acquire the fourth SAR raw data. The processor 210 may generate a fourth SAR image by performing SAR image processing on the fourth SAR raw data. The processor 210 may generate the fourth raw SAR image without applying windowing to the fourth SAR raw data. The processor 210 may generate a fourth SAR complex image by removing the clutter level from the fourth raw SAR image, and generate a fourth SAR image by calculating the size of each complex value of all pixels of the fourth SAR complex image.
[0118] The processor 210 may determine the resolution ratio ρ.sub.slrrfocd4 in the slant range direction of the fourth SAR image to the third SAR image, and determine the resolution ratio ρ.sub.slrafocd4 in the azimuth direction of the fourth SAR image to the third SAR image.
[0119] The processor 210 may calculate an operation mode factor β.sub.4 for the distributed target of the SAR image 11 based on digital number DN.sub.3 of pixels corresponding to the location of the homogeneous area in the third SAR image, digital number DN.sub.4 of pixels corresponding to the location of the homogeneous area in the fourth SAR image and the operation mode factor β.sub.3 for the point target of the SAR image 11. The operation mode factor β.sub.4 for the distributed target of the SAR image 11 may be calculated using DN.sub.3=DN.sub.4.Math.β.sub.3/(ρ.sub.slrrfocd4.Math.ρ.sub.slrafocd4.Math.β.sub.4).
[0120] In order to determine the SAR image processing coefficient K.sub.0 of the SAR image 11, the processor 210 may determine a resolution ratio ρ.sub.slrrfocd in the slant range direction of the SAR image 11 to the first SAR image and a resolution ratio ρ.sub.slrafocd in the azimuth direction of the SAR image 11 to the first SAR image. The resolution ratio in the slant range direction ρ.sub.slrrfocd is calculated by the ratio of the resolution in the slant range direction of the SAR image 11 to the resolution in the slant range direction of the first SAR image, and the resolution ratio in the azimuth direction ρ.sub.slrafocd may be calculated by the ratio of the resolution in the azimuth direction of the SAR image 11 to the resolution in the azimuth direction of the first SAR image.
[0121] The processor 210 may determine a peak reduction rate αβ.sub.1 depending on whether windowing is applied during SAR image processing, and may determine an amplification factor f.sub.broadf_ra_az according to windowing application during SAR image processing. The processor 210 may calculate the SAR image processing coefficient K.sub.0 based on the peak reduction rate αβ.sub.1, the resolution ratio ρ.sub.slrrfocd in the slant range direction, the resolution ratio ρ.sub.slrafocd in the azimuth direction, and the amplification factor f.sub.broadf_ra_az. The SAR image processing coefficient K.sub.0 may be calculated using K.sub.0=αβ.sub.1.Math.ρ.sub.slrrfocd.Math.ρ.sub.slrafocd.Math.f.sub.broadf_ra_az.
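The product form of [0121] may be sketched directly; each input is one of the factors defined in [0120] through [0124].

```python
def image_processing_coefficient_k0(alpha_beta1, rho_slrrfocd,
                                    rho_slrafocd, f_broadf_ra_az):
    """K0 = alpha_beta1 * rho_slrrfocd * rho_slrafocd * f_broadf_ra_az,
    per [0121].

    alpha_beta1:    peak reduction rate due to windowing
    rho_slrrfocd:   resolution ratio in the slant range direction
    rho_slrafocd:   resolution ratio in the azimuth direction
    f_broadf_ra_az: amplification factor according to windowing application
    """
    return alpha_beta1 * rho_slrrfocd * rho_slrafocd * f_broadf_ra_az
```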
[0122] In order to determine the peak reduction rate αβ.sub.1 according to whether or not windowing is applied during SAR image processing, the processor 210 may apply windowing to the first SAR raw data, generate a fifth raw SAR image by compensating for the positional relation between the point target and the SAR device 20 using RCS profile data according to the antenna pattern for the point target, and generate a fifth SAR image by removing the clutter level from the fifth raw SAR image. The processor 210 may calculate a fifth clutter complex value corresponding to the clutter level based on complex values of an area independent of the point target in the fifth raw SAR image, generate a fifth SAR complex image by subtracting the fifth clutter complex value from the complex values of all pixels of the fifth raw SAR image, and generate the fifth SAR image by calculating the size of each complex value of all pixels of the fifth SAR complex image.
[0123] The processor 210 may calculate the peak reduction rate αβ.sub.1 based on peak digital number DN.sub.f1 of pixels corresponding to the location of the point target in the first SAR image and peak digital number DN.sub.f5 of pixels corresponding to the location of the point target in the fifth SAR image. The peak reduction rate αβ.sub.1 may be calculated using DN.sub.f5=DN.sub.f1.Math.αβ.sub.1. The peak reduction rate αβ.sub.1 may be understood as α.sub.1.Math.β.sub.1, where α.sub.1 is a sinc peak reduction rate in the slant range direction due to windowing in the slant range direction and β.sub.1 is a sinc peak reduction rate in the azimuth direction due to windowing in the azimuth direction.
[0124] The amplification factor f.sub.broadf_ra_az according to windowing application during SAR image processing may vary depending on whether windowing is applied during SAR image processing to generate SAR image 11. According to an embodiment, when windowing is not applied during SAR image processing for generating the SAR image, the amplification factor f.sub.broadf_ra_az according to windowing application during the SAR image processing may be determined by the magnitude value of complex values which is the sum of complex values of all pixels of the first SAR complex image divided by peak digital number DN.sub.f1 of pixels corresponding to the location of the point target of the first SAR image. According to another embodiment, when windowing is applied during SAR image processing for generating the SAR image 11, the amplification factor f.sub.broadf_ra_az according to windowing application during the SAR image processing may be determined by the magnitude value of complex values which is the sum of complex values of all pixels of the fifth SAR complex image divided by peak digital number DN.sub.f5 of pixels corresponding to the location of the point target of the fifth SAR image.
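The amplification factor of [0124] may be sketched as follows; the same function applies to either the first or the fifth SAR complex image, with the corresponding peak digital number DN.sub.f1 or DN.sub.f5.

```python
def amplification_factor(complex_image, peak_dn):
    """f_broadf_ra_az per [0124]: the magnitude of the sum of the complex
    values of all pixels of the SAR complex image, divided by the peak
    digital number of the pixel at the point target location.

    complex_image: 2-D list of complex pixel values (first or fifth
                   SAR complex image)
    peak_dn:       peak digital number DN_f1 or DN_f5
    """
    total = sum(px for row in complex_image for px in row)
    return abs(total) / peak_dn
```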
[0125] The first and fifth SAR images are both generated based on the first SAR raw data: the first SAR image is generated through SAR image processing that does not apply windowing to the first SAR raw data, whereas the fifth SAR image is generated through SAR image processing that applies windowing to the first SAR raw data.
[0126]
[0127] Referring to
[0128] Referring back to
[0129] The processor 210 may determine a local incidence angle θ.sub.3 corresponding to the homogeneous area in the third SAR image based on the location of the homogeneous area and the observation location of the SAR device when generating the third SAR raw data.
[0130] The SAR device 20 operating in the same operation mode as when generating the SAR image 11 may generate fourth SAR raw data by observing the same homogeneous area as when generating the third SAR raw data. The computing device 200 may acquire the fourth SAR raw data. The processor 210 may generate a fourth SAR image by performing SAR image processing on the fourth SAR raw data. The processor 210 may generate the fourth raw SAR image without applying windowing to the fourth SAR raw data. The processor 210 may generate a fourth SAR complex image by removing the clutter level from the fourth raw SAR image, and generate a fourth SAR image by calculating the size of each complex value of all pixels of the fourth SAR complex image. The processor 210 may determine a fourth rescaling factor RF.sub.4 of the fourth SAR image.
[0131] The processor 210 may determine a local incidence angle θ.sub.4 corresponding to the homogeneous area in the fourth SAR image based on the location of the homogeneous area and the observation location of the SAR device when generating the fourth SAR raw data.
[0132] The processor 210 may determine the resolution ratio ρ.sub.slrrfocd4 in the slant range direction of the fourth SAR image to the third SAR image, and determine the resolution ratio ρ.sub.slrafocd4 in the azimuth direction of the fourth SAR image to the third SAR image.
[0133] The processor 210 may calculate the operation mode factor β.sub.4 for the distributed target of the SAR image 11 based on digital number DN.sub.3 of pixels corresponding to the location of the homogeneous area in the third SAR image, digital number DN.sub.4 of pixels corresponding to the location of the homogeneous area in the fourth SAR image and the operation mode factor β.sub.3 for the point target of the SAR image 11. The operation mode factor β.sub.4 may be calculated using DN.sub.3.Math.RF.sub.3.Math.sin θ.sub.3=DN.sub.4.Math.RF.sub.4.Math.sin θ.sub.4.Math.β.sub.3/(ρ.sub.slrrfocd4.Math.ρ.sub.slrafocd4.Math.β.sub.4).
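The relation of [0133] may be solved for β.sub.4 as a sketch; the rescaling factor RF.sub.3 of the third SAR image is assumed here to have been determined in the same way as RF.sub.4.

```python
import math

def operation_mode_factor_beta4(dn3, rf3, theta3, dn4, rf4, theta4,
                                beta3, rho_slrrfocd4, rho_slrafocd4):
    """Solve DN3*RF3*sin(theta3) =
       DN4*RF4*sin(theta4)*beta3 / (rho_slrrfocd4*rho_slrafocd4*beta4)
    for beta4 ([0133]). Angles are in radians.
    """
    lhs = dn3 * rf3 * math.sin(theta3)
    rhs_num = dn4 * rf4 * math.sin(theta4) * beta3
    return rhs_num / (rho_slrrfocd4 * rho_slrafocd4 * lhs)
```

In strip map mode, where the resolution ratios and β.sub.3 are all 1 and the incidence angles coincide, β.sub.4 reduces to DN.sub.4.Math.RF.sub.4/(DN.sub.3.Math.RF.sub.3).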
[0134] According to the disclosure, the power level reflectance σ.sub.0_3 of the homogeneous area on the ground calculated based on the third SAR image is identical to the power level reflectance σ.sub.0_4 of the homogeneous area on the ground calculated based on the fourth SAR image.
[0135] The various embodiments described above are exemplary and do not need to be performed independently of each other. The embodiments described in this description may be implemented in combination.
[0136] The various embodiments described above may be implemented in the form of a computer program that is executable through various components on a computer, and such a computer program may be recorded in a computer-readable medium. The medium may continuously store the program executable by the computer, or may temporarily store the program for execution or download. The medium may also be various recording means or storage means in a form in which one or a plurality of pieces of hardware has been combined, and may be distributed over a network, not limited to a medium directly connected to a computer system. Examples of the medium may include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and media configured to store program instructions, such as ROM, RAM, flash memory, and the like. Furthermore, other examples of the medium may include an app store in which apps are distributed, a site in which other various pieces of software are supplied or distributed, and recording media and/or storage media managed in a server.
[0137] In this description, “unit”, “module”, etc. may be a hardware component such as a processor or circuit, and/or a software component executed by a hardware component such as a processor. For example, “unit”, “module”, etc. may be implemented by components such as software components, object-oriented software components, class components and task components, and processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, database, data structures, tables, arrays and variables.
[0138] The foregoing description of the disclosure is for purposes of illustration, and a person skilled in the art may easily modify and change the embodiments in various ways from the description without changing the technical ideas or essential features of the disclosure. Therefore, the embodiments described above are to be considered illustrative in all respects and are not intended to limit the disclosure. For example, each component described as singular may be distributed, and similarly, components described as distributed may also be combined.
[0139] According to the disclosure, it is possible to provide a new analysis method in consideration of the SAR image processing process with respect to the relation between the brightness value of the SAR image data and the reflectance of object. Also, it is possible to provide a method of calculating the power level reflectance (σ.sub.0, sigma naught) of an object on the ground from the SAR image data. According to the disclosure, it is possible to provide a method of generating the reflection coefficient K.sub.2 of the SAR image in consideration of the SAR image processing process for generating the SAR image.
[0140] It should be understood that embodiments described herein should be considered in a descriptive sense only and not for aims of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments. While one or more embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the following claims.