Low cost 3D radar imaging and 3D association method from low count linear arrays for all weather autonomous vehicle navigation
09739881 · 2017-08-22
CPC classification: G01S13/86 (Physics)
Abstract
A low cost, all weather, high definition RF radar system for an autonomous vehicle is described. The high definition RF radar system generates true target object data suitable for imaging, scene understanding, and all weather navigation of the autonomous vehicle. The high definition RF radar system includes a pair of independent orthogonal linear arrays. Data from both linear arrays is fed to a processor that performs data association to form true target detections and target positions. A Boolean association method for determining true target detections and target positions reduces many of the ghosts or incorrect detections that can produce image artifacts. The high definition RF radar system provides near optimal imaging in any dense scene for autonomous vehicle navigation, including during visually obscured weather conditions such as fog.
Claims
1. A high definition RF radar system for an autonomous vehicle, the system comprising: a first array and a second array, the first array comprising a first plurality of receive antenna elements provided by a first receive switch network and a first plurality of transmit antenna elements provided by a first transmit switch network, and the second array comprising a second plurality of receive antenna elements provided by a second receive switch network and a second plurality of transmit antenna elements provided by a second transmit switch network; a Boolean associator configured to receive data from the first array and the second array and determine that the received data represents a true target detection of an object, wherein the Boolean associator is configured to determine the true target detection and target position when a plurality of constraints are met, the plurality of constraints including at least one first type of constraint and at least one second type of constraint; and a scene imaging unit, the scene imaging unit being configured to receive a plurality of true target detections and target positions from the Boolean associator and provide at least one of object detection information and scene imaging information to one or more systems of the autonomous vehicle; wherein the first array is disposed on the autonomous vehicle with a first orientation such that the first plurality of receive antenna elements and the first plurality of transmit antenna elements are configured for detection in a first direction; wherein the second array is disposed on the autonomous vehicle with a second orientation such that the second plurality of receive antenna elements and the second plurality of transmit antenna elements are configured for detection in a second direction; and wherein the first array is oriented generally orthogonal to the second array.
2. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein the first type of constraint is a geometric constraint and wherein the second type of constraint is a detection constraint.
3. The high definition RF radar system for an autonomous vehicle according to claim 2, further comprising a first geometric constraint and a second geometric constraint; and wherein the Boolean associator is configured to determine the true target detection and the target position when: (1) the first geometric constraint is less than or equal to a first threshold value, (2) the second geometric constraint is less than or equal to a second threshold value, and (3) the detection constraint is greater than a third threshold value.
4. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein the first direction is azimuth and the second direction is elevation.
5. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein the first array is disposed on a roof of the autonomous vehicle; and wherein the second array is disposed on at least one support pillar of the autonomous vehicle.
6. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein at least one of the first array and the second array includes a plurality of sacrificial antenna elements; and wherein the sacrificial antenna elements are disposed on either side of the first plurality of receive antenna elements of the first array and/or the second plurality of receive antenna elements of the second array.
7. The high definition RF radar system for an autonomous vehicle according to claim 6, wherein the plurality of sacrificial antenna elements has a nominally matched impedance as the first plurality of receive antenna elements and/or the second plurality of receive antenna elements.
8. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein the scene imaging unit is configured to provide the at least one of object detection information and scene imaging information in adverse weather conditions including one or more of visual airborne and/or surface obscurants.
9. The high definition RF radar system for an autonomous vehicle according to claim 1, wherein the autonomous vehicle is configured to provide signals for at least one of steering, acceleration, and braking controls based on the at least one of object detection information and scene imaging information provided by the scene imaging unit.
10. The high definition RF radar system for an autonomous vehicle according to claim 1, further comprising at least one additional sensor providing object information data to the autonomous vehicle, the at least one additional sensor including one or more of a camera, Lidar, sonar, ultrasound, GPS, INS, wheel encoders, and at least one additional high definition RF radar system.
11. A method of providing scene imaging for an autonomous vehicle using a high definition RF radar system, the method comprising: transmitting a first RF beam from a first array, the first array comprising a first plurality of transmit antenna elements provided by a first transmit switch network; receiving data at the first array received from reflections of the first RF beam, the data being received by a first plurality of receive antenna elements provided by a first receive switch network; transmitting a second RF beam from a second array, wherein the second array is oriented generally orthogonal to the first array, the second array comprising a second plurality of transmit antenna elements provided by a second transmit switch network; receiving data at the second array received from reflections of the second RF beam, the data being received by a second plurality of receive antenna elements provided by a second receive switch network; associating the data from the first array and the second array using a Boolean associator applying a Boolean association method to the data, wherein the Boolean association method determines the data represents a true target detection and target position of an object when a plurality of constraints are met, the plurality of constraints including at least one first type of constraint and at least one second type of constraint; and providing at least one of object detection information and scene imaging information to one or more systems of the autonomous vehicle from a scene imaging unit, the scene imaging unit receiving a plurality of true target detections and target positions from the Boolean associator and combining the plurality of true target detections and target positions to form the at least one of object detection information and scene imaging information.
12. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, wherein the first type of constraint is a geometric constraint and wherein the second type of constraint is a detection constraint.
13. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 12, wherein the Boolean association method further comprises a first geometric constraint and a second geometric constraint; and wherein the Boolean associator determines the true target detection and target position when: (1) the first geometric constraint is less than or equal to a first threshold value, (2) the second geometric constraint is less than or equal to a second threshold value, and (3) the detection constraint is greater than a third threshold value.
14. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 13, wherein the Boolean associator determines the data represents a ghost or incorrect target detection when any of the first geometric constraint, the second geometric constraint, and/or the detection constraint are not met.
15. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, wherein the first RF beam is an azimuthal beam and the second RF beam is an elevational beam.
16. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, the method further comprising correcting mutual coupling of at least one of the first plurality of receive antenna elements of the first array and the second plurality of receive antenna elements of the second array by providing a plurality of sacrificial antenna elements in the at least one of the first array and/or the second array.
17. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 16, wherein the method of correcting mutual coupling includes providing the plurality of sacrificial antenna elements with a same impedance as the first plurality of receive antenna elements and/or the second plurality of receive antenna elements.
18. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, wherein the scene imaging unit provides the at least one of object detection information and scene imaging information in adverse weather conditions including one or more of visual airborne and/or surface obscurants.
19. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, wherein the autonomous vehicle provides signals for at least one of steering, acceleration, and braking controls based on the at least one of object detection information and scene imaging information provided by the scene imaging unit.
20. The method of providing scene imaging for an autonomous vehicle using a high definition RF radar system according to claim 11, further comprising fusing multiple object information data from the scene imaging unit and at least one additional sensor, the at least one additional sensor including one or more of a camera, Lidar, sonar, ultrasound, GPS, INS, wheel encoders, and at least one additional high definition RF radar system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
DETAILED DESCRIPTION
(31) A high-definition radio-frequency (RF) domain imaging sensor for use by autonomous vehicles is disclosed in the exemplary embodiments. In an exemplary embodiment, the high-definition RF domain imaging sensor can be a 4D (3D and Doppler velocity) high definition RF radar system that detects objects to image and/or interpret a scene in both visually clear and opaque weather, as well as when road surface and other features are obscured by precipitation or other adverse conditions. Real time filtered data obtained from the RF radar system also provides sufficient resolution for both localization and navigation of an autonomous vehicle in dense urban environments when GPS is unavailable or compromised. In some embodiments, the high-definition RF radar system's outputs can be further integrated with other traditional autonomous vehicle sensors, including, but not limited to: GPS/INS, cameras, Lidar, ultrasound, wheel encoders, and/or other known conventional vehicle sensors, to permit safe driving in foul weather and improve safe driving in benign weather conditions.
(32) The present embodiments are described with reference to an autonomous vehicle in the form of an autonomous self-driving car (ASDC). The embodiments described herein and the associated principles and methods can be applied to any level of vehicle automation, including, but not limited to any one or more of fully autonomous (Level 5), highly autonomous (Level 4), conditionally autonomous (Level 3), and/or partially autonomous (Level 2), as described in SAE International Standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, which is incorporated by reference in its entirety. The present embodiments can also be implemented in conventional vehicles to provide additional situational awareness information to a driver, including, but not limited to driver assistance (Level 1), and/or no automation (Level 0), as described in SAE J3016, referenced above.
(33) Moreover, while the exemplary embodiments are described with reference to an automotive vehicle, the embodiments described herein and the associated principles and methods can be applied to any type of autonomous vehicle, including, but not limited to: automobiles; trucks; fleet vehicles; heavy construction, mining, or earth moving vehicles; trains; off-road vehicles; factory or warehouse vehicles; as well as any other types of wheeled or tracked land vehicles. The present embodiments can also be applied to autonomous vehicles in other environments, including air and/or sea vehicles, including, but not limited to: UAVs or drones; gliders; airplanes; dirigibles, blimps, and/or zeppelins; other propelled or non-propelled flying or air vehicles; boats; ships; personal watercraft; hovercraft; submersible vessels; and other vehicles traveling in, on, and/or under water. The present embodiments can also be applied to toy vehicles, radio controlled vehicles, amusement rides, and/or other land, air, or sea vehicles regardless of size.
(34) An autonomous vehicle 100 including an exemplary embodiment of a high-definition RF radar system 110 is illustrated in
(35) It should be understood, however, that high-definition RF radar system 110 can be arranged at different locations on an autonomous subject vehicle to allow detection in any desired direction or orientation. Generally, the locations of arrays on the subject vehicle are chosen to minimize the effect of subject-vehicle structural scattering, air drag, vibration, and airborne particulate accumulation on the array's radome, and/or to maximize the array's field of view. For example, in different embodiments, first linear array 112 and second linear array 114 may instead be mounted on or in the windshield itself, one near the top edge, and another near the side edge. Alternatively, first linear array 112 and second linear array 114 may be similarly mounted on or around the rear or side windows for other embodiments detecting objects behind and/or to the side of the subject vehicle.
(36) In addition, in the exemplary embodiments, a pair of linear arrays is described. It should be understood, however, that the arrays may be arranged into any suitable configurations and need not be strictly linear. For example, the arrays may have different shapes to accommodate placement on a subject autonomous vehicle, including, but not limited to curved shapes, arced shapes, shapes conforming to vehicle surfaces and/or components, as well as various random or organized geometric or non-geometric sparse 2D arrangements for the arrays.
(37) In an exemplary embodiment, the pair of linear arrays, first linear array 112 and second linear array 114, are configured so that the RF beams of each array are generally orthogonal to each other. For example, first linear array 112 is oriented to have azimuthal beams and second linear array 114 is oriented to have elevation beams. In this embodiment, the azimuthal beam from first linear array 112 is generally orthogonal to the elevation beam from second linear array 114. The relationship between Cartesian coordinates (i.e., x-axis, y-axis, z-axis) and the coordinate system of the high-definition RF radar system 110, including first linear array 112 and second linear array 114, that uses range, azimuth, and elevation to determine location of a target T, is shown in
(38) Further, it should be noted that first linear array 112 is disposed in a generally horizontal orientation on roof 104 of autonomous vehicle 100 and is configured to transmit and receive in a generally azimuthal direction. Similarly, second linear array 114 is disposed in a generally vertical orientation on support pillar 106 of autonomous vehicle 100 and is configured to transmit and receive in a generally elevational direction. In this embodiment, the location and orientation of the pair of linear arrays is configured for forward-facing object detection and/or scene imaging. In other embodiments, different locations and orientations may be chosen based on one or more factors including, but not limited to object detection, scene imaging, type of autonomous vehicle, vehicle environment, and/or other relevant considerations to provide a desired configuration.
(39) Referring now to
(40) In some embodiments, outputs from 4D scene imaging unit 130 can be used by various subsystems of autonomous vehicle control unit 140, including, but not limited to a localization unit 142, a navigation unit 144, and/or a vehicle dynamics control unit 146, to control autonomous vehicle 100. For example, vehicle dynamics control unit 146 may be configured to operate one or more of steering, braking, acceleration, safety systems, obstacle avoidance, and/or lane-level guidance operations for the autonomous vehicle. It should be understood that the various subsystems of autonomous vehicle control unit 140 may be optional or different depending on the level of automation of the vehicle. Additionally, in the case of Level 1 or Level 0 automation, autonomous vehicle control unit 140 may be omitted and outputs from 4D scene imaging unit 130 can be shown on a display or transformed into an auditory signal used by a driver to manually control his or her vehicle.
(41) The high-definition RF radar system of the present embodiments operates at a longer wavelength than shorter-wavelength optical sensors, which are blinded by airborne obscurants. The high-definition RF radar system 110, comprising a pair of linear arrays 112, 114, actively illuminates a scene to penetrate fog, snow, rain, dust, smoke, smog, pollen, and/or other visual obscurants. Radio-frequency wavelengths are able to “see” much further in these adverse conditions compared to the shorter optical wavelengths. The received RF signal reflections from the scene are filtered by coordinated radar processor 120 to extract 3D position and velocity. The 4D scene imaging unit 130 builds a synthetic 3D image of the environment and includes velocity information for detected objects. Internally, the associated data from the pair of linear arrays comprises high-definition 4D information: the 3D position and velocity of each target detection. The target velocities, and their derivatives, contribute to tracking functions for estimating non-stationary object trajectories to improve situational awareness of the autonomous vehicle.
(42) Representation views of operating an autonomous vehicle 201 equipped with high definition RF radar system 110 in adverse weather conditions are shown from a bird's eye view in
(43) High definition RF radar system 110 of the present embodiments can penetrate visual obscurants that typically render optical sensors ineffective to allow autonomous vehicle 100 to “see” visual ground truth 200 in adverse weather conditions, for example, fog described above. Referring now to
(44)
(45) Additionally, high-definition RF detection scene 300 can also include information of the location and orientation of facades of buildings 210 disposed on either side of the road being travelled by autonomous vehicle 201. For example, as shown in
(46) For a high-definition RF radar system to guide an autonomous vehicle, there are unique challenges that must be overcome to realize a practical and low-cost design. First, consider the general need for localization accuracy and precision in an RF imaging sensor under opaque weather conditions, e.g. a fog or heavy snowfall which preclude optical sensors.
(47) Spatial resolution localization requirements for the ego-car or subject vehicle are considered for an urban street scenario with a 10 foot wide travel lane, a 7 foot wide ego-vehicle, and 6 inch high curbs. A cross-street localization accuracy of less than 1 foot would provide only a 0.5 foot margin. In the range axis, along the street, a 0.5 foot range accuracy requirement suggests that the subject vehicle maintain a margin of at least 1 foot to the nearest intersection or pedestrian crossing. Next consider some observations of traffic in the opposing lane and of curb objects. A double yellow line is about 1 foot wide. Detecting that an opposing vehicle has moved 1 foot on the cross-street axis at 250 feet would require an azimuth resolution of 4 milliradians. Similarly, detecting a change of 6 inches in height associated with a curb at 63 feet would require 8 milliradians of elevation resolution.
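The resolution figures above follow from the small-angle approximation (angle ≈ feature size divided by range); a minimal sketch, with an illustrative function name not taken from the patent:

```python
def required_angular_resolution(feature_size_ft, range_ft):
    """Small-angle approximation: the angular extent (radians) the radar
    must resolve to detect a feature of the given size at the given range."""
    return feature_size_ft / range_ft

# Opposing vehicle moving 1 foot cross-street, observed at 250 feet:
azimuth_mrad = 1000 * required_angular_resolution(1.0, 250.0)   # 4.0 mrad

# A 6-inch curb height change observed at 63 feet:
elevation_mrad = 1000 * required_angular_resolution(0.5, 63.0)  # ~7.9 mrad
```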
(48) Next consider the RF imaging sensor's bandwidth, carrier frequency, and aperture cross-section area. Since B = c/(2·dR), where B is the bandwidth, c is the speed of light, and dR is the range resolution, a 6 inch range resolution implies a minimum bandwidth of 1 GHz. With a 6 inch range resolution, a human separated by 1 foot in range from a stationary non-subject vehicle would be detectable. The 1 GHz bandwidth can be obtained at high frequency where the bandwidth is a small percentage of the carrier. While both the 24+ GHz and 77+ GHz bands are available for automotive RF imaging sensor use, the shorter wavelength associated with the 77 GHz band is preferable for several reasons: one, lower cost associated with commercial mmW transmitter and receiver MMICs available at 77 GHz; two, the smaller wavelength requires a smaller aperture cross-section area for the same beamwidth; three, a 3× increase in Doppler sensitivity at 77 GHz; and four, higher ERP allowed at 77 GHz.
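The bandwidth figure can be checked directly from B = c/(2·dR); a short sketch, assuming SI units:

```python
C = 299_792_458.0   # speed of light, m/s
FT_TO_M = 0.3048

def bandwidth_for_range_resolution(dR_m):
    """B = c / (2 * dR): minimum bandwidth for a given range resolution."""
    return C / (2.0 * dR_m)

# 6-inch (0.5 ft) range resolution:
B_hz = bandwidth_for_range_resolution(0.5 * FT_TO_M)   # ~0.98 GHz, i.e. ~1 GHz
```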
(49) High azimuth resolution is required to support an autonomous vehicle traveling with high probability of no collisions. On a narrow street with two-way traffic, accurate location of a non-subject vehicle is required.
(50)
(51)
(52) Conventional 2D electronically steered arrays (ESAs) have the capacity to form multiple simultaneous beams on transmission and reception. With simultaneous beams on receive, ESAs have the capacity, on a single look, to form a 3D image comprised of 3D voxels. A voxel is a unit of information representing a single 3D data point. In the present embodiments, voxels represent a 3D detection associated with a unique range, azimuthal angle alpha, and elevation angle phi of the steered received beam. An image can be generated from the detections in a perspective convenient for image processing. In other embodiments, voxels can be represented using units in other coordinate systems, depending on the intended environment for an autonomous vehicle. The conventional 2D ESA has no mechanical parts, giving improved reliability compared to an alternative mechanically scanned 1D linear array.
(53) A conventional 2D array that supports a 4 milliradian beamwidth in both azimuth and elevation can be both unacceptably large and extremely costly at 77 GHz. A fully populated 2D array would be 3.17 feet by 3.17 feet and have 246,016 (=496²) antenna elements at λ/2 spacing, along with their associated transceivers. Both the cost and the cross-section area of the large array face, greater than 10 square feet, are untenable for most automotive applications, as well as for many other types of autonomous vehicles.
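The quoted aperture size follows from λ/2 element spacing at the 77 GHz carrier; a hedged sketch (the patent's 3.17 ft figure rounds slightly differently than the value computed here):

```python
C = 299_792_458.0  # speed of light, m/s

def cla_2d_size(n_elements_per_side, carrier_hz):
    """Side length (m) and element count of a fully populated square 2D
    array with lambda/2 element spacing at the given carrier frequency."""
    lam = C / carrier_hz
    spacing = lam / 2.0
    side_m = (n_elements_per_side - 1) * spacing
    total_elements = n_elements_per_side ** 2
    return side_m, total_elements

side_m, total = cla_2d_size(496, 77e9)
side_ft = side_m / 0.3048       # ~3.16 ft per side; 246,016 elements total
```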
(54) In the present embodiments, instead of a dense large physical cross-section 2D antenna array, the high definition RF radar is split into two independent generally orthogonal 1D linear arrays. Further, in some embodiments, the linear arrays can be sparsely populated using the convolution principle. The present embodiments further describe a method to synthesize 3D data and velocities from these two generally orthogonal sparse 1D linear arrays to provide object detection and scene imaging for an autonomous vehicle.
(55) The high definition RF radar system of the present application reduces the aperture total area (and associated costs) by more than two orders of magnitude compared to a conventional 2D array. Instead of a fully populated 2D array, two generally orthogonally oriented sparse 1D linear arrays are employed. In one embodiment, shown in
(56) As described above, coordinated radar processor 120 fuses ranges, azimuthal angle, and elevation angle from both first linear array 112 and second linear array 114 to form 3D voxels and extract the 3D velocity vector for each target detection. The acquired data from multiple target detections (i.e., a plurality of 3D voxels and associated velocities) is then used by 4D scene imaging unit 130 to provide object detection and/or scene imaging to a driver and/or autonomous vehicle systems. The association method used by coordinated radar processor 120 to correctly form 3D data from 2D azimuth and elevation data received from first linear array 112 and second linear array 114 is described more fully further below.
(57)
(58) One consideration for autonomous vehicle sensors in the automotive environment is the near-field ranges that a high definition RF radar system will see. The near-field is defined as ranges less than 2d²/λ, where d is the aperture span and λ is the RF carrier wavelength. At 77.5 GHz with an aperture dimension of 3.17 feet, the near-field extends to about 1580 feet. In the very near-field, the received wavefront becomes spherical. Thus spatial beam forming across the radar's arrays is no longer just a function of direction of arrival, but also a function of range.
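The near-field boundary quoted above follows from the Fraunhofer criterion 2d²/λ; a quick check, assuming a 3.17 ft aperture at 77.5 GHz:

```python
C = 299_792_458.0  # speed of light, m/s

def near_field_boundary_m(aperture_m, carrier_hz):
    """Fraunhofer criterion: ranges closer than 2*d^2/lambda are near-field."""
    lam = C / carrier_hz
    return 2.0 * aperture_m ** 2 / lam

d_m = 3.17 * 0.3048                                          # aperture in meters
boundary_ft = near_field_boundary_m(d_m, 77.5e9) / 0.3048    # roughly 1580 ft
```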
(59) Both the antenna architecture and 3D data association methods discussed further below are applicable to any radar with a waveform that satisfies the required bandwidth. The waveform could be pulsed or linear-frequency-modulated continuous-wave (LFMCW). In the present embodiments, the transmit radar waveform is assumed to be LFMCW. The matched filter receiver dechirps the linear-frequency modulated signal with a coherent frequency offset copy of the transmit waveform. The resulting intermediate frequency (IF) signal bandwidth is very low and proportional to the range offset of the target plus its maximum radial velocity in Hz. In the automotive environment, with a reasonable sized aperture for the array, the range is almost always near-field with automotive speeds resulting in a very small baseband bandwidth.
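For an LFMCW stretch receiver, the dechirped IF is the beat frequency set by target range and radial velocity; a sketch of the standard relation (the numeric parameter values below are illustrative, not taken from the patent):

```python
C = 299_792_458.0  # speed of light, m/s

def dechirp_if_hz(range_m, chirp_bandwidth_hz, chirp_duration_s,
                  radial_velocity_mps=0.0, carrier_hz=77e9):
    """Beat (IF) frequency after dechirp: range term 2*R*k/c (chirp slope
    k = B/T) plus the Doppler shift 2*v*f_c/c."""
    slope = chirp_bandwidth_hz / chirp_duration_s
    f_range = 2.0 * range_m * slope / C
    f_doppler = 2.0 * radial_velocity_mps * carrier_hz / C
    return f_range + f_doppler

# Target at 100 m, closing at 30 m/s, 1 GHz chirp swept over 1 ms:
f_if = dechirp_if_hz(100.0, 1e9, 1e-3, 30.0)   # a few hundred kHz
```

Note how small the IF bandwidth is compared to the radiated 1 GHz chirp, which is what allows the low-cost ADC architecture described later.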
(60)
(61) It should be understood that linear arrays according to the principles described herein can be made with different dimensions and corresponding resolutions. In the present application, the proposed architectures for the RF radar arrays are based on providing similar 5-30 milliradian beamwidths equivalent to Lidar systems conventionally used in automotive systems. However, the sizes of array apertures, numbers of antenna elements, and/or beamwidths can be larger or smaller, with a corresponding increase or decrease in sensor resolution, as desired depending on the type of autonomous vehicle, environment, and/or other considerations for a greater or lesser resolution RF radar system, as would be obvious to one of ordinary skill in the art.
(62) Referring now to
(63) To lower the costs of a linear array, the convolution property between two smaller arrays, one transmit and one receive, can be employed. Let w_T(x) and w_R(x) represent the transmit and receive element weighting as a function of position along each respective array. The two-way response is given by the convolution of the transmit and receive array weighting functions:

w(x) = (w_T * w_R)(x) = ∫ w_T(u) w_R(x−u) du  (Equation 1a)

(67) The far-field pattern as a function of θ (relative to boresight) is the product of the individual transmit and receive patterns:

F(θ) = W_T(θ)·W_R(θ)  (Equation 1b)

where W_T(θ) and W_R(θ) are the far-field patterns corresponding to the weightings w_T and w_R.
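The equivalence between convolving the array weightings (Equation 1a) and multiplying their far-field patterns (Equation 1b) is the convolution theorem; a minimal pure-Python check with toy weightings and a naive DFT:

```python
import cmath

def dft(x, n):
    """n-point DFT of a real sequence, zero-padded to length n."""
    x = list(x) + [0.0] * (n - len(x))
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                for k in range(n)) for m in range(n)]

def convolve(a, b):
    """Linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

w_t = [1.0, 1.0, 1.0]           # toy transmit weighting
w_r = [1.0, 0.5, 0.25, 0.125]   # toy receive weighting
n = len(w_t) + len(w_r) - 1     # pad so circular == linear convolution

two_way = dft(convolve(w_t, w_r), n)                          # DFT of convolution
product = [a * b for a, b in zip(dft(w_t, n), dft(w_r, n))]   # product of DFTs
# two_way and product agree to numerical precision
```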
(69) The implication of Equations 1a and 1b is that the CLA of N_CLA receive antenna elements, whose aperture dimension is D_CLA, can be realized as the product of a transmit array of length D_T with N_T transmit elements and a receive array of length D_R with N_R receive elements, where:

N_CLA = N_T·N_R
d_R = λ/2
D_R = (N_R − 1)·d_R
D_T = (N_T − 1)·d_T
d_T = D_R
D_T = (N_T − 1)(N_R − 1)·d_R
Effective Aperture = D_T + D_R = N_T·(N_R − 1)·d_R
(70) Thus an equivalent 496 element CLA realization with a convolution array (CVA) has only 46 elements: a 23 element transmit array and a 23 element receive array. By contrast, the expensive alternative is a 2D CLA of 496² (246,016) total elements. The CVA can be configured as two separate linear arrays (one horizontal and one vertical), each employing convolution. This architecture results in a reduction in the number of transmit and receive elements of more than three orders of magnitude, from 246,016 (=496²) to 92 (=46×2) total elements.
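The element-count savings can be reproduced from the relations above; a sketch choosing N_T = N_R = ⌈√N_CLA⌉ as in the text (note that 23×23 = 529 virtual pairs slightly exceeds, and therefore covers, the 496-element requirement):

```python
import math

def cva_element_counts(n_cla):
    """Physical and virtual element counts for a convolution array whose
    virtual aperture covers an n_cla-element conventional linear array."""
    n_t = n_r = math.ceil(math.sqrt(n_cla))   # N_T = N_R = ceil(sqrt(N_CLA))
    physical = n_t + n_r                      # separate transmit + receive elements
    virtual = n_t * n_r                       # unique (T, R) pairs
    return n_t, n_r, physical, virtual

n_t, n_r, physical, virtual = cva_element_counts(496)
two_axis_physical = 2 * physical   # two orthogonal linear arrays: 92 elements
full_2d = 496 ** 2                 # 246,016 elements for a fully populated 2D CLA
```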
(71) One architecture of a CVA using the convolution property in practice can be a multi-static time multiplexed radar with N_T separate transmissions and N_R separate receivers. There are N_T·N_R unique transmission and reception pairs, each with its own round trip path length to a unique scatterer in the far-field. There is a unique mapping between the RF phase differences of each unique (T, R) pair in the sparse CVA and the same phase differences found in the fully populated CLA. Thus, in the far-field, the convolution array has a phase gradient across its virtual N_T·N_R array elements that is equivalent to that found in the CLA, with N_CLA = N_T·N_R.
(72) An alternate architecture for a CVA with a time efficient realization uses code division multiple access (CDMA), so that there are N_T simultaneous transmissions with N_T orthogonal codes. On reception at each of the N_R separate receivers, a bank of N_T matched filters separates the N_T orthogonal codes. This is followed by association of the N_T·N_R unique transmission and reception pairs, each with its own round trip path length to a unique scatterer in the far-field. Subsequent processing mirrors the time domain method.
(73) Generally, a low cost CVA architecture can be obtained by selecting the numbers of transmit and receive elements as N_T = N_R = √N_CLA, under the assumption that the RF switch costs, in dollars or performance or both, are much less than the non-RF switch costs. An exemplary embodiment of a convolution array 900 that may be used with a high-definition RF radar system, including high-definition RF radar system 110, is shown in
(74) In one embodiment, convolution array 900 includes receiver and analog-to-digital converter module 901 using an N.sub.R switch network 910 to provide a plurality of time multiplexed receive antenna elements 912. Similarly, convolution array 900 also includes transmitter module 903 using an N.sub.T switch network 911 to provide a plurality of time multiplexed transmit antenna elements 913. Convolution array 900 performs the transmission operation using RF time-switching among transmit elements 913 and even faster switching of receiver elements 912. The single transmitter module 903 generates an LFM chirp, which constitutes an effective pulse of duration τ.sub.p. This effective transmit pulse is divided into N.sub.T smaller sub-pulses whose period is
(75) τ.sub.pT=τ.sub.p/N.sub.T
Alternatively, the transmit sub-pulse period is
(76) τ.sub.pT=τ.sub.p/(qN.sub.T)
where q repetitions are taken across the plurality of transmit elements 913 during one chirp (i.e., q is an integer greater than 1). With q=1, transmit elements 913 are RF-multiplexed one per τ.sub.pT, whereas receive elements 912 are RF-multiplexed with a still shorter sub-pulse period of
(77) τ.sub.pR=τ.sub.pT/(rN.sub.R)
(78) In this embodiment, both of these RF multiplex rates can be made much smaller than the radiated chirp bandwidth because, although the stretch operation follows all of this multiplexing, the post-stretch bandwidth affords a much reduced Nyquist rate, which the RF multiplexing rate easily satisfies. On reception, the N.sub.R receive antenna elements 912 are time multiplexed at the faster rate so that all receive elements 912 observe a single transmit sub-pulse r times, where r is an integer chosen to satisfy the Nyquist post-stretch received signal bandwidth. Thus, the aggregate switch rate on receive is
(79) rN.sub.R/τ.sub.pT
with corresponding demands on the ADC bandwidth. With this architecture, convolution array 900 comprises a single transmitter module 903, a single receiver module 901, fast switches 910, 911, and a low cost high speed ADC (integrated with module 901).
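The time-multiplex arithmetic above can be sketched with a few lines of Python. This is a minimal illustration, assuming the sub-pulse relationships implied by the surrounding text (transmit sub-pulse period τ.sub.p/(qN.sub.T), receive sub-pulse period τ.sub.pT/(rN.sub.R), and aggregate receive switch rate rN.sub.R/τ.sub.pT); the numeric chirp duration and element counts are hypothetical, not taken from the patent.

```python
import math

def cva_timing(tau_p, n_t, n_r, q=1, r=1):
    """Illustrative sub-pulse timing for a time-multiplexed CVA.

    tau_p : effective chirp (pulse) duration in seconds
    n_t, n_r : number of transmit / receive antenna elements
    q : repetitions across the transmit elements during one chirp
    r : receive observations per transmit sub-pulse
    """
    tau_pt = tau_p / (q * n_t)      # transmit sub-pulse period
    tau_pr = tau_pt / (r * n_r)     # receive sub-pulse period (faster)
    rx_switch_rate = 1.0 / tau_pr   # aggregate receive switch rate
    return tau_pt, tau_pr, rx_switch_rate

# Hypothetical example: 30 us chirp, N_T = N_R = 8 (N_CLA = 64 virtual elements)
tau_pt, tau_pr, rate = cva_timing(30e-6, 8, 8)
```

For these illustrative values the receive side switches at roughly 2.1 MHz, far below the radiated chirp bandwidth, consistent with the observation that the post-stretch Nyquist rate is easily satisfied.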
(80) In addition to the architecture of convolution array 900 shown in
(81) The architecture of convolution array 900 shown in
(82) In this alternate embodiment, low-loss convolution array 1000 comprises a plurality of N.sub.T transmitters 1003 located as close as possible to a plurality of transmit antenna elements 1012, and a plurality of N.sub.R receivers 1001 are located as close as possible to a plurality of receive antenna elements 1010. In this embodiment, low-loss convolution array 1000 also includes a direct digital synthesis, local oscillator (LO) 1002 and a power divider 1004, which may be substantially similar to local oscillator 902 and power divider 904, described above. With this arrangement, low-loss convolution array 1000 with plurality of receivers 1001 having plurality of receive antenna elements 1010 closely located and plurality of transmitters 1003 having plurality of transmit antenna elements 1012 closely located provides a CVA architecture with lower noise than convolution array 900. Accordingly, in other embodiments, low-loss convolution array 1000 may be used in a high-definition RF radar system for autonomous vehicles that have stricter requirements for noise thresholds.
(83) The CVA architecture is flexible: the N.sub.R and N.sub.T quantities, subject to the constraint N.sub.CLA=N.sub.RN.sub.T, can be optimized to minimize a metric that includes RF performance, component costs, and manufacturing costs.
(84) Referring now to
(85) Referring now to
(86) As previously described with reference to
(87) Time multiplexed convolution arrays, described above, rest on several assumptions. The first is that during the collection period of the N.sub.TN.sub.R unique transmission and reception pairs, the illuminated scene is stationary. If the N.sub.TN.sub.R time multiplexing completes in the period of a nominal radar pulse of approximately 20-50 microseconds, the relative radial velocity components in automotive applications are slow, resulting in only a degree or two of carrier phase change in the 77-80 GHz band. This would be comparable to the phase spread that would be found in an equivalent CLA.
(88) Another assumption is that the virtual N.sub.TN.sub.R array created by convolution has the same impedance characteristics as the CLA. In an array of elements, the impedance Z(jw), of a selected antenna element varies by mutual coupling with other nearby antenna elements. The impedance matrix for an N element array is:
(89) Z=[Z.sub.11+Z.sub.o Z.sub.12 . . . Z.sub.1N; Z.sub.21 Z.sub.22+Z.sub.o . . . Z.sub.2N; . . . ; Z.sub.N1 Z.sub.N2 . . . Z.sub.NN+Z.sub.o]
(90) where Z.sub.kk represents self-impedance of the antenna k, and Z.sub.kj represents the mutual impedance between antenna k and j, and Z.sub.o is the terminating load. The net impedance seen at the p.sup.th antenna terminal is:
(91) Z.sub.p=Σ.sub.m=1.sup.N Z.sub.pm(I.sub.m/I.sub.p)
(92) where I.sub.m is the current into the terminal of the m.sup.th antenna element.
(93) The relative magnitude and phase contributions to the mutual impedances, Z.sub.kj, are inversely related to the distance between elements k and j. Thus, elements in the center of a long linear array experience similar coupling to one another, in contrast to elements at the array ends.
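The net (active) impedance relationship can be illustrated with a short sketch, assuming the standard active-impedance form Z.sub.p=Σ.sub.m Z.sub.pm(I.sub.m/I.sub.p) implied by the text. The toy impedance matrix below is hypothetical; it only encodes the stated behavior that coupling falls off with element separation, so a center element sees more aggregate coupling than an edge element.

```python
def net_impedance(Z, I, p):
    """Net impedance at the p-th antenna terminal:
    Z_p = sum_m Z[p][m] * I[m] / I[p],
    where I[m] is the terminal current of the m-th element."""
    return sum(Z[p][m] * I[m] / I[p] for m in range(len(I)))

# Toy 3-element array: self-impedance on the diagonal, mutual
# impedance decreasing with element distance (hypothetical values).
Z = [[50 + 0j, 10 - 5j,  2 - 1j],
     [10 - 5j, 50 + 0j, 10 - 5j],
     [ 2 - 1j, 10 - 5j, 50 + 0j]]
I = [1 + 0j, 1 + 0j, 1 + 0j]   # uniform excitation

center = net_impedance(Z, I, 1)  # flanked by two near neighbors
edge = net_impedance(Z, I, 0)    # one near, one far neighbor
```

The center element's net impedance deviates further from its 50-ohm self-impedance than the edge element's does, which is the edge-effect asymmetry the sacrificial-element approach later addresses.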
(94) For example, consider a CVA with mutual coupling. Let the CVA comprise an N.sub.T=51 element transmit array and an N.sub.R=5 element receive array with d.sub.R=λ/2 spacing. Compared to the CLA, the smaller CVA receive array of only 5 elements has increased mutual coupling since most of its elements are near the edges of the array. As a result, the virtual array of N.sub.TN.sub.R elements has a mutual coupling function that is periodic in N.sub.R elements. The result is a sidelobe increase in the beam pattern compared to the CLA, thereby distorting the field in the CVA.
(95) There are several approaches to reduce the mutual coupling distortion effect in CVAs. The first approach attempts to directly compensate for mutual coupling by signal processing alone. In this approach, let V.sub.MC be the measured voltages at each antenna terminal in the presence of mutual coupling. If V.sub.NC represents the ideal voltages that would be measured in the absence of mutual coupling, then a coupling matrix, C, maps the two domains,
V.sub.MC=CV.sub.NC Equation (2)
(96) Compensation is accomplished by V.sub.NC=C.sup.−1V.sub.MC. However, in practice, there are multiple problems with this approach. First, the practical solution of C is complicated in the near-field since C is a function of Z and the spatial position of the unknown scatterer at (R, α, φ). The latter is easily understood from the array's polarization spatial response in the near-field. Second, most approaches for estimating C without additional hardware assume that the array itself is in free space. In the automotive environment, however, the presence of structural scattering from high dielectric materials, such as an automobile roof (e.g., roof 104 shown in
(97) In an exemplary embodiment, artifacts of mutual coupling can be minimized by using a convolution array 1400 that includes a plurality of sacrificial antenna elements. Referring now to
(98) As compared to previous CVA embodiments, however, convolution array 1400 also includes sacrificial antenna elements disposed on both sides of the receive antenna N.sub.R elements 1412 to minimize the artifacts caused by mutual coupling. As shown in
(99) A representative comparison of the sidelobe response of a CVA with and without sacrificial elements around the receive array, for example, convolution array 1400 compared to convolution array 900, is illustrated in
(100) As previously described above, the present embodiments of high definition RF radar system 110 use a coordinated radar processor 120 to receive data inputs from first linear array 112 and second linear array 114 and perform an association method to determine whether the data indicates a true target detection or a ghost (i.e., a false detection). In these embodiments, first linear array 112 and second linear array 114 are non-coherent. This arrangement allows each linear array to be separately and independently positioned with respect to the other without requiring a coherent, networked transmitter between each array. The absence of coherence reduces costs and has an advantage in not requiring inter-array calibration, especially in high vibration environments.
(101) Association methods for coherent, networked 3D arrays are known in the art. However, an efficient, low false target association method to form 3D data from “incomplete” 2D non-coherent arrays in the presence of spatial quantization noise is lacking. The known association methods do not address the problem of “ghost” or false target formation that develops in the association process with incomplete 2D arrays. Ghost formation is a function of spatial resolution, the non-linear discriminant function to be described, and spatial noise. Accordingly, ghost formation is a challenging problem in dense scenes (e.g., scenes containing multiple targets) such as those encountered by autonomous vehicles.
(102) Generally, two orthogonal 2D arrays can be referred to as a “Boolean” pair. The Boolean pair is non-coherent with respect to each other. The association across the Boolean pair to form a 3D voxel measurement and detection can occur both in fast-time (single pulse), or in slow-time (multiple pulses with Doppler). As will be described in more detail below, exemplary embodiments of coordinated radar processor 120 provide for Boolean fast-time data association with low ghost or false target generation in a high definition RF radar system 110 for an autonomous vehicle.
(103) In some embodiments, coordinated radar processor 120 provides a geometry based method to correctly associate 3D data from two independent orthogonally oriented 2D arrays. This data association method applies to both conventional linear arrays and convolution arrays, including, but not limited to CLA 800, as well as one or more of CVA 900, CVA 1000, CVA 1100, and/or CVA 1400, described above.
(104) First, consider two independent orthogonally oriented arrays deploying radar waveforms that are also orthogonal in time, frequency, and/or code space. The transmission beam for each array can cover a wide field of view in both azimuth and elevation. On receive, the same field of view is fully sampled with simultaneous receive beams formed in the digital processor.
(105) Generally, a given transmit phase-center can be realized by selecting a subset of available transmit elements in a CLA, and this phase center is further determined by applying an arbitrary transmit weighting function to the beamformer. Likewise, a given receive phase center can be realized by selecting a subset of available receive elements in a CLA and also by applying an arbitrary receive weighting function to the beamformer. The present embodiments of coordinated radar processor 120 use an association method that applies to any selected transmit phase-center and receive phase-center, as well as to combinations of varied phase centers implemented via time-sequential transmission elements or via simultaneous, orthogonally encoded transmission elements.
(106) Beam steering and range alignment are both necessary and accomplished in either 2D CVA or CLA linear arrays as follows. A particular two-way range hypothesis, R.sub.2way, and a particular narrow beam direction hypothesis, for example the receiver beam angle, α, are selected. Any and all time-sequential transmissions among individual array elements of a CLA, or encoded simultaneous transmissions among individual CLA array elements, can be accommodated by an exemplary embodiment of Boolean associator 124 (shown in
(107) Now, in the case of a CLA Boolean pair array, the elevation Boolean partner CLA can accomplish these same tasks by similar operations. Thus an azimuth CLA range and angle steering hypothesis leads to a particular range bin and a particular azimuth beam angle. And this one azimuth angle-range pair among N such implemented pairs spanning a given volume must be associated with any and all similar elevation angle-range pairs among M such implemented pairs from the elevation Boolean array. The objective is to find only those azimuth pairs that correctly associate with elevation pairs because such associations are defined to be correct when they are actually produced by objects (or more generally by resolved scatterers of objects). This objective leads to a high performance Boolean associator, and near optimal, resolution-limited, imaging in any dense scene.
(108) For the two orthogonal arrays, the 3D geometry based data association problem is how to correctly associate a quantized measurement 2D data pair (range.sub.1, angle.sub.1) from a first array with a 2D data pair (range.sub.2, angle.sub.2) from a second array, culminating in accepting the detection of a 3D true target and not a mis-association. Incorrect pair mis-associations can generate false or “ghost” targets where real targets are absent. If N target detections are found by the first array and M by the second array, there are NM association tests to be performed. In dense scenes, for example, an urban environment with multiple objects/potential targets, N and M could each easily reach four orders of magnitude. The generation of even a small percentage of false or “ghost” targets can severely degrade the resulting 3D radar images and burden downstream processing, with severe consequences for scene interpretation and safe navigation by autonomous vehicles. For example, the autonomous vehicle may incorrectly react to a mis-associated “ghost” target and/or may fail to react to a real or true target, due to the effects of such ghost detections.
(109) Referring now to
(110) For a given target P 1600 located at (x.sub.t, y.sub.t, z.sub.t), let (R.sub.AZ, α.sub.AZ) be the corresponding analog range 1601 and azimuthal angle 1603 relative to the horizontal aperture of first array 112, and (R.sub.EL, φ.sub.EL), be the corresponding analog range 1602 and elevation angle 1604 relative to the vertical aperture of second array 114. The target and aperture geometries as shown in
x.sub.t.sup.2+y.sub.t.sup.2+z.sub.t.sup.2=R.sub.AZ.sup.2 Equation (3a)
(x.sub.t−dx).sup.2+(y.sub.t−dy).sup.2+(z.sub.t−dz).sup.2=R.sub.EL.sup.2 Equation (3b)
z.sub.t−dz=R.sub.EL sin φ.sub.EL Equation (3c)
y.sub.t=R.sub.AZ sin α.sub.AZ Equation (3d)
(111) The first array 112 and second array 114 report quantized measurements of (R.sub.AZQ, α.sub.AZQ) and (R.sub.ELQ, φ.sub.ELQ). There is a corresponding estimated quantized target position (x.sub.tQ, y.sub.tQ, z.sub.tQ) of target 1600 that may be found from:
x.sub.tQ.sup.2+y.sub.tQ.sup.2+z.sub.tQ.sup.2=R.sub.AZQ.sup.2+err.sub.1 Equation (4a)
(x.sub.tQ−dx).sup.2+(y.sub.tQ−dy).sup.2+(z.sub.tQ−dz).sup.2=R.sub.ELQ.sup.2+err.sub.2 Equation (4b)
z.sub.tQ−dz=R.sub.ELQ sin φ.sub.ELQ+err.sub.3 Equation (4c)
y.sub.tQ=R.sub.AZQ sin α.sub.AZQ+err.sub.4 Equation (4d)
(112) where err.sub.k represents the k.sup.th unknown error, including the spatial quantization noise. One approach for association is to numerically search for an estimated quantized target position that minimizes the sum of the squared errors,
Cost=min.sub.(x.sub.tQ.sub.,y.sub.tQ.sub.,z.sub.tQ.sub.) Σ.sub.kγ.sub.k err.sub.k.sup.2 Equation (5a)
(113) where γ.sub.k are relative weights, and err.sub.k is from Equations (4a-4d),
err.sub.1=x.sub.tQ.sup.2+y.sub.tQ.sup.2+z.sub.tQ.sup.2−R.sub.AZQ.sup.2 Equation (5b)
err.sub.2=(x.sub.tQ−dx).sup.2+(y.sub.tQ−dy).sup.2+(z.sub.tQ−dz).sup.2−R.sub.ELQ.sup.2 Equation (5c)
err.sub.3=z.sub.tQ−dz−R.sub.ELQ sin φ.sub.ELQ Equation (5d)
err.sub.4=y.sub.tQ−R.sub.AZQ sin α.sub.AZQ Equation (5e)
(114) One method determines association between (R.sub.AZQ, α.sub.AZQ) and (R.sub.ELQ, φ.sub.ELQ) if the Cost determined in Equation (5a) is less than a predetermined threshold, with the threshold being a function of the desired resolution and corresponding accuracy. However, this method incurs a costly time penalty to perform the 3D numerical search over (x.sub.tQ, y.sub.tQ, z.sub.tQ), with the total cost growing as the product of the numbers of detections in the generally orthogonal arrays.
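The error terms of Equations (5b-5e) and their weighted squared sum are straightforward to evaluate for a single candidate position. The sketch below implements just the cost function (not the costly 3D search); the target position, array offsets, and weights are hypothetical.

```python
import math

def association_cost(pos, r_azq, a_azq, r_elq, p_elq, offs,
                     gamma=(1.0, 1.0, 1.0, 1.0)):
    """Weighted squared-error cost (Equation 5a) for one candidate
    quantized target position pos = (x, y, z)."""
    x, y, z = pos
    dx, dy, dz = offs
    err1 = x*x + y*y + z*z - r_azq**2                           # Eq (5b)
    err2 = (x-dx)**2 + (y-dy)**2 + (z-dz)**2 - r_elq**2         # Eq (5c)
    err3 = (z - dz) - r_elq * math.sin(p_elq)                   # Eq (5d)
    err4 = y - r_azq * math.sin(a_azq)                          # Eq (5e)
    return sum(g * e * e for g, e in zip(gamma, (err1, err2, err3, err4)))

# Noise-free measurements of a hypothetical target at (10, 2, 1),
# with the elevation array offset by (0.5, 0.1, 0.3)
offs = (0.5, 0.1, 0.3)
r_az = math.sqrt(105.0); a_az = math.asin(2.0 / r_az)
r_el = math.sqrt(94.35); p_el = math.asin(0.7 / r_el)
cost_true = association_cost((10, 2, 1), r_az, a_az, r_el, p_el, offs)
cost_false = association_cost((10, 2, 5), r_az, a_az, r_el, p_el, offs)
```

The cost vanishes at the true target position and grows rapidly at an incorrect one; the expense lies in repeating this evaluation over a dense 3D grid for every candidate detection pair.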
(115) One alternative approach employed in the prior art is to perform association with range only. For the phase incoherent pair of radar arrays (e.g., first array 112 and second array 114) illustrated in
|R.sub.AZQ−R.sub.ELQ|≦GThreshold.sub.1: Association Equation (6a)
|R.sub.AZQ−R.sub.ELQ|>GThreshold.sub.1: No Association Equation (6b)
where,
GThreshold.sub.1=μ√{square root over (dx.sup.2+dy.sup.2+dz.sup.2)} Equation (6c)
(116) and μ is a scalar weighting term.
(117) One problem with this above-described legacy association method, however, is that it is far too lenient with respect to ghosts or false detections, especially when the spatial distance given by the square root in Equation (6c) is large with respect to resolution, notably range. The leniency of this legacy association method generates many ghosts, which can burden the downstream image processing and further generate blurred or incorrect images that degrade the localization and situational awareness for autonomous vehicles. These degradations can be severe and devastating to imaging and estimation functions and cause problems for navigation and/or control of the autonomous vehicle.
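The leniency of the range-only legacy test can be seen in a few lines. This sketch implements Equations (6a-6c) directly; the baseline offsets and ranges are hypothetical, and the point is that any two detections, related or not, associate whenever their measured ranges happen to agree to within the scaled baseline.

```python
import math

def legacy_associate(r_azq, r_elq, offs, mu=1.0):
    """Legacy range-only test of Equations (6a-6c): associate whenever the
    measured ranges differ by less than mu times the inter-array baseline."""
    dx, dy, dz = offs
    g_threshold1 = mu * math.sqrt(dx*dx + dy*dy + dz*dz)   # Eq (6c)
    return abs(r_azq - r_elq) <= g_threshold1              # Eq (6a)/(6b)

# Hypothetical ~0.59 m baseline: two completely unrelated detections whose
# ranges agree to within that distance are (wrongly) associated as a ghost.
offs = (0.5, 0.1, 0.3)
ghost = legacy_associate(20.0, 20.3, offs)
```

Because the test uses no angular information at all, in a dense scene every azimuth detection near a given range shell associates with every elevation detection near that shell, which is exactly the ghost-multiplication behavior described above.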
(118) According to an exemplary embodiment, Boolean associator 124 uses a new Boolean association method that is time efficient and generates far fewer ghosts, with improved image quality in the presence of spatial quantization noise. This Boolean association method requires two types of constraints to be satisfied simultaneously: first, geometric constraints; and second, a detection constraint. Thus, Boolean associator 124 is configured to compare the data from the pair of linear arrays, for example, first linear array 112 and second linear array 114, and declare a correct or true target association only if both (a) the geometric constraints and (b) the detection constraint are met. The Boolean association method used by Boolean associator 124 will be further described in more detail in relation to Equations (7a-7c), (8a-8c), (9a-9b), and (10) below. First, the geometric constraints and their associated thresholds will be discussed.
(119) In an exemplary embodiment, the Boolean association method directly calculates two separate estimates of the quantized target x-coordinate position. A metric that is a function of the Euclidean distance between the two quantized target positions is compared to a threshold to declare association. In this embodiment, the metric is an association discriminant formed as the Euclidean distance between the two quantized target 3D positions, with a certain normalization based on the one-way measured ranges from each array to the common object (in the case of correct association) or to a mis-associated ghost (otherwise). This association discriminant is compared to a threshold to declare a potential correct or true target association versus an incorrect or ghost association.
(120) The process for generating two separate estimates of quantized target positions is described next. Equations (4a-4d) above are rewritten, without the error terms, as two equation sets, each containing three equations in three unknown target position components. Equation set I is:
(x.sub.tQ1−dx).sup.2+(y.sub.tQ1−dy).sup.2+(z.sub.tQ1−dz).sup.2=R.sub.ELQ.sup.2 Equation (7a)
z.sub.tQ1−dz=R.sub.ELQ sin φ.sub.ELQ Equation (7b)
y.sub.tQ1=R.sub.AZQ sin α.sub.AZQ Equation (7c)
(121) and equation set II is:
x.sub.tQ2.sup.2+y.sub.tQ2.sup.2+z.sub.tQ2.sup.2=R.sub.AZQ.sup.2 Equation (8a)
z.sub.tQ2−dz=R.sub.ELQ sin φ.sub.ELQ Equation (8b)
y.sub.tQ2=R.sub.AZQ sin α.sub.AZQ Equation (8c)
(122) By construction, the spatial quantization noise is embedded in the quantized target positions. The first estimated target position, [x.sub.tQ1 y.sub.tQ1 z.sub.tQ1], is obtained by analytic solution of Equations (7a-7c), and the second estimated target position [x.sub.tQ2 y.sub.tQ2 z.sub.tQ2], is similarly obtained from Equations (8a-8c). Target position solutions that contain non-zero imaginary components represent invalid associations and are pre-declared as “No Association” (i.e., ghosts).
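The two analytic solutions can be sketched compactly: equation sets I and II share the y and z components (Equations 7b-7c and 8b-8c), and differ only in how the x component is recovered, from the elevation range in set I versus the azimuth range in set II. The positive-x (forward-looking) root is an assumption of this sketch, and the target/offset values are hypothetical.

```python
import math

def solve_sets(r_azq, a_azq, r_elq, p_elq, offs):
    """Analytic solutions of equation set I (7a-7c) and set II (8a-8c).
    A negative radicand corresponds to a complex-valued solution and is
    pre-declared 'No Association' (returned as None)."""
    dx, dy, dz = offs
    y = r_azq * math.sin(a_azq)                   # Eq (7c) and (8c)
    z = dz + r_elq * math.sin(p_elq)              # Eq (7b) and (8b)
    rad1 = r_elq**2 - (y - dy)**2 - (z - dz)**2   # from Eq (7a)
    rad2 = r_azq**2 - y**2 - z**2                 # from Eq (8a)
    if rad1 < 0 or rad2 < 0:
        return None                               # imaginary component: ghost
    return (dx + math.sqrt(rad1), y, z), (math.sqrt(rad2), y, z)

# Noise-free measurements of a hypothetical target at (10, 2, 1),
# elevation array offset by (0.5, 0.1, 0.3): both estimates must agree.
offs = (0.5, 0.1, 0.3)
r_az = math.sqrt(105.0); a_az = math.asin(2.0 / r_az)
r_el = math.sqrt(94.35); p_el = math.asin(0.7 / r_el)
p1, p2 = solve_sets(r_az, a_az, r_el, p_el, offs)
```

With quantization-free inputs the two position estimates coincide exactly; quantization noise perturbs them apart, which is what the discriminant described next measures.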
(123) Consider the association discriminant, β, defined as:
(124) β=√{square root over (|R.sub.SQ1−R.sub.SQ11|)}/√{square root over (|R.sub.AZQ−R.sub.ELQ|)} Equation (9a)
(125) Observe that Equation (9a) is firstly a function of the absolute difference of two squared-ranges, namely R.sub.SQ1 and R.sub.SQ11, defined by:
R.sub.SQ1=x.sub.tQ1.sup.2+y.sub.tQ1.sup.2+z.sub.tQ1.sup.2
R.sub.SQ11=x.sub.tQ2.sup.2+y.sub.tQ2.sup.2+z.sub.tQ2.sup.2
(126) Each squared range is derived from the sum of the square of the estimated 3D target position components of an equation set, said components being obtained as the solution to the respective equation set. The association discriminant, β, further incorporates a normalization by the square-root of the absolute difference of the two measured ranges, namely R.sub.AZQ−R.sub.ELQ.
(127) In an exemplary embodiment, Boolean associator 124 will declare the necessary geometric constraints met for Boolean association between any pair (R.sub.AZQ, α.sub.AZQ) and (R.sub.ELQ, φ.sub.ELQ) if both (1) the association discriminant, β, is less than or equal to a first threshold value, and (2) the difference between quantized range values from the pair of arrays is less than or equal to a second threshold value. That is, the first type of constraint, the geometric constraints, required for “Association” or true target detection is met when:
β≦GThreshold.sub.2 AND |R.sub.AZQ−R.sub.ELQ|≦GThreshold.sub.1. Equation (9b)
(128) Otherwise, “No Association” or ghost detection is declared. Selected constant threshold values, GThreshold.sub.1 and GThreshold.sub.2 are defined for these tests.
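The geometric half of the test can be sketched as follows. The exact analytic form of the discriminant is an assumption of this sketch, chosen to match the verbal description (a function of the absolute difference of the two squared ranges, normalized by the square root of the absolute measured range difference, yielding units of the square root of distance); the threshold values and positions are hypothetical, and a small epsilon guards the coincident-range case.

```python
import math

def geometric_constraints(p1, p2, r_azq, r_elq, g_thresh1, g_thresh2,
                          eps=1e-12):
    """Geometric constraints of the Boolean association test. p1 and p2
    are the two analytic position estimates from equation sets I and II;
    the discriminant beta compares their squared ranges, normalized by
    the measured range difference (assumed form)."""
    r_sq1 = sum(c * c for c in p1)   # squared range of estimate I
    r_sq11 = sum(c * c for c in p2)  # squared range of estimate II
    beta = math.sqrt(abs(r_sq1 - r_sq11)) / math.sqrt(abs(r_azq - r_elq) + eps)
    # Equation (9b): both conditions must hold for "Association"
    return beta <= g_thresh2 and abs(r_azq - r_elq) <= g_thresh1

# Agreeing estimates (true target) pass; disagreeing estimates (ghost) fail.
ok = geometric_constraints((10, 2, 1), (10, 2, 1), 10.25, 9.71, 1.0, 0.5)
bad = geometric_constraints((10, 2, 1), (12, 2, 1), 10.25, 9.71, 1.0, 0.5)
```

Unlike the legacy range-only test, a pair that satisfies the range gate but whose two position estimates disagree is still rejected, which is the mechanism that suppresses ghosts.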
(129) As defined, the association discriminant, β, is in units of the square-root of distance, although other normalization powers in the denominator are also possible, including one that makes β dimensionless. Those skilled in the art will realize that this binary hypothesis test can be modified to include certain discriminant versions, distinct from β, as defined here; and such versions can lead to performance similar to that of β. For example, excellent associator performance is achievable with more than one selection of the normalization power, combined with the specified numerator of this method. A generalization of Equation (9a) is:
(130) β.sub.k=f.sub.k(R.sub.SQ1,R.sub.SQ11)/g.sub.k(R.sub.AZQ,R.sub.ELQ) Equation (10)
(131) where f.sub.k(R.sub.SQ1,R.sub.SQ11) is a generalized function of the difference between R.sub.SQ1 and R.sub.SQ11, including non-unit norms, and similarly for g.sub.k(R.sub.AZQ,R.sub.ELQ). For each β.sub.k, there is a corresponding threshold test. The intersection of the family of k tests can further reduce ghost detections, albeit with more computation required.
(132) As stated above, both (a) the earlier geometric constraints and (b) the detection based constraint must be simultaneously satisfied to declare a fast-time association across the Boolean pair using the Boolean association method. The first type of constraint, the geometric constraints, is met by satisfying the requirements in relation to the first and second threshold values as stated in Equation (9b). Next, the second type of constraint, a detection constraint, must also be met before Boolean associator 124 can declare a correct or true target association. The requirements for the detection constraint and its associated threshold value are discussed next.
(133) Referring back to
(134) In the exemplary embodiment of the Boolean association method, the detection constraint is determined using a detection filter that qualifies a target association with a 3D voxel detection on a single look. This detection filter evaluates a sufficient statistic, f(v.sub.AZ, v.sub.EL) against a threshold, DThreshold, for a desired P.sub.d. For the non-coherent pair of arrays, one form of the sufficient statistic for the detection constraint is the joint voltage magnitude product of the received voltages, such as
(135) f(v.sub.AZ,v.sub.EL)=|v.sub.AZ||v.sub.EL|
or a pre-detected Boolean voltage product. By examining the received voltages, the Boolean associator can reduce thermal noise errors in the measurements and thereby eliminate false alarm “ghost” detections. Thus, in this embodiment, Boolean associator 124 will declare the necessary detection constraint met for Boolean association if the detection filter determines the voltage value is greater than a third threshold value. That is, the second type of constraint, a detection constraint, required for “Association” or true target detection is met when this condition holds. Accordingly, if all of (a) the two geometric constraints and (b) the detection constraint are satisfied, then the resulting detection vector, (R.sub.AZQ, α.sub.AZQ, v.sub.AZ, R.sub.ELQ, φ.sub.ELQ, v.sub.EL), representing the linear array data for the true target detection is passed by Boolean associator 124 to downstream processing by 4D scene imaging unit 130 of high-definition RF radar system 110, or other suitable processing.
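The final decision combines the two constraint types. This minimal sketch assumes the joint voltage-magnitude product form of the sufficient statistic described above; the complex voltages and threshold are hypothetical values.

```python
def boolean_associate(geometric_ok, v_az, v_el, d_threshold):
    """Final Boolean association decision: the geometric constraints
    (evaluated upstream) AND the detection constraint -- here the joint
    voltage-magnitude product sufficient statistic -- must both hold."""
    detection_ok = abs(v_az) * abs(v_el) > d_threshold
    return geometric_ok and detection_ok

# Strong complex returns on both arrays pass; a weak elevation return,
# or a failed geometric test, rejects the pair.
hit = boolean_associate(True, 3 + 4j, 1 + 0j, 4.0)     # |3+4j| = 5
weak = boolean_associate(True, 3 + 4j, 0.5 + 0j, 4.0)  # product 2.5
```

Only pairs passing both gates contribute a detection vector (R.sub.AZQ, α.sub.AZQ, v.sub.AZ, R.sub.ELQ, φ.sub.ELQ, v.sub.EL) to downstream imaging.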
(136) Referring now to
(137) Referring now to
(138) First process 1600 receives as an input, time domain data, Y.sub.AZ(K.sub.AZ, M.sub.AZ) 1601, where K.sub.AZ is quantized time and M.sub.AZ is antenna element number, from first array 112. Time domain data 1601 is first time aligned at step 1602 to form S.sub.AZ(K.sub.AZ, M.sub.AZ) 1603, and then range match filtered at step 1604 by W.sub.RAZ 1605, a function of R.sub.AZQ, to produce range quantized measurements U.sub.AZ(K.sub.AZ, M.sub.AZ) 1606. Range quantized measurements 1606 are near-field beam space match filtered at step 1607 with Wα.sub.AZ 1608, a function of α.sub.AZQ and R.sub.AZQ, with resulting complex voltage v.sub.AZ 1609 resulting in the azimuthal measurement triplet (R.sub.AZQ, α.sub.AZQ, v.sub.AZ) 1620.
(139) Similarly, second process 1610 follows for second array 114. Second process 1610 receives as an input, time domain data, Y.sub.EL(K.sub.EL, M.sub.EL) 1611, where K.sub.EL is quantized time and M.sub.EL is antenna element number, from second array 114. Time domain data 1611 is first time aligned at step 1612 to form S.sub.EL(K.sub.EL, M.sub.EL) 1613, and then range match filtered at step 1614 by W.sub.REL 1615, a function of R.sub.ELQ, to produce range quantized measurements U.sub.EL(K.sub.EL, M.sub.EL) 1616. Range quantized measurements 1616 are near-field beam space match filtered at step 1617 with Wφ.sub.EL 1618, a function of φ.sub.ELQ and R.sub.ELQ, with resulting complex voltage v.sub.EL 1619 resulting in the elevational measurement triplet (R.sub.ELQ, φ.sub.ELQ, v.sub.EL) 1621. The time alignment 1602, 1612 and vector matrix matched filter operations 1604, 1614, 1607, 1617 can be implemented in parallel pipelined FPGA and GPU hardware for low latency.
(140) An exemplary embodiment of Boolean associator 124 is shown in
(141) Accordingly, using the Boolean association method described above, Boolean associator 124 declares a correct or true target association if all of the following conditions are met: (1) an association discriminant is less than or equal to a first threshold value, (2) a difference between quantized range values from the pair of arrays is less than or equal to a second threshold value (i.e., both geometric constraints are met), and (3) a detection filter determines a voltage value is greater than a third threshold value (i.e., the detection constraint is met).
(142)
(143) Referring now to
(144) In
(145) The preceding example is actually generous to the legacy associator. Other more dense images having many closely spaced target clusters result in many more legacy ghosts and thus poor image quality. By contrast, the Boolean associator handles denser scenes just as it handles the illustrated scene. In such denser scenes the Boolean associator produces few ghosts, and with high confidence most of the ghosts are essentially co-located with target estimates, resulting in excellent image quality. Thus, in contrast to the legacy associator, the few retained ghosts have a minimal effect on the image, its interpretation, and estimated features. Moreover, the actual target estimates in the image possess just the minimum error expected of the spatial resolution limits.
(146) Additional comparisons for a different target set are shown in
(147)
(148)
(149) Referring now to
(150) Generally, regardless of the target true analog image, the legacy associator image quality is inferior and often leads to image misinterpretations. The image formed from true targets determined by the Boolean associator of the present embodiments has near optimal quality with reduced ghosting, limited only by the sensor's resolution. The Boolean associator affords a dramatic improvement to image quality, and to subsequent feature estimators. The Boolean-associated image's improved precision supports autonomous vehicle localization, navigation, and/or control.
(151) The present embodiments of high-definition RF radar system 110, as well as the Boolean association method performed by Boolean associator 124, described herein provide an apparatus and method for deploying a distributed, sparse, and low cost high definition RF imaging sensor for autonomous vehicles. While the previous embodiments of high-definition RF radar system 110 provide sufficient resolution to support standalone navigation in almost all weather conditions for autonomous vehicles, in other embodiments, further integration with traditional sensors can increase detection capabilities of an autonomous vehicle.
(152) Referring now to
(153) Referring now to
(154) The multi-mode sensing by multi-mode sensing and imaging unit 2613 begins with coordinating images and features from each of the uni-modal RF, optical, acoustic, motion, and other sensor inputs as shown in
(155) From this combined image provided by multi-mode image and feature fusion unit 2710, scene interpretation unit 2712 and localization unit 2714 can process the results to generate navigation planning of a set of time coordinated inputs to the steering, throttle, and braking control system 2614 to control autonomous vehicle 2600.
(156) Additionally, while the present embodiments have been described in reference to radio-frequency (RF) domain imaging, it should be understood that the principles and methods described herein in relation to the exemplary embodiments can also be applied to high definition imaging systems configured for other frequency domains. For example, a high definition imaging system according to the same principles and methods for array architecture and/or Boolean association method and apparatus of the present embodiments described herein can also apply to sonar and optical arrays. Accordingly, the disclosed methods and apparatus of the present embodiments can be applied to appropriate frequencies to provide for high definition sonar imaging and/or high definition optical imaging, with the corresponding appropriate transmitters and/or receivers for such domains.
(157) While various embodiments of the invention have been described, the description is intended to be exemplary, rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims.