LINEAR KALMAN FILTER WITH RADAR 2D VECTOR VELOCITY OBJECT ESTIMATION USING DISTRIBUTED RADAR NETWORK
20250076483 · 2025-03-06
Inventors
- Jessica Bartholdy Sanson (Munich, DE)
- Kalin Hristov Kabakchiev (Munich, DE)
- Aravind Ramachandran (Cupertino, CA, US)
CPC classification
- G01S13/878 (PHYSICS)
Abstract
A radar sensor system comprises a first radar sensor and at least a second radar sensor and one or more processors configured to perform acts comprising transmitting a first signal from a first transmit antenna in a first radar sensor and transmitting a second signal from a second transmit antenna in a second radar sensor. The acts further comprise detecting an object at the first radar sensor and the second radar sensor and estimating vector velocity information v.sub.x and v.sub.y for the object. The acts also comprise generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object and incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z. Additionally, the acts comprise iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter until correct velocity values are determined.
Claims
1. A method performed by a radar sensor system, the method comprising: transmitting a first signal from a first transmit antenna in a first radar sensor; transmitting a second signal from a second transmit antenna in a second radar sensor; detecting an object at the first radar sensor and the second radar sensor; estimating vector velocity information v.sub.x and v.sub.y for the object; generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object; incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z; and iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
2. The method of claim 1, wherein the first radar sensor has a first rotation angle relative to normal.
3. The method of claim 2, wherein the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
4. The method of claim 1, further comprising fusing radar information with the correct velocity values with lidar information for the detected object.
5. The method of claim 1, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
6. The method of claim 1, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
7. The method of claim 6, wherein the linear least squares formula is a Moore-Penrose inverse linear least squares formula.
8. The method of claim 1, wherein the first and second radar sensors are deployed on an automated vehicle.
9. A radar sensor system comprising: a first radar sensor and at least a second radar sensor; one or more processors configured to perform acts comprising: transmitting a first signal from a first transmit antenna in the first radar sensor; transmitting a second signal from a second transmit antenna in the second radar sensor; detecting an object at the first radar sensor and the second radar sensor; estimating vector velocity information v.sub.x and v.sub.y for the object; generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object; incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z; and iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
10. The radar sensor system of claim 9, wherein the first radar sensor has a first rotation angle relative to normal.
11. The radar sensor system of claim 10, wherein the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
12. The radar sensor system of claim 9, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
13. The radar sensor system of claim 9, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
14. The radar sensor system of claim 9, wherein the first and second radar sensors are deployed on an automated vehicle.
15. A central processing unit comprising: a computer-readable medium having stored thereon instructions which, when executed by a processor, cause the processor to perform certain acts; one or more processors configured to execute the instructions, the acts comprising: causing a first transmit antenna in a first radar sensor to transmit a first signal; causing a second transmit antenna in a second radar sensor to transmit a second signal; detecting an object based on a first received signal received at the first radar sensor responsive to the first signal and a second received signal received at the second radar sensor responsive to the second signal; estimating vector velocity information v.sub.x and v.sub.y for the object; generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object; incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z; and iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
16. The central processing unit of claim 15, wherein the first radar sensor has a first rotation angle relative to normal.
17. The central processing unit of claim 16, wherein the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
18. The central processing unit of claim 15, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
19. The central processing unit of claim 15, wherein estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
20. The central processing unit of claim 15, wherein the first and second radar sensors are deployed on an automated vehicle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0028] Various technologies pertaining to automated vehicle (and other) radar and lidar systems are described herein. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it is to be understood that functionality that is described as being carried out by certain system components may be performed by multiple components. Similarly, for instance, a component may be configured to perform functionality that is described as being carried out by multiple components.
[0029] Moreover, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from the context, the phrase "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, the phrase "X employs A or B" is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from the context to be directed to a singular form.
[0030] Further, as used herein, the terms "component," "module," and "system" are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices. Further, as used herein, the term "exemplary" is intended to mean serving as an illustration or example of something and is not intended to indicate a preference.
[0031] Examples set forth herein pertain to an autonomous vehicle including a radar sensor system that facilitates streamlining radar data processing using a linear Kalman filter to enable rapid fusion with lidar data. Thus, the described techniques mitigate the need for two different Kalman models (linear for lidar and extended for radar) for tracking objects, facilitating the fusion of information from the radar sensor system with information from the lidar system.
[0032] Automotive vehicles need an accurate and reliable sense of the environment around the vehicle. Among the commonly used sensors, radar is generally considered to be a robust and cost-effective solution, even in adverse driving scenarios, such as poor or strong lighting or bad weather. Radar data is often used in late fusion (track fusion) with other sensors such as lidar and camera. Lidar and radar are quite complementary: lidar provides high angular resolution and range, while radar provides information about the radial velocity of objects.
[0033] One of the challenges of merging lidar data and radar data is the different format or model of the data output by each type of sensor. While lidar provides information in a linear (p.sub.x, p.sub.y) model, radar provides information in a polar model (ρ, φ, {dot over (ρ)}). So that information from both can be merged directly, different Kalman models are used for each type of sensor: a linear Kalman filter for lidar, and an extended Kalman filter for radar.
[0034] In the extended Kalman filter, a polar-to-linear model conversion is conventionally performed with a Jacobian matrix because the radar velocity estimate is radial rather than a vector. The position p.sub.x and p.sub.y can be estimated from ρ and φ, but it is difficult to estimate the velocity v.sub.x and v.sub.y directly from {dot over (ρ)}.
[0035] To overcome these problems and others, the described aspects provide a radar network architecture that facilitates rapid estimation of the v.sub.x and v.sub.y velocity of objects. This architecture is based on using at least two radars with overlapping fields of view (FOV) that are rotated at different angles (e.g., 30/30, 10/15, 20/35, etc.) rather than both facing straight ahead. In addition to being able to provide the v.sub.x and v.sub.y velocity, the described aspects allow the model to converge to the correct vector velocity faster than conventional approaches, mitigating a need for multiple measurements and updates. For situations where an object must be detected with a low response time, having the vector velocity information already estimated facilitates correctly predicting the instantaneous motion of the detected object.
[0036] With reference now to
[0037] The radar sensor 100 further comprises one or more digital-to-analog converters (DACs) 108. The hardware logic component 106 comprises a signal generator component 110 that prepares radar signals for transmission by way of the transmit antenna 102. The signal generator component 110 is configured to control the DAC 108 to cause the DAC 108 to generate an analog radar signal for transmission by the transmit antenna 102. In other words, the signal generator component 110 generates digital values that, when received by the DAC 108, cause the DAC 108 to output an analog radar signal having various desired signal characteristics. Hence, the radar sensor 100 is configured as a digitally modulated radar sensor, wherein characteristics of radar signals output by the transmit antenna 102 are digitally controlled by the signal generator component 110 of the hardware logic component 106. For example, the signal generator component 110 can be configured to control the DAC 108 such that the radar sensor operates as a phase-modulated continuous-wave (PMCW) radar sensor. It is to be appreciated that these examples can be extended to other types of radar signals transmitted in steps, linear ramps, etc. (e.g., stepped orthogonal frequency division multiplexing (OFDM) radar, etc.).
[0038] The radar sensor 100 further includes an analog signal processing component 112. The signal processing component 112 is generally configured to perform various analog signal processing operations on analog signals that are to be output by the transmit antenna 102 and/or that are received by the receive antenna 104. By way of example, and not limitation, the signal processing component 112 can amplify a radar signal output by the DAC 108 to increase the power of the radar signal prior to transmission by way of the transmit antenna 102. In a further example, the signal processing component 112 can be configured to mix a radar signal output by the DAC 108 with a carrier signal to shift a center frequency of the radar signal. The signal processing component 112 can include any of various components that are configured to perform these various functions. For example, the signal processing component 112 can include mixers, amplifiers, filters, or the like. Functionality of the signal processing component 112 and its constituent components can be controlled by the hardware logic component 106. The transmit antenna 102 receives processed radar signals from the signal processing component 112 and emits the radar signals into an operational environment of the radar sensor 100.
[0039] The receive antenna 104 receives radar returns from the operational environment. In exemplary embodiments, the radar returns received by the receive antenna 104 comprise reflections, from objects in the operational environment of the sensor 100, of radar signals emitted by the transmit antenna 102. It is to be understood that the radar returns received by the receive antenna 104 can further include reflections of radar signals emitted by other radar emitters that are active within the operational environment of the radar sensor 100. Responsive to receipt of radar returns from the operational environment of the sensor 100, the receive antenna 104 outputs an electrical signal that is indicative of the received radar returns. This electrical signal is referred to herein as a return signal and is transmitted along one or more transmission lines in the radar sensor 100, as distinct from radar returns that are received by the receive antenna 104 as radiated signals propagating through air or free space in the operational environment of the radar sensor 100.
[0040] The signal processing component 112 receives a return signal from the receive antenna 104. The signal processing component 112 is configured to perform various analog signal processing operations over return signals received from the receive antenna 104. By way of example, and not limitation, the signal processing component 112 can perform various mixing, filtering, and amplification operations on return signals output by the receive antenna 104. The signal processing component 112 can be configured to perform various of these signal processing operations (e.g., mixing) based further upon a radar signal transmitted by the transmit antenna 102.
[0041] The radar sensor 100 further comprises one or more analog-to-digital converters (ADCs) 114 that receive a processed return signal from the signal processing component 112. The ADC 114 digitally samples the return signal and outputs digital values that are indicative of the amplitude of the return signal over time. These digital values are collectively referred to herein as radar data. The radar data output by the ADC 114 are indicative of the radar returns received by the receive antenna 104.
[0042] The hardware logic component 106 receives the radar data from the ADC 114. The hardware logic component 106 further comprises a radar processing component 116. The radar processing component 116 is configured to compute positions and/or velocities of targets in the operational environment of the radar sensor 100 based upon the radar data. In a non-limiting example, the radar processing component 116 can compute a range, a bearing, and/or a velocity of a target in the operational environment of the sensor 100 based upon the radar data.
[0043] With reference now to
[0044] The radar processing component 116 comprises a processor 206 and a memory 208 configured to provide certain functionality as described herein. For example, the memory 208 can store computer executable instructions that, when executed by the processor 206, cause the radar processing component 116 to perform certain acts. The memory 208 comprises a range fast Fourier transform (FFT) component 210 that is executed on a digitized signal received from an ADC, such as the ADC 114 of
[0045]
[0046] The radar processing unit 314 performs various acts on the digitized signal and provides functionality similar or identical to the functionality provided by the radar processing component 116 of the hardware logic component 106 (see, e.g.,
[0047] For example, the central unit 316 can receive raw data or point cloud data from two (or more) radar units (Radar1 and Radar2) having overlapping fields of view (FOV). One or both radar units can be rotated by a desired angle relative to normal (i.e., straight ahead) as described herein. The central unit 316 executes a calculation component 318 that, for each radar, determines the angle α at which the radar is rotated relative to normal, and subtracts the detected angle θ of a reflected signal (relative to normal) from the rotation angle to calculate a rotated angle of reflection θ′. This value is then used for position and velocity estimation (v.sub.x, v.sub.y) upon execution of a velocity estimation component 320 by the central unit 316. The central unit 316 executes a measurement component 322 that generates a measurement vector z that includes position information (p.sub.x, p.sub.y) and into which the velocity information (v.sub.x, v.sub.y) is incorporated. The central unit 316 executes a linear Kalman filter 324 using the measurement vector z (with velocity information incorporated) in order to update the velocity estimations until correct velocity values v.sub.x, v.sub.y are converged upon. The processed radar information can be fused with lidar information received from one or more lidar sensors (not shown).
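As a rough sketch of the angle calculation performed by the calculation component, the snippet below (in Python; the function name, variable names, and angle values are hypothetical illustrations, not from the specification) subtracts the detected angle of a reflection from each radar's rotation angle:

```python
def rotated_angle_of_reflection(rotation_deg, detected_deg):
    """Subtract the detected angle of a reflected signal (relative to normal)
    from the radar's rotation angle (also relative to normal)."""
    return rotation_deg - detected_deg

# Two radars rotated differently see the same reflection at different
# rotated angles, which is what makes the vector-velocity estimate solvable.
theta_prime_1 = rotated_angle_of_reflection(rotation_deg=30.0, detected_deg=10.0)
theta_prime_2 = rotated_angle_of_reflection(rotation_deg=-15.0, detected_deg=10.0)
```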
[0048]
[0049] In one embodiment, the MIMO radar sensor units 402, 404 transmit raw radar data to the central unit 316 for processing and velocity and/or range disambiguation. In another embodiment, the MIMO radar sensor units 402, 404 process the received signals and generate respective point clouds including at least velocity and range data, which are transmitted to the central unit 316 for processing. The central unit 316 processes the received radar data as described herein with regard to
[0050]
[0051] In one embodiment, the signal received at the central unit/CPU (not shown in
[0052] With continued reference to
[0053] When estimating the velocity vector using a radar network, the approach involves using two or more radars on the network. Each radar in the network has a different rotation/orientation so that the two radars can estimate the object at different angles. The angular difference between the two radars is proportional to the accuracy of the estimation. The radial velocity of an object estimated by a radar depends on the vector components of the velocity v.sub.x and v.sub.y (assuming v.sub.z=0), and the azimuth angle (θ) and elevation angle (φ) at which the target is illuminated, given by: v.sub.r=(v.sub.x cos(θ)+v.sub.y sin(θ))sin(φ).
[0054] On different sensors, the object can have different radial velocities v.sub.r: v.sub.r,i=(v.sub.x cos(θ.sub.i)+v.sub.y sin(θ.sub.i))sin(φ.sub.i), where i denotes the i-th sensor.
[0055] The elevation (φ) component can be isolated: v.sub.r,i/sin(φ.sub.i)=v.sub.x cos(θ.sub.i)+v.sub.y sin(θ.sub.i).
[0056] To facilitate the mathematical demonstration, consider φ.sub.1=φ.sub.2=90°, and discard the elevation term from the equation. Thus: v.sub.r,i=v.sub.x cos(θ.sub.i)+v.sub.y sin(θ.sub.i).
[0057] If θ.sub.1=θ.sub.2, both sensors estimate the radial velocity with the same value, v.sub.r,1=v.sub.r,2. But if θ.sub.1 is different from θ.sub.2 as described herein, the radial velocity values will be different and are directly related to the velocity vector (v.sub.x and v.sub.y).
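A short numeric check makes this concrete. The sketch below assumes the reduced relation v.sub.r=v.sub.x cos(θ)+v.sub.y sin(θ) (elevation discarded); the velocity and angle values are illustrative, not from the specification:

```python
import math

def radial_velocity(vx, vy, theta_deg):
    # Reduced relation with elevation discarded: v_r = vx*cos(theta) + vy*sin(theta)
    t = math.radians(theta_deg)
    return vx * math.cos(t) + vy * math.sin(t)

vx, vy = 5.0, 2.0  # true vector velocity of the object
# Equal viewing angles: both sensors report the same radial velocity,
# so the pair of measurements cannot disambiguate (vx, vy).
vr_same_1 = radial_velocity(vx, vy, 15.0)
vr_same_2 = radial_velocity(vx, vy, 15.0)
# Different viewing angles: distinct radial velocities that jointly
# constrain the velocity vector.
vr_diff_1 = radial_velocity(vx, vy, 15.0)
vr_diff_2 = radial_velocity(vx, vy, -20.0)
```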
[0058]
[0059] However, by rotating the two radars with different rotations (α.sub.1≠α.sub.2) as shown in examples 604 and 606, both radars detect the target at different angles, so that the estimated radial velocity value in each radar is different, which in turn simplifies vector estimation. The new angles are calculated as: θ′.sub.i=α.sub.i−θ.sub.i.
[0060] Thus, the greater the difference between α.sub.1 and α.sub.2, the greater the difference in the final angle estimated for the target.
[0061] For the new radial velocities (disregarding the elevation (φ) component): v.sub.r,i=v.sub.x cos(θ′.sub.i)+v.sub.y sin(θ′.sub.i).
[0062] After the association of objects/detections between the two radars has been performed, the vector velocity can be directly estimated through a system of equations with two variables (v.sub.x and v.sub.y). If only one value of angle and velocity is available for the object from each radar, v.sub.x and v.sub.y can be determined by isolating a variable in the first equation and estimating its value with the other equation. However, if multiple data points representing multiple values for angle and velocity of the object are available, the estimate can be made using linear least squares (e.g., the Moore-Penrose inverse) such that: [v.sub.x v.sub.y].sup.T=(A.sup.TA).sup.−1A.sup.Tv.sub.r, where each row of A is [cos(θ′.sub.i) sin(θ′.sub.i)] and v.sub.r is the vector of measured radial velocities.
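A minimal sketch of this estimation in Python (function and variable names are hypothetical): solving the normal equations is equivalent to applying the Moore-Penrose pseudoinverse, and with exactly two distinct angles it reduces to solving the two-equation system directly.

```python
import math

def estimate_velocity_vector(angles_deg, radial_velocities):
    """Least-squares estimate of (vx, vy) from radial velocities measured at
    different rotated angles, using the model v_r,i = vx*cos(th_i) + vy*sin(th_i).

    Accumulates the normal equations (A^T A)v = A^T b and solves the 2x2 system
    in closed form, which equals the Moore-Penrose pseudoinverse solution."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for th_deg, vr in zip(angles_deg, radial_velocities):
        c = math.cos(math.radians(th_deg))
        s = math.sin(math.radians(th_deg))
        a11 += c * c; a12 += c * s; a22 += s * s
        b1 += c * vr; b2 += s * vr
    det = a11 * a22 - a12 * a12   # zero if all angles coincide
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic check: radial velocities generated from a known vector velocity
# observed at two different rotated angles recover that velocity.
true_vx, true_vy = 5.0, 2.0
angles = [20.0, -35.0]
vrs = [true_vx * math.cos(math.radians(a)) + true_vy * math.sin(math.radians(a))
       for a in angles]
est_vx, est_vy = estimate_velocity_vector(angles, vrs)
```

Note the singular case: if both radars observe the target at the same rotated angle, det is zero and the system is underdetermined, which is exactly why the radars are rotated differently.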
[0063] After estimating the velocity vector value of the object, the vector can be used as a parameter of linear measurement in prediction models such as Kalman filters, in the same way as lidar, with the advantage that the vector velocity of the object can be initialized with the measured value of the object itself.
[0064] According to another example, a linear Kalman filter with radar measurements using a velocity vector is considered. This example assumes a constant velocity model, although the described systems and methods are not limited thereto. For the lidar case, a state x and an uncertainty P are estimated at time t+1 from the previous states x and P at time t, such that: x′=Fx+u and P′=FPF.sup.T+Q,
[0065] where F is the transition matrix from t to t+1, u is the process noise, and Q is the process noise covariance matrix.
[0066] During the measurement update step, the state is updated along with the prediction for the next step: y=z−Hx′, S=HP′H.sup.T+R, K=P′H.sup.TS.sup.−1, x=x′+Ky, P=(I−KH)P′,
where z is the measurement vector, H is the measurement function, y is the difference between the actual measurement and the prediction, R is the sensor noise, S is the system error, and K is the Kalman Gain.
[0067] In lidar, the measurement vector z and the state vector x are defined as: z=[p.sub.x p.sub.y].sup.T and x=[p.sub.x p.sub.y v.sub.x v.sub.y].sup.T.
[0068] The measurement function H is defined as: H=[1 0 0 0; 0 1 0 0],
where H is the matrix that projects a prediction of the object's current state into the measurement space of the sensor. For lidar, this means that velocity information is discarded from the state variable since the lidar sensor only measures position. The state vector x contains information about [p.sub.x, p.sub.y, v.sub.x, v.sub.y], whereas the z vector only contains [p.sub.x, p.sub.y]. Multiplying H by x permits a comparison of the predicted state x with the sensor-measured value z.
[0069] The transition matrix F is defined as: F=[1 0 Δt 0; 0 1 0 Δt; 0 0 1 0; 0 0 0 1],
where Δt is the measurement interval time.
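The lidar-case predict and measurement-update cycle can be sketched as follows (a minimal constant-velocity example; the function name and the noise values are illustrative assumptions, not from the specification):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + measurement-update cycle of a linear Kalman filter."""
    x_pred = F @ x                        # state prediction
    P_pred = F @ P @ F.T + Q              # uncertainty prediction
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H_lidar = np.array([[1, 0, 0, 0],           # lidar measures position only,
                    [0, 1, 0, 0]], float)   # so H discards the velocity terms
x = np.zeros(4)            # uninformed initial state [px, py, vx, vy]
P = np.eye(4) * 100.0      # large initial uncertainty
Q = np.eye(4) * 1e-3       # process noise covariance
R = np.eye(2) * 0.05       # lidar sensor noise
x, P = kalman_step(x, P, np.array([1.0, 2.0]), F, H_lidar, Q, R)
```

With sensor noise small relative to the prior uncertainty, the updated position lands near the measurement while velocity remains only weakly constrained, which is why a position-only sensor needs several updates before velocity converges.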
[0070] Next, an extended Kalman filter (EKF) is considered for the radar case. The prediction step is the same as in the lidar case: x′=Fx+u and P′=FPF.sup.T+Q.
[0071] In a conventional measurement update step, the matrix H is replaced by a nonlinear function h(x), such that y=z−Hx′ is replaced by y=z−h(x′). In the EKF radar measurement update step, a Jacobian matrix H.sub.j is used to calculate S, K and P, such that: S=H.sub.jP′H.sub.j.sup.T+R, K=P′H.sub.j.sup.TS.sup.−1, and P=(I−KH.sub.j)P′.
[0072] To calculate y, equations that map the predicted location x from cartesian coordinates to polar coordinates are used. The predicted state vector x′ contains values in the form [p.sub.x, p.sub.y, v.sub.x, v.sub.y], but the radar sensors have their output in polar coordinates (ρ, φ, {dot over (ρ)}). In order to calculate y for the radar sensor, x′ needs to be converted to polar coordinates. Thus, the function h(x) maps values from cartesian coordinates to polar coordinates, and the radar equations become: ρ=√(p.sub.x.sup.2+p.sub.y.sup.2), φ=arctan(p.sub.y/p.sub.x), and {dot over (ρ)}=(p.sub.xv.sub.x+p.sub.yv.sub.y)/ρ.
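The cartesian-to-polar mapping h(x) used in the EKF radar update can be sketched as (function name hypothetical):

```python
import math

def h_radar(px, py, vx, vy):
    """Map a cartesian state prediction [px, py, vx, vy] into the radar
    measurement space (rho, phi, rho_dot)."""
    rho = math.hypot(px, py)              # range
    phi = math.atan2(py, px)              # bearing
    rho_dot = (px * vx + py * vy) / rho   # radial projection of the velocity
    return rho, phi, rho_dot

# A target at (3, 4) moving along +x: range 5, range rate 3/5.
rho, phi, rho_dot = h_radar(3.0, 4.0, 1.0, 0.0)
```

Using atan2 rather than a plain arctangent of p.sub.y/p.sub.x keeps the bearing in the correct quadrant and avoids division by zero when p.sub.x=0.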
[0073] Returning to the example considering a linear Kalman filter with radar measurements using a velocity vector, the prediction step in this example is again: x′=Fx+u and P′=FPF.sup.T+Q.
[0074] In the measurement update step, the matrix H no longer needs to be replaced by a nonlinear function h(x), and the Jacobian matrix H.sub.j does not need to be introduced. The equation format for the radar case is now the same as in the lidar case: y=z−Hx′, S=HP′H.sup.T+R, K=P′H.sup.TS.sup.−1, x=x′+Ky, P=(I−KH)P′.
[0075] The difference is in the measurement vector z and the measurement function H. For lidar, the H matrix projects the object's current state into the measurement space of the sensor, discarding velocity information from the state variable since the lidar sensor only measures position. However, using the estimation of v.sub.x and v.sub.y as described herein, this information can be introduced into z. Thus, incorporating the v.sub.x and v.sub.y estimation information into the radar measurement vector z, the measurement vector z and measurement function H for a linear Kalman filter with a radar network become: z=[p.sub.x p.sub.y v.sub.x v.sub.y].sup.T and H=I, the 4×4 identity matrix.
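A minimal sketch of the resulting linear radar update, assuming the measurement vector z carries [p.sub.x, p.sub.y, v.sub.x, v.sub.y] and the measurement function H is the 4×4 identity (the noise values and measurement are illustrative assumptions, not from the specification):

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.eye(4)              # z = [px, py, vx, vy] matches the state directly
x = np.zeros(4)            # uninformed initial state
P = np.eye(4) * 100.0      # large initial uncertainty
Q = np.eye(4) * 1e-3       # process noise covariance
R = np.eye(4) * 0.05       # radar-network measurement noise

z = np.array([10.0, 4.0, 5.0, 2.0])  # position and vector velocity from the network

# One linear measurement update -- no h(x), no Jacobian.
x_pred = F @ x
P_pred = F @ P @ F.T + Q
y = z - H @ x_pred
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x = x_pred + K @ y
P = (np.eye(4) - K @ H) @ P_pred
```

Because the measurement already contains the vector velocity, a single update pulls both the position and velocity components of the state close to the measured values, rather than waiting for velocity to be inferred over several position-only updates.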
[0076] This approach makes the model much simpler than the EKF conventionally used for radar measurements, and the addition of the vectorial information of the instantaneous velocity to the measurements makes the convergence on the correct v.sub.x and v.sub.y values in the estimation of movement x much faster. Thus, the described technique mitigates the need for two different Kalman models (linear for lidar and extended for radar) for tracking objects, facilitating the fusion of information from the radar sensor system with information from the lidar system.
[0077]
[0078] Moreover, the acts described herein may be computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include a routine, a sub-routine, programs, a thread of execution, and/or the like. Still further, results of acts of the methodology can be stored in a computer-readable medium, displayed on a display device, and/or the like.
[0079] Turning now solely to
[0080]
[0081] Various technologies described herein are suitable for use in connection with an autonomous vehicle (AV) that employs a radar sensor system to facilitate navigation about roadways. Referring now to
[0082] The AV 900 further includes several mechanical systems that are used to effectuate appropriate motion of the AV 900. For instance, the mechanical systems can include but are not limited to, a vehicle propulsion system 910, a braking system 912, and a steering system 914. The vehicle propulsion system 910 may be an electric engine, an internal combustion engine, or a combination thereof. The braking system 912 can include an engine brake, brake pads, actuators, a regenerative braking system, and/or any other suitable componentry that is configured to assist in decelerating the AV 900. The steering system 914 includes suitable componentry that is configured to control the direction of movement of the AV 900.
[0083] The AV 900 additionally comprises a computing system 916 that is in communication with the sensor systems 902-908 and is further in communication with the vehicle propulsion system 910, the braking system 912, and the steering system 914. The computing system 916 includes a processor 918 and memory 920 that includes computer-executable instructions that are executed by the processor 918. In an example, the processor 918 can be or include a graphics processing unit (GPU), a plurality of GPUs, a central processing unit (CPU), a plurality of CPUs, an application-specific integrated circuit (ASIC), a microcontroller, a programmable logic controller (PLC), a field programmable gate array (FPGA), or the like.
[0084] The memory 920 comprises a perception system 922, a planning system 924, and a control system 926. Briefly, the perception system 922 is configured to identify the presence of objects and/or characteristics of objects in the driving environment of the AV 900 based upon sensor data output by the sensor systems 902-908. The planning system 924 is configured to plan a route and/or a maneuver of the AV 900 based upon data pertaining to objects in the driving environment that are output by the perception system 922. The control system 926 is configured to control the mechanical systems 910-914 of the AV 900 to effectuate appropriate motion to cause the AV 900 to execute a maneuver planned by the planning system 924.
[0085] The perception system 922 is configured to identify objects in proximity to the AV 900 that are captured in sensor signals output by the sensor systems 902-908. By way of example, the perception system 922 can be configured to identify the presence of an object in the driving environment of the AV 900 based upon images generated by a camera system included in the sensor systems 904-908. In another example, the perception system 922 can be configured to determine a presence and position of an object based upon radar data output by the radar sensor system 902. In exemplary embodiments, the radar sensor system 902 can be or include the radar sensor 100, 300, 402 and/or 404. In such embodiments, the perception system 922 can be configured to identify a position of an object in the driving environment of the AV 900 based upon the estimated range output by the radar sensor 100, 300, 402 and/or 404.
[0086] The AV 900 can be included in a fleet of AVs that are in communication with a common server computing system. In these embodiments, the server computing system can control the fleet of AVs such that radar sensor systems of AVs operating in a same driving environment (e.g., within line of sight of one another, or within a threshold distance of one another) employ different pulse sequence carrier frequencies. In an exemplary embodiment, a radar sensor system of a first AV can be controlled so as not to transmit pulse sequences having the same center frequencies as pulse sequences transmitted by a radar sensor system of a second AV at the same time. In further embodiments, the radar sensor system of the first AV can be controlled to transmit pulse sequences in a different order than a radar sensor system of a second AV. For instance, the radar sensor system of the first AV can be configured to transmit a set of pulse sequences at four different center frequencies A, B, C, and D in an order A, B, C, D. The radar sensor system of the second AV can be configured to transmit pulse sequences using a same set of center frequencies in a frequency order B, A, D, C. Such configurations can mitigate the effects of interference when multiple AVs that employ radar sensor systems are operating in a same driving environment.
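The frequency-ordering idea can be illustrated with a small sketch (the schedules use the letters from the example above; the collision check and variable names are hypothetical):

```python
# Two per-AV center-frequency schedules built from the same set of four
# frequencies (A, B, C, D), transmitted in different orders so that no two
# vehicles use the same center frequency in the same time slot.
first_av_order = ["A", "B", "C", "D"]
second_av_order = ["B", "A", "D", "C"]

# Slots where both AVs would transmit at the same center frequency.
collisions = [slot for slot, (f1, f2)
              in enumerate(zip(first_av_order, second_av_order))
              if f1 == f2]
```

Any pair of orderings with no fixed point relative to each other (a derangement of one schedule with respect to the other) satisfies the no-collision property.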
[0087] Referring now to
[0088] The computing device 1000 additionally includes a data store 1008 that is accessible by the processor 1002 by way of the system bus 1006. The data store 1008 may include executable instructions, radar data, beamformed radar data, embeddings of these data in latent spaces, etc. The computing device 1000 also includes an input interface 1010 that allows external devices to communicate with the computing device 1000. For instance, the input interface 1010 may be used to receive instructions from an external computing device, etc. The computing device 1000 also includes an output interface 1012 that interfaces the computing device 1000 with one or more external devices. For example, the computing device 1000 may transmit control signals to the vehicle propulsion system 910, the braking system 912, and/or the steering system 914 by way of the output interface 1012.
[0089] Additionally, while illustrated as a single system, it is to be understood that the computing device 1000 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 1000.
[0090] Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer-readable storage media. A computer-readable storage medium can be any available storage medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media, including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
[0091] Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include FPGAs, ASICs, Application-specific Standard Products (ASSPs), SOCs, Complex Programmable Logic Devices (CPLDs), etc.
[0092] Described herein are various technologies according to at least the following examples.
[0093] (A1) In an aspect, a method performed by a radar sensor system includes transmitting a first signal from a first transmit antenna in a first radar sensor. The method also includes transmitting a second signal from a second transmit antenna in a second radar sensor. The method further includes detecting an object at the first radar sensor and the second radar sensor. Additionally, the method includes estimating vector velocity information v.sub.x and v.sub.y for the object. Furthermore, the method includes generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object. Moreover, the method includes incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z. The method also includes iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
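Because the measurement vector z in (A1) stacks the position information p.sub.x, p.sub.y together with the estimated vector velocity v.sub.x, v.sub.y, a constant-velocity state x = [p.sub.x, p.sub.y, v.sub.x, v.sub.y] is observed directly and the measurement update remains linear. The following is a minimal sketch of that update step; the identity observation matrix follows from the stacked measurement, while the numeric priors and noise covariance are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# Minimal sketch of the linear Kalman filter measurement update of (A1).
# The measurement z stacks position (px, py) and the estimated vector
# velocity (vx, vy), so the observation matrix H is the identity over the
# constant-velocity state x = [px, py, vx, vy]. Numeric values below are
# illustrative assumptions.

def kf_measurement_update(x, P, z, R):
    H = np.eye(4)                    # z observes the full state directly
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y                # corrected state estimate
    P_new = (np.eye(4) - K @ H) @ P  # corrected covariance
    return x_new, P_new

x = np.zeros(4)                       # prior state [px, py, vx, vy]
P = np.eye(4)                         # prior covariance (assumed)
z = np.array([10.0, 5.0, 1.0, -0.5])  # position + incorporated velocity
R = 0.1 * np.eye(4)                   # measurement noise (assumed)
x, P = kf_measurement_update(x, P, z, R)
```

Iterating this update as new measurement vectors arrive, as (A1) describes, drives the velocity components of the state toward the correct values.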
[0094] (A2) In some embodiments of the method of (A1), the first radar sensor has a first rotation angle relative to normal.
[0095] (A3) In some embodiments of the method of (A2), the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
[0096] (A4) In some embodiments of the method of at least one of (A1)-(A3), the method further includes fusing radar information, including the correct velocity values, with lidar information for the detected object.
[0097] (A5) In some embodiments of the method of at least one of (A1)-(A4), estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
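The two-equation system of (A5) can be sketched as follows. Under the common radial-velocity measurement model (an assumption here, not recited above), each sensor observes v.sub.r = v.sub.x cos θ + v.sub.y sin θ at its detection angle θ; two sensors viewing the object from different angles therefore give two equations in the two unknowns v.sub.x, v.sub.y. All numeric values are illustrative.

```python
import numpy as np

# Sketch of the two-equation case of (A5): each of the two radar sensors
# contributes one radial (Doppler) velocity v_r at one detection angle
# theta, with the assumed model v_r = vx*cos(theta) + vy*sin(theta).
# Two distinct angles yield a solvable 2x2 linear system in (vx, vy).

def solve_vector_velocity(theta1, vr1, theta2, vr2):
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([vr1, vr2])
    return np.linalg.solve(A, b)   # [vx, vy]

# Object with true vector velocity (vx, vy) = (3.0, -1.0), seen from two
# different (illustrative) sensor viewing angles:
vx, vy = 3.0, -1.0
t1, t2 = 0.2, 0.9                  # radians; must differ for a unique solve
vr1 = vx * np.cos(t1) + vy * np.sin(t1)
vr2 = vx * np.cos(t2) + vy * np.sin(t2)
v_est = solve_vector_velocity(t1, vr1, t2, vr2)
```

The solve requires the two viewing angles to differ, which is exactly what the distributed placement of the first and second radar sensors provides.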
[0098] (A6) In some embodiments of the method of at least one of (A1)-(A5), estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
[0099] (A7) In some embodiments of the method of (A6), the linear least squares formula is a Moore-Penrose inverse linear least squares formula.
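The overdetermined case of (A6)/(A7) can be sketched in the same way: with several (angle, radial-velocity) detections of the object across the radar network, one row is stacked per detection and the Moore-Penrose pseudoinverse gives the linear least-squares estimate v = pinv(A)·b. The radial-velocity model and the numeric data below are illustrative assumptions.

```python
import numpy as np

# Sketch of (A6)/(A7): multiple (angle, radial-velocity) data points for
# the same object are stacked into an overdetermined system, and the
# Moore-Penrose pseudoinverse yields the least-squares vector velocity.
# Assumed model: v_r = vx*cos(theta) + vy*sin(theta). Data is illustrative.

rng = np.random.default_rng(0)
vx_true, vy_true = 3.0, -1.0
thetas = np.array([0.1, 0.4, 0.8, 1.2, 1.5])   # detection azimuths (rad)
vr = vx_true * np.cos(thetas) + vy_true * np.sin(thetas)
vr += 0.01 * rng.standard_normal(vr.shape)     # small Doppler noise

A = np.column_stack([np.cos(thetas), np.sin(thetas)])  # one row per detection
v_est = np.linalg.pinv(A) @ vr                 # Moore-Penrose least squares
```

With more detections than unknowns, the pseudoinverse solution averages out per-detection Doppler noise, which is why the multi-point formula is preferred over the exact two-equation solve when the data is available.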
[0100] (A8) In some embodiments of the method of at least one of (A1)-(A7), the first and second radar sensors are deployed on an automated vehicle.
[0101] (B1) In another aspect, a radar system is configured to perform at least one of the methods disclosed herein (e.g., any of the methods of (A1)-(A8)).
[0102] (C1) In yet another aspect, a radar system includes a hardware logic component (e.g., circuitry), where the hardware logic component is configured to control elements of a radar system to perform at least one of the methods disclosed herein (e.g., any of the methods of (A1)-(A8)).
[0103] (D1) In yet another aspect, a radar sensor system includes a first radar sensor and at least a second radar sensor. The radar sensor system further includes one or more processors configured to perform acts including transmitting a first signal from a first transmit antenna in the first radar sensor. The acts further include transmitting a second signal from a second transmit antenna in the second radar sensor. The acts also include detecting an object at the first radar sensor and the second radar sensor. Additionally, the acts include estimating vector velocity information v.sub.x and v.sub.y for the object. Furthermore, the acts include generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object. The acts also include incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z. The acts further include iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
[0104] (D2) In some embodiments of the radar sensor system of (D1), the first radar sensor has a first rotation angle relative to normal.
[0105] (D3) In some embodiments of the radar sensor system of (D2), the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
[0106] (D4) In some embodiments of the radar sensor system of at least one of (D1)-(D3), estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
[0107] (D5) In some embodiments of the radar sensor system of at least one of (D1)-(D4), estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
[0108] (D6) In some embodiments of the radar sensor system of at least one of (D1)-(D5), the first and second radar sensors are deployed on an automated vehicle.
[0109] (E1) In another aspect, a central processing unit includes a computer-readable medium having stored thereon instructions which, when executed by a processor, cause the processor to perform certain acts. The central processing unit also includes one or more processors configured to execute the instructions. The acts include causing a first transmit antenna in a first radar sensor to transmit a first signal. The acts also include causing a second transmit antenna in a second radar sensor to transmit a second signal. The acts further include detecting an object based on a first received signal received at the first radar sensor responsive to the first signal and a second received signal received at the second radar sensor responsive to the second signal. Additionally, the acts include estimating vector velocity information v.sub.x and v.sub.y for the object. Moreover, the acts include generating a radar measurement vector z that comprises position information p.sub.x and p.sub.y for the object. The acts also include incorporating the vector velocity information v.sub.x and v.sub.y into the radar measurement vector z. The acts further include iteratively performing a measurement update using the measurement vector z, with velocity information incorporated therein, and a linear Kalman filter.
[0110] (E2) In some embodiments of the central processing unit of (E1), the first radar sensor has a first rotation angle relative to normal.
[0111] (E3) In some embodiments of the central processing unit of (E2), the second radar sensor has a second rotation angle relative to normal, the second rotation angle being different than the first rotation angle.
[0112] (E4) In some embodiments of the central processing unit of at least one of (E1)-(E3), estimating the vector velocity information v.sub.x and v.sub.y comprises solving a two-equation system when only one value of angle and velocity is available for the object.
[0113] (E5) In some embodiments of the central processing unit of at least one of (E1)-(E4), estimating the vector velocity information v.sub.x and v.sub.y comprises using a linear least squares formula to estimate the velocity values when multiple data points representing multiple values for angle and velocity are available.
[0114] (E6) In some embodiments of the central processing unit of at least one of (E1)-(E5), the first and second radar sensors are deployed on an automated vehicle.
[0115] (F1) In still yet another aspect, use of any of the radar systems (e.g., any of (B1), (C1), (D1)-(D6), or (E1)-(E6)) to detect and classify a target is contemplated.
[0116] What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methodologies for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.