Six-dimensional point cloud system for a vehicle
10139833 · 2018-11-27
Assignee
Inventors
CPC classification
G05D1/0246
PHYSICS
B60W30/16
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0214
PHYSICS
B60T2201/08
PERFORMING OPERATIONS; TRANSPORTING
G01S7/415
PHYSICS
G01S7/41
PHYSICS
International classification
G01S17/02
PHYSICS
B60W30/16
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A six-dimensional point cloud system for a vehicle is provided which may, for example, calculate, for each point in a point cloud, a three-dimensional velocity of the respective point in the point cloud, segment each point in the point cloud into one or more clusters of points, compute a kinematic state of each of the one or more clusters of points, determine an object type associated with each of the one or more clusters, and determine a threat level and a response command based upon the determined object type and the kinematic state of each of the one or more clusters of points.
Claims
1. A vehicle, comprising: at least one control system configured to control the vehicle; and a six-dimensional point cloud system, comprising: a multiple input multiple output radar system; a memory; and a processor communicatively connected to the multiple input multiple output radar system and the memory, the processor configured to: receive, from the multiple input multiple output radar system, a current data frame comprising a point cloud, the point cloud including three-dimensional position information and Doppler data corresponding to an object detected by the multiple input multiple output radar system at each point in the point cloud; calculate, for each point in the point cloud, a three-dimensional velocity of the respective point in the point cloud based upon the three-dimensional position information and Doppler data associated with the point in the current data frame and data from a previous data frame stored in the memory; segment each point in the point cloud into one or more clusters of points based upon the three-dimensional position information associated with each respective point and the calculated three-dimensional velocity of each respective point; compute a kinematic state of each of the one or more clusters of points, the kinematic state including a center of mass for the respective cluster of points, a reference velocity for the respective cluster of points, an angular velocity for the respective cluster of points, and contour points for the respective cluster of points; determine an object type associated with each of the one or more clusters; and determine a threat level and a response command based upon the determined object type and the kinematic state of each of the one or more clusters of points, wherein the response command causes the at least one control system to control the vehicle.
2. The vehicle of claim 1, wherein the processor is further configured to classify each point in the point cloud as either stationary or dynamic based upon the three-dimensional velocity of the respective point of the point cloud.
3. The vehicle of claim 1, wherein the processor is further configured to determine a misalignment angle of the multiple input multiple output radar system based upon data associated with one or more stationary points in the point cloud.
4. The vehicle of claim 1, wherein the processor is further configured to calculate the three-dimensional velocity v.sub.k of each respective point k in the point cloud according to:
5. The vehicle of claim 4, wherein the processor is configured to determine a.sub.jk according to:
6. The vehicle of claim 1, wherein the processor is configured to segment each point in the point cloud into a respective cluster when a sum of:
7. The vehicle of claim 1, wherein the processor is configured to determine the object type associated with the cluster through a deep learning neural network.
8. A method for operating a six-dimensional point cloud system for a vehicle, comprising: receiving, by a processor from a multiple input multiple output radar system, a current data frame comprising a point cloud, the point cloud including three-dimensional position information and Doppler data corresponding to an object detected by the multiple input multiple output radar system at each point in the point cloud; calculating, by the processor for each point in the point cloud, a three-dimensional velocity of the respective point in the point cloud based upon the three-dimensional position information and Doppler data associated with the point in the current data frame and data from a previous data frame stored in a memory; segmenting, by the processor, each point in the point cloud into one or more clusters of points based upon the three-dimensional position information associated with each respective point and the calculated three-dimensional velocity of each respective point; computing, by the processor, a kinematic state of each of the one or more clusters of points, the kinematic state including a center of mass for the respective cluster of points, a reference velocity for the respective cluster of points, an angular velocity for the respective cluster of points, and contour points for the respective cluster of points; determining, by the processor, an object type associated with each of the one or more clusters; determining, by the processor, a threat level and a response command based upon the determined object type and the kinematic state of each of the one or more clusters of points; and activating at least one control system configured to control the vehicle based upon the determined response command.
9. The method according to claim 8, further comprising classifying, by the processor, each point in the point cloud as either stationary or dynamic based upon the three-dimensional velocity of the respective point of the point cloud.
10. The method according to claim 9, further comprising determining a misalignment angle of the multiple input multiple output radar system based upon data associated with one or more stationary points in the point cloud.
11. The method according to claim 8, further comprising calculating the three-dimensional velocity v.sub.k of each respective point k in the point cloud according to:
12. The method according to claim 11, further comprising determining a.sub.jk according to:
13. The method according to claim 8, further comprising segmenting, by the processor, each point in the point cloud into a respective cluster when a sum of:
14. The method according to claim 8, further comprising determining, by the processor, the object type associated with the cluster through a deep learning neural network.
Description
DESCRIPTION OF THE DRAWINGS
(1) The exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
DETAILED DESCRIPTION
(5) The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
(6) As discussed above, city environments are a challenging environment for a vehicle to navigate due to the large number of variable situations the vehicle can encounter. Accordingly, as discussed in further detail below, a six-dimensional point cloud system is provided which utilizes a multiple input multiple output radar system for accurately identifying and tracking objects around a vehicle in both urban and highway environments.
(7)
(8) The vehicle 100 includes a six-dimensional (6D) point cloud system 110. The 6D point cloud system 110 generates three-dimensional (3D) velocity data and 3D position data for objects to accurately identify an object near the vehicle and to accurately track the object, as discussed in further detail below. The objects may be, for example, another vehicle, a motorcycle, a bicycle, a pedestrian, a street light, a street sign, construction/warning objects (e.g., barricades, cones, drums, etc.), or the like.
(9) The 6D point cloud system 110 includes a multiple input multiple output (MIMO) radar system 120. The MIMO radar system 120 may include a single antenna, or multiple co-located or distributed antennas, capable of simultaneously transmitting and receiving multiple frequencies of radar signals in the same field of view. Accordingly, the MIMO radar system 120 is capable of generating a dense point cloud of data. Every time an object reflects the radar signal, a point in the point cloud is created. The number of points in the point cloud that correspond to a single object can vary depending upon the size of the object and the distance of the object from the vehicle. As discussed in further detail below, the MIMO radar system 120 outputs multiple data points for each point in the point cloud which the 6D point cloud system 110 utilizes to identify and track objects.
(10) The 6D point cloud system 110 further includes at least one processor 130. The at least one processor 130 may be a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller, or any other logic device or any combination thereof. While the 6D point cloud system 110 may utilize multiple processors 130, the at least one processor 130 hereinafter is referred to as a singular processor 130 for simplicity.
(11) The 6D point cloud system 110 further includes a memory 140. The memory may be any combination of volatile and non-volatile memory. The memory 140 may store frames of data from the MIMO radar system 120 for processing, as discussed in further detail below. The memory 140 may also store non-transitory computer readable instructions for implementing the 6D point cloud system 110 as discussed herein.
(12) The vehicle 100 further includes at least one control system 150 which can be controlled by the 6D point cloud system 110. The control system 150 may be, for example, a braking system, an acceleration system, a steering system, a lighting system, or the like. When the 6D point cloud system 110 identifies an object which may impact or come near the vehicle 100, the 6D point cloud system 110 may generate a control signal to activate the control system to generate a warning, avoid the object or minimize an impact with the object.
(13)
(14) The data points may be measured in, for example, watts or decibels (intensity), meters (range), meters per second (velocity), hertz (Doppler), degrees (angles), or the like. The Doppler data point includes a range and a range rate. In other words, the Doppler data point includes data corresponding to whether a point in the point cloud, which corresponds to an object, is getting closer to or further from the vehicle 100 as well as the distance from the vehicle 100.
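As an illustration only (the patent does not prescribe a data layout), the per-point output of the MIMO radar system 120 described above might be represented as follows; all field names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical per-point record for the MIMO radar point cloud; the field
# names are illustrative, not taken from the patent text.
@dataclass
class RadarPoint:
    x: float          # position in meters
    y: float
    z: float
    doppler: float    # range rate in m/s (negative = approaching)
    intensity: float  # reflected power, e.g., in dB

    def position(self):
        return (self.x, self.y, self.z)

p = RadarPoint(x=12.0, y=-3.5, z=0.8, doppler=-4.2, intensity=23.0)
```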
(15) The processor 130 then computes an estimated three-dimensional velocity for each data point in the point cloud where an object was detected by the MIMO radar system 120. (Step 210). In order to determine the estimated three-dimensional velocity for each data point in the point cloud, the processor 130 must determine which point in the point cloud corresponds to which point in the point cloud of a previous data frame.
(16)
(17) Accordingly, the processor 130 determines the three-dimensional velocity v.sub.k for each point k in the point cloud of the current frame by finding the velocity v.sub.k that minimizes Equation 1 (i.e., argmin.sub.vk) over the respective point in the point cloud of the current data frame and every point in the point cloud of the previous frame. In Equation 1, s.sub.k is the multidimensional location data received from the MIMO radar system 120 (i.e., the three location points discussed above) for the point k in the current data frame for which the velocity is being determined, m.sub.j is the location data from one of the points in the point cloud of the previous frame, d.sub.k is the Doppler measurement for the point in the current frame (i.e., the Doppler measurement for s.sub.k), n.sub.k is the unit direction from the center of the MIMO radar system 120 to the point s.sub.k, λ.sub.1 and λ.sub.2 are calibration parameters, N.sub.k is the number of radar reflection points in the current time frame that match a point m.sub.j detected in the previous time frame, and a.sub.jk is the probability that m.sub.j and s.sub.k are associated. In one embodiment, for example, a.sub.jk may be calculated according to Equation 2:
(18)
(19) where I.sub.k is an intensity value for the point s.sub.k, I.sub.j is an intensity value for the point m.sub.j, and c is a normalization factor such that a sum of all a.sub.jk is equal to 1. In other words, c is a normalization factor such that the probabilities that any given point in the point cloud of the previous data frame corresponds to the point s.sub.k in the point cloud of the current data frame sum to 1. In Equations 1 and 2, Δt is the difference in time between the respective data frames, and σ.sub.1, σ.sub.2 and σ.sub.3 are calibration parameters equal to the standard deviation of the position measurement, the Doppler measurement, and the intensity measurement of a calibrated target (e.g., a corner reflector), respectively. The processor 130, when performing the calculations for Equations 1 and 2, may initially set the velocity v.sub.k of s.sub.k based upon the N.sub.k candidate points m.sub.j from the previous frame.
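The association probabilities a.sub.jk described above can be sketched numerically as follows; the Gaussian weighting form and the default parameter values are assumptions, since Equation 2 itself is not reproduced in this text. Only the inputs (position, Doppler, and intensity residuals, normalized by their respective standard deviations) come from the description:

```python
import numpy as np

# Sketch of the point-to-point association weights a_jk. Each weight combines
# the position residual (s_k - m_j - v_k * dt), the Doppler residual
# (d_k - n_k . v_k), and the intensity residual (I_k - I_j), each scaled by
# the corresponding calibrated standard deviation; c normalizes the weights.
def association_weights(s_k, d_k, n_k, I_k, prev_pts, prev_I, v_k, dt,
                        sigma1=0.5, sigma2=0.3, sigma3=2.0):
    w = []
    for m_j, I_j in zip(prev_pts, prev_I):
        pos_res = np.linalg.norm(s_k - m_j - v_k * dt) ** 2 / sigma1 ** 2
        dop_res = (d_k - n_k @ v_k) ** 2 / sigma2 ** 2
        int_res = (I_k - I_j) ** 2 / sigma3 ** 2
        w.append(np.exp(-0.5 * (pos_res + dop_res + int_res)))
    w = np.asarray(w)
    return w / w.sum()  # c is the normalization so the weights sum to 1
```

A previous-frame point that lines up with s.sub.k under the candidate velocity v.sub.k receives nearly all of the probability mass, which is the behavior the passage describes.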
(20) Returning to
(21) The processor 130 then classifies each point in the point cloud of the current frame as stationary or dynamic. (Step 220). A point in a point cloud is assumed to be stationary when an estimated relative velocity for the point is below a predetermined threshold. The threshold may be, for example, ten decibels (dB) above a noise level. The processor 130 determines the relative velocity of each point in the point cloud of the current frame by comparing the velocity of the point to the vehicle speed after compensating the velocity v.sub.k for the point (determined in Step 215) for the yaw, pitch and roll of the vehicle and compensating the position of the point (i.e., the three location points from the MIMO radar system 120) based upon a position of the MIMO radar system 120 as well as the yaw, pitch and roll of the vehicle. In one embodiment, for example, the processor may determine a classification for each point in the point cloud based upon Equation 3:
∥R·v.sub.k+(ω.sub.H×R·s.sub.k+V.sub.H)∥<ε  Equation 3
(22) where v.sub.k is the velocity of the point in the current data frame, ω.sub.H is a pitch rate of the vehicle 100, s.sub.k is the position of the point in the current data frame, V.sub.H is the speed of the vehicle 100 and R is a rotational matrix that compensates the velocity v.sub.k and position s.sub.k for the yaw, pitch and roll of the vehicle 100. In one embodiment, for example, the rotational matrix R may be calculated according to Equation 4:
(23)
(24) where ψ is a yaw of the vehicle 100, θ is a pitch of the vehicle 100, φ is a roll of the vehicle, c represents a cosine function (i.e., c.sub.ψ=cos(ψ)) and s represents a sine function (i.e., s.sub.ψ=sin(ψ)).
(25) In Equation 3, ε represents a predetermined stationary speed threshold. In one embodiment, for example, ε may be less than 0.5 kilometers per hour (KPH). Accordingly, if the norm of the sum of the velocities represented on the left side of Equation 3 is less than ε, the processor 130 classifies the point in the point cloud as stationary. However, if the norm of the sum of the velocities represented on the left side of Equation 3 is greater than ε, the processor classifies the point in the point cloud as dynamic. The processor 130, in Step 220, performs this classification for every point in the point cloud of the current data frame.
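The stationary/dynamic test around Equation 3 might be sketched as follows. The yaw-pitch-roll (ZYX) composition order of R and the cross-product form of the rotational term are assumptions drawn from the surrounding description, not from the patent's equations:

```python
import numpy as np

# Rotation matrix compensating for the host vehicle's yaw, pitch, and roll,
# assuming a ZYX (yaw * pitch * roll) composition.
def rotation_zyx(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# A point is stationary when the compensated residual velocity of Equation 3
# falls below the threshold (0.5 KPH converted to m/s here).
def is_stationary(v_k, s_k, omega_h, V_H, R, eps_kph=0.5):
    residual = R @ v_k + (np.cross(omega_h, R @ s_k) + V_H)
    return np.linalg.norm(residual) < eps_kph / 3.6
```

For a host driving straight at 10 m/s with no rotation, a truly stationary point appears with relative velocity of −10 m/s along the travel axis, and the residual cancels to zero.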
(26) In one embodiment, for example, the processor 130 may analyze the points in the point cloud classified as stationary to correct a radar angle alignment. (Step 225). In other words, the processor 130 may analyze the points in the point cloud classified as stationary to determine whether the angle of the MIMO radar system 120 has become misaligned from an intended angle. The misalignment could be from an impact to the vehicle 100 or from installation tolerances during installation of the MIMO radar system 120.
(27) Consider a stationary radar point s.sub.k, for which R·v.sub.k=u.sub.k, where u.sub.k=−(ω.sub.H×p.sub.k+V.sub.H), for k=1, . . . , K. Let V̄=[v.sub.1 . . . v.sub.K] and Ū=[u.sub.1 . . . u.sub.K]. The V̄ and Ū may include radar points from current and past time frames, for example, within 1 hour. The unknown rotation matrix R of the radar referring to the vehicle frame is equal to UCV.sup.T, where C is the matrix:
(28) C=diag(1, 1, det(UV.sup.T))
(29) The det(UV.sup.T) is the determinant of the matrix UV.sup.T. U is the left orthogonal matrix and V is the right orthogonal matrix of the singular value decomposition (SVD) of the matrix ŪV̄.sup.T.
(30) A pitch correction, a roll correction, and an azimuth correction for the radar angle alignment may be calculated according to Equations 5, 6, and 7:
(31)
(32) where the indices 1, 2, and 3 refer to coordinates in the matrix R. The calculated pitch correction, roll correction, and azimuth correction may then be used to correct subsequent radar data input by compensating the subsequent radar data by the calculated corrections.
(33) The matrices V̄ and Ū need not be stored, since V̄Ū.sup.T can be recursively computed. For example, in case of deleting a sample (v.sub.1, u.sub.1) and adding a sample (v.sub.K+1, u.sub.K+1), the new V̄Ū.sup.T is computed as V̄Ū.sup.T=V̄Ū.sup.T−v.sub.1u.sub.1.sup.T+v.sub.K+1u.sub.K+1.sup.T.
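The rotation recovery described in this passage follows the standard orthogonal Procrustes (Kabsch) construction, which may be sketched as follows. The exact matrix ordering in the patent's partially legible equations is assumed here; the det-based C matrix guarantees a proper rotation rather than a reflection:

```python
import numpy as np

# Sketch of SVD-based recovery of the radar-to-vehicle rotation from paired
# stationary-point vectors, with R @ v_k ~ u_k (Kabsch / orthogonal
# Procrustes). C = diag(1, 1, det(U V^T)) forces det(R) = +1.
def estimate_rotation(V_pts, U_pts):
    """V_pts, U_pts: (3, K) matrices of stacked column vectors v_k, u_k."""
    H = U_pts @ V_pts.T                   # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    C = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ C @ Vt
```

Given noiseless pairs generated by a known rotation, the estimate recovers that rotation exactly, which is what makes the recursive accumulation of the cross-covariance over past frames sufficient.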
(34) The processor 130 may then segment the points in the point cloud from the current frame into clusters (Step 230). Any two points of the point cloud may be clustered in the same cluster according to Equation 8:
(35) (∥p.sub.k−p.sub.j∥.sup.2/σ.sub.1.sup.2)+(∥u.sub.k−u.sub.j∥.sup.2/σ.sub.2.sup.2)<1  Equation 8
(36) In other words, when the difference between the three-dimensional positions of two points j and k in the point cloud, divided by σ.sub.1.sup.2, which is a variance in the position difference p.sub.k−p.sub.j, added to the difference between the three-dimensional velocities of the two points j and k, divided by σ.sub.2.sup.2, which is a variance in the velocity difference u.sub.k−u.sub.j, is less than one, then the processor 130 associates the points j and k into the same cluster. As discussed above, the three-dimensional velocity for a point k can be calculated according to Equation 1. The three-dimensional position is determined based upon the data from the MIMO radar system 120.
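The pairwise test described above, combined with transitive merging of linked points into connected components (a grouping step the passage does not spell out, sketched here with a union-find structure), might look like:

```python
import numpy as np

# Sketch of the segmentation step: two points join the same cluster when the
# normalized position + velocity distance is below one; linked pairs are then
# merged transitively with a union-find. The sigma values are illustrative.
def segment(P, U, sigma1=2.0, sigma2=1.0):
    """P, U: (N, 3) positions and velocities. Returns one cluster label per point."""
    n = len(P)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for k in range(n):
        for j in range(k + 1, n):
            d = (np.sum((P[k] - P[j]) ** 2) / sigma1 ** 2
                 + np.sum((U[k] - U[j]) ** 2) / sigma2 ** 2)
            if d < 1.0:
                parent[find(k)] = find(j)
    return [find(i) for i in range(n)]
```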
(37) The processor 130 may then compute a kinematic state for each cluster. (Step 235). The kinematic state may include, for example, a center of mass for the cluster, a reference velocity and angular velocity for the cluster and contour points for the cluster. The contour of the cluster corresponds to an outline of the cluster. The center of mass p.sub.cm of the cluster may be determined by the processor 130 according to Equation 9:
(38) p.sub.cm=(1/C)Σ.sub.k=1.sup.Cp.sub.k  Equation 9
(39) In Equation 9, C corresponds to the number of points in the cluster and p.sub.k corresponds to the position of each point in the cluster. The processor 130 may then determine the reference velocity u.sub.ref and a reference angular velocity ω.sub.ref according to Equation 10:
(40)
(41) where ω is the radial frequency 2πf. The radial frequency is a function of target velocity and the measured frequency shift induced by target motion. The processor 130 may determine the contour points according to p.sub.k−p.sub.cm for each point k=1, 2, . . . C in the cluster. In other words, the processor 130 determines a contour point for each point in the cluster corresponding to the difference in the three-dimensional position of each respective point k in the cluster and the three-dimensional position of the center of mass for the cluster.
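The center-of-mass and contour computations described above may be sketched as follows; the reference-velocity and angular-velocity fit of Equation 10 is omitted because its printed form is not legible in this text:

```python
import numpy as np

# Sketch of the per-cluster kinematic state: the center of mass is the mean
# position of the cluster's points (Equation 9), and each contour point is
# the offset p_k - p_cm of a point from that center.
def kinematic_state(P):
    """P: (C, 3) positions of the points in one cluster."""
    p_cm = P.mean(axis=0)      # (1/C) * sum of p_k over the cluster
    contour = P - p_cm         # one offset vector per cluster point
    return p_cm, contour
```

By construction the contour offsets sum to zero, since they are measured from the cluster's own mean.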
(42) In one embodiment, for example, the processor 130 may use a Kalman filter on the kinematic state data (i.e., the center of mass for the cluster, the reference velocity, the reference angular velocity, and the contour points for the cluster) to track and report a smoothed kinematic state for the cluster over multiple radar data frames. One benefit of using the Kalman filter over multiple frames is that any anomalies in the kinematic state data are averaged out.
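A minimal constant-velocity Kalman filter for smoothing the cluster's center of mass across frames, as the paragraph above suggests, might look like the following sketch; the state model, frame period, and noise levels are illustrative assumptions, and the full kinematic state (angular velocity, contour) is left out for brevity:

```python
import numpy as np

# Minimal constant-velocity Kalman filter over the cluster center of mass.
# State x = [position (3), velocity (3)]; each frame predicts with the motion
# model and updates with the measured center of mass z.
class CentroidKF:
    def __init__(self, p0, dt=0.05, q=1.0, r=0.25):
        self.x = np.hstack([p0, np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                     # process noise (assumed)
        self.R = r * np.eye(3)                     # measurement noise (assumed)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured center of mass
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                          # smoothed position
```

Feeding the filter a steady stream of measurements drives the smoothed position toward the measured centroid while single-frame anomalies are averaged out, which is the benefit the passage notes.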
(43) The processor 130 may then classify an object to be associated with each cluster. (Step 240). In other words, the processor 130 determines an object type for the cluster. A cluster may be classified, for example, as a vehicle, a pedestrian, a barricade, a street light, a stop sign, or any other object a vehicle could encounter on the road. In one embodiment, for example, a deep learning neural network (DLNN) may perform the object classification. As the object classification is a complex process, a separate processor 130, such as a GPU or an array of FPGAs, may be used for this process alone. However, as discussed above, the number of processors 130 in the 6D point cloud system 110 may vary depending upon a performance level of the processor and the desired response time of the 6D point cloud system 110.
(44) The processor 130 may input a normalized intensity map and a time sliding window of micro-Doppler signals to the DLNN for each cluster. As discussed above, the MIMO radar system 120 may output an intensity associated with each data point. The normalized intensity map is created by dividing the intensity values by the total number of values. The time sliding window of micro-Doppler signals corresponds to a frequency in hertz received by the MIMO radar system 120 corresponding to the cluster over a period of data frames. The DLNN creates feature maps for each cluster by extracting features from the intensity map and the time sliding window of micro-Doppler signals.
(45) The processor 130 classifies a cluster based upon the reference velocity associated with the cluster and the contour of the cluster. Each type of target, for example, a pedestrian or a vehicle, has a typical micro-Doppler signature, and thus when analyzing the spectrogram (i.e., a spectrum change over time corresponding to the target), the processor 130 can classify the target based upon which signature the target most closely resembles.
(46) The processor 130 then tracks each classified cluster in subsequent passes through the process. (Step 245). Each object (e.g., vehicle, pedestrian, etc.) has multiple detection points in the point cloud. Accordingly, the processor 130 does not need to track each detection point individually, but rather the group of points clustered together to represent an object. On each subsequent pass through the method 200, the processor 130 tracks a change in the center of the cluster and parameters (e.g., position and velocity) of the cluster. The updated center and other parameters are used in subsequent threat level determinations.
(47) The processor 130 then determines a threat level and response commands based upon the kinematic state data for the object determined in Steps 235 and 245 (i.e., the center of mass, reference velocity, reference angular velocity and contour), the object type determined in Step 240, and the object size (i.e., the number of points in the point cloud corresponding to the object). (Step 250). The response commands may include generating a command for a vehicle control system 150 to avoid an object, to minimize an impact with the object, to warn an object, or the like. The command may be, for example, to brake, accelerate, steer, flash lights, or the like, based upon the cluster velocity and the cluster proximity.
(48) One benefit of the 6D point cloud system 110 is that by generating a 6D point cloud from the four-dimensional data received from the MIMO radar system 120, the 6D point cloud system 110 can accurately identify and track objects in an urban environment using a small sensor array.
(49) While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.