Method and apparatus for phase unwrapping radar detections using optical flow

11372097 · 2022-06-28

    Abstract

    Radar systems are disclosed having phase measures limited to +/−π. An optical flow method considers the time derivative of the range relative to the velocity estimated from the phase, and indicates whether the phase is outside the measurable range by comparing the derivatives to forward and reverse wrap thresholds.

    Claims

    1. A method for determining a velocity of an object detected by a radar system, the method comprising: capturing radar sensor data from a reflected signal; determining a plurality of range measurements for detected objects based on frequency changes in the radar sensor data; determining a plurality of velocity estimations for the detected objects based on phase changes in the radar sensor data; and initiating an optical flow process to confirm the plurality of velocity estimations, the optical flow process comprising: calculating changes in the plurality of range measurements as time derivatives; comparing the time derivatives to threshold limits that comprise a forward wrap threshold and a reverse wrap threshold; and correcting at least one velocity estimation of the plurality of velocity estimations when at least one of the time derivatives is outside a corresponding one of the threshold limits.

    2. The method as in claim 1, wherein one or more of the time derivatives that fall outside the threshold limits indicate phase-wrapped ambiguous velocity estimations.

    3. The method as in claim 2, wherein the radar system has a field of view, the method further comprising dividing the field of view into a plurality of pixels.

    4. The method as in claim 3, wherein calculating the changes in the plurality of range measurements as the time derivatives comprises comparing range measurements for each pixel in the field of view.

    5. The method as in claim 1, wherein the threshold limits identify unambiguous velocity estimations.

    6. The method as in claim 5, wherein correcting the at least one velocity estimation of the plurality of velocity estimations comprises unwrapping a corresponding phase of the time derivative.

    7. The method as in claim 6, further comprising: comparing a phase of a received signal to a phase of a transmitted frequency-modulated continuous-wave (FMCW) signal; determining a phase difference between the received signal and the transmitted FMCW signal; and identifying a velocity corresponding to the phase difference.

    8. The method as in claim 7, wherein the transmitted FMCW signal has a sawtooth waveform.

    9. The method as in claim 1, wherein the radar system is located on a vehicle.

    10. The method as in claim 9, wherein the vehicle is an autonomous vehicle.

    11. The method as in claim 1, wherein the radar system is located on a structure and the structure is one of a building, a billboard, a road sign, or a traffic light.

    12. The method as in claim 1, wherein the forward wrap threshold is +π or +2π, and the reverse wrap threshold is −π or −2π.

    13. A system for determining a velocity of an object detected by a radar system, the system comprising: a radar capture module configured to capture radar sensor data from a reflected signal; a range measurement module configured to determine a plurality of range measurements for detected objects based on frequency changes in the radar sensor data; a velocity estimation unit configured to determine a plurality of velocity estimations for the detected objects based on phase changes in the radar sensor data; and an optical flow module configured to confirm and correct the plurality of velocity estimations by comparing the plurality of velocity estimations with threshold limits, wherein the threshold limits comprise a forward wrap threshold and a reverse wrap threshold.

    14. The system as in claim 13, wherein the optical flow module comprises: a range derivative module configured to calculate changes in the plurality of range measurements as time derivatives; a threshold comparison module configured to compare the time derivatives to the threshold limits; and a resolution module configured to correct a velocity estimation of the plurality of velocity estimations when at least one time derivative is outside the threshold limits.

    15. A radar system, comprising: a radar transceiver configured to prepare modulated transmit signals and to receive reflections of the modulated transmit signals; a range Doppler map (RDM) processing unit coupled to the radar transceiver, the RDM processing unit adapted to capture received data and to generate RDM information over a field of view; and an optical flow module adapted to calculate changes in received data over time to identify an ambiguous measurement and correct the ambiguous measurement by comparing the ambiguous measurement with threshold limits, wherein the threshold limits comprise a forward wrap threshold and a reverse wrap threshold.

    16. The radar system as in claim 15, wherein the optical flow module is configured to calculate changes in range measurement over time and compare the changes in range measurements to measurement capabilities of the radar system.

    17. The radar system as in claim 15, wherein the radar system is located on a vehicle.

    18. The radar system as in claim 15, wherein the radar system is located on a structure and the structure is one of a building, a billboard, a road sign, or a traffic light.

    19. The radar system as in claim 15, wherein the forward wrap threshold is +π or +2π, and the reverse wrap threshold is −π or −2π.

    20. The system as in claim 13, wherein the forward wrap threshold is +π or +2π, and the reverse wrap threshold is −π or −2π.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    (1) The present application may be more fully appreciated in connection with the following detailed description taken in conjunction with the accompanying drawings, which may not be drawn to scale and in which like reference characters refer to like parts throughout, and in which:

    (2) FIG. 1 illustrates perspectives of an object detection system for a vehicular application, according to various implementations of the subject technology;

    (3) FIG. 2 illustrates position and velocity of the vehicles of FIG. 1 as a function of range, according to various implementations of the subject technology;

    (4) FIG. 3 illustrates phase unwrapping of phase measured by an object detection system, according to various implementations of the subject technology;

    (5) FIG. 4 illustrates an object flow mapping of the movement of a vehicle to a two-dimensional (2-D) plane, according to various implementations of the subject technology;

    (6) FIG. 5 illustrates movement of a vehicle and the corresponding optical flow mapping for a vehicle as in FIG. 4, according to various implementations of the subject technology;

    (7) FIG. 6 is a flow diagram of a process for identifying objects as a function of object flow mapping, according to various implementations of the subject technology;

    (8) FIGS. 7A, 7B, 7C, and 7D illustrate a mapping of the movement of a vehicle as a function of object flow mapping using a process as in FIG. 6, according to example embodiments of the present invention;

    (9) FIG. 8A illustrates a plot of velocity measures and thresholds indicating phase wrapping, according to various examples;

    (10) FIG. 8B illustrates a plot of exemplary transmitted and received ramp waveforms in a frequency-modulated continuous-wave (FMCW) system with a sawtooth waveform, according to various implementations of the subject technology;

    (11) FIG. 9 illustrates a radar system incorporating an FMCW modulated signal, according to various implementations of the subject technology; and

    (12) FIG. 10 illustrates an object detection system, according to various implementations of the subject technology.

    DETAILED DESCRIPTION

    (13) Methods and apparatuses to improve object detection in a radar system are disclosed. There are many applications for these solutions, including those as illustrated herein below in a radar system for driver assist and autonomous operation of a vehicle. This is not meant to be limiting, but rather provided for clarity of understanding.

    (14) An object detection system in a vehicle is a moving sensor that is tasked with understanding the environment within which it operates. As illustrated in FIG. 1, a vehicle 104 may be traveling on a busy multi-lane highway 106 with vehicles 102 and 104 moving at a variety of speeds in multiple directions. The vehicle 104 operates within environment 100 and must navigate other vehicles 102.

    (15) In object detection systems incorporating a radar modulation scheme, such as a frequency-modulated continuous-wave (FMCW) scheme, the difference between the transmit and receive signals provides range and velocity information. The velocity is deduced from the phase difference between the transmit and receive signals.

    (16) FIG. 2 illustrates a mapping 110 of the system 105, where the vehicle 104 has a sensor detecting objects within an angular range as indicated. Several vehicles 102, identified as 1, 2, 3, 4, 5, 6, 7, and 8, may be moving in a forward or reverse direction or may be stationary. Some of the vehicles, including 2, 3, 4, 5, 6, and 8, are mapped to a Range-Doppler Map (RDM) as V2, V3, V4, V5, V6, and V8, where the range from vehicle 104 to a given vehicle is plotted against the velocity of that vehicle. As there is a relationship between phase and velocity, the phase difference is graphed as corresponding to velocity; that is, velocity=f(phase). In the present embodiments, the value of B is 180°, or π, and the value of A is some intermediary value. Understanding the velocity of a vehicle and its range provides information to anticipate where that vehicle is moving.

    (17) As velocity of a vehicle, such as vehicle 104, is measured by the phase difference of received signals, there is a limit to the ability to directly measure velocity due to the phase wrapping that occurs after +/−π. As used herein, phase difference will be generally referred to as phase, but the reader understands that this is a measure taken from a received radar signal.

    (18) FIG. 3 illustrates the mapping 120 of vehicle velocities illustrating phase unwrapping. As is shown in the mapping 120, the detectable phase range is −B (i.e. −π) to +B (i.e. +π). The phase of the vehicles is subject to wrapping when a maximum phase (i.e. +/−π) is exceeded.

    (19) As illustrated, the phase of vehicle V.sub.2 measured by the object detection system is D.sub.2 greater than −π, or −B. However, a forward wrap threshold of +π (e.g., refer to the forward wrap threshold π on FIG. 8A) is determined to be exceeded and, therefore, the phase is wrapped (the wrap thresholds will be discussed in detail in the description of FIG. 8A). The velocity of vehicle V.sub.2 cannot be directly derived given the phase-wrapping, as the actual phase of vehicle V.sub.2 is greater than +B, or π. When the phase of vehicle V.sub.2 is unwrapped, the phase difference between the actual phase of V.sub.2 and +B is D.sub.2 as is shown.

    (20) A similar situation occurs with vehicle V.sub.6, where the phase of vehicle V.sub.6 is measured to be D.sub.6 greater than −π, or −B. Similar to vehicle V.sub.2, for vehicle V.sub.6 a forward wrap threshold of +π is determined to be exceeded and, as such, the phase is wrapped. When the phase of vehicle V.sub.6 is unwrapped, the phase difference between the actual phase of V.sub.6 and +B is D.sub.6 as shown. The methods and apparatuses disclosed herein provide solutions to detect whether the phase is wrapped, to unwrap the phase, and to identify the correct velocity using optical flow concepts.
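
    For illustration only, the unwrap arithmetic described for vehicles V.sub.2 and V.sub.6 can be sketched in Python; the function name, the offset value, and the single-wrap assumption are hypothetical and not part of the claimed subject matter:

```python
import math

def unwrap_phase(measured_phase, wrap_count):
    """Recover the actual phase from a wrapped measurement.

    measured_phase: phase reported by the radar, limited to (-pi, +pi].
    wrap_count: number of forward (+) or reverse (-) wraps, determined
    elsewhere from the range-derivative comparison (see FIG. 8A).
    """
    return measured_phase + wrap_count * 2 * math.pi

# Vehicle V2: measured just above -B (i.e. -pi + D2), wrapped once forward.
B = math.pi
D2 = 0.3                       # hypothetical offset D.sub.2
measured = -B + D2             # what the radar reports
actual = unwrap_phase(measured, wrap_count=1)
# The unwrapped phase sits D2 above +B, matching the figure.
assert abs(actual - (B + D2)) < 1e-12
```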

    (21) As illustrated in FIG. 4, movement of a car in three dimensions (3-D) may be mapped to a two-dimensional (2-D) planar view using optical flow processes to indicate the movement. The vehicle 132 of system 130 is moving in a direction indicated by vectors 134. These vectors 134 are then mapped to plane 136 as vectors 138, where the mapping is indicated as dashed lines 135. The optical flow method uses a 2-D plane (e.g., plane 136) to describe the detection area of the sensor (not shown). For a radar sensor, the detection area is the area covered by the transmit and receive beams of the antenna, within which the radar has the capability to detect an object. This does not necessarily consider the angular resolution of the radar, which is the capability to distinguish different objects within the detection area. The optical flow is a 2-D projection of the physical movement of an object, as viewed from the sensor, to a 2-D displacement of pixels within an image plane. Pixels refer to small units within the image plane; the term “pixel” comes from video frame processing, and is used herein to identify a small unit representing a physical location, such as in an RDM.

    (22) Continuing with FIG. 4, the vehicle 132 is moving in the direction of vectors 134 at a time instant. That instant is considered in relation to prior movement to determine an effective trajectory. These movement vectors 134 are then applied to multiple time instants, each corresponding to a scan of the environment.

    (23) FIG. 5 illustrates multiple time instances of measurements of phase of the vehicle 132. At the first time, t.sub.0, the RDM identifies the vehicle 132 at area 140. As indicated, the vehicle 132 is at close range to the sensor. At time t.sub.1 the vehicle 132 has moved further away from the sensor, and has a greater range, as well as an increased velocity. This is indicated as area 142. Then at time t.sub.2 the distance from the sensor to the vehicle 132 increases, thus the range and the velocity continue to increase, as indicated by area 144.

    (24) FIG. 6 illustrates a method for detecting if radar detection results are affected by phase-wrapping and adjusting the results accordingly. The process 200 considers components of successive RDMs from radar scans of a field of view, wherein the radar scans are taken at sequential times, T(i−1), T(i), and T(i+1). In the illustrated example, the process captures sensor data, 202. This data is then presented to an optical flow process 204, to create a time derivative map (refer to FIG. 8A), 206, which is compared to threshold values, 208, to identify phase-wrapped values. The process then determines if any phase-wrap is detected, 210. If no phase-wrap is detected, 210, the process continues to the next set of sensor data, 216; else, the process determines a wrap coefficient, 212. The wrap coefficient provides information as to how many times the phase has wrapped and, thus, as to how to adjust for an accurate velocity. The process adjusts to the unwrapped value, 214. The process then continues to the next sensor data. In this way, the received data is monitored to identify a velocity that is not within the measurable range of the radar system.
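
    As a non-limiting sketch, the detect-and-correct loop of process 200 might look like the following Python; the helper names and the assumption that each full wrap shifts the velocity by 2·V.sub.max are illustrative, not the claimed implementation:

```python
def wrap_coefficient(range_derivative, phase_velocity, v_max):
    """Count how many times the phase has wrapped (step 212).

    range_derivative: dR/dt measured from successive RDMs for one pixel.
    phase_velocity: velocity implied by the (possibly wrapped) phase.
    v_max: maximum unambiguous velocity (corresponding to +/-pi).
    Each full wrap shifts the true velocity by 2 * v_max.
    """
    return round((range_derivative - phase_velocity) / (2 * v_max))

def corrected_velocity(phase_velocity, range_derivative, v_max):
    k = wrap_coefficient(range_derivative, phase_velocity, v_max)
    if k == 0:
        return phase_velocity                 # no wrap detected (210 -> 216)
    return phase_velocity + k * 2 * v_max     # unwrap and adjust (212-214)

v_max = 10.0
# The phase reports -8 m/s, but the range derivative shows roughly +12 m/s,
# so the phase has wrapped once in the forward direction:
assert corrected_velocity(-8.0, 12.0, v_max) == 12.0
```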

    (25) FIGS. 7A, 7B, 7C, and 7D illustrate an example of the process 200 of FIG. 6 in operation for a vehicle V.sub.6, where the movement is increasing in range and velocity. The RDM 220 illustrated in FIG. 7A is not shown in complete form, but rather focuses on the coordinate area describing the vehicle V.sub.6 and its movement. The RDM 220 is an overlay for two points in time, a first coordinate location for t.sub.0 and a second coordinate location for t.sub.1. The RDM 220 is shown in expanded view as RDM portion 222 illustrated in FIG. 7B corresponding to time t.sub.0, and portion 224 illustrated in FIG. 7C corresponding to time t.sub.1. As the velocity is constant, the vehicle V.sub.6 continues to move in an expected way with respect to the sensor. Where the sensor has a velocity different from that of the object (e.g., vehicle V.sub.6), the distance between them changes and, thus, there is a change in the range. The plot 226 depicted in FIG. 7D illustrates the change in pixel values resulting from the movement of vehicle V.sub.6. The expected value line 208 of plot 226 is an expected direct linear relationship between the time derivative of the range and the measured velocity. The exemplary value 230, which is close to the expected value line on plot 226, represents the actual movement of vehicle V.sub.6.

    (26) FIG. 8A illustrates a threshold scheme 300 for determining whether a phase is wrapped (i.e. for determining phase-wrap). There is an expected direct relationship between the time derivative of the range and the measured velocity. The line describing this expected relationship (i.e. the expected value line in the plot of FIG. 8A) defines the midpoints between threshold values. The plot of FIG. 8A also includes a line representing a forward wrap threshold of +π, a line representing a reverse wrap threshold of −π, a line representing a forward wrap threshold of +2π, and a line representing a reverse wrap threshold of −2π.

    (27) The maximum velocity (V.sub.max) corresponds to a phase of +π. When a range change exceeds the forward wrap threshold of +π, this indicates that the detected object is moving faster than directly measurable with the radar system and, thus, the phase has been wrapped in a forward direction. Similarly, where the time derivative of the range is below the reverse wrap threshold of −π, the object is moving in a reverse direction and exceeds the directly measurable capability of the radar system and, as such, the phase has been wrapped in a reverse direction.

    (28) For example, in FIG. 8A, the time derivative of the range for vehicle V.sub.1 is shown to be exceeding the forward wrap threshold of +2π; this indicates that the phase has been wrapped twice in a forward direction. Also shown, the time derivative of the range for vehicle V.sub.2 is shown to be exceeding the reverse wrap threshold of −π; this indicates that the phase has been wrapped a single time in a reverse direction.

    (29) Heuristically, it may be understood that the methods disclosed herein compare independent measures of velocity. The measurement of phase has high precision, but is susceptible to the aforementioned phase-wrapping ambiguity. The measurement of the time derivative of the range is subject to high noise, but has no such ambiguity. The time derivative information can therefore be used to resolve the phase-wrapping ambiguity while retaining the precision of the phase measurement approach.

    (30) FIG. 8B illustrates a plot of exemplary transmitted and received ramp waveforms in a frequency-modulated continuous-wave (FMCW) system with a sawtooth waveform. In the plot of FIG. 8B, the x-axis denotes time (t), and the y-axis denotes frequency (f). An FMCW radar (e.g., refer to 400 of FIG. 9) may use an FMCW sawtooth waveform to determine range and velocity.

    (31) During operation of an FMCW radar, a chirp signal with a sawtooth waveform is launched into free space using a transmit antenna (e.g., refer to 422 of FIG. 9) of the FMCW radar. A chirp signal is an FM-modulated signal of a known stable frequency whose instantaneous frequency varies linearly over a fixed period of time (the sweep time) by a modulating signal. The transmitted signal hits the target (e.g., a vehicle) and reflects back to a receive antenna (e.g., refer to 432 of FIG. 9) of the FMCW radar. From a single chirp, a Fast Fourier Transform (FFT) of an intermediate frequency (IF) signal (refer to FIG. 9), which is the difference between the transmit and receive signals, can be used to determine the range profile of the scanned area. The location of a peak (in frequency space) is proportional to the distance to the corresponding target. By taking multiple such chirps, and noting how the phase of a particular peak changes between pulses, the velocity can be deduced from the rate of phase change. In practice, this is determined by performing an FFT as well.
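
    The two-FFT procedure described above can be sketched as follows; the array shapes and the synthetic single-target IF data are illustrative assumptions:

```python
import numpy as np

def range_doppler_map(chirp_samples):
    """Form a range-Doppler map from K chirps of N IF samples each.

    chirp_samples: complex array of shape (K, N), one row per chirp.
    A range FFT along each chirp locates beat-frequency peaks (range);
    a second FFT across chirps tracks each peak's phase change (velocity).
    """
    range_fft = np.fft.fft(chirp_samples, axis=1)     # per-chirp range profile
    rdm = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # Doppler
    return np.abs(rdm)

# Synthetic IF data: one target at range bin 5 with a per-chirp phase step
# corresponding to Doppler bin 3.
K, N = 16, 64
n = np.arange(N)
phase_step = 2 * np.pi * 3 / K
chirps = np.array([np.exp(1j * (2 * np.pi * 5 * n / N + k * phase_step))
                   for k in range(K)])
rdm = range_doppler_map(chirps)
d, r = np.unravel_index(np.argmax(rdm), rdm.shape)
assert r == 5               # range bin from the beat frequency
assert d == K // 2 + 3      # Doppler bin from the inter-chirp phase change
```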

    (32) In particular, for example, the FMCW radar emits an FMCW signal with a sawtooth waveform having a period T (refer to the transmit signal of FIG. 8B). For a simplified analysis, it is assumed that the signal received (refer to the receive signal of FIG. 8B) after reflection from the target is a copy of the transmitted signal, delayed by the propagation time:
    τ=(2R)/c,  (Eq. 1)
    where R is the range of the target, and c is the speed of light.

    (33) The received signal is mixed (e.g., refer to 410 of FIG. 9) with an attenuated transmit signal. After low-pass filtering (e.g., refer to 408 of FIG. 9), a low-frequency (difference) signal is obtained, referred to as a video signal. The video signal is approximately sinusoidal, and its frequency f.sub.ω, constant in the time interval T−τ, equals the change of the transmitter frequency during time τ,
    f.sub.ω=ατ,  (Eq. 2)
    where α=Δf/T is the modulation waveform slope, and Δf=f.sub.max−f.sub.min (refer to the plot of FIG. 8B) is the maximum frequency deviation. As can be seen from equations (1) and (2), the measurement of the target range R is equivalent to determining the video signal frequency during the T−τ interval.
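
    Equations (1) and (2) can be combined into a short numeric check, since τ=2R/c and f.sub.ω=ατ give R=c·f.sub.ω/(2α); the sweep parameters below are hypothetical automotive values:

```python
# Range from the video (beat) frequency, per Eqs. (1) and (2):
#   tau = 2R/c  and  f_w = alpha * tau  =>  R = c * f_w / (2 * alpha)
c = 3.0e8                      # speed of light, m/s

def range_from_video_freq(f_w, delta_f, T):
    alpha = delta_f / T        # modulation waveform slope, Hz/s
    return c * f_w / (2 * alpha)

# Hypothetical parameters: 300 MHz sweep over a 40 microsecond period.
delta_f, T = 300e6, 40e-6
# A target at 50 m produces tau = 2*50/c and f_w = alpha * tau:
tau = 2 * 50.0 / c
f_w = (delta_f / T) * tau
assert abs(range_from_video_freq(f_w, delta_f, T) - 50.0) < 1e-6
```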

    (34) If a target with an initial range R.sub.0 (at t.sub.0=0) moves with some radial velocity v, the delay will not be constant. Under the condition v<<c, the delay will be almost a linear function of time:
    τ≈(2/c)(R.sub.0+vt)  (Eq. 3)

    (35) As the delay change is relatively slow, it can be noticed only in the phase change of the video signal. If the signal is analyzed in K number of modulation periods, the Doppler frequency can be estimated from the phase changes, thus allowing for the target velocity to be computed.

    (36) Specifically, for example for the computation of the velocity, assume that the FMCW radar transmit antenna emits a transmit signal u(t)=U cos ϕ(t), where −∞<t<∞, whose frequency is:
    f(t)=(1/2π)(dϕ(t)/dt)=f.sub.min+α(t−kT),  (Eq. 4)
    where kT−(T/2)<t<kT+(T/2), and (k=0, +/−1, . . . ), is a periodic function of time as shown on the plot of FIG. 8B.

    (37) The received signal u.sub.0(t)=U cos ϕ(t), where −∞<t<∞, reflected from a target is delayed by the propagation time τ. Upon mixing the received signal u.sub.0(t) with an attenuated copy of the transmit signal u(t) and low-pass filtering, a video signal x(t)=cos ϕ.sub.ω(t), where −∞<t<∞, is obtained. The video signal differential phase ϕ.sub.ω(t) can be described by the equation:
    ϕ.sub.ω(t.sub.k)=2π[f.sub.0τ.sub.0+kf.sub.dT+(f.sub.ω+f.sub.d)t.sub.k],  (Eq. 5)
    where t.sub.k=t−kT and (k=0, +/−1, . . . ), and −T/2+τ<t.sub.k<T/2, and where f.sub.d=(2v/c)f.sub.0 is a Doppler (velocity) frequency of a signal, and f.sub.ω=ατ.sub.0 is a video frequency value corresponding to a target at range R.sub.0. The maximum unambiguously measured velocity is equal to:
    V.sub.max=c/(4Tf.sub.0)  (Eq. 6)
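
    Equation (6) is straightforward to evaluate numerically; the chirp period and carrier frequency below are hypothetical values chosen only to show the scale of V.sub.max:

```python
# Maximum unambiguously measured velocity, per Eq. (6): V_max = c / (4 * T * f0)
c = 3.0e8                      # speed of light, m/s

def v_max(T, f0):
    """T: modulation period (s); f0: carrier frequency (Hz)."""
    return c / (4 * T * f0)

# Hypothetical 77 GHz radar with a 40 microsecond chirp period:
v = v_max(40e-6, 77e9)
assert 24.3 < v < 24.4         # about 24.35 m/s; faster targets phase-wrap
```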

    (38) FIG. 9 illustrates a radar system incorporating an FMCW modulated signal. The system 400 includes a radar transceiver 402 coupled to a radar control unit 450 and an RDM process unit 440. The radar transceiver 402 includes a synthesizer 404 to generate a frequency modulated signal that is transmitted at transmit (Tx) antenna 422 controlled by Tx front end module 420. The signal frequency increases linearly with time, such as a sawtooth wave, enabling time delay calculations to identify the range or distance to a detected object. The system 400 includes a mixer 410 that receives the transmit signal from synthesizer 404 and the received signal from receiver (Rx) front end module 430 via Rx antenna 432, and then outputs a signal at an intermediate frequency (IF) (i.e. IF signal). The IF is the difference in frequency between the transmit and receive signals. The IF signal is provided to low pass filter (LPF) 408 for filtering and then to an analog-to-digital converter (ADC) 406 for conversion from an analog signal to a digital signal. The RDM process unit 440 uses the comparison information to determine range and velocity. The range is proportional to the frequency of the IF signal, and the velocity is proportional to the phase difference of the IF signal. The change in position of an object is reflected in the return time and, thus, the IF phase. In this way, both the range and velocity are calculated using the FMCW signal in the radar system 400.

    (39) FIG. 10 illustrates an object detection system 500 having an optical flow module 550 including a range derivative module 552, a threshold comparison module 554, and a phase resolution module 556. The range derivative module 552 is operable to calculate changes in range measurements as time derivatives, the threshold comparison module 554 is operable to compare the time derivatives to wrap threshold limits (e.g., a forward wrap threshold and a reverse wrap threshold), and the phase resolution module 556 is operable to correct a velocity estimation if at least one time derivative is outside the wrap threshold limits.

    (40) The optical flow module 550 operates in coordination with the radar modules, including radar capture module 504, range measurement module 506, velocity estimation module 508, and RDM process unit 540. The radar capture module 504 receives reflected FMCW signals, and compares frequency and phase between transmitted and received signals. Changes in frequency correspond to distance or range, and changes in phase correspond to velocity of detected objects. The range measurement module 506 determines frequency changes and generates range data. Velocity estimation module 508 determines phase changes and generates velocity data. The outputs of modules 506 and 508 are input to RDM process unit 540 to generate RDMs. The velocity estimation module 508 may measure ambiguous velocities depending on the phase difference detected. If the phase difference exceeds the measurement limits of the object detection system 500, then the phase will wrap around and indicate an incorrect phase. This is the case where the system interprets a phase shift of π/4 as the same as a phase shift of 5π/4, 9π/4, and so forth. The optical flow module 550 identifies these ambiguous measurements of phase-wrapping and enables the system 500 to identify the correct phase and, thus, a more accurate velocity.

    (41) In such radar systems, the phase measure is limited to +/−π, as any value greater or less than this introduces an ambiguity into the velocity estimation. When the phase is outside of this range, the phase wraps around and a direct application to velocity is not accurate. To resolve this issue, the disclosure herein applies optical flow techniques. The optical flow considers the time derivative of the range relative to the velocity estimated from the phase, and gives an indication of whether the phase measure is outside the measurable range by comparing the derivatives to forward and reverse thresholds.

    (42) The disclosed radar system (e.g., radar system 400 of FIG. 9) may implement the various aspects, configurations, processes and modules described throughout this description. The radar system is configured for placement in an autonomous driving system or in another structure in an environment (e.g., buildings, billboards along roads, road signs, traffic lights, etc.) to complement and supplement information of individual vehicles, devices and so forth. The radar system scans the environment, and may incorporate infrastructure information and data, to alert drivers and vehicles as to conditions in their path or surrounding environment. The radar system is also able to identify targets and actions within the environment. The various examples described herein support autonomous driving with improved sensor performance, all-weather/all-condition detection, advanced decision-making algorithms and interaction with other sensors through sensor fusion. The radar system leverages intelligent metamaterial antenna structures and artificial intelligence (AI) techniques to create a truly intelligent digital eye for autonomous vehicles, which can include Level 1, Level 2, Level 3, Level 4, or Level 5 vehicles, i.e. any vehicle having some capability of autonomous driving, from requiring some driver assistance to full automation.

    (43) It is appreciated that the previous description of the disclosed examples is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these examples will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other examples without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

    (44) Where methods described above indicate certain events occurring in certain order, those of ordinary skill in the art having the benefit of this disclosure would recognize that the ordering may be modified and that such modifications are in accordance with the variations of the present disclosure. Additionally, parts of methods may be performed concurrently in a parallel process when possible, as well as performed sequentially. In addition, more steps or less steps of the methods may be performed. Accordingly, embodiments are intended to exemplify alternatives, modifications, and equivalents that may fall within the scope of the claims.