METHOD FOR A DETECTION AND CLASSIFICATION OF GESTURES USING A RADAR SYSTEM

20220003862 · 2022-01-06

Abstract

A method for a detection and classification of gestures using a radar system, particularly of a vehicle. A detection information of the radar system is provided, wherein the detection information is specific for signals received from different antenna units of an antenna array of the radar system. At least one phase-difference information is determined from the detection information, wherein the phase-difference information is specific for a phase-difference of the received signals. A neural network is applied with the phase-difference information as an input for the neural network to obtain a result specific for the detection and classification of the gestures.

Claims

1. A method for a detection and classification of gestures using a radar system of a vehicle, the method comprising: providing a detection information of the radar system, the detection information being specific for signals received from different antenna units of an antenna array of the radar system; determining at least one phase-difference information from the detection information, the phase-difference information being specific for a phase-difference of the received signals; and applying a neural network with the phase-difference information as an input for the neural network to obtain a result specific for the detection and classification of the gestures.

2. The method according to claim 1, wherein the neural network is configured as a region-based deep convolutional neural network.

3. The method according to claim 1, wherein the detection information is specific for a micro-Doppler signature of the gestures.

4. The method according to claim 1, wherein at least one spectrogram is determined from the detection information and used as the input in addition to the phase-difference information for the neural network.

5. The method according to claim 1, wherein the input is specific for multiple gestures, and the neural network is used to distinguish between these multiple gestures, so that the result is specific for a detection of individual ones of the multiple gestures and a classification of these individual gestures.

6. The method according to claim 1, wherein the detection information is determined by signals received from a first and second antenna unit of the antenna array specific for an elevation angle and by signals received from a third and fourth antenna unit of the antenna array specific for an azimuth angle.

7. A radar system comprising: an antenna array for a detection in an environment of the antenna array; and a data processing apparatus comprising: a detector to provide a detection information of the radar system, the detection information being specific for signals received from different antenna units of the antenna array; a determinator to determine at least one phase-difference information from the detection information, the phase-difference information being specific for a phase-difference of the received signals; and an applicator to apply a neural network with the phase-difference information as an input for the neural network to obtain a result specific for the detection and classification of the gestures.

8. The radar system according to claim 7, wherein the antenna array is configured as an L-shaped antenna array.

9. The radar system according to claim 7, wherein the radar system is configured as a frequency-modulated continuous wave radar system.

10. The radar system according to claim 7, wherein the data processing apparatus is adapted to perform the method comprising: providing a detection information of the radar system, the detection information being specific for signals received from different antenna units of an antenna array of the radar system; determining at least one phase-difference information from the detection information, the phase-difference information being specific for a phase-difference of the received signals; and applying a neural network with the phase-difference information as an input for the neural network to obtain a result specific for the detection and classification of the gestures.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0030] The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus, are not limitive of the present invention, and wherein:

[0031] FIG. 1 shows a schematic visualisation of a method according to the invention,

[0032] FIG. 2 shows a further schematic visualisation of a method according to the invention,

[0033] FIG. 3 shows a further schematic visualisation of a method according to the invention, and

[0034] FIG. 4 shows a schematic visualisation of a radar system according to the invention.

DETAILED DESCRIPTION

[0035] In FIG. 1, a method 100 for a detection and classification of gestures using a radar system 1 is visualized. According to a first method step 101, a detection information 200 of the radar system 1 is provided, wherein the detection information 200 is specific for signals received from different antenna units 11, 12, 13, 14 of an antenna array 10 of the radar system 1. According to a second method step 102, at least one phase-difference information 210 from the detection information 200 is determined, wherein the phase-difference information 210 is specific for a phase-difference of the received signals. According to a third method step 103, a neural network 220 is applied with the phase-difference information 210 as an input 221 for the neural network 220 to obtain a result 222 specific for the detection and classification of the gestures.

[0036] FIG. 2 shows further details of how an input 221 for the neural network 220 can be generated by way of example. A first radar signal 111 can be obtained from the signal received from a first antenna unit 11 of the antenna array 10. A second radar signal 112 can be obtained from the signal received from a second antenna unit 12 of the antenna array 10. A third radar signal 113 can be obtained from the signal received from a third antenna unit 13 of the antenna array 10. A fourth radar signal 114 can be obtained from the signal received from a fourth antenna unit 14 of the antenna array 10.

[0037] Then, the first radar signal 111 can be used for a time-frequency analysis 120 so as to obtain a time-frequency spectrum 133 (spectrogram). The first radar signal 111 and the second radar signal 112 can be used to calculate a first phase-difference information 131 by using a calculation 121. The third and fourth radar signal 113, 114 can be used to calculate a second phase-difference information 132 by using the calculation 121. The first and second phase-difference information 131, 132 together with the spectrogram 133 can form the input 221 for the neural network 220.
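The processing of paragraph [0037] can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names, the hand-rolled short-time Fourier transform, and the simulated complex baseband signals (two antenna signals differing by a constant 0.5 rad phase offset) are assumptions introduced for illustration only.

```python
import numpy as np

def phase_difference(s1, s2):
    """Per-sample phase difference (radians) between two complex antenna signals."""
    return np.angle(s1 * np.conj(s2))

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram via a simple short-time Fourier transform."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # frequency x time

# Illustrative complex baseband signals from two antenna units,
# differing by a constant phase offset of 0.5 rad.
t = np.arange(512)
s1 = np.exp(1j * 0.1 * t)
s2 = np.exp(1j * (0.1 * t - 0.5))

dphi = phase_difference(s1, s2)  # phase-difference information (~0.5 rad here)
spec = spectrogram(s1.real)      # time-frequency spectrum (spectrogram 133)
```

In this sketch, `dphi` plays the role of the phase-difference information 131 (and, for the second antenna pair, 132), while `spec` plays the role of the spectrogram 133; stacked together they would form the network input 221.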

[0038] According to FIG. 3, an exemplary processing for determining the input 221 for the neural network 220 is described. For generating the input 221 of the neural network 220, a feature extraction network (FEN) can optionally be used. To extract features from the spectrogram 133 and the phase-difference information 131, 132, the FEN 134 can be constructed by using 7 convolutional (Conv) layers, each of which may have a kernel size of 3×3. The kernel number of the first four Conv layers can increase from 64, 128 and 256 to 512, and that of Conv layers 5, 6, and 7 can be 512. In each Conv layer, a rectified linear unit (ReLU) can be used as the activation function. In addition, Conv layers 1, 2, 3 and 5 are each followed by a max-pooling layer with a kernel size of 2×2. The output of the FEN 134 is the feature maps 135, which can have a dimension of W×H×512. For each pixel of the feature maps 135, nine anchors using 3 scales of 8×8, 16×16, 32×32 and 3 aspect ratios of 1:2, 1:1, 2:1 can be generated. Then, among the total of 9WH possible anchors, the network can give several region proposals, i.e., Regions of Interest (RoIs), which are further processed by the following layers in the network. Using the region proposals acquired by a region proposal network (RPN 136), the relevant RoIs in the feature maps 135 can be selected as input of the RoI pooling layer 138 (designated as feature maps with RoI 137). For each RoI, the feature maps 135 can be cropped and then max-pooled to a fixed size because of the size constraint in the following fully-connected (FC) layer. Each pooled RoI can then be fed into two FC layers 139, each of which has 4096 hidden units and is followed by a dropout layer 140 for preventing the network from overfitting. For each RoI, the network gives two outputs using two separate output layers 141. The output layer 141 followed by a softmax function 142 gives the predicted class, and the other gives four values, which encode the bounding box position 143 of the predicted class.
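The anchor geometry described above can be illustrated with a short sketch. The 16-pixel stride follows from the four 2×2 max-pooling layers; the helper names, the area-preserving interpretation of the anchor scales, and the example input size are assumptions made for illustration, not details taken from the disclosure.

```python
import numpy as np

SCALES = (8, 16, 32)                # anchor scales 8x8, 16x16, 32x32
RATIOS = ((1, 2), (1, 1), (2, 1))   # aspect ratios 1:2, 1:1, 2:1

def feature_map_size(img_w, img_h, n_pools=4):
    """Spatial size W x H after n_pools 2x2 max-pooling layers (stride 16 for 4)."""
    stride = 2 ** n_pools
    return img_w // stride, img_h // stride

def anchors_at(cx, cy):
    """The nine anchors (cx, cy, w, h) centred on one feature-map pixel."""
    out = []
    for s in SCALES:
        for rw, rh in RATIOS:
            # keep the anchor area s*s and reshape it to the aspect ratio
            w = s * np.sqrt(rw / rh)
            h = s * np.sqrt(rh / rw)
            out.append((cx, cy, w, h))
    return out

W, H = feature_map_size(256, 128)   # e.g. a hypothetical 256 x 128 input
total_anchors = 9 * W * H           # the 9WH candidate anchors
```

For a 256×128 input this gives a 16×8 feature map and 9·16·8 = 1152 candidate anchors, from which the RPN would select its region proposals.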

[0039] In FIG. 4, an exemplary antenna array 10 with an L-form (i.e. an L-shaped antenna array) is shown. The antenna array 10 can comprise four antenna units, for example each configured as a receiving antenna of the radar system 1. A first antenna unit 11 can be arranged at a distance 15 from a second antenna unit 12. A third antenna unit 13 can be arranged at a distance 15 from a fourth antenna unit 14. The distance 15 is for example λ/2, where λ is the wavelength used with the radar system. This makes it possible to use the pair of the first and second antenna units 11, 12 for a calculation of the elevation angle, and the pair of the third and fourth antenna units 13, 14 for a calculation of the azimuth angle. Furthermore, a data processing apparatus 300 of the radar system 1 is shown, which may perform this calculation.
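The angle calculation for such an antenna pair can be sketched under the usual narrow-band far-field model, in which the phase difference of a pair spaced by d relates to the arrival angle θ via Δφ = 2π·d·sin(θ)/λ. The function name and the ~77 GHz wavelength are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def arrival_angle(delta_phi, wavelength, spacing):
    """Angle of arrival (radians) from the phase difference of one antenna pair.

    Far-field, narrow-band model: delta_phi = 2*pi*spacing*sin(theta)/wavelength.
    With spacing = wavelength/2 this reduces to theta = arcsin(delta_phi / pi).
    """
    return np.arcsin(delta_phi * wavelength / (2 * np.pi * spacing))

wavelength = 3e8 / 77e9            # e.g. a ~77 GHz automotive radar, in metres
spacing = wavelength / 2           # the lambda/2 distance 15
theta_true = np.deg2rad(30)        # a hypothetical target at 30 degrees

# phase difference such a target would produce at this pair
delta_phi = 2 * np.pi * spacing * np.sin(theta_true) / wavelength
angle = arrival_angle(delta_phi, wavelength, spacing)
```

Applied to the first and second antenna units 11, 12 this yields the elevation angle; applied to the third and fourth antenna units 13, 14 it yields the azimuth angle.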

[0040] The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.