Patent classifications
G01S7/417
METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR THE AUTOMATED LOCATING OF A VEHICLE
A method for determining a geographical location of a vehicle (10) includes using a camera and sensor device (20) of the vehicle for recording (S10) first image and sensor data (30) from the surroundings of the vehicle (10) while the vehicle (10) is traveling a route. The first image and sensor data (30) are assigned geographical coordinates and are sent to a data evaluation unit (50) for creating a digital map. The method continues by using the camera and sensor device (20) for recording (S40) second image and sensor data (30) from the surroundings while the vehicle (10) is traveling the same route and by sending (S50) the recorded second image and sensor data (30) to the data evaluation unit (50). The data evaluation unit (50) compares (S60) the recorded second image and sensor data (30) with the digital map of the surroundings (70) and determines (S70) a position of the vehicle (10).
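An illustrative sketch of the comparison and locating steps (S60/S70), not the patent's actual implementation: the digital map is modeled as a list of georeferenced feature descriptors, and the position is taken from the map entry whose descriptor best matches the newly recorded data. The descriptor representation and the nearest-neighbour matching are assumptions for illustration.

```python
import math

def best_match_position(map_entries, observed_descriptor):
    """Return the coordinates of the map entry whose stored descriptor
    is closest (Euclidean distance) to the observed descriptor."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(map_entries, key=lambda e: dist(e["descriptor"], observed_descriptor))
    return best["position"]

# Digital map built from the first recording pass: each entry pairs the
# assigned geographical coordinates with a feature descriptor.
digital_map = [
    {"position": (48.137, 11.575), "descriptor": [0.9, 0.1, 0.3]},
    {"position": (48.138, 11.576), "descriptor": [0.2, 0.8, 0.5]},
]
print(best_match_position(digital_map, [0.85, 0.15, 0.25]))
```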
VESSEL FIELD OF AWARENESS APPARATUS AND METHOD
A field of awareness (FOA) system provides an operator of a vessel with intuitive object detection and positioning information. The system may comprise an FOA cloud server and an FOA unit. The FOA cloud server may be configured to perform a machine learning training operation to modify an FOA model based on a location-based relationship between training radar data and truth data. The FOA unit may be disposed on the vessel and may comprise processing circuitry configured to apply radar data to the FOA model to perform a comparison to determine a matched model signature, an associated matched object type, and an icon representation for an object of interest. The processing circuitry may also be configured to control the display device to render the icon representation of the object at a position relative to a representation of the vessel based on the relative object position.
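A minimal sketch of the FOA unit's comparison step, reduced to picking the model signature nearest to the incoming radar signature. The model structure (object type mapped to a reference signature and icon name) is assumed for illustration; the patent's trained FOA model is a machine-learned artifact, not a lookup table.

```python
def match_signature(foa_model, radar_signature):
    """Return (matched object type, icon representation) for the model
    signature closest to the observed radar signature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matched_type = min(foa_model, key=lambda t: dist(foa_model[t][0], radar_signature))
    return matched_type, foa_model[matched_type][1]

# Assumed model: object type -> (reference signature, icon name).
foa_model = {
    "buoy":   ([1.0, 0.2], "icon_buoy"),
    "vessel": ([0.1, 0.9], "icon_vessel"),
}
print(match_signature(foa_model, [0.95, 0.25]))
```

The matched icon would then be rendered at the object's position relative to the vessel representation on the display.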
INNOVATIVE METHOD FOR THE DETECTION OF DEFORMED OR DAMAGED STRUCTURES BASED ON THE USE OF SINGLE SAR IMAGES
The invention concerns a method (1) to detect deformations of, and/or damages to, structures permanently arranged on the earth's surface. In particular, said method (1) comprises: acquiring (11) georeferencing data indicative of geographical reference positions of predefined points of interest of a given structure to be monitored permanently arranged on the earth's surface, wherein said predefined points of interest are representative of a 3D geometry of the given structure without deformations and damages; acquiring (12) a SAR image of an area of the earth's surface where the given structure is arranged, wherein said SAR image is associated with a given reference coordinate system; transforming (13) the geographical reference positions of the predefined points of interest into corresponding expected positions in the given reference coordinate system associated with the acquired SAR image so as to carry out a reprojection of the 3D geometry of the given structure without deformations and damages on the acquired SAR image; identifying (14) in the acquired SAR image the predefined points of interest of the given structure; determining (15) actual positions in the given reference coordinate system associated with the acquired SAR image of the predefined points of interest identified in said SAR image; making a comparison (16) between the expected positions of the predefined points of interest and the corresponding actual positions in the acquired SAR image; and detecting (17) one or more deformations of, and/or one or more damages to, said given structure on the basis of the comparison made.
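A hedged sketch of the final steps (16-17): compare the expected (reprojected) positions of the points of interest with their actual positions identified in the SAR image, and flag points whose displacement exceeds a tolerance. Pixel units and the tolerance value are illustrative assumptions.

```python
def detect_deformations(expected, actual, tol_px=1.5):
    """Return the names of points of interest whose actual image position
    deviates from the expected position by more than `tol_px` pixels."""
    flagged = []
    for name, (ex, ey) in expected.items():
        ax, ay = actual[name]
        if ((ax - ex) ** 2 + (ay - ey) ** 2) ** 0.5 > tol_px:
            flagged.append(name)
    return flagged

# Expected positions come from reprojecting the undeformed 3D geometry
# onto the SAR image; actual positions are identified in the image itself.
expected = {"roof_corner": (100.0, 40.0), "pylon_top": (220.0, 75.0)}
actual   = {"roof_corner": (100.4, 40.3), "pylon_top": (226.0, 71.0)}
print(detect_deformations(expected, actual))
```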
User identification device and method using radio frequency radar
A user identification device according to a disclosed embodiment includes a transmitter for scattering radio-frequency (RF) signals into tissues of a body part of a user, a receiver for receiving the RF signals having passed through the tissues of the body part of the user, a memory for storing parameters of a trained classification algorithm, and a processor for identifying the user by analyzing the received RF signals based on the trained classification algorithm by using the parameters of the trained classification algorithm in response to receiving the RF signals through the receiver.
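An illustrative sketch of the identification step as a linear scoring classifier. The stored "parameters of a trained classification algorithm" are modeled here as one weight vector per enrolled user, which is an assumption; the patent does not fix the classifier family.

```python
def identify_user(received_signal, stored_parameters):
    """Score the received RF signal against each enrolled user's stored
    weights and return the best-scoring user identity."""
    scores = {
        user: sum(w * x for w, x in zip(weights, received_signal))
        for user, weights in stored_parameters.items()
    }
    return max(scores, key=scores.get)

# Assumed stored parameters: one weight vector per enrolled user.
parameters = {"alice": [0.8, -0.2, 0.4], "bob": [-0.3, 0.9, 0.1]}
print(identify_user([1.0, 0.1, 0.5], parameters))
```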
Radar based object classification
A method for radar-based object classification may include: obtaining multiple radar samples of an object, wherein the multiple radar samples are acquired at different acquisition times and comprise a plurality of first radar sample parameters; calculating second radar sample parameters for the multiple radar samples by applying one or more non-linear functions to at least some of the plurality of first radar sample parameters of at least some of the multiple radar samples; generating an object signature that comprises temporal information and inter-parameter correlation information, wherein the generating comprises feeding, to each of a deep neural network (DNN) and a time delay neural network (TDNN), (a) at least some of the plurality of first radar sample parameters and (b) at least some of the second radar sample parameters; and classifying, by a classifier, the signature into a signature class.
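A hedged sketch of the parameter and signature pipeline. The chosen non-linear functions (log-magnitude, squaring) and the simple embedding stubs standing in for the DNN and TDNN are illustrative assumptions; the patent only requires that both networks receive first and second parameters and that their outputs form the object signature.

```python
import math

def second_radar_parameters(first_params):
    """Derive second parameters by applying non-linear functions
    (here: log-magnitude and squaring) to the first parameters."""
    return ([math.log1p(abs(p)) for p in first_params]
            + [p * p for p in first_params])

def dnn_stub(params):   # stand-in for the deep neural network
    return [sum(params) / len(params)]

def tdnn_stub(params):  # stand-in for the time delay neural network
    return [max(params) - min(params)]

def object_signature(first_params):
    """Feed first and second parameters to both networks and
    concatenate their outputs into the object signature."""
    both = first_params + second_radar_parameters(first_params)
    return dnn_stub(both) + tdnn_stub(both)

print(object_signature([1.0, 2.0, 3.0]))
```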
HEART BEAT MEASUREMENTS USING A MOBILE DEVICE
Various arrangements for performing ballistocardiography using a mobile device are presented. A radar integrated circuit of a mobile device may emit frequency-modulated continuous-wave (FMCW) radar. Reflected radio waves based on the FMCW radar being reflected off objects may be received and used to create a raw radar waterfall. The raw radar waterfall may be analyzed to create a ballistocardiography waveform. Data based on the ballistocardiography waveform may be output, such as to a machine-learning application installed on the mobile device.
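A minimal sketch of deriving a ballistocardiography-like waveform from a raw radar waterfall. Here the waterfall is assumed to be a list of frames (one per slow-time step), each a list of range-bin amplitudes, and the range bin with the highest variance over slow time is assumed to carry the body motion; real FMCW processing involves considerably more signal conditioning.

```python
def bcg_waveform(waterfall):
    """Return the slow-time series of the range bin whose amplitude
    varies the most (assumed to contain the chest/body motion)."""
    n_bins = len(waterfall[0])
    def variance(series):
        mean = sum(series) / len(series)
        return sum((s - mean) ** 2 for s in series) / len(series)
    # Transpose frames into per-bin slow-time columns.
    columns = [[frame[i] for frame in waterfall] for i in range(n_bins)]
    return max(columns, key=variance)

# Toy waterfall: three slow-time frames, three range bins each.
waterfall = [
    [0.1, 0.9, 0.2],
    [0.1, 0.1, 0.2],
    [0.1, 0.8, 0.2],
]
print(bcg_waveform(waterfall))
```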
Semantic segmentation of radar data
Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
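An illustrative sketch of the binning step: each (range, azimuth) return is assigned to the azimuth bin covering its portion of the field of view, and the per-bin channels carry the range and azimuth of that return. The bin count, field-of-view span, and one-return-per-bin simplification are assumptions; the resulting channels would be the input to the machine-learned segmentation model.

```python
def build_channels(returns, n_bins, fov_deg=360.0):
    """Assign (range, azimuth) returns to azimuth bins and emit
    [range, azimuth] channel data per bin (None where a bin is empty)."""
    width = fov_deg / n_bins
    channels = [None] * n_bins
    for rng, az in returns:
        idx = min(int(az // width), n_bins - 1)
        channels[idx] = [rng, az]
    return channels

returns = [(12.0, 10.0), (30.5, 200.0)]
print(build_channels(returns, 4))
```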
Multi-modal sensor data association architecture
A machine-learning architecture may be trained to associate point cloud data from different types of sensors with an object detected in an image and/or to generate a three-dimensional region of interest (ROI) associated with the object. In some examples, the point cloud data may be associated with sensors such as, for example, a lidar device, a radar device, etc.
Image classification system
A method comprising: obtaining an image; identifying a rotation angle for the image by processing the image with a first neural network; rotating the image by the identified rotation angle to generate a rotated image; classifying the rotated image with a second neural network; and outputting an indication of an outcome of the classification, wherein the first neural network is trained, at least in part, based on a categorical distance between training data and an output produced by the first neural network.
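A hedged sketch of the claimed pipeline: a first network predicts a rotation, the image is rotated accordingly, and a second network classifies the rotated image. The stub networks, the restriction to 90-degree steps, and the list-of-lists image representation are illustrative assumptions.

```python
def rotate90(image, times):
    """Rotate a 2D list image counter-clockwise by `times` * 90 degrees."""
    for _ in range(times % 4):
        image = [list(row) for row in zip(*image)][::-1]
    return image

def classify_image(image, angle_net, classifier_net):
    times = angle_net(image)          # first network: rotation estimate
    rotated = rotate90(image, times)  # generate the rotated image
    return classifier_net(rotated)    # second network: classification

angle_net = lambda img: 1                                        # stub
classifier_net = lambda img: "cat" if img[0][0] == 1 else "dog"  # stub
print(classify_image([[0, 1], [0, 0]], angle_net, classifier_net))
```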
Multi-sensor analysis of food
In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
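A speculative sketch of the final estimation step: combine the food volume obtained from the 3D image with a radar-derived dielectric measurement. The permittivity-to-composition lookup table and density values are illustrative assumptions; the patent only states that a millimeter-wave radar is used for the composition estimate.

```python
def estimate_composition(volume_cm3, measured_permittivity, reference_table):
    """Return the reference food class whose permittivity is closest to
    the measurement, with an estimated mass (density * volume)."""
    food = min(reference_table,
               key=lambda f: abs(reference_table[f]["permittivity"]
                                 - measured_permittivity))
    mass_g = reference_table[food]["density_g_cm3"] * volume_cm3
    return food, round(mass_g, 1)

# Assumed reference values for two coarse composition classes.
reference_table = {
    "water-rich (fruit)": {"permittivity": 60.0, "density_g_cm3": 0.95},
    "fat-rich (cheese)":  {"permittivity": 10.0, "density_g_cm3": 1.10},
}
print(estimate_composition(150.0, 55.0, reference_table))
```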