Patent classifications
G01S13/862
Multi-sensor analysis of food
In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
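The claimed steps (volume from a 3D image, composition from a millimeter-wave radar measurement) can be sketched as follows. This is a minimal illustration, not the patented method: the voxel size, function names, and the permittivity-to-composition table are all assumptions.

```python
# Hypothetical sketch of the claimed pipeline: volume from a 3D image,
# composition from a millimeter-wave radar measurement. All names,
# thresholds, and the permittivity table are illustrative assumptions.

VOXEL_VOLUME_CM3 = 0.125  # assumed voxel edge of 0.5 cm

# Assumed mapping from measured relative permittivity to a coarse
# composition class (a real system would use a calibrated model).
PERMITTIVITY_CLASSES = [
    (50.0, "high water content"),
    (10.0, "moderate water content"),
    (0.0, "low water content / fatty"),
]

def estimate_volume(food_voxels):
    """Volume of the identified food as voxel count times voxel volume."""
    return len(food_voxels) * VOXEL_VOLUME_CM3

def classify_composition(measured_permittivity):
    """Map a radar-derived permittivity to a coarse composition label."""
    for threshold, label in PERMITTIVITY_CLASSES:
        if measured_permittivity >= threshold:
            return label
    return "unknown"

voxels = [(x, y, z) for x in range(10) for y in range(10) for z in range(4)]
print(estimate_volume(voxels))      # 400 voxels * 0.125 cm^3 = 50.0
print(classify_composition(62.0))   # water-rich reading
```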
SENSOR INFORMATION FUSION METHOD AND DEVICE, AND RECORDING MEDIUM RECORDING PROGRAM FOR EXECUTING THE METHOD
A sensor information fusion method of an embodiment includes: obtaining N sensor tracks from each of a plurality of sensors with respect to a target located around a vehicle; calculating association costs of the N sensor tracks with respect to M reference tracks and storing the association costs in a matrix form; calculating an arrangement of reference tracks and sensor tracks that minimizes the association costs over the matrix; and outputting a sensing information result with respect to the target according to the calculated arrangement of the reference tracks and the sensor tracks.
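The association step described above is a classic assignment problem: build an M x N cost matrix and pick the track pairing with minimum total cost. A minimal sketch follows; the squared-distance cost and brute-force search over permutations are illustrative assumptions (a production system would typically use the Hungarian algorithm).

```python
# Sketch of the described association step: build an M x N cost matrix
# between reference tracks and sensor tracks, then pick the assignment
# that minimizes total cost. Brute force over permutations is used here
# for clarity; a real implementation would use the Hungarian algorithm.
from itertools import permutations

def association_cost(ref, trk):
    # Assumed cost: squared Euclidean distance between track positions.
    return (ref[0] - trk[0]) ** 2 + (ref[1] - trk[1]) ** 2

def best_assignment(reference_tracks, sensor_tracks):
    cost = [[association_cost(r, t) for t in sensor_tracks]
            for r in reference_tracks]
    m = len(reference_tracks)
    best, best_total = None, float("inf")
    for perm in permutations(range(len(sensor_tracks)), m):
        total = sum(cost[i][perm[i]] for i in range(m))
        if total < best_total:
            best, best_total = perm, total
    # best[i] is the sensor-track index matched to reference track i
    return best, best_total

refs = [(0.0, 0.0), (10.0, 0.0)]
trks = [(9.5, 0.2), (0.3, -0.1)]
print(best_assignment(refs, trks))  # matches ref 0 -> track 1, ref 1 -> track 0
```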
Deep learning based beam control for autonomous vehicles
Provided are systems and methods for deep learning based beam control. Sensor data associated with the environment and the corresponding objects detected by a perception system are obtained. Object features and image features are extracted, and the extracted object features and image features are fused into fused features. A beam control status is predicted from the fused features, wherein the beam control status indicates a high beam illumination intensity or a low beam illumination intensity of a light emitting device.
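The fuse-then-predict structure can be sketched with a toy linear model. This is purely illustrative of the data flow, not the patented deep network: the feature meanings, weights, and threshold are all assumptions.

```python
# Sketch of the fused-feature beam decision: concatenate object features
# and image features, score them with a (hypothetical) linear model, and
# map the score to a high/low beam status. Weights are illustrative only.

def fuse_features(object_features, image_features):
    """Simple fusion by concatenation, one common baseline."""
    return object_features + image_features

def predict_beam_status(fused, weights, bias=0.0, threshold=0.0):
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return "LOW_BEAM" if score > threshold else "HIGH_BEAM"

# Toy inputs: e.g. [oncoming_vehicle, pedestrian] + [mean_brightness]
obj = [1.0, 0.0]
img = [0.8]
weights = [2.0, 1.5, 1.0]  # assumed: detections and brightness favor low beam
print(predict_beam_status(fuse_features(obj, img), weights))  # LOW_BEAM
```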
SENSOR RECOGNITION INTEGRATION DEVICE
Provided is a sensor recognition integration device capable of reducing the load of integration processing so as to satisfy the minimum necessary accuracy required for vehicle travel control, and capable of improving processing performance of an ECU and suppressing an increase in cost. A sensor recognition integration device B006 that integrates a plurality of pieces of object information related to an object around an own vehicle detected by a plurality of external recognition sensors includes: a prediction update unit 100 that generates predicted object information obtained by predicting an action of the object; an association unit 101 that calculates a relationship between the predicted object information and the plurality of pieces of object information; an integration processing mode determination unit 102 that switches an integration processing mode for determining a method of integrating the plurality of pieces of object information on the basis of a positional relationship between a specific region (for example, a boundary portion) in an overlapping region of detection regions of the plurality of external recognition sensors and the predicted object information; and an integration target information generation unit 104 that integrates the plurality of pieces of object information associated with the predicted object information on the basis of the integration processing mode.
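The mode-switching idea — run heavier integration only when the predicted object sits near a specific region such as a boundary portion of the overlapping detection regions — can be sketched minimally. The one-dimensional region, the margin, and the mode names are assumptions, not the patent's definitions.

```python
# Sketch of the mode-switching idea: run full integration only when the
# predicted object lies near the boundary of the sensors' overlapping
# detection regions, otherwise use a cheaper single-source update.
# Regions, the boundary margin, and mode names are assumptions.

OVERLAP_X = (20.0, 40.0)   # assumed overlap region along x, in metres
BOUNDARY_MARGIN = 2.0      # assumed width of the "specific region"

def integration_mode(predicted_x):
    lo, hi = OVERLAP_X
    near_boundary = (abs(predicted_x - lo) <= BOUNDARY_MARGIN or
                     abs(predicted_x - hi) <= BOUNDARY_MARGIN)
    return "FULL_INTEGRATION" if near_boundary else "LIGHTWEIGHT_UPDATE"

print(integration_mode(21.0))  # near the overlap boundary
print(integration_mode(30.0))  # well inside the overlap
```

Gating the expensive integration path this way is what lets the device reduce processing load while keeping accuracy where it matters most, near the hand-off between sensors.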
Extrinsic calibration of multiple vehicle sensors using combined target detectable by multiple vehicle sensors
Sensors coupled to a vehicle are calibrated, optionally using a dynamic scene with sensor targets around a motorized turntable that rotates the vehicle to different orientations. One vehicle sensor captures a representation of one feature of a sensor target, while another vehicle sensor captures a representation of a different feature of the sensor target, the two features of the sensor target having known relative positioning on the target. The vehicle generates a transformation that maps the captured representations of the two features to positions around the vehicle based on the known relative positioning of the two features on the target.
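The core geometric step — each sensor sees a different feature of the same target, and the known relative positioning of the features ties the sensor frames together — can be sketched in 2D with a translation-only transform. The frames, coordinates, and shared-orientation assumption are illustrative simplifications of the full rigid-body calibration.

```python
# Sketch of the calibration idea: sensor A observes feature 1, sensor B
# observes feature 2, and the offset between the two features on the
# target is known. The offset between the sensor frames then follows by
# composition. 2D, translation-only, purely illustrative.

def sensor_b_to_a_translation(feat1_in_a, feat2_in_b, feat2_minus_feat1):
    """Translation that maps sensor-B coordinates into sensor-A coordinates,
    assuming the two sensor frames share the same orientation."""
    # Position of feature 2 in frame A, from feature 1 plus the known offset:
    feat2_in_a = (feat1_in_a[0] + feat2_minus_feat1[0],
                  feat1_in_a[1] + feat2_minus_feat1[1])
    # t such that p_A = p_B + t for points on the target:
    return (feat2_in_a[0] - feat2_in_b[0], feat2_in_a[1] - feat2_in_b[1])

# Feature 1 seen by a camera at (5, 2); feature 2 seen by a lidar at (4, 1);
# the target places feature 2 at (1, 0) relative to feature 1.
print(sensor_b_to_a_translation((5.0, 2.0), (4.0, 1.0), (1.0, 0.0)))  # (2.0, 1.0)
```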
INFORMATION PROCESSING DEVICE, MOBILE DEVICE, INFORMATION PROCESSING SYSTEM, AND METHOD
An object is to calculate a manual driving recoverable time required for a driver who is executing automatic driving in order to achieve a requested recovery ratio (RRR) for each road section, and to issue a manual driving recovery request notification on the basis of the calculated time. A data processing unit is included that calculates the manual driving recoverable time required for a driver who is executing automatic driving to achieve a predefined requested recovery ratio (RRR) of recovery from automatic driving to manual driving, and that determines the notification timing of the manual driving recovery request notification on the basis of the calculated time. The data processing unit acquires the requested recovery ratio (RRR) for each road section, set as ancillary information of a local dynamic map (LDM), and calculates the manual driving recoverable time for each road section scheduled to be traveled, using learning data for each driver.
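The timing logic amounts to: notify no later than section entry minus the recoverable time derived for this driver and this section's RRR. A minimal sketch follows; the RRR-to-time model and all numbers are assumptions for illustration only.

```python
# Sketch of the timing logic: for each upcoming road section, take the
# requested recovery ratio (RRR), derive a per-driver recoverable time,
# and notify no later than section entry minus that time. The RRR-to-time
# mapping and all numbers are illustrative assumptions.

def recoverable_time_s(rrr, driver_base_time_s):
    """Assumed model: higher requested recovery ratios need more lead time."""
    return driver_base_time_s * (1.0 + rrr)

def notification_time_s(section_entry_time_s, rrr, driver_base_time_s):
    return section_entry_time_s - recoverable_time_s(rrr, driver_base_time_s)

# A section requiring RRR = 0.95 is entered 120 s from now; the driver's
# learned baseline recovery time is assumed to be 10 s.
print(notification_time_s(120.0, 0.95, 10.0))  # notify by ~100.5 s
```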
SYSTEMS AND METHODS OF COOPERATIVE DEPTH COMPLETION WITH SENSOR DATA SHARING
Systems and methods are provided for utilizing sensor data from sensors of different modalities and from different vehicles to generate a combined image of an environment. Sensor data, such as a point cloud, generated by a LiDAR sensor on a first vehicle may be combined with sensor data, such as image data, generated by a camera on a second vehicle. The point cloud and image data may be combined and processed to provide an improved image of the environment of the first and second vehicles, offering benefits over either data source individually. Either vehicle can perform this processing when receiving the sensor data from the other vehicle; an external system can also do the processing when receiving the sensor data from both vehicles. The improved image can then be used by one or both of the vehicles to improve, for example, automated travel through the environment or obstacle identification within it.
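The LiDAR-to-camera combination typically starts by projecting the shared point cloud into the camera image, yielding a sparse depth map that is then completed. The sketch below illustrates that flow with a tiny pinhole model; the intrinsics and the naive nearest-neighbour row fill are assumptions, not the patented completion method.

```python
# Sketch of the cooperative depth-completion idea: project shared LiDAR
# points into the camera image with a pinhole model, then fill empty
# pixels from their nearest measured neighbour in the same row. The
# intrinsics and the simple row-fill strategy are assumptions.

FX, FY, CX, CY = 100.0, 100.0, 4.0, 3.0  # assumed pinhole intrinsics
W, H = 8, 6                              # tiny image for illustration

def project(points_cam):
    """Map 3D points in the camera frame to a sparse per-pixel depth map."""
    depth = {}
    for x, y, z in points_cam:
        if z <= 0:
            continue  # behind the camera
        u, v = int(FX * x / z + CX), int(FY * y / z + CY)
        if 0 <= u < W and 0 <= v < H:
            depth[(u, v)] = min(z, depth.get((u, v), float("inf")))
    return depth

def complete_row(depth, v):
    """Fill a row by copying each pixel's nearest measured depth."""
    measured = sorted(u for (u, vv) in depth if vv == v)
    if not measured:
        return [None] * W
    return [depth[(min(measured, key=lambda m: abs(m - u)), v)]
            for u in range(W)]

pts = [(0.0, 0.0, 5.0), (0.1, 0.0, 5.0)]
sparse = project(pts)
print(complete_row(sparse, 3))
```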
Ultrasonic sensors for work machine obstacle detection
A work machine includes a frame, a sensor assembly, and an ultrasonic sensor. The frame includes a first portion and a second portion that includes a front bumper and is configured to pivot with respect to the first portion for steering the work machine. The sensor assembly is positioned on the first portion or the second portion of the frame and is configured to sense data for detection of obstacles within a first area around the work machine. The ultrasonic sensor is positioned on the front bumper of the second portion and is configured to sense data for detection of obstacles within a second area around the work machine, the second area being outside the first area when the second portion is in an articulated position with respect to the first portion.
METHOD FOR OPTIMIZING A SURROUNDINGS MODEL
A method for optimizing a surroundings model by at least one control unit, in which measured data are received from a first sensor set and at least one second sensor set. The first sensor set includes a first scanning area and the second sensor set includes a second scanning area, the first scanning area and the second scanning area partially overlapping in an overlap area. A surroundings model is created for each sensor set based on the received measured data of that sensor set. The at least two surroundings models are compared to one another based on the overlap area and thereby verified. The at least two surroundings models are then combined into an optimized surroundings model. A system, a control unit, a computer program, and a machine-readable memory medium are also described.
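The compare-verify-combine flow can be sketched with occupancy-grid models. This is a minimal illustration under assumptions: each model is a map from grid cell to occupancy probability, verification means agreement within a tolerance in the overlap area, and combination averages the overlapping cells.

```python
# Sketch of the described fusion: each sensor set yields an occupancy
# grid; in the overlap area the two grids are compared (here: cells must
# agree within a tolerance to count as verified) and then combined into
# one optimized model. Grid layout and tolerance are assumptions.

def combine_models(model_a, model_b, overlap_cells, tol=0.2):
    """Merge two {cell: occupancy_probability} maps.

    Overlap cells are verified by agreement and averaged; all other
    cells are taken from whichever model observed them."""
    combined, mismatches = {}, []
    for cell in set(model_a) | set(model_b):
        in_a, in_b = cell in model_a, cell in model_b
        if cell in overlap_cells and in_a and in_b:
            if abs(model_a[cell] - model_b[cell]) > tol:
                mismatches.append(cell)      # failed verification
            combined[cell] = 0.5 * (model_a[cell] + model_b[cell])
        else:
            combined[cell] = model_a[cell] if in_a else model_b[cell]
    return combined, mismatches

a = {(0, 0): 0.9, (1, 0): 0.1}
b = {(1, 0): 0.2, (2, 0): 0.8}
merged, bad = combine_models(a, b, overlap_cells={(1, 0)})
print(merged, bad)  # cell (1, 0) verified and averaged; no mismatches
```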