Patent classification: G06V10/803
System and Method for Image-Based Remote Sensing of Crop Plants
Methods for image-based remote sensing of crop plants include: acquiring images of the crop plants from a camera flown over the crop by an unmanned/uncrewed aerial vehicle (UAV); forming and training an artificial neural network (ANN); and using the trained ANN to identify and/or measure one or more phenotypic characteristics of the crop plants in the images by classification and/or regression; and/or obtaining multispectral images of the crop plants from a multispectral camera flown over the crop by the UAV; mosaicking the multispectral images together into an orthomosaic reflectance map; determining crop measurement metrics, namely a crop height model (CHM), crop coverage (CC), and crop volume (CV), representing the crop plants in three dimensions from a fusion of a digital surface model and a digital terrain model; determining various vegetation indices (VIs) based on the multispectral orthomosaic reflectance map; and determining a measurement of dry biomass using CV and of fresh biomass using CV×VIs.
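The CHM/CV and VI computations described above can be sketched as follows. This is a minimal illustration, not the patented method: the NDVI formula is a standard vegetation index, and the biomass calibration coefficients (0.9, 1.4) are hypothetical placeholders for regression parameters that would be fitted against ground-truth harvest data.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def crop_volume(dsm, dtm, pixel_area=0.01):
    """Crop volume from a digital surface model minus a digital terrain model.

    The per-pixel crop height model (CHM) is DSM - DTM; volume is the sum
    of positive heights times the ground area of one pixel (m^2).
    """
    chm = np.clip(dsm - dtm, 0.0, None)
    return chm.sum() * pixel_area

# Toy 2x2 plot (reflectance in [0, 1]; heights in metres).
nir = np.array([[0.8, 0.7], [0.6, 0.5]])
red = np.array([[0.1, 0.2], [0.3, 0.4]])
vi = ndvi(nir, red)

dsm = np.array([[2.0, 1.5], [1.2, 1.0]])
dtm = np.array([[1.0, 1.0], [1.0, 1.0]])
cv = crop_volume(dsm, dtm, pixel_area=0.01)

# Hypothetical linear calibrations: dry biomass from CV alone,
# fresh biomass from CV x VI, as in the abstract's final step.
dry_biomass = 0.9 * cv
fresh_biomass = 1.4 * cv * vi.mean()
```

In practice the DSM comes from photogrammetry of the UAV images and the DTM from a bare-ground survey; their difference isolates plant height from terrain.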
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
This application provides an image processing method and apparatus that can: image a target object through each of multiple narrow-band filters to obtain a narrow-band channel image including the target object; and fuse the multiple narrow-band channel images, in one-to-one correspondence with the multiple narrow-band filters, to obtain a color image including a contour of the target object.
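The narrow-band fusion step can be sketched as a linear mixing of the per-filter channel images into RGB. This is an illustrative assumption, not the patented fusion: the mixing matrix and the example band centres (450, 550, 650 nm) are placeholders.

```python
import numpy as np

def fuse_narrowband(channels, weights):
    """Fuse narrow-band channel images into an RGB color image.

    channels: list of 2-D arrays, one per narrow-band filter.
    weights:  (3, N) mixing matrix mapping N narrow bands to R, G, B.
    """
    stack = np.stack([c.astype(float) for c in channels], axis=-1)  # H x W x N
    rgb = stack @ np.asarray(weights, dtype=float).T                # H x W x 3
    return np.clip(rgb, 0.0, 1.0)

# Three hypothetical narrow bands (centred near 450, 550, 650 nm); the
# anti-diagonal mixing matrix assigns the longest band to the red channel.
h = w = 4
bands = [np.full((h, w), v) for v in (0.2, 0.5, 0.8)]
rgb = fuse_narrowband(bands, weights=np.eye(3)[:, ::-1])
```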
POSE FUSION ESTIMATION
Pose fusion estimation may be achieved by a first and a second set of sensors receiving a first and a second set of data, and passing both sets of data through a graph-based neural network to generate a set of geometric features, which are then passed through a pose fusion network to generate a first and a second pose estimate. A second portion of the pose fusion network may receive the set of geometric features and generate a second set of geometric features and the second pose estimate based on the set of geometric features. A first portion of the pose fusion network may receive the first set of data and the second set of geometric features and generate the first pose estimate based on a fusion of the first set of data and the second set of geometric features.
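The two-portion data flow described above can be sketched with toy linear operations standing in for the learned networks. Every weight, shape, and fusion rule here is an illustrative assumption, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_features(sensor_data):
    """Stand-in for the graph-based neural network: project raw sensor
    readings into a shared geometric feature space."""
    W = np.full((4, sensor_data.shape[0]), 0.1)  # toy fixed "weights"
    return np.tanh(W @ sensor_data)

def second_portion(features):
    """Second portion: consumes the geometric features, emits a second
    set of geometric features and the second pose estimate."""
    features2 = np.tanh(features * 0.5)
    pose2 = features2[:3]          # toy 3-DoF pose read off the features
    return features2, pose2

def first_portion(raw_data, features2):
    """First portion: fuses the first sensor set's raw data with the
    second set of geometric features to produce the first pose estimate."""
    fused = np.concatenate([raw_data[:3], features2[:3]])
    return fused[:3] + fused[3:]   # toy fusion: element-wise sum

data1 = rng.normal(size=6)   # first sensor set's data
data2 = rng.normal(size=6)   # second sensor set's data
feats = geometric_features(data2)
feats2, pose2 = second_portion(feats)
pose1 = first_portion(data1, feats2)
```

The key structural point the sketch preserves is the asymmetry: the second pose comes from features alone, while the first pose fuses raw data with the second portion's feature output.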
ADVANCED DRIVER ASSISTANCE SYSTEM, AND VEHICLE HAVING THE SAME
An advanced driver assistance system (ADAS), and a vehicle including the same, include a camera, a plurality of distance detectors, a braking device, and a processor. The processor is configured to: recognize a fusion track and a plurality of single tracks based on obstacle information recognized by the camera and obstacle information recognized by at least one of the plurality of distance detectors; upon determining that the fusion track is present, obtain a cluster area in a stationary state and a cluster area in a moving state based on movement information and reference position information of the fusion track and movement information and position information of each of the single tracks; determine a possibility of collision based on the obtained cluster areas; and control the braking device in response to the determined possibility of collision.
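The stationary/moving clustering and collision test can be sketched as follows. The speed threshold, the time-to-collision rule, and the track format are all illustrative assumptions, not the patented logic:

```python
import math

def cluster_tracks(tracks, speed_threshold=0.5):
    """Split tracks into stationary and moving groups by speed.

    Each track is (x, y, vx, vy) in the ego vehicle's frame; the
    threshold (m/s) separating the two states is an assumed parameter.
    """
    stationary, moving = [], []
    for x, y, vx, vy in tracks:
        (moving if math.hypot(vx, vy) > speed_threshold
         else stationary).append((x, y))
    return stationary, moving

def collision_possible(track, ego_speed, ttc_limit=2.0):
    """Crude time-to-collision test for a track directly ahead."""
    x, y, vx, vy = track
    closing_speed = ego_speed - vx       # longitudinal closing rate
    if closing_speed <= 0:
        return False
    return x / closing_speed < ttc_limit

tracks = [(10.0, 0.5, 0.0, 0.0),   # parked object, 10 m ahead
          (15.0, -1.0, 3.0, 0.0)]  # slower vehicle ahead
stationary, moving = cluster_tracks(tracks)
risk = collision_possible(tracks[1], ego_speed=10.0)  # closing at 7 m/s
```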
Headware with computer and optical element for use therewith and systems utilizing same
An apparatus for mounting on a head is provided, including a frame, face-wearable near-ocular optics, and a micro-display for displaying data in front of the eyes. A computing device is coupled to the micro-display. At least one sensor is coupled to the computing device for receiving biometric human information.
Image processing device, image processing method, and storage medium
An image processing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a visible image of a face; receive a near-infrared image of the face; adjust brightness of the visible image based on a frequency distribution of pixel values of the visible image and a frequency distribution of pixel values of the near-infrared image; specify a relative position at which the visible image is related to the near-infrared image; invert the adjusted brightness of the visible image; detect a region of a pupil from a synthetic image obtained by adding the brightness-inverted visible image and the near-infrared image based on the relative position; and output information on the detected pupil.
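The invert-and-add step exploits the pupil being dark in the visible image but bright (retro-reflective) in near-infrared, so their sum peaks at the pupil. A minimal sketch, assuming 8-bit, already brightness-matched images and a known alignment offset (the function name and toy data are illustrative):

```python
import numpy as np

def pupil_from_fusion(visible, nir, offset=(0, 0)):
    """Locate the pupil by adding an inverted visible image to a
    near-infrared image and taking the brightest pixel.

    `offset` is the relative position aligning visible to NIR.
    """
    inv = 255 - visible.astype(int)           # pupil is dark in visible
    dy, dx = offset
    aligned = np.roll(inv, (dy, dx), axis=(0, 1))
    synthetic = aligned + nir.astype(int)     # pupil is bright in NIR
    return np.unravel_index(np.argmax(synthetic), synthetic.shape)

# Toy 5x5 face patch with the pupil at row 2, column 2.
visible = np.full((5, 5), 200, dtype=np.uint8)
visible[2, 2] = 10                 # dark pupil in the visible image
nir = np.full((5, 5), 50, dtype=np.uint8)
nir[2, 2] = 220                    # bright retro-reflective pupil in NIR
loc = pupil_from_fusion(visible, nir)
```

A real implementation would detect a connected region rather than a single argmax pixel, but the peak illustrates why the two modalities reinforce each other after inversion.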
Imaging and radar fusion for multiple-object tracking
This document describes methods and systems directed at imaging sensor and radar fusion for multiple-object tracking. Using tracking-by-detection, an object is first detected in a frame captured by an imaging sensor, and then the object is tracked over several consecutive frames by both the imaging sensor and a radar system. The object is tracked by assigning a probability that the object identified in one frame is the same object identified in the consecutive frame. A probability is calculated for each sensor's data set by a supervised-learning neural-network model using the data collected from that sensor. Then, the probabilities associated with each sensor are fused into a refined probability. By fusing the data gathered by the imaging sensor and the radar system in the consecutive frames, a safety system can track multiple objects more accurately and reliably than by using the sensor data separately to track objects.
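One common way to fuse two per-sensor same-object probabilities into a refined probability is a naive-Bayes style product with a uniform prior. The independence assumption and this particular fusion rule are illustrative, not taken from the patent:

```python
def fuse_probabilities(p_camera, p_radar, eps=1e-9):
    """Fuse two independent same-object probabilities into one refined
    probability: odds of "same object" vs "different objects"."""
    joint = p_camera * p_radar
    not_joint = (1 - p_camera) * (1 - p_radar)
    return joint / (joint + not_joint + eps)

# Camera is fairly sure, radar moderately sure: the fused estimate is
# stronger than either sensor alone.
p = fuse_probabilities(0.9, 0.7)
```

This captures the abstract's point that the fused association is more reliable than either sensor's estimate separately.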
Intelligent cataloging method for all-media news based on multi-modal information fusion understanding
The present disclosure provides an intelligent cataloging method for all-media news based on multi-modal information fusion understanding, which obtains multi-modal fusion features by unified representation and fusion understanding of video information, voice information, subtitle bar information, and character information in the all-media news, and realizes automatic slicing, automatic cataloging description, and automatic scene classification of news using the multi-modal fusion features. The beneficial effect of the present disclosure is that it realizes a complete process of automatic, comprehensive cataloging for all-media news, improves the accuracy and generalization of the cataloging method, and greatly reduces manual cataloging time by generating stripping marks, news cataloging descriptions, news classification labels, news keywords, and news characters based on the fusion of the video, audio, and text modes.
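The "unified representation" step can be sketched as projecting each modality's feature vector to a shared dimension and pooling. The projection, the shared dimension, and the averaging rule are all illustrative placeholders for the learned encoders the abstract implies:

```python
import numpy as np

def fuse_modalities(video_feat, audio_feat, text_feat, d=8):
    """Unified multi-modal representation: project each modality's
    feature vector to a shared d-dimensional space, then average."""
    def project(x):
        # Toy projection: uniform averaging rows stand in for a
        # learned linear layer per modality.
        W = np.full((d, x.shape[0]), 1.0 / x.shape[0])
        return W @ x
    parts = [project(f) for f in (video_feat, audio_feat, text_feat)]
    return np.mean(parts, axis=0)  # the multi-modal fusion feature

# Modalities arrive with different native dimensions.
fused = fuse_modalities(np.ones(16), np.ones(32), np.ones(12))
```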
Concealed object detection
A method for detecting the presence of on-body concealed objects includes receiving a visible-domain camera image for a scene, determining, using the visible-domain camera image, a region of interest where a subject is present, receiving an infrared-domain camera image and a millimeter-wave (mmwave) radar image that each cover the region of interest, determining emissivity information for the region of interest using the infrared-domain camera image, determining reflectivity information for the region of interest using the mmwave radar image, and determining a concealed object classification for the subject based on the emissivity information and the reflectivity information. A corresponding system and computer program product for executing the above method are also disclosed herein.
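The physical intuition behind the emissivity/reflectivity pairing is that skin has high infrared emissivity and low mmwave reflectivity, while a concealed metallic object lowers the apparent emissivity and raises the radar return. A minimal threshold-based sketch (both thresholds are assumed parameters, not values from the patent, which would more likely use a trained classifier):

```python
def classify_concealed(emissivity, reflectivity,
                       emissivity_drop=0.85, reflectivity_rise=0.3):
    """Flag a region of interest as containing a concealed object when
    infrared emissivity is anomalously low AND the mmwave radar return
    is anomalously high."""
    suspicious_ir = emissivity < emissivity_drop
    suspicious_radar = reflectivity > reflectivity_rise
    return suspicious_ir and suspicious_radar

clear = classify_concealed(emissivity=0.95, reflectivity=0.1)   # bare skin
flagged = classify_concealed(emissivity=0.6, reflectivity=0.7)  # metal object
```

Requiring both cues to fire is what makes the fusion more robust than either modality alone.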
System and method for oncoming vehicle detection and alerts for a waste collection vehicle
An object detection, tracking and alert system for use in connection with a waste collection vehicle is provided. The system can determine if an external moving object in the surrounding environment of the waste collection vehicle, such as another vehicle or a bicycle, is moving directly towards the waste collection vehicle, and then send one or more alerts to the driver and/or riders on the waste collection vehicle as well as any other waste collection vehicles in the surrounding area.
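The "moving directly towards the waste collection vehicle" test can be sketched as checking whether an object's velocity vector points at the vehicle within an angular tolerance. The cone width, coordinate convention (vehicle at the origin, assumed stationary), and function name are illustrative assumptions:

```python
import math

def approaching_directly(obj_pos, obj_vel, cone_deg=10.0):
    """True when an external object's velocity points at the vehicle
    (at the origin), within an assumed angular tolerance."""
    px, py = obj_pos
    vx, vy = obj_vel
    speed = math.hypot(vx, vy)
    if speed == 0:
        return False
    # Cosine of the angle between the velocity and the direction
    # from the object to the vehicle.
    dot = vx * -px + vy * -py
    cos_angle = dot / (speed * math.hypot(px, py))
    return cos_angle > math.cos(math.radians(cone_deg))

head_on = approaching_directly((50.0, 0.0), (-8.0, 0.0))   # straight at us
crossing = approaching_directly((50.0, 0.0), (0.0, 8.0))   # passing by
```

Only the head-on case would trigger the driver and rider alerts; a crossing object is ignored.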