G06V10/803

SYSTEMS AND METHODS FOR TRAFFIC SIGN VALIDATION

A driver assistance system for a vehicle includes an image obtaining unit configured to obtain image data in proximity to the vehicle, at least one sensor unit configured to provide state information related to a state of the vehicle, and a processing unit. The processing unit is configured to determine a regulation value based on the image data, determine whether a zone condition applies based on the image data, and confirm a validity of the determined regulation value based on the state information, the determined regulation value, the zone condition, and an age of the image data. When the age of the image data exceeds a predetermined threshold duration, the processing unit is configured to revoke the validity, and, upon determination of a zone condition, the processing unit is configured to increase the predetermined threshold duration. The processing unit is also configured to cause the regulation value to be displayed when the validity is confirmed and to prevent display of the regulation value when the validity is revoked.
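The age-based validity logic above can be sketched as follows. This is an illustrative reading of the abstract, not the claimed implementation; the class name, the base threshold of 30 seconds, and the zone multiplier are all assumptions chosen only for this example.

```python
class SignValidator:
    """Hypothetical validity check for a detected regulation value (e.g. a speed limit)."""

    BASE_THRESHOLD_S = 30.0  # assumed base age limit for a detection

    def __init__(self, zone_condition: bool):
        # Per the abstract, a zone condition increases the threshold duration:
        # a zone-wide sign stays applicable until the zone is exited.
        self.threshold_s = self.BASE_THRESHOLD_S * (4.0 if zone_condition else 1.0)

    def is_valid(self, data_age_s: float) -> bool:
        # Validity is revoked once the detection is older than the threshold.
        return data_age_s <= self.threshold_s

    def display_value(self, regulation_value: int, data_age_s: float):
        # Display the value only while validity is confirmed; otherwise suppress it.
        return regulation_value if self.is_valid(data_age_s) else None
```

A fresh detection is displayed; a stale one is suppressed unless a zone condition has stretched the threshold.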

VEHICLE DETECTION AND TRACKING BASED ON WHEELS USING RADAR AND VISION
20170243478 · 2017-08-24 ·

A system and method are provided for detecting remote vehicles relative to a host vehicle using wheel detection. The system and method include tracking wheel candidates based on wheel detection data received from a plurality of object detection devices, comparing select parameters relating to the wheel detection data for each of the tracked wheel candidates, and identifying a remote vehicle by determining if a threshold correlation exists between any of the tracked wheel candidates based on the comparison of select parameters.
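The threshold-correlation step can be illustrated with a minimal sketch. The parameter names (position and wheel speed) and the thresholds are assumptions for this example, not values taken from the patent.

```python
def correlated(cand_a: dict, cand_b: dict,
               max_pos_diff_m: float = 0.5,
               max_speed_diff_mps: float = 1.0) -> bool:
    """Treat two tracked wheel candidates as the same wheel when their
    selected parameters (position, wheel speed) agree within thresholds."""
    dx = cand_a["x_m"] - cand_b["x_m"]
    dy = cand_a["y_m"] - cand_b["y_m"]
    pos_diff = (dx * dx + dy * dy) ** 0.5
    speed_diff = abs(cand_a["speed_mps"] - cand_b["speed_mps"])
    return pos_diff <= max_pos_diff_m and speed_diff <= max_speed_diff_mps


def identify_vehicle(radar_cands: list, vision_cands: list) -> bool:
    """A remote vehicle is identified when any radar/vision candidate pair
    passes the threshold correlation."""
    return any(correlated(r, v) for r in radar_cands for v in vision_cands)
```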

Systems and methods for persona identification using combined probability maps

Disclosed herein are systems and methods for persona identification using combined probability maps. An embodiment takes the form of a method that includes obtaining at least one frame of pixel data; processing the at least one frame of pixel data to generate a hair-identification probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map.
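The extraction step can be sketched as a per-pixel threshold against the probability map. This is a simplification: a real system would combine several maps (the hair-identification map among them), and the 0.5 cutoff here is an assumed value.

```python
def extract_persona(frame, prob_map, threshold=0.5):
    """Build a persona image by keeping only pixels whose probability of
    belonging to the persona meets the threshold.

    frame: rows of (R, G, B) pixel tuples.
    prob_map: per-pixel probability that the pixel belongs to the persona.
    Pixels below the threshold become None (i.e. transparent) in the output.
    """
    return [
        [px if p >= threshold else None
         for px, p in zip(frame_row, prob_row)]
        for frame_row, prob_row in zip(frame, prob_map)
    ]
```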

MACHINE-LEARNING-ASSISTED SELF-IMPROVING OBJECT-IDENTIFICATION SYSTEM AND METHOD

A system and method of identifying and tracking objects comprises registering an identity of a person who visits an area designated for holding objects, capturing an image of the area designated for holding objects, submitting a version of the image to a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, detecting an object in the version of the image, associating the registered identity of the person with the detected object, retraining the deep neural network using the version of the image if the deep neural network is unable to recognize the detected object, and tracking a location of the detected object while the detected object is in the area designated for holding objects.
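The self-improving part of the loop, deciding when a capture should feed back into retraining, might look like the following sketch. The confidence cutoff and data shapes are assumptions; the patent does not specify them.

```python
def process_capture(detections, retrain_queue, min_confidence=0.6):
    """Split detections into recognized objects and retraining samples.

    detections: list of (label, confidence) pairs from the deep network.
    Detections the network cannot confidently recognize are queued so the
    network can be retrained on the corresponding image.
    """
    recognized = []
    for label, conf in detections:
        if conf >= min_confidence:
            recognized.append(label)
        else:
            # Unrecognized detection: keep the sample for retraining.
            retrain_queue.append((label, conf))
    return recognized
```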

SYSTEMS AND METHODS FOR PHENOTYPING

The present invention relates to the field of phenotyping, particularly to systems and methods for collecting, retrieving, and processing data for accurate and sensitive analysis and prediction of a phenotype of an object, particularly of a plant.

METHOD AND SYSTEM FOR COLLABORATIVE MULTI-SATELLITE REMOTE SENSING
20170235996 · 2017-08-17 ·

The present invention incorporates a novel two-step image registration algorithm that achieves sub-pixel accuracy, providing a method and system that can significantly improve remote monitoring of coral reefs and volcanoes using a future NASA remote imager known as HyspIRI by increasing the spatial and temporal resolution of remote sensing data from multiple satellites. The invention focuses on change detection, multi-image registration, target detection, and coral reef and volcano monitoring. The objectives are achieved by accurate and early detection of changes in coral health and volcanic activity, such as by detecting color changes in crater lakes; accurate bottom-type classification in coral reefs; accurate concentration estimation of SO2, volcanic ash, etc.; high temporal resolution of monitoring so that early mitigation steps can be activated; and high spatial resolution in multispectral and hyperspectral images. The same system can also be applied to other remote monitoring applications, such as soil moisture monitoring.

HUMAN ACTIVITY RECOGNITION FUSION METHOD AND SYSTEM FOR ECOLOGICAL CONSERVATION REDLINE

A human activity recognition fusion method and system for ecological protection red lines is disclosed. The method includes: obtaining a pre-stage remote sensing image and a post-stage remote sensing image of a target ecological protection red line region and performing data pre-processing; inputting the pre-processed pre-stage and post-stage remote sensing images into a pre-trained human activity recognition model; identifying a human activity pattern of the target ecological protection red line region as a first detection result; segmenting, calculating, and analyzing the latest image data corresponding to the target ecological protection red line region based on geographical national conditions data to obtain a change pattern as a second detection result; and fusing the first detection result and the second detection result to obtain a change detection pattern of the target ecological protection red line region.

RGB-D FUSION INFORMATION-BASED OBSTACLE TARGET CLASSIFICATION METHOD AND SYSTEM, AND INTELLIGENT TERMINAL
20220309297 · 2022-09-29 ·

An RGB-D fusion information-based obstacle target classification method includes: collecting an original image through a binocular camera within a target range, and acquiring a disparity map of the original image; collecting a color-calibrated RGB image through a reference camera of the binocular camera within the target range; acquiring an obstacle target through disparity clustering in accordance with the disparity map and the color-calibrated RGB image, and acquiring a target disparity map and a target RGB image of the obstacle target; calculating depth information about the obstacle target in accordance with the target disparity map; and acquiring a classification result of the obstacle target through RGB-D channel information fusion in accordance with the depth information and the target RGB image.
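The depth-calculation step rests on the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline in meters, and d the disparity in pixels. The sketch below illustrates that relation; the focal length and baseline values are placeholders, not calibration data from the patent.

```python
def disparity_to_depth(disparity_px: float,
                       focal_px: float = 700.0,
                       baseline_m: float = 0.12) -> float:
    """Convert a disparity value from a rectified binocular rig into depth
    in meters via Z = f * B / d. Parameter defaults are illustrative."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With these placeholder values, a disparity of 84 pixels maps to 1 m; halving the disparity doubles the depth.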

Image processing method and apparatus, computer device, and storage medium

Methods, systems, apparatus, and non-transitory computer readable storage media for image processing are provided. In one aspect, an image processing method includes: generating a garment deformation template image and a first human body template image based on a first semantic segmentation image of a human body in a first image and a target garment image, generating a target garment deformation image by performing deformation on the target garment image based on the garment deformation template image, obtaining a second human body template image by adjusting the first human body template image based on a second semantic segmentation image of the human body in the first image and the garment deformation template image, and transforming the first image into a second image including the human body wearing a target garment based on the target garment deformation image, the second human body template image, and the garment deformation template image.

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20220036046 · 2022-02-03 ·

An image processing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a visible image of a face; receive a near-infrared image of the face; adjust the brightness of the visible image based on a frequency distribution of pixel values of the visible image and a frequency distribution of pixel values of the near-infrared image; specify a relative position at which the visible image is related to the near-infrared image; invert the adjusted brightness of the visible image; detect a region of a pupil from a synthetic image obtained by adding the brightness-inverted visible image and the near-infrared image based on the relative position; and output information on the detected pupil.
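The synthesis step exploits the fact that a pupil is dark in the visible image but bright in the near-infrared image, so inverting the visible image and adding it to the NIR image makes the pupil the brightest region. The sketch below shows only that step, for one row of 8-bit grayscale pixels, assuming the two images are already aligned at the specified relative position.

```python
def synthesize(visible_row, nir_row, max_val=255):
    """Invert the (brightness-adjusted) visible pixels and add the NIR
    pixels, clipping to the 8-bit range. The pupil, dark in visible and
    bright in NIR, ends up near the maximum in the synthetic image."""
    return [min(max_val, (max_val - v) + n)
            for v, n in zip(visible_row, nir_row)]
```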