Patent classifications
G06V10/7515
PROCESSING DEVICE
Erroneous detection caused by erroneous parallax measurement is suppressed so that a step present on a road can be detected accurately. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle. The processing device includes: a stereo matching unit 200 that measures the parallax of the pair of images and generates a parallax image; a step candidate extraction unit 300 that extracts, from the parallax image generated by the stereo matching unit 200, a step candidate of the road on which the vehicle travels; a line segment candidate extraction unit 400 that extracts a line segment candidate from the images acquired by the stereo camera unit 100; an analysis unit 500 that collates the step candidate extracted by the step candidate extraction unit 300 with the line segment candidate extracted by the line segment candidate extraction unit 400 and analyzes the validity of the step candidate based on the collation result and the inclination of the line segment candidate; and a three-dimensional object detection unit 600 that detects a step present on the road based on the analysis result of the analysis unit 500.
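The collation step described above can be sketched as follows. This is an illustrative assumption, not the patent's actual implementation: a step candidate (a set of pixel coordinates taken from the parallax image) is kept only if it lies close to an extracted line segment whose inclination is shallow enough to be a plausible road-step edge. All function names and thresholds are hypothetical.

```python
import math

def segment_inclination_deg(seg):
    """Inclination of a line segment ((x1, y1), (x2, y2)) in degrees, in [0, 180)."""
    (x1, y1), (x2, y2) = seg
    return abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))

def point_to_segment_distance(p, seg):
    """Euclidean distance from point p to the segment seg."""
    (x1, y1), (x2, y2) = seg
    px, py = p
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def validate_step_candidates(candidates, segments, max_dist=2.0, max_incl_deg=30.0):
    """Keep only candidates supported by a nearby, shallow line segment."""
    valid = []
    for cand in candidates:
        for seg in segments:
            if segment_inclination_deg(seg) > max_incl_deg:
                continue  # too steep to be a road-step edge in the image
            if all(point_to_segment_distance(p, seg) <= max_dist for p in cand):
                valid.append(cand)
                break
    return valid
```

For example, a candidate running along a near-horizontal image line survives collation, while an isolated vertical candidate with no matching shallow segment is rejected.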
Multi-imaging mode image alignment
Methods and systems for aligning images of a specimen generated with different modes of an imaging subsystem are provided. One method includes separately aligning first and second images generated with first and second modes, respectively, to a design for the specimen. For a location of interest in the first image, the method includes generating a first difference image for the location of interest and the first mode and generating a second difference image for the location of interest and the second mode. The method also includes aligning the first and second difference images to each other and determining information for the location of interest from results of the aligning.
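The abstract's final alignment of the two difference images can be illustrated with a minimal sketch. The patent does not specify the alignment method; exhaustive search over small integer shifts maximizing a correlation score is one common choice, and is what the hypothetical functions below assume (images are plain lists of pixel rows).

```python
def shift_score(a, b, dy, dx):
    """Sum of products of overlapping pixels when b is shifted by (dy, dx)."""
    h, w = len(a), len(a[0])
    score = 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                score += a[y][x] * b[yy][xx]
    return score

def align_difference_images(a, b, max_shift=2):
    """Return the integer (dy, dx) shift of b that best correlates with a."""
    shifts = [(dy, dx)
              for dy in range(-max_shift, max_shift + 1)
              for dx in range(-max_shift, max_shift + 1)]
    return max(shifts, key=lambda s: shift_score(a, b, *s))
```

In practice sub-pixel alignment and normalized correlation would be used; the brute-force integer search above only conveys the idea of registering two difference images to each other before comparing them.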
SYSTEMS AND METHODS FOR OBJECT DETECTION
A computing system includes a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.
Urban remote sensing image scene classification method in consideration of spatial relationships
An urban remote sensing image scene classification method in consideration of spatial relationships is provided and includes the following steps: cutting a remote sensing image into sub-images in an even and non-overlapping manner; performing visual information coding on each of the sub-images to obtain a feature image Fv; inputting the feature image Fv into a crossing transfer unit to obtain hierarchical spatial characteristics; applying a dimensionality-reducing convolution to the hierarchical spatial characteristics to obtain dimensionality-reduced hierarchical spatial characteristics; and performing a softmax-model-based classification on the dimensionality-reduced hierarchical spatial characteristics to obtain a classification result. The method comprehensively considers the role of two kinds of spatial relationships, namely regional and long-range spatial relationships, in classification, and designs three paths in a crossing transfer unit for relationship fusion, thereby obtaining a better urban remote sensing image scene classification result.
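The first step, cutting the image evenly and without overlap, can be sketched directly; the crossing transfer unit and softmax classifier are learned model components not reproduced here. The function name and the list-of-rows image representation are illustrative assumptions.

```python
def cut_into_subimages(image, tile_h, tile_w):
    """Split an image (a list of pixel rows) into non-overlapping
    tile_h x tile_w sub-images, scanned left-to-right, top-to-bottom."""
    h, w = len(image), len(image[0])
    assert h % tile_h == 0 and w % tile_w == 0, "tiles must divide the image evenly"
    tiles = []
    for ty in range(0, h, tile_h):
        for tx in range(0, w, tile_w):
            tiles.append([row[tx:tx + tile_w] for row in image[ty:ty + tile_h]])
    return tiles
```

Each sub-image would then be fed to the visual information coding stage to produce its slice of the feature image Fv.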
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
According to an embodiment, an image processing device includes one or more processors. The one or more processors are configured to: acquire an image; detect a first repeated pattern from the image; detect an object included in the first repeated pattern; and output the object as a second repeated pattern.
SYSTEM AND METHOD FOR LATERAL VEHICLE DETECTION
A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
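The last two operations, object extraction and bounding-box fitting, can be sketched on an already-warped binary image; the warping step itself is omitted. Connected-component labeling is an assumed stand-in for the patent's unspecified extraction method, and all names are illustrative.

```python
def extract_objects(binary):
    """4-connected components of nonzero pixels; each object is a list of (y, x)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    objects = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    # Visit the four orthogonal neighbours.
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                objects.append(comp)
    return objects

def bounding_box(obj):
    """Axis-aligned bounding box (y_min, x_min, y_max, x_max) of an object."""
    ys = [p[0] for p in obj]
    xs = [p[1] for p in obj]
    return (min(ys), min(xs), max(ys), max(xs))
```

Applying `bounding_box` to each extracted object yields the boxes the abstract describes being drawn around objects in the warped lateral image.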
Methods and apparatuses for updating user authentication data
A method for updating biometric authentication data authenticates an input image using an enrollment database (DB) over a first length of time, the authentication including generating information for authenticating the input image, and updates the enrollment DB based on the first length of time and the generated information.
METHOD FOR IDENTIFYING AUTHENTICITY OF AN OBJECT
A method for identifying authenticity of an object includes maintaining, in an identification server system, a reference image of an original object, the reference image being provided to represent all equivalent original objects, receiving, in the identification server system, one or more input images of the object to be identified, and generating, by the identification server system, a target image from the one or more input images. The method further includes aligning, by the identification server system, the target image with the reference image and analysing, by the identification server system, the target image in relation to the aligned reference image for identifying authenticity of the object.
SENSOR COMPENSATION USING BACKPROPAGATION
An embodiment includes training a first convolutional neural network (CNN) using a plurality of training images to generate first and second trained CNNs, and then adding an interface layer to the second trained CNN. The embodiment processes first and second images in a sequence of images using the first trained CNN to generate first and second result vectors. The embodiment also processes the second image using the second trained CNN and sensor data input to the interface layer to generate a third result vector. The embodiment modifies the sensor data using a compensation value. The embodiment compares the third result vector to the second result vector to generate an error value, and then calculates a modified compensation value using the error value. The embodiment then generates a sensor-compensated trained CNN based on the second trained CNN with the modified compensation value.
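The compensation loop can be conveyed with a toy sketch under heavy assumptions: the two "result vectors" are plain number lists, their squared difference is the error value, and a scalar compensation applied to the sensor input is nudged until the compensated output matches the reference output. The CNN with its interface layer is replaced by a linear stand-in, so this shows only the feedback structure, not the patent's backpropagation through a real network.

```python
def network(image, sensor, compensation):
    """Hypothetical stand-in for the second trained CNN plus interface layer:
    each output element is the input pixel shifted by the (compensated) sensor value."""
    return [p + (sensor + compensation) for p in image]

def error_value(a, b):
    """Squared difference between two result vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def calibrate_compensation(image, sensor, reference, steps=200, lr=0.05):
    """Iteratively update the compensation value against the error's
    numeric slope (a stand-in for backpropagating the error)."""
    comp = 0.0
    eps = 1e-3
    for _ in range(steps):
        e_plus = error_value(network(image, sensor, comp + eps), reference)
        e_minus = error_value(network(image, sensor, comp - eps), reference)
        comp -= lr * (e_plus - e_minus) / (2 * eps)  # descend the error
    return comp
```

With a sensor bias of 0.5 and a reference equal to the unbiased output, the loop drives the compensation toward -0.5, cancelling the bias.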
Method and system for classification of an object in a point cloud data set
A method for classifying an object in a point cloud includes computing first and second classification statistics for one or more points in the point cloud. Closest matches are determined between the first and second classification statistics and respective sets of first and second classification statistics corresponding to a set of N classes of respective first and second classifiers, to estimate that the object is in respective first and second classes. If the first class does not correspond to the second class, a closest fit is performed between the point cloud and model point clouds of a third classifier for only the first and second classes. The object is assigned to the first or second class based on the closest fit, within near real time of receiving the 3D point cloud. A device is operated based on the assigned object class.
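The arbitration logic above can be sketched briefly: two classifiers each pick a class; if they agree, that class wins outright, and only on disagreement is the more expensive closest-fit comparison run against model point clouds for the two contested classes. The mean nearest-point distance used here is an illustrative stand-in for the patent's fit metric, and all names are assumptions.

```python
def closest_fit_distance(cloud, model):
    """Toy fit score: mean squared nearest-point distance from cloud to model."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return sum(min(d2(p, q) for q in model) for p in cloud) / len(cloud)

def classify_point_cloud(cloud, class_a, class_b, models):
    """Arbitrate between the classes chosen by the first two classifiers."""
    if class_a == class_b:
        return class_a  # agreement: no closest fit needed
    # Disagreement: closest fit against just the two contested classes.
    fits = {c: closest_fit_distance(cloud, models[c]) for c in (class_a, class_b)}
    return min(fits, key=fits.get)
```

Restricting the closest fit to two classes instead of all N is what makes the near-real-time assignment plausible: the expensive geometric comparison runs at most twice per cloud.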