G06V10/76

METHOD AND APPARATUS FOR A MANIFOLD VIEW OF SPACE
20190347515 · 2019-11-14

An autonomous vehicle vision system for estimating the category of a detected object whose pose is unknown to the system includes a neural network that applies a mapping process to a region of interest containing the detected object, producing a point in a 3D manifold space. An object detector estimates the category of the detected object based on the relationship between that point and a plurality of separate object clusters in the manifold space. A planner selects an improved route based on the predicted behavior of objects of the estimated category, and a controller operates the autonomous vehicle according to the improved route.
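
Read operationally, the classification step is a nearest-cluster test in the learned manifold space. Below is a minimal sketch in Python, assuming a trained embedding network `embed` and precomputed per-category cluster centroids (the category names and coordinates are hypothetical):

```python
import numpy as np

# Hypothetical per-category cluster centroids in the learned 3D manifold
# space, e.g. obtained by clustering embeddings of labeled training crops.
CLUSTERS = {
    "pedestrian": np.array([[0.1, 0.8, -0.2], [0.2, 0.7, -0.1]]),
    "cyclist":    np.array([[-0.5, 0.1, 0.4]]),
    "vehicle":    np.array([[0.9, -0.3, 0.0], [0.8, -0.4, 0.2]]),
}

def classify_roi(embed, roi_image):
    """Map a region of interest to a 3D manifold point and assign the
    category of the nearest cluster. `embed` is the (assumed) trained
    network mapping an image crop to a 3-vector."""
    point = embed(roi_image)                      # shape (3,)
    best_cat, best_dist = None, np.inf
    for category, centroids in CLUSTERS.items():
        # Distance to the closest centroid among this category's clusters.
        d = np.linalg.norm(centroids - point, axis=1).min()
        if d < best_dist:
            best_cat, best_dist = category, d
    return best_cat, best_dist
```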

Accelerated precomputation of reduced deformable models
10474927 · 2019-11-12

Technologies are disclosed for precomputation of reduced deformable models. In such precomputation, a Krylov subspace iteration may be used to construct a series of inertia modes for an input mesh. The inertia modes may be condensed into a mode matrix. A set of cubature points may be sampled from the input mesh, and cubature weights of the set of cubature points may be calculated for each of the inertia modes in the mode matrix. A training dataset may be generated by iteratively adding training samples to the training dataset until a training error metric converges, wherein each training sample is generated from an inertia mode in the mode matrix and corresponding cubature weights. The reduced deformable model may then be generated from the inertia modes in the training dataset and their corresponding cubature weights.
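
One plausible reading of the Krylov construction: seed with a load vector, repeatedly apply K⁻¹M, and mass-orthonormalize, so each column of the mode matrix is an inertia mode. A sketch with SciPy's sparse LU solver under those assumptions (the seed load and the mode count are not specified by the abstract):

```python
import numpy as np
from scipy.sparse.linalg import splu

def inertia_modes(K, M, b, num_modes):
    """Build a mass-orthonormal Krylov basis {K^-1 b, K^-1 M K^-1 b, ...}.
    K: sparse SPD stiffness matrix, M: sparse mass matrix, b: seed load."""
    solve = splu(K.tocsc()).solve
    modes = []
    u = solve(b)
    for _ in range(num_modes):
        # Mass-orthogonalize against previous modes (modified Gram-Schmidt).
        for v in modes:
            u = u - v * (v @ (M @ u))
        u = u / np.sqrt(u @ (M @ u))   # normalize so that u^T M u = 1
        modes.append(u)
        u = solve(M @ u)               # next Krylov direction
    return np.column_stack(modes)      # mode matrix, one column per mode
```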

Systems and methods for color and pattern analysis of images of wearable items
10402648 · 2019-09-03

Disclosed are methods, systems, and non-transitory computer-readable media for color and pattern analysis of images including wearable items. For example, a method may include receiving an image depicting a wearable item; identifying the wearable item within the image, either by identifying the face of an individual wearing it or by segmenting a foreground silhouette of the item from the background portions of the image; determining a patch portion of the identified item that is representative of the item as depicted; deriving one or more patterns and one or more colors of the item based on image analysis of the determined patch portion; and transmitting information regarding the derived colors and patterns.
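
The color-derivation step could be realized as k-means clustering over the pixels of the representative patch; this sketch covers only colors (pattern analysis omitted) and assumes the patch has already been located:

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(patch, k=3):
    """Derive up to k dominant colors from an RGB patch of a wearable item.
    `patch` is an (H, W, 3) uint8 array, assumed already cropped to a
    representative region of the garment."""
    pixels = patch.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    # Order the colors by how much of the patch they cover.
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    colors = km.cluster_centers_[order].astype(np.uint8)
    return colors, counts[order] / counts.sum()
```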

Processing device, processing method, and information storage device
10395092 · 2019-08-27

A processing device includes a processor including hardware. The processor is configured to implement an image acquisition process that acquires a tissue image obtained by capturing an image of tissue, and a process that determines a property of the acquired tissue image and sets a plurality of identification criteria for identifying the state of the tissue as normal or abnormal, based on the tissue image and its determined property.
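
As an illustration of property-dependent criteria, the sketch below uses mean brightness as a stand-in for the image property and simple thresholds as stand-in identification criteria; all names and values are hypothetical:

```python
import numpy as np

def select_criteria(tissue_image):
    """Pick identification criteria based on a property of the acquired
    image. Here the property is mean brightness and each criterion is a
    feature threshold -- purely illustrative."""
    brightness = float(np.mean(tissue_image))
    if brightness < 60:       # dim image: relax the thresholds
        return {"texture": 0.35, "redness": 0.40}
    elif brightness < 150:    # normal exposure
        return {"texture": 0.50, "redness": 0.55}
    else:                     # bright/overexposed: tighten the thresholds
        return {"texture": 0.65, "redness": 0.60}

def classify(tissue_image, features):
    """Label the tissue normal/abnormal by testing features against the
    criteria selected for this particular image."""
    criteria = select_criteria(tissue_image)
    abnormal = any(features[name] > th for name, th in criteria.items())
    return "abnormal" if abnormal else "normal"
```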

METHOD OF AND SYSTEM FOR PERFORMING OBJECT RECOGNITION IN DATA ACQUIRED BY ULTRAWIDE FIELD OF VIEW SENSORS
20240153261 · 2024-05-09

There is provided a method and system for training an object recognition machine learning model to perform object recognition in data acquired by ultrawide field of view (UW FOV) sensors, thereby obtaining a distortion-aware object recognition model. The object recognition model comprises convolution layers, each associated with a set of kernels. During training on a UW FOV labelled training dataset, deformable kernels are learned in a manifold space, mapped back to Euclidean space, and used to perform convolutions that yield the output feature maps from which object recognition predictions are made. Model parameters of the distortion-aware object recognition model may be transferred to other object recognition architectures, which may be further compressed for deployment on embedded systems such as electronic devices on board autonomous vehicles.
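
A rough sketch of the convolution stage, using `torchvision.ops.deform_conv2d` with per-pixel kernel offsets derived from a toy radial fisheye model (the actual model learns its deformable kernels in a manifold space; the distortion function and its strength here are assumptions):

```python
import torch
from torchvision.ops import deform_conv2d

def distortion_aware_conv(x, weight, fov_scale=0.3):
    """3x3 convolution whose sampling grid is dilated according to a toy
    radial fisheye model: kernels widen toward the image border, mimicking
    kernels learned in a manifold space and mapped back to Euclidean space.
    `x` is (N, C_in, H, W), `weight` is (C_out, C_in, 3, 3)."""
    n, _, h, w = x.shape
    # Normalized radius of each output pixel from the image center.
    ys = torch.linspace(-1, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, w).expand(h, w)
    r = torch.sqrt(xs ** 2 + ys ** 2)
    # The 9 regular kernel offsets, interleaved as (dy, dx) pairs.
    base = torch.tensor([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)],
                        dtype=torch.float32)            # (9, 2)
    # Scale them by fov_scale * r: the effective receptive field grows
    # where fisheye compression is strongest (near the border).
    extra = (fov_scale * r).view(1, 1, h, w)            # (1, 1, H, W)
    offset = base.view(1, 18, 1, 1) * extra             # (1, 18, H, W)
    offset = offset.expand(n, 18, h, w)
    return deform_conv2d(x, offset, weight, padding=1)
```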

COMPUTER SYSTEMS AND COMPUTER-IMPLEMENTED METHODS SPECIALIZED IN TRACKING FACES ACROSS VISUAL REPRESENTATIONS
20190236335 · 2019-08-01

Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and systems of the present invention are specifically configured to process image data to detect and align a face present in a particular frame.
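
A minimal face-tracking loop in this spirit could pair OpenCV's Haar cascade detector with IoU-based association across frames; alignment is omitted, and the detector choice is an assumption rather than the patent's method:

```python
import cv2

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(a[2] * a[3] + b[2] * b[3] - inter)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def track(video_path):
    """Associate the face detected in each frame with the previous frame's
    face by best IoU overlap, returning the per-frame box sequence."""
    cap, prev, tracks = cv2.VideoCapture(video_path), None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            box = (max(faces, key=lambda f: iou(f, prev))
                   if prev is not None else faces[0])
            prev = box
            tracks.append(tuple(box))
    cap.release()
    return tracks
```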

PHOTOGRAPH-BASED ASSESSMENT OF DENTAL TREATMENTS AND PROCEDURES

The current document is directed to methods and systems for monitoring a dental patient's progress during a course of treatment. A three-dimensional model of the expected positions of the patient's teeth can be projected, in time, from a three-dimensional model of the patient's teeth prepared prior to beginning the treatment. A digital camera is used to take one or more two-dimensional photographs of the patient's teeth, which are input to a monitoring system. The monitoring system determines virtual-camera parameters for each two-dimensional input image with respect to the time-projected three-dimensional model, uses the determined virtual-camera parameters to generate two-dimensional images from the three-dimensional model, and then compares each input photograph to the corresponding generated two-dimensional image in order to determine how closely the three-dimensional arrangement of the patient's teeth corresponds to the time-projected three-dimensional arrangement.
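
The virtual-camera determination can be read as a pose optimization that minimizes the mismatch between each photograph and a rendering of the time-projected model. A sketch under that reading, with `render` standing in for an actual renderer and plain MSE as the comparison metric (both assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def fit_virtual_camera(photo, render, x0):
    """Find virtual-camera parameters (e.g. 3 rotations, 3 translations,
    focal length) for which the rendered 3D tooth model best matches the
    input photograph. `render(params) -> image` is an assumed renderer
    producing an image the same shape as `photo`; `x0` is the initial
    parameter guess."""
    def mismatch(params):
        synthetic = render(params)
        diff = synthetic.astype(np.float32) - photo.astype(np.float32)
        return float(np.mean(diff ** 2))
    # Derivative-free search, since the renderer is a black box here.
    result = minimize(mismatch, x0, method="Nelder-Mead")
    return result.x, result.fun   # best parameters and residual mismatch
```

The residual mismatch itself then serves as the progress measure: a small residual means the photographed arrangement tracks the time-projected model closely.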

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20190164008 · 2019-05-30

An image processing apparatus includes a setting unit, a detection unit, a matrix calculation unit, a calculation unit, an obtaining unit, a deriving unit, a patch correction unit, and a generation unit. The setting unit sets a target patch in an input image. The detection unit detects a plurality of similar patches in the input image. The matrix calculation unit calculates a covariance matrix representing correlation between pixels based on the plurality of similar patches. The calculation unit calculates eigenvalues and eigenvectors of the covariance matrix. The obtaining unit obtains a noise amount in the input image. The deriving unit derives a correction matrix based on the eigenvalues, the eigenvectors, the noise amount, and the number of similar patches. The patch correction unit corrects values of pixels in the similar patches based on the correction matrix. The generation unit generates an output image by combining the corrected similar patches.
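
Taken together, the units describe a patch-space eigen-shrinkage denoiser. The sketch below assumes a Wiener-style correction matrix built from the eigenvalues and the noise variance; the patent's exact correction formula may differ:

```python
import numpy as np

def correct_patches(similar, sigma2):
    """Shrink a stack of similar patches along their principal directions.
    `similar` is (N, P): N vectorized similar patches of P pixels each;
    `sigma2` is the estimated noise variance of the input image."""
    mean = similar.mean(axis=0)
    centered = similar - mean
    n = similar.shape[0]                      # number of similar patches
    cov = centered.T @ centered / n           # (P, P) inter-pixel covariance
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    # Wiener-like gains: keep energy above the noise floor, damp the rest.
    gains = np.maximum(evals - sigma2, 0) / np.maximum(evals, 1e-12)
    correction = evecs @ np.diag(gains) @ evecs.T
    return centered @ correction + mean       # corrected similar patches
```

Aggregating the corrected patches back into the image (with overlap averaging) then yields the output image.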

Systems and Methods for Data Representation in an Optical Measurement System

An illustrative method includes accessing, by a computing device, a model simulating light scattered by a simulated target, the model comprising a plurality of parameters. The method further includes generating, by the computing device, a set of possible histogram data using the model with a plurality of values for the parameters. The method further includes determining, by the computing device, a set of components that represent the set of possible histogram data, the set of components having a reduced dimensionality from the set of possible histogram data.
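
Concretely, the reduced components can be obtained by sweeping the model parameters, stacking the resulting histograms, and running PCA via the SVD. The sketch below substitutes a toy gamma-shaped temporal point-spread function for the scattering model (the model form, parameter ranges, and 99.9% energy cutoff are all assumptions):

```python
import numpy as np

def tpsf(t, mu_a, mu_s):
    """Toy stand-in for the light-scattering model: a gamma-shaped temporal
    point-spread function parameterized by absorption/scattering values."""
    h = (t ** 2) * np.exp(-t * (mu_a + 0.1 * mu_s))
    return h / h.sum()

# Sweep the model parameters to build the set of possible histograms.
t = np.linspace(0.01, 10.0, 128)
histograms = np.array([tpsf(t, mu_a, mu_s)
                       for mu_a in np.linspace(0.05, 0.5, 20)
                       for mu_s in np.linspace(0.5, 5.0, 20)])

# Reduced-dimensionality components via SVD (PCA on the histogram set).
mean = histograms.mean(axis=0)
_, s, vt = np.linalg.svd(histograms - mean, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.999)) + 1
components = vt[:k]   # each histogram ~ mean + coefficients @ components
```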

Value bill identifying method

Provided is a value bill identifying method, which includes: step 1, collecting, by a color collection device including multiple color sensors, color data of a to-be-detected value bill and preprocessing the collected color data; step 2, extracting a feature from the preprocessed color data; step 3, matching the extracted feature against the feature template set of each type of value bill to obtain matching scores, and taking the feature template with the highest score as the matched template of the color data; and step 4, determining the type of the value bill based on the matching result.
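
A compact sketch of steps 2 through 4, assuming the preprocessed color data arrives as a (sensors, samples, RGB) array and that features are per-sensor channel means scored by normalized correlation (feature choice, scoring, and acceptance threshold are all assumptions):

```python
import numpy as np

def identify_bill(color_data, templates):
    """Match preprocessed multi-sensor color data against per-denomination
    feature templates. `templates` maps a bill type to a list of feature
    vectors of the same length as the extracted feature."""
    # Step 2: feature extraction -- mean R/G/B response per sensor.
    feature = color_data.mean(axis=1).ravel()   # (sensors, samples, 3) -> flat
    feature = (feature - feature.mean()) / (feature.std() + 1e-12)

    # Step 3: score the feature against every template of every bill type.
    best_type, best_score = None, -np.inf
    for bill_type, tset in templates.items():
        for template in tset:
            tmpl = (template - template.mean()) / (template.std() + 1e-12)
            score = float(feature @ tmpl) / len(tmpl)  # normalized correlation
            if score > best_score:
                best_type, best_score = bill_type, score

    # Step 4: accept only a sufficiently confident match (threshold assumed).
    return best_type if best_score > 0.8 else "unknown"
```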