G06V10/7715

IMAGE RESTORATION METHOD AND APPARATUS
20230005114 · 2023-01-05

The present embodiment provides an image restoration method and apparatus which generate independent restoration models by performing learning separately for each of a plurality of resolutions, receive a distorted image, and apply, from among the independent restoration models, the restoration model corresponding to the resolution of the distorted image, thereby restoring the distorted image into an improved upscaled image centered on a restoration target object within the distorted image.
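The per-resolution dispatch described above can be sketched as a lookup from input resolution to an independently trained model. The model internals below are placeholders (a trivial 2x upscale), not the patented restoration network:

```python
# Sketch of resolution-based model dispatch: one independent restoration model
# per supported input resolution; an incoming distorted image is routed to the
# model matching its resolution. The "restoration" here is a placeholder.
import numpy as np

class RestorationModel:
    """Hypothetical per-resolution model: here just a nearest-neighbour 2x upscale."""
    def __init__(self, train_resolution):
        self.train_resolution = train_resolution  # (h, w) this model was trained for

    def restore(self, image):
        # Placeholder restoration: duplicate each pixel to double both dimensions.
        return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

# One independent model per supported input resolution.
models = {(240, 320): RestorationModel((240, 320)),
          (480, 640): RestorationModel((480, 640))}

def restore(distorted):
    model = models[distorted.shape[:2]]   # pick the model by input resolution
    return model.restore(distorted)

low_res = np.zeros((240, 320))
restored = restore(low_res)
print(restored.shape)  # (480, 640): upscaled output
```

A real system would also need a fallback (e.g. resizing to the nearest supported resolution) for inputs whose resolution has no dedicated model.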

A System and a Method for Generating an Image Recognition Model and Classifying an Input Image
20230237769 · 2023-07-27

A method of generating an image recognition model for recognising an input image and a system thereof are provided. The method includes appending at least one feature extraction layer to the image recognition model, extracting a plurality of feature vectors from a set of predetermined images, grouping the plurality of feature vectors into a plurality of categories, clustering the plurality of feature vectors of each of the plurality of categories into at least one cluster, determining at least one centroid for each of the at least one cluster, such that each cluster comprises at least one centroid and each centroid is represented by a feature vector, generating a classification layer based on the feature vectors of the centroids of the plurality of categories, and appending the classification layer to the image recognition model. In addition, a method of classifying an input image and a system thereof are provided.
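A minimal sketch of the centroid-based classification layer, assuming k-means as the clustering step (the abstract does not fix the algorithm) and toy 8-dimensional features for two hypothetical categories:

```python
# Cluster per-category feature vectors, keep the centroids, and classify a new
# feature vector by the category of its nearest centroid.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Minimal k-means returning cluster centroids."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Feature vectors grouped into two categories (as if extracted by the backbone).
features = {"cat": rng.normal(0.0, 0.1, size=(50, 8)),
            "dog": rng.normal(1.0, 0.1, size=(50, 8))}

# "Classification layer": one or more labelled centroids per category.
centroid_vecs, centroid_labels = [], []
for label, X in features.items():
    for c in kmeans(X, k=2):
        centroid_vecs.append(c)
        centroid_labels.append(label)
centroid_vecs = np.stack(centroid_vecs)

def classify(v):
    """Nearest-centroid classification of a feature vector."""
    return centroid_labels[np.argmin(((centroid_vecs - v) ** 2).sum(-1))]

print(classify(np.ones(8)))  # "dog": nearest centroids sit near 1.0
```

Keeping several centroids per category lets the layer represent multi-modal categories that a single class prototype could not.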

ESTIMATION METHOD, ESTIMATION APPARATUS AND PROGRAM

An estimation method according to an embodiment causes a computer to execute: a calculation step of using a plurality of images obtained by a plurality of imaging devices imaging a three-dimensional space in which a plurality of objects reside, to calculate representative points of pixel regions representing the objects among pixel regions of the images; a position estimation step of estimating positions of the objects in the three-dimensional space, based on the representative points calculated by the calculation step; an extraction step of extracting predetermined feature amounts from image regions representing the objects; and an attitude estimation step of estimating attitudes of the objects in the three-dimensional space, through a preliminarily learned regression model, using the positions estimated by the position estimation step and the feature amounts extracted by the extraction step.
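The calculation step's representative points can be sketched as mask centroids, one simple choice the abstract leaves open; triangulation across views and the pose regression model are not shown:

```python
# Representative point of an object's pixel region (binary mask) in one view:
# the centroid of the mask, in (row, col) image coordinates.
import numpy as np

def representative_point(mask):
    """Centroid (row, col) of a binary pixel region."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 4:8] = True               # object occupies rows 2-4, cols 4-7
r, c = representative_point(mask)   # centroid: row 3.0, col 5.5
```

Given such a point in two or more calibrated views, the 3D position in the position estimation step would typically follow by triangulation.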

CLEANING AREA ESTIMATION DEVICE AND METHOD FOR ESTIMATING CLEANING AREA
20230000302 · 2023-01-05

A cleaning area estimation device (30) includes an estimation unit (33) that estimates dirt information (D2) about the inside of a cleaning area on the basis of image information (D1) obtained by an imaging device (10) imaging the cleaning area, and a generation unit (34) that generates map information (D3) indicating a map of the dirt information about the cleaning area on the basis of the estimated time-series dirt information (D2).
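The map-generation step can be sketched as accumulating per-frame dirt estimates over time into a grid map. The dirt estimator itself (from camera images) is assumed, and the exponential blend is an illustrative choice:

```python
# Accumulate time-series dirt estimates (D2) into a dirt map (D3) over a
# grid-discretised cleaning area.
import numpy as np

grid = (4, 4)                      # cleaning area discretised into cells
dirt_map = np.zeros(grid)          # D3: accumulated dirt score per cell

def update_map(dirt_map, frame_dirt, alpha=0.5):
    """Blend a new time-step dirt estimate (D2) into the running map."""
    return (1 - alpha) * dirt_map + alpha * frame_dirt

# Two time-steps of estimated dirt, as if produced from the imaging device.
for frame in [np.eye(4), 2 * np.eye(4)]:
    dirt_map = update_map(dirt_map, frame)

print(dirt_map[0, 0])  # 1.25 after blending estimates 1.0 then 2.0
```

The blend weight `alpha` trades responsiveness to new observations against stability of the map.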

Learning observation representations by predicting the future in latent space

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network that is configured to process an input observation to generate a latent representation of the input observation. In one aspect, a method includes: obtaining a sequence of observations; for each observation in the sequence of observations, processing the observation using the encoder neural network to generate a latent representation of the observation; for each of one or more given observations in the sequence of observations: generating a context latent representation of the given observation; and generating, from the context latent representation of the given observation, a respective estimate of the latent representations of one or more particular observations that are after the given observation in the sequence of observations.
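The prediction structure above can be sketched with toy linear maps standing in for the encoder, context aggregator, and predictor (all illustrative; the abstract does not fix the architectures):

```python
# Encoder maps each observation to a latent; a context latent summarises the
# latents up to step t; a predictor estimates future latents from the context.
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(4, 6))    # encoder: 6-dim observation -> 4-dim latent
W_pred = rng.normal(size=(4, 4))   # predictor: context -> future latent estimate

observations = rng.normal(size=(8, 6))   # a sequence of 8 observations
latents = observations @ W_enc.T         # latent representation per observation

t = 4
context = latents[: t + 1].mean(axis=0)  # context latent for the given observation
estimate = W_pred @ context              # estimated latent of observation t+1

# Contrastive-style scores: the estimate should match latents[t+1] better than
# latents at other steps; a loss on such scores trains the encoder.
score_true = estimate @ latents[t + 1]
score_neg = estimate @ latents[0]
```

In practice the context would be produced by an autoregressive network over past latents rather than a mean, and several future offsets would be predicted at once.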

Generating saliency masks for inputs of models using saliency metric

An example system includes a processor to receive an input and a model trained to classify inputs. The processor is to iteratively generate a perturbed input that optimizes a saliency metric including a classification term, a sparsity term, and a smoothness term, while keeping parameters of the model constant. The processor is to also detect that a predefined number of iterations has been exceeded or that the values of the perturbed input have converged. The processor is to further generate a saliency mask based on the perturbation of the perturbed input in response to detecting that the predefined number of iterations has been exceeded or that the values have converged.
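A rough sketch of the saliency metric on a toy model: a fixed linear scorer stands in for the trained classifier, and the perturbed input is optimised by numerical gradient descent on a metric combining classification, sparsity (L1), and smoothness (total variation) terms. The term weights and the toy model are illustrative assumptions:

```python
# Optimise a perturbed input x against a frozen model w; derive the saliency
# mask from where x ended up differing from the original input x0.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # frozen toy "model": score = w . x
x0 = rng.normal(size=16)           # original input (flattened 4x4 image)

def metric(x):
    classification = w @ x                        # raise the class score
    sparsity = 0.1 * np.abs(x - x0).sum()         # perturb few pixels
    img = (x - x0).reshape(4, 4)
    smoothness = 0.1 * (np.abs(np.diff(img, axis=0)).sum()
                        + np.abs(np.diff(img, axis=1)).sum())
    return -classification + sparsity + smoothness

def num_grad(f, x, eps=1e-4):
    """Central-difference gradient (a real system would use autodiff)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

x = x0.copy()
for _ in range(100):               # stop after a fixed iteration budget
    x -= 0.05 * num_grad(metric, x)

saliency_mask = np.abs(x - x0).reshape(4, 4) > 1e-3  # where the input moved
print(metric(x) < metric(x0))  # True: the metric decreased
```

The sparsity and smoothness terms are what make the resulting mask compact and contiguous rather than scattered single pixels.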

Machine learning-based root cause analysis of process cycle images

The technology disclosed relates to classification of process cycle images to predict success or failure of process cycles. The technology disclosed includes capturing and processing images of sections arranged on an image generating chip in a genotyping process. Image description features of production cycle images are created and given as input to classifiers. A trained classifier separates successful production images from unsuccessful or failed production images. The failed production images are further classified by a trained root cause classifier into various categories of failure.
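The two-stage cascade can be sketched as below; the threshold rules and failure category names are hypothetical stand-ins for the trained classifiers:

```python
# Stage 1 separates success from failure; stage 2 assigns a root cause only to
# the failures, using image-description features as input.
def success_classifier(features):
    """Stage 1: separates successful from failed production images."""
    return "success" if features["mean_intensity"] > 0.5 else "failed"

def root_cause_classifier(features):
    """Stage 2: assigns a failure category (hypothetical rules)."""
    if features["edge_artifacts"] > 0.3:
        return "chip_misalignment"
    return "reagent_issue"

def analyse(features):
    label = success_classifier(features)
    if label == "success":
        return ("success", None)
    return ("failed", root_cause_classifier(features))

print(analyse({"mean_intensity": 0.2, "edge_artifacts": 0.6}))
# ('failed', 'chip_misalignment')
```

Cascading keeps the root-cause model focused on the (typically rarer) failure cases instead of forcing one classifier to learn both decisions at once.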

3D segmentation using space carving and 2D convolutional neural networks

A system for generating a 3D segmentation of a target volume is provided. The system accesses views of an X-ray scan of a target volume. The system applies a 2D CNN to each view to generate a 2D multi-channel feature vector for each view. The system applies a space carver to generate a 3D channel volume for each channel based on the 2D multi-channel feature vectors. The system then applies a linear combining technique to the 3D channel volumes to generate a 3D multi-label map that represents a 3D segmentation of the target volume.
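The space-carving step for one feature channel can be sketched as follows; projections are axis-aligned for brevity, where a real scanner geometry would use the views' projection matrices:

```python
# Each view contributes a 2D per-pixel feature map for a channel; a voxel
# keeps the minimum of the values it projects to across views, so any view
# that rules a ray out carves the voxels along it.
import numpy as np

n = 4                                   # voxel grid is n x n x n
view_xy = np.ones((n, n))               # channel map seen along the z axis
view_xz = np.ones((n, n))               # channel map seen along the y axis
view_xy[0, 0] = 0.0                     # this pixel says "not the target"

# Carve: broadcast each 2D map along its missing axis, combine with min.
vol_from_xy = np.broadcast_to(view_xy[:, :, None], (n, n, n))
vol_from_xz = np.broadcast_to(view_xz[:, None, :], (n, n, n))
channel_volume = np.minimum(vol_from_xy, vol_from_xz)

# Every voxel along the ray through pixel (0, 0) of the xy view is carved out.
print(channel_volume[0, 0].sum())  # 0.0
```

Repeating this per channel yields the 3D channel volumes that the linear combining step then merges into the multi-label map.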

Image processing method and image processing system

An image processing method includes analyzing multiple image data based on an Illumination-invariant Feature Network (IF-NET) with an image processing device to generate corresponding sets of eigenvectors, in which the image data include first image data related to at least one first feature of the sets of eigenvectors and second image data related to at least one second feature of the sets of eigenvectors; choosing a corresponding first training set of tiles and a second training set of tiles from the first image data and the second image data with the image processing device based on IF-NET, and computing on both training sets of tiles to generate at least one loss value; and adjusting IF-NET based on the at least one loss value. An image processing system is also disclosed herein.
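One common choice for such a loss over paired tiles is a margin-based triplet loss: tiles of the same scene under different illumination should map to nearby eigenvectors, tiles of different scenes should stay apart. The loss form and the toy vectors below are assumptions, not taken from the abstract:

```python
# Margin-based triplet loss on eigenvectors of corresponding tiles.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Low when the anchor is nearer its positive than its negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical eigenvectors for corresponding tiles under two illuminations.
anchor = np.array([1.0, 0.0])     # tile from the first image data
positive = np.array([0.9, 0.1])   # same tile, second illumination
negative = np.array([-1.0, 0.5])  # tile from a different scene

loss = triplet_loss(anchor, positive, negative)
```

The adjusting step would then backpropagate this loss value through IF-NET to pull matching tiles together in feature space.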

META-OPTIC ACCELERATORS FOR OBJECT CLASSIFIERS
20230237790 · 2023-07-27

A system for identifying objects in images is provided. The system may include an optical front end and a digital back end. The optical front end includes a metalens that duplicates a received image into multiple images, and a metasurface that receives the duplicate images and outputs a feature map based on them. The feature map may be equivalent to the computationally expensive convolution operations previously performed by a neural network. The feature map is provided to the digital back end, which uses a neural network to classify the object. Because the feature map already encodes the convolution operations, the digital back end can classify the object more quickly and using fewer computing resources than previous systems.