G06V10/449

Surface defect inspection method and surface defect inspection apparatus

A surface defect inspection method includes: acquiring an original image by capturing an image of a subject of an inspection; generating texture feature images by applying a filtering process using spatial filters to the original image; generating a feature vector at each position of the original image, by extracting a value at a corresponding position from each of the texture feature images, for each of the positions of the original image; generating an abnormality level image representing an abnormality level for each position of the original image, by calculating, for each of the feature vectors, an abnormality level in a multi-dimensional distribution formed by the feature vectors; and detecting a part having the abnormality level that is higher than a predetermined level in the abnormality level image as a defect portion or a defect candidate portion.
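The claimed pipeline (filter bank, per-pixel feature vectors, abnormality level in the feature distribution, thresholding) can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the filter bank, the squared Mahalanobis distance as the abnormality measure, and the threshold value are all assumptions.

```python
import numpy as np

def conv2d_same(img, kernel):
    # Minimal same-size 2-D filtering (stands in for the spatial-filter
    # step; any texture filter bank could be plugged in here).
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def abnormality_image(original, kernels, threshold=9.0):
    # 1) texture feature images, 2) a feature vector at each pixel,
    # 3) abnormality level in the multi-dimensional feature distribution
    # (here: squared Mahalanobis distance), 4) defect-candidate mask.
    features = np.stack([conv2d_same(original, k) for k in kernels], axis=-1)
    h, w, d = features.shape
    vecs = features.reshape(-1, d)
    mean = vecs.mean(axis=0)
    cov = np.cov(vecs, rowvar=False) + 1e-6 * np.eye(d)  # regularized
    diff = vecs - mean
    abnormality = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    abnormality = abnormality.reshape(h, w)
    return abnormality, abnormality > threshold
```

An isolated bright spot on an otherwise uniform surface produces feature vectors far from the bulk of the distribution, so its abnormality level exceeds the threshold and it is flagged as a defect candidate.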

NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING ANALYSIS PROGRAM, ANALYSIS APPARATUS, AND ANALYSIS METHOD

An analysis method implemented by a computer includes: generating a refine image by changing an incorrect inference image such that a correct label score of inference is maximized, the incorrect inference image being an input image when an incorrect label is inferred in an image recognition process; and narrowing, based on a score of a label, a predetermined region to specify an image section that causes incorrect inference, the score of the label being inferred by inputting to an inferring process an image obtained by replacing the predetermined region in the incorrect inference image with the refine image.
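The two claimed steps can be sketched with a toy model. This is a hedged illustration only: gradient ascent on the correct-label score is one plausible way to "change" the incorrect inference image, and the region-replacement probe assumes a rectangular candidate region; `grad_fn` and `infer` are hypothetical stand-ins for the real inference process.

```python
import numpy as np

def refine_image(x_wrong, grad_fn, steps=50, lr=0.05):
    # Generate the "refine image": nudge the incorrectly inferred input so
    # the correct label's score is maximized (toy gradient ascent).
    x = x_wrong.copy()
    for _ in range(steps):
        x = np.clip(x + lr * grad_fn(x), 0.0, 1.0)  # keep a valid image
    return x

def region_score(infer, x_wrong, x_refine, region, label):
    # Replace one candidate region of the incorrect inference image with
    # the refine image and re-infer; regions whose replacement recovers the
    # correct label's score are the ones that caused the incorrect inference.
    r0, r1, c0, c1 = region
    probe = x_wrong.copy()
    probe[r0:r1, c0:c1] = x_refine[r0:r1, c0:c1]
    return infer(probe)[label]
```

Comparing `region_score` across candidate regions narrows down the image section responsible for the incorrect inference.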

Predicting recurrence in early stage non-small cell lung cancer (NSCLC) with integrated radiomic and pathomic features

Embodiments predict early stage NSCLC recurrence, and include processors configured to access a pathology image of a region of tissue demonstrating early stage NSCLC; extract a set of pathomic features from the pathology image; access a radiological image of the region of tissue; extract a set of radiomic features from the radiological image; generate a combined feature set that includes at least one member of the set of pathomic features, and at least one member of the set of radiomic features; compute a probability that the region of tissue will experience NSCLC recurrence based, at least in part, on the combined feature set; and classify the region of tissue as recurrent or non-recurrent based, at least in part, on the probability. Embodiments may display the classification, or generate a personalized treatment plan based on the classification.
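The combine-then-classify step can be sketched with a simple logistic model. This is an assumption for illustration; the patent does not specify the classifier, and the weights, bias, and threshold below are hypothetical.

```python
import numpy as np

def recurrence_probability(pathomic, radiomic, weights, bias=0.0):
    # Combined feature set: at least one pathomic and one radiomic feature,
    # concatenated; a logistic model maps it to a recurrence probability.
    combined = np.concatenate([pathomic, radiomic])
    z = float(np.dot(weights, combined)) + bias
    return 1.0 / (1.0 + np.exp(-z))

def classify_recurrence(prob, threshold=0.5):
    # Classify the region of tissue based, at least in part, on the probability.
    return 'recurrent' if prob >= threshold else 'non-recurrent'
```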

IMAGE BASED COUNTERFEIT DETECTION

Systems and methods for authenticating material samples are provided. Digital images of the samples are processed to extract computer-vision features, which are used to train a classification algorithm along with location and optional time information. The extracted features/information of a test sample are evaluated by the trained classification algorithm to identify the test sample. The results of the evaluation are used to track and locate counterfeits or authentic products.
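A minimal sketch of the feature/information vector and the trained-classifier evaluation, under stated assumptions: the nearest-centroid rule is a toy stand-in for the trained classification algorithm, and the shape of the location/time encoding is hypothetical.

```python
import numpy as np

def build_sample_vector(cv_features, location, timestamp=None):
    # Computer-vision features augmented with location and optional time
    # information, as used both for training and for evaluating a test sample.
    extra = list(location) + ([timestamp] if timestamp is not None else [])
    return np.concatenate([np.asarray(cv_features, float), np.asarray(extra, float)])

def authenticate(test_vec, class_means, class_labels):
    # Toy nearest-centroid classifier: the test sample receives the label
    # ('authentic' / 'counterfeit') of the closest class mean.
    dists = [np.linalg.norm(test_vec - m) for m in class_means]
    return class_labels[int(np.argmin(dists))]
```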

INSTANTANEOUS SEARCH AND COMPARISON METHOD FOR LARGE-SCALE DISTRIBUTED PALM VEIN MICRO-FEATURE DATA
20200342204 · 2020-10-29

The invention proposes an instantaneous search and comparison method for large-scale distributed palm vein micro-feature data, which consists of three parts: 1) feature extraction and computation on palm vein micro-feature images; 2) building a feature database; and 3) search and comparison. The technical solution, which borrows ideas from GIS and web search, applies them to the search and comparison of palm vein micro-feature data, enabling instantaneous recognition over massive palm vein micro-feature data in large-scale, high-traffic, and high-frequency application scenarios, and solving the technical problem that traditional palm vein recognition methods cannot be applied to such scenarios because of their low speed.
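Parts 2 and 3 can be sketched with a web-search-style inverted index over quantized micro-feature keys. This is an assumed realization of the "web search method" idea, not the patented data structure; the key quantization and `min_hits` parameter are hypothetical.

```python
from collections import defaultdict

def build_index(templates):
    # Part 2 (building a feature database): an inverted index mapping each
    # quantized micro-feature key to the palm IDs that contain it, so lookup
    # cost depends on the query, not on the database size.
    index = defaultdict(set)
    for pid, keys in templates.items():
        for k in keys:
            index[k].add(pid)
    return index

def search(index, query_keys, min_hits=2):
    # Part 3 (search and comparison): collect candidates sharing at least
    # min_hits keys with the query, avoiding a linear scan of all templates.
    hits = defaultdict(int)
    for k in query_keys:
        for pid in index.get(k, ()):
            hits[pid] += 1
    return sorted(p for p, n in hits.items() if n >= min_hits)
```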

DEEP LEARNING BASED ADAPTIVE ARITHMETIC CODING AND CODELENGTH REGULARIZATION
20200334534 · 2020-10-22

A deep learning based compression (DLBC) system applies trained models to compress binary code of an input image to a target codelength. For a set of binary codes representing the quantized coefficients of an input image, the DLBC system applies a first model that is trained to predict feature probabilities based on the context of each bit of the binary codes. The DLBC system compresses the binary code via adaptive arithmetic coding based on the determined probability of each bit. The compressed binary code represents a balance between a reconstruction quality of a reconstruction of the input image and a target compression ratio of the compressed binary code.
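The coding step can be sketched with a textbook binary arithmetic coder driven by per-bit probabilities. This is a simplified floating-point illustration, not the patented coder: here `probs[i]` plays the role of the first model's context-based prediction, and the better the prediction matches the actual bits, the shorter the resulting code.

```python
import math

def arithmetic_encode(bits, probs):
    # probs[i] is the model's predicted P(bits[i] == 1) given its context.
    low, high = 0.0, 1.0
    for b, p1 in zip(bits, probs):
        mid = low + (high - low) * (1.0 - p1)  # [low, mid) codes 0, [mid, high) codes 1
        if b:
            low = mid
        else:
            high = mid
    value = (low + high) / 2.0                 # any point in the final interval works
    # Code length is roughly -log2(interval width) bits.
    return value, math.ceil(-math.log2(high - low))

def arithmetic_decode(value, probs):
    # Replays the same interval subdivisions to recover the bits.
    low, high = 0.0, 1.0
    bits = []
    for p1 in probs:
        mid = low + (high - low) * (1.0 - p1)
        if value >= mid:
            bits.append(1); low = mid
        else:
            bits.append(0); high = mid
    return bits
```

With probabilities that track the data well, the code length drops below the one bit per symbol a uniform model would pay.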

DEEP LEARNING BASED ADAPTIVE ARITHMETIC CODING AND CODELENGTH REGULARIZATION
20200334535 · 2020-10-22

A deep learning based compression (DLBC) system applies trained models to compress binary code of an input image to a target codelength. For a set of binary codes representing the quantized coefficients of an input image, the DLBC system applies a first model that is trained to predict feature probabilities based on the context of each bit of the binary codes. The DLBC system compresses the binary code via adaptive arithmetic coding based on the determined probability of each bit. The compressed binary code represents a balance between a reconstruction quality of a reconstruction of the input image and a target compression ratio of the compressed binary code.

Gabor cube feature selection-based classification method and system for hyperspectral remote sensing images
10783371 · 2020-09-22

The present invention provides a Gabor cube feature selection-based classification method for hyperspectral remote sensing images, comprising the following steps: generating three-dimensional Gabor filters according to set frequency and direction parameter values; convolving the hyperspectral remote sensing images with the three-dimensional Gabor filters to obtain three-dimensional Gabor features; selecting, from the three-dimensional Gabor features, those whose classification contribution to the various classes meets preset requirements; and classifying the hyperspectral remote sensing images with a multi-task joint sparse representation-based classification method using the selected three-dimensional Gabor features. The present invention is based on three-dimensional Gabor features, which contain rich local variation information of the signal and characterize features well. Using a Fisher discriminant criterion not only makes full use of the high-level semantics hidden among the features, but also eliminates redundant information and reduces the time complexity of classification.
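The filter-generation step can be sketched as follows, using a common 3-D Gabor parameterization (Gaussian envelope times a cosine carrier over the two spatial axes and the spectral-band axis). The exact parameterization and the `sigma` default are assumptions; the patent only fixes the frequency and direction parameters.

```python
import numpy as np

def gabor_cube(size, freq, theta, phi, sigma=None):
    # 3-D Gabor kernel over (x, y, band): a Gaussian envelope modulated by a
    # cosine carrier; (theta, phi) set the carrier direction, freq its
    # frequency in cycles per sample.
    if sigma is None:
        sigma = size / 4.0  # assumed envelope width
    r = np.arange(size) - size // 2
    x, y, b = np.meshgrid(r, r, r, indexing='ij')
    u = np.sin(theta) * np.cos(phi)   # direction cosines of the carrier
    v = np.sin(theta) * np.sin(phi)
    w = np.cos(theta)
    envelope = np.exp(-(x**2 + y**2 + b**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * (u * x + v * y + w * b))
    return envelope * carrier
```

Convolving the hyperspectral cube with a bank of such kernels (one per frequency/direction pair) yields the three-dimensional Gabor features fed to the selection step.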

ADAPTIVE IMAGE CROPPING FOR FACE RECOGNITION
20200293807 · 2020-09-17

By adding a side network to a face recognition network, the output of early convolution blocks may be used to determine relative bounding box values. The relative bounding box values may then be used to refine the existing bounding box values, with the aim of improving the embedding vectors generated by the face recognition network.
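The refinement step can be sketched with the widely used anchor-offset box parameterization. This is an assumption: the patent says only "relative bounding box values", so the (dx, dy, dw, dh) encoding below is a hypothetical choice.

```python
import math

def refine_box(box, rel):
    # Apply the side network's relative values (dx, dy, dw, dh) to an
    # existing box (cx, cy, w, h): offsets are scaled by the box size and
    # the size change is exponential, anchor-box style.
    cx, cy, w, h = box
    dx, dy, dw, dh = rel
    return (cx + dx * w, cy + dy * h, w * math.exp(dw), h * math.exp(dh))
```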

Tiling format for convolutional neural networks

Systems, apparatuses, and methods for converting data to a tiling format when implementing convolutional neural networks are disclosed. A system includes at least a memory, a cache, a processor, and a plurality of compute units. The memory stores a first buffer and a second buffer in a linear format, where the first buffer stores convolutional filter data and the second buffer stores image data. The processor converts the first and second buffers from the linear format to third and fourth buffers, respectively, in a tiling format. The plurality of compute units load the tiling-formatted data from the third and fourth buffers in memory to the cache and then perform a convolutional filter operation on the tiling-formatted data. The system generates a classification of a first dataset based on a result of the convolutional filter operation.
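The linear-to-tiling conversion can be sketched in NumPy for a 2-D buffer. This is an illustrative sketch only: the patent's tiling format covers 4-D filter and image tensors and hardware-specific tile shapes, whereas here a (H, W) buffer is simply regrouped into contiguous (tile_h, tile_w) blocks.

```python
import numpy as np

def to_tiling_format(linear, tile_h, tile_w):
    # Convert a (H, W) linear-format buffer into a sequence of contiguous
    # tiles, so each compute unit can load one tile from the cache as a
    # single contiguous block.
    h, w = linear.shape
    assert h % tile_h == 0 and w % tile_w == 0
    tiles = (linear.reshape(h // tile_h, tile_h, w // tile_w, tile_w)
                   .transpose(0, 2, 1, 3))   # group by (row-block, col-block)
    return tiles.reshape(-1, tile_h, tile_w)
```

In row-major linear format the elements of a 2x2 block are scattered across two rows; after conversion each tile is stored contiguously, which is the property the compute units exploit when performing the convolutional filter operation.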