G06V10/76

METHODS AND SYSTEMS FOR DETECTING COMPONENT WEAR

A monitoring system for determining component wear is provided. The monitoring system includes a memory device configured to store a reference model of a component and a component wear monitoring (CWM) device configured to receive a component image of a first component being inspected, detect a plurality of manmade structural features in the received component image, adjust the component image to mask out at least some of the plurality of manmade structural features from the received component image, compare the adjusted component image with the reference model to determine one or more potential defect areas in the first component, analyze each of the one or more potential defect areas to determine a state of the potential defect areas, and output the state of the one or more potential defect areas to a user.
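As a rough illustration of the workflow above, the following Python sketch masks out detected structural features, compares the adjusted image against the reference model, and reports a state per candidate defect region. It assumes the inspection image, reference model, and structural-feature mask are co-registered 2D arrays; the function name, thresholds, and the "wear"/"severe" state labels are hypothetical.

```python
import numpy as np
from scipy import ndimage

def assess_component_wear(component_image: np.ndarray,
                          reference_model: np.ndarray,
                          structure_mask: np.ndarray,
                          defect_threshold: float = 0.15) -> list[dict]:
    """Compare an inspection image against a reference model, ignoring
    masked-out man-made structural features (holes, slots, fasteners)."""
    # Mask out structural features so they are not flagged as defects.
    adjusted = np.where(structure_mask, reference_model, component_image)

    # Normalized pixel-wise deviation from the reference model.
    ref_max = float(reference_model.max()) or 1.0
    deviation = np.abs(adjusted.astype(float) - reference_model.astype(float)) / ref_max

    # Candidate defect pixels exceed the threshold; group them into connected regions.
    labels, n_regions = ndimage.label(deviation > defect_threshold)

    defects = []
    for region_id in range(1, n_regions + 1):
        region = labels == region_id
        mean_dev = float(deviation[region].mean())
        defects.append({
            "area_px": int(region.sum()),
            "mean_deviation": mean_dev,
            # Illustrative state rule: larger deviations are treated as more severe.
            "state": "severe" if mean_dev > 2 * defect_threshold else "wear",
        })
    return defects
```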

Base calling using convolution
12237052 · 2025-02-25

We propose a neural network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions to the image patches on a sliding convolution window basis such that, in a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. The method further includes, beginning with the output features produced by the 3D convolutions as starting input, applying further convolutions to produce final output features, and processing the final output features through an output layer to produce base calls for one or more of the associated analytes to be base called at each of the sequencing cycles.
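The 3D-convolution pipeline can be sketched with standard deep-learning primitives. The PyTorch snippet below treats the cycle axis as the convolution depth dimension; the layer sizes, channel counts, and class name are illustrative assumptions, not the claimed architecture.

```python
import torch
import torch.nn as nn

class BaseCaller3D(nn.Module):
    """Sketch of a 3D-convolutional base caller: the depth axis spans
    sequencing cycles and the spatial axes span the image patch."""

    def __init__(self, num_channels: int = 1, num_bases: int = 4):
        super().__init__()
        # 3D convolutions slide over a window of cycles (depth) and pixels (height, width).
        self.conv3d = nn.Sequential(
            nn.Conv3d(num_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Output layer maps the final features to per-cycle base probabilities.
        self.output = nn.Conv3d(32, num_bases, kernel_size=1)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, channels, cycles, height, width)
        features = self.conv3d(patches)
        logits = self.output(features)      # (batch, bases, cycles, height, width)
        return logits.softmax(dim=1)        # per-pixel, per-cycle base-call probabilities

# Example: 8 patches, 1 intensity channel, 10 sequencing cycles, 15x15 pixels each.
calls = BaseCaller3D()(torch.randn(8, 1, 10, 15, 15))
```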

PICTURE PROCESSING METHOD AND APPARATUS
20170147904 · 2017-05-25

The present disclosure provides a picture processing method and apparatus and belongs to the field of information technologies. The method includes: scanning pictures under a picture directory; generating at least one similar picture group according to attribute information of each scanned picture; acquiring one representative picture from each similar picture group according to attribute information of each picture included in that group; and, when the at least one similar picture group is displayed, displaying the representative picture of each similar picture group and the other pictures in that group with different markings. In the present disclosure, a similar picture group is acquired according to the attribute information of each picture, and the representative picture in each similar picture group is displayed with a marking different from that of the other pictures. The process can select a similar picture for a user without requiring the user's participation, and the picture recommended for saving is visually distinguished from the other pictures, so that the checking process is more convenient and takes less time.
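A simplified Python sketch of this flow is given below. It stands in capture time and file size for the "attribute information" of the abstract, and the directory-scanning logic, time window, and representative-selection rule are assumptions made only for illustration.

```python
import os

def group_similar_pictures(picture_dir: str, time_window_s: int = 10) -> list[dict]:
    """Group pictures under a directory by attribute information (here, capture
    time), then pick one representative picture per group (here, the largest file)."""
    pictures = []
    for name in os.listdir(picture_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        path = os.path.join(picture_dir, name)
        stat = os.stat(path)
        pictures.append({"path": path, "mtime": stat.st_mtime, "size": stat.st_size})

    # Pictures taken within the time window of each other form one similar group.
    pictures.sort(key=lambda p: p["mtime"])
    groups, current = [], []
    for pic in pictures:
        if current and pic["mtime"] - current[-1]["mtime"] > time_window_s:
            groups.append(current)
            current = []
        current.append(pic)
    if current:
        groups.append(current)

    # Mark the representative of each group so it can be displayed differently.
    result = []
    for group in groups:
        representative = max(group, key=lambda p: p["size"])
        result.append({"representative": representative,
                       "others": [p for p in group if p is not representative]})
    return result
```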

Early therapy response assessment of lesions

For therapy response assessment, texture features are used both to train a classifier by machine learning and as input to the machine-learnt classifier. Rather than, or in addition to, formula-based texture features, data-driven texture features are derived from training images. Such data-driven texture features are independent analysis features, such as features from independent subspace analysis. The texture features may be used to predict the outcome of therapy based on a small number of scans, or even a single scan, of the patient.
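A minimal Python sketch of this idea follows; it uses scikit-learn's FastICA as a stand-in for independent subspace analysis and an SVM as the classifier, and the function names, component count, and input shapes are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def train_response_classifier(lesion_patches: np.ndarray,
                              outcomes: np.ndarray,
                              n_components: int = 16):
    """lesion_patches: (n_lesions, n_pixels) flattened intensity patches from training scans;
    outcomes: (n_lesions,) binary therapy-response labels."""
    # Data-driven texture features are learned from the training images themselves
    # (independent component analysis stands in for independent subspace analysis).
    ica = FastICA(n_components=n_components, random_state=0)
    texture_features = ica.fit_transform(lesion_patches)

    # Train a classifier to predict therapy response from the texture features.
    classifier = SVC(probability=True).fit(texture_features, outcomes)
    return ica, classifier

def predict_response(ica: FastICA, classifier: SVC, new_patches: np.ndarray) -> np.ndarray:
    # Probability of response for lesions from a new (possibly single) scan.
    return classifier.predict_proba(ica.transform(new_patches))[:, 1]
```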

Image Characteristic Estimation Method and Device
20170069112 · 2017-03-09

An image characteristic estimation method and device are presented. The method includes extracting at least two eigenvalues of input image data and executing the following operations for each extracted eigenvalue until all extracted eigenvalues have been processed: selecting an eigenvalue and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter to obtain a first matrix vector corresponding to the eigenvalue. When a first matrix vector corresponding to each extracted eigenvalue has been obtained, second matrix vectors for the at least two extracted eigenvalues are obtained using a convolutional network calculation method according to the obtained first matrix vectors, and a status of an image characteristic in the image data is obtained by estimation according to the second matrix vectors. In this way, the accuracy of estimation is effectively improved.
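The abstract leaves the transformations fairly abstract; the Python sketch below is only one loose reading of it, in which each "eigenvalue" is represented as a feature vector, the matrix transformations are plain matrix products, and a 1D convolution stands in for the convolutional network calculation. All names and shapes are assumptions.

```python
import numpy as np

def estimate_characteristic(feature_vectors: np.ndarray,
                            matrix_params: list[np.ndarray],
                            conv_kernel: np.ndarray) -> float:
    """feature_vectors: (n_eigenvalues, dim) vectors derived from the image;
    matrix_params: pre-obtained (dim, dim) matrices; conv_kernel: 1D kernel."""
    first_vectors = []
    for vec in feature_vectors:
        # At least two successive matrix transformations with pre-obtained parameters.
        for matrix in matrix_params:
            vec = matrix @ vec
        first_vectors.append(vec)
    first_vectors = np.stack(first_vectors)

    # Stand-in for the convolutional network calculation: 1D convolution per vector.
    second_vectors = np.array([np.convolve(row, conv_kernel, mode="same")
                               for row in first_vectors])

    # Estimated status of the image characteristic: a pooled score over the vectors.
    return float(second_vectors.mean())
```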

PHOTOGRAPH-BASED ASSESSMENT OF DENTAL TREATMENTS AND PROCEDURES

The current document is directed to methods and systems for monitoring a dental patient's progress during a course of treatment. A three-dimensional model of the expected positions of the patient's teeth can be projected, in time, from a three-dimensional model of the patient's teeth prepared prior to beginning the treatment. A digital camera is used to take one or more two-dimensional photographs of the patient's teeth, which are input to a monitoring system. The monitoring system determines virtual-camera parameters for each two-dimensional input image with respect to the time-projected three-dimensional model, uses the determined virtual-camera parameters to generate two-dimensional images from the three-dimensional model, and then compares each input photograph to the corresponding generated two-dimensional image in order to determine how closely the three-dimensional arrangement of the patient's teeth corresponds to the time-projected three-dimensional arrangement.
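One concrete way to realize the virtual-camera step is to fit camera parameters that minimize the reprojection error between points of the time-projected model and landmarks detected in the photograph. The Python sketch below does that with a simple pinhole model; comparing landmarks rather than full rendered images, and the specific parameterization and starting guess, are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project(points_3d: np.ndarray, cam_params: np.ndarray) -> np.ndarray:
    """Pinhole projection of 3D model points for virtual-camera parameters
    (3 rotation angles, 3 translation components, focal length)."""
    angles, translation, focal = cam_params[:3], cam_params[3:6], cam_params[6]
    camera_frame = Rotation.from_euler("xyz", angles).apply(points_3d) + translation
    return focal * camera_frame[:, :2] / camera_frame[:, 2:3]

def fit_virtual_camera(model_points: np.ndarray, photo_points: np.ndarray) -> np.ndarray:
    """Find virtual-camera parameters so that the time-projected 3D model, when
    projected to 2D, best matches landmarks found in the input photograph."""
    def reprojection_error(params: np.ndarray) -> float:
        return float(np.sum((project(model_points, params) - photo_points) ** 2))

    initial_guess = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 50.0, 800.0])
    return minimize(reprojection_error, initial_guess, method="Nelder-Mead").x
```

With the fitted parameters, a 2D image can be generated from the model and compared with the input photograph to quantify how closely the actual tooth arrangement tracks the time-projected plan.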

EXTRACTING GRADIENT FEATURES FROM NEURAL NETWORKS

A method for extracting a representation from an image includes inputting an image to a pre-trained neural network. The gradient of a loss function is computed with respect to parameters of the neural network, for the image. A gradient representation is extracted for the image based on the computed gradients, which can be used, for example, for classification or retrieval.
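The sketch below, in PyTorch, shows the gist: forward an image through a pre-trained network, backpropagate a loss, and flatten the resulting parameter gradients into a descriptor. The choice of network, of a surrogate loss against the top prediction, and of using only the final layer's gradients are illustrative assumptions.

```python
import torch
from torchvision import models

def gradient_representation(image: torch.Tensor) -> torch.Tensor:
    """Return a gradient-based descriptor for a (3, 224, 224) image tensor."""
    # A pre-trained network would normally be loaded here; weights are omitted
    # to keep the sketch self-contained.
    net = models.resnet18(weights=None)
    net.eval()

    logits = net(image.unsqueeze(0))                                   # (1, 1000)
    # Surrogate loss: cross-entropy against the network's own top prediction.
    loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))

    net.zero_grad()
    loss.backward()

    # Flatten the gradients of the final layer's parameters into one vector.
    gradients = [p.grad.flatten() for p in net.fc.parameters()]
    descriptor = torch.cat(gradients)
    return descriptor / (descriptor.norm() + 1e-12)                    # l2-normalized

# Usage: descriptor = gradient_representation(torch.randn(3, 224, 224))
```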

SUPERVISED FACIAL RECOGNITION SYSTEM AND METHOD

A computer-executed method for supervised facial recognition comprises the operations of preprocessing, feature extraction, and recognition. Preprocessing may comprise dividing received face images into several subimages, converting the different face image (or subimage) dimensions into a common dimension, and/or converting the datatypes of all of the face images (or subimages) into an appropriate datatype. In feature extraction, a two-dimensional discrete multiwavelet transform (2D DMWT) is used to extract information from the face images. Application of the 2D DMWT may be followed by fast independent component analysis (FastICA). FastICA, or, in cases where FastICA is not used, the 2D DMWT, may be followed by application of the l2-norm and/or eigendecomposition to obtain discriminating and independent features. The resulting independent features are fed into the recognition phase, which may use a neural network, to identify an unknown face image.
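A compressed Python sketch of this pipeline is shown below. It substitutes a standard 2D discrete wavelet transform (PyWavelets) for the multiwavelet transform, uses scikit-learn's FastICA, and trains a small multilayer perceptron for the recognition phase; the wavelet choice, component count, and network size are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def wavelet_features(face_images: np.ndarray) -> np.ndarray:
    """face_images: (n_samples, H, W) grayscale faces already resized to a common size.
    A standard 2D wavelet transform stands in for the 2D DMWT."""
    rows = []
    for image in face_images:
        approximation, _ = pywt.dwt2(image, "haar")   # keep the approximation subband
        rows.append(approximation.flatten())
    return np.asarray(rows)

def train_recognizer(face_images: np.ndarray, labels: np.ndarray, n_components: int = 20):
    coefficients = wavelet_features(face_images)
    # FastICA yields independent, discriminating features; they are then l2-normalized.
    # (n_components must not exceed the number of training images.)
    ica = FastICA(n_components=n_components, random_state=0)
    features = ica.fit_transform(coefficients)
    features /= np.linalg.norm(features, axis=1, keepdims=True) + 1e-12
    # A small neural network performs the recognition phase.
    recognizer = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(features, labels)
    return ica, recognizer
```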

Text based image search
12322198 · 2025-06-03

Method and system for building a machine learning model for finding visual targets from text queries. The method comprises the steps of: receiving a set of training data comprising text-attribute-labelled images, wherein each image has more than one text attribute label; receiving a first vector space comprising a mapping of words, the mapping defining relationships between words; generating a visual feature vector space by grouping images of the set of training data having similar attribute labels; mapping each attribute label within the training data set onto the first vector space to form a second vector space; fusing the visual feature vector space and the second vector space to form a third vector space; and generating a similarity matching model from the third vector space.
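The fusion step can be pictured with a small numpy sketch: represent each image once in a visual space and once in a word-embedding space, concatenate the two to obtain the fused space, and rank images for a text query by cosine similarity. The mean-of-embeddings text representation, the concatenation-based fusion, and all function names are assumptions made only to illustrate the idea.

```python
import numpy as np

def build_fused_space(image_features: np.ndarray,
                      image_labels: list[list[str]],
                      word_vectors: dict[str, np.ndarray]) -> np.ndarray:
    """image_features: (n_images, d_visual) visual feature vectors;
    image_labels: per-image lists of text attribute labels;
    word_vectors: pre-trained embedding for each label word (the first vector space)."""
    # Second vector space: each image as the mean embedding of its attribute labels.
    text_space = np.stack([np.mean([word_vectors[w] for w in labels], axis=0)
                           for labels in image_labels])
    # Third (fused) vector space: concatenate the visual and text representations.
    fused = np.concatenate([image_features, text_space], axis=1)
    return fused / (np.linalg.norm(fused, axis=1, keepdims=True) + 1e-12)

def rank_images(fused: np.ndarray, text_query: list[str],
                word_vectors: dict[str, np.ndarray], d_visual: int) -> np.ndarray:
    """Return image indices ranked by cosine similarity to the text query;
    the visual part of the query vector is simply left as zeros."""
    query_text = np.mean([word_vectors[w] for w in text_query], axis=0)
    query = np.concatenate([np.zeros(d_visual), query_text])
    query /= np.linalg.norm(query) + 1e-12
    return np.argsort(-(fused @ query))
```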

BASE CALLING USING CONVOLUTION
20250191695 · 2025-06-12

We propose a neural network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions to the image patches on a sliding convolution window basis such that, in a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. The method further includes, beginning with the output features produced by the 3D convolutions as starting input, applying further convolutions to produce final output features, and processing the final output features through an output layer to produce base calls for one or more of the associated analytes to be base called at each of the sequencing cycles.