G06V10/76

BASE CALLING USING CONVOLUTION
20240242779 · 2024-07-18 ·

We propose a neural-network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions to the image patches on a sliding-convolution-window basis such that, within a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. Beginning with the output features produced by the 3D convolutions as starting input, the method applies further convolutions to produce final output features, then processes the final output features through an output layer to produce base calls for one or more of the associated analytes at each of the sequencing cycles.
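The sliding-window 3D convolution and the output layer can be sketched as follows. This is a minimal illustration, not the patented network: the "valid" convolution, the toy softmax readout over the four bases, and all shapes and names are assumptions.

```python
import numpy as np

def conv3d_valid(patches, kernel):
    """Slide a 3D kernel over a (cycles, H, W) stack of per-cycle
    image patches; 'valid' convolution, one output feature per window."""
    T, H, W = patches.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(patches[i:i+t, j:j+h, k:k+w] * kernel)
    return out

def base_call(features):
    """Toy output layer: map per-window features to a softmax over the
    four bases A, C, G, T and return the argmax call per window."""
    logits = features.reshape(features.shape[0], -1)[:, :4]
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return np.array(list("ACGT"))[probs.argmax(axis=1)]

rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 8, 8))      # 5 sequencing cycles of 8x8 patches
kernel = rng.normal(size=(3, 3, 3))       # one 3D convolution filter
features = conv3d_valid(patches, kernel)  # shape (3, 6, 6)
calls = base_call(features)               # one base call per temporal window
```

Note how the temporal extent of the kernel (here 3 cycles) makes each output feature depend on a plurality of per-cycle patches, which is the essence of the claimed 3D convolution step.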

System and method for retina template matching in teleophthalmology

A retina image template matching method is based on the registration and comparison of images captured with portable low-cost fundus cameras (e.g., a consumer-grade camera typically incorporated into a smartphone or tablet computer) against a baseline image. The method solves the challenges posed by registering the small, low-quality retinal template images captured with such cameras. Our method combines dimension-reduction methods with a mutual information (MI) based image registration technique. In particular, principal component analysis (PCA), and optionally block PCA, is used as a dimension-reduction method to coarsely localize the template image within the baseline image; the resulting displacement parameters are then used to initialize the MI metric optimization for registration of the template image with the closest region of the baseline image.
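The two-stage idea, PCA for coarse localization followed by an MI metric, can be sketched in a few lines. This is a simplified sketch under stated assumptions: translation-only search, a histogram MI estimate, and synthetic data in place of fundus images; the patented method's block PCA and MI optimizer are not reproduced.

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

def pca_localize(baseline, template, n_comp=4):
    """Coarse localization: project all template-sized windows of the
    baseline into a PCA subspace and pick the window nearest the template."""
    th, tw = template.shape
    wins, offs = [], []
    for i in range(baseline.shape[0] - th + 1):
        for j in range(baseline.shape[1] - tw + 1):
            wins.append(baseline[i:i+th, j:j+tw].ravel())
            offs.append((i, j))
    X = np.array(wins)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_comp]                                   # top principal axes
    d = (X - mu) @ P.T - (template.ravel() - mu) @ P.T
    return offs[np.argmin(np.sum(d**2, axis=1))]

rng = np.random.default_rng(1)
baseline = rng.normal(size=(16, 16))                  # synthetic baseline image
template = baseline[5:11, 7:13] + 0.01 * rng.normal(size=(6, 6))
di, dj = pca_localize(baseline, template)             # coarse displacement
mi = mutual_information(baseline[di:di+6, dj:dj+6], template)
```

In the full method the recovered displacement would seed an iterative MI optimization rather than a single MI evaluation.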

Supervised facial recognition system and method

A computer-executed method for supervised facial recognition comprising the operations of preprocessing, feature extraction, and recognition. Preprocessing may comprise dividing received face images into several subimages, converting the different face-image (or subimage) dimensions into a common dimension, and/or converting the datatypes of all of the face images (or subimages) into an appropriate datatype. In feature extraction, a two-dimensional discrete multiwavelet transform (2D DMWT) is used to extract information from the face images. Application of the 2D DMWT may be followed by fast independent component analysis (FastICA). FastICA or, in cases where FastICA is not used, the 2D DMWT may be followed by application of the l2-norm and/or eigendecomposition to obtain discriminating and independent features. The resulting independent features are fed into the recognition phase, which may use a neural network, to identify an unknown face image.
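The transform-then-norm-then-match pipeline can be illustrated with a toy example. Assumptions are labeled in the comments: a one-level Haar transform stands in for the 2D DMWT, the FastICA step is omitted, and a nearest-neighbour match stands in for the neural-network recognition phase.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (a simple stand-in for the 2D DMWT)."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row details (unused below)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # low-low subband
    return ll, d

def l2_features(img):
    """l2-norms of the low-frequency subband rows as a compact feature."""
    ll, _ = haar2d(img)
    return np.linalg.norm(ll, axis=1)

def recognize(probe, gallery):
    """Nearest-neighbour stand-in for the recognition phase."""
    feats = np.array([l2_features(g) for g in gallery])
    d = np.linalg.norm(feats - l2_features(probe), axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(2)
gallery = [rng.normal(size=(8, 8)) for _ in range(4)]  # synthetic "face" images
probe = gallery[2] + 0.05 * rng.normal(size=(8, 8))    # noisy copy of face 2
idx = recognize(probe, gallery)
```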

COMPUTER SYSTEMS AND COMPUTER-IMPLEMENTED METHODS SPECIALIZED IN PROCESSING ELECTRONIC IMAGE DATA
20180349680 · 2018-12-06 ·

Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and systems of the present invention are specifically configured to process image data to identify and align a face present in a particular frame.

Image characteristic estimation method and device

An image characteristic estimation method and device are presented. The method includes extracting at least two eigenvalues from input image data and executing the following operations for each extracted eigenvalue until all extracted eigenvalues have been processed: selecting an eigenvalue and performing at least two matrix transformations on it, using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue. Once a first matrix vector has been obtained for each extracted eigenvalue, second matrix vectors for the at least two extracted eigenvalues are obtained using a convolutional network calculation method according to the first matrix vectors, and a status of an image characteristic in the image data is obtained by estimation according to the second matrix vectors. In this way, estimation accuracy is effectively improved.
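The data flow, per-value matrix transforms followed by a convolutional combination, can be sketched as below. Everything here is a hypothetical stand-in: the matrix parameters, the averaging kernel in place of the convolutional network, and the scalar readout are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M1 = rng.normal(size=(4, 4))              # pre-obtained matrix parameters
M2 = rng.normal(size=(4, 4))              # (hypothetical values)
kernel = np.array([0.25, 0.5, 0.25])      # toy stand-in for the conv network

def first_vector(value):
    """Two matrix transformations applied to one extracted value."""
    v = np.full(4, value)
    return M2 @ (M1 @ v)

values = [0.3, -1.2, 0.7]                 # extracted per-image values
firsts = np.stack([first_vector(v) for v in values])       # (3, 4)
# second vectors: convolve each component across the first vectors
seconds = np.stack([np.convolve(firsts[:, k], kernel, mode="same")
                    for k in range(firsts.shape[1])], axis=1)
estimate = float(seconds.mean())          # scalar characteristic estimate
```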

ACCELERATED PRECOMPUTATION OF REDUCED DEFORMABLE MODELS
20180247158 · 2018-08-30 ·

Technologies are disclosed for precomputation of reduced deformable models. In such precomputation, a Krylov subspace iteration may be used to construct a series of inertia modes for an input mesh. The inertia modes may be condensed into a mode matrix. A set of cubature points may be sampled from the input mesh, and cubature weights of the set of cubature points may be calculated for each of the inertia modes in the mode matrix. A training dataset may be generated by iteratively adding training samples to the training dataset until a training error metric converges, wherein each training sample is generated from an inertia mode in the mode matrix and corresponding cubature weights. The reduced deformable model may be generated, including inertia modes in the training dataset and corresponding cubature weights.
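The first two precomputation steps, a Krylov subspace iteration that builds a mode matrix and a least-squares fit of cubature weights, can be sketched numerically. This is a minimal sketch under stated assumptions: a random SPD matrix stands in for the mesh operator, the per-mode "integral" is a simple column sum, and an unconstrained least-squares solve replaces whatever weight fitting the patent prescribes.

```python
import numpy as np

def krylov_modes(A, b, r):
    """Krylov subspace iteration: orthonormalize {b, Ab, A^2 b, ...}
    into r columns (a stand-in for the inertia-mode construction)."""
    n = len(b)
    Q = np.zeros((n, r))
    q = b / np.linalg.norm(b)
    for k in range(r):
        Q[:, k] = q
        q = A @ q
        q -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ q)   # Gram-Schmidt step
        q /= np.linalg.norm(q)
    return Q

rng = np.random.default_rng(4)
n, r = 12, 4
A = rng.normal(size=(n, n))
A = A @ A.T                                  # SPD surrogate for the mesh operator
modes = krylov_modes(A, rng.normal(size=n), r)   # condensed mode matrix, n x r

# Cubature: sample points, then fit weights so the weighted point
# contributions reproduce each mode's "integral" in a least-squares sense.
points = np.array([1, 5, 9])                 # sampled cubature point indices
targets = modes.sum(axis=0)                  # per-mode integrals (toy definition)
C = modes[points]                            # contributions at cubature points
weights, *_ = np.linalg.lstsq(C.T, targets, rcond=None)
```

In the disclosed pipeline these weights would then seed the iterative training-sample generation until the training error metric converges.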

COMPUTER SYSTEMS AND COMPUTER-IMPLEMENTED METHODS SPECIALIZED IN PROCESSING ELECTRONIC IMAGE DATA
20180218198 · 2018-08-02 ·

Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and systems of the present invention are specifically configured to process image data to identify and align a face present in a particular frame.

Object Detection Device Incorporating Quantum Computing and Game Theoretic Optimization and Related Methods

An object detection device may include a variational autoencoder (VAE) configured to encode image data to generate a latent vector, and decode the latent vector to generate new image data. The object detection device may also include a quantum computing circuit configured to perform quantum subset summing, and a processor. The processor may be configured to generate a game theory reward matrix for a plurality of different deep learning models, cooperate with the quantum computing circuit to perform quantum subset summing of the game theory reward matrix, select a deep learning model from the plurality thereof based upon the quantum subset summing of the game theory reward matrix, and process the new image data using the selected deep learning model for object detection.
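The model-selection step, subset summing over a game theory reward matrix, can be sketched classically. The quantum circuit is replaced here by brute-force subset enumeration, and the model names, reward matrix, and target payoff are all hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
models = ["model_a", "model_b", "model_c"]       # hypothetical model pool
# reward matrix: rows = deep learning models, cols = game scenarios
R = rng.uniform(size=(len(models), 4))

def best_subset_gap(rewards, target):
    """Classical stand-in for quantum subset summing: how close can a
    subset of one model's scenario rewards get to a target payoff?"""
    best_gap = float("inf")
    for k in range(len(rewards) + 1):
        for idx in combinations(range(len(rewards)), k):
            gap = abs(sum(rewards[list(idx)]) - target)
            best_gap = min(best_gap, gap)
    return best_gap

target = 1.0                                     # desired payoff (assumed)
gaps = [best_subset_gap(R[m], target) for m in range(len(models))]
selected = models[int(np.argmin(gaps))]          # model whose rewards best fit
```

The selected model would then be the one used to process the VAE-decoded image data for object detection.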

Scene change detection with novel view synthesis
12125150 · 2024-10-22 ·

A method for detecting changes in a scene includes accessing a first set of images and corresponding pose data in a first coordinate system associated with a first user session of an augmented reality (AR) device, and accessing a second set of images and corresponding pose data in a second coordinate system associated with a second user session. The method identifies images in the first set corresponding to a second image from the second set based on the pose data of the first-set images being spatially closest to the pose data of the second image after the first and second coordinate systems are aligned. A trained neural network generates a synthesized image from the identified first-set images. Features of the second image are subtracted from features of the synthesized image, and areas of change are identified based on the subtracted features.
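The pose-nearest selection and feature-subtraction steps can be sketched as follows. This is a toy sketch: coordinate systems are assumed already aligned, an average of the selected images stands in for the trained novel-view-synthesis network, and raw pixels stand in for learned features.

```python
import numpy as np

def nearest_by_pose(poses_a, pose_b, k=2):
    """Indices of session-A images whose poses are spatially closest to
    a session-B pose (coordinate systems assumed already aligned)."""
    d = np.linalg.norm(poses_a - pose_b, axis=1)
    return np.argsort(d)[:k]

def change_mask(synth, second, thresh=0.5):
    """Subtract per-pixel features and threshold to flag changed areas."""
    return np.abs(synth - second) > thresh

rng = np.random.default_rng(6)
poses_a = rng.uniform(size=(10, 3))        # session-1 camera positions
pose_b = poses_a[4] + 0.01                 # session-2 pose near image 4
idx = nearest_by_pose(poses_a, pose_b)     # spatially closest session-1 images

imgs_a = rng.normal(size=(10, 6, 6))
synthesized = imgs_a[idx].mean(axis=0)     # stand-in for the synthesis network
second = synthesized.copy()
second[2:4, 2:4] += 2.0                    # inject a scene change
mask = change_mask(synthesized, second)    # True where the scene changed
```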