G06T7/00

Biomarker Prediction Using Optical Coherence Tomography

Deep learning methods and systems for detecting biomarkers within optical coherence tomography (OCT) volumes are provided. Embodiments predict the presence or absence of clinically useful biomarkers in OCT images using deep neural networks. The lack of training data available for canonical deep learning approaches is overcome in embodiments by leveraging a large external dataset of foveal scans via transfer learning. Embodiments represent the three-dimensional OCT volume by “tiling” each slice into a single two-dimensional image, and add an additional component to encourage the network to consider local spatial structure. Methods and systems according to embodiments are able to identify the presence or absence of AMD-related biomarkers on par with clinicians. Beyond identifying biomarkers, additional models could be trained, according to embodiments, to predict the progression of these biomarkers over time.
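The tiling step described above can be sketched as follows; the near-square grid layout, the function name, and the zero-padding of unused cells are illustrative assumptions, since the abstract does not specify how slices are arranged:

```python
import numpy as np

def tile_volume(volume):
    """Tile each slice of a 3D OCT volume into a single 2D image.

    volume: array of shape (n_slices, height, width).
    Slices are laid out row-major on a near-square grid; grid cells
    beyond the last slice are left as zeros.
    """
    n, h, w = volume.shape
    cols = int(np.ceil(np.sqrt(n)))
    rows = int(np.ceil(n / cols))
    canvas = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for i in range(n):
        r, c = divmod(i, cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[i]
    return canvas

# Example: a 7-slice volume of 4x6 slices tiles onto a 3x3 grid.
vol = np.arange(7 * 4 * 6, dtype=np.float32).reshape(7, 4, 6)
tiled = tile_volume(vol)
print(tiled.shape)  # (12, 18)
```

The 2-D result can then be fed to an image network pretrained on foveal scans, in keeping with the transfer-learning setup the abstract describes.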

SYSTEM AND METHOD FOR MEASURING DISTORTED ILLUMINATION PATTERNS AND CORRECTING IMAGE ARTIFACTS IN STRUCTURED ILLUMINATION IMAGING

A method for measuring distorted illumination patterns and correcting image artifacts in structured illumination microscopy. The method includes the steps of generating an illumination pattern by interfering multiple beams, modulating a scanning speed or an intensity of a scanning laser, or projecting a mask onto an object; taking multiple exposures of the object with the illumination pattern shifted in phase; and applying a Fourier transform to the multiple exposures to produce multiple raw images. Thereafter, the multiple raw images are used to form, and then solve, a linear equation set to obtain multiple portions of a Fourier-space image of the object. A circular 2-D low-pass filter and a Fourier transform are then applied to the portions. A pattern distortion phase map is calculated and then corrected by making the coefficient matrix of the linear equation set vary in phase, which is solved in the spatial domain.
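The component-separation step (forming and solving the linear equation set over the phase-shifted raw images) can be sketched as below; the three evenly spaced phases and the three diffraction orders are illustrative assumptions, not details taken from the abstract:

```python
import numpy as np

# A sketch: three exposures with known phase shifts phi_k, where each
# raw Fourier image is D_k = sum_m exp(i*m*phi_k) * C_m, with C_{-1},
# C_0, C_{+1} the Fourier-space portions of the object. Stacking the
# exposures gives a per-pixel linear system with coefficient matrix
# M[k, m] = exp(i * m * phi_k).
phis = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
orders = np.array([-1, 0, 1])
M = np.exp(1j * np.outer(phis, orders))  # 3x3 coefficient matrix

# Synthetic components and forward model on a tiny 4x4 Fourier image.
rng = np.random.default_rng(0)
C = rng.normal(size=(3, 4, 4)) + 1j * rng.normal(size=(3, 4, 4))
D = np.einsum('km,mij->kij', M, C)  # simulated raw Fourier images

# Invert the system to recover the portions of the Fourier-space image.
C_hat = np.einsum('mk,kij->mij', np.linalg.inv(M), D)
print(np.allclose(C_hat, C))  # True
```

The distortion correction the abstract describes would amount to letting `M` vary per pixel with the measured pattern distortion phase map, rather than being a single constant matrix as in this sketch.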

LEARNING APPARATUS, LEARNING METHOD, AND RECORDING MEDIUM
20230052101 · 2023-02-16

In a learning apparatus, an acquisition unit acquires image data and label data corresponding to the image data. An object candidate extraction unit extracts each object candidate rectangle from the image data. A correct answer data generation unit generates a background object label corresponding to each background object included in each object candidate rectangle as correct answer data corresponding to the object candidate rectangle by using the label data. A prediction unit predicts a classification using each object candidate rectangle and outputs a prediction result. An optimization unit optimizes the object candidate extraction unit and the prediction unit using the prediction result and the correct answer data.

CRACK DETECTION DEVICE, CRACK DETECTION METHOD AND COMPUTER READABLE MEDIUM

In a crack detection device (10), an image acquisition unit (21) acquires image data acquired by taking an image of a road surface from an oblique direction with respect to the road surface. An image classification unit (22) classifies the acquired image data into an acceptable range with a resolution higher than a standard value, and an unacceptable range with a resolution equal to or less than the standard value. A data output unit (23) outputs acceptable data, being image data of a part classified into the acceptable range, as data to detect a crack on the road surface. An image display unit (24) displays the output data.
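The classification into acceptable and unacceptable ranges can be sketched as below, assuming resolution is expressed per image row in millimeters per pixel (so a smaller number means a higher resolution); the function name and threshold are illustrative:

```python
import numpy as np

def classify_rows(mm_per_px, standard_mm_per_px):
    """Split row indices of an oblique road-surface image into an
    acceptable range (resolution finer than the standard value) and
    an unacceptable range (equal to or coarser than the standard)."""
    res = np.asarray(mm_per_px)
    acceptable = res < standard_mm_per_px  # smaller mm/px = higher resolution
    return np.where(acceptable)[0], np.where(~acceptable)[0]

# In an oblique view, resolution degrades with distance from the camera.
res = np.array([0.4, 0.6, 0.8, 1.2, 1.6, 2.0])  # near rows to far rows
acceptable_rows, unacceptable_rows = classify_rows(res, 1.0)
```

Only the acceptable rows would then be passed on as data for detecting cracks, matching the data output unit in the abstract.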

QUANTITATIVE DYNAMIC MRI (QDMRI) ANALYSIS AND VIRTUAL GROWING CHILD (VGC) SYSTEMS AND METHODS FOR TREATING RESPIRATORY ANOMALIES

A method of analyzing thoracic insufficiency syndrome (TIS) in a subject by performing quantitative dynamic magnetic resonance imaging (QdMRI) analysis. The QdMRI analysis includes performing four-dimensional (4D) image construction of a TIS subject's thoracic cavity. The 4D image includes a sequence of two-dimensional (2D) images of the TIS subject's thoracic cavity over a respiratory cycle of the TIS subject. The QdMRI analysis also includes segmenting a region of interest (ROI) within the 4D image, determining TIS measurements within the ROI, comparing the TIS measurements to normal measurements determined from ROIs in 4D images of the thoracic cavities of normal subjects that are not afflicted by TIS, and outputting quantitative markers indicating deviation of the thoracic cavity of the TIS subject relative to the thoracic cavities of the normal subjects.
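One way to read the final step (quantitative markers indicating deviation from normal subjects) is as a cohort-normalized score; the z-score form and the measurement name below are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def deviation_markers(tis_measurements, normal_measurements):
    """Express each TIS measurement as a z-score against a normal cohort.

    tis_measurements: dict of measurement name -> subject value.
    normal_measurements: dict of name -> values from normal subjects.
    """
    markers = {}
    for name, value in tis_measurements.items():
        normals = np.asarray(normal_measurements[name], dtype=float)
        markers[name] = (value - normals.mean()) / normals.std(ddof=1)
    return markers

# Example (hypothetical measurement): a right-lung tidal volume well
# below the normal cohort yields a strongly negative marker.
markers = deviation_markers(
    {"tidal_volume_right_ml": 120.0},
    {"tidal_volume_right_ml": [200.0, 220.0, 180.0, 210.0, 190.0]},
)
```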

BLOOD FLOW FIELD ESTIMATION APPARATUS, LEARNING APPARATUS, BLOOD FLOW FIELD ESTIMATION METHOD, AND PROGRAM

A blood flow field estimation apparatus is provided, including an estimation unit and an output unit. The estimation unit uses a learned model obtained in advance by performing machine learning to learn a relationship between (a) organ tissue three-dimensional structure data, including image data of a plurality of organ cross-sectional images serving as cross-sectional images of an organ, each pixel of which is provided with two or more bit depths, together with image position information indicating a position in the organ of the image reflected on each of the organ cross-sectional images, and (b) a blood flow field in the organ. The estimation unit estimates the blood flow field in the organ of an estimation target based on the organ tissue three-dimensional structure data of the organ of the estimation target. The output unit outputs an estimation result of the estimation unit.

INDIVIDUAL OBJECT IDENTIFICATION SYSTEM, INDIVIDUAL OBJECT IDENTIFICATION PROGRAM, AND RECORDING MEDIUM

An individual object identification system includes: an image acquisition processor configured to perform an image acquisition process to acquire an image of a subject captured using imaging equipment; a feature point extraction processor that extracts a feature point from the image; a local feature amount calculation processor that calculates a local feature amount of the feature point; a local feature amount group classification processor that performs classification into a predetermined number of local feature amount groups; a global feature amount calculation processor that calculates a global feature amount based on each of the local feature amount groups; a searching target image registration processor that registers a plurality of images that are searching targets; a global feature amount registration processor that registers the global feature amount related to each of the registered images in a global feature amount registration unit; a narrowing processor that narrows down the plurality of registered images to registered images each having a global feature amount highly correlated with a global feature amount of an identification image; and a determination processor that compares the registered images as candidates with the identification image to determine the registered image having the largest number of corresponding points.
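The global-feature computation and narrowing steps resemble a bag-of-features pipeline; the sketch below assumes fixed group centroids and correlation-based narrowing, details the abstract leaves unspecified:

```python
import numpy as np

def global_feature(local_feats, centroids):
    """Global feature amount: a normalized histogram of local feature
    amounts over a predetermined number of groups (centroids)."""
    d = np.linalg.norm(local_feats[:, None, :] - centroids[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centroids)).astype(float)
    return hist / hist.sum()

def narrow(query_feat, registered_feats, top_k=1):
    """Narrow registered images down to those whose global feature is
    most highly correlated with the identification image's feature."""
    sims = [np.corrcoef(query_feat, g)[0, 1] for g in registered_feats]
    return np.argsort(sims)[::-1][:top_k]

# Two groups; the query's local features resemble registered image A's.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
query = global_feature(np.array([[0.0, 0.0], [1.0, 1.0], [9.0, 10.0]]), centroids)
reg_a = global_feature(np.array([[0.0, 1.0], [1.0, 0.0], [10.0, 9.0]]), centroids)
reg_b = global_feature(np.array([[10.0, 10.0], [9.0, 9.0], [0.0, 1.0]]), centroids)
candidates = narrow(query, [reg_a, reg_b])
```

The surviving candidates would then go to the determination processor for the finer corresponding-point comparison the abstract describes.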

DEEP LEARNING-BASED VIDEO EDITING METHOD, RELATED DEVICE, AND STORAGE MEDIUM

A deep learning-based video editing method can allow for automated editing of a video, reducing or eliminating user input, saving time and labor, and thereby improving video editing efficiency. Attribute recognition is performed on an object in a target video using a deep learning model. A target object is selected that satisfies an editing requirement of the target video. A plurality of groups of pictures associated with the target object are obtained from the target video by editing. An edited video corresponding to the target video is generated using the plurality of groups of pictures.

METHOD FOR TRAINING IMAGE PROCESSING MODEL

This disclosure relates to a model training method and apparatus and an image processing method and apparatus. The model training method includes: obtaining a first sample image and a first standard region proportion corresponding to a first object in the first sample image; obtaining a standard region segmentation result corresponding to the first sample image based on the first standard region proportion; and training a first initial segmentation model based on the first sample image and the standard region segmentation result, to obtain a first target segmentation model.
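One plausible reading of "obtaining a standard region segmentation result based on the standard region proportion" is to label the top fraction of pixels of a score map as the region; the score map and the thresholding rule below are illustrative assumptions:

```python
import numpy as np

def mask_from_proportion(score_map, proportion):
    """Derive a pseudo segmentation target from a known region proportion:
    the top `proportion` fraction of pixels by score is labeled foreground."""
    flat = score_map.ravel()
    k = int(round(proportion * flat.size))
    if k == 0:
        return np.zeros_like(score_map, dtype=bool)
    thresh = np.partition(flat, -k)[-k]  # k-th largest score
    return score_map >= thresh

# A 3x3 score map; one third of the image should be foreground.
score = np.array([[0.9, 0.8, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.0]])
mask = mask_from_proportion(score, 1 / 3)
print(mask.sum())  # 3
```

A segmentation model could then be trained against `mask` as the standard region segmentation result, in the spirit of the training step the abstract describes.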

METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND MEDIUM
20230049656 · 2023-02-16

The present disclosure provides a method of processing an image, a device, and a medium. The method of processing the image includes: performing image processing on an original image to obtain a component image for brightness of the original image; determining at least one of the original image and the component image as an image to be processed; classifying a pixel in the image to be processed, so as to obtain a classification result; processing the image to be processed according to the classification result, so as to obtain a target image; and determining an image quality of the original image according to the target image.
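The pipeline in this abstract (brightness component, pixel classification, quality determination) can be sketched as below; the luma weights, the three-class split, and the mid-brightness quality proxy are all illustrative assumptions:

```python
import numpy as np

def brightness_component(rgb):
    """Brightness component of an RGB image (Rec. 601 luma weights,
    assumed here; the abstract does not name a color transform)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def classify_pixels(y, edges=(85.0, 170.0)):
    """Classify each brightness value as dark (0), mid (1), or bright (2)."""
    return np.digitize(y, edges)

def quality_score(classes):
    """A toy image-quality proxy: fraction of mid-brightness pixels."""
    return float((classes == 1).mean())

img = np.full((2, 2, 3), 128.0)  # uniformly mid-gray RGB image
y = brightness_component(img)
score = quality_score(classify_pixels(y))
```

In the abstract's terms, `y` plays the role of the component image, the class map is the classification result, and `score` stands in for the final image-quality determination.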