G06T5/60

Laparoscopic image smoke removal method based on generative adversarial network

Disclosed is a laparoscopic image smoke removal method based on a generative adversarial network, belonging to the technical field of computer vision. The method includes: processing a laparoscopic image sample to be processed using a smoke mask segmentation network to acquire a smoke mask image; inputting the laparoscopic image sample and the smoke mask image into a smoke removal network, and extracting features of the sample using a multi-level smoke feature extractor to acquire a light smoke feature vector and a heavy smoke feature vector; and acquiring a smoke-free laparoscopic image according to the light smoke feature vector, the heavy smoke feature vector and the smoke mask image, by filtering out smoke information while the mask's shielding effect preserves the smoke-free regions of the laparoscopic image. The method is robust and can be embedded into a laparoscopic device for use.
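The patent does not publish code, but the final composition step — removing smoke in masked regions while the mask shields the rest of the image — can be sketched as a per-pixel blend. The function name and the assumption that the mask is a per-pixel smoke probability are ours:

```python
import numpy as np

def compose_smoke_free(image, dehazed, smoke_mask):
    """Blend dehazed pixels into smoky regions; where the mask is zero it
    'shields' the original image, leaving smoke-free pixels untouched.
    image, dehazed: HxWx3 floats in [0, 1]; smoke_mask: HxW floats in [0, 1]."""
    m = smoke_mask[..., None]               # broadcast the mask over channels
    return m * dehazed + (1.0 - m) * image

# toy example: only the right column is marked as smoke
img     = np.full((2, 2, 3), 0.8)
dehazed = np.full((2, 2, 3), 0.3)
mask    = np.array([[0.0, 1.0],
                    [0.0, 1.0]])
out = compose_smoke_free(img, dehazed, mask)
```

In the patented network this blend would be driven by the learned smoke features rather than a fixed dehazed image; the sketch only illustrates the mask-shielding idea.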

Video content removal using flow-guided adaptive learning
11935214 · 2024-03-19

Presented herein are systems and methods for an end-to-end solution for object removal from a video using adaptive learning. In one or more embodiments, using a pre-trained image inpainting model, in-scene training data may be generated using optical-flow guided sampling. In one or more embodiments, the sampled patches are used to generate a training dataset, which is used to further train the image inpainting model until reaching a stop condition. The adaptively trained inpainting model may be used to generate a modified video in which the desired object or objects have been removed and the corresponding removed portion(s) have been filled (or inpainted) to preserve the integrity of the image.
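The flow-guided sampling step can be illustrated with a minimal sketch: patches are kept where the optical-flow magnitude indicates scene motion (the patch size, threshold, and selection rule here are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def sample_training_patches(frame, flow, patch=8, max_patches=16, thresh=1.0):
    """Flow-guided sampling sketch: keep patches whose mean optical-flow
    magnitude exceeds `thresh`, i.e. regions that scene motion reveals.
    frame: HxW image; flow: HxWx2 per-pixel (dx, dy) field."""
    h, w = frame.shape[:2]
    mag = np.linalg.norm(flow, axis=-1)     # per-pixel flow magnitude
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if mag[y:y + patch, x:x + patch].mean() > thresh:
                patches.append(frame[y:y + patch, x:x + patch])
            if len(patches) == max_patches:
                return patches
    return patches

# toy example: motion only in the right half of a 16x16 frame
frame = np.zeros((16, 16))
flow = np.zeros((16, 16, 2))
flow[:, 8:, 0] = 3.0                        # horizontal motion on the right
patches = sample_training_patches(frame, flow)
```

The sampled patches would then form the in-scene dataset used to fine-tune the pre-trained inpainting model until the stop condition is reached.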

DOMAIN SPECIFIC IMAGE QUALITY ASSESSMENT

In a technique to assess the blurriness of an image, an image of a face is received, the image including a depiction of lips. A processing device determines a region of interest in the image, wherein the region of interest comprises an area inside of the lips. The processing device applies a focus operator to the pixels within the region of interest, and calculates a sharpness metric for the region of interest using an output of the focus operator. The processing device determines whether the sharpness metric satisfies a sharpness criterion, and one or more additional operations are performed responsive to determining that the sharpness metric satisfies the sharpness criterion.
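The abstract does not name a specific focus operator; variance-of-Laplacian is a common choice and serves as a sketch of the ROI-based sharpness test (the wraparound border handling and the threshold are our simplifications):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance-of-Laplacian focus measure: larger values mean sharper
    content. np.roll wraps at the borders, acceptable for a sketch."""
    lap = (-4.0 * gray
           + np.roll(gray,  1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray,  1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap.var())

def roi_is_sharp(gray, roi, threshold):
    """Apply the focus operator inside the region of interest only
    (e.g. the area inside the lips) and test the sharpness criterion."""
    y0, y1, x0, x1 = roi
    return laplacian_variance(gray[y0:y1, x0:x1]) >= threshold

# a flat (blurred-out) patch vs. a checkerboard (sharp edges everywhere)
flat = np.zeros((8, 8))
checker = np.indices((8, 8)).sum(axis=0) % 2
```

Restricting the operator to the ROI is the key point: global sharpness scores can be dominated by background texture outside the region that matters.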

Fusion network-based method for image super-resolution and non-uniform motion deblurring

Disclosed is a fusion network-based method for image super-resolution and non-uniform motion deblurring. The method achieves, for the first time, restoration of a low-resolution non-uniform motion-blurred image based on a deep neural network. The network uses two branch modules to respectively extract features for image super-resolution and non-uniform motion deblurring, and achieves, by means of a trainable feature fusion module, adaptive fusion of the outputs of the two feature-extraction branches. Finally, an upsampling reconstruction module performs the joint non-uniform motion deblurring and super-resolution task. A self-generated training dataset is used to train the network offline, thereby achieving restoration of the low-resolution non-uniform motion-blurred image.
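The adaptive fusion of the two branches can be sketched as a learned per-element gate; in the real network the gate logits would come from the trainable fusion module, whereas here they are passed in directly for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(sr_feat, deblur_feat, gate_logits):
    """Trainable-fusion sketch: a per-element gate (its logits produced by
    the learned fusion module in the actual network) adaptively mixes the
    super-resolution branch and deblurring branch feature maps."""
    g = sigmoid(gate_logits)
    return g * sr_feat + (1.0 - g) * deblur_feat

sr  = np.full((4, 4), 1.0)
db  = np.full((4, 4), 0.0)
mix = fuse_features(sr, db, np.zeros((4, 4)))   # logit 0 -> equal mix
```

Because the gate is differentiable, it can be trained end-to-end with both branches, which is what makes the fusion adaptive rather than a fixed average.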

Computational microscopy-based system and method for automated imaging and analysis of pathology specimens

Described herein are systems and methods for assessing a biological sample. The methods include: characterizing a speckled pattern to be applied by a diffuser; positioning a biological sample relative to at least one coherent light source such that the at least one coherent light source illuminates the biological sample; diffusing light produced by the at least one coherent light source; capturing, based on the diffused light, a plurality of illuminated images of the biological sample with the embedded speckle pattern; iteratively reconstructing the plurality of speckle-illuminated images of the biological sample to recover an image stack of reconstructed images; stitching together each image in the image stack to create a whole slide image, wherein each image of the image stack at least partially overlaps with a neighboring image; and identifying one or more features of the biological sample. The methods may be performed by a near-field Fourier Ptychographic system.
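The stitching step relies on partial overlap between neighboring images. One common way to merge overlapping tiles is feather-averaging across the seam; the linear-ramp weighting below is our choice, not specified by the patent:

```python
import numpy as np

def stitch_row(tiles, overlap):
    """Stitch equal-size tiles left-to-right; neighbouring tiles overlap by
    `overlap` pixels and each seam is feather-averaged with linear ramps."""
    h, w = tiles[0].shape
    step = w - overlap
    out_w = step * (len(tiles) - 1) + w
    acc  = np.zeros((h, out_w))
    wsum = np.zeros((h, out_w))
    for i, tile in enumerate(tiles):
        wt = np.ones(w)
        if i > 0:                              # ramp up over a shared left edge
            wt[:overlap] = np.linspace(0.0, 1.0, overlap)
        if i < len(tiles) - 1:                 # ramp down over a shared right edge
            wt[-overlap:] = np.linspace(1.0, 0.0, overlap)
        x = i * step
        acc[:, x:x + w]  += tile * wt
        wsum[:, x:x + w] += wt
    return acc / wsum

# two 4x6 tiles that agree on their 2-pixel overlap stitch seamlessly
tiles = [np.full((4, 6), 2.0), np.full((4, 6), 2.0)]
mosaic = stitch_row(tiles, overlap=2)
```

A full whole-slide pipeline would also register tiles before blending; this sketch assumes the tiles are already aligned on a regular grid.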

Video quality assessment method and apparatus

A video quality assessment apparatus and method are provided. The video quality assessment apparatus includes a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: identify whether a frame included in a video is a fully-blurred frame or a partially-blurred frame based on a blur level of the frame; obtain, in response to the frame being the fully-blurred frame, an analysis-based quality score with respect to the fully-blurred frame; obtain, in response to the frame being the partially-blurred frame, a model-based quality score with respect to the partially-blurred frame; and process the video based on at least one of the analysis-based quality score or the model-based quality score to obtain a processed video.
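The routing logic — fully-blurred frames to the analysis-based scorer, partially-blurred frames to the model-based scorer — can be sketched as a threshold on the fraction of blurred pixels. All three thresholds here are illustrative, not from the patent:

```python
import numpy as np

def route_frame(blur_map, full_thresh=0.9, partial_thresh=0.1):
    """Decide which scorer the apparatus would invoke for a frame.
    blur_map: HxW per-pixel blur probabilities in [0, 1]. The 0.5 pixel
    cutoff and both frame-level thresholds are hypothetical."""
    blurred_frac = float((blur_map > 0.5).mean())
    if blurred_frac >= full_thresh:
        return "analysis-based"      # fully-blurred frame
    if blurred_frac >= partial_thresh:
        return "model-based"         # partially-blurred frame
    return "no-blur"                 # neither branch applies

fully  = np.full((4, 4), 0.9)
partly = np.zeros((4, 4)); partly[:2, :] = 0.9
sharp  = np.zeros((4, 4))
```

Splitting the two cases lets a cheap analytic metric handle the easy fully-blurred case while the learned model is reserved for the harder partially-blurred frames.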

Automated digital parameter adjustment for digital images

Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that capture feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
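At its core the parameter adjustment model is a learned mapping from feature vectors to parameter values. A least-squares regressor is a deliberately minimal stand-in for that mapping (the feature and parameter names are invented for the toy example):

```python
import numpy as np

def fit_parameter_model(features, params):
    """Least-squares stand-in for the learned parameter-adjustment model:
    maps visual/contextual feature vectors to editing-parameter values."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, params, rcond=None)
    return W

def predict_params(W, feat):
    return np.append(feat, 1.0) @ W

# toy set: one feature (scene brightness) -> one parameter (exposure shift)
feats  = np.array([[0.0], [0.5], [1.0]])
target = np.array([[1.0], [0.5], [0.0]])        # darker scene -> raise exposure
W = fit_parameter_model(feats, target)
pred = predict_params(W, np.array([0.25]))
```

The patented model captures nonlinear feature interactions that a linear fit cannot; the sketch only shows the features-in, parameters-out contract.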

DENTAL IMAGING SYSTEM UTILIZING ARTIFICIAL INTELLIGENCE
20240078668 · 2024-03-07

A system and a method for training and utilizing deep convolutional neural networks to perform diagnostic and image enhancement operations on images of dentition uses digital images and other auxiliary parameters as inputs for a convolutional neural network. The neural network can output a tooth segmentation map, tooth identifiers, a probability map indicating the presence of caries, cavities or other dental anomalies/conditions, and recommended parameters for use in image enhancement algorithms.
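Two of the listed outputs naturally combine: the tooth segmentation map can localize the caries probability map into per-tooth findings. The any-pixel decision rule below is a hypothetical post-processing step, not specified by the patent:

```python
import numpy as np

def flag_teeth(seg_map, caries_prob, thresh=0.5):
    """Combine two network outputs: a tooth-segmentation map (integer tooth
    ids, 0 = background) and a caries probability map. A tooth is flagged
    if any of its pixels exceed `thresh` (a hypothetical rule)."""
    flags = {}
    for tid in np.unique(seg_map):
        if tid == 0:
            continue                     # skip background
        flags[int(tid)] = bool((caries_prob[seg_map == tid] > thresh).any())
    return flags

seg  = np.array([[1, 1], [2, 2]])        # two teeth in a toy 2x2 image
prob = np.array([[0.9, 0.1], [0.2, 0.1]])
flags = flag_teeth(seg, prob)
```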

SYSTEM AND METHOD FOR END-TO-END DIFFERENTIABLE JOINT IMAGE REFINEMENT AND PERCEPTION
20240070546 · 2024-02-29

System and method for end-to-end differentiable joint image refinement and perception are provided. A learning machine employs an image acquisition device for acquiring a set of training raw images. A processor determines a representation of a raw image, initializes a set of image representation parameters, defines a set of analysis parameters of an image analysis network configured to process the image's representation, and jointly trains the set of representation parameters and the set of analysis parameters to optimize a combined objective function. Also disclosed are a module for transforming pixel-values of the raw image to produce a transformed image comprising pixels of variance-stabilized values, a module for successively performing soft camera projection and image projection processes, and a module for inverse transforming the transformed pixels. The image projection performs multi-level spatial convolution, pooling, subsampling, and interpolation.
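The patent does not name its variance-stabilizing transform; the Anscombe transform is the classic choice for Poisson-distributed raw sensor counts and illustrates the forward/inverse pair the modules would implement:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: a standard variance-stabilizing transform for
    Poisson-distributed raw pixel counts (one common choice; the patent
    does not specify which transform it uses)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse, mapping the processed image back to raw scale."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

raw = np.array([0.0, 4.0, 100.0])
restored = inverse_anscombe(anscombe(raw))
```

Both functions are smooth, so gradients flow through them, which is what makes such a transform usable inside an end-to-end differentiable pipeline.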

EFFICIENT FLOW-GUIDED MULTI-FRAME DE-FENCING

The present disclosure provides methods, apparatuses, and computer-readable mediums for performing multi-frame de-fencing by a device. In some embodiments, a method includes obtaining an image burst having at least one portion of a background scene obstructed by an opaque obstruction. The method further includes generating a plurality of obstruction masks marking the at least one portion of the background scene obstructed by the opaque obstruction in images of the image burst. The method further includes computing a motion of the background scene, with respect to a keyframe selected from the images of the image burst, by applying an occlusion-aware optical flow model. The method further includes reconstructing the selected keyframe by providing a combination of features to an image fusion and inpainting network. The method further includes providing, to the user, the reconstructed keyframe comprising an unobstructed version of the background scene of the image burst.
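The fusion idea — fill keyframe pixels hidden by the fence from flow-aligned frames where those pixels are visible — can be sketched naively; the real method uses a learned fusion and inpainting network rather than this first-visible-wins rule:

```python
import numpy as np

def fuse_burst(keyframe, key_mask, warped_frames, warped_masks):
    """Naive stand-in for the fusion/inpainting network: each keyframe pixel
    hidden by the obstruction is copied from the first flow-aligned frame in
    which that pixel is visible. Remaining holes would be handed to the
    inpainting network in the actual pipeline."""
    out  = keyframe.copy()
    hole = key_mask.astype(bool)
    for frame, mask in zip(warped_frames, warped_masks):
        take = hole & ~mask.astype(bool)   # visible here, hidden in keyframe
        out[take] = frame[take]
        hole &= ~take                      # mark those pixels as filled
    return out

key      = np.array([[1.0, 0.0], [1.0, 1.0]])  # 0.0 sits behind the fence
key_mask = np.array([[0, 1], [0, 0]])          # obstruction mask for keyframe
aligned  = [np.array([[9.0, 5.0], [9.0, 9.0]])]  # one flow-aligned frame
amasks   = [np.zeros((2, 2), dtype=int)]         # fully visible there
result = fuse_burst(key, key_mask, aligned, amasks)
```

The occlusion-aware optical flow is what makes the alignment of `warped_frames` reliable near the fence, where ordinary flow estimators break down.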