
Identifying the quality of the cell images acquired with digital holographic microscopy using convolutional neural networks

A system for performing adaptive focusing of a microscopy device comprises a microscopy device configured to acquire microscopy images depicting cells and one or more processors executing instructions for performing a method that includes extracting pixels from the microscopy images. Each set of pixels corresponds to an independent cell. The method further includes using a trained classifier to assign one of a plurality of image quality labels to each set of pixels indicating the degree to which the independent cell is in focus. If the image quality labels corresponding to the sets of pixels indicate that the cells are out of focus, a focal length adjustment for adjusting focus of the microscopy device is determined using a trained machine learning model. Then, executable instructions are sent to the microscopy device to perform the focal length adjustment.
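As an illustration only (not the patented CNN classifier or adjustment model), the label-then-adjust loop can be sketched with a simple focus proxy; the label names, thresholds, and step size below are all assumptions:

```python
import numpy as np

# Hypothetical label set; the abstract's classifier is a trained CNN.
LABELS = ("in_focus", "slightly_out_of_focus", "out_of_focus")

def focus_score(cell_pixels):
    """Proxy focus measure: variance of a discrete Laplacian response.
    Stands in for the trained classifier of the abstract."""
    p = cell_pixels.astype(float)
    lap = (-4.0 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return float(lap.var())

def classify(cell_pixels, thresholds=(50.0, 10.0)):
    """Map one cell's set of pixels to an image quality label."""
    s = focus_score(cell_pixels)
    if s >= thresholds[0]:
        return "in_focus"
    return "slightly_out_of_focus" if s >= thresholds[1] else "out_of_focus"

def focal_adjustment(labels):
    """Stand-in for the trained regression model: propose a focal-length
    step (arbitrary units) only when most cells are out of focus."""
    n_bad = sum(label != "in_focus" for label in labels)
    return 0.0 if n_bad <= len(labels) // 2 else 1.5
```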

SYSTEMS AND METHODS FOR REAL-TIME VIDEO ENHANCEMENT
20230038871 · 2023-02-09 ·

A computer-implemented method is provided for improving live video quality. The method comprises: acquiring, using a medical imaging apparatus, a stream of consecutive image frames of a subject, wherein the stream of consecutive image frames is acquired with a reduced radiation dose; applying a deep learning network model to the stream of consecutive image frames to generate an image frame with improved quality; and displaying the image frame with improved quality in real-time on a display.
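A minimal sketch of the streaming shape of such a pipeline, with a plain temporal average standing in for the deep learning model (the window size is an assumption):

```python
import numpy as np

def enhance_stream(frames, window=3):
    """Illustrative stand-in for the abstract's deep learning model:
    denoise each low-dose frame by averaging it with up to `window`
    most recent frames, yielding one enhanced frame per input frame
    so the result can be displayed in real time."""
    buf = []
    for frame in frames:
        buf.append(np.asarray(frame, dtype=float))
        if len(buf) > window:
            buf.pop(0)
        yield np.mean(buf, axis=0)
```

The generator form matters here: each input frame produces an output frame immediately, which is what makes the scheme compatible with live display.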

Ultrasonic diagnostic apparatus, medical image processing apparatus, and non-transitory computer-readable medium storing a computer program

The ultrasonic diagnostic apparatus according to the present embodiment includes processing circuitry. The processing circuitry is configured to: acquire multiple pieces of position data associated with respective pieces of two-dimensional ultrasonic image data relating to multiple cross sections; smooth the acquired position data; and arrange the two-dimensional image data in accordance with the smoothed position data to generate volume data.
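The smooth-then-arrange steps can be sketched as follows; a moving average stands in for whatever smoothing the circuitry actually applies, and positions are assumed to be (x, y, z) rows with z the out-of-plane coordinate:

```python
import numpy as np

def smooth_positions(positions, window=3):
    """Moving-average smoothing of per-slice probe positions, a simple
    stand-in for the smoothing performed by the processing circuitry."""
    pos = np.asarray(positions, dtype=float)
    pad = window // 2
    padded = np.pad(pos, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                     for d in range(pos.shape[1])], axis=1)

def arrange_volume(slices, positions):
    """Order the 2-D slices by the (smoothed) out-of-plane coordinate
    and stack them into volume data."""
    order = np.argsort(np.asarray(positions, dtype=float)[:, -1])
    return np.stack([slices[i] for i in order])
```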

Methods and apparatuses for outputting information and calibrating camera

Embodiments of the present disclosure relate to methods and apparatuses for outputting information and calibrating a camera. The method may include: acquiring a first image, a second image, and a third image, the first image being an image photographed by a to-be-calibrated camera, the second image being a high-precision map image including a target area indicated by the first image, and the third image being a reflectance image including the target area; fusing the second image and the third image to obtain a fused image; determining a matching point pair based on points selected by a user in the first image and the fused image; and calibrating the to-be-calibrated camera based on coordinates of the matching point pair.
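A toy version of the final calibration step, using the user-selected matching point pairs: a least-squares 2-D affine fit stands in for full camera calibration (which would recover intrinsics and extrinsics, not a planar transform):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform from matching point pairs;
    an illustrative stand-in for full camera calibration."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~= dst
    return M.T                                     # 2x3 affine matrix

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an array of 2-D points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```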

Systems and methods for improving soft tissue contrast, multiscale modeling and spectral CT

Systems and methods for improving soft tissue contrast, characterizing tissue, classifying phenotype, stratifying risk, and performing multi-scale modeling aided by multiple energy or contrast excitation and evaluation are provided. The systems and methods can include single and multi-phase acquisitions and broad and local spectrum imaging to assess atherosclerotic plaque tissues in the vessel wall and perivascular space.

Free-viewpoint method and system

A method of generating a 3D reconstruction of a scene, a plurality of cameras being positioned around the scene, comprises: obtaining the extrinsics and intrinsics of a virtual camera within the scene; accessing a data structure so as to determine a camera pair that is to be used in reconstructing the scene from the viewpoint of the virtual camera; wherein the data structure defines a voxel representation of the scene, the voxel representation comprising a plurality of voxels, at least some of the voxel surfaces being associated with respective camera pair identifiers; wherein each camera pair identifier associated with a respective voxel surface corresponds to a camera pair that has been identified as being suitable for obtaining depth data for the part of the scene within that voxel and for which the averaged pose of the camera pair is oriented towards the voxel surface; identifying, based on the obtained extrinsics and intrinsics of the virtual camera, at least one voxel that is within the field of view of the virtual camera and a corresponding voxel surface that is oriented towards the virtual camera; identifying, based on the accessed data structure, at least one camera pair that is suitable for reconstructing the scene from the viewpoint of the virtual camera; and generating a reconstruction of the scene from the viewpoint of the virtual camera based on the images captured by the cameras in the identified at least one camera pair.
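The data structure described above can be sketched as a mapping from (voxel index, surface) keys to camera pair identifiers; the voxel indices, surface names, and camera names below are all illustrative:

```python
# Minimal sketch of the abstract's lookup structure: each voxel surface
# that has a suitable camera pair maps to that pair's identifier.
pair_lookup = {
    ((0, 0, 0), "+x"): ("cam1", "cam2"),
    ((0, 0, 0), "-z"): ("cam3", "cam4"),
    ((1, 0, 0), "+x"): ("cam2", "cam5"),
}

def select_camera_pairs(visible_voxel_surfaces, lookup):
    """Given the voxel surfaces within the virtual camera's field of view
    (and oriented towards it), return the camera pairs to use for the
    reconstruction, without duplicates and in first-seen order."""
    pairs = []
    for key in visible_voxel_surfaces:
        pair = lookup.get(key)
        if pair is not None and pair not in pairs:
            pairs.append(pair)
    return pairs
```

Keying on the surface as well as the voxel is what lets the structure encode orientation: the same voxel can hand back different camera pairs depending on which of its faces the virtual camera sees.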

METHOD, SYSTEM, AND IMAGE PROCESSING DEVICE FOR CAPTURING AND/OR PROCESSING ELECTROLUMINESCENCE IMAGES, AND AN AERIAL VEHICLE

A method (400) of capturing and processing electroluminescence (EL) images (1910) of a PV array (40) is disclosed herein. In a described embodiment, the method (400) includes: controlling an aerial vehicle (20) to fly along a flight path to capture EL images (1910) of corresponding PV array subsections (512b) of the PV array (40); deriving respective image quality parameters from at least some of the captured EL images; dynamically adjusting a flight speed of the aerial vehicle along the flight path, based on the respective image quality parameters, for capturing the EL images (1910) of the PV array subsections (512b); extracting a plurality of frames (1500) of the PV array subsection (512b) from the EL images (1910); determining a reference frame having a highest image quality of the PV array subsection (512b) from among the extracted frames (2100); performing image alignment of the extracted frames (2100) to the reference frame to generate image-aligned frames (2130); and processing the image-aligned frames (2130) to produce an enhanced image (2140) of the PV array subsection (512b) having a higher resolution than the reference frame. A system, an image processing device, and an aerial vehicle for the method are also disclosed.
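The reference-selection, alignment, and merge steps can be compressed into a sketch; the quality metric, the brute-force integer-shift alignment, and plain averaging are all stand-ins for the real (and far more involved) pipeline:

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy image quality parameter (illustrative)."""
    f = np.asarray(frame, dtype=float)
    return float(np.abs(np.diff(f, axis=0)).mean()
                 + np.abs(np.diff(f, axis=1)).mean())

def align(frame, ref, max_shift=2):
    """Brute-force integer-shift alignment to the reference frame."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = float(np.mean((np.roll(frame, (dy, dx), axis=(0, 1)) - ref) ** 2))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return np.roll(frame, best, axis=(0, 1))

def enhance(frames):
    """Pick the sharpest frame as reference, align the others to it,
    and average the aligned stack into a single enhanced image."""
    ref_idx = int(np.argmax([sharpness(f) for f in frames]))
    ref = np.asarray(frames[ref_idx], dtype=float)
    aligned = [align(np.asarray(f, dtype=float), ref) for f in frames]
    return np.mean(aligned, axis=0), ref_idx
```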

Extended tissue types for increased granularity in cardiovascular disease phenotyping

Systems and methods for improving soft tissue contrast, characterizing tissue, classifying phenotype, stratifying risk, and performing multi-scale modeling aided by multiple energy or contrast excitation and evaluation are provided. The systems and methods can include single and multi-phase acquisitions and broad and local spectrum imaging to assess atherosclerotic plaque tissues in the vessel wall and perivascular space.

REMOVING AN ARTIFACT FROM AN IMAGE

An inspection tool comprises an imaging system configured to image a portion of a semiconductor substrate. The inspection tool may further comprise an image analysis system configured to: obtain an image of a structure on the semiconductor substrate from the imaging system; encode the image of the structure into a latent space, thereby forming a first encoding; subtract an artifact vector, representative of an artifact in the image, from the first encoding, thereby forming a second encoding; and decode the second encoding to obtain a decoded image.
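The encode / subtract / decode steps can be illustrated with a toy *linear* encoder in place of the learned model; the matrices, shapes, and latent dimension below are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder standing in for the learned model that
# maps images to and from the latent space.
W = rng.normal(size=(8, 64))      # encode: 8x8 image -> 8-d latent vector
W_dec = np.linalg.pinv(W)         # decode: latent vector -> 8x8 image

def encode(img):
    """First encoding: project the image into the latent space."""
    return W @ np.asarray(img, dtype=float).ravel()

def decode(z):
    """Decode a latent vector back into an image."""
    return (W_dec @ z).reshape(8, 8)

def remove_artifact(img, artifact_vector):
    """Subtract the artifact vector from the first encoding to form a
    second encoding, then decode it (mirrors the abstract's steps)."""
    second_encoding = encode(img) - artifact_vector
    return decode(second_encoding)
```

Because this toy encoder is linear, subtracting the artifact's own encoding removes the artifact exactly; with a learned nonlinear encoder the subtraction is only an approximation along the artifact direction.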

INFORMATION PROCESSING APPARATUS, IMAGING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
20230020328 · 2023-01-19

There is provided an information processing apparatus including a processor and a memory connected to or built into the processor. The processor is configured to process a captured image by using an AI method that uses a neural network and perform composition processing of combining a first image obtained by processing the captured image by using the AI method and a second image obtained by processing the captured image without using the AI method.
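A minimal sketch of the two-path composition; the smoothing filter standing in for the AI method, the identity non-AI path, and the `alpha` blending weight are all assumptions, not details from the abstract:

```python
import numpy as np

def ai_process(img):
    """Placeholder for the AI-method path; a simple 5-point smoothing
    filter stands in for the neural network."""
    f = np.asarray(img, dtype=float)
    out = f.copy()
    out[1:-1, 1:-1] = (f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
                       + f[1:-1, :-2] + f[1:-1, 2:]) / 5.0
    return out

def classic_process(img):
    """Placeholder for the non-AI path (here: the image as-is)."""
    return np.asarray(img, dtype=float)

def compose(img, alpha=0.5):
    """Composition processing: weighted blend of the first (AI) and
    second (non-AI) processed images; `alpha` is an assumed parameter."""
    return alpha * ai_process(img) + (1.0 - alpha) * classic_process(img)
```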