G06T2207/20004

METHODS FOR CONVERTING AN IMAGE AND CORRESPONDING DEVICES
20230050498 · 2023-02-16 ·

The invention concerns a method for converting an input image comprising an input luminance component made of elements into an output image comprising an output luminance component made of elements, the respective ranges of the output and input luminance component element values being of different extension. The method comprises, for the input image: computing a value of a general variable representative of at least two input luminance component element values; transforming each input luminance component element value into a corresponding output luminance component element value according to the computed general variable value; and converting the input image using the determined output luminance component element values. The transforming step uses a set of pre-determined output values organized into a 2D Look-Up-Table (2D LUT) comprising two input arrays indexing a set of chosen input luminance component values and a set of chosen general variable values, respectively. Each pre-determined output value matches a pair made of an indexed input luminance component value and an indexed general variable value, and each input luminance component element value is transformed into the corresponding output luminance component element value using at least one pre-determined output value.
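The 2D LUT lookup described above can be sketched as follows. The choice of the image mean as the "general variable" and of bilinear interpolation between the four surrounding LUT entries are illustrative assumptions; the abstract only requires a value representative of at least two elements and the use of at least one pre-determined output value.

```python
import numpy as np

def tone_map_2d_lut(luma, lut, luma_index, var_index):
    """Transform luminance elements with a 2D LUT indexed by
    (input luminance value, general variable value).
    luma: array of input luminance elements.
    lut: 2D array of pre-determined output values.
    luma_index / var_index: sorted arrays of indexed values."""
    # General variable: here the mean of the input luminance elements
    # (one possible "value representative of at least two elements").
    g = luma.mean()

    # Locate g between two indexed general-variable values.
    j = np.clip(np.searchsorted(var_index, g) - 1, 0, len(var_index) - 2)
    tj = (g - var_index[j]) / (var_index[j + 1] - var_index[j])

    # Locate every luminance element between two indexed luminance values.
    i = np.clip(np.searchsorted(luma_index, luma) - 1, 0, len(luma_index) - 2)
    ti = (luma - luma_index[i]) / (luma_index[i + 1] - luma_index[i])

    # Bilinear blend of the four surrounding pre-determined output values.
    return ((1 - ti) * (1 - tj) * lut[i, j]
            + ti * (1 - tj) * lut[i + 1, j]
            + (1 - ti) * tj * lut[i, j + 1]
            + ti * tj * lut[i + 1, j + 1])
```

With a LUT whose entries reproduce the indexed luminance values, the transform reduces to the identity, which is a convenient sanity check.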

SYSTEM AND METHODS FOR ULTRASOUND ACQUISITION WITH ADAPTIVE TRANSMITS
20230025182 · 2023-01-26 ·

Methods and systems are provided for dynamically selecting ultrasound transmits. In one example, a method includes dynamically updating a number of transmit lines and/or a pattern of transmit lines for acquiring an ultrasound image based on a prior ultrasound image and a task to be performed with the ultrasound image, and acquiring the ultrasound image with an ultrasound probe controlled to operate with the updated number of transmit lines and/or the updated pattern of transmit lines.
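A minimal sketch of the selection logic follows. The specific tasks, motion metric, thresholds, and line counts are illustrative assumptions, not values from the patent.

```python
def update_transmit_lines(prior_image_motion, task):
    """Dynamically choose a number and pattern of transmit lines from a
    prior image and the imaging task (all thresholds are assumptions)."""
    if task == "doppler":            # flow task: tolerate sparser B-mode lines
        n_lines = 64
    elif prior_image_motion > 0.5:   # fast motion in prior image: denser sampling
        n_lines = 192
    else:                            # quasi-static view: fewer transmits
        n_lines = 96
    # Pattern: n_lines indices evenly spaced across a 256-line aperture.
    pattern = [round(k * 255 / (n_lines - 1)) for k in range(n_lines)]
    return n_lines, pattern
```

The probe would then be controlled to fire only the lines in `pattern` for the next acquisition.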

Three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging

For three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging, the three-dimensional segmentation is output by a machine-learnt multi-task generator. The machine-learnt multi-task generator is trained from 3D information, such as a sparse ICE volume assembled from the 2D ICE images, and is trained to output both the 3D segmentation and a complete volume. The 3D segmentation may be projected to 2D and used, together with an ICE image, as an input to another network trained to output a 2D segmentation for that ICE image. Display of the 3D segmentation and/or 2D segmentation may guide ablation of tissue in the patient.
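The projection-and-stacking step can be sketched as below. Using a max projection along one axis is an illustrative simplification; the patent's geometry-aware projection onto the ICE imaging plane is more involved.

```python
import numpy as np

def project_seg_to_2d(seg3d, axis=0):
    """Project a binary 3D segmentation to 2D (here a simple
    max projection along one axis; an illustrative assumption)."""
    return seg3d.max(axis=axis)

def stack_ice_input(ice_image, seg2d):
    """Concatenate the ICE frame and the projected segmentation as a
    2-channel input for the second (2D-segmentation) network."""
    return np.stack([ice_image, seg2d.astype(ice_image.dtype)], axis=0)
```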

COLLABORATIVE TRACKING
20220405959 · 2022-12-22 ·

An imaging device can receive an image of a portion of an environment. The environment can include an object, such as a hand or a display. The imaging device can identify a data stream from an external device, for instance by detecting the data stream in the image or by receiving the data stream wirelessly from the external device. The imaging device can detect a condition based on the image and/or the data stream, for instance by detecting that the object is missing from the image, by detecting a low resource at the imaging device, and/or by detecting visual media content displayed by a display in the image. Upon detecting the condition, the imaging device automatically determines a location of the object (or a portion thereof) using the data stream and/or the image. The imaging device generates and/or outputs content that is based on the location of the object.
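The fallback logic can be sketched as follows. The 20% battery threshold and the tuple-based locations are illustrative assumptions.

```python
def locate_object(image_detection, stream_location, battery_level):
    """Use the imaging device's own image-based detection when available
    and resources allow; otherwise fall back to the location carried in
    the external device's data stream (threshold is an assumption)."""
    low_resource = battery_level < 0.2
    if image_detection is None or low_resource:
        # Object missing from the image, or conserving device resources:
        # collaborate with the external device's data stream.
        return stream_location
    return image_detection
```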

Endoscope system, processor device, and method of operating endoscope system for discriminating a region of an observation target
11510599 · 2022-11-29 ·

An endoscope system includes a light source unit, an image sensor, an image acquisition unit, a first image generation unit, a second image generation unit, and a region discrimination unit. The light source unit emits a plurality of types of illumination light beams with different wavelengths. The image acquisition unit acquires images corresponding to the respective illumination light beams. The first image generation unit generates a first image (white light image) serving as a base of a display image. The second image generation unit generates a second image (bright/dark region discrimination image or the like), using at least one image having a different corresponding wavelength from that of the image used for the generation of the first image. The region discrimination unit discriminates the regions in the observation target, using the second image.
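A minimal sketch of the region discrimination on the second image follows. The fixed thresholds and the three-way bright/mid/dark labeling are illustrative assumptions.

```python
import numpy as np

def discriminate_regions(second_image, dark_thresh=0.25, bright_thresh=0.75):
    """Bright/dark region discrimination on the second image (formed
    from an illumination wavelength not used for the white-light base
    image). Returns a label map: 0 = dark, 1 = mid, 2 = bright.
    Thresholds are illustrative assumptions."""
    labels = np.ones(second_image.shape, dtype=np.uint8)
    labels[second_image < dark_thresh] = 0
    labels[second_image > bright_thresh] = 2
    return labels
```

The label map could then be overlaid on the white-light display image to mark the discriminated regions.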

CAMERA DEVICE AND IMAGE PROCESSING METHOD
20220375131 · 2022-11-24 ·

A camera device includes: an imaging unit that captures an image of an imaging area in which a subject is present; a memory that stores a camera parameter related to imaging; a detection unit that detects the subject from the captured image; a first determination unit that primarily determines a color of a target portion of the detected subject; a second determination unit that, when the color determined by the first determination unit is a predetermined color having a plurality of gradations, adjusts the first determination unit's result for that predetermined color based on the camera parameter; and a communication unit that transmits the adjustment result of the predetermined color and information on the target portion to an external device in association with each other.
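The second determination step can be sketched as below. Treating the camera parameter as an exposure gain and applying a linear grade shift are illustrative assumptions, not the patent's specific adjustment.

```python
def adjust_color_grade(initial_grade, exposure_gain, n_grades=5):
    """Second determination unit sketch: shift the first unit's
    gradation estimate for a multi-gradation color (e.g. light vs dark
    blue) using a stored camera parameter (here, exposure gain)."""
    # Higher gain brightens the capture, so the true shade is darker
    # than it appears; shift the grade index accordingly (assumption).
    corrected = initial_grade + round(exposure_gain - 1.0)
    # Clamp to the valid gradation range.
    return max(0, min(n_grades - 1, corrected))
```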

CONTENT ADAPTIVE FILTERING VIA RINGING ESTIMATION AND SUPPRESSION
20230096874 · 2023-03-30 ·

Systems, apparatuses, and methods for implementing content adaptive processing via ringing estimation and suppression are disclosed. A ring estimator estimates the amount of ringing when a wide filter kernel is used for image processing. The amount of ringing can be specified as an under-shoot or an over-shoot. A blend factor calculation unit determines whether the estimated amount of ringing is likely to be visually objectionable. If so, the blend factor calculation unit generates a blend factor value to suppress the objectionable ringing. The blend factor value is generated for each set of source pixels based on this determination and then controls how the outputs of the narrow and wide filters are mixed for the corresponding set of source pixels. The preferred blending between the narrow and wide filters is changeable on a pixel-by-pixel basis during image processing.
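The estimate-then-blend flow can be sketched in 1D as follows. Measuring shoot against the local min/max of the source pixels and the `strength` mapping from shoot to blend factor are illustrative assumptions.

```python
import numpy as np

def suppress_ringing(src, wide_out, narrow_out, radius=2, strength=5.0):
    """Estimate ringing as the over/under-shoot of the wide-kernel
    output beyond the local min/max of the source pixels, turn it into
    a per-pixel blend factor, and mix the narrow and wide filter
    outputs (strength mapping is an illustrative assumption)."""
    n = len(src)
    out = np.empty(n)
    for k in range(n):
        lo, hi = max(0, k - radius), min(n, k + radius + 1)
        local_min, local_max = src[lo:hi].min(), src[lo:hi].max()
        # Ringing = amount the wide filter shoots past the local range.
        shoot = max(wide_out[k] - local_max, local_min - wide_out[k], 0.0)
        # Larger shoot -> larger blend factor -> lean toward narrow filter.
        blend = min(1.0, strength * shoot)
        out[k] = blend * narrow_out[k] + (1 - blend) * wide_out[k]
    return out
```

On a step edge where the wide filter over- and under-shoots, the blend factor pulls those pixels back toward the narrow filter's ringing-free output.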

METHOD AND ELECTRONIC DEVICE FOR MULTI-FUNCTIONAL IMAGE RESTORATION

A method for performing multi-functional image restoration by an electronic device with a trained Machine Learning (ML) model is provided. The method includes receiving an image, determining channels of the image, and determining whether the number of restructurings needed for the channels is one. When the number of restructurings needed for the channels is not one, the method includes restructuring each channel into a first channel set, generating first inferences of the image corresponding to each channel by feeding the first channel set to the trained ML model, and generating a final inference image by combining the first inferences. When the number of restructurings needed for the channels is one, the method includes restructuring the channels into a second channel set and generating a second inference of the image by feeding the second channel set to the trained ML model.
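The two branches can be sketched as below. Modeling the trained ML model as any array-to-array callable, and combining per-channel inferences by stacking, are illustrative assumptions.

```python
import numpy as np

def restore_multifunctional(image, model, per_channel):
    """Two-branch restoration sketch: either restructure each channel
    into its own input set and combine the per-channel inferences, or
    feed one restructured channel set through the model once.
    `model` is any callable mapping an array to an array (assumption)."""
    if per_channel:
        # More than one restructuring: infer each channel separately,
        # then combine into the final inference image.
        channels = [image[..., c] for c in range(image.shape[-1])]
        inferences = [model(ch) for ch in channels]
        return np.stack(inferences, axis=-1)
    # Single restructuring: one pass over the whole channel set.
    return model(image)
```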

Methods of spatial normalization of positron emission tomography images

An adaptive template image for registering a PET or a SPECT image includes a template image model including variability of values for each voxel in a template image according to one or more control parameters.
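One way such a model could look is a linear combination of basis volumes weighted by the control parameters; the linear form is an illustrative assumption, not the patent's specific model.

```python
import numpy as np

def adaptive_template(mean_img, bases, params):
    """Adaptive template sketch: per-voxel variability encoded as basis
    volumes weighted by control parameters (linear model is an
    illustrative assumption), yielding one template image per parameter
    setting for PET/SPECT registration."""
    template = mean_img.astype(float).copy()
    for basis, c in zip(bases, params):
        template += c * basis
    return template
```

Registration would then optimize both the spatial transform and the control parameters so the generated template best matches the subject image.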

SYSTEMS AND METHODS FOR ACCURATE AND RAPID POSITRON EMISSION TOMOGRAPHY USING DEEP LEARNING
20220343496 · 2022-10-27 ·

A computer-implemented method is provided for improving image quality with shortened acquisition time. The method comprises: determining an accelerated image acquisition parameter for imaging a subject using a medical imaging apparatus; acquiring, using the medical imaging apparatus, a medical image of the subject according to the accelerated image acquisition parameter; applying a deep network model to the medical image to generate a corresponding transformed medical image with improved quality; and combining the medical image and the corresponding transformed medical image using an adaptive mixing algorithm to generate an output image.
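The adaptive mixing step can be sketched as below. Deriving the per-pixel weight from the local variance of the acquired image is an illustrative assumption; the patent does not specify the weight signal.

```python
import numpy as np

def adaptive_mix(original, enhanced, window=3):
    """Blend the acquired image and the deep-network output with a
    per-pixel weight. Weight = clipped local variance of the original
    (assumption): noisy regions lean on the enhanced image, smooth
    regions keep the acquired data."""
    pad = window // 2
    padded = np.pad(original, pad, mode="edge")
    out = np.empty_like(original, dtype=float)
    h, w = original.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            alpha = min(1.0, patch.var())  # higher variance -> trust DL output
            out[i, j] = alpha * enhanced[i, j] + (1 - alpha) * original[i, j]
    return out
```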