
Using morphological operations to process frame masks in video content

A computer-implemented method can decode a frame of video data comprising an array of pixels to obtain decoded luma and chroma values corresponding to the array of pixels, and extract a frame mask based on the decoded luma values. The frame mask can include an array of mask values respectively corresponding to the array of pixels; a mask value indicates whether the corresponding pixel is in the foreground or the background of the frame. The method can perform a morphological operation on the frame mask to change one or more mask values, indicating that their corresponding pixels are removed from the foreground and added to the background of the frame. The method can also identify foreground pixels after the morphological operation is performed, and render a foreground image for display based on the decoded luma and chroma values of the foreground pixels.
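The mask-shrinking operation described here corresponds to binary erosion: a boundary foreground pixel whose neighbourhood touches background is flipped to background. A minimal numpy sketch, assuming a 0/1 mask where 1 marks foreground (the 3x3 neighbourhood and the mask contents are illustrative assumptions, not taken from the abstract):

```python
import numpy as np

def erode_mask(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel stays foreground (1) only if all of
    its 8 neighbours are also foreground; the frame border is treated
    as background, so boundary foreground pixels are stripped."""
    padded = np.pad(mask, 1, mode="constant", constant_values=0)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy : 1 + dy + mask.shape[0],
                          1 + dx : 1 + dx + mask.shape[1]]
    return out

# A 5x5 mask with a 3x3 foreground block: erosion keeps only the centre
# pixel, moving the ring of boundary pixels into the background.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
eroded = erode_mask(mask)
```

After erosion, only pixels still marked 1 are treated as foreground when the luma/chroma values are composited into the rendered foreground image.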

ITERATIVE DIGITAL SUBTRACTION IMAGING FOR EMBOLIZATION PROCEDURES

Method and related system (IPS) for visualizing, in particular, a volume of a substance during its deposition at a region of interest (ROI). A difference image is formed from a projection image and a mask image. The difference image is then analyzed to derive more accurate information about the motion or shape of the substance. The method or system (IPS) is capable of operating in an iterative manner. The proposed system and method can be used for processing fluoroscopic X-ray frames acquired by an imaging arrangement (100) during an embolization procedure.
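The core of digital subtraction imaging is subtracting a mask image (acquired before the substance is deposited) from the live projection, so that static anatomy cancels and mostly the deposited substance remains. A toy numpy sketch under that assumption (frame sizes and pixel values are hypothetical):

```python
import numpy as np

def difference_image(projection: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Digital subtraction: mask image minus-ed out of the projection;
    static structures cancel, leaving mainly the newly deposited substance."""
    return projection.astype(np.int32) - mask.astype(np.int32)

# Hypothetical 4x4 frames: uniform static anatomy, plus a bright deposit
# appearing in the projection at the region of interest.
anatomy = np.full((4, 4), 100, dtype=np.uint8)
projection = anatomy.copy()
projection[1, 1] = 180          # substance deposited at the ROI
diff = difference_image(projection, anatomy)
```

In the iterative scheme the analysis of `diff` (motion/shape of the substance) can feed back into how the next difference image is formed.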

APPARATUS FOR CORRECTION OF COLLIMATOR PENUMBRA IN AN X-RAY IMAGE
20230218258 · 2023-07-13

The present invention relates to an apparatus (10) for correction of collimator penumbra in an X-ray image. The apparatus comprises an input unit (20), a processing unit (30), and an output unit (40). The input unit is configured to provide the processing unit with X-ray data. The processing unit is configured to determine at least one collimator corrected X-ray image of an object. The determination comprises application of an intensity modulation mask to the X-ray data. The intensity modulation mask accounts for intensity variation across a detector of an X-ray acquisition system caused by at least one collimator blade of the X-ray acquisition system, and the X-ray acquisition system was used to acquire the X-ray data. The output unit is configured to output the at least one collimator corrected X-ray image of the object.
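If the penumbra is modelled as a multiplicative intensity falloff across the detector, applying the intensity modulation mask amounts to dividing the recorded data by the mask. A 1-D numpy sketch under that assumption (the falloff profile and intensity values are illustrative, not from the abstract):

```python
import numpy as np

# Hypothetical 1-D detector row: a collimator blade attenuates intensity
# near the left edge (the penumbra), modelled as a multiplicative falloff.
true_row = np.full(8, 200.0)
penumbra = np.array([0.2, 0.5, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0])
raw_row = true_row * penumbra          # what the detector records

# The intensity modulation mask encodes that same falloff; dividing by it
# (guarded away from zero) undoes the penumbra shading.
modulation_mask = penumbra
corrected = raw_row / np.maximum(modulation_mask, 1e-6)
```

In practice the mask would be calibrated per blade position of the acquisition system rather than assumed known, but the correction step itself stays this simple.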

Boundary-aware object removal and content fill
11551337 · 2023-01-10

Systems and methods for removing objects from images are disclosed. An image processing application identifies a boundary of each object of a set of objects in an image. The image processing application identifies a completed boundary for each object of the set of objects by providing the object to a trained model. The image processing application determines a set of masks. Each mask corresponds to an object of the set of objects and represents a region of the image defined by an intersection of the boundary of the object and the boundary of a target object to be removed from the image. The image processing application updates each mask by separately performing content filling on the corresponding region. The image processing application creates an output image by merging each of the updated masks with portions of the image.
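The per-object mask described here is the intersection of an object's (completed) boundary region with the target object's region: the pixels of that object which removal exposes and content fill must repaint. A boolean-mask sketch in numpy, with hypothetical object shapes:

```python
import numpy as np

# Hypothetical 6x6 scene: the region of object A, and the region of the
# target object to be removed from the image.
object_a = np.zeros((6, 6), dtype=bool)
object_a[1:5, 1:4] = True
target = np.zeros((6, 6), dtype=bool)
target[3:6, 2:6] = True

# The mask for object A is the intersection of A's region with the
# target's region; content filling is then performed separately on this
# region before the updated masks are merged back into the image.
fill_mask = object_a & target
```

Doing the fill per object keeps each repainted region consistent with its own object's texture rather than blending across object boundaries.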

Device and method of displaying images
11553157 · 2023-01-10

This application relates to an image display device and method. In one aspect, the image display device includes a communication interface, a user interface, a memory and a processor. The processor may receive, from a first terminal through the communication interface, a stream including a plurality of images captured by the first terminal. The processor may also determine whether the received stream includes a first image in which no face is detected among the plurality of images. The processor may further, in response to determining that the received stream includes the first image, perform image processing on the first image to generate a second image. The processor may further display, through the user interface, the plurality of images by replacing the first image with the second image.
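The display logic reduces to a per-frame substitution: frames with a detected face pass through, frames without one are replaced by a processed version. A sketch with hypothetical stand-ins for the detector and the processing step (`has_face` and `blur` are placeholders, not names from the abstract):

```python
def display_stream(frames, has_face, blur):
    """For each frame in the received stream, show it as-is when a face
    is detected; otherwise substitute a processed (e.g. blurred) frame.
    `has_face` and `blur` are hypothetical stand-ins for the face
    detector and the image-processing step."""
    return [frame if has_face(frame) else blur(frame) for frame in frames]

# Toy usage: frames are strings, and "faces" are frames containing "F".
shown = display_stream(["F1", "x2", "F3"],
                       has_face=lambda f: "F" in f,
                       blur=lambda f: "blurred:" + f)
```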

Images for perception modules of autonomous vehicles

Disclosed are devices, systems, and methods for processing an image. In one aspect, a method includes receiving an image from a sensor array comprising an x-y array of pixels, each pixel in the array having a value selected from one of three primary colors based on the corresponding x-y value in a mask pattern. The method may further include generating a preprocessed image by performing preprocessing on the image. The method may further include performing perception on the preprocessed image to determine one or more outlines of physical objects.
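The mask pattern works like a colour-filter array: at each x-y position it selects which one of the three primary colours that pixel records. A numpy sketch with a hypothetical 2x2 tile (the pattern layout and scene values are illustrative assumptions):

```python
import numpy as np

# Hypothetical mask pattern assigning one primary colour per pixel
# (0 = R, 1 = G, 2 = B), tiled like a simple colour-filter array.
pattern = np.tile(np.array([[0, 1], [1, 2]]), (2, 2))

# Full-colour scene (H x W x 3); the sensor records only the channel the
# mask pattern selects at each x-y position.
scene = np.zeros((4, 4, 3))
scene[..., 0], scene[..., 1], scene[..., 2] = 10, 20, 30
rows, cols = np.indices(pattern.shape)
raw = scene[rows, cols, pattern]       # one sampled value per pixel
```

The preprocessing step then reconstructs a full image from this single-channel-per-pixel raw data before perception runs on it.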

OPHTHALMOLOGY INSPECTION DEVICE AND PUPIL TRACKING METHOD
20230000344 · 2023-01-05

A pupil tracking method includes: retrieving an external eye image of a subject, wherein the external eye image includes a pupil of the subject; performing image preprocessing on the external eye image, wherein the preprocessing includes a binary conversion of the external eye image to obtain a binary image; finding the contour boundary of each feature in the binary image, and identifying the pupil feature based on the variance of the distances from each feature's contour boundary to a corresponding reference point; and fitting the contour boundary of the pupil feature with a boundary-fitting method to find the center coordinate of the pupil feature. The above pupil tracking method can track the pupil of the subject's eyeball without using a stereo camera. An ophthalmology inspection device using this pupil tracking method is also disclosed.
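The variance criterion exploits the pupil's near-circular shape: distances from a circular contour to its centre are nearly constant, so the pupil candidate is the feature whose boundary-distance variance is smallest. A numpy sketch on two synthetic contours (taking the centroid as the reference point is an assumption for illustration):

```python
import numpy as np

def distance_variance(contour: np.ndarray) -> float:
    """Variance of the distances from each contour point to the
    feature's centroid (used here as the reference point). A near-
    circular contour, such as the pupil, has the smallest variance."""
    centre = contour.mean(axis=0)
    d = np.linalg.norm(contour - centre, axis=1)
    return float(d.var())

# Two synthetic features: a circular contour (pupil-like) and a square
# contour (e.g. a specular-reflection artefact).
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
square = np.array(
    [[x, y] for x in (-10, 10) for y in np.linspace(-10, 10, 16)]
    + [[x, y] for y in (-10, 10) for x in np.linspace(-10, 10, 16)])

features = {"circle": circle, "square": square}
pupil = min(features, key=lambda k: distance_variance(features[k]))
```

The selected contour would then be passed to the boundary-fitting step to obtain the pupil centre.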

SHARPENING OF IMAGES IN NON-LINEAR AND LINEAR FORMATS
20220405889 · 2022-12-22

Systems, apparatuses, and methods for performing optimized sharpening of images in non-linear and linear formats are disclosed. A system includes a blur filter and a sharpener. The blur filter receives an input image or video frame and provides blurred output pixels to a sharpener unit. The sharpener unit operates in linear or non-linear space depending on the format of the input frame. The sharpener unit includes one or more optimizations to generate sharpened pixel data in an area-efficient manner. The sharpened pixel data is then driven to a display.
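The blur-then-sharpen pipeline is the classic unsharp-masking form: the sharpener adds back a scaled copy of the detail signal (input minus blurred input). A 1-D numpy sketch, assuming a 3-tap box blur and a single strength parameter (both are illustrative choices, not details from the abstract):

```python
import numpy as np

def sharpen(row: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Unsharp masking on one row of pixels: blur with a 3-tap box
    filter, then add back a scaled copy of the detail (input - blur)."""
    padded = np.pad(row, 1, mode="edge")
    blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return row + strength * (row - blurred)

# A step edge: sharpening overshoots on both sides of the edge,
# increasing the perceived contrast of the transition.
row = np.array([10.0, 10.0, 10.0, 90.0, 90.0, 90.0])
out = sharpen(row)
```

Whether this arithmetic runs on linear or non-linear (e.g. gamma-encoded) pixel values is exactly the format decision the sharpener unit makes based on the input frame.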

System for determining embedding using spatial data

Images of a hand may be used to identify users. Quality, detail, and so forth of these images may vary. An image is processed to determine a first spatial mask. A first neural network comprising many layers uses the first spatial mask at a first layer and a second spatial mask at a second layer to process images and produce an embedding vector representative of features in the image. The first spatial mask provides information about particular portions of the input image, and is determined by processing the image with an algorithm such as an orientation certainty level (OCL) algorithm. The second spatial mask is determined using unsupervised training and represents weights of particular portions of the input image as represented at the second layer. The use of the masks allows the first neural network to learn to use or disregard particular portions of the image, improving overall accuracy.
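Mechanically, applying a spatial mask at a layer is an elementwise weighting of that layer's feature map: mask values near 1 keep a region's features, values near 0 suppress them so later layers disregard low-quality regions. A numpy sketch with hypothetical shapes (a real network would use learned tensors, and the OCL-derived mask would come from the fingerprint-quality algorithm rather than be hand-set):

```python
import numpy as np

def apply_spatial_mask(feature_map: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Elementwise weighting of an H x W x C feature map by an H x W
    spatial mask, broadcast across the channel dimension."""
    return feature_map * mask[..., None]

# Hypothetical 4x4 feature map with 2 channels; the mask (e.g. an
# OCL-style quality score per location) zeroes out the right half,
# marking it as a region the network should disregard.
features = np.ones((4, 4, 2))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
weighted = apply_spatial_mask(features, mask)
```

The same operation serves both masks in the abstract; only their origin differs (algorithmic for the first layer's mask, learned without supervision for the second).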