Patent classifications
G06T5/77
Systems and methods for removing occluding objects in surgical images and/or video
The present disclosure is directed to systems and methods for removing an occluding object from a surgical image. An image capture device is inserted into a patient and captures an initial image of a surgical site inside the patient during a surgical procedure. A controller receives the image and determines that the occluding object is present in the initial image. The controller executes a removal algorithm that includes controlling the image capture device to perform a plurality of movements, controlling the image capture device to capture a plurality of images, wherein each image among the plurality of images corresponds to a movement among the plurality of movements, and applying an image filter to combine the initial image and the plurality of images to generate a processed image in which the occluding object has been removed.
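A minimal sketch of the combining step described above, assuming the captures are already registered to a common viewpoint: a per-pixel median across the stack rejects an occluder that covers any given pixel in only a minority of the captures. The function name and toy data are illustrative, not from the patent.

```python
import numpy as np

def remove_occlusion(images):
    """Combine registered captures with a per-pixel median.

    An occluder that covers a pixel in only a minority of the captures
    is rejected by the median, recovering the scene behind it. `images`
    is a list of equally shaped 2-D (grayscale) arrays.
    """
    stack = np.stack(images, axis=0)
    return np.median(stack, axis=0)

# Toy example: a 5x5 scene of value 10 with an occluder (value 255)
# appearing at a different position in each of three captures.
scene = np.full((5, 5), 10.0)
captures = []
for col in range(3):
    img = scene.copy()
    img[2, col] = 255.0          # occluder shifts between captures
    captures.append(img)

processed = remove_occlusion(captures)
print(processed[2, 0], processed[2, 1], processed[2, 2])  # → 10.0 10.0 10.0
```

In the patented system the plurality of movements supplies the viewpoint diversity that this sketch assumes away; a real pipeline would also align the captures before combining.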
Dead pixel correction for digital PET reconstruction
A PET detector array (8) comprising detector pixels acquires PET detection counts along lines of response (LORs). The counts are reconstructed to generate a reconstructed PET image (36, 46). The reconstructing is corrected for missing LORs which are missing due to dead detector pixels of the PET detector array. The correction may be by estimating counts along the missing LORs (60) by interpolating counts along LORs (66) neighboring the missing LORs. The interpolation may be iterative to handle contiguous groups of missing detector pixels. The correction may be by computing a sensitivity matrix having matrix elements corresponding to image elements (80, 82) of the reconstructed PET image. In this case, each matrix element is computed as a summation over all LORs intersecting the corresponding image element excepting the missing LORs. The computed sensitivity matrix is used in the reconstructing.
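The first correction mode above (estimating missing-LOR counts by interpolating neighboring LORs, iterated for contiguous dead-pixel groups) can be sketched on a flattened 1-D ordering of LORs. The flattening and neighbor definition are simplifying assumptions; real LOR geometry is 2-D or 3-D.

```python
import numpy as np

def fill_missing_lors(counts, missing, max_iters=10):
    """Estimate counts along missing LORs by neighbor interpolation.

    `counts` is a 1-D array of counts per LOR (hypothetical flattened
    ordering); `missing` is a boolean mask of LORs lost to dead detector
    pixels. Each pass replaces a missing value with the mean of its
    already-valid neighbors; iterating the passes fills contiguous runs
    of missing LORs from the outside in.
    """
    counts = counts.astype(float).copy()
    valid = ~missing
    for _ in range(max_iters):
        if valid.all():
            break
        for i in np.flatnonzero(~valid):
            nbrs = [j for j in (i - 1, i + 1)
                    if 0 <= j < len(counts) and valid[j]]
            if nbrs:
                counts[i] = np.mean([counts[j] for j in nbrs])
                valid[i] = True
    return counts

counts = np.array([4.0, 0.0, 0.0, 8.0, 6.0])
missing = np.array([False, True, True, False, False])
filled = fill_missing_lors(counts, missing)
```

The second correction mode (a sensitivity matrix summed over all LORs except the missing ones) would instead fold the mask into the reconstruction itself rather than repairing the data.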
IMAGE PROCESSING METHOD AND APPARATUS, FACIAL RECOGNITION METHOD AND APPARATUS, AND COMPUTER DEVICE
This application relates to an image processing method and apparatus, a facial recognition method and apparatus, a computer device, and a readable storage medium. The image processing method includes: obtaining a target image comprising an object wearing glasses; inputting the target image to a glasses-removing model comprising a plurality of sequentially connected convolution squeeze and excitation networks; obtaining feature maps of feature channels of the target image through convolution layers of the convolution squeeze and excitation networks; obtaining global information of the feature channels according to the feature maps through squeeze and excitation layers of the convolution squeeze and excitation networks, learning the global information, and generating weights of the feature channels; weighting the feature maps of the feature channels according to the weights through weighting layers of the convolution squeeze and excitation networks, respectively, and generating weighted feature maps; and generating a glasses-removed image corresponding to the target image according to the weighted feature maps through the glasses-removing model. The glasses in the image can be effectively removed using the method.
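The squeeze-excitation-weighting sequence described above can be sketched numerically: global average pooling "squeezes" each channel to one number, two small fully connected layers (hypothetical weights `w1`, `w2`) "excite" those numbers into per-channel weights, and the feature maps are rescaled channel-wise. This is the generic squeeze-and-excitation mechanism, not the trained glasses-removing model itself.

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Reweight feature channels with a squeeze-and-excitation step.

    `feature_maps` has shape (C, H, W). Squeeze: per-channel global
    average pooling yields the channels' "global information".
    Excitation: two small dense layers (ReLU then sigmoid) turn it into
    per-channel weights in (0, 1), which scale the maps channel-wise.
    """
    squeezed = feature_maps.mean(axis=(1, 2))            # (C,) global info
    hidden = np.maximum(squeezed @ w1, 0.0)              # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # sigmoid, (C,)
    return feature_maps * weights[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))       # 4 channels of 8x8 feature maps
w1 = rng.normal(size=(4, 2))         # squeeze to a 2-unit bottleneck
w2 = rng.normal(size=(2, 4))         # expand back to 4 channel weights
y = squeeze_excite(x, w1, w2)
```

In the patented model several such blocks are chained inside convolution layers and trained end to end to suppress glasses features.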
AUTOMATIC SYNTHESIS OF A CONTENT-AWARE SAMPLING REGION FOR A CONTENT-AWARE FILL
Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware sampling region for a hole-filling algorithm such as content-aware fill. Given a source image and a hole (or other target region to fill), a sampling region can be synthesized by identifying a band of pixels surrounding the hole, clustering these pixels based on one or more characteristics (e.g., color, x/y coordinates, depth, focus, etc.), passing each of the resulting clusters as foreground pixels to a segmentation algorithm, and unioning the resulting pixels to form the sampling region. The sampling region can be stored in a constraint mask and passed to a hole-filling algorithm such as content-aware fill to synthesize a fill for the hole (or other target region) from patches sampled from the synthesized sampling region.
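The first stage above, identifying the band of pixels surrounding the hole, can be sketched with a binary dilation; the clustering and segmentation stages are omitted here, so the band alone stands in for the synthesized sampling region. The helper names are illustrative.

```python
import numpy as np

def dilate(mask, iters=1):
    """Binary dilation with a 4-connected structuring element."""
    m = mask.copy()
    for _ in range(iters):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def sampling_band(hole_mask, width=2):
    """Band of pixels surrounding the hole: dilation minus the hole.

    A full content-aware pipeline would next cluster these pixels by
    color/position, grow each cluster with a segmentation step, and
    union the results into the constraint mask; the band is the seed.
    """
    return dilate(hole_mask, width) & ~hole_mask

hole = np.zeros((7, 7), dtype=bool)
hole[3, 3] = True                      # a one-pixel hole
band = sampling_band(hole, width=1)    # its 4-connected surround
```

The resulting mask would be passed as the sampling region to a patch-based hole-filling algorithm such as content-aware fill.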
Automated Image Synthesis Using a Comb Neural Network Architecture
An image synthesis system includes a computing platform having a hardware processor and a system memory storing a software code including a neural encoder and multiple neural decoders each corresponding to a respective persona. The hardware processor executes the software code to receive target image data, and source data that identifies one of the personas, and to map the target image data to its latent space representation using the neural encoder. The software code further identifies one of the neural decoders for decoding the latent space representation of the target image data based on the persona identified by the source data, uses the identified neural decoder to decode the latent space representation of the target image data as the persona identified by the source data to produce swapped image data, and blends the swapped image data with the target image data to produce one or more synthesized images.
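The routing described above (one shared encoder, one decoder per persona, selected by the source data) can be sketched with stand-in linear maps; the class and persona names are hypothetical, and real encoders/decoders would be deep networks.

```python
import numpy as np

class CombSynthesizer:
    """Shared encoder with one decoder per persona ("comb" layout).

    The encoder and decoders here are stand-in linear maps; the point
    illustrated is the routing: the source data names a persona, which
    selects the matching decoder for the shared latent code.
    """
    def __init__(self, encoder, decoders):
        self.encoder = encoder               # (D, L) matrix
        self.decoders = decoders             # persona name -> (L, D) matrix

    def swap(self, target, persona):
        latent = target @ self.encoder       # map target to latent space
        return latent @ self.decoders[persona]   # decode as chosen persona

rng = np.random.default_rng(1)
enc = rng.normal(size=(16, 4))
decs = {"persona_a": rng.normal(size=(4, 16)),
        "persona_b": rng.normal(size=(4, 16))}
model = CombSynthesizer(enc, decs)

target = rng.normal(size=(16,))
out_a = model.swap(target, "persona_a")   # same latent, different decoder
out_b = model.swap(target, "persona_b")
```

The final blending step of the patent, mixing the swapped data back with the target image, would follow the `swap` call.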
IMAGE PROCESSING DEVICE, MICROSCOPE SYSTEM, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
An image processing device is configured to: generate a histogram of pixel values of a plurality of pixels contained in an image; set a background pixel value by using a peak value of the generated histogram; set a noise range with respect to the set background pixel value; and replace the pixel values that fall in the set noise range with a single arbitrary pixel value.
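The four steps above map directly onto a short sketch: take the histogram peak as the background value, define a noise range around it, and flatten everything in that range to one value (here the background value itself is chosen as the "single arbitrary pixel value"; any constant would do). Function and parameter names are illustrative.

```python
import numpy as np

def suppress_background_noise(image, noise_halfwidth=2):
    """Replace pixels near the histogram peak with one flat value.

    The most frequent pixel value is taken as the background; pixels
    within +/- noise_halfwidth of it are treated as noise and replaced
    by the background value.
    """
    values, counts = np.unique(image, return_counts=True)
    background = values[np.argmax(counts)]           # histogram peak
    lo, hi = background - noise_halfwidth, background + noise_halfwidth
    out = image.copy()
    out[(out >= lo) & (out <= hi)] = background
    return out

img = np.array([[10, 11, 12, 50],
                [ 9, 10, 10, 60],
                [10,  8, 10, 10]])
cleaned = suppress_background_noise(img, noise_halfwidth=2)
# Background peak is 10; values 8..12 flatten to 10; 50 and 60 survive.
```

Flattening the noise range this way leaves genuine foreground signal (the 50 and 60 above) untouched while removing background speckle, e.g. in microscope images.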
IMAGE PROCESSING DEVICE, FLIGHT VEHICLE, AND COMPUTER-READABLE STORAGE MEDIUM
There is provided an image processing device including: an image acquisition unit for acquiring a first image including, as a subject, a first region captured by a first camera which captures an image from a first altitude toward a direction of an altitude lower than the first altitude, and a second image including, as a subject, the first region captured by a second camera which captures an image from a second altitude toward a direction of an altitude lower than the second altitude; and a flying object detection unit for detecting a flying object at an altitude lower than the first altitude and the second altitude, based on a difference between the first region in the first image and the first region in the second image.
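The detection principle above, comparing two downward views of the same ground region, can be sketched as a thresholded difference: registered ground content matches between the two cameras, while an object below both cameras but above the ground projects to different pixels (parallax) and produces a large difference. The registration is assumed done; names and the threshold are illustrative.

```python
import numpy as np

def detect_flying_object(view1, view2, threshold=30):
    """Flag pixels where two registered downward views disagree.

    Both images are assumed registered to the same ground region, so
    ground content cancels; a flying object at a lower altitude appears
    at different pixels in each view and survives the difference.
    Returns a boolean detection mask.
    """
    diff = np.abs(view1.astype(int) - view2.astype(int))
    return diff > threshold

ground = np.full((6, 6), 100)
view1 = ground.copy(); view1[1, 1] = 200   # object seen here by camera 1
view2 = ground.copy(); view2[1, 4] = 200   # ...and here by camera 2
mask = detect_flying_object(view1, view2)
# Both apparent positions are flagged; the ground cancels out.
```

A full system would pair the two flagged positions to estimate the object's altitude from the parallax baseline.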
Systems and methods for generating and transmitting image sequences based on sampled color information
In one embodiment, a method for generating completed frames from sparse data may access sample datasets associated with a sequence of frames, respectively. Each sample dataset may comprise incomplete pixel information of the associated frame. The system may generate, using a first machine-learning model, the sequence of frames, each having complete pixel information, based on the sample datasets. The first machine-learning model is configured to retain spatio-temporal representations associated with the generated frames. The system may then access a next sample dataset comprising incomplete pixel information of a next frame after the sequence of frames. The system may generate, using the first machine-learning model, the next frame based on the next sample dataset. The next frame has complete pixel information comprising the incomplete pixel information of the next sample dataset and additional pixel information generated based on the next sample dataset and the spatio-temporal representations retained by the model.
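A toy stand-in for the mechanism above: known (sampled) pixels are copied through, missing pixels are filled from a retained temporal state, and the state is updated after every frame. A decayed running average plays the role of the model's spatio-temporal representation; the real system uses a learned machine-learning model, and all names here are hypothetical.

```python
import numpy as np

class FrameCompleter:
    """Complete sparse frames while retaining temporal state.

    Stand-in for the learned model: sampled pixels are copied through,
    and missing pixels are filled from an exponentially decayed average
    of previously completed frames (the retained representation).
    """
    def __init__(self, shape, decay=0.5):
        self.state = np.zeros(shape)
        self.decay = decay

    def step(self, samples, mask):
        frame = np.where(mask, samples, self.state)   # fill gaps from state
        self.state = self.decay * self.state + (1 - self.decay) * frame
        return frame

fc = FrameCompleter((4, 4))
full = np.full((4, 4), 8.0)
fc.step(full, np.ones((4, 4), dtype=bool))   # a dense frame seeds the state

sparse_mask = np.zeros((4, 4), dtype=bool)
sparse_mask[0, 0] = True                     # next frame: one sampled pixel
sparse = np.zeros((4, 4)); sparse[0, 0] = 9.0
frame = fc.step(sparse, sparse_mask)         # sampled pixel kept, rest filled
```

The point of the retained state is exactly what the toy shows: the next frame is complete even though its sample dataset covered a single pixel.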
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
An image processing device 500 for detecting and correcting areas affected by a cloud in an input image is provided. The image processing device 500 includes: endmember extraction unit 501 that extracts a set of spectra of one or more endmembers from the input image; cloud spectrum acquisition unit 502 that acquires one cloud spectrum in the input image; endmember selection unit 503 that compares the endmember spectra with the cloud spectrum, removes from the set those spectra that are the same as or similar to the cloud spectrum, and outputs the resultant set of spectra as an authentic set of spectra; and an unmixing unit 504 that derives, for each pixel in the input image, one or more fractional abundances of the one or more endmembers in the authentic set of spectra and a fractional abundance of cloud, and detects one or more cloud pixels in the input image.
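The unmixing step above can be sketched with linear spectral unmixing: each pixel spectrum is modeled as a mixture of the authentic endmembers plus the cloud spectrum, abundances are solved by least squares, and a pixel is flagged as cloud when its cloud abundance dominates. Unconstrained least squares is a simplification (real unmixing constrains abundances to be nonnegative and sum to one), and all data are illustrative.

```python
import numpy as np

def detect_cloud_pixels(pixels, endmembers, cloud_spectrum, threshold=0.5):
    """Unmix each pixel and flag those dominated by the cloud spectrum.

    `pixels`: (N, B) pixel spectra over B bands; `endmembers`: (K, B)
    authentic endmember spectra (cloud-like members already removed).
    The cloud spectrum is appended as one more column of the mixing
    matrix, and abundances come from unconstrained least squares.
    """
    A = np.vstack([endmembers, cloud_spectrum]).T    # (B, K+1) mixing matrix
    abundances, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
    cloud_frac = abundances[-1]                      # cloud abundance per pixel
    return cloud_frac > threshold, abundances.T

veg = np.array([0.1, 0.8, 0.3])
soil = np.array([0.4, 0.4, 0.2])
cloud = np.array([0.9, 0.9, 0.9])
pixels = np.array([0.7 * veg + 0.3 * soil,           # clear pixel
                   0.2 * veg + 0.8 * cloud])         # cloudy pixel
is_cloud, ab = detect_cloud_pixels(pixels, np.array([veg, soil]), cloud)
```

Removing cloud-like endmembers beforehand (unit 503) is what keeps the cloud abundance from leaking into an authentic endmember during this solve.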
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure provides an image processing method and apparatus, an electronic device and a storage medium. The method includes: acquiring an initial image and a corresponding style image with brightness and chroma being separately represented; determining a first to-be-processed area in the style image, and determining a second to-be-processed area corresponding to the first to-be-processed area in the initial image; replacing a brightness component of the first to-be-processed area with a brightness component of the second to-be-processed area for the style image; filtering a chroma component of the first to-be-processed area for the style image; and generating an output image according to a processed style image.
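The two in-region operations above, replacing the style image's brightness component with the initial image's and filtering the style image's chroma, can be sketched on (H, W, 3) luma/chroma arrays. A 3x3 box blur stands in for the chroma filtering (one simple choice; the patent does not fix the filter), and all names are illustrative.

```python
import numpy as np

def style_transfer_region(style_yuv, init_yuv, region):
    """Swap in the initial image's brightness and smooth chroma.

    Channel 0 is brightness, channels 1-2 are chroma; `region` is the
    boolean mask of the to-be-processed area. Inside the region, the
    style image's brightness is replaced by the initial image's, and
    its chroma is smoothed with a 3x3 box filter.
    """
    out = style_yuv.astype(float).copy()
    out[region, 0] = init_yuv[region, 0]
    h, w = region.shape
    for c in (1, 2):
        padded = np.pad(out[..., c], 1, mode="edge")
        # 3x3 box blur via nine shifted views
        blurred = sum(padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)) / 9.0
        out[region, c] = blurred[region]
    return out

style = np.zeros((4, 4, 3)); style[..., 0] = 50.0; style[..., 1] = 20.0
init = np.zeros((4, 4, 3)); init[..., 0] = 90.0
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
result = style_transfer_region(style, init, mask)
```

Working in a space where brightness and chroma are separate is what lets the method transplant structure (brightness) from the initial image while keeping, and merely smoothing, the style image's color.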