Patent classifications
G06T5/50
Iterative synthesis of views from data of a multi-view video
Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesised view from the generated synthesis data and at least one image of a view of the multi-view video; analysing the image of the synthesised view against a synthesis performance criterion; if the criterion is met, delivering the image of the synthesised view; if not, iterating the processing phase. At a current iteration, the calculation of an image of a synthesised view includes modifying, based on the synthesis data generated in that iteration, an image of the synthesised view calculated during a processing phase preceding the current iteration.
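The iterative refine-and-test loop described in this abstract can be sketched as follows. The blend of reference views used as a target, the 0.5 update step, and the PSNR threshold are all hypothetical stand-ins for the claimed synthesis-data generation and performance analysis, which the abstract does not specify:

```python
import numpy as np

def synthesize_view(ref_views, max_iters=10, psnr_tol=30.0):
    """Iteratively refine a synthesised view from multi-view texture data.

    ref_views: list of HxW texture images of nearby views.
    psnr_tol: hypothetical performance criterion (PSNR in dB).
    """
    target = np.mean(ref_views, axis=0)        # stand-in reference blend
    synth = ref_views[0].astype(float)         # initial synthesised image
    for _ in range(max_iters):
        # "synthesis data" generated at this iteration: the residual
        residual = target - synth
        # modify the image from the preceding processing phase
        synth = synth + 0.5 * residual
        mse = np.mean((target - synth) ** 2)
        psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
        if psnr >= psnr_tol:                   # criterion met: deliver image
            break
    return synth
```

The loop keeps the previous iteration's image and only applies a correction to it, matching the claim that each iteration modifies the previously calculated synthesised view rather than recomputing it from scratch.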
SYSTEMS AND METHODS FOR LOW FIELD MR/PET IMAGING
Systems and methods for PET attenuation correction using low-field magnetic resonance (MR) image data include receiving a first set of image data and a set of low-field MR image data. An attenuation correction map is generated from the low-field MR image data using a first trained neural network. At least one attenuation correction process is applied to the first set of image data based on the attenuation correction map to generate at least one clinical attenuation-corrected image.
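A minimal sketch of the two stages, under loud assumptions: the threshold-based map below stands in for the first trained neural network, and the voxel-wise exponential scaling stands in for a real sinogram-domain correction. The attenuation coefficient (0.096 cm⁻¹ for soft tissue) and voxel size are illustrative values only:

```python
import numpy as np

def attenuation_map_from_mr(mr_image):
    # Stand-in for the trained neural network: map MR intensities to
    # linear attenuation coefficients (1/cm) by a simple threshold
    # separating soft tissue from air.
    return np.where(mr_image > 0.5, 0.096, 0.0)

def apply_attenuation_correction(pet_image, mu_map, voxel_cm=0.2):
    # Simplified voxel-wise correction: scale each PET voxel by
    # exp(mu * path length). Real scanners correct lines of response
    # in the sinogram; this is only illustrative.
    return pet_image * np.exp(mu_map * voxel_cm)
```

In practice the network would be trained on paired low-field MR and CT-derived attenuation maps; the point of the sketch is only the data flow: MR image → attenuation map → corrected PET image.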
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A method and apparatus for processing an image signal, an electronic device, and a computer-readable storage medium. The method includes: obtaining a digital image signal of a target image, the target image including object imaging corresponding to an object; identifying, from the digital image signal, a first area of the object imaging in the target image; removing the object imaging from the target image based on the first area to obtain a background image corresponding to an original background; performing image inpainting on the first area of the background image to obtain a filled image, the filled image including the original background and a perspective background connected to the original background; identifying a second area in the object imaging and removing an imaging portion corresponding to the second area from the object imaging; and superimposing the resulting adjusted object imaging on the first area.
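The mask-inpaint-composite pipeline can be sketched as below. The neighbour-mean fill is a deliberately naive stand-in for the inpainting step (a real pipeline would use a learned model or a library routine such as OpenCV's inpainting), and the boolean masks stand in for the identified first and second areas:

```python
import numpy as np

def fill_from_neighbors(img, mask):
    # Naive inpainting: repeatedly replace masked pixels with the mean
    # of their unmasked 4-neighbours until the hole is filled.
    img = img.astype(float)
    mask = mask.copy()
    while mask.any():
        progress = False
        for y, x in zip(*np.where(mask)):
            vals = []
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not mask[ny, nx]):
                    vals.append(img[ny, nx])
            if vals:
                img[y, x] = np.mean(vals)
                mask[y, x] = False
                progress = True
        if not progress:       # fully masked image: nothing to fill from
            break
    return img

def process_image(target, first_area, second_area):
    # first_area: boolean mask of the object imaging;
    # second_area: the portion of the object to strip.
    background = fill_from_neighbors(target, first_area)   # filled image
    keep = first_area & ~second_area                       # adjusted object
    return np.where(keep, target, background)              # superimpose
```

The composite keeps the object's pixels wherever the adjusted mask holds and shows the inpainted background elsewhere, including through the removed second area.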
SEMANTIC IMAGE EXTRAPOLATION METHOD AND APPARATUS
Disclosed are a semantic image extrapolation method and a semantic image extrapolation apparatus. The invention provides a technique for filling an empty region for image extension in an image by using an extrapolated segmentation map and an inpainting technique. Because the empty region for image extension contains no information, the method first generates an extrapolated segmentation map on the basis of a segmentation map obtained from the input image, and then fills the empty region with information on the basis of the extrapolated segmentation map and the input image.
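The two-stage idea can be sketched as follows. Edge-replicating the label map stands in for the learned segmentation extrapolation, and filling each empty pixel with the mean intensity of its predicted class stands in for the guided inpainting; both are hypothetical simplifications of what a trained model would do:

```python
import numpy as np

def semantic_extrapolate(img, seg, pad):
    # Stage 1: extrapolate the segmentation map into the empty border
    # by replicating the nearest known labels (np.pad edge mode).
    seg_ext = np.pad(seg, pad, mode="edge")
    # Stage 2: fill the empty region guided by the extrapolated labels.
    out = np.pad(img.astype(float), pad, mode="constant")
    empty = np.ones_like(seg_ext, dtype=bool)
    empty[pad:-pad, pad:-pad] = False          # only the border is empty
    for label in np.unique(seg_ext):
        cls = seg_ext == label
        # mean intensity of this class in the known image, used as fill
        out[cls & empty] = img[seg == label].mean()
    return out, seg_ext
```

The key point matches the abstract: the empty region is never filled blindly; the extrapolated label map decides which content each new pixel should receive.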
SYSTEM AND METHOD FOR MEASURING DISTORTED ILLUMINATION PATTERNS AND CORRECTING IMAGE ARTIFACTS IN STRUCTURED ILLUMINATION IMAGING
A method for measuring distorted illumination patterns and correcting image artifacts in structured illumination microscopy. The method includes the steps of generating an illumination pattern by interfering multiple beams, by modulating a scanning speed or an intensity of a scanning laser, or by projecting a mask onto an object; taking multiple exposures of the object with the illumination pattern shifted in phase; and applying a Fourier transform to the multiple exposures to produce multiple raw images. The multiple raw images are then used to form and solve a linear equation set to obtain multiple portions of a Fourier-space image of the object. A circular 2-D low-pass filter and a Fourier transform are then applied to the portions. A pattern distortion phase map is calculated, and the distortion is corrected by making the coefficient matrix of the linear equation set vary in phase, with the system solved in the spatial domain.
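The core linear-equation step, separating the mixed object components from phase-shifted exposures, can be sketched per pixel as below. This is the standard SIM demodulation model I_n = c0 + c+·e^{iφn} + c−·e^{−iφn}; the distortion-measurement and spatially varying coefficient matrix of the abstract are not reproduced here:

```python
import numpy as np

def demodulate_sim(raw_images, phases):
    # Solve, per pixel, the linear system
    #   I_n = c0 + c_plus * exp(i*phi_n) + c_minus * exp(-i*phi_n)
    # separating the three mixed Fourier-space components of the object.
    phases = np.asarray(phases, dtype=float)
    A = np.stack([np.ones_like(phases, dtype=complex),
                  np.exp(1j * phases),
                  np.exp(-1j * phases)], axis=1)        # N x 3 coefficients
    stack = np.stack([im.ravel() for im in raw_images]) # N x P pixel values
    comps = np.linalg.lstsq(A, stack, rcond=None)[0]    # 3 x P components
    return comps.reshape(3, *raw_images[0].shape)
```

With three exposures at phases 0°, 120°, and 240° the coefficient matrix is square and invertible, so the least-squares solve recovers the components exactly; the patented correction would additionally let `A` vary per pixel according to the measured pattern distortion phase map.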
GROUND HEIGHT-MAP BASED ELEVATION DE-NOISING
The disclosed technology provides solutions for improving sensor data accuracy and, in particular, for improving radar data by de-noising radar elevation measurements using a height-map. In some aspects, a process of the disclosed technology can include steps for receiving camera data corresponding to a first location, receiving radar data comprising a plurality of radar points, and processing the radar data to generate height-corrected radar data. In some aspects, the process can further include steps for projecting the height-corrected radar data into an image space to generate radar-image data. Systems and machine-readable media are also provided.
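The height-correction and projection steps can be sketched as below. Snapping each radar point's elevation to the ground height at its grid cell is one plausible reading of the de-noising step, and the pinhole intrinsics (`fx`, `cx`, `cy`) and axis conventions are hypothetical:

```python
import numpy as np

def height_correct(radar_points, height_map, cell=1.0):
    # radar_points: (N, 3) array of [x, y, z]; height_map: 2-D grid of
    # ground elevations indexed by (x // cell, y // cell). Noisy z values
    # are replaced by the ground height at the point's location.
    pts = radar_points.astype(float).copy()
    ix = (pts[:, 0] // cell).astype(int)
    iy = (pts[:, 1] // cell).astype(int)
    pts[:, 2] = height_map[ix, iy]
    return pts

def project_to_image(points, fx=100.0, cx=32.0, cy=32.0):
    # Pinhole projection into image space for points ahead of the
    # sensor (x > 0): u from lateral offset y, v from elevation z.
    u = fx * points[:, 1] / points[:, 0] + cx
    v = fx * points[:, 2] / points[:, 0] + cy
    return np.stack([u, v], axis=1)
```

Projecting the corrected points rather than the raw ones keeps spurious elevation noise from scattering radar returns across image rows, which is the benefit the abstract claims for the generated radar-image data.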
METHOD OF FUSING IMAGE, AND METHOD OF TRAINING IMAGE FUSION MODEL
A method of fusing an image, a method of training an image fusion model, an electronic device, and a storage medium are provided. The method of fusing the image includes: encoding a stitched image obtained by stitching a foreground image and a background image, so as to obtain a feature map; and decoding the feature map to obtain a fused image, wherein the feature map is decoded by: performing a weighting on the feature map by using an attention mechanism, so as to obtain a weighted feature map; performing a fusion on the feature map according to feature statistical data of the weighted feature map, so as to obtain a fused feature; and decoding the fused feature to obtain the fused image.
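The decode path, attention weighting followed by statistics-based fusion, can be sketched as below. The dot-product self-attention and the mean/std renormalisation (an AdaIN-style step) are hypothetical stand-ins for the trained layers; a real model would learn both:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode_with_attention(feature_map):
    # feature_map: (C, H, W) encoding of the stitched image.
    C, H, W = feature_map.shape
    flat = feature_map.reshape(C, -1)                # C x HW
    # attention mechanism: spatial positions weighted by dot-product scores
    scores = softmax(flat.T @ flat / np.sqrt(C))     # HW x HW
    weighted = (flat @ scores).reshape(C, H, W)      # weighted feature map
    # fusion according to the weighted map's feature statistics
    mu, sigma = weighted.mean(), weighted.std() + 1e-8
    fused = (feature_map - feature_map.mean()) / (feature_map.std() + 1e-8)
    return fused * sigma + mu                        # fused feature
```

Note that, as in the abstract, the statistics are taken from the *weighted* map but applied to the original feature map, so the attention output steers the fusion without replacing the encoded content.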