Patent classifications
G06T5/005
SYSTEMS AND METHODS FOR IMAGE REPROJECTION
An imaging system receives depth data (corresponding to an environment) from a depth sensor and first image data (a depiction of the environment) from an image sensor. The imaging system generates, based on the depth data, first motion vectors corresponding to a change in perspective of the depiction of the environment in the first image data. The imaging system generates, using grid inversion based on the first motion vectors, second motion vectors that indicate respective distances moved by respective pixels of the depiction of the environment in the first image data for the change in perspective. The imaging system generates second image data by modifying the first image data according to the second motion vectors. The second image data includes a second depiction of the environment from a different perspective than the first image data. Some image reprojection applications (e.g., frame interpolation) can be performed without the depth data.
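The grid-inversion step can be illustrated with a minimal numpy sketch. Forward motion vectors say where each source pixel lands under the perspective change; inverting them yields, for each destination pixel, where to sample from. All function names and the nearest-pixel rounding below are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def invert_motion_grid(forward_mv, shape):
    """Grid inversion (sketch): turn forward motion vectors (where each
    source pixel lands) into backward motion vectors (where each
    destination pixel should sample from). NaN marks unmapped pixels."""
    h, w = shape
    backward = np.full((h, w, 2), np.nan)
    for y in range(h):
        for x in range(w):
            dy, dx = forward_mv[y, x]
            ty, tx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ty < h and 0 <= tx < w:
                backward[ty, tx] = (-dy, -dx)
    return backward

def reproject(image, backward_mv):
    """Generate the second image by sampling the first image along the
    backward (inverted) motion vectors; unmapped pixels stay zero."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            mv = backward_mv[y, x]
            if not np.isnan(mv).any():
                out[y, x] = image[int(y + mv[0]), int(x + mv[1])]
    return out
```

For example, a uniform one-pixel rightward motion field shifts the image right, leaving the unmapped left column at zero.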
Generating deterministic digital image matching patches utilizing a parallel wavefront search approach and hashed random number
The present disclosure relates to systems, methods, and non-transitory computer readable media for generating deterministic enhanced digital images based on parallel determinations of pixel group offsets arranged in pixel waves. For example, the disclosed systems can utilize a parallel wave analysis to propagate through pixel groups in a pixel wave of a target region within a digital image to determine matching patch offsets for the pixel groups. The disclosed systems can further utilize the matching patch offsets to generate a deterministic enhanced digital image by filling or replacing pixels of the target region with matching pixels indicated by the matching patch offsets.
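The determinism claim can be illustrated with a toy sketch: hole pixels are processed in anti-diagonal waves, each pixel choosing among offsets propagated from already-processed neighbors plus one candidate derived from a hash of its coordinates, so the same result is produced regardless of execution order. Everything here (SHA-256 as the hash, the single-pixel cost function, all names) is an assumption for illustration, not the patented method.

```python
import hashlib
import numpy as np

def hashed_rand(seed, x, y, lo, hi):
    # deterministic pseudo-random integer in [lo, hi) from a coordinate hash
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return lo + int.from_bytes(h[:4], "big") % (hi - lo)

def wavefront_fill(img, mask, seed=0):
    """Fill masked pixels wave by wave (anti-diagonals), choosing for each
    pixel the best source offset among propagated and hashed candidates."""
    img = img.astype(float).copy()
    h, w = img.shape
    offsets = {}
    holes = sorted(((y, x) for y in range(h) for x in range(w) if mask[y, x]),
                   key=lambda p: p[0] + p[1])          # wavefront order
    for y, x in holes:
        cands = [offsets[n] for n in ((y - 1, x), (y, x - 1)) if n in offsets]
        # one deterministic "random" candidate: an absolute source location
        sy = hashed_rand(seed, x, y, 0, h)
        sx = hashed_rand(seed + 1, x, y, 0, w)
        cands.append((sy - y, sx - x))

        def cost(off):
            ty, tx = y + off[0], x + off[1]
            if not (0 <= ty < h and 0 <= tx < w) or mask[ty, tx]:
                return float("inf")        # source must be a known pixel
            ctx = [img[yy, xx] for yy, xx in ((y-1, x), (y, x-1), (y+1, x), (y, x+1))
                   if 0 <= yy < h and 0 <= xx < w and not mask[yy, xx]]
            return abs(img[ty, tx] - sum(ctx) / len(ctx)) if ctx else 0.0

        best = min(cands, key=cost)
        offsets[(y, x)] = best
        if cost(best) != float("inf"):
            img[y, x] = img[y + best[0], x + best[1]]
    return img
```

Because the only "randomness" is hashed from pixel coordinates, two runs (or a parallel run per wave) produce identical output.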
METHOD AND APPARATUS FOR COMBINING WARPED IMAGES BASED ON DEPTH DISTRIBUTION
Disclosed herein is a method for blending warped images based on depth distribution. The method includes generating images warped to a virtual viewpoint using input images, generating a blended warped image based on the warped images, and generating a final virtual viewpoint image by applying inpainting to the blended warped image.
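A minimal numpy sketch of the blend-then-inpaint pipeline, using exponential depth weighting so nearer surfaces dominate and NaN to mark disoccluded pixels (the weighting scheme and all names are illustrative assumptions, not the patent's formulation):

```python
import numpy as np

def blend_warped(warps, depths, sigma=1.0):
    """Blend views warped to the virtual viewpoint, weighting each pixel
    by its depth so nearer surfaces dominate; NaN depth marks holes."""
    warps, depths = np.stack(warps), np.stack(depths)
    valid = ~np.isnan(depths)
    w = np.where(valid, np.exp(-np.nan_to_num(depths) / sigma), 0.0)
    den = w.sum(axis=0)
    num = (np.where(valid, warps, 0.0) * w).sum(axis=0)
    return np.where(den > 0, num / np.maximum(den, 1e-12), np.nan)

def inpaint_holes(img, iters=8):
    """Fill remaining NaN holes with the mean of their valid 4-neighbors,
    iterating so larger holes shrink from the border inward."""
    img = img.copy()
    for _ in range(iters):
        holes = np.isnan(img)
        if not holes.any():
            break
        p = np.pad(img, 1, constant_values=np.nan)
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        counts = (~np.isnan(neigh)).sum(axis=0)
        sums = np.nansum(neigh, axis=0)
        fill = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        img[holes] = fill[holes]
    return img
```

Pixels visible in only one warped view take that view's value; pixels visible in none are left for the inpainting pass.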
CONTEXTUAL PRIORITY BASED MULTIMEDIA MODIFICATION
A computer-implemented method for multimedia modification is disclosed. The computer-implemented method includes classifying one or more objects detected within a user's field of view through an augmented reality environment. The computer-implemented method further includes determining a context of the user based, at least in part, on the one or more classified objects detected within the user's field of view. The computer-implemented method further includes generating a priority score for the one or more classified objects based, at least in part, on the context of the user. The computer-implemented method further includes modifying an object detected within the user's field of view based, at least in part, on the priority score of the object.
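The classify-context-score-modify chain can be sketched with a toy relevance table. The table entries, the context rule, and the 0.3 threshold are all hypothetical values invented for illustration; nothing here is the patent's actual scoring.

```python
# Hypothetical relevance table: how much each object class matters in a
# given inferred context (labels and weights are illustrative only).
RELEVANCE = {
    "driving":  {"traffic_sign": 0.9, "pedestrian": 1.0, "billboard": 0.1},
    "shopping": {"price_tag": 0.9, "billboard": 0.6, "pedestrian": 0.4},
}

def infer_context(labels):
    # toy rule: traffic-related objects in view imply a driving context
    return "driving" if {"traffic_sign", "pedestrian"} & set(labels) else "shopping"

def prioritize(objects):
    """Attach a context-dependent priority score to each classified object."""
    ctx = infer_context([o["label"] for o in objects])
    for o in objects:
        o["priority"] = RELEVANCE[ctx].get(o["label"], 0.5)
    return ctx, objects

def modify_low_priority(objects, threshold=0.3):
    """Flag objects below the priority threshold for modification
    (e.g., blurring or dimming them in the AR view)."""
    return [dict(o, modified=o["priority"] < threshold) for o in objects]
```

With a pedestrian in view the context becomes "driving", so a billboard scores low and is flagged for modification while the pedestrian is left prominent.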
DETECTION OF ARTIFACTS IN MEDICAL IMAGES
There is provided a method of re-classifying a clinically significant feature of a medical image as an artifact, comprising: feeding a target medical image, captured by a specific medical imaging sensor at a specific setup, into a machine learning model; obtaining a target feature map as an outcome of the machine learning model, wherein the target feature map includes target features classified as clinically significant; analyzing the target feature map with respect to one or more sample feature maps obtained as an outcome of the machine learning model fed a sample medical image captured by the same specific medical imaging sensor and/or at the same specific setup, wherein the sample feature map(s) include sample features classified as clinically significant; identifying target feature(s) depicted in the target feature map having attributes matching sample feature(s) depicted in the sample feature map(s); and re-classifying the identified target feature(s) as artifacts.
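The matching step can be sketched by representing features as attribute dictionaries and re-labeling a target feature whenever a feature with matching attributes recurs in sample images from the same sensor/setup. Reducing "attributes" to position and size, and the tolerance value, are simplifying assumptions for illustration.

```python
def reclassify_artifacts(target_feats, sample_feats, tol=2.0):
    """Re-label a target feature as an artifact if a sample image from
    the same sensor/setup contains a feature with matching attributes
    (here simplified to position and size within a tolerance)."""
    out = []
    for f in target_feats:
        recurs = any(
            abs(f["x"] - s["x"]) <= tol and
            abs(f["y"] - s["y"]) <= tol and
            abs(f["size"] - s["size"]) <= tol
            for s in sample_feats)
        out.append(dict(f, label="artifact" if recurs else f["label"]))
    return out
```

The intuition is that a genuine lesion is unlikely to appear at the same location with the same extent across unrelated patients, whereas a sensor- or setup-induced artifact will.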
Computer-generated image processing including volumetric scene reconstruction
An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene in various spectra and forms, along with information related to pixel color values at multiple depths of the scene, which can be processed to provide a volumetric reconstruction.
Image enhancement system and method based on generative adversarial network (GAN) model
An image enhancement system and method based on a generative adversarial network (GAN) model are provided. The image enhancement system includes an acquiring unit, a training unit and an enhancement unit. The acquiring unit is configured to acquire a first image of a driving environment captured by a camera of a first vehicle and a second image of the driving environment captured by a camera of a second vehicle. The training unit is configured to train a GAN using the first image to obtain an image enhancement model. The enhancement unit is configured to enhance the second image by inputting it into the image enhancement model.
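The three-unit decomposition can be sketched structurally. A real implementation would train a GAN in the training unit; to keep the sketch self-contained, the stand-in below merely learns a brightness offset toward the reference image, which is explicitly NOT a GAN, and every class and parameter name is an assumption.

```python
class AcquiringUnit:
    def acquire(self, first_camera, second_camera):
        # first and second images of the same driving environment
        return first_camera(), second_camera()

class TrainingUnit:
    def train(self, first_image):
        # Stand-in for GAN training (NOT a real GAN): learn a brightness
        # offset that shifts inputs toward the reference mean intensity.
        target = sum(first_image) / len(first_image)
        def model(img):
            shift = target - sum(img) / len(img)
            return [p + shift for p in img]
        return model

class EnhancementUnit:
    def enhance(self, model, second_image):
        return model(second_image)
```

The point of the sketch is the data flow: the first vehicle's image trains the model once, and the second vehicle's image is enhanced by passing it through that model.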
METHOD AND SYSTEM FOR REPLACING SCENE TEXT IN A VIDEO SEQUENCE
To replace text in a digital video image sequence, a system will process frames of the sequence to: define a region of interest (ROI) with original text in each of the frames; use the ROIs to select a reference frame from the sequence; select a target frame from the sequence; determine a transform function between the ROI of the reference frame and the ROI of the target frame; replace the original text in the ROI of the reference frame with replacement text to yield a modified reference frame ROI; and use the transform function to transform the modified reference frame ROI to a modified target frame ROI in which the original text is replaced with the replacement text. The system will then insert the modified target frame ROI into the target frame to produce a modified target frame. This process may repeat for other target frames of the sequence.
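The transform between the reference-frame ROI and a target-frame ROI can be sketched as a least-squares fit from corner correspondences. In practice a full homography is commonly used for perspective changes; the affine fit below is a simplifying assumption that keeps the sketch short, and the function names are illustrative.

```python
import numpy as np

def affine_from_points(src, dst):
    """Estimate the 2x3 affine transform mapping reference-ROI points
    to target-ROI points via least squares (sketch; real systems often
    fit a homography instead)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    p, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return p.reshape(2, 3)

def apply_affine(M, pts):
    """Map points through the affine transform (used to carry the
    modified reference-frame ROI onto each target frame)."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return pts @ M.T
```

Once the transform is known, the ROI edited in the reference frame (with the replacement text) is warped into each target frame's ROI and inserted there.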
MULTI-TASK TEXT INPAINTING OF DIGITAL IMAGES
A multi-task text inpainting system receives a digital image and identifies a region of interest of the image that contains original text. The system uses a machine learning model to determine, in parallel: a foreground image that includes the original text; a background image that omits the original text; and a binary mask that distinguishes foreground pixels from background pixels. The system receives a target mask that contains replacement text. The system then applies the target mask to blend the background image with the foreground image and yield a modified digital image that includes the replacement text and omits the original text.
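The final blending step is a standard mask composite, sketched below (the function name is illustrative; the patent's model produces the three inputs, which are simply given here):

```python
import numpy as np

def blend_with_mask(foreground, background, mask):
    """Composite per the (binary or soft) mask: 1 keeps the foreground
    (replacement-text) pixel, 0 keeps the inpainted background pixel."""
    m = mask.astype(float)
    return m * foreground + (1.0 - m) * background
```

With the target mask carrying the replacement text's pixels, the composite shows the new text over the text-free background, so the original text disappears entirely.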
Systems and methods for determining ring artifact
Embodiments of the present disclosure provide methods and systems for determining a ring artifact. The method for determining the ring artifact may include: obtaining an original image; mapping a plurality of pixels in the original image to a polar coordinate image; determining a protection region in the polar coordinate image; obtaining a smooth image by smoothing at least one region in the polar coordinate image other than the protection region; generating a residual image based on the polar coordinate image and the smooth image; and determining a location of the ring artifact in the original image based on the residual image. In the present disclosure, the original image may be mapped to a trapezoidal region or a triangular region in the polar coordinate image, and the gradient angle image may be used for image processing, which may reduce the influence of noise. An accurate location of the ring artifact may be determined, and information for imaging device detection and air correction may be provided.
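The core idea can be sketched in numpy: a ring at a fixed radius becomes a horizontal line in polar space, so smoothing along the radius axis removes it and the residual highlights the ring rows. The nearest-neighbor polar sampling, the median filter width, and the thresholding are illustrative assumptions, and the protection region is omitted for brevity.

```python
import numpy as np

def to_polar(img, n_r, n_t):
    """Resample a square image onto a (radius, angle) grid by
    nearest-neighbor lookup around the image center (sketch)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rmax = min(cy, cx)
    polar = np.zeros((n_r, n_t))
    for i in range(n_r):
        r = rmax * i / (n_r - 1)
        for j in range(n_t):
            t = 2 * np.pi * j / n_t
            polar[i, j] = img[int(round(cy + r * np.sin(t))),
                              int(round(cx + r * np.cos(t)))]
    return polar

def ring_residual(polar, k=5):
    """Median-smooth along the radius axis; a ring (a narrow radius band,
    i.e. a few rows in polar space) is removed by the smoothing and
    therefore stands out in the residual."""
    pad = k // 2
    padded = np.pad(polar, ((pad, pad), (0, 0)), mode="edge")
    windows = np.stack([padded[i:i + polar.shape[0]] for i in range(k)])
    return polar - np.median(windows, axis=0)

def ring_rows(residual, thresh):
    # rows whose mean |residual| over all angles is high are ring candidates
    return np.where(np.abs(residual).mean(axis=1) > thresh)[0]
```

Averaging the residual over the full angle axis is what suppresses noise: isolated bright pixels contribute to only a few columns, while a true ring contributes at every angle of its radius row.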