G06T5/005

METHOD AND DEVICE FOR GENERATING THREE-DIMENSIONAL IMAGE BY USING PLURALITY OF CAMERAS

A method, performed by an electronic device, of generating a three-dimensional (3D) image, includes: obtaining a first image through a first camera of the electronic device and obtaining a second image through a second camera of the electronic device; obtaining depth information of a pixel included in the first image; identifying, based on the depth information, a first layer image and a second layer image from the first image; inpainting, based on the first image and the second image, at least a part of the first layer image; and generating, based on the second layer image and the inpainted first layer image, the 3D image including a plurality of layers.
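The layer-split-and-inpaint flow above can be sketched in a few lines. This is a toy illustration, not the patented algorithm: the depth threshold, the `HOLE` marker, and the fill-from-the-second-view rule are all illustrative assumptions.

```python
HOLE = None  # marker for pixels occluded in a given layer

def split_layers(image, depth, threshold):
    """Near layer keeps pixels closer than the threshold; far layer the rest."""
    near = [[p if d < threshold else HOLE for p, d in zip(pr, dr)]
            for pr, dr in zip(image, depth)]
    far = [[p if d >= threshold else HOLE for p, d in zip(pr, dr)]
           for pr, dr in zip(image, depth)]
    return near, far

def inpaint_from_second_view(layer, second_image):
    """Fill holes with the co-located pixel from the second camera's image."""
    return [[s if p is HOLE else p for p, s in zip(lr, sr)]
            for lr, sr in zip(layer, second_image)]

first = [[10, 20], [30, 40]]       # first camera
second = [[11, 21], [31, 41]]      # second camera, slightly different view
depth = [[1, 5], [1, 5]]           # per-pixel depth of the first image
near, far = split_layers(first, depth, threshold=3)
filled_far = inpaint_from_second_view(far, second)  # far layer with holes filled
```

A real implementation would warp the second view into the first camera's frame before sampling; the direct pixel copy here stands in for that step.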

ELECTRONIC APPARATUS, AND METHOD FOR DISPLAYING IMAGE ON DISPLAY DEVICE

Disclosed are an electronic apparatus, and a method for displaying an image on a display device. The electronic apparatus comprises: a display device; an image acquisition device, which is configured to acquire a surrounding image of the display device; and a processor, which is configured to determine a background image of the display device according to the surrounding image, acquire a target range and a target object in the background image, determine a target image according to the background image, the target range and the target object, and control the display device to display the target image, wherein the target image does not include the target object.
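The "display the background without the object" idea can be sketched as a dictionary filter. The blank value and the set-based ranges are hypothetical stand-ins for whatever segmentation the device actually uses.

```python
BLANK = 0  # assumed placeholder colour for removed pixels

def build_target_image(background, target_range, target_object):
    """Keep background pixels inside the target range; blank the pixels
    belonging to the target object so it does not appear on the display."""
    return {
        pos: (BLANK if pos in target_object else colour)
        for pos, colour in background.items()
        if pos in target_range
    }

background = {(0, 0): 5, (0, 1): 6, (1, 0): 7}
target_range = {(0, 0), (0, 1)}
target_object = {(0, 1)}
target_image = build_target_image(background, target_range, target_object)
```

In practice the blanked pixels would be filled from surrounding background content rather than set to a constant.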

Modification of objects in film

A computer-implemented method of processing video data comprising a first sequence of image frames containing a first instance of an object. The method includes isolating said first instance of the object within the first sequence of image frames, determining, using the isolated first instance of the object, first parameter values for a synthetic model of the object, modifying the first parameter values for the synthetic model of the object, rendering a modified first instance of the object using a trained machine learning model and the modified first parameter values for the synthetic model of the object, and replacing at least part of the first instance of the object within the first sequence of image frames with a corresponding at least part of the modified first instance of the object.
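The isolate → fit → modify → render → replace pipeline can be shown with a deliberately tiny stand-in: here the "synthetic model" is just a mean-intensity parameter and the "trained model" a flat-fill renderer, neither of which is specified by the abstract.

```python
def isolate(frame, mask):
    """Pixels of the object instance selected by the mask."""
    return [p for p, m in zip(frame, mask) if m]

def fit_params(instance):
    """Toy 'synthetic model' parameter: mean intensity of the instance."""
    return {"mean": sum(instance) / len(instance)}

def render(params):
    """Stand-in for the trained rendering model: flat fill at the mean."""
    return params["mean"]

def replace(frame, mask, value):
    """Swap the object's pixels for the re-rendered ones."""
    return [value if m else p for p, m in zip(frame, mask)]

frame = [10, 200, 210, 20]
mask = [False, True, True, False]      # where the object instance sits
params = fit_params(isolate(frame, mask))
params["mean"] += 10                   # modify the model parameters
out = replace(frame, mask, render(params))
```

The same structure holds when the parameters are, say, facial-expression coefficients and the renderer is a neural network; only the two middle stages change.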

DEPTH ACQUISITION DEVICE AND DEPTH ACQUISITION METHOD

A depth acquisition device includes a memory and a processor. The processor performs: acquiring timing information indicating a timing at which a light source irradiates a subject with infrared light; acquiring, from the memory, an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory, a visible light image generated by imaging a substantially same scene as the scene of the infrared light image, with visible light from a substantially same viewpoint as a viewpoint of imaging the infrared light image at a substantially same time as a time of imaging the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
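A minimal 1-D sketch of the flare-handling idea, assuming flare shows up as saturated infrared pixels and that the visible image can guide the repair; the saturation threshold and nearest-intensity donor rule are illustrative choices, not the claimed estimator.

```python
def detect_flare(ir, saturation=250):
    """Flag IR pixels at or above the saturation level as flare."""
    return [v >= saturation for v in ir]

def estimate_flare_depth(depth, visible, flare):
    """For each flare pixel, borrow the depth of the non-flare pixel whose
    visible intensity is most similar (a crude visible-guided fill)."""
    donors = [(visible[i], depth[i]) for i, f in enumerate(flare) if not f]
    return [
        min(donors, key=lambda d: abs(d[0] - visible[i]))[1] if f else depth[i]
        for i, f in enumerate(flare)
    ]

ir = [100, 255, 120]
depth = [2.0, 9.9, 3.0]        # 9.9 m reading is corrupted by flare
visible = [50, 52, 80]         # visible pixel 1 resembles pixel 0
flare = detect_flare(ir)
fixed = estimate_flare_depth(depth, visible, flare)
```

The co-located, co-timed capture of the two images is what makes this cross-modal borrowing legitimate.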

Systems, Methods, and Media for Generating Visualization of Physical Environment in Artificial Reality

In one embodiment, a computing system determines one or more depth measurements associated with a first physical object. The system captures an image including image data associated with the first physical object. The system identifies and associates a plurality of first pixels with a first representative depth value based on the one or more depth measurements. The system determines, for an output pixel of an output image, that (1) a portion of a virtual object is visible from a viewpoint and (2) the output pixel overlaps with a portion of the first physical object. The system determines that the portion of the first physical object is associated with the plurality of first pixels and renders the output image from the viewpoint. Occlusion at the output pixel is determined based on a comparison between the first representative depth value and a depth value associated with the portion of the virtual object.
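The occlusion decision at each output pixel reduces to one depth comparison against the layer's representative depth, rather than a per-pixel depth test. A hedged sketch, with names of my own choosing:

```python
def composite(virtual_color, virtual_depth, passthrough_color, representative_depth):
    """Per-pixel occlusion: the physical surface wins when its layer's
    shared representative depth is closer than the virtual fragment."""
    if representative_depth < virtual_depth:
        return passthrough_color
    return virtual_color

# A virtual cube fragment at 2.0 m versus a hand whose pixels all share a
# representative depth of 0.6 m: the hand occludes the cube.
front = composite("cube", 2.0, "hand", 0.6)
# The same cube against a wall at 4.0 m: the cube is drawn.
behind = composite("cube", 2.0, "wall", 4.0)
```

Grouping many pixels under one representative depth is what makes this robust to sparse or noisy depth measurements of the physical object.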

Image Content Removal Method and Related Apparatus
20230217097 · 2023-07-06

This application discloses an image content removal method, and relates to the field of computer vision. The method includes: enabling a camera application; displaying a photographing preview interface of the camera application; obtaining a first preview picture and a first reference frame picture that are captured by a camera; determining a first object in the first preview picture as a to-be-removed object; and determining to-be-filled content in the first preview picture based on the first reference frame picture, where the to-be-filled content is image content of a second object that is shielded by the first object in the first preview picture. The terminal generates a first restored picture based on the to-be-filled content and the first preview picture. In this way, unwanted image content can be removed from a picture or video shot by the user.
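The core of the restore step is a masked copy from the reference frame, which saw the background that the unwanted object is shielding. A 1-D sketch under that assumption; alignment between the two frames is taken for granted here:

```python
def restore(preview, reference, removal_mask):
    """Fill pixels of the to-be-removed object from the reference frame."""
    return [r if m else p for p, r, m in zip(preview, reference, removal_mask)]

preview = [1, 9, 9, 4]        # the 9s belong to the unwanted first object
reference = [1, 2, 3, 4]      # same scene, object absent or displaced
mask = [False, True, True, False]
restored = restore(preview, reference, mask)
```

Real frames would first need registration (the camera moves between the reference and preview captures) before the per-pixel copy is valid.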

Image modification using detected symmetry

Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map. The image modification module produces a manipulated image by manipulating the original image under global symmetry constraints imposed by the global symmetry association map.
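The voting idea behind turning many local correspondences into one global symmetry can be shown in a drastically reduced form: 1-D pixels, translation-only, each matching pair voting for its offset. This is only an analogy for the homography voting the abstract describes.

```python
from collections import Counter

def dominant_translation(pixels):
    """Each pair of equal pixels votes for its offset; the winning offset
    plays the role of the 'global symmetry' (translation-only, 1-D)."""
    votes = Counter(j - i
                    for i in range(len(pixels))
                    for j in range(i + 1, len(pixels))
                    if pixels[i] == pixels[j])
    return votes.most_common(1)[0][0] if votes else None

period = dominant_translation([1, 2, 1, 2, 1, 2])
```

In the full method the votes are candidate homographies (so rotations, reflections, and scalings also compete), and the winner constrains subsequent edits via the association map.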

TRAINING APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
20230215144 · 2023-07-06

The training apparatus (2000) performs first-phase and second-phase training of a discriminator (10). The discriminator (10) acquires a ground-view image and an aerial-view image, and determines whether the acquired ground-view image matches the acquired aerial-view image. The first-phase training is performed using a ground-view image and a first-level negative-example aerial-view image, which includes scenery of a different type from the scenery in the ground-view image. The second-phase training is performed using the ground-view image and a second-level negative-example aerial-view image, which includes scenery of the same type as the scenery in the ground-view image.
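The two-phase scheme is an easy-then-hard negative curriculum, and the pair construction can be sketched directly. The `id`/`type` fields are hypothetical labels standing in for location identity and scenery type:

```python
def training_pairs(ground, aerials, phase):
    """Phase 1 pairs the ground view with easy negatives (different scenery
    type); phase 2 with hard negatives (same type, wrong location)."""
    want_same_type = (phase == 2)
    return [
        (ground["id"], a["id"])
        for a in aerials
        if a["id"] != ground["id"]
        and (a["type"] == ground["type"]) == want_same_type
    ]

ground = {"id": "g1", "type": "urban"}
aerials = [{"id": "g1", "type": "urban"},   # the true match (positive)
           {"id": "g2", "type": "rural"},   # easy negative: wrong scenery type
           {"id": "g3", "type": "urban"}]   # hard negative: same scenery type
easy = training_pairs(ground, aerials, phase=1)
hard = training_pairs(ground, aerials, phase=2)
```

Starting with cross-type negatives lets the discriminator first learn coarse scenery cues before phase 2 forces it onto finer, within-type distinctions.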

IMAGE INPAINTING BASED ON MULTIPLE IMAGE TRANSFORMATIONS

Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
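The merge-or-select step can be sketched per pixel: inside the hole, prefer the geometrically warped pixel where the warp landed on real content, and fall back to the learned inpainting candidate elsewhere. The validity mask and the hard either/or rule are simplifying assumptions (the abstract allows a learned blend):

```python
def merge_inpaint(target, hole, warped, warp_valid, candidate):
    """Fill hole pixels from the warped image where the warp is valid,
    otherwise from the inpainting candidate; keep target pixels elsewhere."""
    return [
        (w if v else c) if h else t
        for t, h, w, v, c in zip(target, hole, warped, warp_valid, candidate)
    ]

target = [1, 0, 0, 4]                  # 0s mark the missing region
hole = [False, True, True, False]
warped = [9, 2, 8, 9]                  # another photo warped into alignment
warp_valid = [True, True, False, True]  # warp misses one hole pixel
candidate = [7, 7, 3, 7]               # model-generated fill
out = merge_inpaint(target, hole, warped, warp_valid, candidate)
```

Warped content, when available, is preferred because it is real scene texture; the candidate covers the pixels the transformation cannot reach.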

Boundary-aware object removal and content fill
11551337 · 2023-01-10

Systems and methods for removing objects from images are disclosed. An image processing application identifies a boundary of each object of a set of objects in an image. The image processing application identifies a completed boundary for each object of the set of objects by providing the object to a trained model. The image processing application determines a set of masks. Each mask corresponds to an object of the set of objects and represents a region of the image defined by an intersection of the boundary of the object and the boundary of a target object to be removed from the image. The image processing application updates each mask by separately performing content filling on the corresponding region. The image processing application creates an output image by merging each of the updated masks with portions of the image.
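The per-object mask construction and separate fill can be sketched with pixel sets; completed boundaries become regions here, and the content-fill is a named placeholder rather than a real synthesis step:

```python
def region_masks(object_regions, target_region):
    """One mask per remaining object: its completed region intersected
    with the region of the object being removed."""
    return {name: region & target_region
            for name, region in object_regions.items()
            if region & target_region}

def fill_and_merge(image, masks, filler):
    """Content-fill each mask separately, then merge into the output."""
    out = dict(image)
    for name, region in masks.items():
        for px in region:
            out[px] = filler(name)
    return out

image = {(0, 0): "X", (0, 1): "X", (1, 0): "bg"}   # "X" = target object pixels
objects = {"table": {(0, 1), (1, 1)}}               # table partly behind target
target = {(0, 0), (0, 1)}
masks = region_masks(objects, target)
out = fill_and_merge(image, masks, lambda name: f"{name}-fill")
```

Filling each intersection with content from its own object is what makes the method boundary-aware: the table region is completed with table texture rather than a generic background fill.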