G06T2200/21

DYNAMIC RANGE EXPANSION HIGHLIGHT INFORMATION RESTORATION
20170316553 · 2017-11-02 ·

Systems and methods are provided for generating high dynamic range (HDR) content using existing standard dynamic range (SDR) content. HDR content is generated by restoring lost detail in the SDR content using source content from which the SDR content was derived. The HDR content generated from the SDR content also preserves any color characteristics, such as color grading, of the SDR content in conjunction with the restoration of lost detail.
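The restoration idea can be sketched as follows, assuming float RGB frames in which the SDR values were clipped near 1.0. The function name, clip threshold, and luminance-gain heuristic are all illustrative assumptions, not the patented method:

```python
import numpy as np

def restore_highlights(sdr, source, clip_thresh=0.95):
    """Hypothetical sketch: rebuild clipped highlight detail in an SDR frame
    from the higher-range source it was derived from, while keeping the SDR
    color grade elsewhere. Names and thresholds are illustrative."""
    sdr = sdr.astype(np.float64)
    source = source.astype(np.float64)
    # Per-pixel luminance (Rec. 709 weights) locates clipped regions.
    w = np.array([0.2126, 0.7152, 0.0722])
    luma_sdr = sdr @ w
    clipped = luma_sdr >= clip_thresh          # detail lost here
    # In clipped regions, reuse the source's relative highlight energy
    # while preserving the SDR chromaticity (i.e. the grade).
    luma_src = source @ w
    gain = np.where(luma_sdr > 0, luma_src / np.maximum(luma_sdr, 1e-6), 1.0)
    hdr = sdr.copy()
    hdr[clipped] = sdr[clipped] * gain[clipped, None]
    return hdr
```

Unclipped pixels pass through untouched, so the SDR grade survives; only regions that lost detail borrow energy from the source.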

APPARATUS AND METHOD FOR APPLYING A TWO-DIMENSIONAL IMAGE ON A THREE-DIMENSIONAL MODEL
20170309058 · 2017-10-26 ·

A method and apparatus for applying a two-dimensional image on a three-dimensional model composed of a polygonal mesh. The method comprises: generating an adjacency structure for all triangles within the mesh; identifying the triangle within the mesh containing the desired centre point; calculating spatial distances between that triangle's three vertices and the desired centre, and checking each triangle edge to see whether the distances indicate an intersection; if an intersection is detected, adding the triangle to a list; iteratively processing all triangles in the list by calculating the spatial data of the single unknown vertex and checking the two remaining edges of the triangle to see whether the calculated distances indicate an intersection, and if an intersection occurs, adding the new triangle to the list; transforming into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
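The adjacency structure and the iterative growth from the centre triangle can be sketched as a shared-edge adjacency map plus a flood fill. Here `intersects` stands in for the edge-distance intersection test, and all names are illustrative:

```python
from collections import defaultdict, deque

def triangle_adjacency(triangles):
    """Map each triangle index to its edge-neighbours (shared-edge adjacency).
    `triangles` is a list of 3-tuples of vertex indices."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[frozenset(e)].append(t)
    adj = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            adj[t].update(x for x in tris if x != t)
    return adj

def grow_region(triangles, seed, intersects):
    """Flood-fill outward from the seed (centre) triangle, adding a
    neighbour only when `intersects(tri)` reports that the projected
    image region crosses into it."""
    adj = triangle_adjacency(triangles)
    visited, queue = {seed}, deque([seed])
    while queue:
        t = queue.popleft()
        for n in adj[t]:
            if n not in visited and intersects(n):
                visited.add(n)
                queue.append(n)
    return visited
```

The returned triangle set is what would then be transformed into UV-coordinates in a later step.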

Apparatus and method for low dynamic range and high dynamic range image alignment

An imaging system includes an image sensor to capture a sequence of images, including a low dynamic range (LDR) image and a high dynamic range (HDR) image, and a processor coupled to the image sensor to receive the LDR image and the HDR image. The processor receives instructions to perform operations to segment the LDR image and HDR image into a plurality of segments. The processor also scans the plurality of LDR and HDR image segments to find a first image segment in the plurality of LDR image segments and a second image segment in the plurality of HDR image segments. The processor then finds interest points in the first and second image segments, and determines an alignment parameter based on matched interest points. The LDR image and the HDR image are combined in accordance with the alignment parameter.
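As one plausible way to turn matched segment content into an alignment parameter, a segment pair can be registered by phase correlation. This is a simple translation estimate standing in for the interest-point matching the abstract describes, not the patent's method:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation between two equally sized
    grayscale segments via phase correlation. Returns the shift of `b`
    relative to `a` (illustrative alignment parameter)."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the segment to negative offsets.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```

The resulting offset could then drive the combination of the LDR and HDR frames.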

Layered Lightfields for Occlusion Handling
20170337730 · 2017-11-23 ·

For occlusion handling in lightfield rendering, layered lightfields are created. Rather than use one lightfield for one camera position and orientation, multiple lightfields representing different depths, or surfaces at different depths, relative to that camera position and orientation are created. By using layered lightfields for the various camera positions and orientations, the camera may be located within the convex hull of the scanned object. The depths of the layers are used to select the lightfields for a given camera position and orientation.
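Layer selection by depth might look like the following minimal sketch, assuming each lightfield layer is keyed by a representative depth and the query picks the nearest one; both assumptions are illustrative:

```python
import bisect

def select_layer(layer_depths, query_depth):
    """Pick the lightfield layer whose representative depth is nearest to
    the queried depth along the view ray. `layer_depths` must be sorted
    ascending (illustrative sketch, not the patent's selection rule)."""
    i = bisect.bisect_left(layer_depths, query_depth)
    # Only the neighbours around the insertion point can be nearest.
    candidates = layer_depths[max(0, i - 1):i + 1] or layer_depths[-1:]
    return min(candidates, key=lambda d: abs(d - query_depth))
```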

Method and system for processing an image

A method of processing an image is disclosed. The method comprises decomposing the image into a plurality of channels, each being characterized by a different depth-of-field, and accessing a computer readable medium storing an in-focus dictionary defined over a plurality of dictionary atoms, and an out-of-focus dictionary defined over a plurality of sets of dictionary atoms, each set corresponding to a different out-of-focus condition. The method also comprises computing one or more sparse representations of the decomposed image over the dictionaries.
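Computing a sparse representation over a dictionary is commonly done with a greedy pursuit. The sketch below uses orthogonal matching pursuit, a standard solver but not necessarily the one the patent employs:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of signal y over
    dictionary D (columns = atoms). Illustrative solver for the sparse
    representations mentioned in the abstract."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In the abstract's setting, `D` would be the concatenated in-focus and out-of-focus dictionaries and `y` a decomposed channel of the image.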

DEPTH-ASSIGNED CONTENT FOR DEPTH-ENHANCED VIRTUAL REALITY IMAGES

According to various embodiments of the invention, a system and method are provided for enabling interaction with, manipulation of, and control of depth-assigned content in depth-enhanced pictures, such as virtual reality images. Depth-assigned content can be assigned to a specified depth value. When a depth-enhanced picture is refocused at a focus depth substantially different from the specified assigned depth value, the depth-assigned content may be omitted, grayed out, blurred, or otherwise visually distinguished. In this manner, content associated with an in-focus image element can be visually distinguished from content associated with an out-of-focus image element. For example, in at least one embodiment, depth-assigned content is visible only when an image element associated with the content is in focus (or nearly in focus). According to various embodiments of the invention, many different types of interactions are facilitated among depth-assigned content, depth-enhanced virtual reality images, and other content.
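The focus-dependent visibility rule could be sketched as an opacity function of the distance between the content's assigned depth and the current focus depth. The tolerance band and linear falloff below are assumptions, not the patent's rule:

```python
def content_opacity(content_depth, focus_depth, tolerance=0.1, falloff=0.5):
    """Illustrative visibility rule: fully opaque when the assigned depth is
    near the current focus depth, fading out (toward grayed/blurred/hidden)
    as the refocus moves away. Parameter names and values are assumptions."""
    d = abs(content_depth - focus_depth)
    if d <= tolerance:
        return 1.0          # in focus (or nearly so): fully visible
    # Linear falloff beyond the in-focus tolerance band.
    return max(0.0, 1.0 - (d - tolerance) / falloff)
```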

Method for estimating a depth for pixels, corresponding device and computer program product

A method is proposed for estimating a depth for pixels in a matrix of M images. Such method comprises, at least for one set of N images among the M images, where 2&lt;N≤M, a process comprising: determining depth maps for the images in the set of N images, delivering a set of N depth maps; and, for at least one current pixel for which a depth has not yet been estimated, deciding whether a candidate depth corresponding to a depth value in the set of N depth maps is consistent with the other depth map(s) of the set of N depth maps, and selecting the candidate depth as the estimated depth for the current pixel if the candidate depth is decided as consistent. The process is applied iteratively with a new N value which is lower than the N value used in the previous iteration of the process.
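The per-pixel consistency decision might be sketched as follows, with a scalar tolerance standing in for the unspecified consistency criterion; pixels with no consistent candidate are left for a later iteration with a smaller N:

```python
def consistent_depth(candidate, other_depths, tol=0.05):
    """Accept a candidate depth only if it agrees (within tol) with every
    other depth map's value at that pixel -- a simplified reading of the
    consistency decision; the tolerance is illustrative."""
    return all(abs(candidate - d) <= tol for d in other_depths)

def estimate_pixel_depth(depth_values, tol=0.05):
    """Try each map's value at this pixel as the candidate; return the first
    one consistent with all the others, else None (deferred to the next
    iteration of the process with a lower N)."""
    for i, cand in enumerate(depth_values):
        others = depth_values[:i] + depth_values[i + 1:]
        if consistent_depth(cand, others, tol):
            return cand
    return None
```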

DEPTH REFINEMENT METHOD AND SYSTEM OF SPARSE DEPTH IMAGE IN MULTI-APERTURE CAMERA
20170294021 · 2017-10-12 ·

Disclosed are a depth refinement system and a method for a sparse depth image in a multi-aperture camera. The method includes providing a sparse depth map generated based on an image obtained through each of a plurality of apertures included in the multi-aperture camera, wherein the sparse depth map includes depths of pixels included in the image, and performing a depth noise reduction (DNR) based on the sparse depth map.
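A generic depth-noise-reduction pass over a sparse map, here a median filter restricted to valid samples, gives a feel for the DNR step; it is a stand-in, not the patent's exact filter:

```python
import numpy as np

def refine_sparse_depth(depth, valid, radius=1):
    """Toy DNR pass: replace each valid depth with the median of the valid
    depths in its (2*radius+1)^2 neighbourhood. `valid` marks pixels that
    actually carry a depth in the sparse map. Illustrative only."""
    h, w = depth.shape
    out = depth.copy()
    for y in range(h):
        for x in range(w):
            if not valid[y, x]:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Median over only the valid samples in the window.
            patch = depth[y0:y1, x0:x1][valid[y0:y1, x0:x1]]
            out[y, x] = np.median(patch)
    return out
```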

Image signal processor for generating depth map from phase detection pixels and device having the same

An image signal processor including a CPU is provided. The CPU receives image data and positional information of phase detection pixels from an imaging device, extracts first phase detection pixel data and second phase detection pixel data from the image data using the positional information of phase detection pixels, computes first phase graphs from the first phase detection pixel data based upon moving a first window, computes second phase graphs from the second phase detection pixel data based upon moving a second window, computes disparities of the phase detection pixels using the first phase graphs and the second phase graphs, and generates a depth map using the disparities.
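Disparity from a pair of phase-detection signals can be sketched as a sliding comparison of the two "phase graphs". Sum-of-absolute-differences matching and the shift range are assumptions; depth would then follow from disparity via the optics:

```python
import numpy as np

def pd_disparity(left, right, max_shift=4):
    """Estimate the disparity between the first and second phase-detection
    signals by sliding one over the other and taking the shift with the
    minimum mean absolute difference (illustrative matcher)."""
    best_shift, best_cost = 0, np.inf
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)       # overlapping index range
        cost = np.abs(left[lo:hi] - right[lo - s:hi - s]).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```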

Systems and methods for quadrilateral mesh generation

Systems and methods are provided for quadrilateral mesh generation. The system includes one or more data processors and a non-transitory computer-readable storage medium. The data processors are configured to: receive a geometric structure representing a physical object; determine a directional field; determine a size field; select one or more locations from a region of the geometric structure, the locations being associated with local data; and generate one or more quadrilateral mesh elements based at least in part on the directional field, the size field, and the local data. The non-transitory computer-readable storage medium is configured to store data related to the structure, data related to the directional field, data related to the size field, and the local data.
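The element-generation step can be illustrated on the simplest case, a uniform grid; a real field-guided mesher would orient and size the quads using the directional and size fields, which this sketch omits:

```python
def quad_grid(nx, ny):
    """Minimal illustration of emitting quadrilateral elements: vertices on
    an (nx+1) x (ny+1) integer grid, one quad per cell, with vertex indices
    listed counter-clockwise. Not the field-guided generation itself."""
    verts = [(i, j) for j in range(ny + 1) for i in range(nx + 1)]
    quads = []
    for j in range(ny):
        for i in range(nx):
            v = j * (nx + 1) + i                 # bottom-left corner index
            quads.append((v, v + 1, v + nx + 2, v + nx + 1))
    return verts, quads
```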