H04N13/02

Pixel geometries for spatially multiplexed stereo 3D displays
20170371192 · 2017-12-28 ·

A 3D image pixel in a spatially multiplexed stereo 3D display includes a first left-eye subpixel and a second left-eye subpixel that are both driven when displaying the left-eye image. The 3D image pixel also includes a first right-eye subpixel and a second right-eye subpixel that are both driven when displaying the right-eye image. The subpixels may all have a square shape. Single color emitters in the subpixels of the same eye may be driven by the same electronics. A 3D image pixel in a second spatially multiplexed stereo 3D display includes a left-eye pixel driven when displaying the left-eye image and a right-eye pixel driven when displaying the right-eye image. The pixels may all have a rectangular shape, and the horizontal measurement of the pixels may be greater than the vertical measurement of the pixels.

AUDIENCE SEGMENTATION BASED ON VIEWING ANGLE OF A USER VIEWING A VIDEO OF A MULTI-ANGLE VIEWING ENVIRONMENT

Audience segmentation can be based on the viewing angle of a user watching a video of a multi-angle viewing environment. During playback, a sequence of the user-controlled viewing angles of the video is recorded; each entry in the sequence represents the user's viewing angle at a given point in time. Based on the sequences of several users, a predominant sequence of viewing angles of the video is determined, and one or more audience segment tags are assigned to the predominant sequence. During subsequent playbacks of the video, the sequences of user-controlled viewing angles are again recorded. Each subsequent user's recorded sequence is compared to the predominant sequence of viewing angles of the video, and that user is assigned to an audience segment based on the comparison and the corresponding audience segment tags.
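The comparison step of this abstract can be illustrated with a minimal sketch. The angular-similarity measure, the 30-degree threshold, and the tag names below are all illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: compare a user's recorded viewing-angle sequence to the
# predominant sequence and assign its segment tags when the sequences agree.
# The similarity metric, threshold, and tag names are assumptions.

def mean_angular_difference(seq_a, seq_b):
    """Average absolute difference (degrees) between two equal-length
    sequences of viewing angles sampled at the same time points."""
    diffs = []
    for a, b in zip(seq_a, seq_b):
        d = abs(a - b) % 360
        diffs.append(min(d, 360 - d))   # shortest way around the circle
    return sum(diffs) / len(diffs)

def assign_segment(user_seq, predominant_seq, tags, threshold=30.0):
    """Assign the predominant sequence's audience-segment tags to the user
    if the user's sequence is within `threshold` degrees on average."""
    if mean_angular_difference(user_seq, predominant_seq) <= threshold:
        return tags
    return []

predominant = [0, 10, 20, 30, 40]    # viewing angles (degrees) over time
viewer = [5, 12, 18, 33, 41]         # a subsequent viewer's recorded angles
print(assign_segment(viewer, predominant, ["action-focused"]))
```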

SYSTEMS AND METHODS FOR SCANNING THREE-DIMENSIONAL OBJECTS

A method for computing a three-dimensional (3D) model of an object includes: receiving, by a processor, a first chunk including a 3D model of a first portion of the object, the first chunk being generated from a plurality of depth images of the first portion of the object; receiving, by the processor, a second chunk including a 3D model of a second portion of the object, the second chunk being generated from a plurality of depth images of the second portion of the object; computing, by the processor, a registration of the first chunk with the second chunk, the registration corresponding to a transformation aligning corresponding portions of the first and second chunks; aligning, by the processor, the first chunk with the second chunk in accordance with the registration; and outputting, by the processor, a 3D model corresponding to the first chunk merged with the second chunk.
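The align-and-merge steps can be sketched once the registration has been computed. The sketch below assumes the registration is a rigid transformation (rotation R, translation t) and represents chunks as plain 3D point lists; these representations are illustrative, not the patent's.

```python
# Minimal sketch of aligning one chunk into another's frame and merging,
# assuming registration already yielded a rigid transform (R, t).

def apply_rigid_transform(points, R, t):
    """Apply x' = R @ x + t to every 3D point in the list."""
    out = []
    for x, y, z in points:
        out.append((
            R[0][0]*x + R[0][1]*y + R[0][2]*z + t[0],
            R[1][0]*x + R[1][1]*y + R[1][2]*z + t[1],
            R[2][0]*x + R[2][1]*y + R[2][2]*z + t[2],
        ))
    return out

def merge_chunks(chunk_a, chunk_b, R, t):
    """Align chunk_b into chunk_a's frame, then concatenate the point sets."""
    return chunk_a + apply_rigid_transform(chunk_b, R, t)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
merged = merge_chunks([(0, 0, 0)], [(1, 2, 3)], identity, (0.5, 0, 0))
print(merged)   # [(0, 0, 0), (1.5, 2, 3)]
```

In practice the registration itself would come from an alignment algorithm such as ICP over the chunks' overlapping regions; only the subsequent apply-and-merge arithmetic is shown here.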

COMPACT, LOW COST VCSEL PROJECTOR FOR HIGH PERFORMANCE STEREODEPTH CAMERA

A VCSEL projector and a method for using the same are disclosed. In one embodiment, the apparatus comprises a vertical cavity surface emitting laser (VCSEL) array comprising a plurality of VCSELs; a micro-lens array (MLA) coupled to the VCSEL array and having a plurality of lenses, each of the plurality of lenses positioned over a VCSEL in the VCSEL array; and a projection lens coupled to the MLA, where light emitted by the VCSEL array is projected as a sequence of patterns onto an object by the projection lens.

POSITIONAL AUDIO ASSIGNMENT SYSTEM
20170374486 · 2017-12-28 ·

In some implementations, a positional audio assignment system is used to improve a user's immersion during content playback within a virtual reality setting. Data representing a video viewable by a user, identifying spatial positions assigned to one or more objects within the video, is initially obtained. Audio data encoding one or more audio streams corresponding to each of the one or more objects is also obtained. User input data associated with playback of the video is then received, and a gaze point of the user is determined based on the received user input data. The gaze point of the user is then evaluated with respect to the spatial positions assigned to the one or more objects, and the audio output provided to the user is selectively adjusted based on that evaluation.
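One simple way to "selectively adjust" audio based on gaze is distance-based attenuation. The falloff constant, 2D position layout, and object names below are assumptions used purely for illustration.

```python
import math

# Illustrative sketch: attenuate each object's audio stream according to the
# distance between the user's gaze point and the object's assigned spatial
# position. The falloff model and data layout are assumptions.

def adjust_gains(gaze, object_positions, falloff=1.0):
    """Return a per-object gain in (0, 1]; objects nearer the gaze point
    are louder."""
    gains = {}
    for name, (ox, oy) in object_positions.items():
        dist = math.hypot(gaze[0] - ox, gaze[1] - oy)
        gains[name] = 1.0 / (1.0 + falloff * dist)
    return gains

gains = adjust_gains((0.5, 0.5), {"narrator": (0.5, 0.5), "car": (3.5, 4.5)})
print(gains["narrator"])   # 1.0 — gaze sits exactly on the object
```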

Stereoscopic imaging method and system that divides a pixel matrix into subgroups

A stereoscopic imaging method in which a pixel matrix is divided into groups such that parallax information is received by one pixel group and original information is received by another pixel group. The parallax information may, specifically, be based on polarized information received by subgroups of the one pixel group, and by processing all of the information received, multiple images are rendered by the method.
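The grouping step can be sketched concretely. The checkerboard (row+column parity) assignment below is an assumed scheme, not one specified by the abstract.

```python
# Hedged sketch: divide a pixel matrix into two interleaved groups, e.g. by
# checkerboard parity, so one group carries the original image samples and
# the other carries parallax (polarized) samples. The parity scheme is an
# assumption for illustration only.

def split_pixel_matrix(matrix):
    """Return (original_group, parallax_group) as masked copies, with None
    marking pixels that belong to the other group."""
    original, parallax = [], []
    for r, row in enumerate(matrix):
        o_row, p_row = [], []
        for c, value in enumerate(row):
            if (r + c) % 2 == 0:
                o_row.append(value)
                p_row.append(None)
            else:
                o_row.append(None)
                p_row.append(value)
        original.append(o_row)
        parallax.append(p_row)
    return original, parallax

orig, par = split_pixel_matrix([[1, 2], [3, 4]])
print(orig)   # [[1, None], [None, 4]]
print(par)    # [[None, 2], [3, None]]
```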

Image processing apparatus, image capturing apparatus, and image processing program

Images can be processed using an image processing apparatus including: an image data obtaining section that obtains at least two pieces of parallax image data from an image capturing element, the image capturing element including color filters and opening masks arranged so that one color filter and one opening mask correspond to each of at least a part of the photoelectric conversion elements, and outputting the at least two pieces of parallax image data; and a correcting section that corrects a color imbalance of corresponding pixels between the at least two pieces of parallax image data, based on at least one of the position of the at least a part of the photoelectric conversion elements in the image capturing element and an opening displacement of the opening mask.
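A toy version of the correcting section can show what "correcting color imbalance between parallax images" means in the simplest case. A real correction would vary per pixel with sensor position and opening-mask displacement; the global-gain version below is an assumed simplification.

```python
# Illustrative sketch: balance a pair of parallax images by scaling one so
# the pair's average intensities agree. Real corrections would be spatially
# varying (driven by pixel position and opening displacement); this global
# gain is an assumption for illustration.

def correct_color_imbalance(left, right):
    """Scale `right` so its mean intensity matches `left`'s mean;
    return the corrected pair."""
    mean_l = sum(left) / len(left)
    mean_r = sum(right) / len(right)
    gain = mean_l / mean_r
    return left, [v * gain for v in right]

left, right = correct_color_imbalance([10, 20, 30], [5, 10, 15])
print(right)   # [10.0, 20.0, 30.0] — right image rescaled to match left
```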

Stereoscopic moving picture generating apparatus and stereoscopic moving picture generating method
09854223 · 2017-12-26 ·

A stereoscopic picture generating apparatus comprising: a storage unit to store a first image containing partial images and a second image containing partial images that correspond respectively to the partial images contained in the first image; and an arithmetic unit to extract a first position defined as an existing position of a first partial image contained in the first image and a second position defined as an existing position of a second partial image contained in the first image, to calculate a first differential quantity defined as a difference between the first position and the second position, to calculate a third position defined as a new existing position of a third partial image contained in the second image that corresponds to the first partial image based on the first differential quantity, and to generate a third image based on the third position of the third partial image.
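The arithmetic unit's position calculation reduces to simple arithmetic if positions are taken as horizontal pixel coordinates, which is an assumption made here for illustration.

```python
# Sketch of the arithmetic unit's steps, assuming positions are horizontal
# pixel coordinates: the displacement measured between two partial images in
# the first image is reapplied to the corresponding partial image in the
# second image to find its new existing position.

def third_position(first_pos, second_pos, corresponding_pos):
    """Apply the first image's displacement (the first differential
    quantity) to the corresponding partial image in the second image."""
    differential = second_pos - first_pos     # first differential quantity
    return corresponding_pos + differential   # third (new existing) position

print(third_position(100, 112, 104))   # 116
```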

DEPTH IMAGE PROVISION APPARATUS AND METHOD
20170374352 · 2017-12-28 ·

Apparatuses, methods and storage media for providing a depth image of an object are described. In some embodiments, the apparatus may include a projector to perform a controlled motion, to project a light pattern on different portions of the scene at different time instances, and an imaging device coupled with the projector, to generate pairs of images (a first image of a pair from a first perspective, and a second image of the pair from a second perspective), of different portions of the scene in response to the projection of the light pattern on respective portions. The apparatus may include a processor coupled with the projector and the imaging device, to control the motion of the projector, and generate the depth image of the object in the scene, based on processing of the generated pairs of images of the portions of the scene. Other embodiments may be described and claimed.
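Once the projected light pattern yields correspondences between the first- and second-perspective images, per-pixel depth follows from standard stereo triangulation. The sketch below uses the pinhole stereo relation; the baseline, focal length, and disparity values are made-up numbers, not parameters from the patent.

```python
# Hedged sketch: convert pattern-assisted stereo disparities to depth using
# the pinhole relation depth = baseline * focal / disparity. All numeric
# values here are illustrative assumptions.

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole stereo triangulation for a single correspondence."""
    return baseline_m * focal_px / disparity_px

def depth_map(disparities, baseline_m=0.05, focal_px=600.0):
    """Convert one scene portion's disparity samples (pixels) to depth
    samples (meters)."""
    return [depth_from_disparity(d, baseline_m, focal_px) for d in disparities]

print(depth_map([30.0, 60.0]))   # [1.0, 0.5]
```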

Accumulating charge from multiple imaging exposure periods

Embodiments related to accumulating charge during multiple exposure periods in a time-of-flight depth camera are disclosed. For example, one embodiment provides a method including accumulating a first charge on a photodetector during a first exposure period for a first light pulse, transferring the first charge to a charge storage mechanism, accumulating a second charge during a second exposure period for the first light pulse, and transferring the second charge to the charge storage mechanism. The method further includes accumulating an additional first charge during a first exposure period for a second light pulse, adding the additional first charge to the first charge to form an updated first charge, accumulating an additional second charge on the photodetector for a second exposure period for the second light pulse, and adding the additional second charge to the second charge to form an updated second charge.
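The per-pulse accumulation loop described above can be sketched as two running totals, one per exposure period. The charge values and data layout below are arbitrary illustrative choices.

```python
# Toy sketch of the two-bin accumulation loop: for each light pulse, charge
# from the first exposure period is added to one stored total and charge
# from the second exposure period to the other. Charge values are arbitrary.

def accumulate(pulses):
    """`pulses` is a list of (first_exposure_charge, second_exposure_charge)
    pairs, one pair per light pulse; returns the two accumulated totals."""
    first_total, second_total = 0.0, 0.0
    for first_charge, second_charge in pulses:
        first_total += first_charge     # transfer to first storage bin
        second_total += second_charge   # transfer to second storage bin
    return first_total, second_total

print(accumulate([(3.0, 1.0), (2.5, 1.5)]))   # (5.5, 2.5)
```

In a time-of-flight camera, the ratio of the two accumulated charges would then be used to estimate the light pulse's round-trip delay; only the accumulation step is shown here.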