Patent classifications
H04N13/04
AUDIENCE SEGMENTATION BASED ON VIEWING ANGLE OF A USER VIEWING A VIDEO OF A MULTI-ANGLE VIEWING ENVIRONMENT
Audience segmentation can be based on a viewing angle of a user viewing a video of a multi-angle viewing environment. During playback, a sequence of the user-controlled viewing angles of the video is recorded; each entry in the sequence represents the viewing angle of the user at a given point in time. Based on the sequences of several users, a predominant sequence of viewing angles of the video is determined, and one or more audience segment tags are assigned to the predominant sequence. During subsequent playbacks of the video, the sequences of user-controlled viewing angles are likewise recorded. Each recorded sequence of a subsequent user is compared to the predominant sequence of viewing angles of the video, and the subsequent user is assigned to an audience segment based on the comparison and the corresponding audience segment tags.
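The comparison step described above can be illustrated with a minimal sketch. The function names, the mean-absolute-angle distance metric, and the `threshold` parameter are all assumptions for illustration; the abstract does not specify how sequences are compared.

```python
def sequence_distance(seq_a, seq_b):
    """Mean absolute angular difference between two equal-length
    viewing-angle sequences (degrees, wrapped to [0, 180])."""
    diffs = []
    for a, b in zip(seq_a, seq_b):
        d = abs(a - b) % 360
        diffs.append(min(d, 360 - d))
    return sum(diffs) / len(diffs)

def assign_segment(user_seq, tagged_predominant_seqs, threshold=30.0):
    """Assign the user to the audience segment whose predominant
    sequence is closest, if within the threshold; else None."""
    best_tag, best_dist = None, threshold
    for tag, pred_seq in tagged_predominant_seqs.items():
        d = sequence_distance(user_seq, pred_seq)
        if d < best_dist:
            best_tag, best_dist = tag, d
    return best_tag
```

A user whose recorded angles closely track the "sports" predominant sequence would be tagged accordingly, while a user far from every predominant sequence is left unassigned.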
PRESENTATION OF SCENES FOR BINOCULAR RIVALRY PERCEPTION
Embodiments herein relate to the display of enhanced stereographic imagery in augmented or virtual reality. In various embodiments, an apparatus to display enhanced stereographic imagery may include one or more processors; an image generation module to generate an enhanced stereoscopic image of a scene, comprising a first two-dimensional (2D) image of the scene and a second 2D image of the same scene that is visually or optically different from the first, so as to create binocular rivalry perception of the scene when the first and second 2D images are respectively presented to a first and a second eye of a user; and a display module to display the enhanced stereoscopic image to the user, with the first 2D image presented to the first eye and the second 2D image presented to the second eye. Other embodiments may be described and/or claimed.
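One possible "visually or optically different" second image is a luminance-reduced copy of the scene. The sketch below assumes this particular difference and the function name `enhanced_stereo_pair`; the abstract covers many other differences (color, sharpness, polarization) equally well.

```python
import numpy as np

def enhanced_stereo_pair(scene, luminance_scale=0.5):
    """Produce a left/right 2D image pair from one scene image,
    with the right image darkened so the two eyes receive visually
    different views (one possible rivalry-inducing difference).
    `scene` is an HxWx3 uint8 array."""
    left = scene.copy()
    right = np.clip(scene.astype(np.float32) * luminance_scale,
                    0, 255).astype(np.uint8)
    return left, right
```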
DISCONTINUITY-AWARE REPROJECTION
In various embodiments, methods and systems for reprojecting three-dimensional (3D) virtual scenes using discontinuity depth late stage reprojection are provided. A reconstruction point that indicates camera pose information is accessed. The reconstruction point is associated with a plurality of sample points of a 3D virtual scene. One or more closest sample points, relative to the reconstruction point, are identified from the plurality of sample points. Each of the closest sample points is associated with a cube map of color data and depth data. A relative convergence score is determined for each of the closest sample points by performing a depth-aware cube map late stage reprojection operation in relation to the reconstruction point. A subset of the closest sample points is identified based on the relative convergence scores, and a reconstructed 3D virtual image is generated using the subset.
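The selection steps (nearest sample points, then a convergence-scored subset) can be sketched as follows. The Euclidean nearest-neighbor choice, the dictionary layout of a sample point, and the injected `score_fn` are assumptions; the actual depth-aware cube map reprojection that produces the convergence score is far more involved and is only stubbed here.

```python
import math

def closest_sample_points(reconstruction_point, sample_points, k=4):
    """Return the k sample points nearest the reconstruction point
    (Euclidean distance between 3D positions)."""
    def dist(p):
        return math.dist(reconstruction_point, p["position"])
    return sorted(sample_points, key=dist)[:k]

def select_by_convergence(samples, score_fn, keep=2):
    """Score each candidate with a (stand-in) depth-aware
    reprojection convergence function and keep the best subset,
    from which the reconstructed image would be generated."""
    return sorted(samples, key=score_fn, reverse=True)[:keep]
```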
Display device
A display device includes a display unit in which pixels having first and second regions are arrayed in a matrix, the first region emitting first color light and the second region emitting second color light in order to display a stereoscopic image including images of a plurality of viewing points. A separation unit optically separates the images of the respective viewing points from each other so that the images of different viewing points are observed by different eyes of a viewer. In a region on the display unit in which the image of a predetermined viewing point is displayed, the widths of the first and second regions in a parallax direction of the stereoscopic image are approximately the same, while their widths in a vertical direction, approximately perpendicular to the parallax direction, are different.
Three-dimensional image processing apparatus and method for adjusting location of sweet spot for displaying multi-view image
A three-dimensional image processing apparatus and a method for controlling a location of a sweet spot for displaying a multi-view image are disclosed. A receiver receives a multi-view image including a plurality of view images. A controller detects a plurality of users from an image obtained by photographing a watching zone, acquires user location information indicating locations of the detected users, calculates distance information indicating a distance between the detected users using the acquired user location information, and controls a location of a sweet spot for viewing the plurality of view images on the basis of the calculated distance information and a length of a dead zone of the multi-view image.
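The controller's use of inter-user distance and dead-zone length can be sketched as below. The midpoint rule, the coverage test, and all names are hypothetical; the abstract does not state how the sweet spot location is computed from these inputs.

```python
def sweet_spot_center(user_positions):
    """Midpoint (x coordinate) of the detected users in the
    watching zone; a simple stand-in for the controller's
    location logic. Positions are (x, z) tuples."""
    xs = [p[0] for p in user_positions]
    return sum(xs) / len(xs)

def adjust_sweet_spot(user_positions, dead_zone_length, view_width):
    """Center the sweet spot on the user midpoint and report
    whether the user spread fits within the viewable width
    once the dead zone is excluded."""
    xs = [p[0] for p in user_positions]
    spread = max(xs) - min(xs)
    return {
        "center": sweet_spot_center(user_positions),
        "all_users_covered": spread <= (view_width - dead_zone_length),
    }
```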
Video projector system
Some embodiments provide for a modular video projector system having a light engine module and an optical engine module. The light engine module can provide narrow-band laser light to the optical engine module, which modulates the laser light according to video signals received from a video processing engine. Some embodiments provide for an optical engine module having a sub-pixel generator configured to display video or images at a resolution at least four times greater than the resolution of the modulating elements within the optical engine module. Systems and methods for reducing speckle are presented in conjunction with the modular video projector system.
Method and apparatus for displaying stereoscopic information related to ultrasound sectional plane of target object
A method of displaying stereoscopic information related to an ultrasound sectional plane of a target object includes setting a line of interest on the ultrasound sectional plane of the target object based on a received input; obtaining an ultrasound signal of the ultrasound sectional plane of the target object along the set line of interest; converting the obtained ultrasound signal to represent the stereoscopic information in a three-dimensional manner; and displaying the stereoscopic information related to the ultrasound sectional plane of the target object.
Image processing
An image processing method includes capturing, with a camera, an image of the head of a user of a head mountable display device. The position of the head mountable display device is detected in the captured image, and a region of the user's face that is occluded by the head mountable display device is identified. The portion of the captured image corresponding to the head mountable display device is at least partially replaced with a corresponding portion of a 3D facial model, to provide a modified version of the captured image.
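The replacement step can be sketched as a masked blend between the captured frame and a render of the facial model. The function name, the boolean HMD mask, and the `blend` weight are illustrative assumptions; detecting the device and rendering the model are out of scope here.

```python
import numpy as np

def replace_occluded_region(captured, model_render, hmd_mask, blend=0.8):
    """Blend a render of the 3D facial model into the captured
    frame wherever the HMD mask is set. `captured` and
    `model_render` are HxWx3 uint8 arrays; `hmd_mask` is HxW bool."""
    out = captured.astype(np.float32)
    render = model_render.astype(np.float32)
    out[hmd_mask] = blend * render[hmd_mask] + (1 - blend) * out[hmd_mask]
    return out.astype(np.uint8)
```

A `blend` below 1.0 leaves a trace of the original pixels, which can soften seams at the mask boundary.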
Stereoscopic endoscope system
A stereoscopic endoscope system includes a stereoscopic endoscope and an identification information combining section. The stereoscopic endoscope is provided with an R image pickup section and an R output section on a right side of an endoscope body, an L image pickup section and an L output section on a left side of the endoscope body, and an R memory and an L memory which store right- or left-specific correction information and identification information. The right and left image pickup sections and the right and left memories, whether correctly or incorrectly combined, are connected to the right and left output sections. The identification information combining section combines an image and identification information inputted from the R output section, or an image and identification information inputted from the L output section, and outputs the combined image.
MULTI-VIEW PIXEL DIRECTIONAL BACKLIGHT MODULE AND NAKED-EYE 3D DISPLAY DEVICE
A multi-view pixel directional backlight module and a naked-eye 3D display device are provided. The multi-view pixel directional backlight module includes at least two rectangular light guide plates closely stacked together. A light-emerging surface of each rectangular light guide plate is provided with multiple pixel arrays. Light emitted by pixels in the same pixel array is directed to the same viewing angle, and different pixel arrays correspond to different viewing angles. At least one side of each rectangular light guide plate is provided with a light source group. Light emitted by the light source group enters the corresponding light guide plate, emerges from the pixels of the respective pixel arrays on the light-emerging surface, and is totally internally reflected within the light guide plate at positions other than those of the pixels. Each of the pixels is a nano-diffraction grating.