Patent classifications
H04N13/371
Dynamic convergence adjustment in augmented reality headsets
Systems and methods are disclosed that dynamically and laterally shift each virtual object displayed by an augmented reality headset by a respective distance as the respective virtual object is displayed to change virtual depth from a first virtual depth to a second virtual depth. The respective distance may be determined based on a lateral distance between a first convergence vector of a user's eye with the respective virtual object at the first virtual depth and a second convergence vector of the user's eye with the respective virtual object at the second virtual depth along the display, and may be based on an interpupillary distance. In this manner, display of the virtual object may be adjusted such that the gazes of the user's eyes may converge where the virtual object appears to be.
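Under a simple geometric model, the per-eye lateral shift described above can be derived from the interpupillary distance, the display-plane distance, and the two virtual depths. The following sketch is illustrative only (the on-axis object, the coordinate frame, and all names are assumptions, not taken from the patent):

```python
def lateral_shift(ipd: float, display_dist: float,
                  depth_from: float, depth_to: float) -> float:
    """Per-eye lateral shift (same units as ipd) on the display plane when an
    on-axis virtual object moves from depth_from to depth_to.

    Each eye sits at +/- ipd/2; the convergence ray from an eye to the object
    crosses the display plane at (+/- ipd/2) * (1 - display_dist / depth).
    The shift is the difference between the two crossing points.
    """
    half_ipd = ipd / 2.0
    return half_ipd * display_dist * (1.0 / depth_from - 1.0 / depth_to)
```

For a 60 mm IPD and a display plane at 1 m, moving an object from 2 m to 1 m gives a shift of -15 mm for the right eye (toward the midline), consistent with the eyes converging more strongly on a nearer object.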
Information processing apparatus and information processing method for suppressing crosstalk while suppressing degradation in image quality
An information processing apparatus (30) includes: an estimation unit (33B) that estimates a crosstalk amount based on a relative positional relationship between a viewing position of a viewer of a display device (10) and a pixel on a screen of the display device (10); and a generation unit (33C) that generates an image to be displayed on the display device (10) by correcting a value of each of a plurality of pixels of the screen based on the crosstalk amount.
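The pixel-value correction can be illustrated with a standard two-view leakage model, where a fraction of each view bleeds into the other and the displayed values are solved so that the perceived values match the targets. This linear model and all names are assumptions for illustration, not the patent's actual correction:

```python
def compensate_crosstalk(target_left: float, target_right: float, leak: float):
    """Solve for displayed left/right pixel values so that, after a fraction
    `leak` of each view bleeds into the other, the perceived values match
    the targets.  Valid for leak < 0.5 (the mixing matrix is invertible);
    results may need clamping to the displayable range.
    """
    det = 1.0 - 2.0 * leak
    disp_left = ((1.0 - leak) * target_left - leak * target_right) / det
    disp_right = ((1.0 - leak) * target_right - leak * target_left) / det
    return disp_left, disp_right
```

In a position-dependent scheme, `leak` would itself be estimated from the viewer's position relative to each pixel before the correction is applied.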
Video processing
An apparatus includes a video display to display video images to a user; a gaze detector configured to detect a gaze direction for one or both eyes of the user while the user views the display; a head tracker configured to detect a head orientation of the user; an image processor configured to generate the video images for display by the video display; the image processor being responsive to one or more control functions dependent upon the gaze direction detected by the gaze detector; and a controller configured to detect a predetermined condition and, in response to detection of the predetermined condition, to control the image processor to be responsive to one or more control functions dependent upon the head orientation detected by the head tracker in place of the one or more control functions dependent upon the gaze direction detected by the gaze detector.
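The controller's switch from gaze-driven to head-driven control can be sketched as follows. The "predetermined condition" is modelled here, purely as an assumption, as the gaze detector failing to report a direction (e.g. tracking loss); the claim covers other conditions:

```python
from typing import Optional, Tuple

Vector = Tuple[float, float, float]

class ControlSource:
    """Chooses between gaze-driven and head-driven control functions."""

    def select(self, gaze: Optional[Vector], head: Vector) -> Vector:
        if gaze is None:      # predetermined condition detected (tracking lost)
            return head       # fall back to head-orientation control
        return gaze           # normal gaze-driven control
```

The image processor would then apply its control functions to whichever direction vector `select` returns.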
LIGHT TRANSMITTING DISPLAY SYSTEM, IMAGE OUTPUT METHOD THEREOF AND PROCESSING DEVICE THEREOF
A light transmitting display system, an image output method thereof and a processing device thereof are provided. The light transmitting display device is located between a background object and a user. The image output method includes the following steps. The locations of the user, the light transmitting display device and the background object are detected. A coordinate conversion relationship between the user, the light transmitting display device and the background object is established. An eyes midpoint and an eyes offset of the user are detected. A left eye viewpoint and a right eye viewpoint of the user on the light transmitting display device are calculated according to the coordinate conversion relationship, the eyes midpoint and the eyes offset of the user. As a head-mounted device is switched, an image is alternately displayed at the left eye viewpoint and the right eye viewpoint.
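The left and right eye viewpoints on the display can be computed by intersecting each eye-to-background ray with the display plane. The sketch below assumes, for illustration only, that all coordinates are already in one frame (the coordinate conversion relationship has been applied), with the eyes at `midpoint ± eyes_offset/2` along x at z = 0 and the transparent display at the plane z = `display_z`:

```python
def eye_viewpoints(midpoint, eyes_offset, target, display_z):
    """Intersect each eye->background-target ray with the display plane.

    midpoint:    (x, y) of the eyes midpoint at z = 0
    eyes_offset: interocular distance along x
    target:      (x, y, z) of the background object point, z > display_z > 0
    Returns [left viewpoint, right viewpoint] as (x, y) on the display.
    """
    mx, my = midpoint
    tx, ty, tz = target
    points = []
    for ex in (mx - eyes_offset / 2.0, mx + eyes_offset / 2.0):
        t = display_z / tz                  # ray parameter at the display plane
        points.append((ex + t * (tx - ex), my + t * (ty - my)))
    return points
```

The shutter of the head-mounted device would then gate which of the two viewpoints receives the image on each alternation.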
THREE-DIMENSIONAL DISPLAY DEVICE, THREE-DIMENSIONAL DISPLAY SYSTEM, AND MOVABLE OBJECT
A three-dimensional display device includes a display panel, a shutter panel, an obtainer, an input unit, and a controller. The display panel includes subpixels that display a parallax image including a first image and a second image. The obtainer obtains an illuminance level. The input unit receives a position of a pupil. The controller causes a set of subpixels included in the subpixels to display a black image based on the illuminance level. The controller determines an origin position. The origin position is a position of the pupil for a viewable section to have a center aligning with a center of a set of consecutive subpixels in an interocular direction. The set of consecutive subpixels is included in the subpixels and displays the first image or the second image corresponding to the viewable section. The controller controls the display panel based on a displacement of the pupil from the origin position in the interocular direction.
METHODS AND APPARATUS FOR DELIVERING CONTENT AND/OR PLAYING BACK CONTENT
Content delivery and playback methods and apparatus are described. The methods and apparatus are well suited for delivery and playback of content corresponding to a 360 degree environment and can be used to support streaming and/or real time delivery of 3D content corresponding to an event, e.g., while the event is ongoing or after the event is over. Portions of the environment are captured by cameras located at different positions. The content captured from different locations is encoded and made available for delivery. A playback device selects the content to be received based on a user's head position.
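Head-position-based selection among camera feeds can be sketched as picking the feed whose viewing direction is angularly closest to the user's head orientation. The yaw-only model and all names are illustrative assumptions; the patent covers head position generally:

```python
def select_stream(head_yaw: float, camera_yaws):
    """Pick the camera yaw (degrees) angularly closest to the head yaw,
    handling wrap-around at 360 degrees."""
    def angular_dist(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(camera_yaws, key=lambda c: angular_dist(head_yaw, c))
```

A playback device would re-run this selection as the head tracker updates, requesting only the portion of the environment the user is facing.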
Sequential encoding and decoding of volumetric video
The invention relates to methods, apparatuses, systems and computer program products for coding volumetric video. A first texture picture is coded, said first texture picture comprising a first projection of first volumetric texture data of a first source volume of a scene model and a second projection of second volumetric texture data of said first source volume of said scene model, said first projection being from said first source volume to a first projection surface, and said second projection being from said first source volume to a second projection surface, said second volumetric texture data having been obtained by removing at least a part of said first volumetric texture data that has been successfully projected in said first projection. A first geometry picture is coded, said first geometry picture representing a mapping of said first projection surface to said first source volume and a mapping of said second projection surface to said first source volume. Projection geometry information of said first and second projections is coded, said projection geometry information comprising information of position of said first and second projection surfaces in said scene model.
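The remove-what-was-projected idea can be illustrated with a minimal orthographic projection: points that land in an already-occupied raster cell behind a nearer point are not captured by the first projection and form the residual for a second projection (e.g. from the opposite side). This is a simplified sketch under those assumptions, not the codec itself:

```python
def project_and_split(points, cell_size=1.0):
    """First projection: orthographic onto the xy plane, keeping the point
    nearest the projection surface (smallest z) per raster cell.

    Returns (nearest, residual): nearest maps each cell to its projected
    point; residual holds the points hidden in this projection, which a
    second projection would then capture.
    """
    nearest = {}
    residual = []
    for p in points:
        cell = (round(p[0] / cell_size), round(p[1] / cell_size))
        if cell not in nearest or p[2] < nearest[cell][2]:
            if cell in nearest:
                residual.append(nearest[cell])  # displaced by a nearer point
            nearest[cell] = p
        else:
            residual.append(p)                  # occluded in first projection
    return nearest, residual
```

The geometry picture in the claim corresponds to storing, per cell, the distance from the projection surface (here, the z values in `nearest`).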