Patent classifications
H04N13/371
Stereoscopic image display device
A stereoscopic image display device includes: a display panel including a plurality of pixels arranged in a matrix format; and a viewpoint division unit dividing light of a left-eye image and light of a right-eye image displayed by the plurality of dots and transferring the divided light to a plurality of viewpoints corresponding to each dot, wherein the viewpoint division unit includes a plurality of openings and a light blocking unit, and when a horizontal directional width of each of the plurality of openings corresponds to m dots (m being a natural number), the number n of adjacent dots in the horizontal direction displaying the left-eye image and the right-eye image satisfies n=2m+1 or n=2(m+1).
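The opening-width/dot-count relation stated in the claim can be sketched numerically. The helper below is purely illustrative; the function name and interface are assumptions, not from the patent:

```python
def adjacent_dot_counts(m: int) -> tuple[int, int]:
    """For an opening whose horizontal width corresponds to m dots
    (m a natural number), return the two permitted counts n of adjacent
    dots displaying the left-eye and right-eye images:
    n = 2m + 1 or n = 2(m + 1)."""
    if m < 1:
        raise ValueError("m must be a natural number (m >= 1)")
    return 2 * m + 1, 2 * (m + 1)

# An opening two dots wide permits groups of 5 or 6 adjacent dots.
print(adjacent_dot_counts(2))  # -> (5, 6)
```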
VIDEO PROCESSING
An apparatus includes a video display to display video images to a user; a gaze detector configured to detect a gaze direction for one or both eyes of the user while the user views the display; a head tracker configured to detect a head orientation of the user; an image processor configured to generate the video images for display by the video display; the image processor being responsive to one or more control functions dependent upon the gaze direction detected by the gaze detector; and a controller configured to detect a predetermined condition and, in response to detection of the predetermined condition, to control the image processor to be responsive to one or more control functions dependent upon the head orientation detected by the head tracker in place of the one or more control functions dependent upon the gaze direction detected by the gaze detector.
DYNAMIC CONVERGENCE ADJUSTMENT IN AUGMENTED REALITY HEADSETS
Systems and methods are disclosed that dynamically and laterally shift each virtual object displayed by an augmented reality headset by a respective distance as the respective virtual object is displayed to change virtual depth from a first virtual depth to a second virtual depth. The respective distance may be determined based on a lateral distance between a first convergence vector of a user's eye with the respective virtual object at the first virtual depth and a second convergence vector of the user's eye with the respective virtual object at the second virtual depth along the display, and may be based on an interpupillary distance. In this manner, display of the virtual object may be adjusted such that the gazes of the user's eyes may converge where the virtual object appears to be.
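The lateral shift described above follows from convergence geometry. As a hedged sketch (the patent does not give this formula; the similar-triangles model, names, and parameters below are assumptions): with the eyes at horizontal positions ±ipd/2 and a display plane at distance d, a point at virtual depth z projects for each eye at x = ±(ipd/2)·(1 − d/z), so moving the object from depth z1 to z2 shifts each eye's image laterally by (ipd/2)·d·(1/z1 − 1/z2):

```python
def per_eye_lateral_shift(ipd: float, display_dist: float,
                          depth_from: float, depth_to: float) -> float:
    """Per-eye lateral shift on the display plane (same units as ipd)
    so that the eyes' convergence vectors meet at depth_to instead of
    depth_from. Derived from similar triangles between each eye, the
    display plane, and the virtual object; an illustrative model only."""
    half_ipd = ipd / 2.0
    return half_ipd * display_dist * (1.0 / depth_from - 1.0 / depth_to)

# 64 mm IPD, display plane at 1 m, object moved from 2 m to 0.5 m:
shift = per_eye_lateral_shift(0.064, 1.0, 2.0, 0.5)
print(round(shift * 1000, 1), "mm")  # -> -48.0 mm per eye
```

The sign indicates the direction of the shift (each eye's image moves toward its own side as the object comes closer, increasing convergence).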
IMAGE GENERATING APPARATUS AND METHOD THEREFOR
An apparatus comprises a determiner (305) which determines a first-eye and a second-eye view pose. A receiver (301) receives a reference first-eye image with associated depth values and a reference second-eye image with associated depth values, the reference first-eye image being for a first-eye reference pose and the reference second-eye image being for a second-eye reference pose. A depth processor (311) determines a reference depth value, and modifiers (307) generate modified depth values by reducing a difference between the received depth values and the reference depth value by an amount that depends on a difference between the second- or first-eye view pose and the second- or first-eye reference pose. A synthesizer (303) synthesizes an output first-eye image for the first-eye view pose by view shifting the reference first-eye image and an output second-eye image for the second-eye view pose by view shifting the reference second-eye image based on the modified depth values. The terms first and second may be replaced by left and right, respectively, or vice versa. E.g., the terms first-eye view pose, second-eye view pose, reference first-eye image, and reference second-eye image may be replaced by left-eye view pose, right-eye view pose, reference left-eye image, and reference right-eye image, respectively.
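The depth-modification step can be sketched as follows. The abstract states only that the difference to the reference depth is reduced by an amount depending on the pose difference; the linear ramp, the threshold parameter, and all names below are assumptions for illustration:

```python
def modify_depths(depths, reference_depth, pose_difference, max_pose_diff=0.1):
    """Pull each depth value toward the reference depth by a factor that
    grows with the difference between the view pose and the reference pose
    (a linear ramp is assumed here). At zero pose difference the depths are
    unchanged; at max_pose_diff and beyond every depth collapses to the
    reference depth, so view shifting degrades to a parallax-free shift."""
    # weight in [0, 1]: 0 = no reduction, 1 = full reduction to reference depth
    w = min(max(pose_difference / max_pose_diff, 0.0), 1.0)
    return [d + (reference_depth - d) * w for d in depths]

# Halfway to the threshold, each depth moves halfway toward the reference:
print(modify_depths([1.0, 2.0, 4.0], reference_depth=2.0, pose_difference=0.05))
# -> [1.5, 2.0, 3.0]
```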
Counterrotation of display panels and/or virtual cameras in a HMD
A head-mounted display (HMD) system may include a HMD with a housing and a pair of display panels, mounted within the housing, that are counterrotated in orientation. A compositor of the HMD system may also be configured to provide camera pose data with counterrotated camera orientations to an executing application (e.g., a video game application), and to resample the frames received from the application, with or without rotational adjustments in the clockwise and counterclockwise directions depending on whether the display panels of the HMD are upright-oriented or counterrotated in orientation. A combined approach may use the counterrotated camera orientations in combination with counterrotated display panels to provide a HMD with optimized display performance.
Methods and apparatus for delivering content and/or playing back content
Content delivery and playback methods and apparatus are described. The methods and apparatus are well suited for delivery and playback of content corresponding to a 360 degree environment and can be used to support streaming and/or real time delivery of 3D content corresponding to an event, e.g., while the event is ongoing or after the event is over. Portions of the environment are captured by cameras located at different positions. The content captured from different locations is encoded and made available for delivery. A playback device selects the content to be received based on a user's head position.