Patent classifications
H04N13/366
Method of outputting three-dimensional image and electronic device performing the method
A method and apparatus for outputting a three-dimensional (3D) image are provided. To output a 3D image, a stereo image is generated based on viewpoints of a user and rendered into a 3D image. Since the stereo image is generated based on the viewpoints of the user, the user views a different side of an object appearing in the 3D image depending on a viewpoint of the user.
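The core idea — offsetting two virtual cameras from the tracked head position so each eye sees a different side of the object — can be sketched roughly as follows. All names, the toy pinhole projection, and the 63 mm default interpupillary distance are illustrative assumptions, not details from the patent:

```python
def stereo_eye_positions(head_pos, ipd=0.063):
    """Split a tracked head position (x, y, z) into left/right eye
    positions offset by half the interpupillary distance.
    The 63 mm IPD default is a typical adult value, assumed here."""
    x, y, z = head_pos
    h = ipd / 2.0
    return (x - h, y, z), (x + h, y, z)

def project(point, eye, focal=1.0):
    """Toy pinhole projection: camera at `eye` looking down -z with
    no rotation. The same world point lands at different horizontal
    image positions for each eye, producing parallax."""
    dx, dy, dz = (p - e for p, e in zip(point, eye))
    return focal * dx / -dz, focal * dy / -dz

left, right = stereo_eye_positions((0.0, 0.0, 1.0))
xl, _ = project((0.0, 0.0, -1.0), left)
xr, _ = project((0.0, 0.0, -1.0), right)
```

As the head position fed to `stereo_eye_positions` moves, both projected views shift, so the rendered stereo pair shows a different side of the scene per viewpoint.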
Display system providing concentric light field and monocular-to-binocular hybridization
A display system for realizing a concentric light field with monocular-to-binocular hybridization, and methods thereof. At least some embodiments include a display arranged to emit or transmit light rays based on image content from a content engine, and an optical subsystem arranged to configure the light rays into a concentric light field. The concentric light field provides a virtual image in a large, contiguous spatial region, such that each eye of the human viewer can detect monocular depth from the light field, to provide a large field of view.
EYE POSITIONING APPARATUS AND METHOD, AND 3D DISPLAY DEVICE AND METHOD
An eye positioning apparatus is provided, comprising: an eye positioner, comprising a first black-and-white camera configured to shoot first black-and-white images and a second black-and-white camera configured to shoot second black-and-white images; and an eye positioning image processor, configured to identify the presence of eyes based on at least one of the first black-and-white images and the second black-and-white images, and to determine eye space positions based on the eyes identified in the first black-and-white images and the second black-and-white images. The apparatus can determine the eye space positions of a user with high accuracy, thereby improving 3D display quality. An eye positioning method, a 3D display device and method, a computer-readable storage medium, and a computer program product are also provided.
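With two cameras whose geometry is known, an eye's 3D position is typically recovered by triangulation: for a rectified pair, depth follows from the horizontal disparity as Z = f·B/d. The function names and parameter values below are assumptions for illustration; the patent does not specify the math used:

```python
def eye_depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a point (e.g. a pupil centre) seen by two rectified
    cameras: Z = f * B / d, where d is the horizontal disparity
    in pixels, f the focal length in pixels, B the baseline in metres."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity

def eye_position_3d(x_left, y, x_right, focal_px, baseline_m, cx, cy):
    """Back-project the pupil pixel in the first camera to a 3D
    position using the triangulated depth (first camera's frame)."""
    z = eye_depth_from_disparity(x_left, x_right, focal_px, baseline_m)
    x = (x_left - cx) * z / focal_px
    y3 = (y - cy) * z / focal_px
    return (x, y3, z)
```

For example, a 10-pixel disparity with a 1000-pixel focal length and a 6 cm baseline places the eye 6 m away; larger disparities mean the viewer is closer.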
3D DISPLAY DEVICE AND 3D IMAGE DISPLAY METHOD
The present disclosure relates to the technical field of 3D display, and discloses a 3D display device, comprising: a multi-viewpoint 3D display screen, which comprises a plurality of composite pixels, wherein each composite pixel of the plurality of composite pixels comprises a plurality of composite subpixels, and each composite subpixel of the plurality of composite subpixels comprises a plurality of subpixels corresponding to a plurality of viewpoints of the 3D display device; a viewing angle determining apparatus, configured to determine the viewing angle of a user; and a 3D processing apparatus, configured to render, based on the user viewing angle, corresponding subpixels of the plurality of composite subpixels according to depth-of-field (DOF) information of a 3D model. The device may solve the problem of 3D display distortion. The present disclosure further discloses a 3D image display method, a computer-readable storage medium, and a computer program product.
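The composite-subpixel arrangement can be pictured as follows: each composite subpixel holds one physical subpixel per viewpoint, so filling a row means interleaving the corresponding pixels of the per-viewpoint rendered views. This is a minimal sketch under that reading of the abstract; the actual subpixel layout (e.g. any lenticular slant) is not specified there:

```python
def render_composite_row(view_images, row):
    """Fill one row of composite subpixels: within each composite
    subpixel, subpixel i takes its value from view image i.
    `view_images` is a list (one entry per viewpoint) of 2D arrays
    given as nested lists of scalar subpixel values."""
    n_views = len(view_images)
    width = len(view_images[0][row])
    out = []
    for col in range(width):            # one composite subpixel per column
        for v in range(n_views):        # one physical subpixel per viewpoint
            out.append(view_images[v][row][col])
    return out
```

A viewer at viewpoint i then sees only every i-th physical subpixel through the screen's optics, i.e. the image rendered for that viewpoint.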
STEREOSCOPIC-IMAGE PLAYBACK DEVICE AND METHOD FOR GENERATING STEREOSCOPIC IMAGES
A method for generating stereoscopic images is provided. The method includes: creating a three-dimensional mesh to obtain a stereoscopic scene and capturing a two-dimensional image of the stereoscopic scene; performing image preprocessing to obtain a first image in response to the two-dimensional image not being a side-by-side image; utilizing a graphics processing pipeline to perform depth estimation on the first image to obtain a depth image, to update the three-dimensional mesh according to a depth setting of the depth image, and to map the three-dimensional mesh to a corresponding coordinate system; utilizing the graphics processing pipeline to project the first image onto the mapped three-dimensional mesh to obtain an output three-dimensional mesh, and to capture an output side-by-side image from the output three-dimensional mesh; and utilizing the graphics processing pipeline to weave the left-eye and right-eye images into an output image, and to display the output image.
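The final weaving step merges the left-eye and right-eye images into one frame whose structure matches the display's optics. The abstract does not say which weave pattern is used; a column interleave, common for lenticular and parallax-barrier panels, is one plausible sketch:

```python
def weave_columns(left, right):
    """Column-interleave two equally sized images (nested lists of
    pixel values): even columns come from the left-eye image, odd
    columns from the right-eye image. One common weave pattern,
    assumed here; the patent abstract does not specify the pattern."""
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[c] if c % 2 == 0 else rrow[c]
                    for c in range(len(lrow))])
    return out
```

Other displays weave by row or in a slanted subpixel pattern instead; only the interleaving axis changes, not the idea.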
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
An image processing device includes: a light emitting unit that emits a near-infrared ray to a target; a light emission controlling unit that controls a light emission amount of the light emitting unit on the basis of a distance value between the target and the light emitting unit; and a detecting unit that detects a feature point on the basis of a captured image of the target irradiated with the near-infrared ray.
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user, the method includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
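The selection logic described — drive the waveguide whose focal distance matches the measured accommodation, plus waveguides within a predetermined margin of it — can be sketched as below. Function names, the focal-plane list, and the margin value are assumptions, not from the patent:

```python
def select_waveguides(accommodation_m, focal_planes_m, margin_m):
    """Pick the primary waveguide as the one whose focal plane is
    nearest the accommodated distance, and secondary waveguides as
    those whose focal planes lie within a predetermined margin of
    that distance (mirroring the abstract's two delivery steps)."""
    primary = min(focal_planes_m, key=lambda f: abs(f - accommodation_m))
    secondary = [f for f in focal_planes_m
                 if f != primary and abs(f - accommodation_m) <= margin_m]
    return primary, secondary
```

For a viewer accommodating at 1.1 m with focal planes at 0.5, 1, 2, and 4 m and a 1 m margin, the 1 m waveguide carries the first wavefront and the 0.5 m and 2 m waveguides carry the margin wavefronts.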