Patent classifications
H04N13/15
WEARABLE ELECTRONIC DEVICE AND METHOD OF OUTPUTTING THREE-DIMENSIONAL IMAGE
A wearable electronic device includes a left-eye display configured to output light of a first color corresponding to a 3D left-eye image, a right-eye display configured to output light of a second color corresponding to a 3D right-eye image, a left-eye optical waveguide configured to adjust a path of the light of the first color and output the light of the first color, a right-eye optical waveguide configured to adjust a path of the light of the second color and output the light of the second color, a left-eye display control circuit configured to supply a driving power and a control signal to the left-eye display, a right-eye display control circuit configured to supply a driving power and a control signal to the right-eye display, a communication module configured to communicate with a mobile electronic device, and a second control circuit configured to supply a driving power and a control signal to the communication module.
Stereoscopic camera with fluorescence visualization
A stereoscopic camera with fluorescence visualization is disclosed. An example stereoscopic camera includes a visible light source, a near-infrared light source, and a near-ultraviolet light source. The stereoscopic camera also includes a light filter assembly having left and right filter magazines positioned respectively along left and right optical paths and configured to selectively enable certain wavelengths of light to pass through. Each of the left and right filter magazines includes an infrared cut filter, a near-ultraviolet cut filter, and a near-infrared bandpass filter. A controller of the camera is configured to provide for a visible light mode, an indocyanine green (“ICG”) fluorescence mode, and a 5-aminolevulinic acid (“ALA”) fluorescence mode by synchronizing the activation of the light sources with the selection of the filters. A processor of the camera combines image data from the different modes to enable fluorescence emission light to be superimposed on visible light stereoscopic images.
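The controller's synchronization of light sources with filter selection can be sketched as a simple mode table. This is a minimal illustration, not the patented implementation: the pairing of each mode with a specific source and filter (e.g. the near-infrared bandpass filter for ICG emission) is an assumption inferred from the abstract, and all names below are hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    VISIBLE = auto()
    ICG = auto()  # indocyanine green fluorescence (near-infrared excitation)
    ALA = auto()  # 5-aminolevulinic acid fluorescence (near-ultraviolet excitation)

# Hypothetical mapping: each imaging mode activates one light source and
# selects the same filter in both the left and right filter magazines.
MODE_CONFIG = {
    Mode.VISIBLE: {"source": "visible",          "filter": "infrared_cut"},
    Mode.ICG:     {"source": "near_infrared",    "filter": "near_infrared_bandpass"},
    Mode.ALA:     {"source": "near_ultraviolet", "filter": "near_ultraviolet_cut"},
}

def configure(mode: Mode) -> dict:
    """Return the light source to activate and the filter to select
    in each magazine for the requested imaging mode."""
    cfg = MODE_CONFIG[mode]
    return {
        "activate_source": cfg["source"],
        "left_filter": cfg["filter"],
        "right_filter": cfg["filter"],
    }
```

A processor could then capture one frame per mode with this configuration and superimpose the fluorescence frames on the visible-light stereo pair.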
Method and System for Encoding a 3D Scene
A computer-implemented method for encoding a scene volume includes: (a) identifying features of a scene volume that are within a camera perspective range with respect to a default camera perspective; (b) converting the identified features into rendered features; and (c) sorting the rendered features into a plurality of scene layers, each including corresponding depth, color, and transparency maps for the respective rendered features. Further, (a), (b), and (c) may be repeated, operating on temporally ordered scene volumes, to produce and output a sequence encoding a video. Corresponding systems and non-transitory computer-readable media are disclosed for encoding a 3D scene and for decoding an encoded 3D scene. Efficient compression, transmission, and playback of video describing a 3D scene can be enabled, including for virtual reality displays with updates based on the changing perspective of a viewing user for variable-perspective playback.
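Step (c), sorting rendered features into depth-ordered layers that each carry depth, color, and transparency maps, can be sketched as follows. This is a hedged simplification: the feature dictionary layout, the near-to-far layer assignment, and the per-pixel occupancy test are assumptions for illustration, not the claimed encoding.

```python
import numpy as np

def encode_scene(features, num_layers, h, w):
    """Sort rendered features into scene layers, each holding depth,
    color, and transparency (alpha) maps. Features are assumed to be
    dicts with 'pixel' (row, col), 'depth', 'color', and 'alpha' keys."""
    layers = [
        {
            "depth": np.full((h, w), np.inf),
            "color": np.zeros((h, w, 3)),
            "alpha": np.zeros((h, w)),
        }
        for _ in range(num_layers)
    ]
    # Process features from nearest to farthest so that the front-most
    # feature at each pixel lands in the first (nearest) layer.
    for f in sorted(features, key=lambda f: f["depth"]):
        y, x = f["pixel"]
        for layer in layers:
            if layer["alpha"][y, x] == 0.0:  # pixel slot still empty
                layer["depth"][y, x] = f["depth"]
                layer["color"][y, x] = f["color"]
                layer["alpha"][y, x] = f["alpha"]
                break
    return layers
```

Repeating this over temporally ordered scene volumes would yield the per-frame layer sequence the abstract describes.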
Method and apparatus for generating image using LiDAR
According to an aspect of the embodiments, a method of generating an image using LiDAR includes reconstructing a two-dimensional reflection intensity image by projecting three-dimensional reflection intensity data measured by the LiDAR onto a two-dimensional image plane, and generating a color image by applying the projected two-dimensional reflection intensity image to a Fully Convolutional Network (FCN).
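The projection step, turning 3D reflection intensity data into a 2D reflection intensity image, can be sketched with a standard pinhole camera model. This is an illustrative assumption: the abstract does not state the projection model, and the intrinsics (fx, fy, cx, cy) and coordinate convention (camera frame, z forward) below are hypothetical.

```python
import numpy as np

def project_to_2d(points, intensities, fx, fy, cx, cy, h, w):
    """Project 3D LiDAR points (camera frame, z forward) onto a 2D
    reflection-intensity image using a pinhole model."""
    img = np.zeros((h, w), dtype=np.float32)
    for (x, y, z), r in zip(points, intensities):
        if z <= 0:
            continue  # point is behind the image plane
        u = int(round(fx * x / z + cx))  # column
        v = int(round(fy * y / z + cy))  # row
        if 0 <= u < w and 0 <= v < h:
            img[v, u] = r  # store the reflection intensity at that pixel
    return img
```

The resulting single-channel image would then be fed to an FCN trained to predict a color (RGB) image from reflection intensity.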
Distributed Virtual Reality
Aspects of the present disclosure relate to distributed virtual reality. In examples, a depth buffer and a color buffer are generated at a presenter device as part of rendering a virtual environment. The virtual environment may be perceived by a user in three dimensions (3D), for example via a virtual reality (VR) headset. Virtual environment information comprising the depth buffer and the color buffer may be transmitted to a viewer device, where it is used to render the virtual environment for display to a viewer. For example, the viewer may similarly view the virtual environment in 3D via a VR headset. A viewer perspective (e.g., from which the virtual environment is generated for the viewer) may differ from a presenter perspective and may be manipulated by the viewer, thereby decoupling the viewer's perception of the virtual environment from that of the presenter.
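Rendering the presenter's depth and color buffers from a different viewer perspective amounts to lifting each depth-buffer pixel to a 3D point and re-projecting it into the viewer's camera. The sketch below shows that reprojection for a single pixel; the 4x4 transform matrices and the intrinsics tuple are hypothetical, and a real renderer would also handle occlusion and holes.

```python
import numpy as np

def reproject_point(u, v, depth, K, presenter_to_world, world_to_viewer):
    """Lift one pixel of the presenter's depth buffer to a 3D point and
    re-project it into the viewer's camera.

    K is (fx, fy, cx, cy); the transforms are 4x4 homogeneous matrices."""
    fx, fy, cx, cy = K
    # Unproject: pixel + depth -> point in presenter-camera coordinates.
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    p_world = presenter_to_world @ p_cam
    q = world_to_viewer @ p_world
    # Project back into the viewer's image plane.
    return (fx * q[0] / q[2] + cx, fy * q[1] / q[2] + cy)
```

With identity transforms (viewer and presenter coincide) the pixel maps back to itself, which is a convenient sanity check; a viewer-manipulated perspective simply changes `world_to_viewer`.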
Rendering wide color gamut, two-dimensional (2D) images on three-dimensional (3D) capable displays
A system and method for displaying image data comprise receiving 2D video data, generating, from the video data, a first plurality of intensity values of virtual primaries of a first virtual color gamut and a second plurality of intensity values of virtual primaries of a second virtual color gamut, the first plurality of intensity values being below a luminance threshold and approximating a predefined color gamut and the second plurality of intensity values being above the luminance threshold, converting the first plurality of intensity values into a third plurality of intensity values of predefined primaries of a first projection head of a display system and the second plurality of intensity values into a fourth plurality of intensity values of predefined primaries of a second projection head of the display system, and dynamically adjusting pixel levels of spatial modulators of the display system based on the third plurality and the fourth plurality of intensity values.
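The split of a pixel's intensity into a below-threshold portion (for the first virtual gamut) and an above-threshold remainder (for the second) can be sketched as below. This is a deliberately simplified, hypothetical allocation: it scales the tristimulus values so the first portion's luminance Y caps at the threshold, whereas the actual gamut approximation in the patent is not specified at this level of detail.

```python
def split_by_luminance(xyz, threshold):
    """Split a pixel's (X, Y, Z) tristimulus values into a part whose
    luminance Y is at or below the threshold and an above-threshold
    remainder, so the two parts sum back to the original pixel."""
    X, Y, Z = xyz
    if Y <= threshold:
        # Entire pixel fits under the threshold; nothing for the second head.
        return (X, Y, Z), (0.0, 0.0, 0.0)
    scale = threshold / Y
    below = (X * scale, Y * scale, Z * scale)
    above = (X - below[0], Y - below[1], Z - below[2])
    return below, above
```

Each portion would then be converted to intensity values of the predefined primaries of its projection head, and the spatial modulators driven accordingly.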
SYSTEMS AND METHODS FOR DISPLAYING STEREOSCOPIC RENDERED IMAGE DATA CAPTURED FROM MULTIPLE PERSPECTIVES
A method includes receiving video data of a user, the video data comprising a first captured image and a second captured image, generating a two-dimensional planar proxy of the user, determining a pose comprising a location and orientation of the two-dimensional planar proxy within a three-dimensional virtual environment, rendering one or more display images for one or more displays of an artificial-reality device based on the two-dimensional planar proxy having the determined pose and at least one of the first and second captured images, displaying the rendered one or more display images using the one or more displays, respectively, determining that a viewing angle of the artificial-reality device relative to the two-dimensional planar proxy exceeds a predetermined maximum threshold, and based on the determination that the viewing angle exceeds the predetermined maximum threshold, ceasing to display the one or more display images.
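The final steps, comparing the headset's viewing angle onto the planar proxy against a maximum threshold and ceasing display when it is exceeded, reduce to a simple angle test. The geometry below (angle between the proxy's surface normal and the direction from the proxy to the headset, normal assumed to be a unit vector) is a hypothetical reading of the abstract, not the claimed method.

```python
import math

def within_viewing_limit(headset_pos, proxy_pos, proxy_normal, max_angle_deg):
    """Return True while the viewing angle onto the planar proxy stays
    within the maximum threshold; False means display should cease.

    proxy_normal is assumed to be a unit vector."""
    # Direction from the proxy toward the headset.
    v = [h - p for h, p in zip(headset_pos, proxy_pos)]
    norm = math.sqrt(sum(c * c for c in v))
    cos_angle = sum(a * b for a, b in zip(v, proxy_normal)) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```

A renderer would call this per frame and stop drawing the proxy's display images once the function returns False.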