H04N13/395

Multi-depth augmented reality display

A system includes an image realisation device for forming a source image and projection optics for rendering a display image on a display screen, wherein the display image is a virtual image corresponding to the source image. The projection optics have an optical axis, and the image realisation device includes a first image realisation surface at a first distance along the optical axis and a second image realisation surface at a second, different distance along the optical axis. The first and second image realisation surfaces overlap, and the first and second image realisation surfaces include multiple regions, each region switchable between a transparent state and an image realisation state such that the source image may be formed on a region of the first or second image realisation surface and projected through the projection optics to render the display image on the display screen at a first or second apparent depth.
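The region-switching idea above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: each image realisation surface is modelled as a grid of regions that can be in a "transparent" or "image" state, and content is shown at a chosen apparent depth by switching the matching surface's region to the image state while keeping the overlapping region on the other surface transparent. All names and the two-surface grid model are assumptions.

```python
TRANSPARENT, IMAGE = "transparent", "image"

class RealisationSurface:
    """A surface of switchable regions at a fixed distance along the optical axis."""
    def __init__(self, distance, rows, cols):
        self.distance = distance
        self.regions = [[TRANSPARENT] * cols for _ in range(rows)]

def show_at_depth(surfaces, row, col, target_distance):
    """Form the source image on the surface at target_distance; keep the
    overlapping regions of every other surface transparent so the image
    projects through at the chosen apparent depth."""
    for s in surfaces:
        s.regions[row][col] = IMAGE if s.distance == target_distance else TRANSPARENT

near, far = RealisationSurface(1.0, 4, 4), RealisationSurface(2.0, 4, 4)
show_at_depth([near, far], 2, 3, 2.0)
print(near.regions[2][3], far.regions[2][3])  # transparent image
```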

Transparent display device, and three-dimensional image display apparatus comprising same
11627304 · 2023-04-11

A transparent display device includes an image display bar in which a plurality of light-emitting elements are arranged, and a bar driving unit which moves the image display bar along a predetermined path, providing a transparent display using afterimages resulting from the movement of the light-emitting elements. The transparency of the afterimage-based display is determined by the equation transparency (%) = ((A − B) / A) × 100, where A denotes the entire display area of the transparent display and B denotes the area of the image display bar.
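The transparency equation in the abstract is straightforward to evaluate; a minimal sketch (function name and units are illustrative assumptions):

```python
def afterimage_transparency(display_area, bar_area):
    """Transparency (%) = ((A - B) / A) * 100, where A is the entire display
    area and B is the area occupied by the moving image display bar.
    Computed as (A - B) * 100 / A to keep the arithmetic exact for
    round inputs."""
    return (display_area - bar_area) * 100.0 / display_area

# e.g. a 1000 cm^2 display swept by a 50 cm^2 bar:
print(afterimage_transparency(1000.0, 50.0))  # 95.0
```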

METHOD AND DEVICE FOR PROCESSING IMAGE CONTENT
20230072247 · 2023-03-09 ·

A method and system are provided for processing image content. In one embodiment, the method comprises receiving a plurality of captured contents showing the same scene, as captured by one or more cameras having different focal lengths, together with depth maps, and generating a consensus cube by obtaining depth map estimations from the received contents. The visibility of different objects is then analysed to create a soft visibility cube that provides visibility information about each content. A color cube is then generated using information from the consensus and soft visibility cubes. The color cube is then used to combine the different received contents and generate a single image for the plurality of contents received.
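The final fusion step, combining the received contents into a single image, might be sketched as a visibility-weighted average. This is a rough illustration only: the consensus/colour-cube data structures and the weighting rule below are assumptions, not the method claimed.

```python
import numpy as np

def fuse_contents(colors, visibility):
    """colors: (n_contents, H, W, 3); visibility: (n_contents, H, W) in [0, 1].
    Returns a single (H, W, 3) image as a soft-visibility-weighted average
    of the per-content colours."""
    w = visibility[..., None]                      # broadcast weights over RGB
    return (colors * w).sum(axis=0) / np.clip(w.sum(axis=0), 1e-8, None)

# Two 1x1 contents: the first (white) is fully visible, the second is occluded.
colors = np.zeros((2, 1, 1, 3))
colors[0] = 1.0
vis = np.array([[[1.0]], [[0.0]]])
print(fuse_contents(colors, vis)[0, 0])  # [1. 1. 1.]
```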

SYSTEM AND METHOD FOR DISPLAYING A THREE-DIMENSIONAL IMAGE

A method or system can be configured to receive content associated with a scene; optionally, format the content as a three-dimensional image; render the content or three-dimensional image in a display-readable format; optionally, authenticate the display; and display the formatted content such that the formatted content is perceivable as three-dimensional for one or more viewers.

Geometry buffer slice tool

A method for visualizing a three-dimensional volume for use in a virtual reality environment is performed by uploading two-dimensional images for evaluation, creating planar depictions of the two-dimensional images, and using thresholds to determine whether voxels should be drawn. A voxel volume is created from the planar depictions and voxels. A user defines a plane to be used for slicing the voxel volume, and sets values for the plane location and plane normal. The slice plane is placed within the voxel volume and defines a desired remaining portion of the voxel volume to be displayed. Everything outside the desired remaining portion of the voxel volume is left undrawn, and the remaining portion is displayed.
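The per-voxel slicing test can be sketched as a signed-distance check against the user-defined plane. Function and parameter names are illustrative assumptions; which side of the plane is kept is a convention choice.

```python
def keep_voxel(voxel, plane_point, plane_normal):
    """Signed-distance test against the slice plane: a voxel is drawn only
    when dot(voxel - plane_point, plane_normal) >= 0, i.e. it lies on the
    kept side of the plane (or exactly on it)."""
    d = sum((v - p) * n for v, p, n in zip(voxel, plane_point, plane_normal))
    return d >= 0

# Horizontal slice plane at z = 5, keeping everything above it:
plane_point, plane_normal = (0, 0, 5), (0, 0, 1)
print(keep_voxel((1, 2, 7), plane_point, plane_normal))  # True
print(keep_voxel((1, 2, 3), plane_point, plane_normal))  # False
```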

LIGHT FIELD DISPLAY METROLOGY

Examples of a light field metrology system for use with a display are disclosed. The light field metrology may capture images of a projected light field, and determine focus depths (or lateral focus positions) for various regions of the light field using the captured images. The determined focus depths (or lateral positions) may then be compared with intended focus depths (or lateral positions), to quantify the imperfections of the display. Based on the measured imperfections, an appropriate error correction may be performed on the light field to correct for the measured imperfections. The display can be an optical display element in a head mounted display, for example, an optical display element capable of generating multiple depth planes or a light field display.
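The compare-and-correct step described above can be sketched per display region: measured focus depths are compared against intended depths to quantify imperfection, and the targets are pre-compensated accordingly. The simple additive correction model and all names below are assumptions for illustration.

```python
def depth_errors(measured, intended):
    """Per-region focus-depth error (measured - intended), same units."""
    return {r: measured[r] - intended[r] for r in intended}

def corrected_targets(intended, errors):
    """Pre-compensate: shift each region's target depth against its
    measured error so the displayed depth lands closer to the intent."""
    return {r: intended[r] - errors[r] for r in intended}

intended = {"center": 2.0, "edge": 2.0}
measured = {"center": 2.0, "edge": 2.5}   # edge region focuses too far away
err = depth_errors(measured, intended)
print(corrected_targets(intended, err))   # {'center': 2.0, 'edge': 1.5}
```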

Depth based foveated rendering for display systems

Methods and systems for depth-based foveated rendering in display systems are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include monitoring eye orientations of a user of a display system based on detected sensor information. A fixation point is determined based on the eye orientations, the fixation point representing a three-dimensional location with respect to a field of view. Location information of virtual objects to present is obtained, the location information indicating three-dimensional positions of the virtual objects. The resolution of at least one virtual object is adjusted based on its proximity to the fixation point. The virtual objects are presented to the user by the display system, with the at least one virtual object rendered according to the adjusted resolution.
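The proximity-based resolution adjustment can be sketched as a scale factor that falls off with an object's 3-D distance from the fixation point. The linear falloff, its constants, and all names are illustrative assumptions, not the claimed method.

```python
import math

def render_scale(obj_pos, fixation, full_radius=0.5, min_scale=0.25):
    """Return a resolution scale in [min_scale, 1.0]: full resolution for
    objects within full_radius of the 3-D fixation point, falling off
    linearly with distance beyond it."""
    d = math.dist(obj_pos, fixation)
    if d <= full_radius:
        return 1.0
    return max(min_scale, 1.0 - (d - full_radius))

fix = (0.0, 0.0, 2.0)
print(render_scale((0.1, 0.0, 2.0), fix))  # 1.0  (at the fixated depth)
print(render_scale((0.0, 0.0, 5.0), fix))  # 0.25 (far behind fixation)
```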