Patent classifications
H04N13/122
AERIAL IMAGE DISPLAY DEVICE
An aerial image display device includes an image display unit to display an image; an aerial image formation optical system that causes diffuse light emitted from the image display unit to form an image again in a different space by reflecting the light multiple times and allowing it to pass through; and processing circuitry to acquire viewpoint position information on an observer viewing the point where the aerial image formation optical system re-forms the image, and to control the image from the image display unit depending on the angle between a straight line connecting an end point of a retroreflective sheet to an eye of the observer and a straight line extending from the eye of the observer to the point on the retroreflective sheet directly in front of it.
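The control angle described above can be computed from plain vector geometry. The sketch below is a hypothetical illustration, not the patent's implementation: `eye`, `sheet_end`, and `sheet_front_point` are assumed names for the observer's eye position, the sheet's end point, and the point on the sheet directly in front of the eye.

```python
import numpy as np

def viewing_angle_deg(eye, sheet_end, sheet_front_point):
    """Angle between the line from the eye to the retroreflective
    sheet's end point and the line from the eye to the point on the
    sheet directly in front of it (hypothetical geometry sketch)."""
    v1 = np.asarray(sheet_end) - np.asarray(eye)          # eye -> sheet end point
    v2 = np.asarray(sheet_front_point) - np.asarray(eye)  # eye -> sheet, straight ahead
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

The image shown on the display unit would then be selected or warped as a function of this angle.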
METHOD OF ASYNCHRONOUS REPROJECTION OF AN IMAGE OF A 3D SCENE
Invention relates to processing images of 3D scenes and to a method of asynchronous reprojection in a system of virtual or augmented reality, including (1) receiving color data and depth data of an initial 3D scene image for view A; (2) determining visual features of the initial 3D scene image and weights of the visual features, based on the color data, and determining depth of the visual features of the 3D scene image, based on the depth data; (3) generating a low polygonal grid for reprojection; (4) performing reprojection of the initial 3D scene image for view B different from view A by displacement of low polygonal grid nodes depending on the weights and depths of the image visual features. The method assures a high 3D scene frame rate, reduces image distortions at item borders during reprojection, and decreases data volume for 3D scene image transmitted via a communication channel.
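Steps (1)–(4) above can be sketched in a few lines of NumPy. This is a simplified stand-in, not the patented method: gradient magnitude is assumed as the visual-feature weight, the "low polygonal grid" is a coarse node lattice, and node displacement is a horizontal disparity inversely proportional to the feature-weighted depth at each node.

```python
import numpy as np

def asynchronous_reprojection(color, depth, view_shift, grid_step=16):
    """Sketch of steps (1)-(4) under the assumptions stated above.
    color: (H, W, 3) image for view A; depth: (H, W) depth map;
    view_shift: baseline from view A to view B."""
    H, W = depth.shape
    # (2) visual features and weights: luminance gradient magnitude
    lum = color.mean(axis=2)
    gy, gx = np.gradient(lum)
    weight = np.sqrt(gx**2 + gy**2)
    # (3) low polygonal grid: one node per grid_step x grid_step cell
    ys = np.arange(0, H, grid_step)
    xs = np.arange(0, W, grid_step)
    nodes_y, nodes_x = np.meshgrid(ys, xs, indexing="ij")
    # feature-weighted depth per node
    node_depth = np.empty(nodes_y.shape)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            y1, x1 = min(y + grid_step, H), min(x + grid_step, W)
            w = weight[y:y1, x:x1] + 1e-6
            node_depth[i, j] = np.sum(depth[y:y1, x:x1] * w) / np.sum(w)
    # (4) displace nodes for view B: parallax ~ 1 / depth
    disparity = view_shift / np.maximum(node_depth, 1e-6)
    displaced_x = nodes_x + disparity
    return nodes_x, nodes_y, displaced_x, node_depth
```

Because only the grid nodes (not every pixel) are displaced and transmitted, nearer features shift more than distant ones while the data volume stays small.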
Information processing apparatus, information processing method, and storage medium
An information processing apparatus includes an acquisition unit configured to acquire an image from an image capturing apparatus that captures an image of a real space, an estimation unit configured to estimate a position or orientation of the image capturing apparatus in the real space, based on the image, a creation unit configured to create a map including at least one keyframe, a setting unit configured to set an observation space of a user, a generation unit configured to analyze a relationship between the observation space set by the setting unit and the map created by the creation unit, and generate a model representing the keyframe included in the map, and a control unit configured to cause a display unit to display an image based on the generated model combined with the captured image.
CAMERA, HEAD-UP DISPLAY SYSTEM, AND MOVABLE OBJECT
A camera includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of a head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display.
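The signal processor's step can be sketched as follows. This is a minimal illustration, not the patented circuit: `reduce_outside_eyebox`, the block-averaging scheme, and the rectangular eyebox representation are all assumptions, and a real camera would typically transmit the coarse blocks rather than re-expanding them.

```python
import numpy as np

def reduce_outside_eyebox(frame, eyebox, factor=4):
    """Keep full resolution inside the eyebox rectangle and replace
    everything else with factor x factor block averages, shrinking the
    data sent to the head-up display.
    frame: (H, W) grayscale image; eyebox: (top, left, bottom, right)."""
    H, W = frame.shape
    # coarse copy: average factor x factor blocks, then repeat back up
    Hc, Wc = H - H % factor, W - W % factor
    coarse = frame[:Hc, :Wc].reshape(Hc // factor, factor,
                                     Wc // factor, factor).mean(axis=(1, 3))
    low = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    out = frame.copy()
    out[:Hc, :Wc] = low
    # restore the original pixels inside the eyebox area
    t, l, b, r = eyebox
    out[t:b, l:r] = frame[t:b, l:r]
    return out
```

Only the region where the user's eye can actually see the virtual image retains full detail; the rest of the second image carries far fewer distinct values.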
JOINT DEPTH PREDICTION FROM DUAL-CAMERAS AND DUAL-PIXELS
Example implementations relate to joint depth prediction from dual cameras and dual pixels. An example method may involve obtaining a first set of depth information representing a scene from a first source and a second set of depth information representing the scene from a second source. The method may further involve determining, using a neural network, a joint depth map that conveys respective depths for elements in the scene. The neural network may determine the joint depth map based on a combination of the first set of depth information and the second set of depth information. In addition, the method may involve modifying an image representing the scene based on the joint depth map. For example, background portions of the image may be partially blurred based on the joint depth map.
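The two stages of the example method can be sketched as below. The patent describes a neural network for the fusion step; as a hedged stand-in, this sketch uses per-pixel confidence weighting instead, and the confidence maps, function names, and box-blur portrait effect are all assumptions for illustration.

```python
import numpy as np

def joint_depth_map(depth_dual_cam, conf_dual_cam,
                    depth_dual_pixel, conf_dual_pixel):
    """Combine two (H, W) depth estimates by per-pixel confidence
    weighting (simple stand-in for the learned fusion network)."""
    w1 = conf_dual_cam / (conf_dual_cam + conf_dual_pixel + 1e-9)
    return w1 * depth_dual_cam + (1.0 - w1) * depth_dual_pixel

def blur_background(image, depth, focus_depth, threshold, ksize=5):
    """Box-blur pixels farther than focus_depth + threshold,
    approximating the partial background blur in the abstract."""
    H, W = image.shape
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image)
    for dy in range(ksize):          # accumulate the ksize x ksize window
        for dx in range(ksize):
            blurred += padded[dy:dy + H, dx:dx + W]
    blurred /= ksize * ksize
    mask = depth > focus_depth + threshold
    return np.where(mask, blurred, image)
```

A portrait-mode pipeline would feed the joint depth map from the first function into the second to blur only the background portions of the image.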
Binocular display with digital light path length modulation
A near-eye display system comprising an image source, a modulation stack, and an imaging assembly. The modulation stack, in one embodiment, comprises one or more digital light path length modulators.