Patent classifications
G06F3/013
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to an embodiment of the present technology includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
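As an illustration only (not taken from the abstract), the registration decision might be sketched as follows. The angular threshold, the vector representation, and the rule that exactly one candidate object makes the correction unambiguous are all assumptions chosen for the sketch; the abstract only requires that registration depend on a parameter related to the objects in the angular range.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def decide_registration(estimation_vector, object_dirs, max_angle_deg=5.0):
    """Return (correction, register) for one gaze sample.

    correction: vector from the estimated gaze to the single candidate object
    register:   True only when exactly one object lies inside the angular
                range, so the correction is unambiguous (one possible
                "parameter related to the at least one object").
    """
    candidates = [d for d in object_dirs
                  if angle_between(estimation_vector, d) <= max_angle_deg]
    if len(candidates) != 1:
        return None, False          # ambiguous or empty: do not register
    target = candidates[0]
    correction = tuple(t - e for t, e in zip(target, estimation_vector))
    return correction, True
```

A calibration loop would accumulate the (estimation_vector, correction) pairs that pass this gate into the data store.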
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
The present technology relates to an information processing device and an information processing method that allow users at remote locations to each gain a deeper grasp of the condition of the space where the other party is present. Provided is an information processing device including a control unit, wherein, between a first space in which a first imaging device and a first display device are installed and a second space in which a second imaging device and a second display device are installed, when a captured image captured by the imaging device in one space is displayed in real time by the display device in the other space, the control unit performs control to present a state of the second space in an ineffective region of the display region of the first display device, the ineffective region being the part of that display region excluding an effective region in which the captured image captured by the second imaging device is displayed. The present technology can be applied to, for example, a video communication system.
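As a sketch only, the split of the first display into an effective region and a surrounding ineffective region could be computed as a letterbox/pillarbox layout that preserves the captured image's aspect ratio. The centering and the rectangle representation are assumptions, not details from the abstract.

```python
def split_display(disp_w, disp_h, cap_w, cap_h):
    """Split a display of disp_w x disp_h pixels into:
    - an effective region (x, y, w, h) where the captured image of
      cap_w x cap_h is shown centered at its native aspect ratio, and
    - a list of ineffective rectangles (the remainder), which could
      carry additional state of the remote space.
    """
    scale = min(disp_w / cap_w, disp_h / cap_h)
    eff_w, eff_h = int(cap_w * scale), int(cap_h * scale)
    x0, y0 = (disp_w - eff_w) // 2, (disp_h - eff_h) // 2
    effective = (x0, y0, eff_w, eff_h)
    ineffective = []
    if x0 > 0:  # left/right pillars
        ineffective += [(0, 0, x0, disp_h),
                        (x0 + eff_w, 0, disp_w - x0 - eff_w, disp_h)]
    if y0 > 0:  # top/bottom bars
        ineffective += [(0, 0, disp_w, y0),
                        (0, y0 + eff_h, disp_w, disp_h - y0 - eff_h)]
    return effective, ineffective
```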
Light field near-eye display and method thereof for generating virtual reality images
A method for generating virtual reality images, used in a light field near-eye display, includes the steps of: shifting a display image according to at least one change vector of a plurality of eye movement parameters; and calculating a compensation mask according to a simulated image and superimposing the compensation mask on a target image to generate a superimposed target image, wherein the brightness distributions of the simulated image and the compensation mask are opposite to each other. The light field near-eye display is also provided. In this way, the light field near-eye display for generating virtual reality images and the method thereof can improve the uniformity of the image and expand the eye box size.
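One way to realize "opposite brightness distributions" is to build the mask as the simulated image's deviation from its mean, negated, so that bright spots are attenuated and dim spots are boosted. This is a sketch under that interpretation; the `gain` knob and the mean-referenced mask are assumptions.

```python
import numpy as np

def compensate(target, simulated, gain=1.0):
    """Superimpose a compensation mask whose brightness distribution is
    opposite to the simulated image's non-uniformity.

    target, simulated: float arrays with values in [0, 1].
    Where `simulated` is brighter than average, the mask is negative
    (attenuates); where dimmer, it is positive (boosts), flattening the
    perceived image.
    """
    mean = simulated.mean()
    mask = gain * (mean - simulated)       # opposite distribution
    return np.clip(target + mask, 0.0, 1.0)
```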
NEAR-EYE DISPLAY DEVICE
The present invention relates to a near-eye display device. The near-eye display device includes a display, a first lens disposed in front of the display so as to be spaced apart from the display by a predetermined distance, a dynamic aperture adjustment element disposed adjacent to the first lens to dynamically control an aperture size of the first lens and a horizontal position of the aperture on a plane perpendicular to an optical axis, a main optics lens disposed to be spaced apart from the first lens by a predetermined distance, and a control system configured to control the dynamic aperture adjustment element.
AVATAR ANIMATION IN VIRTUAL CONFERENCING
According to a general aspect, a method can include receiving a photo of a virtual conference participant, and a depth map based on the photo, and generating a plurality of synthesized images based on the photo. The plurality of synthesized images can have respective simulated gaze directions of the virtual conference participant. The method can also include receiving, during a virtual conference, an indication of a current gaze direction of the virtual conference participant. The method can further include animating, in a display of the virtual conference, an avatar corresponding with the virtual conference participant. The avatar can be based on the photo. Animating the avatar can be based on the photo, the depth map and at least one synthesized image of the plurality of synthesized images, the at least one synthesized image corresponding with the current gaze direction.
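The selection of a synthesized image matching the current gaze direction could, for example, be a nearest-neighbour lookup over the simulated gaze directions. The (yaw, pitch) parameterization and the frame structure (a dict with a "gaze" tuple and an "image" payload) are assumptions for this sketch.

```python
def pick_frame(current_gaze, synthesized):
    """Return the pre-synthesized avatar frame whose simulated gaze
    direction is nearest to the current gaze, measured as squared
    (yaw, pitch) distance in degrees."""
    yaw, pitch = current_gaze
    return min(synthesized,
               key=lambda f: (f["gaze"][0] - yaw) ** 2
                             + (f["gaze"][1] - pitch) ** 2)
```

The animation step would then blend the photo, depth map, and the chosen frame each time a new gaze indication arrives.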
XR RENDERING FOR 3D AUDIO CONTENT AND AUDIO CODEC
A device includes a memory configured to store instructions and also includes one or more processors configured to execute the instructions to obtain audio data corresponding to a sound source and metadata indicative of a direction of the sound source. The one or more processors are configured to execute the instructions to obtain direction data indicating a viewing direction associated with a user of a playback device. The one or more processors are configured to execute the instructions to determine a resolution setting based on a similarity between the viewing direction and the direction of the sound source. The one or more processors are also configured to execute the instructions to process the audio data based on the resolution setting to generate processed audio data.
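A plausible (assumed) reading of "similarity between the viewing direction and the direction of the sound source" is the cosine between the two direction vectors, mapped to discrete resolution levels. The thresholds and the three-level scheme below are illustrative assumptions, not claim language.

```python
import math

def resolution_setting(view_dir, sound_dir, levels=(1, 2, 4)):
    """Pick a rendering-resolution level from the cosine similarity
    between the listener's viewing direction and the sound-source
    direction: on-axis sources get the highest level, sources behind
    the listener the lowest."""
    dot = sum(a * b for a, b in zip(view_dir, sound_dir))
    norm = (math.sqrt(sum(a * a for a in view_dir))
            * math.sqrt(sum(b * b for b in sound_dir)))
    cos_sim = dot / norm
    if cos_sim > 0.9:
        return levels[2]   # nearly on-axis: full resolution
    if cos_sim > 0.0:
        return levels[1]   # off-axis but in front
    return levels[0]       # behind the listener: lowest resolution
```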
METHOD AND DEVICE FOR LATENCY REDUCTION OF AN IMAGE PROCESSING PIPELINE
In some implementations, a method includes: determining a complexity value for first image data, associated with a physical environment, that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
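The core decision can be sketched as a per-frame gate: if the estimated setup time blows the frame budget, reuse the previous render instead of producing a fresh one. The linear time model, the function names, and the trivial averaging compositor are all assumptions made for illustration.

```python
def blend(virtual, image_data):
    """Trivial compositor stand-in: average corresponding pixel values."""
    return [(v + p) / 2 for v, p in zip(virtual, image_data)]

def composite_frame(complexity, render_fn, previous_render, image_data,
                    ms_per_unit=0.5, threshold_ms=8.0):
    """Composite one frame of the graphical environment.

    If the estimated composite setup time (assumed linear in the
    complexity value) exceeds the threshold, forgo a fresh render of
    the virtual content and reuse the previous render.
    """
    estimated_ms = complexity * ms_per_unit
    if estimated_ms > threshold_ms:
        virtual = previous_render          # forgo rendering, reuse
    else:
        virtual = render_fn()              # fresh render fits the budget
    return blend(virtual, image_data)
```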
VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY CONTROL METHOD AND COMPUTER-READABLE STORAGE MEDIUM
The appearance of content that guides the traveling path of a vehicle is improved. A display control ECU shifts a path line, which is expressed by path line information included in map information for navigation, in the vehicle transverse direction in accordance with the distance along the vehicle transverse direction between the position of the vehicle and the path line, and causes content to be displayed on a HUD at a corresponding position on the shifted path line.
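As a minimal sketch of the transverse shift, each path-line point could be moved toward the vehicle by a fraction of its lateral offset. The proportional `gain` and the x-coordinate-only representation are assumptions; the abstract only says the shift depends on the transverse distance.

```python
def shift_path_line(path_xs, vehicle_x, gain=0.5):
    """Shift each transverse (x) coordinate of the path line toward
    the vehicle by gain * (lateral offset), so content anchored to the
    shifted line lines up better with where the vehicle actually is."""
    return [x + gain * (vehicle_x - x) for x in path_xs]
```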
HOLOGRAPHIC DISPLAY SYSTEM
A display system for a vehicle includes a display unit that is mounted to the vehicle and is selectively operable in a first mode as a holographic display and in a second mode as a mirror. Holographic images may include rear view images obtained from a camera or computer generated graphics. Holographic images are displayed at a virtual image plane behind the display to reduce the accommodation required of the operator's eyes.
INTERNET OF THINGS CONFIGURATION USING EYE-BASED CONTROLS
In an approach to an Internet of Things configuration using eye-based controls, one or more computer processors receive an initiation of eye control of one or more computing devices from a user. One or more computer processors identify an eye gaze direction of the user. Based on the identified eye gaze direction, one or more computer processors determine one or more target devices of the one or more computing devices. One or more computer processors determine one or more activities associated with the one or more target devices. One or more computer processors determine one or more eye control commands associated with the one or more activities. One or more computer processors display the one or more eye control commands associated with the one or more activities in the field of view of the user.
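The step of determining target devices from the identified gaze direction could be implemented as a cone test: a device is a target when it lies within some angular range of the gaze ray. The cone half-angle, the flat name-to-position map, and the 3D vector representation are assumptions for this sketch.

```python
import math

def target_devices(gaze_origin, gaze_dir, devices, max_angle_deg=10.0):
    """Return the names of devices whose position falls inside an
    angular cone around the user's gaze direction.

    devices: mapping of device name -> (x, y, z) position.
    """
    hits = []
    for name, pos in devices.items():
        to_dev = tuple(p - o for p, o in zip(pos, gaze_origin))
        dot = sum(a * b for a, b in zip(gaze_dir, to_dev))
        norm = (math.sqrt(sum(a * a for a in gaze_dir))
                * math.sqrt(sum(a * a for a in to_dev)))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if angle <= max_angle_deg:
            hits.append(name)
    return hits
```

The remaining steps, looking up the activities and eye control commands for each hit and displaying them in the user's field of view, would consume this list.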