H04N13/344

Method and device for carrying out eye gaze mapping

The invention relates to a device and a method for performing an eye gaze mapping (M), in which at least one point of vision (B) and/or a viewing direction of at least one person (10) in relation to at least one scene recording (S) of a scene (12) viewed by the at least one person (10) is mapped onto a reference (R). At least a part of an algorithm (A1, A2, A3) for performing the eye gaze mapping (M) is thereby selected from multiple predetermined algorithms (A1, A2, A3) as a function of at least one parameter (P), and the eye gaze mapping (M) is performed on the basis of the at least one part of the algorithm (A1, A2, A3).
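The core idea, selecting part of the mapping algorithm from several predetermined algorithms as a function of a parameter, can be sketched as follows. This is an illustrative toy, not the patented method: the algorithm names, the parameter values, and the stand-in transforms are all hypothetical.

```python
# Toy sketch: pick a gaze-mapping algorithm based on a scene parameter.
# The transforms below stand in for real mapping algorithms.

def homography_mapping(point, scene):
    # Hypothetical stand-in for a planar homography transfer.
    x, y = point
    return (x + scene["offset"], y + scene["offset"])

def feature_match_mapping(point, scene):
    # Hypothetical stand-in for feature-based mapping.
    x, y = point
    return (x * scene["scale"], y * scene["scale"])

def select_mapping_algorithm(parameter):
    """Select (at least part of) the mapping algorithm as a function of
    a parameter, e.g. how planar the viewed scene is."""
    algorithms = {
        "planar": homography_mapping,
        "general": feature_match_mapping,
    }
    return algorithms.get(parameter, feature_match_mapping)

def map_gaze(point, scene, parameter):
    """Map a gaze point from the scene recording onto the reference."""
    return select_mapping_algorithm(parameter)(point, scene)

print(map_gaze((10, 20), {"offset": 5, "scale": 2}, "planar"))   # (15, 25)
print(map_gaze((10, 20), {"offset": 5, "scale": 2}, "general"))  # (20, 40)
```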

Methods and apparatus for encoding, communicating and/or using images

Methods and apparatus for capturing, communicating and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but at a lower frame rate than is used for playback. Interpolation is applied to captured frames to generate interpolated frames. Captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames corresponds to a second frame playback rate which is higher than the image capture rate. The cameras operate at a higher image resolution, but a slower frame rate, than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with segments to be interpolated being selected based on motion and/or lens FOV information. A relatively small amount of interpolated frame data is communicated compared to captured frame data, for efficient bandwidth use.
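The capture-slow/play-fast scheme can be illustrated with a minimal sketch: captured frames are blended to synthesize in-between frames, and interpolated data is produced only for segments whose motion exceeds a threshold. The frame representation (flat lists of pixel values), the motion measure, and the threshold are toy assumptions, not the patented pipeline.

```python
# Toy sketch: double the playback frame rate by interpolating between
# captured frames, but only where there is enough motion to justify
# sending the extra (interpolated) frame data.

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Linearly blend two captured frames into an in-between frame."""
    return [a + t * (b - a) for a, b in zip(frame_a, frame_b)]

def motion(frame_a, frame_b):
    """Mean absolute pixel difference as a crude motion measure."""
    return sum(abs(b - a) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def build_playback_stream(captured, motion_threshold=1.0):
    """Interleave captured frames with interpolated ones where motion
    warrants it, yielding a higher playback rate than the capture rate."""
    stream = []
    for prev, nxt in zip(captured, captured[1:]):
        stream.append(prev)
        if motion(prev, nxt) > motion_threshold:
            stream.append(interpolate_frame(prev, nxt))
    stream.append(captured[-1])
    return stream
```

Static segments contribute no interpolated data, which is the bandwidth saving the abstract points to.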

Head mounted display apparatus

A head mounted display apparatus or the like faithfully expresses occlusion even in binocular vision in an AR display. A head mounted display apparatus 10 includes a lens 12, a lens 13, a camera 14, a camera 15, and a control processor 16. A CG image for a right eye is displayed on the lens 12, and a CG image for a left eye is displayed on the lens 13. The camera 14 captures an image for the right eye, and the camera 15 captures an image for the left eye. Based on the images captured by the cameras 14 and 15, the control processor 16 generates a CG image for the right eye in which occlusion as seen by the right eye is expressed and a CG image for the left eye in which occlusion as seen by the left eye is expressed, and projects the generated CG images onto the lenses 12 and 13. The center of the lens of the camera 14 is located at the same position as the center of the lens 12, and the center of the lens of the camera 15 is located at the same position as the center of the lens 13.
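The per-eye occlusion handling can be sketched as a depth test done separately for each eye: a CG pixel is shown only where the virtual object is nearer than the real surface that eye's camera sees. Depth maps here are toy 1-D lists; the actual device would derive them from the stereo camera pair aligned with the lens centers.

```python
# Toy sketch: per-eye occlusion compositing. Because each camera shares
# its lens's optical center, occlusion can legitimately differ per eye.

def compose_eye_view(real_depth, cg_depth, cg_pixels, background):
    """Return the displayed pixel row for one eye, with occlusion expressed:
    CG is drawn only where it is closer than the real scene."""
    out = []
    for rd, cd, cg, bg in zip(real_depth, cg_depth, cg_pixels, background):
        out.append(cg if cd < rd else bg)
    return out

# Same virtual object (depth 1.5), but the right eye sees a nearer real
# surface at the second pixel, so only the right eye's CG is occluded there.
right = compose_eye_view([2.0, 1.0], [1.5, 1.5], ["CG", "CG"], [".", "."])
left = compose_eye_view([2.0, 2.0], [1.5, 1.5], ["CG", "CG"], [".", "."])
```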

Electronic apparatus, control method for electronic apparatus, and non-transitory computer-readable storage medium
11558599 · 2023-01-17

An electronic apparatus comprising: a processor; and a memory storing a program which, when executed by the processor, causes the electronic apparatus to: acquire a VR image; read viewpoint information indicating a plurality of viewpoints with respect to the VR image; control a display device so that a part of the VR image corresponding to each viewpoint is automatically switched over in order and displayed on a screen, on the basis of the viewpoint information; and change the viewpoint so that a predetermined subject is included in the part of the VR image displayed on the screen.
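The viewpoint-switching behavior can be sketched as cycling through stored viewpoints and nudging each one just enough that the predetermined subject falls inside the displayed part of the image. Viewpoints are modeled here as yaw angles and the field of view is an assumed constant; both are hypothetical simplifications.

```python
# Toy sketch: switch through viewpoints in order, adjusting each so the
# predetermined subject stays inside the displayed field of view.

FIELD_OF_VIEW = 90  # degrees shown on screen (assumed value)

def adjust_for_subject(viewpoint, subject_yaw):
    """Shift a viewpoint the minimum amount needed to bring the subject
    inside the displayed part of the VR image."""
    half = FIELD_OF_VIEW / 2
    if abs(subject_yaw - viewpoint) <= half:
        return viewpoint  # subject already visible, keep viewpoint as-is
    # Recenter so the subject sits at the near edge of the field of view.
    return subject_yaw - half if subject_yaw > viewpoint else subject_yaw + half

def play_viewpoints(viewpoints, subject_yaw):
    """Display the viewpoints in order, each adjusted to include the subject."""
    return [adjust_for_subject(v, subject_yaw) for v in viewpoints]
```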

Apparatus equipped with depth control function for enabling augmented reality
11556007 · 2023-01-17

The present invention relates to an apparatus equipped with a depth of field control function for enabling augmented reality, which is capable of controlling the depth of field and focus while also preventing the image from being blurred by diffraction, by using a pseudo-pinhole effect. The apparatus includes: a display unit configured to generate a virtual image; a circular depth of field control unit configured to have a size in a range from 50 to 700 μm, and also configured to reflect the virtual image generated in the display unit, to increase the depth of field of the virtual image, and to then enable the virtual image to reach an eye of a user; and a frame part on or in which the display unit and the depth of field control unit are installed, configured to enable the user to wear the apparatus for enabling augmented reality.
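The 50-700 μm range reflects the classic pinhole trade-off: a smaller aperture deepens depth of field, but below a point diffraction dominates the blur. The textbook optimum d ≈ sqrt(2.44 · λ · L) balances geometric spread against diffraction blur; the wavelength and distance below are assumed example values, not the device's actual design numbers.

```python
# Toy calculation of the pinhole trade-off invoked by the abstract.
import math

def optimal_pinhole_diameter_um(wavelength_nm, distance_mm):
    """Pinhole diameter (micrometres) balancing geometric spread against
    the diffraction blur, via the classic d = sqrt(2.44 * lambda * L)."""
    wavelength_m = wavelength_nm * 1e-9
    distance_m = distance_mm * 1e-3
    return math.sqrt(2.44 * wavelength_m * distance_m) * 1e6

# Green light (550 nm) over an assumed ~20 mm optical path lands well
# inside the 50-700 um range the abstract specifies.
d = optimal_pinhole_diameter_um(550, 20)
print(round(d, 1))  # ~163.8 um
```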

SIMULATION SIGHTING BINOCULARS, AND SIMULATION SYSTEM AND METHODS
20230009683 · 2023-01-12

Simulation sighting binoculars include a camera, a screen, a pair of lenses which are arranged to face the screen, and electronic circuitry which is configured to: obtain a video frame from the camera; transmit, to a simulation platform, geolocation information and orientation information for the camera; receive simulation graphical elements and spatial positioning information for corresponding virtual objects; carry out a two-dimensional rendering of the virtual objects, in order to display them in a projection window E2; superimpose the two-dimensional rendering of the virtual objects in the projection window E2 and the video frame in a projection window E3; obtain a mixed-reality stereoscopic image using a pair of virtual cameras reproducing binocular vision which is adapted to the pair of lenses; and carry out, on the screen, a right-eye and left-eye display of the obtained mixed-reality stereoscopic image.
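The compositing order the circuitry carries out, 2-D rendering of virtual objects (window E2), superimposition over the camera frame (window E3), then a stereoscopic pair from two virtual cameras, can be sketched as below. Frames are toy dictionaries, and the opposite horizontal shift standing in for the virtual camera pair is a hypothetical simplification of real stereo rendering.

```python
# Toy sketch of the E2 -> E3 -> stereoscopic compositing pipeline.

def render_virtual_objects(objects):
    """E2: two-dimensional rendering of the positioned virtual objects."""
    return [{"name": o["name"], "x": o["x"]} for o in objects]

def superimpose(video_frame, rendered):
    """E3: overlay the rendered virtual objects on the camera video frame."""
    return {"frame": video_frame, "overlays": rendered}

def stereoscopic(composite, baseline_px=4):
    """Virtual camera pair: shift overlays oppositely for left/right eyes
    to produce the mixed-reality stereoscopic image."""
    def shift(dx):
        return {"frame": composite["frame"],
                "overlays": [{**o, "x": o["x"] + dx}
                             for o in composite["overlays"]]}
    return {"left": shift(-baseline_px // 2), "right": shift(baseline_px // 2)}

composite = superimpose("frame0", render_virtual_objects([{"name": "tank", "x": 10}]))
stereo = stereoscopic(composite)  # left/right views for the screen
```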

TELEPRESENCE THROUGH OTA VR BROADCAST STREAMS

Techniques are described for expanding and/or improving the Advanced Television Systems Committee (ATSC) 3.0 television protocol in robustly delivering the next generation broadcast television services. Telepresence is provided through over-the-air (OTA) virtual reality (VR) broadcast streams.