H04N2213/006

SYSTEM AND METHOD FOR DETERMINING DIRECTIONALITY OF IMAGERY USING HEAD TRACKING
20230231983 · 2023-07-20

There is provided a system and method for reinstating directionality of onscreen displays of three-dimensional (3D) imagery using sensor data capturing eye location of a user. The method can include: receiving the sensor data capturing the eye location of the user; tracking the location of the eyes of the user relative to a screen using the captured sensor data; determining an updated rendering of the onscreen imagery using off-axis projective geometry based on the tracked location of the eyes of the user to simulate an angled viewpoint of the onscreen imagery from the perspective of the location of the user; and outputting the updated rendering of the onscreen imagery on a display screen.
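
The off-axis projective geometry described above can be sketched as an asymmetric view frustum whose near-plane bounds are derived by similar triangles from the screen rectangle as seen from the tracked eye position. This is a minimal illustration, not the patented method; the coordinate convention (metres, screen-centred, x right, y up, z toward the viewer) and all parameter values are assumptions.

```python
def off_axis_projection(eye, screen_w, screen_h, near, far):
    """Asymmetric (off-axis) frustum matrix for an eye at `eye`, given a
    screen of size screen_w x screen_h centred at the origin.
    Returns an OpenGL-style 4x4 projection matrix as nested lists."""
    ex, ey, ez = eye
    # Project the screen edges onto the near plane from the eye position.
    left = (-screen_w / 2 - ex) * near / ez
    right = (screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top = (screen_h / 2 - ey) * near / ez
    # Standard asymmetric-frustum projection matrix.
    return [
        [2 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# Eye 60 cm in front of a 0.5 m x 0.3 m screen, offset 10 cm to the right:
# the frustum skews so the rendering simulates the angled viewpoint.
P = off_axis_projection((0.10, 0.0, 0.60), 0.5, 0.3, 0.1, 100.0)
```

When the tracked eye is centred on the screen, the matrix reduces to an ordinary symmetric perspective projection.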

Multifocal plane based method to produce stereoscopic viewpoints in a DIBR system (MFP-DIBR)
11477434 · 2022-10-18

Some embodiments of an example method may include receiving an input image with depth information; mapping the input image to a set of focal plane images; orienting the set of focal plane images using head orientation information to provide stereo disparity between left and right eyes; and displaying the oriented set of focal plane images. Some embodiments of another example method may include: receiving a description of three-dimensional (3D) content; receiving, from a tracker, information indicating motion of a viewer relative to a real-world environment; responsive to receiving the information indicating motion of the viewer, synthesizing motion parallax by altering multi-focal planes of the 3D content; and rendering an image to the multi-focal plane display using the altered multi-focal plane rendering.
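
The mapping and disparity steps above can be sketched as follows: each pixel of an image-with-depth input is assigned to the nearest of a small set of focal planes, and stereo disparity is then synthesized by shifting each plane laterally in proportion to eye offset over plane depth. This is a simplified sketch under a pinhole approximation; real MFP decompositions blend pixels between adjacent planes, and all names here are illustrative.

```python
def to_focal_planes(image, depth, plane_depths):
    """Split an image-plus-depth input into focal plane images by assigning
    each pixel to the nearest focal plane (nearest-plane quantization)."""
    planes = [[[0.0] * len(image[0]) for _ in image] for _ in plane_depths]
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            k = min(range(len(plane_depths)),
                    key=lambda i: abs(plane_depths[i] - d))
            planes[k][y][x] = image[y][x]
    return planes

def stereo_shift(planes, plane_depths, eye_offset, focal=1.0):
    """Synthesize disparity: each plane is shifted by eye_offset * focal / depth,
    so near planes move more than far ones, as with head-motion parallax."""
    shifts = [eye_offset * focal / d for d in plane_depths]
    return list(zip(planes, shifts))
```

Rendering the shifted planes for each eye yields the stereo pair; re-running `stereo_shift` as the tracker reports viewer motion yields motion parallax.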

IMAGE PROCESSING APPARATUS AND METHOD, AND PROGRAM
20230156170 · 2023-05-18

The present technology relates to an image processing apparatus and method and a program that enable natural representation of light in an image in accordance with a viewpoint. The image processing apparatus calculates information indicating a change in a light source region between an input image and a viewpoint-converted image that is obtained by performing viewpoint conversion on the input image on the basis of a specified viewpoint, and causes a change in representation of light in the viewpoint-converted image on the basis of the calculated information indicating a change in the light source region. The present technology can be applied to an image display system that generates a pseudo stereoscopic image with motion parallax from one image.
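
A toy sketch of the light-source-region comparison: threshold each image into a bright-pixel mask, then measure how much of the light region moved between the input image and its viewpoint-converted counterpart. The threshold and the change metric are illustrative assumptions, not the patented calculation.

```python
def light_region(img, thresh=0.9):
    """Binary mask of bright (light-source) pixels; threshold is illustrative."""
    return [[1 if p >= thresh else 0 for p in row] for row in img]

def light_region_change(src, converted, thresh=0.9):
    """Hypothetical change metric: count of pixels whose light-region
    membership differs between the two images, normalized by the size of
    the light region in the input image."""
    a, b = light_region(src, thresh), light_region(converted, thresh)
    moved = sum(ax != bx for ra, rb in zip(a, b) for ax, bx in zip(ra, rb))
    total = max(1, sum(sum(row) for row in a))
    return moved / total
```

A downstream renderer could scale a glare or bloom effect by this value to change the representation of light with the viewpoint.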

ADAPTIVE STREAMING OF AN IMMERSIVE VIDEO SCENE
20170374411 · 2017-12-28

A client configured for retrieving a video data representation of an immersive video scene streamed by a server using a streaming protocol, wherein the server is configured to provide a plurality of streams to the client, each stream comprising a portion of the immersive video scene. The client comprises: a sending interface; a reception interface; a viewing direction receiving unit; and a stream selecting unit. The sending interface is configured to transmit a streaming request for streaming one or more selected streams as the video data representation of the immersive video scene, and receives from the stream selecting unit selected stream information identifying the one or more selected streams, wherein the selected stream information is created by the stream selecting unit based on the viewing direction of the user of the client and on the manifest.
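
The stream selecting unit's role can be sketched as picking, from the manifest, the stream whose coverage is angularly closest to the user's viewing direction. The manifest schema (a list of entries with a centre yaw per stream) is an assumption for illustration.

```python
def select_stream(manifest, view_yaw_deg):
    """Return the manifest entry whose centre yaw is angularly closest to
    the user's current viewing direction (yaw in degrees)."""
    def ang_dist(a, b):
        # Shortest angular distance on a 360-degree circle.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(manifest, key=lambda s: ang_dist(s["center_yaw"], view_yaw_deg))

# Hypothetical manifest: four streams, each covering one quadrant of the scene.
manifest = [{"id": "front", "center_yaw": 0.0},
            {"id": "right", "center_yaw": 90.0},
            {"id": "back", "center_yaw": 180.0},
            {"id": "left", "center_yaw": 270.0}]
```

The selected stream's identifier would then go into the streaming request sent over the sending interface.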

Graphical system with enhanced stereopsis
09848186 · 2017-12-19

A computer system that provides stereoscopic images is described. During operation, the computer system generates the stereoscopic images at a location corresponding to a viewing plane based on data having a discrete spatial resolution, where the stereoscopic images include image parallax. Then, the computer system scales objects depicted in the stereoscopic images so that the depth acuity associated with the image parallax is increased, where the scaling is based on the spatial resolution and a viewing geometry associated with a display. For example, the viewing geometry may include the distance between an individual viewing the stereoscopic images and the display. Alternatively, the viewing geometry may include a focal point of the individual. Next, the computer system provides the resulting stereoscopic images to the display. In this way, the computer system may optimize the depth acuity for data having discrete sampling.
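
One way to realize the depth-acuity scaling is to compute the on-screen parallax produced by the smallest depth step in the discretely sampled data and scale depths so that this step yields at least one pixel of parallax. This is a hypothetical realization under simplified stereo geometry; the function names and parameter values are assumptions.

```python
def disparity_pixels(depth_m, ipd_m, view_dist_m, px_per_m):
    """On-screen parallax (pixels) for a point depth_m behind the screen,
    seen from view_dist_m away with interpupillary distance ipd_m, on a
    display with px_per_m pixels per metre. Simplified stereo geometry."""
    return ipd_m * depth_m / (view_dist_m + depth_m) * px_per_m

def depth_scale_factor(min_depth_step_m, ipd_m, view_dist_m, px_per_m):
    """Scale factor applied to scene depths so the smallest representable
    depth step produces at least one pixel of parallax (never shrinks)."""
    d = disparity_pixels(min_depth_step_m, ipd_m, view_dist_m, px_per_m)
    return max(1.0, 1.0 / d)

# A 1 mm depth step viewed from 60 cm on a ~100 dpmm display is sub-pixel,
# so depths are stretched until the step becomes visually resolvable.
scale = depth_scale_factor(0.001, 0.065, 0.6, 4000.0)
```
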

Display apparatus

A display apparatus includes a screen formed in an arcuate shape centering on a center axis and a projection device configured to project, along projecting directions orthogonal to the center axis and different from one another, images corresponding to the projecting directions on the inner circumferential surface or the outer circumferential surface of the screen. The screen includes a retroreflective layer having a reflection surface directed to the projection device and a diffusion layer arranged on the projection device side with respect to the retroreflective layer and configured to diffuse, when transmitting light made incident from the retroreflective layer, the light wider in a first direction along the center axis than in a second direction, which is a circumferential direction centering on the center axis.

Display device and display method for three dimensional displaying
09749612 · 2017-08-29

The present disclosure provides a display device, which may include: a detection unit configured to detect position information with respect to the viewer's eyes; a processing unit configured to obtain the position information from the detection unit and to obtain a screen display image corresponding to the visual angle of the eyes, based on the position information and parameters of a stereo image to be displayed; and a display unit configured to obtain the screen display image from the processing unit and display it.
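
The processing unit's first step, deriving a visual angle from the detected eye position, can be sketched as two arctangents against the screen normal. The coordinate convention (screen-centred, z toward the viewer) is an assumption for illustration.

```python
import math

def visual_angle(eye_xyz, screen_center=(0.0, 0.0, 0.0)):
    """Horizontal and vertical viewing angles (degrees) of the tracked eyes
    relative to the screen normal; the image to display is then chosen or
    rendered for this angle."""
    dx = eye_xyz[0] - screen_center[0]
    dy = eye_xyz[1] - screen_center[1]
    dz = eye_xyz[2] - screen_center[2]
    return math.degrees(math.atan2(dx, dz)), math.degrees(math.atan2(dy, dz))

# Eyes 60 cm out and 60 cm to the right: 45 degrees off-axis horizontally.
h_deg, v_deg = visual_angle((0.6, 0.0, 0.6))
```
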

Microscope video processing device and medical microscope system

The present invention is intended to convert a video input from a surgical microscope into a three-dimensional video. A microscope video processing device 100 includes: a microscope video acquisition unit that acquires a microscope video output from a surgical microscope 200; a video conversion unit that converts the acquired microscope video into a three-dimensional video; a surgical instrument position determination unit that determines the position of a surgical instrument in the converted three-dimensional video; a distance calculation unit that calculates the distance between a patient's preset surgery target region and the determined position of the surgical instrument; and a video output unit that outputs to a display unit an output video in which distance information indicative of the calculated distance is displayed in the three-dimensional video.
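
The distance calculation unit reduces to a Euclidean distance between two 3D points, and the video output unit to formatting that distance into an overlay string. A minimal sketch, assuming millimetre coordinates and an illustrative proximity-warning threshold:

```python
import math

def instrument_target_distance(instrument_xyz, target_xyz):
    """Euclidean distance (same units as the inputs, e.g. millimetres)
    between the instrument tip and the preset surgery target region."""
    return math.dist(instrument_xyz, target_xyz)

def overlay_text(distance_mm, warn_mm=5.0):
    """Distance string for the on-screen overlay; appends a warning marker
    when the instrument is within warn_mm of the target (threshold assumed)."""
    msg = f"{distance_mm:.1f} mm"
    return msg + " !" if distance_mm < warn_mm else msg
```
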

TRUNCATED SQUARE PYRAMID GEOMETRY AND FRAME PACKING STRUCTURE FOR REPRESENTING VIRTUAL REALITY VIDEO CONTENT
20170280126 · 2017-09-28

Techniques and systems are described for mapping 360-degree video data to a truncated square pyramid shape. A 360-degree video frame can include 360-degrees' worth of pixel data, and thus be spherical in shape. By mapping the spherical video data to the planes provided by a truncated square pyramid, the total size of the 360-degree video frame can be reduced. The planes of the truncated square pyramid can be oriented such that the base of the truncated square pyramid represents a front view and the top of the truncated square pyramid represents a back view. In this way, the front view can be captured at full resolution, the back view can be captured at reduced resolution, and the left, right, up, and down views can be captured at decreasing resolutions. Frame packing structures can also be defined for 360-degree video data that has been mapped to a truncated square pyramid shape.
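
The face assignment can be sketched by classifying a unit view direction onto the dominant axis, with a per-face resolution scale reflecting the geometry above: full resolution at the front base, lowest at the back top, intermediate on the four sides. The scale values here are illustrative assumptions, not the patent's packing parameters.

```python
def tsp_face(direction):
    """Classify a view direction (x right, y up, z forward) onto a face of
    the truncated square pyramid; returns (face_name, resolution_scale)."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        # Front base at full resolution; back top at the lowest resolution.
        return ("front", 1.0) if z > 0 else ("back", 0.25)
    if ax >= ay:
        return ("right", 0.5) if x > 0 else ("left", 0.5)
    return ("up", 0.5) if y > 0 else ("down", 0.5)
```

A frame packer would then allocate each face a rectangle sized by its resolution scale.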

Methods and system for simulated 3D videoconferencing
09769422 · 2017-09-19

A system and method for manipulating images in a videoconferencing session provides users with a 3-D-like view of one or more presented sites, without the need for 3-D equipment. A plurality of cameras may record a room at a transmitting endpoint, and the receiving endpoint may select one of the received video streams based upon a point of view of a conferee at the receiving endpoint. The conferee at the receiving endpoint will thus experience a 3-D-like view of the presented site.
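
The receiving endpoint's selection step can be sketched as choosing, among the cameras arrayed around the far-end room, the stream whose capture angle best matches the near-end conferee's tracked point of view. Representing cameras by their yaw angles is an assumption for illustration.

```python
def pick_camera_stream(camera_yaws_deg, conferee_yaw_deg):
    """Return the index of the received video stream whose camera yaw is
    closest to the conferee's point of view; switching streams as the
    conferee moves yields the 3D-like effect without 3D equipment."""
    return min(range(len(camera_yaws_deg)),
               key=lambda i: abs(camera_yaws_deg[i] - conferee_yaw_deg))

# Three cameras at -30, 0 and +30 degrees; a conferee leaning to +25 degrees
# is served the rightmost camera's stream.
idx = pick_camera_stream([-30.0, 0.0, 30.0], 25.0)
```
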