H04N13/366

DISPLAY APPARATUS, IMAGE GENERATION METHOD, AND PROGRAM
20230028976 · 2023-01-26 ·

[Object] To provide a display apparatus, an image generation method, and a program that are capable of displaying images such that an image displayed on a display unit and a scene outside the display apparatus appear to be continuous.

[Solving Means] The display apparatus includes a first image sensor, a first distance sensor, a second sensor, a display unit, and an image generation unit. The first image sensor is disposed on a first surface side of an apparatus main body. The first distance sensor is disposed on the first surface side. The second sensor is disposed on a second surface side opposite to the first surface side. The display unit is disposed on the second surface side. The image generation unit generates a display image to be displayed on the display unit, using a two-dimensional image of a subject and a distance image of the subject, the two-dimensional image being acquired by the first image sensor, the distance image being acquired by the first distance sensor, on the basis of three-dimensional position information of a viewpoint of a photographer, the three-dimensional position information being calculated on the basis of a sensing result acquired by the second sensor.
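As a rough illustration of the generation step described above, the sketch below back-projects the rear camera's RGB-D frame to a point cloud and re-projects it toward the photographer's tracked eye position, so the on-screen image can line up with the scene behind the device. This is a simplified pinhole-camera sketch; the function name, parameter names, and the use of the same intrinsics for the re-projection are our assumptions, not details from the patent.

```python
import numpy as np

def render_view_dependent_image(color, depth, fx, fy, cx, cy, eye_offset):
    """Sketch: re-project a rear-camera RGB-D frame toward the viewer's eye.
    `eye_offset` is the eye position (x, y, z) relative to the rear camera,
    in the same units as `depth`. Illustrative only."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a 3-D point with the pinhole model.
    z = depth.astype(np.float64)
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    # Shift the scene into the viewer's coordinate frame ...
    xs, ys, zs = x - eye_offset[0], y - eye_offset[1], z - eye_offset[2]
    valid = zs > 0
    safe_z = np.where(valid, zs, 1.0)
    # ... and re-project with the same intrinsics (a stand-in for the
    # display's true projection), splatting far points first so near
    # points overwrite them.
    u2 = np.clip((xs * fx / safe_z + cx).astype(int), 0, w - 1)
    v2 = np.clip((ys * fy / safe_z + cy).astype(int), 0, h - 1)
    out = np.zeros_like(color)
    for i in np.argsort(-zs.ravel()):
        vi, ui = divmod(int(i), w)
        if valid[vi, ui]:
            out[v2[vi, ui], u2[vi, ui]] = color[vi, ui]
    return out
```

With a zero eye offset the re-projection reduces to the identity, which gives a simple sanity check.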

Anamorphic display device
11707671 · 2023-07-25 ·

An anamorphic display device is provided. The anamorphic device includes a secondary display configured to be detachably coupled to a computing device including a primary display; and a non-transitory device operatively coupled to the primary and secondary displays and having instructions thereon that are configured, when executed, to render an anamorphic image on at least one of the primary and secondary displays so as to create, in combination, a three-dimensional effect from a point of view facing the primary and secondary displays.

Enabling motion parallax with multilayer 360-degree video

Systems and methods are described for simulating motion parallax in 360-degree video. In an exemplary embodiment for producing video content, a method includes obtaining a source video, based on information received from a client device, determining a selected number of depth layers, producing, from the source video, a plurality of depth layer videos corresponding to the selected number of depth layers, wherein each depth layer video is associated with at least one respective depth value, and wherein each depth layer video includes regions of the source video having depth values corresponding to the respective associated depth value, and sending the plurality of depth layer videos to the client device.
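The layer-production step above can be sketched as binning a frame's pixels by depth: each layer video keeps only the regions whose depth falls in that layer's bin, with the rest transparent, and is associated with a representative depth value. The equal-width bins, RGBA output, and all names below are our assumptions, not claim language.

```python
import numpy as np

def split_into_depth_layers(frame, depth, num_layers):
    """Sketch: split one RGB frame into `num_layers` RGBA layer images,
    each keeping only the pixels whose depth falls in its bin."""
    edges = np.linspace(depth.min(), depth.max(), num_layers + 1)
    layers = []
    for i in range(num_layers):
        lo, hi = edges[i], edges[i + 1]
        # Include the upper edge only for the last (farthest) bin.
        mask = (depth >= lo) & ((depth < hi) if i < num_layers - 1 else (depth <= hi))
        rgba = np.zeros(frame.shape[:2] + (4,), dtype=frame.dtype)
        rgba[..., :3] = np.where(mask[..., None], frame, 0)
        rgba[..., 3] = mask * 255  # alpha: opaque where the layer has content
        # Representative depth value associated with this layer video.
        layers.append({"depth": float((lo + hi) / 2), "rgba": rgba})
    return layers
```

Compositing the layers back-to-front at slightly shifted offsets, driven by viewer motion, is what yields the parallax effect.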

METHOD AND DEVICE FOR POSITIONING INTERNET OF THINGS DEVICES

A method for positioning an internet of things (IoT) device with AR glasses is provided. The method includes: acquiring an indoor map; determining an AR-position of the AR glasses on the indoor map and a viewing direction of the AR glasses based on initial information of the AR glasses; receiving from the AR glasses tracking information concerning the IoT device to be positioned; and determining an IoT-position of the IoT device to be positioned from the AR-position and the viewing direction in combination with the tracking information of the AR glasses.
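In the simplest case, the final determining step reduces to dead reckoning on the indoor map: from the glasses' own position, their heading, and a tracked distance to the device along that heading, the device's map position follows directly. The flat 2-D map model and all parameter names below are our assumptions for illustration.

```python
import math

def locate_iot_device(ar_position, viewing_direction_deg, distance):
    """Sketch: 2-D position of an IoT device seen straight ahead of the
    AR glasses, given the glasses' map position, heading (0 deg = +x axis,
    counterclockwise), and tracked distance to the device."""
    x, y = ar_position
    theta = math.radians(viewing_direction_deg)
    return (x + distance * math.cos(theta), y + distance * math.sin(theta))
```

A richer implementation would fuse bearing, distance, and map constraints rather than assume the device lies exactly on the viewing ray.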

VIRTUAL REALITY FILM HYBRIDIZATION
20230231982 · 2023-07-20 ·

Described are methods, systems, and media for immersive content. Also described herein are camera assemblies for capturing unidirectional immersive three-dimensional images and video with wide ranges of focal lengths.

SYSTEM AND METHOD FOR DETERMINING DIRECTIONALITY OF IMAGERY USING HEAD TRACKING
20230231983 · 2023-07-20 ·

There is provided a system and method for reinstating directionality of onscreen displays of three-dimensional (3D) imagery using sensor data capturing eye location of a user. The method can include: receiving the sensor data capturing the eye location of the user; tracking the location of the eyes of the user relative to a screen using the captured sensor data; determining an updated rendering of the onscreen imagery using off-axis projective geometry based on the tracked location of the eyes of the user to simulate an angled viewpoint of the onscreen imagery from the perspective of the location of the user; and outputting the updated rendering of the onscreen imagery on a display screen.
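The off-axis projective geometry mentioned above is commonly realized as an asymmetric view frustum whose near-plane extents shift opposite the tracked eye position. The sketch below follows the standard generalized perspective projection (glFrustum-style matrix) for a screen centered at the origin in the z = 0 plane; the coordinate conventions and names are our assumptions, not the patent's.

```python
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near, far):
    """Sketch: asymmetric-frustum projection matrix for a tracked eye at
    `eye` = (ex, ey, ez), ez > 0 in front of a screen of size
    screen_w x screen_h centered at the origin in the z = 0 plane."""
    ex, ey, ez = eye
    # Frustum extents at the near plane, shifted opposite the eye offset.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    # Standard OpenGL-style frustum matrix built from those extents.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
```

Re-rendering with this matrix each time the tracked eye moves is what simulates the angled viewpoint: a centered eye yields a symmetric frustum, while an off-center eye skews it.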