Patent classifications
H04N13/366
SYSTEM AND METHOD FOR DETERMINING DIRECTIONALITY OF IMAGERY USING HEAD TRACKING
There is provided a system and method for reinstating directionality of onscreen displays of three-dimensional (3D) imagery using sensor data capturing eye location of a user. The method can include: receiving the sensor data capturing the eye location of the user; tracking the location of the eyes of the user relative to a screen using the captured sensor data; determining an updated rendering of the onscreen imagery using off-axis projective geometry based on the tracked location of the eyes of the user to simulate an angled viewpoint of the onscreen imagery from the perspective of the location of the user; and outputting the updated rendering of the onscreen imagery on a display screen.
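The off-axis projective geometry described in this abstract can be illustrated as an asymmetric view frustum whose edges are recomputed from the tracked eye position. The sketch below is a minimal illustration, not the patented implementation; the screen dimensions, units, and coordinate convention (screen centred at the origin, +z toward the viewer) are assumptions:

```python
import numpy as np

def off_axis_projection(eye, half_w, half_h, near, far):
    """Asymmetric-frustum projection matrix for an eye at `eye`
    (relative to the screen centre; +z points toward the viewer).

    half_w / half_h are the physical half-extents of the screen.
    """
    ex, ey, ez = eye
    # Frustum edges at the near plane, shifted by the eye offset
    # and scaled from the screen plane to the near plane.
    left   = (-half_w - ex) * near / ez
    right  = ( half_w - ex) * near / ez
    bottom = (-half_h - ey) * near / ez
    top    = ( half_h - ey) * near / ez
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal skew
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical skew
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```

With the eye centred on the screen the skew terms vanish and the matrix reduces to an ordinary symmetric perspective projection; as the tracked eye moves off-axis, the skew terms tilt the frustum so the rendered imagery simulates the angled viewpoint.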
Image generation apparatus, image generation method, and program
Provided is an image generation technology capable of suppressing unpleasantness associated with fluctuation in image quality caused by the viewer's viewpoint movement. A synthesis ratio determination unit of an image generation apparatus includes: an image quality evaluation index calculation unit that generates a synthesis image J₁ from an observation viewpoint image I₁ and an intermediate viewpoint image I₂, and a synthesis image J₃ from an observation viewpoint image I₃ and an intermediate viewpoint image I₂, for a plurality of synthesis ratios, calculates an image quality evaluation index in an observation viewpoint V₁, an image quality evaluation index in an intermediate viewpoint V₂, and an image quality evaluation index in an observation viewpoint V₃ by using the synthesis images J₁ and J₃, and calculates a variation v of an image quality evaluation index by using the image quality evaluation index in the observation viewpoint V₁, the image quality evaluation index in the intermediate viewpoint V₂, and the image quality evaluation index in the observation viewpoint V₃; and an image quality evaluation index comparison unit that determines a synthesis ratio A based on the variation v of the image quality evaluation index.
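The ratio-selection step above can be sketched as a search over candidate blend ratios, keeping the one whose per-viewpoint quality indices fluctuate least. This is a hypothetical stand-in for the patent's evaluation-index comparison: the linear blend, the `quality` metric, and the use of variance as the "variation v" are all assumptions:

```python
import numpy as np

def choose_synthesis_ratio(i1, i2, i3, ratios, quality):
    """Return the blend ratio that minimises the variation of the
    image-quality index across the three viewpoints.

    i1, i3: observation-viewpoint images; i2: intermediate-viewpoint
    image (NumPy arrays). `quality` maps an image to a scalar index.
    """
    best_ratio, best_var = None, float("inf")
    for a in ratios:
        j1 = a * i1 + (1 - a) * i2      # synthesis image J1
        j3 = a * i3 + (1 - a) * i2      # synthesis image J3
        q1, q2, q3 = quality(j1), quality(i2), quality(j3)
        v = np.var([q1, q2, q3])        # variation of the index
        if v < best_var:
            best_ratio, best_var = a, v
    return best_ratio
```

Minimising the spread of the index across V₁, V₂, and V₃ is what keeps the perceived quality from fluctuating as the viewer's viewpoint moves between them.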
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
An image processing apparatus includes an imaging unit that images a user and a real object, an analysis unit that analyzes an attitude of the real object based on imaging information captured by the imaging unit, and a control unit that controls display of an image related to the real object based on the attitude of the real object.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus (100) includes: a calculation unit (163) that calculates an index value related to display of stereoscopic image content including a stereoscopic image based on a relative position of a viewer with respect to a position of a display unit (150) that displays the stereoscopic image; and a display control unit (164) that controls display processing performed by the display unit (150) based on the index value calculated by the calculation unit (163).
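One way to picture an index value derived from the viewer's relative position is the cosine of the angle between the display normal and the direction to the viewer, with display processing gated on that value. This is a minimal sketch under assumed geometry and an assumed threshold policy, not the patent's actual index:

```python
import math

def viewing_index(viewer_pos, display_pos, normal=(0, 0, 1)):
    """Hypothetical index: cosine of the angle between the display
    normal and the direction from the display to the viewer
    (1.0 = head-on, 0.0 = edge-on)."""
    dx = [v - d for v, d in zip(viewer_pos, display_pos)]
    norm = math.sqrt(sum(c * c for c in dx))
    dot = sum(c * n for c, n in zip(dx, normal))
    return dot / norm if norm else 0.0

def should_display_stereo(index, threshold=0.5):
    """Gate stereoscopic rendering on the index (assumed policy)."""
    return index >= threshold
```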
Enhanced Emotive Engagement with Volumetric Content
A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES
Systems and methods are described for selectively sharing audio and video streams amongst electronic eyewear devices. Each electronic eyewear device includes a camera arranged to capture a video stream in an environment of the wearer, a microphone arranged to capture an audio stream in the environment of the wearer, and a display. A processor of each electronic eyewear device executes instructions to establish an always-on session with other electronic eyewear devices and selectively shares an audio stream, a video stream, or both with other electronic eyewear devices in the session. Each electronic eyewear device also generates and receives annotations from other users in the session for display with the selectively shared video stream on the display of the electronic eyewear device that provided the selectively shared video stream. The annotation may include manipulation of an object in the shared video stream or overlay images registered with the shared video stream.
Electronic device and control method thereof
An electronic device according to the present invention includes: a processor; and a memory storing a program which, when executed by the processor, causes the electronic device to: perform control to change a display region of an image in accordance with a change in the orientation of the electronic device, or in accordance with acceptance of a user operation, and display the display region of the image on a screen; and determine a clipping region of the image to be clipped from the image based on a position of the display region of the image, wherein the image includes the display region and the clipping region, and the clipping region is wider than the display region.
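A clipping region that is wider than, and positioned from, the display region can be sketched as the display rectangle expanded by a margin on each side and clamped to the image bounds. The fractional-margin policy is an assumption for illustration, not the claimed determination method:

```python
def clipping_region(image_w, image_h,
                    disp_x, disp_y, disp_w, disp_h,
                    margin=0.25):
    """Return (x, y, w, h) of a clip region surrounding the current
    display region by a `margin` fraction of its size on each side,
    clamped so it stays inside the image."""
    mx, my = int(disp_w * margin), int(disp_h * margin)
    x0 = max(0, disp_x - mx)
    y0 = max(0, disp_y - my)
    x1 = min(image_w, disp_x + disp_w + mx)
    y1 = min(image_h, disp_y + disp_h + my)
    return x0, y0, x1 - x0, y1 - y0
```

Because the clip region extends beyond the display region, panning the display region (by rotating the device or by a user operation) can be serviced from already-clipped pixels before a new clip must be taken.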