Patent classifications
H04N13/398
Display method of image
A display method of an image is disclosed. A position of a vergence surface of a user is obtained through a gaze tracking device. An image is provided by a display; the image is located at a virtual image surface and has an offset between different view directions. A controller is coupled to the gaze tracking device and the display. The controller receives information on the position of the vergence surface obtained through the gaze tracking device, performs algorithmic processing according to the information to obtain the offset, and transmits display information including the offset to the display. An eye of the user focuses on an accommodation surface when viewing the image, and a position of the accommodation surface is different from a position of the virtual image surface.
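The abstract does not specify the algorithm that maps the tracked vergence position to the per-view offset. A minimal sketch of one plausible geometric model, assuming a small-angle pinhole/triangulation relation between interpupillary distance, gaze depth, and the virtual image plane (all names and parameters here are hypothetical, not from the patent):

```python
import math

def view_offset_px(vergence_m, image_plane_m, ipd_m=0.063,
                   px_per_rad=1000.0):
    """Per-eye horizontal image offset (in pixels) that shifts the
    perceived depth from the virtual image plane to the tracked
    vergence distance.

    Hypothetical model: the vergence angle at depth d is approximated
    as IPD/d (small-angle), and each eye's image is shifted by half
    the angular difference between the two depths.
    """
    theta_vergence = ipd_m / vergence_m      # angle at gaze depth
    theta_plane = ipd_m / image_plane_m      # angle at image plane
    return 0.5 * (theta_vergence - theta_plane) * px_per_rad

# The offset vanishes when the vergence surface coincides with the
# virtual image surface, and grows as the gaze depth moves nearer.
print(view_offset_px(2.0, 2.0))  # → 0.0
print(view_offset_px(1.0, 2.0))  # → 15.75
```

Under this model, the controller would recompute the offset each frame from the gaze tracker's reported vergence depth and send it to the display.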
Image processing apparatus, image processing method, and storage medium
Provided is an image processing apparatus including: a virtual viewpoint image generating unit that generates a virtual viewpoint image, which is a video, based on captured images captured by image capturing apparatuses from different directions; an electronic sign information obtaining unit that obtains information indicating a timing at which content displayed on an electronic sign changes, the electronic sign being contained in the virtual viewpoint image and configured to change the displayed content on a time basis; and a control unit that performs control to cause a display unit to display the virtual viewpoint image with virtual content inserted, the virtual content not being contained in the captured images. Based on the information, the control unit controls how the virtual content is displayed on the virtual viewpoint image.
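The abstract leaves open how the timing information drives the virtual content. A minimal sketch of one way the control unit could use it, assuming the change timings arrive as a sorted list of timestamps and the virtual content switches in lockstep with the real sign (the function and data names are illustrative only):

```python
import bisect

def virtual_content_at(t, change_times, virtual_contents):
    """Select which virtual content to composite into the virtual
    viewpoint frame at time t, switching at the same moments the
    real electronic sign changes its displayed content.

    change_times: sorted timestamps of sign changes.
    virtual_contents: one entry per interval, so it must hold
    len(change_times) + 1 items.
    """
    i = bisect.bisect_right(change_times, t)
    return virtual_contents[i]

ads = ["ad_A", "ad_B", "ad_C"]
print(virtual_content_at(12.0, [10.0, 20.0], ads))  # → ad_B
```

Keeping the virtual insertions synchronized with the sign's real schedule avoids the inserted content visibly changing out of step with the rest of the scene.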
Image processing apparatus that performs processing concerning display of stereoscopic image, image processing method, and storage medium
An image processing apparatus that performs processing concerning the display of a stereoscopic image so as to improve a user's convenience in viewing the stereoscopic image. A head-mounted display displays a stereoscopic image using image data including a plurality of images having different viewpoints. The image data is processed based on metadata attached to the image data. In a case where the metadata includes information indicating that the image data is associated with a file format that the head-mounted display cannot display, the image data is converted into a file format that the head-mounted display can display.
Enhanced Emotive Engagement with Volumetric Content
A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain feedback associated with a viewer and can determine an emotional state of the viewer based, at least in part, upon that feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric video to achieve a desired emotional state of the viewer.
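The abstract describes a feedback loop from inferred emotional state to manipulation instructions without fixing the mapping. A minimal sketch, assuming the mapping is a simple rule table keyed on (current state, desired state) pairs; the states, actions, and instruction shape are all hypothetical:

```python
def manipulation_instruction(current_state, desired_state, position):
    """Pick a playback manipulation intended to nudge the viewer from
    the inferred emotional state toward the desired one, targeted at
    the viewer's actionable position in the volumetric video.

    Hypothetical rule table; a real system might instead learn this
    mapping from viewer feedback over time.
    """
    rules = {
        ("bored", "excited"): "zoom_to_action",
        ("anxious", "calm"): "widen_view",
    }
    action = rules.get((current_state, desired_state), "no_op")
    return {"position": position, "action": action}

print(manipulation_instruction("bored", "excited", (1.0, 0.0, 2.0)))
```

The playback system would then apply the returned action at the given actionable position, and the loop repeats as new viewer feedback arrives.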
Methods for displaying user interface elements relative to media content
In some embodiments, a computer system displays a caption for a media item at different depths depending on the depth of the portion of the media item over which the caption is displayed. In some embodiments, a computer system displays a user interface element that includes information associated with the media item at different locations relative to the media item depending on attention of the user. In some embodiments, a computer system displays a user interface element that includes information associated with the media item with different visual appearances depending on visual characteristics of the portion of the media item over which the user interface element is displayed.
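The first embodiment places the caption at a depth derived from the depth of the media it overlaps. A minimal sketch of one such derivation, assuming a per-pixel depth map where smaller values are closer to the viewer and the caption is pushed slightly in front of the nearest overlapped content (the margin and data layout are assumptions, not from the source):

```python
def caption_depth(media_depth_map, caption_region, margin=0.1):
    """Depth at which to render a caption: slightly in front of the
    nearest depth value in the region of the media item the caption
    overlaps, so the caption never appears embedded inside content.

    media_depth_map: 2D grid of depths (smaller = closer to viewer).
    caption_region: (row, col) cells the caption covers.
    """
    region_depths = [media_depth_map[r][c] for r, c in caption_region]
    return min(region_depths) - margin

depth = [[2.0, 2.0],
         [1.5, 3.0]]
print(caption_depth(depth, [(1, 0), (1, 1)]))  # → 1.4
```

Recomputing this as the media plays keeps the caption floating just in front of whatever it currently overlaps, rather than at a fixed depth.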
Multi-depth display system
An imaging system includes an image realisation device and projection optics for rendering a display image on a display screen. The image realisation device includes an image realisation surface and a light structuring device having a surface with first and second regions. The light structuring device simulates a first lens on the first region of the surface. A first source image formed on a first region of the image realisation surface and projected through the projection optics renders a first display image on the display screen at a first apparent depth. The light structuring device simulates a second lens on the second region of the surface. A second source image formed on a second region of the image realisation surface and projected through the projection optics renders a second display image on the display screen at a second apparent depth. The first and second lenses are independently configurable.
Light field display
A method of displaying a light field to at least one viewer of a light field display device, the light field based on a 3D model, the light field display device comprising a plurality of spatially distributed display elements, the method including the steps of: (a) determining the viewpoints of the eyes of the at least one viewer relative to the display device; (b) for each eye viewpoint and each of a plurality of the display elements, rendering a partial view image representing a view of the 3D model from the eye viewpoint through the display element; and (c) displaying, via each display element, the set of partial view images rendered for that display element.
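Steps (a)-(c) above amount to a doubly nested rendering loop. A minimal sketch, assuming eye viewpoints have already been determined and with `render_view` standing in for the actual per-viewpoint, per-element renderer (which the abstract does not specify):

```python
def render_light_field(display_elements, eye_viewpoints, render_view):
    """Steps (b)-(c) of the claimed method: for each tracked eye
    viewpoint and each spatially distributed display element, render
    a partial view of the 3D model as seen from that eye through
    that element, then group the partial views by element so each
    element can display its own set.

    render_view(eye, element) is a hypothetical stand-in renderer.
    """
    per_element = {}
    for elem in display_elements:
        per_element[elem] = [render_view(eye, elem)
                             for eye in eye_viewpoints]
    return per_element

# Placeholder renderer that just labels each (eye, element) pair.
views = render_light_field(
    ["e0", "e1"], ["left", "right"],
    lambda eye, elem: f"{eye}@{elem}")
print(views["e0"])  # → ['left@e0', 'right@e0']
```

With V viewpoints (two per tracked viewer) and E display elements, this renders V × E partial views per frame, which is why the method restricts rendering to the tracked eye viewpoints rather than all possible viewing directions.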