Patent classifications
H04N13/366
Enhanced Emotive Engagement with Volumetric Content
A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with a viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric video to achieve a desired emotional state of the viewer.
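The abstract above describes a pipeline: annotate voxels with contextual data, package them with actionable positions, then emit manipulation instructions driven by the gap between the viewer's current and desired emotional states. A minimal sketch of that flow follows; all names (`Voxel`, `annotate`, `manipulation_instructions`) and the instruction format are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    x: int
    y: int
    z: int
    annotation: object = None  # contextual data attached to this voxel

@dataclass
class AnnotatedVolumetricVideo:
    voxels: list                 # the annotated voxel cloud
    actionable_positions: list   # positions from which manipulation may occur

def annotate(voxels, contextual_data, region):
    """Attach contextual data to the subset of voxels for which `region` holds."""
    for v in voxels:
        if region(v):
            v.annotation = contextual_data
    return voxels

def manipulation_instructions(current_emotion, desired_emotion, position):
    """Instruct the playback system how to manipulate content at an actionable
    position; no instruction is needed when the states already match."""
    if current_emotion == desired_emotion:
        return []
    return [{"position": position, "from": current_emotion,
             "to": desired_emotion, "action": "adjust_presentation"}]

voxels = [Voxel(x, 0, 0) for x in range(4)]
annotate(voxels, "landmark: stadium entrance", lambda v: v.x < 2)
video = AnnotatedVolumetricVideo(voxels, actionable_positions=[(0, 0, 0)])
instructions = manipulation_instructions("bored", "excited",
                                         video.actionable_positions[0])
```

In this sketch the emotional state is passed in directly; the patent derives it from viewer feedback, a step omitted here.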
Scope of coverage indication in immersive displays
An immersive display and a method of operating the immersive display to provide information relating to an object. The method includes receiving information from an input device of the immersive display or coupled to the immersive display, detecting an object based on the information received from the input device, and displaying a representation of the object on images displayed on a display of the immersive display such that attributes of the representation distinguish the representation from the images displayed on the display, wherein the representation is displayed at a location on the display that corresponds with a location of the object.
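The method above maps a detected object's location to a corresponding location on the display and renders a visually distinct representation there. A rough sketch of those two steps, assuming a simple linear coordinate mapping and a dictionary-based frame; both functions and the `style` attributes are hypothetical:

```python
def sensor_to_display(obj_xy, sensor_size, display_size):
    """Map an object's location in input-device (sensor) coordinates to the
    corresponding location in display coordinates via linear scaling."""
    sx, sy = sensor_size
    dx, dy = display_size
    x, y = obj_xy
    return (x * dx / sx, y * dy / sy)

def overlay_representation(frame, display_xy, style):
    """Add a representation of the object onto the displayed images at the
    mapped location; `style` carries the attributes (e.g. colour, outline)
    that distinguish the representation from the underlying imagery."""
    frame.setdefault("overlays", []).append({"pos": display_xy, "style": style})
    return frame

pos = sensor_to_display((50, 50), sensor_size=(100, 100), display_size=(1920, 1080))
frame = overlay_representation({}, pos, {"colour": "red", "outline": True})
```

A real immersive display would also account for lens distortion and per-eye projection, which this sketch ignores.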
Virtual reality surgical camera system
A system includes a console assembly, a trocar assembly operably coupled to the console assembly, a camera assembly operably coupled to the console assembly and having a stereoscopic camera assembly, and at least one rotational position sensor configured to detect rotation of the stereoscopic camera assembly about at least one of a pitch axis or a yaw axis. The console assembly includes a first actuator and a first actuator pulley operably coupled to the first actuator. The trocar assembly includes a trocar having an inner and an outer diameter, and a seal sub-assembly comprising at least one seal, the seal sub-assembly operably coupled to the trocar. The camera assembly includes a camera support tube having a distal end and a proximal end, the stereoscopic camera assembly operably coupled to the distal end of the support tube, and first and second camera modules having first and second optical axes.
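The rotational position sensor above reports the stereoscopic camera's rotation about the pitch and yaw axes. A toy sketch of converting raw sensor readings into angles, assuming a hypothetical incremental encoder with a fixed count per revolution (the patent does not specify the sensor type or resolution):

```python
COUNTS_PER_REV = 4096  # hypothetical encoder resolution, counts per 360 degrees

def counts_to_degrees(counts):
    """Convert raw encoder counts into degrees of rotation."""
    return counts * 360.0 / COUNTS_PER_REV

def camera_orientation(pitch_counts, yaw_counts):
    """Report the stereoscopic camera assembly's rotation about the
    pitch and yaw axes, in degrees."""
    return {"pitch": counts_to_degrees(pitch_counts),
            "yaw": counts_to_degrees(yaw_counts)}
```

A console could feed these angles back to the first actuator to hold or correct the camera's orientation.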
Immersive viewing experience
This patent discloses a method of recording imagery at a scale larger than a user could visualize at once. The user can then view the scene naturally via head tracking and eye tracking, seeing and inspecting it as if naturally present and viewing it in real time. A smart system that analyzes a user's viewing parameters and streams a customized image for display is also taught herein.
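The streaming idea above amounts to selecting, from oversized imagery, the sub-region the user is currently looking at, as indicated by head and eye tracking. A simplified sketch of that viewport selection follows; the coordinate model, the pixels-per-degree factor, and the square field of view are all assumptions for illustration:

```python
def viewport(gaze_xy, head_yaw_deg, image_size, fov_px):
    """Select the sub-region (left, top, right, bottom) of oversized imagery
    to stream, centred where head yaw and gaze indicate the user is looking,
    clamped so the viewport stays inside the image."""
    w, h = image_size
    half = fov_px // 2
    # shift the centre horizontally with head yaw (assumed 10 px per degree)
    cx = min(max(gaze_xy[0] + head_yaw_deg * 10, half), w - half)
    cy = min(max(gaze_xy[1], half), h - half)
    return (cx - half, cy - half, cx + half, cy + half)

# User looking at the centre of an 8000x4000 capture with a 1000 px viewport
region = viewport((4000, 2000), head_yaw_deg=0, image_size=(8000, 4000), fov_px=1000)
```

Only the selected region need be encoded and streamed, which is what makes the oversized capture practical to deliver.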
VIEWING SYSTEM, DISTRIBUTION APPARATUS, VIEWING APPARATUS, AND RECORDING MEDIUM
A viewing system provides a viewing user with an experience of viewing a content that presents a character whose behaviors are controlled based on physical motions of a performer, the content being a binocular stereopsis content that presents staging in a 3D space in which a first character associated with a first performer and a second character associated with a second performer are arranged. The first character is arranged in a first region in the space, a viewpoint associated with the viewing user and the second character are arranged in a second region in the space. The system controls behaviors of the first character based on first motion information of the first performer, and controls behaviors of the second character based on at least one of second motion information of the second performer and information of an operational input performed by the viewing user.
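The control scheme above distinguishes the two characters: the first is driven solely by the first performer's motion, while the second may follow the second performer's motion, the viewing user's operational input, or both. A minimal sketch of that split, with hypothetical function names and pose dictionaries standing in for real motion data:

```python
def first_character_behavior(first_motion):
    """The first character's behavior is controlled by the first
    performer's motion information only."""
    return {"character": "first", "pose": dict(first_motion)}

def second_character_behavior(second_motion=None, viewer_input=None):
    """The second character's behavior is controlled by at least one of the
    second performer's motion information and the viewing user's
    operational input; viewer input overrides overlapping keys here."""
    if second_motion is None and viewer_input is None:
        raise ValueError("at least one control source is required")
    pose = dict(second_motion or {})
    pose.update(viewer_input or {})
    return {"character": "second", "pose": pose}

a = first_character_behavior({"arm": "raised"})
b = second_character_behavior({"arm": "lowered"}, {"wave": True})
```

Letting viewer input override performer motion is just one possible merge policy; the patent only requires that at least one source controls the second character.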