Patent classifications
H04N9/8715
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing apparatus includes a receiving unit that receives, during or after reproduction of a video, a predetermined operation with respect to the video; an associating unit that associates the received operation with the reproduction location in the video where the received operation was generated; and a setting unit that sets, in response to the received operation, an importance degree of the reproduction location associated with the received operation.
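As an illustrative sketch of the mechanism the abstract describes (not the patent's actual implementation; all names, operation types, and weights below are hypothetical), the receiving, associating, and setting units could be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ImportanceTagger:
    """Associates viewer operations with reproduction locations and sets an
    importance degree per location (hypothetical names and weights)."""
    # operation type -> importance weight (assumed mapping)
    weights: dict = field(default_factory=lambda: {"like": 3, "replay": 2, "pause": 1})
    importance: dict = field(default_factory=dict)  # position (seconds) -> degree

    def receive(self, operation: str, position_sec: float) -> None:
        # Associate the operation with the reproduction location where it was
        # generated, and raise that location's importance degree accordingly.
        self.importance[position_sec] = (
            self.importance.get(position_sec, 0) + self.weights.get(operation, 0)
        )

tagger = ImportanceTagger()
tagger.receive("like", 12.5)
tagger.receive("replay", 12.5)
print(tagger.importance[12.5])  # 5
```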
TRANSMITTER, TRANSMISSION METHOD, RECEIVER, AND RECEPTION METHOD
An association with system timing at the time of transmission is secured without changing the display timing in the text information of a subtitle, so that the reception side can display the subtitle at an appropriate timing.
A packet whose payload includes a document of the text information of the subtitle having display timing information is generated and transmitted in synchronization with a sample period. A header of the packet includes a time stamp on a first time axis indicating a start time of the corresponding sample period. The payload of the packet further includes reference time information on a second time axis regarding the display timing, associated with the start time of the corresponding sample period.
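A minimal sketch of how the two time axes described above could be reconciled on the reception side (field names, units, and the mapping function are assumptions for illustration, not the patent's wire format):

```python
from dataclasses import dataclass

@dataclass
class SubtitlePacket:
    """Sketch of the described packet: the header carries a time stamp on a
    first time axis (start of the sample period); the payload carries the
    subtitle document plus reference time info on a second time axis."""
    header_timestamp_axis1: int   # start time of the corresponding sample period
    payload_document: str         # subtitle text document with display timing info
    payload_reference_axis2: int  # reference time on the second time axis

def display_time_axis1(pkt: SubtitlePacket, display_offset_axis2: int) -> int:
    # Map a display timing given on the second axis back onto the first
    # (system) axis via the shared start-of-sample-period anchor.
    return pkt.header_timestamp_axis1 + (display_offset_axis2 - pkt.payload_reference_axis2)

pkt = SubtitlePacket(header_timestamp_axis1=90000,
                     payload_document="<p>Hello</p>",
                     payload_reference_axis2=1000)
print(display_time_axis1(pkt, 1500))  # 90500
```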
Template-Based Excerpting and Rendering of Multimedia Performance
Disclosed herein are computer-implemented method, system, and computer-readable storage-medium embodiments for implementing template-based excerpting and rendering of multimedia performance technologies. An embodiment includes at least one computer processor configured to retrieve a first content instance and corresponding first metadata. The first content instance may include a first plurality of structural elements, with at least one structural element corresponding to at least part of the first metadata. The first content instance may be transformed by a rendering engine running on the at least one computer processor and/or transmitted to a content-playback device.
EVENT-TRIGGERED VIDEO CREATION WITH DATA AUGMENTATION
A method for creating a video based on the occurrence of pertinent events within a period of time. This video may be a summary video that includes video segments from multiple sources. The video may be augmented to display data describing the pertinent events that occur.
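The selection step the abstract outlines could be sketched as follows (a simplified illustration under assumed data shapes; event and segment fields are hypothetical, and no actual video editing is performed):

```python
def build_summary(events, segments_by_source, window):
    """For each pertinent event inside the time window, select the video
    segments from every source that overlap it, and attach the event data
    as a display overlay (data augmentation)."""
    start, end = window
    summary = []
    for ev in events:
        if not (start <= ev["time"] <= end):
            continue  # event falls outside the period of interest
        for source, segments in segments_by_source.items():
            for seg in segments:
                if seg["start"] <= ev["time"] <= seg["end"]:
                    summary.append({"source": source, "segment": seg, "overlay": ev["data"]})
    return summary

events = [{"time": 10, "data": "goal scored"}, {"time": 99, "data": "out of window"}]
segments_by_source = {"cam1": [{"start": 8, "end": 12}],
                      "cam2": [{"start": 20, "end": 30}]}
result = build_summary(events, segments_by_source, window=(0, 60))
print(result)  # one cam1 segment annotated with "goal scored"
```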
SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER
This disclosure describes a system configured to present primary and secondary, tertiary, etc., virtual reality content to a user. Primary virtual reality content may be displayed to a user, and, responsive to the user turning his view away from the primary virtual reality content, a sensory cue is provided to the user that indicates to the user that his view is no longer directed toward the primary virtual reality content, and secondary, tertiary, etc., virtual reality content may be displayed to the user. Primary virtual reality content may resume when the user returns his view to the primary virtual reality content. Primary virtual reality content may be adjusted based on a user's interaction with the secondary, tertiary, etc., virtual reality content. Secondary, tertiary, etc., virtual reality content may be adjusted based on a user's progression through the primary virtual reality content, or interaction with the primary virtual reality content.
Video clip object tracking
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering a three-dimensional virtual object in a video clip. The method and system include capturing, using a camera-enabled device, video content of a real-world scene and movement information collected by the camera-enabled device during capture of the video content. The captured video and movement information are stored. The stored captured video content is processed to identify a real-world object in the scene. An interactive augmented reality display is generated that: adds a virtual object to the stored video content to create augmented video content comprising the real-world scene and the virtual object; and adjusts, during playback of the augmented video content, an on-screen position of the virtual object within the augmented video content based at least in part on the stored movement information.
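The playback-adjustment idea above can be sketched in a few lines, assuming the stored movement information is a sequence of per-frame camera deltas (a deliberately simplified 2-D stand-in for the patent's tracking; all names are hypothetical):

```python
def adjusted_positions(initial_xy, movement_deltas):
    """Replay stored camera movement to keep a virtual object anchored in the
    scene: each frame's on-screen position compensates for the recorded
    camera delta for that frame."""
    x, y = initial_xy
    positions = [(x, y)]
    for dx, dy in movement_deltas:
        # The camera moved by (dx, dy); the anchored object shifts the
        # opposite way on screen so it appears fixed in the real-world scene.
        x -= dx
        y -= dy
        positions.append((x, y))
    return positions

print(adjusted_positions((100, 50), [(5, 0), (0, -3)]))
# [(100, 50), (95, 50), (95, 53)]
```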
Automatic versioning of video presentations
A system and method are presented to create custom versions for users of recorded sessions of individuals. Individuals are recorded at a booth responding to prompts. Audio and visual data recorded at the booth are divided into time segments according to the timing of the prompts. Depth sensors at the booth are used to assign score values to time segments. Prompts are related to criteria that were selected as being relevant to an objective. Users are associated with subsets of criteria in order to identify subsets of prompts whose responses are relevant to the users. Time segments of audio and visual data created by the identified subset of prompts are selected. The selected time segments are ordered according to herd behavior analysis. Lesser weighted time segments may be redacted. The remaining portions of ordered time segments are presented to the user as a custom version.
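The select/redact/order pipeline described above might look like the sketch below, where score-based ordering stands in for the abstract's herd-behavior analysis (segment fields, the criteria model, and the threshold are all assumptions for illustration):

```python
def custom_version(segments, user_criteria, weight_threshold):
    """Keep time segments whose prompt criterion matches the user's criteria,
    redact lesser-weighted segments, and order the remainder by score
    (a stand-in for the described herd-behavior ordering)."""
    relevant = [s for s in segments if s["criterion"] in user_criteria]
    kept = [s for s in relevant if s["score"] >= weight_threshold]
    return sorted(kept, key=lambda s: s["score"], reverse=True)

segments = [{"criterion": "teamwork", "score": 0.9},
            {"criterion": "teamwork", "score": 0.2},   # redacted: below threshold
            {"criterion": "sales", "score": 0.8}]      # dropped: not a user criterion
ordered = custom_version(segments, {"teamwork"}, 0.5)
print(ordered)  # [{'criterion': 'teamwork', 'score': 0.9}]
```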
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM
An image processing device (3000) comprises an input unit (3020) and a presentation unit (3040). The input unit (3020) accepts an input of an operation for moving, on a captured image captured by a camera, a first image which is superimposed on the captured image on the basis of a predetermined camera parameter indicating the position and attitude of the camera, and which indicates a target object having a predetermined shape and a predetermined size set in a real space. The presentation unit (3040) presents the first image indicating the target object with an appearance corresponding to its position on the captured image after the movement, on the basis of the camera parameter.
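A pinhole-camera projection is one plausible reading of how a camera parameter (position and attitude) determines where and how large the superimposed target image appears; the sketch below illustrates that geometry only, with all parameters hypothetical:

```python
def project_point(point_world, R, t, f):
    """Pinhole-camera sketch: project a 3-D target point into the image
    using camera rotation R (3x3 list of lists), translation t, and focal
    length f, standing in for the abstract's camera parameter."""
    # p_cam = R @ point_world + t, written out with plain lists
    p_cam = [sum(R[i][j] * point_world[j] for j in range(3)) + t[i] for i in range(3)]
    x, y, z = p_cam
    # Perspective divide: farther objects land closer to the image center
    # and appear smaller, matching the "manner of view" per position.
    return (f * x / z, f * y / z)

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity attitude (camera not rotated)
print(project_point([0.0, 0.0, 4.0], I, [0.0, 0.0, 1.0], 800.0))  # (0.0, 0.0)
```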
Stereoscopic 3D camera for virtual reality experience
Embodiments are disclosed for a stereoscopic device (also referred to simply as the “device”) that captures three-dimensional (3D) images and videos with a wide field of view and provides a virtual reality (VR) experience by immersing a user in a simulated environment using the captured 3D images or videos.
User-generated templates for segmented multimedia performance
Disclosed herein are computer-implemented method, system, and computer-readable storage-medium embodiments for implementing user-generated templates for segmented multimedia performances. An embodiment includes at least one computer processor configured to transmit a first version of a content instance and corresponding metadata. The first version of the content instance may include a plurality of structural elements, with at least one structural element corresponding to at least part of the metadata. The first content instance may be transformed by a rendering engine triggered by the at least one computer processor.