Patent classifications
G06F16/748
TEXT-DRIVEN EDITOR FOR AUDIO AND VIDEO ASSEMBLY
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The system imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text, and the transcript is synchronized to the video track by timecode markers: each spoken word corresponds to a timecode marker, which in turn corresponds to one or more video frames. Using familiar word-processing and text-editing operations, a user selects video segments by selecting the corresponding transcribed text; by selecting and arranging text, a corresponding video program is assembled. The selected video segments are placed on a timeline display in any order the user chooses, and the sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
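The word-to-timecode mapping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `Word` structure and `select_segment` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str       # transcribed word
    start: float    # timecode of the first corresponding video frame (seconds)
    end: float      # timecode of the last corresponding video frame (seconds)

def select_segment(transcript, first_idx, last_idx):
    """Map a selected span of transcript words to a video in/out point."""
    return (transcript[first_idx].start, transcript[last_idx].end)

transcript = [Word("hello", 0.0, 0.4), Word("world", 0.4, 0.9), Word("again", 1.2, 1.6)]

# Selecting the text "world again" selects the corresponding video segment:
segment = select_segment(transcript, 1, 2)   # (0.4, 1.6)

# Segments chosen this way can be placed on a timeline in any order:
timeline = [select_segment(transcript, 2, 2), select_segment(transcript, 0, 1)]
```

Reordering the timeline list is then the text-driven analogue of rearranging clips in a conventional editor.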
MULTIMEDIA LINKED TIMESTAMP VALIDATION DETECTION
A computer-implemented method, including: receiving, by a computing device, a post referring to multimedia content; identifying, by the computing device, a time in the post; generating, by the computing device, a validity score based on analyzing contextual data of the time in the post; determining, by the computing device, a correlation between the time in the post and the multimedia content based on the validity score; and publishing, by the computing device, the post with an interactive link to a corresponding time of the multimedia content based on the determined correlation.
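A toy version of the detect-score-link flow might look like the following. The regex, scoring rule, and threshold are illustrative assumptions; the claim's "contextual data" analysis is reduced here to a simple duration check.

```python
import re

# Matches times like "1:30" or "12:05" in post text.
TIME_RE = re.compile(r"\b(\d{1,2}):([0-5]\d)\b")

def timestamp_links(post, media_duration_s, threshold=0.5):
    """Find times in a post, score their validity, and emit links for valid ones."""
    links = []
    for m in TIME_RE.finditer(post):
        seconds = int(m.group(1)) * 60 + int(m.group(2))
        # Toy validity score: 1.0 if the time falls within the media,
        # 0.0 otherwise; a real system would also weigh contextual data.
        score = 1.0 if seconds <= media_duration_s else 0.0
        if score >= threshold:
            links.append((m.group(0), seconds))
    return links

timestamp_links("great bit at 1:30, also 9:59", media_duration_s=200)
# [('1:30', 90)] — 9:59 (599 s) exceeds the 200 s runtime, so no link is made
```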
Information processing apparatus, information processing method, and program for presenting reproduced video including service object and adding additional image indicating the service object
This information processing apparatus includes: a media reproduction unit that acquires and reproduces video data including a service object, for which a service that processes a request from a user through voice is available; and a controller that adds an additional image for informing the user about the service object to the reproduced video and saves identification information of the video data and information of a start time and an end time of the additional image, as a bookmark that is optionally selected by the user and is provided to a scene with the additional image.
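The bookmark described above — identification information of the video data plus the start and end times of the additional image — can be represented with a small record type. The names below are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bookmark:
    video_id: str      # identification information of the video data
    start: float       # start time of the additional image (seconds)
    end: float         # end time of the additional image (seconds)

def in_scene(bm, t):
    """True while playback time t falls within the bookmarked scene."""
    return bm.start <= t <= bm.end

bm = Bookmark("episode-42", start=12.0, end=18.5)
in_scene(bm, 15.0)   # True — the additional image is on screen at t=15
```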
RATING INTERFACE FOR BEHAVIORAL IMPACT ASSESSMENT DURING INTERPERSONAL INTERACTIONS
A rating interface system and method are provided that allow human users to continuously rate the impact they or other human users and/or their avatars are having on themselves or others during interpersonal interactions, such as conversations or group discussions. The system and method provide time stamping of users' ratings data and audio and video data of an interaction, and correlate the ratings data with the audio and video data at selected time intervals for subsequent analysis.
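Correlating timestamped ratings with audio/video data at selected intervals amounts to bucketing the ratings onto the same time grid. A minimal sketch, assuming ratings arrive as (timestamp, value) pairs:

```python
def correlate(ratings, interval_s):
    """Average timestamped ratings into fixed intervals so they can be
    lined up with audio/video data sampled on the same intervals.

    ratings: list of (timestamp_seconds, rating_value) pairs.
    Returns {interval_start_seconds: mean_rating}.
    """
    buckets = {}
    for t, r in ratings:
        key = int(t // interval_s)
        buckets.setdefault(key, []).append(r)
    return {k * interval_s: sum(v) / len(v) for k, v in sorted(buckets.items())}

correlate([(0.5, 3), (1.2, 5), (2.8, 4)], interval_s=2.0)
# {0.0: 4.0, 2.0: 4.0}
```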
Transformation of database entries for improved association with related content items
A content analysis system includes processor hardware and memory hardware storing an analyzed content database of analyzed content items and instructions for execution by the processor hardware. The instructions include, in response to a first intermediate content item being analyzed to generate a first text description, receiving the first intermediate content item and analyzing the first text description to generate a first reduced text description. The instructions include identifying a first set of tags by applying a tag model to the first text description and generating a first analyzed content item. The instructions include adding the first analyzed content item to the analyzed content database and, in response to a displayed content item being associated with at least one tag of the first set of tags, displaying a first user-selectable link corresponding to the first analyzed content item on a portion of a user interface of a user device displaying the displayed content item.
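The reduce-then-tag pipeline can be sketched with placeholder models. Both the stopword-based reduction and the keyword "tag model" below are deliberately naive stand-ins for whatever models the patent contemplates.

```python
STOPWORDS = {"a", "an", "the", "of", "and", "to", "in"}

def reduce_description(text, max_words=8):
    """Toy 'reduced text description': drop stopwords, keep the head."""
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return " ".join(words[:max_words])

def tag_model(text, vocabulary):
    """Toy tag model: a tag applies when it appears in the description."""
    return {tag for tag in vocabulary if tag in text.lower()}

desc = "A review of the best trail running shoes of the year"
analyzed = {"description": reduce_description(desc),
            "tags": tag_model(desc, {"running", "cycling", "shoes"})}
# analyzed["tags"] is {"running", "shoes"}; a displayed content item sharing
# either tag would surface a link to this analyzed item
```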
METHODS, SYSTEMS, AND MEDIA FOR PRESENTING INTERACTIVE ELEMENTS WITHIN VIDEO CONTENT
Methods, systems, and media for presenting interactive elements within video content are provided. In some embodiments, the method comprises: causing immersive video content to be presented on a user device, wherein the immersive video content includes at least a first view and a second view, and wherein the first view includes a first interactive element to be presented within the first view and the second view includes a second interactive element to be presented within the second view; receiving an indication that the first view of the immersive video content is to be presented; in response to receiving the indication, causing the first view of the immersive video content to be presented on the user device; determining that the first interactive element has been presented within the first view of the immersive video content; in response to determining that the first interactive element has been presented, identifying a content creator associated with the first interactive element; and assigning attribution information that indicates the presentation of the first interactive element to the content creator associated with the first interactive element.
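The attribution step — crediting a content creator each time their interactive element is presented within a view — can be modeled as a tally over presented views. The mappings below are illustrative assumptions:

```python
def attribute_presentations(presented_views, elements_by_view, creator_by_element):
    """Tally attribution each time an interactive element is presented
    within a view of the immersive video content."""
    counts = {}
    for view in presented_views:
        for element in elements_by_view.get(view, ()):
            creator = creator_by_element[element]
            counts[creator] = counts.get(creator, 0) + 1
    return counts

elements_by_view = {"front": ["banner"], "rear": ["poster"]}
creator_by_element = {"banner": "alice", "poster": "bob"}
attribute_presentations(["front", "front", "rear"],
                        elements_by_view, creator_by_element)
# {'alice': 2, 'bob': 1}
```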
Viewport selection for hypervideo presentation
Embodiments of the invention provide a method, system and computer program product for viewport selection for hypervideo presentation. In a method for viewport selection for hypervideo presentation, a multiplicity of different hypervideos, for example 360° hypervideos, are played back to an end user and end user interactions by the end user with each of the different hypervideos are recorded. Then, an end user profile for the end user is computed from the recorded end user interactions so as to specify a particular viewport. Finally, in response to a directive by the end user to view a new hypervideo, the end user profile is retrieved and the particular viewport identified so that the new hypervideo is played back to the end user utilizing the particular viewport.
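Computing an end-user profile from recorded interactions and using it to pick a starting viewport could look like the following sketch. The dwell-time heuristic and the (start°, end°) viewport representation are assumptions, not the patent's specification.

```python
from collections import Counter

def preferred_viewport(interactions):
    """Build an end-user profile from recorded viewport interactions and
    return the most-dwelt-in viewport for starting the next hypervideo.

    interactions: list of (video_id, viewport, dwell_seconds).
    """
    dwell = Counter()
    for _video, viewport, seconds in interactions:
        dwell[viewport] += seconds
    return dwell.most_common(1)[0][0]

log = [("v1", (0, 90), 40.0), ("v1", (90, 180), 5.0), ("v2", (0, 90), 30.0)]
preferred_viewport(log)   # (0, 90) — 70 s of dwell beats 5 s
```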
SYSTEM AND METHOD FOR PROVIDING ADDITIONAL INFORMATION BASED ON MULTIMEDIA CONTENT BEING VIEWED
A distributed computing system that applies artificial intelligence to relate a second multimedia program content to a first multimedia program content based on a key reference. A user terminal is set up locally in a user's environment to monitor a first multimedia program content consumed by the user. The user's reaction to a portion of the first multimedia program content is detected by the user terminal. The relevant portion of the first multimedia program content is identified and parsed to obtain a reference portion. The reference portion is related to the second multimedia program content using database mapping and machine learning.
USING MANIFEST FILES TO DETERMINE EVENTS IN CONTENT ITEMS
Systems, methods, and apparatuses are described for monitoring events in a plurality of different services. A system may monitor manifest files for one or more content items. Manifest files may contain manifest file tags indicating events. Events may be detected, and a switch from one content item to another may be caused based on customized user priority preferences.
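Detecting event tags in a manifest can be illustrated with a simplified HLS-style manifest. The choice of `#EXT-X-DATERANGE` as the event tag is an assumption for the sketch; real services may signal events with other tag types.

```python
def manifest_event_tags(manifest_text, prefix="#EXT-X-DATERANGE"):
    """Collect tags that indicate events from a simplified HLS-style manifest."""
    return [line for line in manifest_text.splitlines()
            if line.startswith(prefix)]

manifest = "\n".join([
    "#EXTM3U",
    "#EXTINF:6.0,",
    "seg1.ts",
    '#EXT-X-DATERANGE:ID="ad-break-1",CLASS="ad"',
])
manifest_event_tags(manifest)
# ['#EXT-X-DATERANGE:ID="ad-break-1",CLASS="ad"']
```

A detected tag like this could then be matched against the user's priority preferences to decide whether to switch content items.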
Digital transport adapter
One or more computing devices may be configured to identify information corresponding to a program change request associated with a multi-program data transmission. The information may comprise at least a link to a desired program within the multi-program data transmission. The one or more computing devices may communicate the link to the desired program to a client device over a specified time period. After the time period, the one or more computing devices may communicate the desired program to the client device using a single program data transmission. The single program data transmission may be derived from the multi-program data transmission.