Patent classifications
H04N21/8549
SUPPLEMENTAL AUDIO GENERATION SYSTEM IN AN AUDIO-ONLY MODE
Systems and methods for generating supplemental audio for an audio-only mode are disclosed. For example, a system generates for output a content item that includes video and audio. In response to determining that an audio-only mode is activated, the system determines that a portion of the content item is not suitable to play in the audio-only mode. In response to determining that the portion of the content item is not suitable to play in the audio-only mode, the system generates for output supplemental audio associated with the content item during the portion of the content item.
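The decision flow in this abstract can be sketched as a simple planner: walk the content item's portions and substitute supplemental audio wherever a portion is not suitable for audio-only playback. The `Segment` fields, the suitability flag, and the supplemental descriptions are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float          # seconds from the start of the content item
    end: float
    audio_suitable: bool  # True if this portion plays acceptably as audio alone
    description: str      # supplemental audio associated with this portion

def plan_audio_only_output(segments):
    """Map each portion of the content item to an audio source for
    audio-only mode: the original audio track where the portion is
    suitable, and supplemental audio where it is not."""
    plan = []
    for seg in segments:
        if seg.audio_suitable:
            plan.append((seg.start, seg.end, "original_audio"))
        else:
            # Portion not suitable in audio-only mode: output supplemental
            # audio associated with the content item during this portion.
            plan.append((seg.start, seg.end, "supplemental:" + seg.description))
    return plan
```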
Information processing apparatus and information processing method
An information processing apparatus enables a clip image, to be used as a highlight image, a replay image, or the like in a broadcast, to be generated easily and precisely. To this end, the apparatus performs first processing that converts a received image signal into an image signal for real-time processing and transmits it to an external analysis engine, and second processing that receives event extraction information from the analysis engine and uses it to generate setting information for a clip image.
Dynamic audiovisual segment padding for machine learning
Techniques for padding audiovisual clips (for example, audiovisual clips of sporting events) for the purpose of causing the clip to have a predetermined duration so that the padded clip can be evaluated for viewer interest by a machine learning (ML) algorithm. The unpadded clip is padded with audiovisual segment(s) that will cause the padded clip to have a level of viewer interest that it would have if the unpadded clip had been longer. In some embodiments the padded segments are synthetic images generated by a generative adversarial network such that the synthetic images would have the same level of viewer interest (as adjudged by an ML algorithm) as if the unpadded clip had been shot to be longer.
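The core mechanism here, padding a clip to a predetermined duration before ML scoring, can be sketched as follows. The `synthesize` callable is a hypothetical stand-in for the generative adversarial network described in the abstract; the default of repeating the last real frame is a trivial placeholder, not the claimed synthesis.

```python
def pad_clip(frames, target_len, synthesize=None):
    """Pad an audiovisual clip (a list of frames) to a predetermined
    length so it can be evaluated by a fixed-duration ML model.

    `synthesize(frames, i)` stands in for the generative model (e.g. a
    GAN) producing the i-th synthetic continuation frame; by default the
    last real frame is simply repeated.
    """
    if synthesize is None:
        synthesize = lambda fs, i: fs[-1]
    if len(frames) >= target_len:
        # Already long enough: truncate to the predetermined duration.
        return frames[:target_len]
    padding = [synthesize(frames, i) for i in range(target_len - len(frames))]
    return frames + padding
```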
VIDEO PROCESSING METHOD AND APPARATUS, READABLE MEDIUM AND ELECTRONIC DEVICE
The present disclosure relates to a video processing method and apparatus, a readable medium, and an electronic device. The method includes: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for the first highlight segment, displaying an editing page corresponding to the first highlight segment, where the editing page enables a user to perform an editing operation on the first highlight segment; processing the first highlight segment according to the user's editing operation to obtain a second highlight segment; and posting the second highlight segment in response to receiving a posting instruction for the second highlight segment.
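The recognize-edit-post flow of this method can be summarized as a three-stage pipeline. The callables here (`recognize`, `edit`, `post`) are hypothetical stand-ins for the highlight-recognition model, the editing page, and the posting service; none are named in the abstract.

```python
def highlight_pipeline(target_video, recognize, edit, post):
    """Illustrative flow of the claimed method: recognize a first
    highlight segment in the target video, let the user's editing
    operation turn it into a second highlight segment, then post it."""
    first_segment = recognize(target_video)   # highlight recognition
    second_segment = edit(first_segment)      # user's editing operation
    return post(second_segment)               # posting instruction received
```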
METHOD FOR GENERATING TARGET VIDEO, APPARATUS, SERVER, AND MEDIUM
A method for generating a target video, an apparatus, a server, and a medium are provided. The method includes: obtaining live broadcast stream data, where the live broadcast stream data comprises video data and at least one of voice data and live broadcast interaction data; processing the live broadcast stream data and generating, according to a target object included in the processing result, a corresponding video metric value and at least one of a corresponding voice metric value and interaction metric value; generating an overall metric value for the live broadcast stream data according to the generated metric values; and, in response to determining that the overall metric value for the live broadcast stream data satisfies a preset condition, generating a target video on the basis of the live broadcast stream data.
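The aggregation step can be sketched as combining whichever per-modality metric values are available into one overall value and comparing it against a preset condition. The equal-weight averaging and the 0.7 threshold are assumptions; the abstract specifies neither.

```python
def overall_metric(video, voice=None, interaction=None):
    """Combine the available metric values (video is required; voice and
    interaction are optional) into an overall metric value by simple
    averaging. The weighting scheme is an illustrative assumption."""
    values = [v for v in (video, voice, interaction) if v is not None]
    return sum(values) / len(values)

def maybe_generate_target_video(stream_data, video, voice=None,
                                interaction=None, threshold=0.7):
    """Generate a target video only when the overall metric value for
    the live broadcast stream data satisfies the preset condition."""
    if overall_metric(video, voice, interaction) >= threshold:
        return ("target_video", stream_data)
    return None
```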
VIDEO PROCESSING METHOD, APPARATUS, READABLE MEDIUM AND ELECTRONIC DEVICE
The present disclosure relates to a video processing method and apparatus, a readable medium and an electronic device. The method includes: dividing a target video to obtain a target video clip; determining, according to the video frame images contained in the target video clip, a quality score corresponding to the target video clip; and displaying the quality score corresponding to the target video clip on a quality score display timeline at the time position where the target video clip appears in the target video. This provides the user with a visual display of each clip's quality score and a reference for selecting clips, saving the time the user would otherwise spend viewing the target video clips.
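The divide-score-display steps can be sketched as mapping per-clip quality scores onto a timeline keyed by each clip's time position. How an individual frame is scored (sharpness, aesthetics, etc.) is not specified in the abstract, so the sketch assumes per-frame scores are computed upstream and takes the clip score as their mean.

```python
def clip_quality_timeline(frame_scores, clip_len):
    """Divide a video (represented as a per-frame score sequence) into
    clips of `clip_len` frames and map each clip's quality score to the
    time position where the clip appears in the target video."""
    timeline = []
    for start in range(0, len(frame_scores), clip_len):
        clip = frame_scores[start:start + clip_len]
        quality = sum(clip) / len(clip)   # clip score = mean frame score
        timeline.append((start, quality)) # (time position, quality score)
    return timeline
```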
System and method for presenting contextual clips for distributed content
Systems and methods for presenting contextual clips for distributed content are disclosed. Some embodiments include receiving an input for presenting content while the content is currently being distributed at a first distribution time point within the content, transmitting a request for contextual content prior to the first distribution time point, receiving information for displaying a plurality of contextual content clips distributed prior to the first distribution time point, wherein each of the plurality of contextual content clips corresponds to an event depicted in the content, displaying the plurality of contextual content clips using the received information, and displaying the content at a second distribution time point after all of the plurality of contextual content clips have been displayed.
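The selection and playback ordering described above can be sketched as: gather the event clips distributed before the first distribution time point, play them all, then resume the content at the second time point. The `(time_point, clip_id)` event representation is a hypothetical simplification.

```python
def contextual_clips_before(events, first_time_point):
    """Select the contextual content clips distributed prior to the
    first distribution time point, one per depicted event, in order.
    `events` is a list of (time_point, clip_id) pairs."""
    return [clip for t, clip in events if t < first_time_point]

def playback_sequence(events, first_time_point, second_time_point):
    """Display all prior contextual clips, then display the content at
    the second distribution time point."""
    sequence = contextual_clips_before(events, first_time_point)
    sequence.append(("resume", second_time_point))
    return sequence
```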