H04N21/43

SYNCHRONIZED RECORDING OF AUDIO AND VIDEO WITH WIRELESSLY CONNECTED VIDEO AND AUDIO RECORDING DEVICES
20230046779 · 2023-02-16

A method of synchronizing video and audio when recording with a video recording device and an audio recording device configured for wireless data communication with each other, including: activating a recording at the video recording device; sending an audio recording command from the video recording device to the audio recording device; storing a recorded video data stream in a memory of the video recording device; receiving an audio data stream from the audio recording device at the video recording device and storing it in the memory of the video recording device; determining a delay of the stored audio data stream relative to the stored video data stream; and joining the stored audio data stream and the stored video data stream together, taking the determined delay into consideration, in order to provide a recording data stream with synchronized video and audio.
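The delay-determination step above can be illustrated with a minimal sketch. The patent does not specify a technique; this assumes both streams are available as sampled amplitude lists and estimates the offset by maximizing their cross-correlation over non-negative lags (function names are illustrative):

```python
def estimate_delay(video_audio, ext_audio):
    """Return the lag (in samples) at which the externally recorded audio
    best matches the audio captured alongside the video, found by
    maximizing the cross-correlation over non-negative lags."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(ext_audio)):
        score = sum(a * b for a, b in zip(ext_audio[lag:], video_audio))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag


def align(ext_audio, delay):
    """Drop the leading samples of the external audio so it lines up with
    the stored video data stream before the two streams are joined."""
    return ext_audio[delay:]
```

A real implementation would correlate in the frequency domain and handle negative lags, but the trim-by-estimated-delay idea is the same.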

INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD

An information processing system for obtaining an audio content file for video data providing video content representing a sport event, including: a receiver configured to receive a data stream including the video data; a preference data obtainer configured to obtain preference data, wherein the preference data indicate a selected competitor participating in the sport event; a category identifier obtainer configured to obtain a category identifier from a machine learning algorithm into which the video data is input, wherein the machine learning algorithm is trained to classify a scene represented in the video content into a category of a predetermined set of categories associated with the sport event, wherein the category identifier indicates the category into which the scene is classified; an audio content file obtainer configured to obtain, based on the obtained category identifier and the obtained preference data, the audio content file from a prestored set of audio content files, wherein the audio content file provides audio content associated with the category of the scene and the preference data; and a synchronizer configured to synchronize the audio content and the video content for synchronized playback of the scene by a media player configured to play back the video content and the audio content file.
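The audio content file obtainer's lookup step can be sketched as a simple mapping from (category identifier, preference data) to a prestored file. All categories, competitor names, and file names below are hypothetical, not from the patent:

```python
# Illustrative prestored set of audio content files, keyed by
# (scene category, preferred competitor). All names are hypothetical.
AUDIO_FILES = {
    ("goal", "team_a"): "cheer_team_a.ogg",
    ("goal", "team_b"): "cheer_team_b.ogg",
    ("foul", "team_a"): "boo_team_a.ogg",
}


def obtain_audio_file(category_id, preferred_competitor,
                      fallback="ambient_crowd.ogg"):
    """Select the audio content file associated with both the category the
    classifier assigned to the scene and the viewer's preference data,
    falling back to neutral audio when no match is prestored."""
    return AUDIO_FILES.get((category_id, preferred_competitor), fallback)
```

The classifier that produces `category_id` and the synchronizer that aligns the chosen file with the scene are out of scope of this sketch.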

METHODS AND SYSTEMS FOR SYNCHRONIZING PLAYBACK OF MEDIA CONTENT ITEMS

Systems and methods are described for synchronizing playback of media content items. A first media content item is displayed at first user equipment and second user equipment. At least one playback parameter relating to the operation of the first user equipment and/or the second user equipment is determined. A determination is made as to whether the at least one playback parameter is less than a first playback parameter threshold. In response to determining that the at least one playback parameter is less than the first playback parameter threshold, at least one playback characteristic of the first media content item at the first user equipment and/or the second user equipment is adjusted to cause the display of the first media content item at the first user equipment and the second user equipment to be synchronized.
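The threshold-then-adjust logic can be sketched as follows. The specific playback parameter (network bandwidth), threshold value, and rate adjustment are assumptions for illustration; the abstract leaves them open:

```python
from dataclasses import dataclass


@dataclass
class Device:
    name: str
    playback_rate: float = 1.0   # 1.0 = normal speed
    position_s: float = 0.0      # current playback position in seconds


def synchronize(first: Device, second: Device,
                bandwidth_mbps: float, threshold_mbps: float = 5.0) -> None:
    """If the measured playback parameter (here: network bandwidth) falls
    below the threshold, adjust a playback characteristic (here: the
    leading device's playback rate) so the two displays converge."""
    if bandwidth_mbps < threshold_mbps:
        # Slow the device that is ahead until the other catches up.
        if first.position_s > second.position_s:
            first.playback_rate = 0.9
        else:
            second.playback_rate = 0.9
```

When the parameter stays at or above the threshold, no characteristic is adjusted and both devices keep playing at normal rate.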

Automated voice translation dubbing for prerecorded video

A method for aligning a translation of original caption data with an audio portion of a video is provided. The method includes identifying, by a processing device, original caption data for a video that includes a plurality of caption character strings. The processing device identifies speech recognition data that includes a plurality of generated character strings and associated timing information for each generated character string. The processing device maps the plurality of caption character strings to the plurality of generated character strings using assigned values indicative of semantic similarities between character strings. The processing device assigns timing information to the individual caption character strings based on timing information of mapped individual generated character strings. The processing device aligns a translation of the original caption data with the audio portion of the video using assigned timing information of the individual caption character strings.
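The mapping step can be sketched as below. The patent speaks of values indicative of semantic similarity; this sketch approximates that with character-level similarity via `difflib.SequenceMatcher`, which is a simplification, and borrows each matched ASR string's timing for the caption:

```python
from difflib import SequenceMatcher


def map_captions_to_asr(captions, asr):
    """Map each caption character string to the most similar generated
    (speech-recognized) string, then assign that string's timing to the
    caption.

    captions: list of caption strings
    asr: list of (generated_string, start_time_s) tuples
    Returns: list of (caption, start_time_s) tuples.
    """
    timed = []
    for cap in captions:
        best = max(asr, key=lambda g: SequenceMatcher(
            None, cap.lower(), g[0].lower()).ratio())
        timed.append((cap, best[1]))
    return timed
```

The translated captions would then inherit these assigned times, keeping the dubbed audio aligned with the original speech segments.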

Streaming synchronized media content to separate devices
11582300 · 2023-02-14

Described are system, apparatus, article of manufacture, method, or computer program product embodiments for controlling streaming of media content. An embodiment operates by halting a presentation of future content from a buffer upon determining that the buffer is exhausted of content to present. The embodiment includes receiving one or more packets over a network connection, the one or more packets including media information corresponding to a first portion of streaming media content, in which the first portion corresponds to a second portion of the streaming media content. The one or more packets are stored in a buffer as buffered content. Responsive to determining that the network connection is not experiencing a burst condition, the buffer is trimmed. Then, presentation of buffered content is resumed and the first portion is caused to be presented in sync with the second portion.
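The halt / fill / trim / resume cycle can be sketched as follows. The trim policy (dropping the oldest packets down to a small retained window) is an assumption for illustration; the abstract does not say how the buffer is trimmed:

```python
from collections import deque


class StreamBuffer:
    """Minimal sketch of the halt / fill / trim / resume cycle."""

    def __init__(self):
        self.packets = deque()
        self.presenting = True

    def present_next(self):
        """Present the next buffered packet; halt when exhausted."""
        if not self.packets:
            self.presenting = False   # buffer exhausted of content
            return None
        return self.packets.popleft()

    def receive(self, packet):
        """Store a received packet as buffered content."""
        self.packets.append(packet)

    def resume(self, burst: bool, keep_last: int = 3):
        """Trim stale buffered content only when the connection is not
        in a burst condition, then resume presentation."""
        if not burst:
            while len(self.packets) > keep_last:
                self.packets.popleft()
        self.presenting = True
```

Trimming before resuming lets the device skip ahead so its first portion can be presented in sync with the second portion playing elsewhere.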

Methods and apparatuses for combining and distributing user enhanced video/audio content

Methods and apparatuses are provided, which may be implemented to combine and distribute user enhanced video and/or audio content.

Apparatus, systems and methods for content availability notification
11582521 · 2023-02-14

Systems and methods are operable to notify a user of content availability. An exemplary embodiment receives a content availability notification request that specifies at least one content of interest, searches current electronic program guide (EPG) information to identify the content of interest, determines that the information identifying the specified content of interest is unavailable based upon the search of the current EPG information, generates a content availability reminder that is associated with the specified content of interest, monitors a content database to determine an availability of the specified content of interest identified in the content availability reminder, determines that the specified content of interest is available when the monitored content database indicates availability of the specified content of interest, and generates a content availability reminder notification that indicates at least a title of the specified content of interest.
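The reminder workflow above can be sketched in two steps: register a reminder when the EPG search comes up empty, then monitor the content database and notify with the title once the content becomes available. All data structures and message formats here are illustrative:

```python
def request_notification(title, epg, reminders):
    """Search the current EPG information for the content of interest; if
    it is not found there, register a content availability reminder."""
    if title in epg:
        return f"Scheduled: {title}"
    reminders.add(title)
    return None


def check_reminders(reminders, content_db):
    """Monitor the content database and emit a notification naming each
    reminded title that has become available, clearing its reminder."""
    available = sorted(t for t in reminders if t in content_db)
    for t in available:
        reminders.discard(t)
    return [f"Now available: {t}" for t in available]
```

`check_reminders` would typically run periodically or on content-database updates, matching the "monitors a content database" step.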

Video processing method and apparatus, and storage medium
11582506 · 2023-02-14

Disclosed are a video processing method and apparatus, and a storage medium. The method includes: receiving a selection instruction indicating that one or more video streams, or key frames of the one or more video streams, have been selected for browsing; adding video stream thumbnails generated from the one or more video streams, or key frame thumbnails generated from the key frames, to a scene thumbnail to generate a picture layout stream according to the selection instruction, where the scene thumbnail is generated according to a scene displayed in an augmented reality/virtual reality (AR/VR) interface; and presenting the picture layout stream in the AR/VR interface, thereby providing a virtual layout interface of multiple video stream pictures.

Video-based competition platform

A video-based competition platform supports video-based competitions between possibly geographically distributed competitors. The video-based competition platform enables users of electronic communication devices to create, compete in, view, and vote on video-based competitions. In at least some embodiments, a video-based competition is presented to a user with two or more video clips played in conjunction. The video clips may be synchronized to a time base and/or a common audio clip.

ACTION SYNCHRONIZATION FOR TARGET OBJECT

A method for synchronizing an action of a target object with source audio is provided. Facial parameter conversion is performed on an audio parameter of the source audio at different time periods to obtain source parameter information of the source audio at the respective time periods. Parameter extraction is performed on a target video that includes the target object to obtain target parameter information of the target video. Image reconstruction is performed on the target object in the target video based on the source parameter information of the source audio and the target parameter information of the target video, to obtain a reconstructed image. Further, a synthetic video is generated based on the reconstructed image, the synthetic video including the target object, and the action of the target object being synchronized with the source audio.