H04N21/4856

VIDEO PLAYBACK METHOD AND DEVICE

Embodiments of the present disclosure provide a method and device for playing a video. The method comprises: in the process of playing a video, displaying a first subtitle in a subtitle component shown in a playback interface of the video, and playing first audio corresponding to the first subtitle; in response to a first trigger operation by a user on the subtitle component, displaying a first pop-up window used to switch the displayed subtitle from the first subtitle to a second subtitle; and in response to a second trigger operation by the user on the first pop-up window, displaying the second subtitle in the subtitle component and playing second audio corresponding to the second subtitle. The video playback method and device provided by the embodiments of the present disclosure can resolve the problem of users being unable to understand the subtitles and audio while watching a video, thereby improving the user experience.
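The claimed interaction flow can be sketched as a small state machine: a first trigger on the subtitle component opens the pop-up, and a second trigger on the pop-up switches both the subtitle and the corresponding audio. All class and method names below are illustrative assumptions, not terms from the disclosure.

```python
# Illustrative sketch of the two-trigger subtitle/audio switching flow.
class SubtitleComponent:
    def __init__(self, tracks):
        # tracks: mapping of language -> (subtitle_text, audio_id)
        self.tracks = tracks
        self.current = next(iter(tracks))   # first track plays by default
        self.popup_open = False

    def first_trigger(self):
        """User taps the subtitle component: show the switching pop-up."""
        self.popup_open = True
        return list(self.tracks)            # languages offered in the pop-up

    def second_trigger(self, language):
        """User picks a language in the pop-up: switch subtitle and audio."""
        if not self.popup_open:
            raise RuntimeError("pop-up is not open")
        self.current = language
        self.popup_open = False
        return self.tracks[language]        # (subtitle, audio) now playing


player = SubtitleComponent({"en": ("Hello", "audio-en"), "fr": ("Bonjour", "audio-fr")})
player.first_trigger()
subtitle, audio = player.second_trigger("fr")
```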

MANAGING INTERACTIVE SUBTITLE DATA
20180014079 · 2018-01-11

Embodiments of the present application relate to a method, apparatus, and system for processing subtitle data. The method includes: dividing subtitle data into multiple subtitle groups according to display-time information of the subtitle data relative to a played object, wherein a subtitle group comprises at least one subtitle data entry, and wherein a subtitle data entry comprises subtitle content, a subtitle display time relative to the played object, and a speed of subtitle motion; selecting a piece of subtitle data from a subtitle group according to the display-time information of the played object; and causing the selected piece of subtitle data to be displayed on a track such that it does not overlap with or pass another piece of subtitle data displayed on the track.
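The grouping and non-overlap constraint above can be sketched with simple kinematics: entries are bucketed by display time, and a new entry is admitted to a track only if, given its speed, it cannot catch the entry already moving there before that entry exits. The window size, track width, and function names are assumptions for illustration.

```python
# Sketch: time-window grouping plus a catch-up test for track placement.
def group_by_time(entries, window=10.0):
    """Divide (content, display_time, speed) entries into time-window groups."""
    groups = {}
    for entry in entries:
        groups.setdefault(int(entry[1] // window), []).append(entry)
    return groups

def fits_on_track(prev, new, track_width=100.0):
    """A later, faster entry must not catch the earlier one before it exits."""
    (_, t0, v0), (_, t1, v1) = prev, new
    head_start = (t1 - t0) * v0            # distance prev has already moved
    if head_start <= 0:
        return False                       # would start on top of prev
    if v1 <= v0:
        return True                        # can never catch up
    catch_time = head_start / (v1 - v0)    # time for new to close the gap
    exit_time = (track_width - head_start) / v0
    return catch_time > exit_time          # prev leaves the track first
```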

Automated generation of banner images
11711593 · 2023-07-25

Example systems and methods for automated generation of banner images are disclosed. A program identifier associated with a particular media program may be received by a system, and used for accessing a set of iconic digital images and corresponding metadata associated with the particular media program. The system may select a particular iconic digital image for placing a banner of text associated with the particular media program, by applying an analytical model of banner-placement criteria to the iconic digital images. The system may apply another analytical model for banner generation to the particular iconic image to determine (i) dimensions and placement of a bounding box for containing the text, (ii) segmentation of the text for display within the bounding box, and (iii) selection of font, text size, and font color for display of the text. The system may store the particular iconic digital image and banner metadata specifying the banner.
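A toy stand-in for the two-model pipeline can make the flow concrete: score candidate images for banner placement, then derive a line segmentation of the text that fits the bounding box. The scoring criterion, character width, and all names here are invented for illustration; the patent's analytical models are learned, not hand-coded.

```python
# Illustrative sketch of banner image selection and text segmentation.
def pick_banner_image(images):
    """Choose the image whose largest clear region best fits a banner."""
    # Each candidate: (image_id, clear_width, clear_height) in pixels.
    return max(images, key=lambda img: img[1] * img[2])

def layout_banner(text, box_width, char_width=12):
    """Greedily segment text into lines that fit the bounding-box width."""
    per_line = max(1, box_width // char_width)  # characters per line
    words, lines, line = text.split(), [], ""
    for word in words:
        candidate = (line + " " + word).strip()
        if len(candidate) <= per_line:
            line = candidate
        else:
            lines.append(line)
            line = word
    lines.append(line)
    return lines
```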

Audio trick mode
11570396 · 2023-01-31

Various embodiments of apparatus, systems, and/or methods are described for independently controlling an audio stream relative to a video stream in an audio trick mode. In one example, an audio stream and a video stream are received, where the audio stream comprises frames that correspond to frames of the video stream. The audio and video streams are played from a first time to a second time at a first speed. An input to time-shift the audio stream independently of the video stream is received, after which the audio stream is time-shifted back to the first time. The audio stream may then be re-played from the first time to the second time at a second speed different from the first speed.
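The essence of the trick mode is that the audio position and rate become independent of the video's. A minimal sketch, with interface names assumed for illustration:

```python
# Sketch: video keeps its position while audio is shifted back and replayed.
class TrickModePlayer:
    def __init__(self):
        self.audio_pos = 0.0
        self.video_pos = 0.0
        self.audio_speed = 1.0

    def play(self, start, end, speed=1.0):
        """Play both streams together from start to end at one speed."""
        self.audio_pos = self.video_pos = end
        self.audio_speed = speed

    def time_shift_audio(self, to_time, speed):
        """Shift only the audio back and replay it at a second speed."""
        self.audio_pos = to_time          # video_pos is left untouched
        self.audio_speed = speed


p = TrickModePlayer()
p.play(0.0, 60.0)                         # first pass at normal speed
p.time_shift_audio(0.0, 0.5)              # replay audio alone at half speed
```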

Systems and methods for providing media based on a detected language being spoken

Various embodiments provide media based on a detected language being spoken. In one embodiment, the system electronically detects which of a plurality of languages is being spoken by a user, such as during a conversation or while the user gives a voice command to the television. Based on which language is being spoken, the system electronically presents media to the user in the detected language. For example, the media may be television channels and/or programs in the detected language, and/or a program guide, such as a pop-up menu, listing such media in the detected language.
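The detect-then-filter pipeline can be sketched in a few lines. The detector here is a stub keyed on marker words purely for illustration; a real system would use speech recognition or a language-identification model.

```python
# Sketch: detect the spoken language, then filter the catalog by it.
def detect_language(utterance):
    """Toy detector keyed on a few marker words; purely illustrative."""
    markers = {"hola": "es", "bonjour": "fr", "hello": "en"}
    for word in utterance.lower().split():
        if word in markers:
            return markers[word]
    return "en"                           # assumed default

def build_guide(catalog, utterance):
    """Return the channels/programs whose language matches the speech."""
    lang = detect_language(utterance)
    return [title for title, language in catalog if language == lang]
```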

SUBTITLE RENDERING BASED ON THE READING PACE
20220414133 · 2022-12-29

Systems and methods for summarizing captions, configuring playback speed, and rewriting the caption file for a media asset are disclosed. The system determines whether to display the original captions or a summarized version of them, based on the user's language proficiency level, reading pace, and historical data; the summarized version can be generated either on demand or automatically when rewinds and pauses are detected. The caption file containing the original captions can be rewritten, and the system determines whether to stream the original caption file or the rewritten file to a media device based on user or system selections. In the absence of a caption file, or when the caption file cannot be summarized, playback of the media asset is slowed down to give the user additional reading time.
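The selection logic described above reduces to a small decision function: stream the original captions, stream a summarized rewrite, or, with no usable caption file, slow playback to buy the viewer reading time. The thresholds and names below are assumptions for illustration.

```python
# Sketch of the caption/playback-rate decision (thresholds are invented).
def choose_playback(caption_file, proficiency, reading_pace_wpm):
    """Return (caption_action, playback_rate) for a media asset."""
    if caption_file is None:
        return ("no_captions", 0.75)      # slow down: extra reading time
    if not caption_file.get("summarizable", True):
        return ("original", 0.75)         # can't summarize: slow down instead
    # Low proficiency or a slow reading pace triggers the summarized rewrite.
    if proficiency < 3 or reading_pace_wpm < 150:
        return ("summarized", 1.0)
    return ("original", 1.0)
```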

SUBTITLE RENDERING BASED ON THE READING PACE
20220414132 · 2022-12-29

Systems and methods for summarizing captions, configuring playback speed, and rewriting the caption file for a media asset are disclosed. The system determines whether to display the original captions or a summarized version of them, based on the user's language proficiency level, reading pace, and historical data; the summarized version can be generated either on demand or automatically when rewinds and pauses are detected. The caption file containing the original captions can be rewritten, and the system determines whether to stream the original caption file or the rewritten file to a media device based on user or system selections. In the absence of a caption file, or when the caption file cannot be summarized, playback of the media asset is slowed down to give the user additional reading time.

Electronic Devices and Corresponding Methods for Redirecting Event Notifications in Multi-Person Content Presentation Environments
20220408159 · 2022-12-22

An electronic device includes a communication device in electronic communication with a content presentation companion device operating as the primary display for the electronic device and with at least one augmented reality companion device. One or more sensors detect multiple persons within an environment while the content presentation companion device operates as the primary display. One or more processors redirect an event notification intended for presentation on the primary display to the augmented reality companion device while the content presentation companion device operates as the primary display and the multiple persons remain within the environment of the electronic device. When communicating with two augmented reality companion devices, the one or more processors can direct subtitles associated with a content offering, sometimes in different languages, to a first augmented reality companion device and a second augmented reality companion device respectively.
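The redirection rule can be sketched as a routing function: while the companion device is the primary display and more than one person is present, notifications go to the AR device instead of the shared screen, and with two AR devices a subtitle language can be routed to each. Function and device names are assumptions for illustration.

```python
# Sketch of the notification and subtitle routing rules described above.
def route_notification(primary_active, persons_in_room, ar_devices):
    """Pick the sink for an event notification."""
    if primary_active and persons_in_room > 1 and ar_devices:
        return ar_devices[0]              # keep it off the shared screen
    return "primary_display"

def route_subtitles(subtitle_tracks, ar_devices):
    """Send one subtitle language to each AR companion device."""
    return dict(zip(ar_devices, subtitle_tracks))
```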

MULTILINGUAL SUBTITLE SERVICE SYSTEM AND METHOD FOR CONTROLLING SERVER THEREOF
20220383228 · 2022-12-01

Proposed are a multilingual subtitle service system and a method for controlling a service server thereof. The subtitle service system includes: a subtitle service server configured to, in response to a request from a worker, provide a subtitle content creating tool for creating subtitle content for a content image requested by a client, and to evaluate the worker's task performance based on the subtitle content the worker creates; and a user terminal device configured to access the subtitle service server to transmit project information on the content image requested by the client and, in response to a request from the worker, display a subtitle service window including the subtitle content creating tool provided by the subtitle service server.

Electronic devices and corresponding methods for redirecting event notifications in multi-person content presentation environments
11595732 · 2023-02-28

An electronic device includes a communication device in electronic communication with a content presentation companion device operating as the primary display for the electronic device and with at least one augmented reality companion device. One or more sensors detect multiple persons within an environment while the content presentation companion device operates as the primary display. One or more processors redirect an event notification intended for presentation on the primary display to the augmented reality companion device while the content presentation companion device operates as the primary display and the multiple persons remain within the environment of the electronic device. When communicating with two augmented reality companion devices, the one or more processors can direct subtitles associated with a content offering, sometimes in different languages, to a first augmented reality companion device and a second augmented reality companion device respectively.