H04N21/4884

Automated quality assessment of translations

Technologies are provided for automated quality assessment of translations. In some embodiments, quality of a translation can be assessed by generating a machine-learning (ML) model that classifies the translation as pertaining to one of three quality categories. A first quality category can include, for example, translations that are deemed satisfactory. A second quality category can include, for example, translations that are deemed to require editing before being deemed satisfactory. A third quality category can include, for example, translations that are deemed unsatisfactory. The generated ML model can then be applied to the translation and a corresponding sentence in a source language in order to classify the translation as pertaining to one of the three categories.
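A minimal sketch of the final classification step described above, assuming the ML model emits a scalar quality score in [0, 1]; the threshold values and category names are illustrative, not taken from the patent:

```python
from enum import Enum

class Quality(Enum):
    SATISFACTORY = "satisfactory"        # first category
    NEEDS_EDITING = "needs_editing"      # second category: edit before acceptance
    UNSATISFACTORY = "unsatisfactory"    # third category

def classify_translation(score: float,
                         ok_threshold: float = 0.8,
                         edit_threshold: float = 0.5) -> Quality:
    """Bucket a model's quality score for a (source sentence, translation)
    pair into one of the three quality categories."""
    if score >= ok_threshold:
        return Quality.SATISFACTORY
    if score >= edit_threshold:
        return Quality.NEEDS_EDITING
    return Quality.UNSATISFACTORY
```

In practice the score would come from the trained classifier applied to the translation and its source-language sentence; the bucketing itself is only thresholding.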

SYSTEMS AND METHODS FOR HIGHLIGHTING CONTENT WITHIN MEDIA ASSETS
20230007336 · 2023-01-05 ·

Systems and methods are described herein for highlighting objects within primary content that are likely to be of interest to a user viewing the primary content. More particularly, when the system receives a segment of primary content to be displayed on a user equipment device for consumption, the system analyzes the received segment to identify an object within the received segment. The system then checks a database storing supplemental content to determine whether supplemental content associated with the identified object is available. When supplemental content associated with the identified object is available within the database, the system modifies the received segment of the primary content to highlight the identified object and displays the modified segment of the primary content on the user equipment device for consumption.
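The database lookup and segment-modification steps can be sketched as follows; the segment representation and database layout are assumptions for illustration:

```python
def objects_to_highlight(detected_objects, supplemental_db):
    """Keep only the detected objects that have supplemental content."""
    return [obj for obj in detected_objects if obj in supplemental_db]

def modify_segment(segment, supplemental_db):
    """Annotate a segment (modeled here as a dict) with the objects
    whose supplemental content is available, leaving the rest unchanged."""
    highlighted = objects_to_highlight(segment["objects"], supplemental_db)
    return {**segment, "highlighted": highlighted}
```

A real implementation would detect objects with image analysis and render the highlight visually; the sketch captures only the availability check.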

SYSTEMS AND METHODS OF PRESENTING VIDEO OVERLAYS

Systems and methods are provided for relocating an overlay overlapping information in content. The systems and methods may comprise receiving a content item, the content item comprising a video image, and determining a first screen position of an information box (e.g., a score box) in the video image. Determining may be performed with image analysis and/or a machine learning model. The system receives an overlay image (e.g., a channel logo) with a second screen position and determines if the second screen position (e.g., for the logo) overlaps the first screen position (e.g., for the score). In response to determining the second screen position (e.g., of the logo) overlaps the first screen position (e.g., the score), the system modifies the second screen position (e.g., for the logo). Then the system generates for display the overlay image on the video in the modified screen position. The system may not relocate the overlay if the overlay is a high priority.
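The overlap test and relocation step can be sketched with axis-aligned rectangles; the rectangle layout, the mirrored-position heuristic, and the priority rule are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def overlaps(self, other: "Rect") -> bool:
        """Axis-aligned rectangle intersection test."""
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x
                    or self.y + self.h <= other.y or other.y + other.h <= self.y)

def place_overlay(info_box: Rect, overlay: Rect, frame_w: int,
                  high_priority: bool = False) -> Rect:
    """Return the overlay's screen position, moved away from the info box
    unless the overlay is high priority or there is no overlap."""
    if high_priority or not overlay.overlaps(info_box):
        return overlay
    # One simple relocation: mirror the overlay to the opposite side
    # of the frame horizontally. A full system would try candidates.
    return Rect(frame_w - overlay.x - overlay.w, overlay.y, overlay.w, overlay.h)
```

For example, a logo overlapping a score box in the top-left corner would be mirrored to the top-right, while a high-priority overlay stays put.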

SYSTEMS AND METHODS FOR INSERTING EMOTICONS WITHIN A MEDIA ASSET
20230007359 · 2023-01-05 ·

Systems and methods are described herein for inserting emoticons within a media asset based on an audio portion of the media asset. Each audio portion of a media asset is associated with a respective part of speech, and an emotion corresponding to the audio portion of the media asset is determined. A corresponding emoticon is identified based on the determined emotion in the audio portion and is caused to be presented at a location within the media asset.
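The emotion-to-emoticon mapping and insertion step can be sketched as below; the emotion labels, the mapping, and the choice to append the emoticon to a subtitle line are all illustrative assumptions:

```python
# Illustrative mapping from detected emotion labels to emoticons.
EMOTICONS = {"joy": "😊", "sadness": "😢", "anger": "😠"}

def emoticon_for(emotion: str, default: str = "") -> str:
    """Look up the emoticon for a detected emotion, if any."""
    return EMOTICONS.get(emotion, default)

def insert_emoticon(subtitle: str, emotion: str) -> str:
    """Append the matching emoticon to the subtitle for an audio portion;
    leave the subtitle unchanged when no emoticon matches."""
    emo = emoticon_for(emotion)
    return f"{subtitle} {emo}".rstrip()
```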

DISPLAY DEVICE AND DISPLAY SYSTEM

The present disclosure relates to a display device and a display system for providing lyrics when reproducing music of an external device, regardless of a connection state of the external device. The display device includes: a display; a controller configured to receive a music reproduction command through an external device; and an audio output interface configured to output music received from the external device, wherein, when the controller receives the music reproduction command, the controller is configured to request lyric information from the external device, and when the controller receives the lyric information from the external device, the controller is configured to display lyrics through the display while outputting the music.

CAPTION ADJUSTMENT METHOD AND DEVICE, TERMINAL, AND STORAGE MEDIUM
20220417608 · 2022-12-29 ·

The present disclosure provides a caption adjustment method, a caption adjustment device, a terminal and a storage medium. The method includes: sending a ranging signal to a remote control device, and receiving the ranging signal returned by the remote control device; determining a distance between a terminal and the remote control device according to a time difference between sending of the ranging signal and reception of the ranging signal; and adjusting a display size of a caption resource currently played by the terminal according to the distance between the terminal and the remote control device.
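The two computations described above, distance from the round-trip time of the ranging signal and caption size from that distance, can be sketched as follows. The propagation speed (sound, ~343 m/s, as for an ultrasonic ranging signal) and the scale factors are assumptions; an RF ranging signal would use the speed of light instead:

```python
SPEED_OF_SOUND_M_S = 343.0  # assumed ultrasonic ranging signal

def distance_from_round_trip(rtt_seconds: float) -> float:
    """The signal travels terminal -> remote -> terminal, so the one-way
    distance is speed times half the measured time difference."""
    return SPEED_OF_SOUND_M_S * rtt_seconds / 2.0

def caption_font_px(distance_m: float, base_px: int = 24) -> int:
    """Scale the caption display size roughly linearly with viewing
    distance, never going below the base size."""
    return max(base_px, round(base_px * distance_m / 2.0))
```

For example, a 20 ms round trip puts the remote control about 3.4 m from the terminal, so the caption is drawn larger than for a viewer sitting 1 m away.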

SUBTITLE RENDERING BASED ON THE READING PACE
20220414133 · 2022-12-29 ·

Systems and methods for summarizing captions, configuring playback speed, and rewriting the caption file for a media asset are disclosed. The system determines whether to display the original captions or a summarized version of the captions, which is generated based on the user's language proficiency level, reading pace, and historical data, and can be produced either on demand or automatically when rewinds and pauses are detected. The caption file, which includes the original captions, can be rewritten. The system determines whether to stream a caption or a rewritten file to a media device based on user or system selections. In the absence of a caption file, or when the caption file cannot be summarized, the playback speed of the media asset is slowed down to provide additional reading time to the user.
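The decision logic above can be sketched as a single selection function; comparing paces in words per minute and the returned strategy labels are illustrative assumptions:

```python
def choose_caption_strategy(has_caption_file: bool, summarizable: bool,
                            reading_pace_wpm: float,
                            caption_pace_wpm: float) -> str:
    """Pick original captions, summarized captions, or slowed playback,
    depending on whether the user's reading pace keeps up with the captions."""
    if caption_pace_wpm <= reading_pace_wpm:
        # The user can keep up: show captions as-is when they exist.
        return "original_captions" if has_caption_file else "no_captions"
    if has_caption_file and summarizable:
        return "summarized_captions"
    # No caption file, or it cannot be summarized: slow the playback
    # to give the user additional reading time.
    return "slow_playback"
```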

GESTURE-BASED PARENTAL CONTROL SYSTEM
20220417600 · 2022-12-29 ·

Systems and methods for presenting user-selectable options for parental control in response to detecting a triggering action by a user are disclosed. A system generates for output a first content item on a device. The system identifies a first user and a second user in proximity to the device and determines that a first gesture is performed by the first user wherein the first gesture is covering the eyes of the second user. In response to determining that the first gesture is performed, the system presents a selectable option for a user input such as (a) skipping a portion of the first content item; (b) lowering the volume; (c) removing the video of the first content item; or (d) presenting a second content item instead of presenting the first content item. In response to receiving a user input selecting the selectable option, the system performs an action corresponding to the selectable option.
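The option menu triggered by the eye-covering gesture can be sketched as below; the option list mirrors the abstract, while the gesture label and the two-step present-then-select flow are assumptions:

```python
from typing import Optional

# Selectable parental-control options, as listed in the abstract.
PARENTAL_OPTIONS = {
    "a": "skip portion of first content item",
    "b": "lower volume",
    "c": "remove video of first content item",
    "d": "present second content item instead",
}

def handle_gesture(gesture: str, selection: Optional[str] = None):
    """When the triggering gesture is detected, present the selectable
    options; once the user selects one, return the action to perform."""
    if gesture != "cover_eyes":
        return None  # no triggering gesture detected
    if selection is None:
        return sorted(PARENTAL_OPTIONS)  # present the selectable options
    return PARENTAL_OPTIONS.get(selection)
```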

SUBTITLE RENDERING BASED ON THE READING PACE
20220414132 · 2022-12-29 ·

Systems and methods for summarizing captions, configuring playback speed, and rewriting the caption file for a media asset are disclosed. The system determines whether to display the original captions or a summarized version of the captions, which is generated based on the user's language proficiency level, reading pace, and historical data, and can be produced either on demand or automatically when rewinds and pauses are detected. The caption file, which includes the original captions, can be rewritten. The system determines whether to stream a caption or a rewritten file to a media device based on user or system selections. In the absence of a caption file, or when the caption file cannot be summarized, the playback speed of the media asset is slowed down to provide additional reading time to the user.