Patent classifications
H04N21/4856
Translating between spoken languages with emotion in audio and video media streams
Systems and methods are described herein for generating alternate audio for a media stream. The media system receives media requested by a user. The media comprises video and audio, the audio including words spoken in a first language. The media system stores the received media in a buffer as it is received. The media system separates the audio from the buffered media and determines an emotional state expressed by the spoken words of the first language. The media system translates the words spoken in the first language into words of a second language. Using the translated words, the media system synthesizes speech having the previously determined emotional state. The media system then retrieves the video of the received media from the buffer and synchronizes the synthesized speech with the video to generate the media content in the second language.
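The described flow (buffer, separate audio, detect emotion, translate, synthesize, resynchronize) can be sketched as below. All three helper stages are toy stand-ins: a real system would use speech emotion recognition, machine translation, and emotional text-to-speech, and the word lists and dictionary here are illustrative only.

```python
def detect_emotion(words):
    # Placeholder: a real system would classify prosodic and lexical cues.
    return "excited" if any(w.endswith("!") for w in words) else "neutral"

def translate(words, target_lang):
    # Placeholder dictionary-based "translation" into the second language.
    lut = {"hello": "hola", "world": "mundo"}
    return [lut.get(w.rstrip("!"), w) for w in words]

def synthesize(words, emotion):
    # Placeholder TTS: returns a tagged utterance instead of audio samples.
    return {"text": " ".join(words), "emotion": emotion}

def dub(buffered_words, target_lang="es"):
    # Detect emotion first, then translate, then synthesize speech
    # carrying the previously determined emotional state.
    emotion = detect_emotion(buffered_words)
    translated = translate(buffered_words, target_lang)
    return synthesize(translated, emotion)

print(dub(["hello", "world!"]))  # {'text': 'hola mundo', 'emotion': 'excited'}
```

The point of the ordering is that emotion is extracted from the original speech before translation, so the synthesized second-language speech can reproduce it even though the words change.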
Audiovisual content item data streams
A transmitting apparatus generates an audiovisual content item data stream (e.g. a transport stream) comprising a plurality of individual audiovisual data streams with audiovisual components for the content item. A generator (301-307) generates a first stream comprising both mandatory audio data and replaceable audio data for the audio representation, where the replaceable audio data is data that can be replaced by alternative audio data. A combiner (309) includes the resulting stream in the content item data stream. A receiving apparatus includes an extractor (403) which extracts the mandatory audio data from the received stream. A replacer (415) may replace the replaceable audio data with alternative audio data, and an output (415) can generate an audio signal from the mandatory and alternative audio data. The approach may specifically provide an improved and more flexible data stream for audiovisual content.
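The mandatory/replaceable split can be illustrated with a minimal receiver-side sketch. The dictionary layout and the track names are assumptions for illustration, not the patent's actual stream syntax.

```python
def render_audio(stream, alternative=None):
    # Mandatory data (e.g. music and effects) is always kept; the
    # replaceable component (e.g. an original dialogue track) may be
    # swapped for alternative audio data such as a dubbed track.
    mandatory = stream["mandatory"]
    dialogue = stream["replaceable"] if alternative is None else alternative
    return mandatory + dialogue

stream = {"mandatory": ["music", "effects"], "replaceable": ["dialogue_en"]}
print(render_audio(stream))                   # ['music', 'effects', 'dialogue_en']
print(render_audio(stream, ["dialogue_fr"]))  # ['music', 'effects', 'dialogue_fr']
```

Carrying both components in one stream lets a simple receiver play it unchanged while a capable receiver substitutes the replaceable part.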
LIVE AUDIO ADJUSTMENT BASED ON SPEAKER ATTRIBUTES
An audio stream of a speaker can be isolated from a received audio signal. Based on the audio stream, an attribute of the speaker can be identified. This attribute can be presented to a user, allowing for user input. Based on a received user input (and on the audio stream), the audio stream can be modified.
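A minimal sketch of the final step, assuming the speaker's stream has already been isolated and an attribute identified; the attribute names, sample values, and per-attribute gain mapping are hypothetical.

```python
def adjust_speaker(samples, attribute, user_gains):
    # Scale the isolated speaker's stream by the gain the user chose
    # for that attribute (default: leave the stream unchanged).
    gain = user_gains.get(attribute, 1.0)
    return [s * gain for s in samples]

# The user has asked to halve the volume of the speaker tagged "speaker_a".
print(adjust_speaker([0.4, -0.8], "speaker_a", {"speaker_a": 0.5}))  # [0.2, -0.4]
```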
Method and system of displaying subtitles, computing device, and readable storage medium
The present invention discloses techniques for generating and presenting subtitles. The disclosed techniques comprise extracting target audio information from a video; converting the target audio information to first text information, wherein the target audio information and the first text information are in a first language; translating the first text information to at least one second text information, wherein the at least one second text information is in at least one second language; generating a first subtitle based on the first text information; generating at least one second subtitle based on the at least one second text information; obtaining a first target subtitle and at least one second target subtitle by applying sensitive word processing to the first subtitle and the at least one second subtitle, respectively; and presenting at least one of the first target subtitle or the at least one second target subtitle in response to user input.
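The sensitive-word step applied to both subtitle tracks can be sketched as follows. The word list and masking style are assumptions; the patent does not specify how sensitive words are handled.

```python
SENSITIVE = {"damn"}  # hypothetical word list

def sanitize(subtitle):
    # Replace each sensitive word with asterisks of equal length.
    return " ".join("*" * len(w) if w.lower() in SENSITIVE else w
                    for w in subtitle.split())

def build_subtitles(first_text, translations):
    # Apply the same processing to the first-language subtitle and to
    # every second-language subtitle, yielding the "target" subtitles.
    return sanitize(first_text), [sanitize(t) for t in translations]

first, seconds = build_subtitles("well damn that worked",
                                 ["bueno damn funciono"])
print(first)    # well **** that worked
print(seconds)  # ['bueno **** funciono']
```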
BULLET SCREEN KEY CONTENT JUMP METHOD AND BULLET SCREEN JUMP METHOD
The present disclosure provides techniques for presenting information associated with bullet screens. The techniques comprise receiving trigger information comprising information identifying a bullet screen and information identifying a user who performed a trigger event for the bullet screen; determining a list of jump links associated with the bullet screen based on the information identifying the bullet screen; determining a tag associated with the user based on the information identifying the user; selecting a target jump link from the list of jump links based on the tag associated with the user; and transmitting information associated with the bullet screen, comprising the target jump link, for display of at least one part of the information in a preset area associated with the bullet screen.
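The tag-based selection step can be sketched as below; the tag names, URLs, and first-link fallback are illustrative assumptions.

```python
def select_jump_link(jump_links, user_tag):
    # jump_links: list of (tag, url) candidates for this bullet screen.
    # Prefer a link whose tag matches the user's tag; otherwise fall
    # back to the first link in the list.
    for tag, url in jump_links:
        if tag == user_tag:
            return url
    return jump_links[0][1]

links = [("gamer", "https://example.com/game"),
         ("default", "https://example.com/info")]
print(select_jump_link(links, "gamer"))  # https://example.com/game
```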
SYSTEMS AND METHODS FOR CONTROLLING CLOSED CAPTIONING
A system for controlling the turning on and off of closed captioning receives information regarding a program content stream and automatically determines whether to turn closed captioning on or off based on whether thresholds are crossed by an estimated current loudness level of ambient noise and an estimated current loudness level of the audio of the program content stream. The estimated current loudness level of the program audio is, or is based on, one or more indications of the current volume level in an audio signal representing that audio and the current audio settings of the device outputting it. The system may estimate the loudness level of the ambient noise using a loudness meter that samples the ambient noise with a microphone and determines a decibel level of the sampled noise.
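A minimal sketch of the two estimates and the threshold decision. The volume-to-decibel mapping and the margin value are assumptions for illustration; the patent only requires that the program estimate combine the signal's level with the device's audio settings.

```python
import math

def estimated_program_db(signal_db, volume_setting, max_volume=100):
    # Estimate program loudness from the content's indicated level plus
    # the device's current volume setting (assumed amplitude-linear).
    return signal_db + 20 * math.log10(max(volume_setting, 1) / max_volume)

def should_enable_captions(ambient_db, program_db, margin_db=3.0):
    # Turn captions on once ambient noise comes within margin_db of
    # (or exceeds) the program's estimated loudness.
    return ambient_db >= program_db - margin_db

program = estimated_program_db(60.0, 50)       # ~54 dB at half volume
print(should_enable_captions(55.0, program))   # True
print(should_enable_captions(40.0, program))   # False
```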
EVENT-DRIVEN STREAMING MEDIA INTERACTIVITY
Aspects described herein may provide systems, methods, and devices for facilitating language learning using videos. Subtitles may be displayed in a first, target language or a second, native language during display of the video. On a pause event, both the target-language subtitle and the native-language subtitle may be displayed simultaneously to facilitate understanding. While paused, a user may select an option to be provided with additional contextual information indicating usage and context associated with one or more words of the target-language subtitle. The user may navigate through previous and next subtitles with additional contextual information while the video is paused. Other aspects may allow users to create auto-continuous video loops of definable duration, to generate video segments by searching an entire database of subtitle text, and to create, save, share, and search video loops.
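The pause-event behavior can be sketched in a few lines; the function name and return shape are illustrative, not the patent's interface.

```python
def subtitles_to_display(paused, target_sub, native_sub):
    # While playing, only the target-language subtitle is shown; on a
    # pause event both subtitles are displayed to aid comprehension.
    return [target_sub, native_sub] if paused else [target_sub]

print(subtitles_to_display(False, "Bonjour", "Hello"))  # ['Bonjour']
print(subtitles_to_display(True, "Bonjour", "Hello"))   # ['Bonjour', 'Hello']
```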
SUBTITLE DATA EDITING METHOD AND SUBTITLE DATA EDITING PROGRAM FOR CONTENTS SUCH AS MOVING IMAGES
The present invention provides a subtitle data editing method that simplifies the work of inputting subtitles to be displayed on contents such as moving images and enables quick and efficient editing. When a predetermined line feed operation is accepted twice consecutively while the cursor is in a subtitle input field containing subtitle content, the field is split and displayed as separate on-screen fields before and after the cursor. When the subtitle input field contains a plurality of lines and a predetermined line feed operation is accepted while the cursor is at the beginning of the second or a subsequent line, that line is separated from the line directly above it and the subtitle input field is divided and displayed on screen.
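Both splitting behaviors reduce to cutting the field at a position. A toy sketch, assuming the field is modeled as a string (for the cursor-index case) or a list of lines (for the line-boundary case):

```python
def split_at_cursor(text, cursor):
    # Double line-feed with the cursor at character index `cursor`
    # splits one subtitle field into two: before and after the cursor.
    return text[:cursor], text[cursor:]

def split_at_line(lines, line_index):
    # Line-feed with the cursor at the start of line `line_index`
    # (second line or later) separates it from the line directly above.
    return lines[:line_index], lines[line_index:]

print(split_at_cursor("HelloWorld", 5))       # ('Hello', 'World')
print(split_at_line(["line1", "line2"], 1))   # (['line1'], ['line2'])
```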
Systems and methods for providing subtitles based on language proficiency
Systems and methods are described for providing subtitles based on a user's language proficiency. An illustrative method includes receiving a request to display subtitles, selecting a language for the subtitles, determining, from a user profile, a user's proficiency level in the selected language, selecting, based on the user's proficiency level in the selected language, a set of subtitles from a plurality of sets of subtitles in the selected language, wherein each respective set of subtitles corresponds to a different proficiency level in the selected language, and generating for display the selected set of subtitles.
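The selection step, one set of subtitles per proficiency level, can be sketched as below. The nested-dictionary layout and the fall-back-to-nearest-lower-level rule are assumptions for illustration.

```python
def select_subtitles(subtitle_sets, language, proficiency):
    # subtitle_sets: {language: {proficiency_level: subtitle_set}}.
    # Pick the set for the user's level, falling back to the nearest
    # lower level, or the lowest available level if none is lower.
    levels = subtitle_sets[language]
    candidates = [lvl for lvl in levels if lvl <= proficiency]
    return levels[max(candidates)] if candidates else levels[min(levels)]

sets = {"fr": {1: "simplified", 2: "standard", 3: "verbatim"}}
print(select_subtitles(sets, "fr", 2))  # standard
```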
Method for displaying GUI for providing menu items and display device
The present application discloses a GUI method for providing menu items that automatically adjusts the display locations of the menu items according to the setting language of a display device. The method includes: displaying image content in a content display area of a display in response to a first control instruction, input by a user, for displaying the image content; calculating a layout location of at least one menu item in a menu display area of the display based on the setting language of the display device, where the at least one menu item is configured for the image content and used to perform functions required for the image content; and displaying the at least one menu item based on the calculated layout location.
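One common reason to vary menu layout by setting language is text direction; the sketch below lays items out left-to-right or right-to-left accordingly. The language set, pixel dimensions, and spacing are illustrative assumptions, not the patent's actual layout rule.

```python
RTL_LANGUAGES = {"ar", "he", "fa", "ur"}  # assumed right-to-left set

def menu_layout(language, screen_width, item_width, n_items, spacing=8):
    # Compute x positions for n_items menu items, flowing from the
    # left edge for LTR languages and from the right edge for RTL.
    rtl = language in RTL_LANGUAGES
    positions = []
    for i in range(n_items):
        offset = i * (item_width + spacing)
        x = screen_width - item_width - offset if rtl else offset
        positions.append(x)
    return positions

print(menu_layout("en", 1920, 100, 3))  # [0, 108, 216]
print(menu_layout("ar", 1920, 100, 3))  # [1820, 1712, 1604]
```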