Patent classifications
H04N21/4884
SYSTEMS AND METHODS FOR RECOMMENDING CONTENT USING PROGRESS BARS
During playback of a content item, a media signature corresponding to a first portion of the content item is identified. A number of media signatures representing portions of a plurality of other content items may have been previously identified and stored. Each stored media signature may also include an identifier of an associated content item and a timestamp corresponding to a position in the associated content item at which the signature is located. If it is determined that the identified media signature matches a stored media signature, a progress bar is generated for display comprising an identifier of the content item associated with the matching stored media signature, and a progress indicator corresponding to a timestamp associated with the stored media signature.
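A minimal sketch of the matching step described in this abstract, assuming a simple in-memory store; all names (`SIGNATURE_STORE`, `progress_bar_for`) are illustrative and not taken from the patent:

```python
# Hypothetical signature store mapping a media signature to
# (content identifier, timestamp in seconds within that content item).
SIGNATURE_STORE = {
    "sig_a1b2": ("Movie X", 1350.0),
    "sig_c3d4": ("Series Y E01", 421.5),
}

def progress_bar_for(signature, durations):
    """If the identified signature matches a stored one, return the fields
    needed to render the progress bar: the content identifier and a
    progress indicator derived from the stored timestamp."""
    match = SIGNATURE_STORE.get(signature)
    if match is None:
        return None                      # no matching stored signature
    content_id, timestamp = match
    return {
        "identifier": content_id,
        "progress": timestamp / durations[content_id],  # indicator position
    }

bar = progress_bar_for("sig_a1b2", {"Movie X": 5400.0})
```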
SYSTEMS AND METHODS FOR DETERMINING TYPES OF REFERENCES IN CONTENT AND MAPPING TO PARTICULAR APPLICATIONS
Systems and methods are provided herein for determining types of references within a content item and mapping them to particular applications. A content management application identifies an entity and a context of the entity at a location within the content item. The content management application may identify the entity and the context of the entity in real time as a first user device processes the content item, or the content management application may identify and store the entity and the context of the entity in a database before providing the content item. After determining a presence of a second user device associated with a profile, the content management application determines at least one application associated with the entity and the context of the entity on the second user device and launches the application to create an immersive content consumption experience.
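One way the entity-and-context-to-application mapping could look, sketched as a lookup keyed on the pair and filtered by what is installed on the second device; the table contents and names are invented for illustration:

```python
# Hypothetical mapping from (entity, context) to an application identifier.
APP_MAP = {
    ("Eiffel Tower", "travel"): "maps_app",
    ("Eiffel Tower", "history"): "encyclopedia_app",
    ("pasta carbonara", "cooking"): "recipe_app",
}

def application_for(entity, context, installed_apps):
    """Return the application associated with the entity and its context,
    provided it is available on the second user device."""
    app = APP_MAP.get((entity, context))
    return app if app in installed_apps else None

launched = application_for("Eiffel Tower", "travel", {"maps_app", "recipe_app"})
```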
VIDEO PLAYBACK METHOD AND DEVICE
Embodiments of the present disclosure provide a method and device for playing a video. The method comprises: in the process of playing a video, displaying a first subtitle in a subtitle component shown in a playback interface of the video, and playing a first audio corresponding to the first subtitle; in response to a first trigger operation by a user on the subtitle component, displaying a first pop-up window, the first pop-up window being used to switch the displayed subtitle from the first subtitle to a second subtitle; and in response to a second trigger operation by the user on the first pop-up window, switching the subtitle component to display the second subtitle and playing a second audio corresponding to the second subtitle. The video playback method and device provided by these embodiments can address the problem of users being unable to understand the subtitles and audio while watching videos, thereby improving the user experience.
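The two-trigger switching flow can be sketched as a small state machine, assuming a track table that pairs each subtitle language with a matching audio track; the class and field names are hypothetical:

```python
class SubtitlePlayer:
    """Toy model of the pop-up-driven subtitle/audio switching flow."""

    def __init__(self, tracks):
        self.tracks = tracks            # e.g. {"en": "audio_en", "fr": "audio_fr"}
        self.subtitle = next(iter(tracks))
        self.popup_open = False

    def trigger_subtitle_component(self):
        # First trigger operation: open the pop-up window.
        self.popup_open = True

    def trigger_popup(self, language):
        # Second trigger operation: switch subtitle and matching audio.
        if self.popup_open and language in self.tracks:
            self.subtitle = language
            self.popup_open = False
        return self.tracks[self.subtitle]  # audio corresponding to the subtitle

player = SubtitlePlayer({"en": "audio_en", "fr": "audio_fr"})
player.trigger_subtitle_component()
audio = player.trigger_popup("fr")
```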
Audio improvement using closed caption data
Methods and systems are described herein for improving audio for hearing-impaired content consumers. An example method may comprise determining a content asset. Closed caption data associated with the content asset may be determined. At least a portion of the closed caption data may be selected based on a user setting associated with a hearing impairment. Compensating audio comprising a frequency translation associated with at least that portion of the closed caption data may be generated. The content asset may be caused to be output with audio content comprising the compensating audio and the original audio of the content asset.
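A rough sketch of the frequency-translation idea: spectral components above the user's audible limit (per a hearing-impairment setting) are shifted down into an audible band. The octave-down rule and the cutoff value are assumptions for illustration, not the patent's method:

```python
def translate_frequencies(components, audible_max_hz):
    """Shift any spectral component above the user's audible limit down
    into range; components are (frequency_hz, amplitude) pairs."""
    out = []
    for freq, amp in components:
        while freq > audible_max_hz:
            freq /= 2.0          # octave-down translation preserves pitch class
        out.append((freq, amp))
    return out

# An 8 kHz component is translated down for a user who cannot hear above 4 kHz.
compensated = translate_frequencies([(8000.0, 0.5), (1000.0, 1.0)], 4000.0)
```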
Automated voice translation dubbing for prerecorded video
A method for aligning a translation of original caption data with an audio portion of a video is provided. The method includes identifying, by a processing device, original caption data for a video that includes a plurality of caption character strings. The processing device identifies speech recognition data that includes a plurality of generated character strings and associated timing information for each generated character string. The processing device maps the plurality of caption character strings to the plurality of generated character strings using assigned values indicative of semantic similarities between character strings. The processing device assigns timing information to the individual caption character strings based on timing information of mapped individual generated character strings. The processing device aligns a translation of the original caption data with the audio portion of the video using assigned timing information of the individual caption character strings.
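The mapping-and-timing steps can be sketched as follows, using `difflib`'s string similarity as a simple stand-in for the semantic-similarity values the abstract describes:

```python
from difflib import SequenceMatcher

def align_captions(captions, recognized):
    """captions: caption character strings without timing.
    recognized: (generated string, start_seconds) pairs from speech
    recognition. Each caption is mapped to its most similar generated
    string and assigned that string's timing."""
    aligned = []
    for cap in captions:
        best = max(
            recognized,
            key=lambda r: SequenceMatcher(None, cap.lower(), r[0].lower()).ratio(),
        )
        aligned.append((cap, best[1]))   # caption with assigned timing
    return aligned

timed = align_captions(
    ["Hello there!", "How are you?"],
    [("hello there", 1.2), ("how are you", 3.4)],
)
```

With timing assigned to each caption string, a translation of the captions can reuse the same timestamps to stay aligned with the audio.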
Method and apparatus for locating video playing node, device and storage medium
The disclosure provides a method for locating a video playing node, and relates to the fields of big data and video processing. The method includes: selecting a target video from a plurality of videos; and sending the target video, a plurality of subtitle text segments of the target video, and start time information for each of the plurality of subtitle text segments to a client, to cause the client to display the plurality of subtitle text segments and to determine, in response to a trigger operation on any one of the plurality of subtitle text segments, a start playing node of the target video based on the start time information of that subtitle text segment. The disclosure further provides an apparatus for locating a video playing node, an electronic device and a storage medium.
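The client-side step reduces to a lookup from the tapped subtitle segment to its start time; a minimal sketch with illustrative field names:

```python
# Hypothetical subtitle segments sent alongside the target video.
SEGMENTS = [
    {"text": "Opening remarks", "start": 0.0},
    {"text": "Main topic begins", "start": 92.5},
    {"text": "Q&A session", "start": 1810.0},
]

def start_node_for(tapped_text, segments):
    """Return the playing node (seek position) for the tapped segment."""
    for seg in segments:
        if seg["text"] == tapped_text:
            return seg["start"]          # node = segment's start time
    raise KeyError(tapped_text)

node = start_node_for("Main topic begins", SEGMENTS)
```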
Communication transfer between devices
A method may include obtaining an indicator that a first device is in a location of a second device and in response to obtaining the indicator, sending a redirect request to a communication service provider of the first device to direct, to the second device, incoming communication requests handled by the communication service provider that are directed to the first device. The method may further include after sending the redirect request and after a communication request to a communication session is directed to the first device, obtaining, at the second device, a communication indication to participate in the communication session. The method may further include directing audio of the communication session to a transcription system and obtaining, at the second device, the transcription of the audio from the transcription system. The method may also include presenting, by the second device, the audio and the transcription.
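The redirect-then-transcribe flow can be walked through with a toy provider model; the class and the stand-in transcription function are assumptions for illustration:

```python
class Provider:
    """Toy communication service provider that honors redirect requests."""

    def __init__(self):
        self.redirects = {}

    def redirect(self, from_device, to_device):
        # Redirect request: route from_device's incoming requests to to_device.
        self.redirects[from_device] = to_device

    def route(self, target_device):
        return self.redirects.get(target_device, target_device)

def transcribe(audio):
    # Stand-in for the transcription system receiving the session audio.
    return "transcript of " + audio

provider = Provider()
provider.redirect("phone", "captioned_handset")  # first device near second
delivered_to = provider.route("phone")           # incoming communication request
presented = ("audio:call_123", transcribe("call_123"))  # second device output
```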
ADJUSTING AUDIO AND NON-AUDIO FEATURES BASED ON NOISE METRICS AND SPEECH INTELLIGIBILITY METRICS
Some implementations involve determining a noise metric and/or a speech intelligibility metric and determining a compensation process corresponding to the noise metric and/or the speech intelligibility metric. The compensation process may involve altering a processing of audio data and/or applying a non-audio-based compensation method. In some examples, altering the processing of the audio data does not involve applying a broadband gain increase to the audio signals. Some examples involve applying the compensation process in an audio environment. Other examples involve determining compensation metadata corresponding to the compensation process and transmitting an encoded content stream that includes encoded compensation metadata, encoded video data and encoded audio data from a first device to one or more other devices.
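A sketch of selecting a compensation process from the two metrics; the thresholds and process names here are invented, and the branches mirror the abstract's distinction between altering audio processing (without a broadband gain increase) and a non-audio-based method:

```python
def choose_compensation(noise_db, intelligibility):
    """Pick a compensation process from a noise metric (dB SPL) and a
    speech-intelligibility metric (0.0 to 1.0). Thresholds are illustrative."""
    if intelligibility < 0.5:
        # Non-audio-based compensation: surface captions rather than
        # raising overall volume.
        return "enable_captions"
    if noise_db > 60:
        # Alter audio processing without a broadband gain increase,
        # e.g. emphasize the speech band only.
        return "boost_speech_band"
    return "no_compensation"

plan = choose_compensation(noise_db=65, intelligibility=0.8)
```

In the encoder-side variant the abstract mentions, the chosen process would be carried as compensation metadata in the encoded content stream rather than applied locally.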
Implementation method and system of real-time subtitle in live broadcast and device
The present disclosure describes techniques for synchronizing subtitles in a live broadcast. The disclosed techniques comprise obtaining a source signal and a simultaneous interpretation signal in a live broadcast; performing voice recognition on the simultaneous interpretation signal in real time to obtain corresponding translation text; delaying the simultaneous interpretation signal to obtain a first delayed signal; delaying the source signal to obtain a second delayed signal; obtaining proofreading results for the first delayed signal and the corresponding translation text; determining proofread subtitles based on the proofreading results; and sending the proofread subtitles and the second delayed signal to a live display interface.
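The timing idea can be sketched as follows: both signals are delayed by the time the recognition-plus-proofreading pipeline needs, so the proofread subtitles and the delayed source arrive together. The delay value and helper names are illustrative:

```python
PROOFREAD_DELAY = 5.0   # seconds assumed for recognition + proofreading

def schedule(source_events):
    """source_events: (t_seconds, payload) pairs from the source signal.
    Returns the delayed source events and the subtitle emission events;
    the shared delay makes each pair land at the same instant."""
    delayed_source = [(t + PROOFREAD_DELAY, p) for t, p in source_events]
    subtitles = [(t + PROOFREAD_DELAY, "subs[" + p + "]") for t, p in source_events]
    return delayed_source, subtitles

src, subs = schedule([(0.0, "frame0"), (1.0, "frame1")])
```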
TRANSMISSION APPARATUS, TRANSMISSION METHOD, RECEPTION APPARATUS, AND RECEPTION METHOD
The aim is to simplify subtitle display processing on the receiving side in a variable speed reproduction mode.
A video stream formed with a video packet having coded image data in a payload is generated. A subtitle stream formed with a subtitle packet having subtitle information in a payload is generated. A multiplexed stream including the video stream and the subtitle stream is generated and transmitted. In generating the multiplexed stream, the subtitle packet is arranged at a random access position.
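A minimal sketch of the multiplexing rule: when interleaving the streams, a subtitle packet is emitted at each random access position (e.g. alongside an intra-coded picture), so a receiver that joins or seeks there has subtitle information immediately. Packet fields are illustrative:

```python
def multiplex(video_packets, subtitle_packet):
    """video_packets: dicts with a 'random_access' flag.
    Returns the multiplexed stream with the subtitle packet arranged
    at every random access position."""
    stream = []
    for pkt in video_packets:
        if pkt["random_access"]:
            stream.append(subtitle_packet)   # subtitle packet at the RAP
        stream.append(pkt)
    return stream

mux = multiplex(
    [{"pts": 0, "random_access": True},
     {"pts": 1, "random_access": False},
     {"pts": 2, "random_access": True}],
    {"type": "subtitle"},
)
```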