Patent classifications
H04N21/4856
Vehicle-based sign language communication systems and methods
Vehicle-based sign language communication systems and methods are provided herein. An example device can be configured to determine a sign language protocol used by a first user, determine a target language used by a second user, obtain a translation library based on the sign language protocol and the target language, receive spoken word input from the second user through a microphone, convert the spoken word input into sign language output using the translation library, and provide the sign language output using a sign language output device.
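The spoken-word-to-sign pipeline described above can be sketched as a lookup against a protocol- and language-specific translation library. This is a minimal illustrative sketch only; the library format, the `(protocol, language)` keying, and the fingerspelling fallback are assumptions, not the patent's actual implementation.

```python
# Illustrative translation library keyed by (sign protocol, target language).
# Sign tokens and the fingerspelling fallback are assumed placeholders.
TRANSLATION_LIBRARY = {
    ("ASL", "en"): {"hello": "ASL_SIGN_HELLO", "thanks": "ASL_SIGN_THANKS"},
}

def to_sign_output(spoken_words, sign_protocol, target_language):
    """Convert recognized spoken words into a sequence of sign tokens."""
    library = TRANSLATION_LIBRARY[(sign_protocol, target_language)]
    # Words with no library entry fall back to fingerspelling.
    return [library.get(w.lower(), f"FINGERSPELL:{w}")
            for w in spoken_words.split()]
```

In practice the output tokens would drive a sign language output device such as an on-screen avatar.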
EVENT-DRIVEN STREAMING MEDIA INTERACTIVITY
Aspects described herein may provide systems, methods, and devices for facilitating language learning using videos. Subtitles may be displayed in a first, target language or a second, native language during display of the video. On a pause event, both the target language subtitle and the native language subtitle may be displayed simultaneously to facilitate understanding. While paused, a user may select an option to be provided with additional contextual information indicating usage and context associated with one or more words of the target language subtitle. The user may navigate through previous and next subtitles with additional contextual information while the video is paused. Other aspects may allow users to create auto-continuous video loops of definable duration, may allow users to generate video segments by searching an entire database of subtitle text, and may allow users to create, save, share, and search video loops.
EFFICIENT CHANNEL SCANNING FOR MEDIA RENDERING DEVICE
A media rendering device and a method for scanning channels on the media rendering device are provided. The media rendering device determines a first geographical region associated with a location of the media rendering device. A first set of over-the-air (OTA) channels may be communicated in the first geographical region. The media rendering device determines a second geographical region within a threshold distance from the location of the media rendering device. A second set of OTA channels may be communicated in the second geographical region. The media rendering device receives a first user input to scan the first set of OTA channels and the second set of OTA channels, and configures the first set of OTA channels and the second set of OTA channels on the media rendering device, based on the scan of the first set of OTA channels and the second set of OTA channels.
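The two-region scan described above amounts to taking the union of the channel sets for the device's own region and any region within the threshold distance. The sketch below is a toy illustration under stated assumptions: the region names, channel numbers, threshold value, and lookup functions are all hypothetical, standing in for a geographic database query on a real device.

```python
THRESHOLD_KM = 80  # assumed threshold distance for the second region

# Toy region-to-channel data for illustration only.
REGION_CHANNELS = {
    "region_a": {7, 9, 13},
    "region_b": {9, 21, 44},
}

def regions_within(location, threshold_km):
    # A real device would query a geographic database here;
    # this stub returns a fixed neighboring region.
    return ["region_b"]

def scan_channels(location, home_region="region_a"):
    """Return the combined OTA channel list for the home and nearby regions."""
    channels = set(REGION_CHANNELS[home_region])
    for region in regions_within(location, THRESHOLD_KM):
        channels |= REGION_CHANNELS[region]
    return sorted(channels)
```

Configuring the union of both sets lets a device near a regional boundary receive channels broadcast from either side.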
Advanced Television Systems Committee (ATSC) 3.0 latency-free display of content attribute
Techniques are described for expanding and/or improving the Advanced Television Systems Committee (ATSC) 3.0 television protocol in robustly delivering the next generation broadcast television services. A dynamic MPD can be supplemented by offline descriptor information transmitted OTA at display time so that the descriptor information, e.g., language information, captioning information, and the like, can be immediately presented on a UI.
Menu navigation mode for media discs
Systems and methods are provided for reordering and/or bypassing certain informational content or menus that are conventionally presented prior to playback of media content stored on physical media discs. Upon initial use of a physical media disc, certain informational content or menus may be presented to a user or viewer, for example, piracy warnings, language selection menus, etc. However, upon subsequent use of the physical media disc, such informational content or menus may be bypassed. The user or viewer is given an option to immediately begin consuming the media content stored on the physical media disc. Conventional content, such as trailers, is not played prior to playback of the media content.
Methods and systems for facilitating conversion of content for transfer and storage of content
Various embodiments provide methods and devices for utilizing content conversion for the communication of content. In an embodiment, a method, performed by a user device, includes receiving a user input comprising one or more user preferences to facilitate at least one output content. Further, the method includes receiving at least one input content from at least one content source based on the one or more user preferences. Thereafter, the method includes separating the at least one input content from the at least one content source using delimiters, and generating the at least one output content from the at least one input content based on one or more content characteristics. The at least one output content has a data size less than the data size of the at least one input content. Furthermore, the method includes transmitting the at least one output content to another user device.
INTERACTIVE PRONUNCIATION LEARNING SYSTEM
Systems and methods are provided for generating audible pronunciation of a closed captioning word in a content item. For example, a system generates for output on a first device a content item comprising dialogue. The system generates for display on the first device a closed captioning word corresponding to the dialogue, where the closed captioning word is selectable via a user interface of the first device. The system receives a selection of the closed captioning word via the user interface of the first device. In response to receiving the selection of the closed captioning word, the system generates for playback on the first device at least a portion of the dialogue corresponding to the selected closed captioning word.
SYSTEMS AND METHODS FOR REPLAYING A CONTENT ITEM
Systems and methods for replaying a portion of a content item based on a user's language proficiency level in a secondary language are disclosed. The system accesses a user profile comprising the user's proficiency level in at least one secondary language, the secondary language being a non-native language of the user. A command to replay a first portion of a content item is received and, in response to receiving the replay command, the system generates for display the first portion of the content item at a level below the user's proficiency level in the secondary language.
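Selecting a replay "level below the user's proficiency level" can be sketched as stepping down one position on an ordered proficiency scale. The CEFR-style level names below are an assumption for illustration; the patent does not specify a particular scale.

```python
# Assumed CEFR-style proficiency scale, ordered from lowest to highest.
LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def replay_level(user_level):
    """Return the level one step below the user's proficiency,
    clamped at the lowest available level."""
    i = LEVELS.index(user_level)
    return LEVELS[max(0, i - 1)]
```

The returned level could then select a simplified dialogue track or subtitle variant for the replayed portion.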
SYSTEM AND METHOD FOR PROVIDING ADVANCED CONTENT INTERACTIVITY FEATURES
Systems and methods for interactively engaging consumers of a media asset are disclosed. The methods allow selection and personalization of a media asset character's name, voice, or dialogue while the media asset is being consumed. The personalization may be propagated through the entire media asset, or additionally to other episodes, sequels, and related media assets, by identifying and replacing associated metatags. To determine the type of changes allowed, the system determines whether the media asset is being consumed as a group watch, with members consuming the media asset from different IP addresses, or by viewers in the same room. The methods also present queries to engage the viewer, such as by the character asking them a question, and provide supplemental videos to aid in responding to the queries. The responses to queries may also determine the path a story takes in the media asset.
Pause playback of media content based on closed caption length and reading speed
A method, computer system, and a computer program product for playing a video recording that includes captions, wherein each caption includes a textual transcription of an audio portion of the video, a description of non-speech elements of the video, or both, are provided. The present invention may include rendering a video, identifying a first portion of the video that includes a first caption and is rendered over a first time period, and, in response to identifying the first portion of the video, estimating a second time period for a particular user to read and understand text in the first caption. The present invention may further include determining whether the second time period for the particular user to read and understand text in the first caption is greater than the first time period for rendering the first portion of the video. The present invention may lastly include pausing the video for a third time period.
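The comparison described above reduces to simple arithmetic: estimate reading time from the caption's word count and the user's reading speed, and pause only for the shortfall against the caption's on-screen time. This is a minimal sketch under assumed parameters; the words-per-minute model and function names are illustrative, not the patent's method.

```python
def estimate_read_seconds(caption_text, words_per_minute=200):
    """Estimate the second time period: how long the user needs
    to read and understand the caption text."""
    words = len(caption_text.split())
    return words / (words_per_minute / 60.0)

def pause_seconds(caption_text, render_seconds, words_per_minute=200):
    """Return the third time period: extra pause needed when the
    estimated reading time exceeds the caption's rendering time."""
    read = estimate_read_seconds(caption_text, words_per_minute)
    return max(0.0, read - render_seconds)
```

For example, a ten-word caption shown for two seconds to a 120 wpm reader needs five seconds of reading time, so the video would pause for roughly three additional seconds.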