OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE
Methods, systems, and devices are described for presenting non-video content through a mobile device that uses a video camera to track a video on another screen. In one embodiment, a system includes a video display, such as a TV, that displays video content. A mobile device with an integrated video camera captures video data from the TV and allows a user to select an area in the video in order to hear/feel/smell what is at that location in the video.
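The selection step above can be sketched in Python. This is a minimal illustration, not the patented implementation: it assumes the TV screen has already been tracked to an axis-aligned bounding box in the camera frame, and the function names, region table, and content identifiers are all hypothetical.

```python
def tap_to_video_coords(tap, screen_bbox, video_size):
    """Map a tap point in the camera frame to pixel coordinates in the
    tracked video, assuming the screen appears as an axis-aligned box."""
    x0, y0, x1, y1 = screen_bbox
    tx, ty = tap
    if not (x0 <= tx <= x1 and y0 <= ty <= y1):
        return None  # tap landed outside the tracked screen
    u = (tx - x0) / (x1 - x0)
    v = (ty - y0) / (y1 - y0)
    return (round(u * video_size[0]), round(v * video_size[1]))

def lookup_content(point, regions):
    """Return the non-video content (an audio/haptic/scent id) registered
    for the region containing the selected video point, if any."""
    for (rx0, ry0, rx1, ry1), content in regions.items():
        if rx0 <= point[0] <= rx1 and ry0 <= point[1] <= ry1:
            return content
    return None
```

A real system would use a homography rather than a bounding box, since the TV is rarely axis-aligned in the camera view.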
VIDEO CURATION SERVICE METHOD
A video curation service (VCS) method enriches video content provided by an open streaming service (OSS) by adding video content information in conjunction with the OSS. The VCS method comprises: a step in which a subtitler produces subtitle data for predetermined video content provided from an OSS server and uploads it to a VCS server; and a step in which the VCS server operates a web or app page for viewing the video content on a viewer terminal in response to a viewer's request.
Information processing apparatus, information processing method, and program
This information processing apparatus includes: an AV decoder 41 that acquires and reproduces video data including a service object, for which a service that processes a request from a user through voice is available; and an application execution environment 43 that adds an additional image for informing the user about the service object to the reproduced video. The additional image includes a visual feature unique to each service object such that the service object is uniquely determined by voice recognition in the service.
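The disambiguation idea above can be sketched as follows. This is a simplified illustration under assumed names: the palette, the function names, and the object identifiers are hypothetical; the point is that each service object gets a cue no other on-screen object shares, so an utterance like "the red one" resolves uniquely.

```python
PALETTE = ["red", "blue", "green", "yellow", "purple", "orange"]

def assign_visual_cues(service_objects):
    """Give each on-screen service object a unique visual cue so a
    spoken request naming the cue resolves to exactly one object."""
    if len(service_objects) > len(PALETTE):
        raise ValueError("not enough distinct cues for this scene")
    return {obj: PALETTE[i] for i, obj in enumerate(service_objects)}

def resolve_by_voice(cues, spoken_cue):
    """Invert the cue map: find the single object the utterance names."""
    for obj, cue in cues.items():
        if cue == spoken_cue:
            return obj
    return None
```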
AUTOMATED VOICE TRANSLATION DUBBING FOR PRERECORDED VIDEO
A method for aligning a translation of original caption data with an audio portion of a video is provided. The method involves identifying original caption data for the video that includes caption character strings, identifying translated language caption data for the video that includes translated character strings associated with the audio portion of the video, and mapping caption sentence fragments generated from the caption character strings to corresponding translated sentence fragments generated from the translated character strings based on timing associated with the original caption data and the translated language caption data. The method further involves estimating time intervals for individual caption sentence fragments using timing information corresponding to individual caption character strings, assigning time intervals to individual translated sentence fragments based on the estimated time intervals of the individual caption sentence fragments, generating a set of translated sentences using consecutive translated sentence fragments, and aligning the set of translated sentences with the audio portion of the video using the assigned time intervals of individual translated sentence fragments from corresponding translated sentences.
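The interval-estimation and assignment steps can be sketched as follows. This is a hedged illustration, not the patented method: it assumes fragment intervals are apportioned in proportion to character length within a caption's known (start, end) timing, and that caption fragments map 1:1 in order to translated fragments; the function names and dict keys are hypothetical.

```python
def estimate_fragment_intervals(caption, fragments):
    """Split a caption's (start, end) interval among its sentence
    fragments in proportion to each fragment's character length."""
    start, end = caption["start"], caption["end"]
    total = sum(len(f) for f in fragments)
    intervals, t = [], start
    for f in fragments:
        dur = (end - start) * len(f) / total
        intervals.append((t, t + dur))
        t += dur
    return intervals

def assign_to_translated(translated_fragments, caption_intervals):
    """Give each translated fragment the interval of the caption
    fragment it was mapped to (assumed 1:1 in order here)."""
    return list(zip(translated_fragments, caption_intervals))
```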
GENERATING VERIFIED CONTENT PROFILES FOR USER GENERATED CONTENT
Systems and methods for searching, identifying, scoring, and providing access to companion media assets for a primary media asset are disclosed. In response to a request for companion content, metadata within a predefined time period of the play position at which the request was made is downloaded. A dynamic search template that contains search parameters based on the downloaded metadata is generated. In response to the search conducted using the search template, a plurality of companion media assets are identified and then verified. A trust score for each companion media asset is accessed. The trust score may be analyzed and modified based on its contextual relationship to the play position of the primary media asset. If the trust score is within a rating range, then a link to access the companion media asset, or a specific segment or play position within the companion media asset, is provided.
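The score-then-gate logic can be sketched as follows. This is a minimal sketch under stated assumptions: the contextual modification is an invented linear penalty on distance from the play position, and the field names, URL, rating range, and penalty constant are all hypothetical, not taken from the patent.

```python
def companion_link(asset, play_position, rating_range=(0.6, 1.0)):
    """Adjust an asset's trust score by how contextually close its
    relevant segment is to the current play position, and return an
    access link only when the adjusted score is within the rating range."""
    lo, hi = rating_range
    # Hypothetical contextual modifier: penalize assets whose relevant
    # segment is far (in seconds) from the play head.
    distance = abs(asset["segment_start"] - play_position)
    score = asset["trust_score"] * max(0.0, 1.0 - distance / 600.0)
    if lo <= score <= hi:
        # Link directly to the relevant segment within the asset.
        return f"{asset['url']}#t={asset['segment_start']}"
    return None
```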
COMBINING SPORTS WAGERING AND TELEVISION SPORTS AND RELATED SYSTEMS AND METHODS
A system, computer-implemented method, and gaming device are provided. A method includes receiving, by a synchronization server, broadcast event data that is displayed on an electronic display and that is generated by a television broadcasting system; receiving, by the synchronization server, sports wagering system data that is generated by a sports wagering system and that corresponds to a broadcast event; generating synchronized wager data that includes the broadcast event data and the sports wagering system data; and causing the sports wagering system data to be displayed on the electronic display.
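The synchronization server's join step can be sketched as follows. This is an illustrative sketch only: the record shapes and key names (`event_id`, `lines`) are assumptions, not from the patent.

```python
def synchronize(broadcast_event, wager_data):
    """Join broadcast event data with wagering data for the same event,
    producing one record that a display can render together."""
    if broadcast_event["event_id"] != wager_data["event_id"]:
        raise ValueError("wager data does not correspond to this broadcast event")
    return {
        "event_id": broadcast_event["event_id"],
        "broadcast": broadcast_event,
        "wagers": wager_data["lines"],
    }
```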
DISPLAY APPARATUS AND METHOD OF CONTROLLING THE SAME
A display apparatus may include: a display; a video processor configured to process a video signal; a graphic processor configured to process a graphic signal; a mixer configured to mix a video corresponding to the video signal processed by the video processor and a graphic corresponding to the graphic signal processed by the graphic processor to be displayed together on the display; and a main processor configured to identify a video frame and a graphic frame to which matching identification information is assigned, based on the identification information assigned, in frame order, to a plurality of video frames of the video signal and to a plurality of graphic frames of the graphic signal, and to control the video processor and/or the graphic processor to delay and output at least one of the identified frames so that the video of the identified video frame and the graphic of the identified graphic frame are displayed together on the display.
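The matching-and-delay behavior can be sketched as follows. This is a simplified sketch, not the claimed implementation: frames are plain dicts with assumed `id` and `ready_at` fields, and "delay" is modeled by presenting each matched pair at the later of the two ready times.

```python
def pair_frames(video_frames, graphic_frames):
    """Match video and graphic frames carrying the same identification
    info; the frame that is ready first is held (delayed) until its
    counterpart is ready, so both are presented together."""
    graphics_by_id = {f["id"]: f for f in graphic_frames}
    pairs = []
    for vf in video_frames:
        gf = graphics_by_id.get(vf["id"])
        if gf is None:
            continue  # no matching graphic frame for this id
        # Delay whichever frame is ready first to the later ready time.
        present_at = max(vf["ready_at"], gf["ready_at"])
        pairs.append((vf["id"], present_at))
    return pairs
```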
Systems and methods for blending interactive applications with television programs
Object selection reward data, including rewards for viewer selection of objects of interest in presented media content of a video stream, may be electronically communicated to the user automatically when the user electronically selects the object of interest as it is shown on the screen. Improved functionality is provided to activate an image in the video stream into an object that can then be selected or become part of an application running on a receiving device such as a set-top box or other media device. The received video may or may not be taken over by the application running on the set-top box: the video scaling can be preserved, with the video made a part of the application, or, alternatively, the whole of the visible video screen may not be a part of the application.
PRODUCT SUGGESTION AND RULES ENGINE DRIVEN OFF OF ANCILLARY DATA
Curating ancillary data to be presented to audience members of visual program content may include: a) creating a timeline rule that correlates ancillary data objects to respective visual program content features, the visual program content features correlated to respective instances on a timeline of the visual program content; b) creating an environmental rule to correlate the ancillary data objects to respective environmental features of an audience member; and c) indicating that the ancillary data objects are to be presented to the audience member when both the timeline rule and the environmental rule are met, such that the ancillary data objects may be presented to the audience member when both a) the respective ones of the visual program content features appear in the visual program content during playback by the audience member and b) the respective environmental features are present.
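The conjunction of the two rules can be sketched as follows. This is an illustrative sketch under assumed shapes: a timeline rule is modeled as a time window in which the correlated feature appears, an environmental rule as a required feature of the audience member's environment, and all names are hypothetical.

```python
def ancillary_to_present(objects, timeline_rules, env_rules,
                         playback_time, environment):
    """Return the ancillary data objects whose timeline rule AND
    environmental rule are both satisfied at this playback moment."""
    out = []
    for obj in objects:
        t0, t1 = timeline_rules[obj]   # window where the correlated feature appears
        needed = env_rules[obj]        # required environmental feature
        if t0 <= playback_time <= t1 and needed in environment:
            out.append(obj)
    return out
```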
Hot video clip extraction method, user equipment, and server
A system including a server, a first terminal and a second terminal, where the server is configured to: send video content to the first terminal and the second terminal, respectively, where the video content includes a first hot clip and a second hot clip; send a first tag of the first hot clip to the first terminal; and send a second tag of the second hot clip to the second terminal; the first terminal is configured to display the first tag on a play time axis of the video content, where the first tag is located in a first location of the play time axis; and the second terminal is configured to display the second tag on the play time axis of the video content, where the second tag is located in a second location of the play time axis.
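The per-terminal tag placement can be sketched as follows. This is a minimal sketch, not the claimed system: it assumes the server records, for each hot clip, a label, its start offset into the video (which determines its location on the play time axis), and which terminals should receive its tag; all field names are hypothetical.

```python
def tags_for_terminal(hot_clips, terminal_id):
    """Pick which hot-clip tags a given terminal shows, and where each
    sits on the play time axis (the clip's start offset into the video)."""
    return [
        {"label": clip["label"], "axis_position": clip["start"]}
        for clip in hot_clips
        if terminal_id in clip["audience"]
    ]
```

Two terminals playing the same video can thus end up with different tags at different positions on the same time axis, as the abstract describes.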