Patent classifications
H04N21/26603
METHODS AND APPARATUS FOR DETERMINING AUDIENCE METRICS ACROSS DIFFERENT MEDIA PLATFORMS
An example apparatus includes a segment collector to access impression records indicative of media access segments, the media access segments including start times and end times corresponding to media accessed by a panelist, and to determine ones of the impression records that include a watermark corresponding to a first media platform presenting the media; a segment classifier to convert a first one of the impression records including the watermark into a converted impression record and to combine the converted impression record corresponding to the first media platform with a second impression record corresponding to a second media platform; and a media creditor to generate audience measurement metrics based on the combined impression records.
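As an illustration, the collect/convert/credit flow described in this abstract might be sketched as follows; the record fields, the `convert` mapping, and the crediting rule are hypothetical stand-ins for the patented components, not taken from the patent itself:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Impression:
    """Hypothetical impression record: one media access segment."""
    platform: str
    start: float  # seconds
    end: float
    watermark: Optional[str] = None

def convert(record: Impression) -> Impression:
    # Hypothetical conversion: map a watermarked first-platform record
    # into the same schema used for second-platform records.
    return Impression("unified", record.start, record.end, record.watermark)

def credit(records):
    """Toy 'media creditor': combine converted and unconverted records
    and total the seconds credited across platforms."""
    combined = [convert(r) if r.watermark else r for r in records]
    total = sum(r.end - r.start for r in combined)
    return {"audience_seconds": total, "segments": len(combined)}

metrics = credit([
    Impression("tv", 0, 60, watermark="wm-123"),   # first platform
    Impression("web", 60, 90),                     # second platform
])
```

The single metric here is only a placeholder; the patent's creditor could emit any audience measurement derived from the combined records.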
Techniques for video analytics of captured video content
Techniques for video analytics of captured video content are described. An apparatus may comprise a flash memory, a serial bus, and a processor circuit coupled to the flash memory and the serial bus. The processor circuit may comprise a multi-core central processing unit (CPU) and an integrated graphics processing unit (GPU). The processor circuit may receive captured video content via a local communication link, perform video analytics on the captured video content, and send data associated with the performed video analytics to a network interface for communication to a remote device via a network communication link. Other examples are described and claimed.
Method for labeling performance segment, video playing method, apparatus and system
Provided is a method, performed by a server, for labeling a segment of a video. In the method, a multimedia file corresponding to an acting role is obtained. A role feature of the acting role is determined based on the multimedia file. A target video is decoded to obtain a data frame and a playing timestamp corresponding to the data frame, the data frame including at least one of a video frame and an audio frame. Among the data frames of the target video, a target data frame that matches the role feature is identified. A segment related to performance of the acting role in the target video is automatically labeled based on a playing timestamp of the target data frame.
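The final step, turning the timestamps of matching frames into labeled performance segments, might look like the sketch below; the gap threshold and the upstream role-feature matching are assumptions for illustration only:

```python
def label_segments(match_timestamps, gap=1.0):
    """Merge playing timestamps of matching frames into labeled segments.

    match_timestamps: sorted timestamps (seconds) of frames whose content
    matched the acting role's feature; the face/voice matching itself is
    assumed to have been done upstream.
    """
    segments = []
    for ts in match_timestamps:
        if segments and ts - segments[-1][1] <= gap:
            segments[-1][1] = ts          # extend the current segment
        else:
            segments.append([ts, ts])     # start a new segment
    return [tuple(s) for s in segments]

# matches at 1.0-3.0s and 10.0-11.0s become two labeled segments
segs = label_segments([1.0, 1.5, 2.0, 3.0, 10.0, 10.5, 11.0])
```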
Text-driven editor for audio and video editing
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any order chosen by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
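The core word-to-timecode mapping can be sketched as follows; the transcript schema and helper names are hypothetical, but the idea is the patent's: selecting a run of text yields the matching span of video, and arranging text arranges the timeline:

```python
def build_index(transcript):
    """Map each word occurrence to its (word, start, end) timecodes."""
    return [(w["word"], w["start"], w["end"]) for w in transcript]

def select_clip(index, first_word, last_word):
    """Selecting a text range yields the corresponding video span."""
    return (index[first_word][1], index[last_word][2])

transcript = [
    {"word": "hello", "start": 0.0, "end": 0.4},
    {"word": "world", "start": 0.4, "end": 0.9},
    {"word": "again", "start": 1.2, "end": 1.6},
]
index = build_index(transcript)
clip = select_clip(index, 0, 1)              # select "hello world"
timeline = [clip, select_clip(index, 2, 2)]  # assemble in chosen order
```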
Consistent generation of media elements across media
An example method performed by a processing system includes retrieving a digital model of a media element from a database storing a plurality of media elements, wherein the media element is to be inserted into a scene of an audiovisual media, rendering the media element in the scene of the audiovisual media, based on the digital model of the media element and on metadata associated with the digital model to produce a rendered media element, wherein the metadata describes a characteristic of the media element and a limit on the characteristic, and inserting the rendered media element into the scene of the audiovisual media.
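One way to read "a characteristic of the media element and a limit on the characteristic" is that rendering clamps each characteristic to its metadata limit before insertion. A minimal sketch under that assumption (the model/metadata schema is invented for illustration):

```python
def render_element(model, metadata):
    """Clamp each characteristic of the media element to the limit
    recorded in its metadata before it is inserted into the scene."""
    rendered = dict(model)
    for key, limit in metadata.get("limits", {}).items():
        if key in rendered and rendered[key] > limit:
            rendered[key] = limit
    return rendered

model = {"name": "logo", "scale": 3.0, "brightness": 0.9}
metadata = {"limits": {"scale": 2.0}}  # scale may not exceed 2.0
element = render_element(model, metadata)
```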
Game moment implementation system and method of use thereof
A method for implementing a moment of a videogame that allows a user to play a portion of the videogame. The method includes receiving user selection input of a moment of a particular videogame associated with starting at a particular progress point of the particular videogame; causing an emulation of the particular videogame to start for streaming on the computer of the user; performing image analysis of the image stream to generate metadata on the user's progress in the particular game; determining whether one or more end conditions of the moment are met through analysis of the metadata; and, if the one or more end conditions are met, causing the emulation to end, resulting in an end of the streaming of the particular game on the computer of the user.
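The analyze-and-terminate loop might be sketched as below; the frame representation, the metadata extractor, and the end-condition predicates are all hypothetical placeholders for the patent's image-analysis components:

```python
def run_moment(frames, extract_metadata, end_conditions):
    """Stream emulated frames until any end condition on the metadata holds.

    extract_metadata: stand-in for image analysis, frame -> metadata dict
    end_conditions: predicates over the metadata dict
    """
    for i, frame in enumerate(frames):
        meta = extract_metadata(frame)
        if any(cond(meta) for cond in end_conditions):
            return i, meta  # emulation (and streaming) ends here
    return len(frames), None

# toy stand-in: each "frame" carries a score the analyzer would read
# off-screen in the real system
frames = [{"score": s} for s in (0, 10, 30, 60)]
stopped_at, meta = run_moment(
    frames,
    extract_metadata=lambda f: {"score": f["score"]},
    end_conditions=[lambda m: m["score"] >= 50],
)
```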
Machine-learning models for tagging video frames
According to a first aspect of this specification, there is described a computer-implemented method of tagging video frames. The method comprises generating, using a frame tagging model, a tag for each of a plurality of frames of an animation sequence. The frame tagging model comprises: a first neural network portion configured to process, for each frame of the plurality of frames, a plurality of features associated with the frame and generate an encoded representation for the frame. The frame tagging model further comprises a second neural network portion configured to receive input comprising the encoded representations of each frame and generate output indicative of a tag for each of the plurality of frames.
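The two-portion structure (a per-frame encoder followed by a sequence-level tagger) can be illustrated with a dependency-free toy; the "networks" here are trivial arithmetic stand-ins, chosen only to show that each frame gets an encoding and each tag depends on the whole sequence of encodings:

```python
def encode_frame(features):
    """Stand-in for the first network portion: per-frame encoder that
    maps a feature vector to a fixed-size encoding (here, its mean)."""
    return sum(features) / len(features)

def tag_sequence(encodings, threshold=0.5):
    """Stand-in for the second portion: consumes all encodings and
    emits one tag per frame; a neighborhood average makes each tag
    depend on surrounding frames, as in the patent's sequence model."""
    tags = []
    for i in range(len(encodings)):
        window = encodings[max(0, i - 1): i + 2]
        avg = sum(window) / len(window)
        tags.append("action" if avg > threshold else "idle")
    return tags

frames = [[0.0, 0.0], [1.0, 1.0], [0.8, 0.8]]  # per-frame feature vectors
tags = tag_sequence([encode_frame(f) for f in frames])
```

A real implementation would replace both functions with trained neural network portions, but the data flow between them is the same.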
MATCHING VIDEO CONTENT TO PODCAST EPISODES
Systems and methods for matching videos to podcast episodes are provided. A data store comprising podcast episode identifiers is accessed. The podcast episode identifiers are associated with one or more podcast episode attributes. A video content item is identified. The video content item includes one or more video content item attributes. A matching podcast episode identifier that matches the video content item is determined based on the one or more podcast episode attributes and the one or more video content item attributes. A ranking of one of the video content item or the matching podcast episode identifier is caused to be adjusted to reflect the correspondence between the video content item and the matching podcast episode identifier. Information associated with the matching podcast episode identifier is provided to a first user device.
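The attribute-based matching step could look like the following sketch; the similarity measure (fraction of shared attribute values) and the attribute names are assumptions, since the abstract does not specify how attributes are compared:

```python
def match_score(video_attrs, episode_attrs):
    """Toy similarity: fraction of episode attribute values that also
    appear among the video content item's attribute values."""
    shared = set(video_attrs.values()) & set(episode_attrs.values())
    return len(shared) / max(len(episode_attrs), 1)

def best_episode(video_attrs, episodes):
    """Return the podcast episode identifier with the highest score."""
    return max(episodes, key=lambda ep: match_score(video_attrs, episodes[ep]))

episodes = {
    "ep-1": {"title": "ML Basics", "host": "Ada"},
    "ep-2": {"title": "Rust Deep Dive", "host": "Grace"},
}
video = {"title": "Rust Deep Dive", "uploader": "Grace"}
match = best_episode(video, episodes)
```

In the claimed system the matching identifier would then drive a ranking adjustment and be surfaced to the user device.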
Method and apparatus for recommending live streaming room
The disclosure provides a method and an apparatus for recommending a live streaming room, and a storage medium. The method is implemented as follows. Social information of a target user account is acquired in response to detecting a predetermined operation from the target user account. A target live streaming room is selected based on interaction data of each associated user account indicated by the social information. Information on the target live streaming room is displayed to the target user account.
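Selecting a target room "based on interaction data of each associated user account" might be sketched as an aggregation over the social graph; the interaction-count schema is a hypothetical stand-in for whatever interaction data the system actually records:

```python
def recommend_room(associated_accounts, interactions):
    """Pick the live streaming room that the target user's associated
    accounts interact with most, by summed interaction counts."""
    scores = {}
    for account in associated_accounts:
        for room, count in interactions.get(account, {}).items():
            scores[room] = scores.get(room, 0) + count
    return max(scores, key=scores.get) if scores else None

# hypothetical interaction data keyed by associated account
interactions = {
    "alice": {"room-1": 5, "room-2": 1},
    "bob": {"room-2": 9},
}
room = recommend_room(["alice", "bob"], interactions)
```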
GENERATION, PROVISION AND INTERACTIVE DISPLAY OF SPOOLING MEDIA PACKAGES AND RELATED ANALYTICAL INFORMATION
A facility for generating and displaying information regarding the packaging of individual media asset files associated with multiple scheduled presentations of interstitial media assets is provided. Scheduling information is received regarding future scheduled media presentations for each of a plurality of media assets via one or more content channels. Based at least in part on the received scheduling information, one or more package files are generated such that each package file includes a plurality of media asset files, each corresponding to at least one of the future scheduled media presentations, for distribution to a plurality of distinct media receivers. Database information is generated regarding the generating of the one or more package files, such that the database information includes a completion time associated with the generating of each package file. At least a portion of the generated database information is displayed on a user client device coupled to a multichannel media distribution computing system.
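The packaging step, grouping scheduled asset files into per-receiver package files and recording a completion time for each in the database, might be sketched as follows; the schedule and database schemas are invented for illustration:

```python
import time

def build_packages(schedule, receivers):
    """Group scheduled media asset files into one package per receiver
    and record a completion time for each package generated."""
    packages, db_rows = [], []
    for receiver in receivers:
        assets = [s["asset"] for s in schedule if receiver in s["receivers"]]
        packages.append({"receiver": receiver, "assets": assets})
        # database row noting when this package's generation completed
        db_rows.append({"receiver": receiver, "completed_at": time.time()})
    return packages, db_rows

schedule = [
    {"asset": "ad-001.mp4", "receivers": {"east", "west"}},
    {"asset": "promo-7.mp4", "receivers": {"west"}},
]
packages, db_rows = build_packages(schedule, ["east", "west"])
```

The `completed_at` rows correspond to the database information the facility later displays on the client device.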