H04N21/80

Method and system for creating media content

Methods, apparatuses, and computer program products are disclosed for determining a plurality of parameters of a shim file to define a format of at least one media content input, creating a file based program master based on the shim file, and providing the created file based program master to a user for creating and delivering the at least one media content input.
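A minimal Python sketch of the idea follows. The patent abstract does not publish a shim-file schema, so every name here (ShimFile, create_program_master, the codec and resolution fields) is a hypothetical illustration of "parameters that define the format of a media content input."

```python
# Hypothetical sketch only: the patent does not specify a shim-file schema.
from dataclasses import dataclass, field

@dataclass
class ShimFile:
    """Parameters defining the format of a media content input."""
    video_codec: str = "h264"                     # illustrative parameter
    resolution: tuple = (1920, 1080)              # illustrative parameter
    frame_rate: float = 29.97
    audio_codec: str = "aac"
    segments: list = field(default_factory=list)  # ordered program segments

def create_program_master(shim: ShimFile) -> dict:
    """Build a file-based program master skeleton from the shim parameters."""
    return {
        "format": {
            "video": {"codec": shim.video_codec,
                      "resolution": shim.resolution,
                      "frame_rate": shim.frame_rate},
            "audio": {"codec": shim.audio_codec},
        },
        # Placeholder slots the user fills when creating and delivering content.
        "segments": list(shim.segments),
    }
```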

Prediction model training via live stream concept association
11245968 · 2022-02-08

In certain embodiments, training of a neural network or other prediction model may be facilitated via live stream concept association. In some embodiments, a live video stream may be loaded on a user interface for presentation to a user. A user selection related to a frame of the live video stream may be received via the user interface during the presentation of the live video stream on the user interface, where the user selection indicates a presence of a concept in the frame of the live video stream. In response to the user selection related to the frame, an association of at least a portion of the frame of the live video stream and the concept may be generated, and the neural network or other prediction model may be trained based on the association of at least the portion of the frame with the concept.
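As a rough illustration of the described flow, the sketch below queues frame-concept associations from user selections and feeds them to a model. The region format and the model.fit call are assumptions; the patent names no concrete APIs.

```python
# Illustrative only: the training interface and region format are assumptions.
import numpy as np

class ConceptAssociationTrainer:
    def __init__(self, model):
        self.model = model    # neural network or other prediction model
        self.examples = []    # queued (frame portion, concept) associations

    def on_user_selection(self, frame: np.ndarray, region, concept: str):
        """Handle a user selection marking a concept in a live-stream frame.

        `region` is an (x, y, w, h) box; the selected portion of the frame
        is associated with the concept and queued as a training example.
        """
        x, y, w, h = region
        self.examples.append((frame[y:y + h, x:x + w], concept))

    def train_step(self):
        """Train the model on the accumulated frame-concept associations."""
        for portion, concept in self.examples:
            self.model.fit(portion, concept)   # assumed training interface
        self.examples.clear()
```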

Event-driven streaming media interactivity
11252477 · 2022-02-15

Aspects described herein may provide systems, methods, and devices for facilitating language learning using videos. Subtitles may be displayed in a first, target language or a second, native language during display of the video. On a pause event, both the target language subtitle and the native language subtitle may be displayed simultaneously to facilitate understanding. While paused, a user may select an option to be provided with additional contextual information indicating usage and context associated with one or more words of the target language subtitle. The user may navigate through previous and next subtitles with additional contextual information while the video is paused. Other aspects may allow users to create auto-continuous video loops of definable duration, to generate video segments by searching an entire database of subtitle text, and to create, save, share, and search video loops.
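A minimal sketch of the pause behavior, assuming subtitle tracks are held as parallel (start, end, text) lists; the class and method names are illustrative, not an API from the patent.

```python
# Illustrative class; subtitle tracks are parallel (start, end, text) lists.
class DualSubtitlePlayer:
    def __init__(self, target_subs, native_subs):
        self.target_subs = target_subs   # target-language subtitles
        self.native_subs = native_subs   # native-language subtitles
        self.index = 0
        self.paused = False

    def on_pause(self):
        """On a pause event, surface both subtitles simultaneously."""
        self.paused = True
        *_, target_text = self.target_subs[self.index]
        *_, native_text = self.native_subs[self.index]
        return {"target": target_text, "native": native_text}

    def next_subtitle(self):
        """While paused, navigate to the next subtitle pair."""
        if self.paused and self.index < len(self.target_subs) - 1:
            self.index += 1
        return self.on_pause()
```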

Computer-implemented event detection using sonification

Computer-implemented event detection includes obtaining, at one or more processors, multimedia data including multiple frames of video data and corresponding audio data. The processor(s) process the multiple frames to detect at least one object and to track the object(s) between two or more of the frames. The processor(s) generate sonification audio data representing a position of the object(s) in the two or more frames, movement of the object(s), or both the position and the movement of object(s). The processor(s) generate combined audio data including the audio data and the sonification audio data. The processor(s) generate one or more feature vectors representing the combined audio data and provide the feature vector(s) as input to a trained event classifier to detect an event represented in the multimedia data.
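The sketch below illustrates one plausible sonification scheme, assuming pitch encodes vertical position and loudness encodes speed; the feature extractor and trained classifier are passed in as placeholders, since the abstract does not fix these choices.

```python
# Sketch under stated assumptions: pitch encodes vertical position,
# loudness encodes movement; (x, y) positions are normalized to [0, 1].
import numpy as np

SAMPLE_RATE = 16_000

def sonify_track(positions, frame_duration=1 / 30):
    """Map an object's per-frame (x, y) positions to sonification audio."""
    chunks, prev = [], positions[0]
    for x, y in positions:
        freq = 200 + 800 * y                  # pitch from vertical position
        speed = np.hypot(x - prev[0], y - prev[1])
        amp = min(1.0, 0.2 + speed)           # loudness from movement
        t = np.arange(int(SAMPLE_RATE * frame_duration)) / SAMPLE_RATE
        chunks.append(amp * np.sin(2 * np.pi * freq * t))
        prev = (x, y)
    return np.concatenate(chunks)

def detect_event(audio, positions, feature_fn, classifier):
    """Combine original and sonification audio, then classify the result."""
    sonified = sonify_track(positions)
    n = min(len(audio), len(sonified))
    combined = audio[:n] + sonified[:n]       # combined audio data
    features = feature_fn(combined)           # e.g. a spectral feature vector
    return classifier.predict([features])     # trained event classifier
```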

PREDICTION MODEL TRAINING VIA LIVE STREAM CONCEPT ASSOCIATION
20220132222 · 2022-04-28

In certain embodiments, training of a neural network or other prediction model may be facilitated via live stream concept association. In some embodiments, a live video stream may be loaded on a user interface for presentation to a user. A user selection related to a frame of the live video stream may be received via the user interface during the presentation of the live video stream on the user interface, where the user selection indicates a presence of a concept in the frame of the live video stream. In response to the user selection related to the frame, an association of at least a portion of the frame of the live video stream and the concept may be generated, and the neural network or other prediction model may be trained based on the association of at least the portion of the frame with the concept.

Determination of QOE in encrypted video streams using supervised learning

A method and corresponding system for determining quality of experience parameters of an encrypted video stream received at a client device are provided. The method comprises extracting, from one or more encrypted video streams sent over a network from a content server to a plurality of client devices, a first instance of at least one stream-related feature. A first instance of at least one quality-related label of a plurality of quality-related labels is determined based on applying a trained classifier to the first instance of the at least one stream-related feature, wherein each of the plurality of quality-related labels corresponds to a respective experience parameter of the quality of experience parameters of the encrypted video stream received at the client device.
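Since the stream is encrypted, only traffic metadata is observable to the extractor. The sketch below assumes packet sizes and timestamps as the stream-related features and an sklearn-style fitted classifier; the specific feature set is an assumption, not taken from the patent.

```python
# Hedged sketch: feature choices below are assumed, not from the patent.
import numpy as np

def stream_features(packet_sizes, packet_times):
    """Extract stream-related features from encrypted-traffic metadata
    (assumes at least two packets)."""
    gaps = np.diff(packet_times)
    duration = packet_times[-1] - packet_times[0]
    return [
        np.mean(packet_sizes), np.std(packet_sizes),  # throughput shape
        np.mean(gaps), np.max(gaps),                  # stall indicators
        np.sum(packet_sizes) / duration,              # effective bitrate
    ]

def qoe_label(packet_sizes, packet_times, classifier):
    """Apply a trained classifier to a first instance of stream features,
    yielding a quality-related label (e.g. a resolution or stalling class)."""
    return classifier.predict([stream_features(packet_sizes, packet_times)])[0]
```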

ANALYZING AND DIVIDING CONTENT FOR EFFICIENT STORAGE AND RETRIEVAL
20230280923 · 2023-09-07

In an approach to analyzing and dividing content for efficient storage and retrieval, one or more computer processors receive a first digital content from a user. One or more computer processors divide the first digital content into a first unique portion and a first common portion. One or more computer processors analyze the first common portion to determine whether the first common portion matches a previously stored template. One or more computer processors determine that the first common portion does not match a previously stored template. In response to determining that the first common portion does not match a previously stored template, one or more computer processors store the first common portion as a new template. One or more computer processors distribute the new template to the user.
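A compact sketch of the template-matching step, assuming a SHA-256 fingerprint as the match test and a caller-supplied function for splitting content into unique and common portions; both choices are illustrative, not specified by the patent.

```python
# Illustrative: the split heuristic and match test are assumptions.
import hashlib

template_store = {}   # fingerprint -> common-portion bytes

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def process_content(content: bytes, split_fn):
    """Divide content into unique and common portions; if the common portion
    matches no previously stored template, store it as a new template."""
    unique_portion, common_portion = split_fn(content)
    key = fingerprint(common_portion)
    is_new = key not in template_store
    if is_new:
        template_store[key] = common_portion   # store as a new template
    # The caller would distribute newly stored templates back to the user
    # and persist only `unique_portion` plus the template reference `key`.
    return unique_portion, key, is_new
```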

SYSTEMS AND METHODS FOR GENERATING TIME LAPSE VIDEOS
20230014193 · 2023-01-19

Video information may define spherical video content having a duration. Spherical video content may define visual content viewable from a point of view as a function of progress through the spherical video content. Path information may define a path selection for the spherical video content. Path selection may include movement of a viewing window within the spherical video content. The viewing window may define extents of the visual content viewable from the point of view as the function of progress through the spherical video content. Time lapse parameter information may define at least two of a time portion of the duration, an image sampling rate, and a time lapse speed effect. A time lapse video may be generated based on the video information, the path information, and the time lapse parameter information.
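The generation step might look like the following sketch, where read_frame and extract_window are caller-supplied (hypothetical) helpers for decoding a spherical frame at a given time and cropping the viewing-window extents from it.

```python
# Sketch of the generation step; read_frame and extract_window are assumed
# helpers injected by the caller, not functions defined by the patent.
def generate_time_lapse(read_frame, extract_window, path,
                        time_portion, sampling_rate, speed):
    """Sample a spherical video along a viewing-window path.

    read_frame(t)        -> full spherical frame at time t (assumed helper)
    extract_window(f, w) -> crop of frame f for window extents w (assumed)
    path(t)              -> viewing-window extents at time t
    """
    start, end = time_portion             # time portion of the duration
    frames, t = [], start
    while t <= end:
        window = path(t)                  # movement of the viewing window
        frames.append(extract_window(read_frame(t), window))
        t += 1.0 / sampling_rate          # image sampling rate
    playback_fps = sampling_rate * speed  # time lapse speed effect
    return frames, playback_fps
```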