H04N21/234318

SYSTEMS AND METHODS OF CUSTOMIZED TELEVISION PROGRAMMING OVER THE INTERNET
20180007403 · 2018-01-04

A production facility receives program content from a plurality of broadcast feeds over the Internet. In an embodiment, the production facility comprises a server on a computer network, such as the Internet. The server comprises computer programs configured to manipulate the audio and video data of the multiple program contents. At the production facility, the program content can be manipulated to produce a production. Program manipulation can comprise at least one of green screen technology, music, graphics, Foley, sound effects, voice over, advertising, or the like. The production is broadcast over the computer network to viewers. In an embodiment, the production is customized based on viewer input received during the broadcast; in other words, the production is customized in real time based at least in part on the viewers' interaction with the production. In another embodiment, the viewers can further manipulate the program content of the production to create a new production, which can be broadcast over the customized programming system.

Mini-Banner Content
20230005199 · 2023-01-05

Devices, systems, and methods are provided for use in interpreting, converting, generating, embedding, presenting, storing, and otherwise using mini-banner content. For at least one embodiment, a mini-banner content system may include a secondary content system element which executes non-transient computer executable instructions to configure: a content interpreter to interpret secondary content, identify aspect information, and output the aspect information; and a content converter which, when receiving aspect information, retrieves a first element corresponding to the aspect information and/or generates a second element corresponding to the aspect information, and generates a mini-banner content element based upon the results of the retrieving operation and/or the generating operation.
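The interpreter/converter chain above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the dict-based content model, the field names, and the cache standing in for the "retrieve" path are all assumptions.

```python
# Minimal sketch of the described mini-banner pipeline. Secondary content is
# modeled as a dict; all field names here are hypothetical.

def interpret(secondary_content):
    """Content interpreter: extract aspect information from secondary content."""
    return {"topic": secondary_content.get("topic"),
            "size": secondary_content.get("size", "small")}

def convert(aspect_info, cache):
    """Content converter: retrieve a first element from the cache if present,
    otherwise generate a second element, then build the mini-banner element."""
    element = cache.get(aspect_info["topic"])          # retrieving operation
    if element is None:
        element = f"generated:{aspect_info['topic']}"  # generating operation
    return {"banner": element, "size": aspect_info["size"]}

aspect = interpret({"topic": "sale", "size": "mini"})
banner = convert(aspect, cache={})
```

With an empty cache, the converter falls through to the generating path and produces a freshly generated banner element.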

Automated Dynamic Data Extraction, Distillation, and Enhancement
20230029096 · 2023-01-26

A dynamic data extraction, distillation, and enhancement system is disclosed that includes a dynamic extraction, distillation, and enhancement framework. The framework includes an allocator, extractor, and deconstructor stored in a non-transitory memory that, when executed by a processor, receive files in different formats from data sources, determine a native format of each file, identify and extract an embedded object from a file, deconstruct the file into components, assign each file to one of a plurality of streams based on the native format of the file, assign the embedded object to a stream based on a format of the embedded object, and assign a deconstructed component to a stream based on a format of the deconstructed component. The native format includes one of text, video, image, or audio. Each stream corresponds to one native format. The streams include a text stream, an audio stream, a video stream, and an image stream.
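The allocator's format-based routing described above can be sketched as follows. The extension-to-format map and the dict-of-lists stream container are illustrative assumptions, not the disclosed framework.

```python
# Sketch of format-based stream assignment: each file is routed to exactly one
# of the four streams (text, audio, video, image) by its native format.
import os

EXT_TO_FORMAT = {".txt": "text", ".mp3": "audio", ".mp4": "video", ".png": "image"}

def native_format(filename):
    """Determine a file's native format from its extension (assumed mapping)."""
    return EXT_TO_FORMAT.get(os.path.splitext(filename)[1], "text")

def assign(files):
    """Allocator: one stream per native format; every file lands in one stream."""
    streams = {"text": [], "audio": [], "video": [], "image": []}
    for f in files:
        streams[native_format(f)].append(f)
    return streams

streams = assign(["report.txt", "clip.mp4", "logo.png"])
```

A real deconstructor would additionally pull embedded objects (e.g., an image inside a document) out of a file and route each through the same `assign` step by its own format.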

A METHOD AND APPARATUS FOR ENCODING AND DECODING VOLUMETRIC VIDEO

Methods, devices, and streams are disclosed to encode and decode volumetric content. At encoding, the space of the volumetric content is divided into distinct sectors according to at least two different sectorizations. One atlas is generated for each sectorization, or a single atlas is generated encoding all the sectorizations. At decoding, a sectorization is selected according to the current direction and field of view, the user's gaze navigation, and a prediction of the upcoming pose of the virtual camera controlled by the user. Sectors are selected according to the selected sectorization and the current direction and field of view, and only patches encoded in regions of the atlas associated with these sectors are accessed to generate the viewport image representative of the content seen from the current point of view.
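The decoding-side sector selection can be illustrated with a toy angular model. The equal-width yaw sector layout and degree-based geometry are simplifying assumptions; the point is only that the field of view determines which sectors (and hence which atlas regions) need to be accessed.

```python
# Illustrative sector selection: keep only the equal-width yaw sectors that
# overlap the current field of view. Angles are in degrees; the sector layout
# is an assumed simplification of a real sectorization.

def sectors_in_fov(sector_count, yaw, fov):
    """Return indices of sectors overlapping the interval [yaw - fov/2, yaw + fov/2]."""
    width = 360.0 / sector_count
    lo, hi = yaw - fov / 2, yaw + fov / 2
    selected = []
    for i in range(sector_count):
        s_lo, s_hi = i * width, (i + 1) * width
        # test for overlap modulo 360 by shifting the sector by a full turn
        for shift in (-360, 0, 360):
            if s_lo + shift < hi and s_hi + shift > lo:
                selected.append(i)
                break
    return selected

# With 8 sectors of 45 degrees each, a 90-degree FOV centred at yaw 0
# overlaps sectors 0 and 7 (the wrap-around neighbour).
print(sectors_in_fov(8, 0.0, 90.0))
```

Only patches in atlas regions tied to the returned sector indices would then be decoded for the viewport.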

REPRESENTING VOLUMETRIC VIDEO IN SALIENCY VIDEO STREAMS

Saliency regions are identified in a global scene depicted by volumetric video. Saliency video streams that track the saliency regions are generated. Each saliency video stream tracks a respective saliency region. A saliency stream based representation of the volumetric video is generated to include the saliency video streams. The saliency stream based representation of the volumetric video is transmitted to a video streaming client.
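A toy sketch of the saliency-stream representation follows. Frames are modeled as 2D grids and regions as axis-aligned boxes; this flat-frame model is an invented simplification of volumetric video, used only to show one cropped stream being emitted per saliency region.

```python
# Build one saliency video stream per region by cropping every frame to the
# region's bounding box. The frame/region model is a deliberate simplification.

def build_saliency_streams(frames, regions):
    """frames: list of 2D grids (lists of lists); regions: list of (x, y, w, h).
    Returns a saliency-stream-based representation with one stream per region."""
    streams = []
    for (x, y, w, h) in regions:
        stream = [[row[x:x + w] for row in frame[y:y + h]] for frame in frames]
        streams.append(stream)
    return {"saliency_streams": streams, "num_regions": len(regions)}

frame = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
rep = build_saliency_streams([frame, frame], [(1, 0, 2, 2)])
```

A tracking stage would update each region's box per frame; here the boxes are held fixed for brevity.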

Separation of graphics from natural video in streaming video content
11546617 · 2023-01-03

Aspects of the subject disclosure may include, for example, a method that includes obtaining, by a processing system including a processor, video frames over a network. The processing system uses a machine learning algorithm to identify, in each frame, a first region comprising a natural image and a second region comprising a synthetic graphic image. The processing system separates the natural image from the synthetic graphic image to generate a natural video and a graphics video, encodes the natural video, and processes the graphics video to generate instructions for rendering graphic images at a client system. The client system performs a decoding procedure for the encoded video, a rendering procedure for client-side graphics in accordance with the instructions, and a compositing procedure to obtain a presentable video stream including the natural image and a client-side graphic corresponding to the synthetic graphic image. Other embodiments are disclosed.
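The separate-then-composite flow can be sketched with a binary mask standing in for the machine-learning classifier's output. All names here are illustrative, not the patent's implementation.

```python
# Sketch of layer separation and client-side compositing. A per-pixel mask
# (True = synthetic graphic) stands in for the ML-based region classifier.

def separate(frame, mask):
    """Split a frame into a natural layer and a graphics layer.
    None marks pixels absent from a layer."""
    natural  = [[p if not m else None for p, m in zip(fr, mr)]
                for fr, mr in zip(frame, mask)]
    graphics = [[p if m else None for p, m in zip(fr, mr)]
                for fr, mr in zip(frame, mask)]
    return natural, graphics

def composite(natural, graphics):
    """Client-side compositing: graphics pixels overlay the natural layer."""
    return [[g if g is not None else n for n, g in zip(nr, gr)]
            for nr, gr in zip(natural, graphics)]

frame = [[10, 20], [30, 40]]
mask  = [[False, True], [False, False]]
nat, gfx = separate(frame, mask)
assert composite(nat, gfx) == frame  # round trip restores the original frame
```

In the disclosed pipeline the two layers would take different paths (video encoding vs. rendering instructions) before the client recombines them; the round trip above only shows that separation is lossless when compositing honours the mask.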

Electronic device and control method thereof

An electronic device is provided. The electronic device includes a display, at least one processor, and at least one memory configured to store instructions that cause the at least one processor to obtain first information from a first still image frame that is included in a first moving image, obtain second information from the first moving image, identify at least one image function based on at least one of the first information or the second information, and control the display to display at least one function execution object for executing the at least one image function. Various other embodiments can be provided.

Methods and systems for dynamic media content

Methods and systems are provided for presenting media content capable of being dynamically adapted. One method involves analyzing content of a media program to identify a replaceable object at a spatial location within the content at a temporal location within the content, analyzing the spatial location of the content corresponding to the replaceable object within the content to identify one or more attributes of the replaceable object, identifying a substitute object based at least in part on the one or more attributes associated with the replaceable object, augmenting the temporal location of the content to include the substitute object at the spatial location within the content in lieu of the replaceable object, and providing the augmented version of the content to a media player for presentation.
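The matching-and-substitution step above can be sketched as attribute scoring over an inventory of candidate objects. The dict-based content model, the attribute names, and the frame/slot addressing are hypothetical simplifications.

```python
# Hypothetical sketch of object substitution: score each inventory item by
# shared attributes with the replaceable object, then splice the best match
# in at the same temporal and spatial location.

def pick_substitute(attributes, inventory):
    """Choose the inventory item sharing the most attributes with the target."""
    def score(item):
        return sum(1 for k, v in attributes.items() if item.get(k) == v)
    return max(inventory, key=score)

def augment(content, t, location, substitute):
    """Place the substitute at the given temporal (frame) and spatial (slot) location."""
    content[t][location] = substitute["name"]
    return content

attrs = {"category": "beverage", "color": "red"}
inventory = [{"name": "cola_b", "category": "beverage", "color": "red"},
             {"name": "snack_a", "category": "snack", "color": "red"}]
chosen = pick_substitute(attrs, inventory)
scene = augment({42: {"billboard": "old_soda"}}, 42, "billboard", chosen)
```

`cola_b` wins because it matches both attributes of the replaceable object, while `snack_a` matches only the color.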

Method and apparatus for efficient delivery and usage of audio messages for high quality of experience

A method and a system for a virtual reality, augmented reality, mixed reality, or 360-degree video environment are disclosed. The system receives Video Streams and Audio Streams associated with audio and video scenes to be reproduced. There are provided a Video decoder which decodes a signal from the Video Stream for the representation of the video scene; an Audio decoder which decodes a signal from the Audio Stream for the representation of the audio scene to the user; and a region-of-interest processor which decides, based, e.g., on the user's viewport, head orientation, movement data, or metadata, whether an Audio information message is to be reproduced. Upon that decision, the reproduction of the Audio information message is caused.
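The region-of-interest decision can be sketched with a simple angular visibility test. The degree-based viewport model, the once-only playback rule, and all names are assumptions for illustration only.

```python
# Sketch of the ROI processor's decision: an audio information message is
# reproduced only when the region of interest enters the user's viewport.

def roi_visible(viewport_yaw, fov, roi_yaw):
    """True if the ROI direction falls inside the horizontal field of view."""
    diff = (roi_yaw - viewport_yaw + 180) % 360 - 180  # signed angular distance
    return abs(diff) <= fov / 2

def should_play_message(viewport_yaw, fov, roi_yaw, already_played):
    """Reproduce the message once, on first visibility of the ROI (assumed policy)."""
    return roi_visible(viewport_yaw, fov, roi_yaw) and not already_played
```

A fuller implementation would also weigh head-movement data or metadata-driven priorities before causing reproduction, as the abstract indicates.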

System and Method for Analyzing Videos in Real-Time

A method and a sports analytics system (SAS) for analyzing a live video broadcast stream (LVBS) of a sporting event are provided. The SAS splits the LVBS into a real time messaging protocol (RTMP) stream and a hypertext transfer protocol live stream (HLS) and analyzes the RTMP stream using a phase difference between the RTMP stream and the HLS. The SAS detects persons present in a frame of the RTMP stream using a first set of cues and tracks the detected persons by analyzing preceding frames. The SAS recognizes the tracked persons using a second set of cues, assigns individual weights to each of the second set of cues, and compares the assigned weights of each of the recognized persons with pre-existing data of all players to identify the players in the frame. The SAS transmits the HLS and contextual interactive content of the identified players to a user device.
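The weighted-cue identification step can be sketched as follows. The specific cues, weights, and roster fields are invented for illustration; only the shape of the computation (per-cue weights summed into a similarity score, compared against pre-existing player data) follows the description above.

```python
# Illustrative weighted-cue recognition: each cue carries an individual weight,
# and the tracked person's weighted similarity against each known player
# decides the identification. Cues and weights are assumptions.

CUE_WEIGHTS = {"jersey_number": 0.6, "face": 0.3, "gait": 0.1}

def weighted_score(observed, candidate):
    """Sum the weights of cues where the observation matches the candidate's record."""
    return sum(w for cue, w in CUE_WEIGHTS.items()
               if observed.get(cue) == candidate.get(cue))

def identify(observed, roster):
    """Compare against pre-existing data of all players; return the best match."""
    return max(roster, key=lambda player: weighted_score(observed, player))

roster = [{"name": "Player A", "jersey_number": 10, "face": "a", "gait": "g1"},
          {"name": "Player B", "jersey_number": 7,  "face": "b", "gait": "g2"}]
seen = {"jersey_number": 7, "face": "b", "gait": "g1"}
best = identify(seen, roster)
```

Here Player B scores 0.9 (jersey number plus face) against Player A's 0.1 (gait only), so the jersey-dominant weighting resolves the conflicting gait cue.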