Virtualizing content
11688145 · 2023-06-27 · ·

Techniques for virtualizing content are disclosed. Elements comprising source content are virtualized by mapping each to and representing each with a corresponding database object. A specification of the corresponding database objects is provided for rendering the source content instead of any original pixel information of the source content so that a virtualized version of the source content is rendered.
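The core idea above, mapping each content element to a database object and shipping a specification of object references instead of pixel data, can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and the dict-as-database are assumptions.

```python
import hashlib

def virtualize(elements, database):
    """Map each source element to a database object key; return the spec."""
    spec = []
    for element in elements:
        # Content-addressed key: identical elements share one stored object.
        key = hashlib.sha256(element.encode()).hexdigest()[:16]
        database.setdefault(key, element)   # store the object once
        spec.append(key)                    # the spec carries references only
    return spec

def render_spec(spec, database):
    """A renderer reconstructs the content from the spec plus the database."""
    return [database[key] for key in spec]

db = {}
source = ["logo", "headline", "logo"]       # source content as abstract elements
spec = virtualize(source, db)
virtualized = render_spec(spec, db)         # rendered from the spec, not pixels
```

Note that duplicate elements collapse to a single database object, so the spec is typically smaller than the original pixel information.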

Distributed content analysis network

A master node in a multimedia content network includes a processor; a content interface coupled to the processor and configured to receive, from a media source, a multimedia stream including a multimedia program; a network interface coupled to the processor and configured to provide an interface to a broadband network; and processor-executable instructions for performing operations including: identifying a group of subordinate nodes available to analyze a multimedia program; assigning different analysis tasks for the multimedia program to the available subordinate nodes; receiving analysis results from the subordinate nodes; and modifying electronic programming guide information for the multimedia program based on the analysis results.
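The master-node flow described above can be sketched with a thread pool standing in for the subordinate nodes. The task names, the stand-in `run_on_subordinate` call, and the EPG layout are illustrative assumptions, not details from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

ANALYSIS_TASKS = ["scene_detection", "speech_to_text", "face_recognition"]

def run_on_subordinate(node, task, program):
    # Stand-in for dispatching the task to a subordinate node over the network.
    return task, f"{node}:{task}({program})"

def analyze_program(program, available_nodes, epg):
    # Assign different analysis tasks to the available subordinate nodes.
    assignments = list(zip(available_nodes, ANALYSIS_TASKS))
    with ThreadPoolExecutor(max_workers=len(assignments)) as pool:
        futures = [pool.submit(run_on_subordinate, node, task, program)
                   for node, task in assignments]
        results = dict(f.result() for f in futures)
    # Modify the EPG information for the program based on the results.
    epg.setdefault(program, {}).update(results)
    return results

epg = {"news_at_9": {"title": "News at 9"}}
analyze_program("news_at_9", ["node-a", "node-b", "node-c"], epg)
```

Each subordinate node gets a different task, so the analyses run in parallel and the master only aggregates results into the guide entry.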

Transport stream packets with time stamp generation by medium access control

Provided is a method of transmitting transport stream packets from a terminal connected to a network including generating a time stamp with reference to time information managed in synchronization with other terminals by a medium access control (MAC) layer in order to control medium use in the network; processing the transport stream packets by using the time stamp; and transmitting the processed transport stream packets.
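A minimal sketch of the per-packet processing step: read a time stamp from a clock the MAC layer keeps synchronized across terminals, and attach it to each 188-byte transport stream packet before transmission. The 4-byte big-endian stamp format and the fixed clock value are assumptions for illustration only.

```python
TS_PACKET_SIZE = 188   # standard MPEG-2 transport stream packet length

def mac_synchronized_time():
    # Stand-in for the MAC layer's network-synchronized time, in clock ticks.
    return 0x12345678

def timestamp_packet(ts_packet):
    """Process one TS packet: prepend a MAC-referenced time stamp."""
    assert len(ts_packet) == TS_PACKET_SIZE and ts_packet[0] == 0x47  # sync byte
    stamp = mac_synchronized_time().to_bytes(4, "big")
    return stamp + ts_packet   # processed packet = time stamp + original packet

packet = bytes([0x47]) + bytes(TS_PACKET_SIZE - 1)
processed = timestamp_packet(packet)
```

Because every terminal stamps packets against the same MAC-managed clock, a receiver can reorder or pace packets consistently across the network.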

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY
20170287525 · 2017-10-05 · ·

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Interactive video/multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or compressed binary format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.
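The patent does not publish the IVMA format, so the JSON schema below (assets, behaviors keyed to embedded content-aware metadata) is a guessed text embodiment, shown only to illustrate how a player might load such data.

```python
import json

# Hypothetical standalone text embodiment of IVMA data.
IVMA_DATA = json.dumps({
    "assets": [{"id": "clip1", "type": "video", "uri": "clip1.mp4"}],
    "behaviors": [
        # Interactivity keyed to content-aware metadata embedded in the asset.
        {"on": "metadata:face_region", "action": "show_overlay",
         "target": "clip1"}
    ]
})

def load_ivm_application(ivma_text):
    """Parse IVMA data into a table of metadata-triggered behaviors."""
    app = json.loads(ivma_text)
    return {b["on"]: (b["action"], b["target"]) for b in app["behaviors"]}

handlers = load_ivm_application(IVMA_DATA)
```

A player application would consult this table whenever the referenced metadata event fires during playback.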

INFORMATION PROCESSING APPARATUS HAVING CAPABILITY OF APPROPRIATELY SETTING REGIONS DISPLAYED WITHIN AN IMAGE CAPTURING REGION USING DIFFERENT IMAGE CATEGORIES
20220053145 · 2022-02-17 ·

An information processing apparatus which accepts designation of a first image category from among a plurality of image categories including a visible light image, an infrared light image, and a composite image, accepts designation of a second image category different from the first image category from among the plurality of image categories, displays an image of the accepted first image category in a display region of a display unit, and accepts designation of a region in the image of the first image category displayed in the display region, wherein an image of the accepted second image category is displayed in the accepted region.
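The display step above, showing the first image category everywhere except a designated region that shows the second category, can be sketched with nested lists standing in for images. The `(top, left, height, width)` region encoding is an assumption.

```python
def composite_region(first_img, second_img, region):
    """Show first_img, with second_img substituted inside the region."""
    top, left, h, w = region
    out = [row[:] for row in first_img]            # copy of the first image
    for y in range(top, top + h):
        for x in range(left, left + w):
            out[y][x] = second_img[y][x]           # second category in region
    return out

# Visible-light image as the first category, infrared as the second.
visible = [[("V", y, x) for x in range(4)] for y in range(4)]
infrared = [[("I", y, x) for x in range(4)] for y in range(4)]
display = composite_region(visible, infrared, (1, 1, 2, 2))
```

Since both images cover the same image-capturing region, pixels in the designated area line up one-to-one between the two categories.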

Method of transforming an image file

A computer-implemented method of transforming an image file by an image file transformation apparatus, the method including providing an image file in a pixel-based format having a plurality of pixels, dividing the pixels into a plurality of patches, sampling the pixels to generate boundary conditions relating to each of the patches, deriving Fourier coefficients of a solution to a partial differential equation according to the boundary conditions, and outputting the Fourier coefficients for each of the patches as a transformed image file.
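One step of this pipeline can be made concrete: sample a patch's boundary pixels as periodic Dirichlet data and take their Fourier coefficients. For Laplace's equation on a disk the harmonic solution is u(r, t) = Σ c_n r^|n| e^(i n t), so the boundary coefficients c_n fully encode the interior; treating a square patch's boundary the same way is an approximation assumed here for illustration.

```python
import cmath

def boundary_samples(patch):
    """Walk the patch boundary clockwise and return the pixel values."""
    top = patch[0]
    right = [row[-1] for row in patch[1:-1]]
    bottom = patch[-1][::-1]
    left = [row[0] for row in patch[1:-1]][::-1]
    return top + right + bottom + left

def fourier_coefficients(samples):
    """Direct DFT of the boundary data: c_n = (1/N) Σ_k f_k e^(-2πi n k / N)."""
    n_pts = len(samples)
    return [sum(f * cmath.exp(-2j * cmath.pi * n * k / n_pts)
                for k, f in enumerate(samples)) / n_pts
            for n in range(n_pts)]

patch = [[10, 10, 10, 10],
         [10,  0,  0, 10],
         [10,  0,  0, 10],
         [10, 10, 10, 10]]
coeffs = fourier_coefficients(boundary_samples(patch))
```

For a constant boundary the DC coefficient carries everything and the rest vanish, which hints at why the transformed file can be compact: smooth patches need few coefficients.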

Client apparatus, client apparatus processing method, server, and server processing method
11240480 · 2022-02-01 · ·

Multiple clients (viewers) are allowed to share their VR spaces for communication with one another. A server-distributed stream including a video stream obtained by encoding a background image is received from a server. A client-transmitted stream including representative image meta information for displaying a representative image of another client is received from another client apparatus. The video stream is decoded to obtain the background image. The image data of the representative image is generated on the basis of the representative image meta information. Display image data is obtained by synthesizing the representative image on the background image.
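The client-side composition step can be sketched as follows: build another client's representative image from its meta information, then synthesize it onto the decoded background. The meta fields and placeholder "avatar" are assumptions; a real client would decode video and render an actual representative image.

```python
def representative_image_from_meta(meta):
    """Build a solid placeholder avatar from hypothetical meta information."""
    h, w, color = meta["height"], meta["width"], meta["color"]
    return [[color] * w for _ in range(h)]

def synthesize(background, avatar, pos):
    """Obtain display image data: avatar composited over the background."""
    top, left = pos
    out = [row[:] for row in background]
    for y, row in enumerate(avatar):
        for x, px in enumerate(row):
            out[top + y][left + x] = px
    return out

background = [["bg"] * 6 for _ in range(4)]    # decoded from the server stream
meta = {"height": 2, "width": 2, "color": "client2", "pos": (1, 2)}
display = synthesize(background, representative_image_from_meta(meta),
                     meta["pos"])
```

Sending only meta information per client keeps the shared VR space cheap: the heavy background video comes from the server once, and each peer contributes a lightweight description of itself.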

Mini-banner content
11455764 · 2022-09-27 · ·

Devices, systems, and methods are provided for use in interpreting, converting, generating, embedding, presenting, storing, and otherwise using mini-banner content. For at least one embodiment, a mini-banner content system may include a secondary content system element which executes non-transient computer-executable instructions to configure: a content interpreter to interpret secondary content, identify aspect information, and output the aspect information; and a content converter which, upon receiving aspect information, retrieves a first element corresponding to the aspect information and/or generates a second element corresponding to the aspect information, and generates a mini-banner content element based upon the results of the retrieving and/or generating operations.
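The interpreter-to-converter pipeline can be sketched as below. The aspect fields, the element library, and the HTML-like output are all illustrative assumptions.

```python
ELEMENT_LIBRARY = {"sale": "<img src='sale_badge.png'>"}   # retrievable elements

def interpret(secondary_content):
    """Content interpreter: identify and output aspect information."""
    return {"topic": secondary_content["topic"],
            "text": secondary_content["text"]}

def convert(aspect):
    """Content converter: retrieve and/or generate, then build the banner."""
    retrieved = ELEMENT_LIBRARY.get(aspect["topic"])           # first element
    generated = f"<span>{aspect['text']}</span>"               # second element
    parts = [p for p in (retrieved, generated) if p]
    return "<div class='mini-banner'>" + "".join(parts) + "</div>"

banner = convert(interpret({"topic": "sale", "text": "50% off"}))
```

The converter's retrieve-and/or-generate split lets it reuse stock elements when the aspect information matches the library and synthesize fresh ones when it does not.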

Apparatus and method for generating 3D video data

A plurality of video input units generate video frames and provide shooting characteristics. A 3D video frame generator creates a 3D video frame by combining a plurality of video frames, which are provided from the plurality of video input units, respectively, and provides 3D video frame composition information indicating a composition type of the plurality of video frames included in the 3D video frame, and resolution control information indicating adjustment/non-adjustment of resolutions of the video frames. A 3D video frame encoder outputs an encoded 3D video stream by encoding the 3D video frame provided from the 3D video frame generator. A composition information checker checks 3D video composition information including the shooting characteristics, the 3D video frame composition information, and the resolution control information. A 3D video data generator generates 3D video data by combining the 3D video composition information and the encoded 3D video stream.
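The generator step can be sketched for the common side-by-side composition: halve each view horizontally so the combined frame keeps the original width, and record the composition type and resolution adjustment alongside the shooting characteristics. Field names and the raw (unencoded) stream are assumptions.

```python
def make_3d_frame(left, right, shooting):
    """Combine left/right frames side by side and emit composition info."""
    # Halving each view means the resolutions were adjusted.
    half_l = [row[::2] for row in left]
    half_r = [row[::2] for row in right]
    frame = [l + r for l, r in zip(half_l, half_r)]
    composition_info = {"composition_type": "side_by_side",
                        "resolution_adjusted": True,
                        "shooting": shooting}
    return frame, composition_info

left = [["L"] * 4 for _ in range(2)]
right = [["R"] * 4 for _ in range(2)]
frame, info = make_3d_frame(left, right, {"baseline_mm": 65})
# 3D video data: composition information combined with the (here unencoded)
# 3D video stream.
video_data = {"info": info, "stream": frame}
```

A decoder needs exactly this composition information to know how to split the frame back into two views and whether to restore the original resolutions.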

Methods and systems for automatically evaluating an audio description track of a media asset
09774911 · 2017-09-26 · ·

Methods and systems for automatically evaluating an audio description track of a media asset include initializing a rating of an audio description track of a media asset to a default value; receiving a first video frame and a second video frame of the media asset; detecting an object in the first video frame and the second video frame; determining that a difference in a characteristic of the object between the first video frame and the second video frame exceeds a threshold difference; determining that an audio characteristic in a portion of the audio description track that corresponds to the first video frame and the second video frame exceeds a threshold audio characteristic; and increasing the rating of the audio description track by a unit.
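The evaluation loop above maps directly to code: start the rating at a default and raise it by one unit whenever a large change in a tracked object's characteristic coincides with sufficient activity in the matching audio-description segment. The thresholds and the choice of object size as the characteristic are illustrative assumptions.

```python
DEFAULT_RATING = 0
SIZE_THRESHOLD = 10          # minimum change in object size between frames
AUDIO_THRESHOLD = 0.5        # minimum audio level in the matching segment

def evaluate_audio_description(frame_pairs):
    """frame_pairs: (size_in_frame1, size_in_frame2, audio_level) tuples."""
    rating = DEFAULT_RATING
    for size1, size2, audio_level in frame_pairs:
        visual_change = abs(size2 - size1) > SIZE_THRESHOLD
        described = audio_level > AUDIO_THRESHOLD
        if visual_change and described:
            rating += 1      # the visual change is described: credit the track
    return rating

samples = [(40, 55, 0.9),    # big change, described: +1
           (50, 52, 0.9),    # little change: no credit either way
           (30, 80, 0.1)]    # big change but silent description: no credit
rating = evaluate_audio_description(samples)
```

The rating thus rewards tracks that narrate exactly when the picture changes, which is the property a useful audio description should have.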