Patent classifications
H04N21/85
Providing restricted overlay content to an authorized client device
A processing device for generating a viewing data report is disclosed. The processing device may include a memory device and a processor. The memory device may store instructions. The processor may be operatively coupled to the memory device. The processor may execute the instructions to: determine first viewing data associated with a first automatic content recognition (ACR) event; determine second viewing data associated with a second ACR event; determine that a data field of a plurality of data fields in the first viewing data is incomplete; derive new data for the data field using other data fields of the first viewing data; aggregate the first viewing data and the second viewing data into a single data model to obtain aggregated viewing data of viewing behavior of a first viewer and a second viewer; and create a viewing data report as a compilation of the aggregated viewing data.
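The steps the abstract describes (detect an incomplete field, derive it from other fields of the same record, aggregate two viewers' data into one model, compile a report) can be sketched as follows. This is a minimal illustration, not the patented implementation; all field names (`watch_seconds`, `start_ts`, `end_ts`, `viewer_id`, `acr_event`) are hypothetical.

```python
def derive_missing_fields(record):
    """Fill an incomplete field using other fields of the same record.
    Here a missing 'watch_seconds' is derived from hypothetical
    start/end timestamps of the ACR event."""
    record = dict(record)  # do not mutate the caller's data
    if record.get("watch_seconds") is None:
        record["watch_seconds"] = record["end_ts"] - record["start_ts"]
    return record

def aggregate(first, second):
    """Merge two per-viewer records into a single data model."""
    first = derive_missing_fields(first)
    second = derive_missing_fields(second)
    return {
        "viewers": [first["viewer_id"], second["viewer_id"]],
        "total_watch_seconds": first["watch_seconds"] + second["watch_seconds"],
        "events": [first["acr_event"], second["acr_event"]],
    }

def viewing_report(model):
    """Compile the aggregated viewing data into a report string."""
    return (f"{len(model['viewers'])} viewers, "
            f"{model['total_watch_seconds']} s total across events {model['events']}")
```

A record with a missing `watch_seconds` is completed from its timestamps before aggregation, so the final model never carries the incomplete field forward.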
System and method for generating videos
A system comprising a processor configured to: provide a master Three-Dimensional (3D) scene; insert at least one source video feed into at least one position within the master 3D scene, allowing a configuration in which at least a first part of the master 3D scene is in front of the source video feed and at least a second part of the master 3D scene is behind the source video feed; and generate a combined video of the master 3D scene with the at least one source video feed inserted therein.
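The claimed configuration, in which part of the master 3D scene renders in front of the inserted video feed and part behind it, amounts to depth-ordered compositing. A toy sketch (assuming layers are tiny 2D grids of single-character "pixels", with `None` marking transparency, which is not how the system represents scenes):

```python
def composite(layers):
    """Combine layers back-to-front: paint the farthest layer first, so a
    nearer layer occludes it wherever the nearer layer has content.
    Each layer is {"depth": float, "pixels": 2D grid}; None is transparent."""
    ordered = sorted(layers, key=lambda l: l["depth"], reverse=True)
    h = len(ordered[0]["pixels"])
    w = len(ordered[0]["pixels"][0])
    out = [[" "] * w for _ in range(h)]
    for layer in ordered:
        for y in range(h):
            for x in range(w):
                p = layer["pixels"][y][x]
                if p is not None:
                    out[y][x] = p
    return ["".join(row) for row in out]
```

Placing the source video feed at an intermediate depth yields exactly the described arrangement: scene geometry deeper than the feed is hidden by it, geometry nearer than the feed covers it.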
SYSTEM AND METHODS FOR INTERACTIVE FILTERS IN LIVE STREAMING MEDIA
The present disclosure describes a system and methods for interactive filters in live streaming multimedia. At least one method includes: a user playing video games on a computer; using streaming software to combine all or part of their computer session with their local camera feed; using the streaming software to encode the video and stream it to one or more streaming services; the streaming services displaying the video stream to one or more viewers; said viewers interacting with the video via the streaming service; the user's streaming software retrieving data about viewer interactions; the streaming software using a computer vision algorithm to detect the position of an object in the user's camera feed, such as the user's face or hands; the streaming software retrieving animation code; the streaming software using the detected position to generate a graphical image that aligns with and follows the detected object in the local camera feed; the streaming software adding the graphical image to the video stream in direct response to viewer interactions; and said graphical image being inserted into the video stream before the stream is published by the streaming service for viewers to consume.
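The core loop (detect an object's position in the camera feed, then anchor a viewer-triggered graphic to that position before publishing the frame) can be sketched as below. The detector itself is elided; `detected_pos` stands in for the output of a hypothetical computer vision step, and the data shapes are illustrative.

```python
def render_overlay(detected_pos, interaction):
    """Place a graphic so it aligns with the detected object (e.g. the
    streamer's face). An overlay is produced only when a viewer
    interaction is pending AND the object was detected in this frame."""
    if interaction is None or detected_pos is None:
        return None
    x, y = detected_pos
    return {"graphic": interaction["graphic"], "x": x, "y": y}

def stream_frames(camera_positions, interactions):
    """Compose each outgoing frame before it is published: the overlay is
    attached per-frame, so it follows the detected object as it moves."""
    frames = []
    for pos, inter in zip(camera_positions, interactions):
        frames.append({"camera": pos, "overlay": render_overlay(pos, inter)})
    return frames
```

Because the overlay position is recomputed every frame from the detector output, the graphic tracks the object rather than sitting at a fixed screen location.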
Applications generating statistics for user behavior
An architecture to assemble and manage usage information and populate one or more panels in an intelligent TV. The architecture includes a usage statistics provider module adapted to assemble one or more of usage information and installation information and query the one or more of usage information and installation information to populate one or more of icons and information in a view or panel on the intelligent TV. The architecture further includes a panel manager adapted to assemble the one or more of icons and information into a requested view. A display controller displays the view on a display of the intelligent TV. A silo manager sorts information in at least one panel subcategory based at least on the one or more of usage information and installation information, where the at least one subpanel includes a plurality of icons each representing an app or content.
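The silo manager's sorting step can be illustrated as a simple ordering of a subpanel's icons by the assembled statistics. A sketch under stated assumptions: the tie-breaking rule (usage count first, then install recency) and the dictionary shapes are hypothetical, not taken from the patent.

```python
def sort_panel_icons(icons, usage_counts, install_times):
    """Order a subpanel's app/content icons by usage count, breaking ties
    with install recency (a larger timestamp means installed more
    recently). Apps absent from a table default to 0."""
    return sorted(
        icons,
        key=lambda app: (usage_counts.get(app, 0), install_times.get(app, 0)),
        reverse=True,
    )
```

The panel manager would then lay the returned order out into the requested view for the display controller.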
Video generation system to render frames on demand
Method to generate frames on demand starts with a system receiving a request for a media content item from a client device. The request includes a media content identification and a main user identification. The system transmits to the client device a playlist including a first set of media content item segments. While the first set of media content item segments is being displayed on the client device, the system renders a second set of media content item segments using the media content identification and the main user identification. Rendering the second set of media content item segments can include rendering a main user avatar based on the main user identification and incorporating the main user avatar into the second set of media content item segments. The system then updates the playlist to include the second set of media content item segments. Other embodiments are disclosed herein.
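The playlist flow described above (serve a first set of segments, render a personalized second set while the first plays, then extend the playlist) can be sketched as below. The segment naming and the `avatar:` marker are illustrative stand-ins for the actual rendering of a user avatar into the segments.

```python
def initial_playlist(content_id, first_segments):
    """Playlist returned in response to the initial request, carrying the
    media content identification and the first set of segments."""
    return {"content_id": content_id, "segments": list(first_segments)}

def render_personalized_segments(content_id, user_id, count):
    """Stand-in for on-demand rendering: each new segment embeds an
    avatar derived from the main user identification."""
    avatar = f"avatar:{user_id}"
    return [f"{content_id}/seg{n}+{avatar}" for n in range(count)]

def update_playlist(playlist, new_segments):
    """Append the freshly rendered second set while the first set is
    still being displayed on the client."""
    playlist["segments"].extend(new_segments)
    return playlist
```

Because rendering happens while the first set plays out, the client never waits on the personalized segments as long as rendering finishes within the playback window of the first set.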
Methods and systems for derived immersive tracks
The techniques described herein relate to methods, apparatus, and computer-readable media configured to: access media data for a first three-dimensional (3D) immersive media experience, including one or more media tracks, each including an associated series of samples of media data for a different component of the first 3D immersive media experience, and one or more derived immersive tracks, each comprising a set of derivation operations to perform to generate an associated series of samples of media data for a different component of a second 3D immersive media experience; and perform, for each of the one or more derived immersive tracks, a derivation operation of the set of derivation operations by processing associated samples of the one or more media tracks as specified by the derivation operation, to generate the associated series of samples of media data of the second 3D immersive media experience.
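The derivation step (consume time-aligned samples from the source media tracks, apply the derived track's operations, emit samples of the second experience) can be sketched as below. This is a schematic model only: real derived tracks operate on coded media samples in a container format, not on Python values, and the track/operation shapes here are assumptions.

```python
def derive_track(source_tracks, derivation_ops):
    """Generate a derived track's sample series. For each time index, a
    'sample' of time-aligned inputs is taken from the named source
    tracks, then each derivation operation transforms it in turn."""
    length = min(len(track) for track in source_tracks.values())
    derived = []
    for i in range(length):
        sample = {name: track[i] for name, track in source_tracks.items()}
        for op in derivation_ops:
            sample = op(sample)
        derived.append(sample)
    return derived
```

Chaining the operations per time index mirrors the abstract's structure: the derived track stores only the recipe (the operations), and its samples exist only after derivation is performed over the source tracks.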