Patent classifications
H04N21/23412
Extended reality recorder
Implementations of the subject technology provide systems and methods for recording an extended reality experience in a way that allows the experience to be played back later from a different viewpoint or perspective. This allows computer-generated content that was rendered for display to a user during the recording to be re-rendered during playback at the correct time and location in the recording, but from a different perspective. To facilitate this viewer-centric playback, the recording includes a computer-generated content track that references the resources needed to re-render the computer-generated content at each point in time in the recording.
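The idea of a computer-generated content track that references re-render resources per timestamp can be sketched roughly as follows. All class and field names here are illustrative assumptions, not the patent's actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch: each CG track entry stores an asset reference and a
# world-space anchor per timestamp, so the content can be re-rendered later
# from a viewpoint other than the one used during recording.

@dataclass
class CGTrackEntry:
    timestamp: float        # seconds into the recording
    asset_id: str           # reference to the resource needed to re-render
    world_position: tuple   # (x, y, z) anchor of the content in the scene

@dataclass
class Recording:
    video_track: list       # captured video frames (omitted in this sketch)
    cg_track: list          # list of CGTrackEntry, one per content instance

def entries_at(recording, t, tol=1e-3):
    """Find the CG content that must be re-rendered at time t of playback."""
    return [e for e in recording.cg_track if abs(e.timestamp - t) < tol]

def view_space_position(entry, camera_position):
    """Re-express the anchored content relative to a *new* playback viewpoint,
    not the viewpoint from which the experience was originally recorded."""
    return tuple(p - c for p, c in zip(entry.world_position, camera_position))
```

Because the track stores world-space anchors rather than baked pixels, `view_space_position` can be evaluated against any playback camera.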
Generating composite video stream for display in VR
A processor system and computer-implemented method may be provided for generating a composite video stream that combines a background video and a foreground video stream into one stream. For that purpose, a spatially segmented encoding of the background video may be obtained, for example in the form of a tiled stream. The foreground video stream may be received, for example, from another client device, and may be a real-time stream, e.g., when used in real-time communication. The image data of the foreground video stream may be inserted into the background video by decoding select segments of the background video, inserting the foreground image data into the decoded background image data of these segments, and encoding the resulting composite image data to obtain composite segments which, together with the non-processed segments of the background video, form a spatially segmented encoding of a composite video.
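The selective decode/re-encode step can be illustrated with a toy tile grid. The decode and encode calls below are string stand-ins for real codec operations, and the tile size is an arbitrary assumption; the point is that only tiles the foreground overlaps are reprocessed:

```python
# Minimal sketch of segment-selective compositing over a background frame
# split into independently decodable tiles. Tiles untouched by the
# foreground pass through without transcoding.

TILE = 4  # tile edge length in pixels (illustrative)

def overlapping_tiles(fg_x, fg_y, fg_w, fg_h, frame_w, frame_h):
    """Return the (col, row) indices of background tiles the foreground covers."""
    cols = range(fg_x // TILE, min((fg_x + fg_w - 1) // TILE + 1, frame_w // TILE))
    rows = range(fg_y // TILE, min((fg_y + fg_h - 1) // TILE + 1, frame_h // TILE))
    return {(c, r) for c in cols for r in rows}

def composite(bg_tiles, fg_rect, frame_w, frame_h):
    """bg_tiles maps (col, row) -> encoded tile. Returns the composite tile
    set, re-encoding only the tiles that needed foreground insertion."""
    touched = overlapping_tiles(*fg_rect, frame_w, frame_h)
    out = {}
    for key, tile in bg_tiles.items():
        if key in touched:
            decoded = f"decoded({tile})"      # stand-in for a real decode
            merged = f"{decoded}+fg"          # insert foreground image data
            out[key] = f"encoded({merged})"   # re-encode the composite tile
        else:
            out[key] = tile                   # non-processed segment, passed through
    return out, touched
```

Keeping most tiles untouched is what makes the approach cheap: transcoding cost scales with the foreground's footprint, not the frame size.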
CONSISTENT GENERATION OF MEDIA ELEMENTS ACROSS MEDIA
An example method performed by a processing system includes retrieving a digital model of a media element from a database storing a plurality of media elements, where the media element is to be inserted into a scene of an audiovisual media. The method renders the media element in the scene based on the digital model and on metadata associated with the digital model, where the metadata describes a characteristic of the media element and a limit on that characteristic, producing a rendered media element. The rendered media element is then inserted into the scene of the audiovisual media.
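The role of the metadata limit can be shown with a small sketch in which the renderer clamps whatever the scene requests to the limit stored with the model. The field names and the example model are assumptions, not the patent's actual schema:

```python
# Hypothetical sketch of limit-aware rendering: the digital model's metadata
# names a characteristic and a limit on it, and rendering clamps the scene's
# requested value to that limit so the element stays consistent across media.

def render_media_element(model, scene_params):
    """Produce a rendered element whose characteristics respect metadata limits."""
    rendered = dict(model["defaults"])
    for meta in model["metadata"]:
        name, limit = meta["characteristic"], meta["limit"]
        requested = scene_params.get(name, rendered.get(name))
        rendered[name] = min(requested, limit)  # enforce the stored limit
    return rendered

soda_can = {
    "defaults": {"scale": 1.0, "brightness": 0.8},
    "metadata": [{"characteristic": "scale", "limit": 1.5},
                 {"characteristic": "brightness", "limit": 1.0}],
}
element = render_media_element(soda_can, {"scale": 3.0})  # scene asks for too much
```

Here the scene requests a scale of 3.0, but the element is rendered at the metadata limit of 1.5.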
Dynamic, interactive segmentation in layered multimedia content
Computer-implemented systems and methods are described for providing layered multimedia content. Specifically, the systems and methods can analyze a content stream or file using software to identify objects present in the content, which could include people, items, places, music, sounds, and so forth. One or more elements can be generated and overlaid onto the content, allowing a viewer of the content to access information about an object and/or purchase a product or service associated with it. Such information can be presented to the viewer when the viewer clicks on or otherwise interacts with the element.
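A minimal sketch of the overlay layer, assuming object detection has already produced labeled bounding boxes (the element fields below are illustrative, not the patent's):

```python
# Sketch: generate one clickable overlay element per detected object, and
# resolve a viewer click to the element whose region contains it.

def make_overlay_elements(detected_objects):
    """One interactive element per detected object in the content."""
    return [
        {"label": obj["name"],
         "region": obj["bbox"],                  # (x, y, w, h) to draw over
         "info": f"About {obj['name']}",
         "purchase_url": obj.get("store_link")}  # optional commerce link
        for obj in detected_objects
    ]

def on_click(elements, x, y):
    """Return the element whose region contains the click, if any."""
    for el in elements:
        x0, y0, w, h = el["region"]
        if x0 <= x < x0 + w and y0 <= y < y0 + h:
            return el
    return None
```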
VOLUMETRIC MEDIA PROCESS METHODS AND APPARATUS
Methods, systems and apparatus for processing volumetric media data are described. One example method includes determining, from a media presentation description (MPD) file, one or more preselection elements corresponding to a preselection of a volumetric media; accessing, using the one or more preselection elements, one or more atlas data components and associated video-encoded components of the volumetric media; and reconstructing the volumetric media from the one or more atlas data components and the associated video-encoded components.
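The preselection lookup can be sketched against a simplified MPD fragment. The XML below is illustrative only (real V3C/DASH signaling carries richer descriptors), but the pattern is the same: read the component id list from the Preselection, then fetch the matching AdaptationSets:

```python
import xml.etree.ElementTree as ET

# Simplified, assumed MPD: one Preselection naming an atlas component and two
# video-encoded components (geometry and texture) by AdaptationSet id.
MPD = """
<MPD>
  <Period>
    <Preselection id="vol0" preselectionComponents="atlas geo tex"/>
    <AdaptationSet id="atlas" contentType="application"/>
    <AdaptationSet id="geo" contentType="video"/>
    <AdaptationSet id="tex" contentType="video"/>
  </Period>
</MPD>
"""

def resolve_preselection(mpd_xml, preselection_id):
    """Resolve a preselection to the component AdaptationSets it references."""
    root = ET.fromstring(mpd_xml)
    sets = {a.get("id"): a for a in root.iter("AdaptationSet")}
    for pre in root.iter("Preselection"):
        if pre.get("id") == preselection_id:
            wanted = pre.get("preselectionComponents").split()
            return [sets[i] for i in wanted]
    raise KeyError(preselection_id)

components = resolve_preselection(MPD, "vol0")
# The atlas component is accessed first; the associated video-encoded
# components are then used to reconstruct the volumetric media.
```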
MEDIA STREAMING
A media playback system presents to a user a composition of a plurality of media streams. It has a media selection component configured to receive a scenario dataset, to receive user input selecting viewing times that define segments of media and composition selections, and to output a list of segments of media from the scenario dataset that the user is authorized to view. The system has a playback control component configured to retrieve from media storage at least the segments of media from the output list, to decode those segments, and to compile composition instructions. The system has a media playback component configured to receive the decoded media and the composition instructions.
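The media selection step reduces to filtering the scenario dataset by the selected time window and by per-user authorization. The dataset shape below is an assumption for illustration:

```python
# Sketch of the media selection component: keep only segments that overlap
# the viewer-selected time window AND that this user is authorized to view.

def select_segments(scenario, user, start, end):
    """Return the list of playable segments for this user and time window."""
    return [
        seg for seg in scenario["segments"]
        if seg["start"] < end and seg["end"] > start   # overlaps the window
        and user in seg["authorized_users"]            # viewing is permitted
    ]

scenario = {"segments": [
    {"id": "a", "start": 0, "end": 10, "authorized_users": {"alice", "bob"}},
    {"id": "b", "start": 10, "end": 20, "authorized_users": {"alice"}},
]}
playlist = select_segments(scenario, "bob", 0, 20)  # bob may only see "a"
```

The output list then drives the playback control component, which retrieves and decodes exactly these segments.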
Systems and methods for removing identifiable information
Systems and methods for censoring text characters in text-based data are provided. In some embodiments, an artificial intelligence system may be configured to receive text-based data and store it in a database. The artificial intelligence system may be configured to receive a list of target pattern types identifying sensitive data, and to receive censorship rules that determine which target pattern types require censorship. The artificial intelligence system may be configured to assemble a computer-based model for a received target pattern type in the list, to use that model to identify a target data pattern corresponding to the received target pattern type within the text-based data, to identify target characters within the target data pattern, and to assign an identification token to the target characters.
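The pipeline can be sketched with regular expressions standing in for the computer-based models (the patent's models need not be regexes; the pattern types and token format below are assumptions):

```python
import re

# Sketch: each target pattern type is "modeled" by a regex, censorship rules
# say which types require censorship, and matched target characters are
# replaced by an identification token recording the pattern type.

PATTERN_MODELS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b"),
}

def censor(text, rules):
    """rules maps pattern type -> bool (does this type require censorship?)."""
    for ptype, model in PATTERN_MODELS.items():
        if rules.get(ptype):
            text = model.sub(f"[{ptype.upper()}]", text)  # identification token
    return text

masked = censor("Reach me at jo@example.com, SSN 123-45-6789.",
                {"email": True, "ssn": True})
```

Replacing the characters with a typed token, rather than deleting them, preserves the structure of the text and records what kind of data was removed.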
VIDEO GENERATION SYSTEM TO RENDER FRAMES ON DEMAND USING A FLEET OF GPUS
A content controller system to render frames on demand comprises a rendering server system that includes a plurality of graphics processing units (GPUs). The GPUs in the rendering server system render a set of media content item segments using a media content identification and a main user identification. Rendering the set of segments includes retrieving metadata associated with the media content identification from a metadata database, rendering the segments using the metadata, generating a main user avatar based on the main user identification, and incorporating the main user avatar into the segments. The rendering server system then uploads the set of segments to a segment database and updates segment states in a segment state database to indicate that the segments are available. Other embodiments are disclosed herein.
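The metadata-lookup, render-with-avatar, upload, and state-update flow can be sketched with in-memory dicts standing in for the three databases. The rendering itself is stubbed with strings; in the described system this work would be sharded across the GPU fleet:

```python
# Illustrative stand-ins for the metadata, segment, and segment state databases.
metadata_db = {"story42": {"frames": 2, "template": "intro"}}
segment_db = {}
segment_states = {}

def render_on_demand(content_id, user_id):
    """Render segments for one (content, user) pair and mark them available."""
    meta = metadata_db[content_id]                    # retrieve metadata
    avatar = f"avatar({user_id})"                     # generate main user avatar
    for i in range(meta["frames"]):
        seg_key = (content_id, user_id, i)
        segment = f"{meta['template']}[{i}]+{avatar}" # render, incorporating avatar
        segment_db[seg_key] = segment                 # upload to segment database
        segment_states[seg_key] = "available"         # update segment state
    return [segment_db[(content_id, user_id, i)] for i in range(meta["frames"])]
```

The separate state database is what lets a player poll for availability without touching the (much larger) segment store.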
Video streaming apparatus, video editing apparatus, and video delivery system
A video delivery system according to the present disclosure includes: a video streaming apparatus which records and transmits video data; and a video editing apparatus which receives the video data and edits a video based on it. The video streaming apparatus includes a streaming processing unit which transmits information indicating whether a record processing unit has recorded a video file. The video editing apparatus includes a range designation means which indicates to a user, based on a determination by an additional information interpretation means, the time range for which the video streaming apparatus has recorded a video file.
CREATING INTERACTIVE DIGITAL EXPERIENCES USING A REALTIME 3D RENDERING PLATFORM
Certain aspects of the present disclosure provide techniques for creating interactive digital experiences for linear content. This includes identifying a plurality of assets relating to presentation of linear content. It further includes generating interactive content using the linear content, including generating an interactive sequence referencing one or more of the plurality of assets and combining the linear content with the interactive sequence on a timeline sequentially describing the linear content. The timeline includes one or more branches relating to the linear content, and selection of a first branch of the one or more branches is based on the interactive sequence. It further includes transmitting the interactive content to a user.
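A branching timeline over linear content can be sketched as a sequence of linear items and branch points, where the interactive sequence's outcome selects which branch continues playback. The timeline structure and choice keys below are assumptions:

```python
# Sketch: a timeline sequentially describing linear content, with one branch
# point whose selection is driven by the result of an interactive sequence.

timeline = [
    {"type": "linear", "asset": "scene1"},
    {"type": "branch", "choices": {"explore": ["scene2a"], "skip": ["scene2b"]}},
    {"type": "linear", "asset": "scene3"},
]

def play(timeline, interactive_choice):
    """Flatten the timeline into the sequence of assets actually presented."""
    played = []
    for item in timeline:
        if item["type"] == "linear":
            played.append(item["asset"])
        else:  # branch selection based on the interactive sequence
            played.extend(item["choices"][interactive_choice])
    return played
```

Two viewers making different interactive choices thus see different middle segments but the same linear frame around them.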