Patent classifications
H04N21/816
VIRTUAL VIDEO LIVE STREAMING PROCESSING METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE
The application provides a virtual video live streaming processing method and apparatus, an electronic device, and a computer-readable storage medium, and relates to the field of virtual video live streaming technologies. The virtual video live streaming processing method includes: obtaining text data and determining to-be-synthesized video data corresponding to the text data; synthesizing a live video stream in real time according to the to-be-synthesized video data and pushing the live video stream to a live streaming client; in response to receiving a live streaming interruption request, determining target video data from the to-be-synthesized video data that has not been synthesized into a live video stream; and synthesizing an interruption transition video stream according to the target video data and pushing the interruption transition video stream to the live streaming client. When a live video is interrupted during a virtual video live streaming process, this application can implement a smooth transition between the current video action and the next video action without affecting real-time performance of the live video.
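The interruption handling described in this abstract can be sketched roughly as follows. The segment labels, queue structure, and string placeholders for synthesized streams are all illustrative assumptions; the abstract does not specify the actual synthesis pipeline:

```python
# Hypothetical sketch: segments queued for synthesis are streamed one by
# one; when an interruption request arrives, target video data is chosen
# from the segments not yet synthesized, and an interruption transition
# stream is produced from it instead of the normal live stream.
from collections import deque

def run_live_stream(segments, interrupt_after):
    """Return the list of streams pushed to the client.

    segments        -- to-be-synthesized video data (one label per segment)
    interrupt_after -- number of segments synthesized before the
                       interruption request is received
    """
    queue = deque(segments)
    pushed = []
    synthesized = 0
    while queue:
        if synthesized == interrupt_after:
            # Interruption request received: pick target video data from
            # the data not yet synthesized into the live stream ...
            target = queue[0]
            # ... and push an interruption transition stream, bridging the
            # current video action and the next one.
            pushed.append(f"transition({target})")
            break
        pushed.append(f"live({queue.popleft()})")
        synthesized += 1
    return pushed
```

Because the transition stream is built from data already queued for synthesis, the switch can happen without stalling the real-time stream, which is the property the abstract emphasizes.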
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing system includes a controller. The controller acquires viewing state information together with viewer identification information from terminals of a plurality of viewers that are reproducing, in real time via a network, content in which a performance of a performer is imaged, the viewing state information indicating a line of sight or a position of each viewer in a coordinate system of the space where the viewer is present, the viewer identification information identifying the viewer. Further, the controller adds an effect to the content for each of the viewers on the basis of the acquired viewing state information.
MEDIA FILE ENCAPSULATING METHOD, MEDIA FILE DECAPSULATING METHOD, AND RELATED DEVICES
This application provides a media file encapsulating method, a media file decapsulating method, and related devices. The media file encapsulating method includes: acquiring a media stream of a target media content in a corresponding application scenario; encapsulating the media stream to generate an encapsulation file of the media stream, the encapsulation file including a first application scenario type field, the first application scenario type field being used for indicating the application scenario corresponding to the media stream; and transmitting the encapsulation file to a first device for the first device to determine the application scenario corresponding to the media stream according to the first application scenario type field and determine at least one of a decoding method and a rendering method of the media stream according to the application scenario corresponding to the media stream. This method can distinguish different application scenarios in the encapsulation of media files.
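The encapsulation scheme above can be illustrated with a minimal sketch. The one-byte field layout and the scenario codes below are assumptions made for illustration; the real encapsulation format defines its own box structure and values:

```python
import struct

# Assumed scenario codes, for illustration only (the actual values are
# defined by the encapsulation format, not here).
SCENARIOS = {0: "ordinary video", 1: "multi-view video", 2: "point cloud"}

def encapsulate(scenario_type, payload):
    """Prefix the media stream payload with a one-byte application
    scenario type field (a simplified stand-in for the first application
    scenario type field described in the abstract)."""
    return struct.pack(">B", scenario_type) + payload

def decapsulate(encap_file):
    """The first device reads the field before touching the payload, so
    it can choose a decoding/rendering method per application scenario."""
    (scenario_type,) = struct.unpack_from(">B", encap_file, 0)
    return SCENARIOS[scenario_type], encap_file[1:]
```

The point of the field is exactly this ordering: the receiver learns the application scenario from the encapsulation layer alone, before any decoding work begins.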
Viewport dependent video streaming events
Systems and methods described herein provide for rendering and quality monitoring of rendering of a 360-degree video, where the video has a plurality of representations with different levels of quality in different regions. In an exemplary method, a client device tracks a position of a viewport with respect to the 360-degree video and renders to the viewport a selected set of the representations. The client adaptively adds and removes representations from the selected set based on the viewport position. The client also measures and reports a viewport switching latency. In some embodiments, the latency for a viewport switch is a comparable-quality viewport switch latency that represents the time it takes after a viewport switch to return to a quality comparable to the pre-switch viewport quality.
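The comparable-quality viewport switch latency can be modeled as the time it takes, after the switch, for rendered quality to return to its pre-switch level. The sketch below assumes quality is sampled as (timestamp, quality) pairs, which is an illustrative simplification of the client-side measurement:

```python
def switch_latency(samples, switch_time):
    """Comparable-quality viewport switch latency (simplified model).

    samples     -- time-ordered list of (timestamp, rendered_quality) pairs
    switch_time -- timestamp of the viewport switch

    Returns the elapsed time after the switch at which rendered quality
    first returns to at least the last pre-switch quality, or None if it
    never recovers within the sampled window.
    """
    pre_switch = [q for t, q in samples if t < switch_time]
    if not pre_switch:
        return None
    target = pre_switch[-1]  # quality just before the switch
    for t, q in samples:
        if t >= switch_time and q >= target:
            return t - switch_time
    return None
```

For example, with samples [(0, 5), (1, 5), (2, 2), (3, 3), (4, 5)] and a switch at t=2, quality drops to 2 and only returns to the pre-switch level 5 at t=4, giving a latency of 2.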
METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO CONFERENCING
A method comprising: obtaining a 360-degree video content from a video source; projecting the 360-degree video content onto a 2D image plane; dividing the projected 360-degree video content into a plurality of regions, wherein the regions are partly overlapping and each region covers a portion of the 360-degree video content suitable for a viewport presentation; receiving a request for a viewport orientation of the 360-degree video content from a client; and providing the client with a viewport presentation of the region corresponding to the requested viewport orientation.
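The overlapping-region split and the orientation-to-region lookup can be sketched in one dimension (yaw only), which is a deliberate simplification of the 2D projected split described above; the region count and overlap width are free parameters, not values from the method:

```python
def make_regions(num_regions, overlap_deg):
    """Divide the 360° yaw range into num_regions partly overlapping
    regions, each given as (start_yaw, width_deg)."""
    width = 360 / num_regions + overlap_deg
    return [((i * 360 / num_regions) % 360, width) for i in range(num_regions)]

def region_for(regions, yaw):
    """Server side of the request: pick the region whose center is
    closest to the requested viewport orientation."""
    def center(region):
        start, width = region
        return (start + width / 2) % 360

    def ang_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(range(len(regions)), key=lambda i: ang_dist(center(regions[i]), yaw))
```

The overlap is what makes this workable for conferencing: small head movements stay inside the current region, so the server does not have to re-cut a viewport for every orientation update.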
Systems and methods for virtual and augmented reality
Disclosed are methods, systems, and computer program products for mixed-reality systems. These methods or systems determine a three-dimensional model for at least a portion of a physical environment in which a user is located; and present, by a spatial computing system, a mixed-reality representation to the user. In addition, these methods or systems determine a first object model for a first object in the mixed-reality representation and update, by the spatial computing system, the mixed-reality representation into an updated mixed-reality representation that reflects an interaction pertaining to the first object.
Artificial reality system using superframes to communicate surface data
This disclosure describes efficient communication of surface texture data between system on a chip (SoC) integrated circuits. An example system includes a first integrated circuit and a second integrated circuit communicatively coupled to the first integrated circuit by a video communication interface. The first integrated circuit generates a superframe in a video frame of the video communication interface for transmission to the second integrated circuit. The superframe includes multiple subframe payloads that carry surface texture data to be updated in the frame, and corresponding subframe headers that include parameters of the subframe payloads. The second integrated circuit includes a direct memory access (DMA) controller. Upon receipt of the superframe, the DMA controller writes the surface texture data within each of the subframe payloads directly to an allocated location in memory based on the parameters included in the corresponding subframe header.
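The superframe layout and the DMA controller's behavior can be emulated with a small sketch. The header format (destination offset plus payload length) is an assumption for illustration; the disclosure's actual header parameters are not reproduced here:

```python
import struct

# Assumed subframe header: (memory_offset, payload_length), big-endian.
HEADER = struct.Struct(">II")

def build_superframe(subframes):
    """Pack (offset, payload) pairs into one superframe: each subframe
    payload is preceded by a header carrying its parameters."""
    out = b""
    for offset, payload in subframes:
        out += HEADER.pack(offset, len(payload)) + payload
    return out

def dma_write(superframe, memory):
    """Emulate the DMA controller: walk the subframe headers and copy
    each payload directly to its allocated location in memory."""
    pos = 0
    while pos < len(superframe):
        offset, length = HEADER.unpack_from(superframe, pos)
        pos += HEADER.size
        memory[offset:offset + length] = superframe[pos:pos + length]
        pos += length
    return memory
```

Because each header names its own destination, the receiver can scatter the payloads into memory in a single pass, without a separate control channel describing where each texture update belongs.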
ATSC OVER-THE-AIR (OTA) BROADCAST OF PUBLIC VOLUMETRIC AUGMENTED REALITY (AR)
Techniques are described for using the Advanced Television Systems Committee (ATSC) 3.0 television protocol to deliver volumetric information for presentation on various displays using ATSC over-the-air communications channels.
Systems and methods for generating time lapse videos
Video information may define spherical video content having a duration. Spherical video content may define visual content viewable from a point of view as a function of progress through the spherical video content. Path information may define a path selection for the spherical video content. Path selection may include movement of a viewing window within the spherical video content. The viewing window may define extents of the visual content viewable from the point of view as the function of progress through the spherical video content. Time lapse parameter information may define at least two of a time portion of the duration, an image sampling rate, and a time lapse speed effect. A time lapse video may be generated based on the video information, the path information, and the time lapse parameter information.
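Combining a time portion, an image sampling rate, and a speed effect can be shown with a toy model. Treating the sampling rate as "keep every Nth frame" and the speed effect as a per-frame repeat count are illustrative assumptions, and the viewing-window path is omitted for brevity:

```python
def time_lapse(frames, start, end, sampling_rate, speed):
    """Toy time lapse from the three parameters described above.

    frames        -- full list of spherical frames (any objects)
    start, end    -- time portion of the duration (frame indices)
    sampling_rate -- keep every Nth frame within the portion
    speed         -- speed effect, applied as a repeat count
                     (speed < 1 slows playback by duplicating frames)
    """
    sampled = frames[start:end:sampling_rate]
    repeats = max(1, round(1 / speed))
    return [frame for frame in sampled for _ in range(repeats)]
```

In a full implementation the path information would additionally crop each sampled frame to the viewing window's extents at that point in the progress through the content.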
Creating multi-camera panoramic projections
One embodiment provides a method, including: obtaining, from each of two more panoramic cameras, panoramic video, wherein each of the two or more panoramic cameras are located at different physical locations within an event environment; compositing the panoramic video obtained from the two or more panoramic cameras into a single video; and streaming the composited panoramic video to one or more end users, wherein each of the one or more end users provide commands to manipulate the streamed composited panoramic video resulting in viewing of a different view of the streamed composited panoramic video for the corresponding end user based on the provided commands.