Patent classifications
H04N19/179
Method and System for Encoding a 3D Scene
A computer-implemented method for encoding a scene volume includes: (a) identifying features of a scene volume that are within a camera perspective range with respect to a default camera perspective; (b) converting the identified features into rendered features; and (c) sorting the rendered features into a plurality of scene layers, each including corresponding depth, color, and transparency maps for the respective rendered features. Further, (a), (b), and (c) may be repeated, operating on temporally ordered scene volumes, to produce and output a sequence encoding a video. Corresponding systems and non-transitory computer-readable media are disclosed for encoding a 3D scene and for decoding an encoded 3D scene. Efficient compression, transmission, and playback of video describing a 3D scene can be enabled, including for virtual reality displays with updates based on a changing perspective of the viewing user for variable-perspective playback.
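Step (c) above, sorting rendered features into layers that each carry depth, color, and transparency maps, can be sketched as follows. The feature representation, the layer boundaries, and the map sizes below are hypothetical choices made for illustration only; the patent does not specify them.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SceneLayer:
    depth: np.ndarray          # per-pixel depth map for this layer
    color: np.ndarray          # per-pixel RGB map
    transparency: np.ndarray   # per-pixel alpha map

def sort_into_layers(features, depth_edges, size=(4, 4)):
    """Bucket rendered features into scene layers by depth.

    `features` is a list of (depth, rgb, alpha, x, y) tuples and
    `depth_edges` gives the near/far boundary of each layer -- both
    are assumed representations chosen for this sketch.
    """
    h, w = size
    layers = [SceneLayer(np.zeros((h, w)),
                         np.zeros((h, w, 3)),
                         np.zeros((h, w)))
              for _ in range(len(depth_edges) - 1)]
    for d, rgb, a, x, y in features:
        for i in range(len(depth_edges) - 1):
            if depth_edges[i] <= d < depth_edges[i + 1]:
                layers[i].depth[y, x] = d
                layers[i].color[y, x] = rgb
                layers[i].transparency[y, x] = a
                break
    return layers
```

Repeating this over temporally ordered scene volumes would yield one set of layers per frame, forming the encoded video sequence.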
METHODS AND DEVICES FOR HIGH-LEVEL SYNTAX IN VIDEO CODING
Methods, apparatuses, and non-transitory computer-readable storage media are provided for decoding video signals. A decoder receives a bitstream that includes a sequence parameter set (SPS) for video data. The decoder obtains arranged partition constraint syntax elements at the SPS level, where the arranged partition constraint syntax elements include intra prediction related syntax elements and inter prediction related syntax elements and are arranged so that the intra prediction related syntax elements are defined before the inter prediction related syntax elements. The decoder then decodes the video data based on the arranged partition constraint syntax elements.
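The claimed arrangement, intra prediction related partition-constraint elements defined before inter prediction related ones, can be illustrated with a minimal parser sketch. The field names below are modeled on VVC-style SPS syntax elements but are not quoted from the claims; the layout is an assumption for illustration.

```python
# Illustrative SPS partition-constraint layout: intra-related syntax
# elements appear (and are parsed) before inter-related ones.
SPS_PARTITION_FIELDS = [
    ("log2_diff_min_qt_min_cb_intra_slice_luma", "intra"),
    ("max_mtt_hierarchy_depth_intra_slice_luma", "intra"),
    ("log2_diff_min_qt_min_cb_inter_slice", "inter"),
    ("max_mtt_hierarchy_depth_inter_slice", "inter"),
]

def parse_sps_partition_constraints(values):
    """Consume syntax-element values strictly in layout order, so the
    intra-related fields are always decoded before the inter-related ones."""
    it = iter(values)
    return {name: next(it) for name, _ in SPS_PARTITION_FIELDS}
```

Because the decoder consumes the bitstream in definition order, fixing the layout this way determines the parse order without any extra signaling.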
ENHANCED WI-FI SENSING MEASUREMENT SETUP AND SENSING TRIGGER FRAME FOR RESPONDER-TO-RESPONDER SENSING
This disclosure describes systems, methods, and devices related to responder-to-responder Wi-Fi sensing between station devices. A device may cause to send, during a trigger frame sounding phase of a responder-to-responder Wi-Fi sensing, a sensing responder-to-responder sounding trigger frame to the first station device and the second station device, the sensing responder-to-responder sounding trigger frame associated with causing the first station device to send a responder-to-responder null data packet (NDP) to the second station device; cause to send, during a reporting phase of the responder-to-responder Wi-Fi sensing, a sensing report trigger frame to the second station device; and identify, during the reporting phase, a sensing measurement report from the second station device based on the sensing report trigger frame, wherein the sensing measurement report is indicative of measurements of the responder-to-responder NDP.
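The two phases of the exchange, a trigger frame sounding phase and a reporting phase, can be sketched as a simple message sequence. All class and method names below are hypothetical stand-ins; real 802.11 frames carry far more state.

```python
class Station:
    """Minimal stand-in for an AP or responder STA (hypothetical API)."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def send_trigger(self, kind, to):
        self.log.append((kind, [s.name for s in to]))

    def send_ndp(self, to):
        return {"from": self.name, "to": to.name}

    def measure(self, ndp):
        return {"csi_of": ndp["from"]}   # channel measurement of the NDP

    def report(self, measurement):
        return {"reporter": self.name, **measurement}

def r2r_sensing_exchange(ap, r1, r2):
    # Trigger frame sounding phase: the R2R sounding trigger causes
    # responder 1 to send an NDP to responder 2, which measures it.
    ap.send_trigger("R2R-sounding", to=[r1, r2])
    ndp = r1.send_ndp(to=r2)
    measurement = r2.measure(ndp)
    # Reporting phase: a sensing report trigger elicits the measurement
    # report from responder 2.
    ap.send_trigger("sensing-report", to=[r2])
    return r2.report(measurement)
```

The point of the sketch is the ordering: the initiator never sounds the channel itself; it only triggers one responder to sound toward the other and then collects the resulting measurement.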
Optimized reduced bitrate encoding for titles and credits in video content
Embodiments include systems, methods, and computer-readable media for optimized reduced bitrate encoding for text-based content in video frames. Example methods may include determining that a first segment of video content includes a content scene, determining that a second segment of the video content includes text, and determining a first encoder configuration to encode the first segment of video content, where the first encoder configuration includes a first encoding parameter setting. Example methods may include determining a second encoder configuration to encode the second segment of the video content, where the second encoder configuration includes a second encoding parameter setting, encoding the first segment using the first encoder configuration, and encoding the second segment using the second encoder configuration. The first segment may be encoded at a first bitrate that is greater than a second bitrate at which the second segment is encoded.
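The core decision, a lower-bitrate encoder configuration for text segments than for content scenes, can be sketched as below. The parameter names and values are hypothetical examples, not the disclosed settings.

```python
def choose_encoder_config(segment):
    """Return an encoder configuration for a segment; the parameter names
    and numeric values here are illustrative assumptions only."""
    if segment["kind"] == "text":             # titles / credits
        return {"bitrate_kbps": 400, "crf": 30}
    return {"bitrate_kbps": 4000, "crf": 22}  # regular content scene

def plan_encoding(segments):
    # A real pipeline would hand each segment to the encoder with its
    # configuration; here we just pair segments with their configs.
    return [(seg["id"], choose_encoder_config(seg)) for seg in segments]
```

Since title and credit frames are mostly static text on flat backgrounds, they remain legible at a far lower bitrate, which is where the savings come from.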
COMBINED CONVEX HULL OPTIMIZATION
The disclosed computer-implemented method may include combining a first video sequence with a second video sequence to generate a combined video sequence. A video complexity of the first video sequence may differ from that of the second video sequence. The method may also include performing, using a baseline encoder, encoding parameter optimization on the combined video sequence to generate a baseline performance curve and performing, using a target encoder, encoding parameter optimization on the combined video sequence to generate a target performance curve. The method may further include analyzing the target encoder by comparing the target performance curve with the baseline performance curve, and generating a bitrate ladder for the target encoder based on the analysis, wherein the bitrate ladder includes desired bitrate-resolution pairs for encoding. Various other methods, systems, and computer-readable media are also disclosed.
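A performance curve here is a set of (bitrate, quality) operating points per resolution, and the ladder is drawn from the points that survive a Pareto/convex-hull filter. The sketch below uses a simplified monotone filter rather than a true convex hull, and the point format is an assumption.

```python
def pareto_front(points):
    """Keep (bitrate, quality, resolution) points where quality strictly
    increases with bitrate -- a simplified stand-in for the convex hull."""
    front = []
    for p in sorted(points):          # sorted by bitrate
        if not front or p[1] > front[-1][1]:
            front.append(p)
    return front

def bitrate_ladder(points, targets):
    """For each target bitrate, pick the highest-quality affordable point,
    yielding the desired bitrate-resolution pairs."""
    front = pareto_front(points)
    ladder = []
    for t in targets:
        affordable = [p for p in front if p[0] <= t]
        if affordable:
            best = max(affordable, key=lambda p: p[1])
            ladder.append((best[0], best[2]))
    return ladder
```

Running this once per encoder (baseline and target) over the same combined sequence yields two comparable curves, and the gap between them is the analysis the abstract describes.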
CONTENT ADAPTIVE ENCODING
The described technology is generally directed towards developing an adaptive bitrate stack (ladder) on a per-title basis. Variable bitrate encodings are used to obtain complexity information for a title and per-frame scores for the encodings; another encoding provides scene data. The complexity information is analyzed and processed based on the scene data to determine scene-based (e.g., objective and/or subjective quality) scores, which are used to determine scores for the encodings. The results are used to derive a candidate stack, comprising various resolutions and bitrates that provide desirable results. The candidate stack is evaluated by encoding the title using the candidate stack. These encodings are evaluated to select one resolution from any duplicate resolutions for a bitrate (e.g., based on relative quality), resulting in a pruned, final ladder that is associated with the title as the adaptive bitrate stack to be used for streaming that title's content.
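The final pruning step, keeping one resolution per bitrate based on relative quality, can be sketched directly. The tuple layout and quality scores are hypothetical.

```python
def prune_ladder(candidates):
    """candidates: (bitrate_kbps, resolution, quality_score) tuples, where
    the scoring is an assumed stand-in for the objective/subjective scores.
    When a bitrate appears with more than one resolution, keep only the
    higher-quality entry."""
    best = {}
    for bitrate, res, score in candidates:
        if bitrate not in best or score > best[bitrate][2]:
            best[bitrate] = (bitrate, res, score)
    return sorted(best.values())
```

The pruned result is the per-title ladder: one resolution per rung, ordered by bitrate.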
IMAGE DATA ENCODING METHOD AND APPARATUS, DISPLAY METHOD AND APPARATUS, AND ELECTRONIC DEVICE
An image data encoding method and apparatus (60) and an electronic device (80) are provided. The method includes: obtaining three-dimensional picture information of a first scene (101); and encoding the three-dimensional picture information of the first scene according to a preset file format, where the encoded three-dimensional picture information of the first scene includes file header information, index information, and data information, the index information is used to record reference point data corresponding to the three-dimensional picture information of the first scene, and the data information is used to record three-dimensional point data in the three-dimensional picture information of the first scene (102).
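The three-part layout, file header information, index information recording reference point data, and data information recording the 3D point data, can be sketched as a binary serializer. The magic bytes, field widths, and float encoding are assumptions; the patent's preset file format is not specified at this level of detail.

```python
import struct

MAGIC = b"3DPF"  # hypothetical magic for the preset file format

def encode_scene(reference_point, points):
    """Pack header, index (reference point), and data (3D points),
    mirroring the three-part layout described in the abstract."""
    index = struct.pack("<3f", *reference_point)
    data = b"".join(struct.pack("<3f", *p) for p in points)
    header = struct.pack("<4sII", MAGIC, len(index), len(data))
    return header + index + data

def decode_scene(blob):
    """Recover the reference point and 3D points from an encoded blob."""
    magic, index_len, data_len = struct.unpack_from("<4sII", blob, 0)
    assert magic == MAGIC
    off = struct.calcsize("<4sII")
    ref = struct.unpack_from("<3f", blob, off)
    pts_off = off + index_len
    point_size = struct.calcsize("<3f")
    pts = [struct.unpack_from("<3f", blob, pts_off + i * point_size)
           for i in range(data_len // point_size)]
    return ref, pts
```

Recording lengths in the header lets a decoder locate the index and data sections without scanning, which is the practical role of file header information in such a format.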
Method for image processing and apparatus for implementing the same
A method of processing an image includes: determining estimates of parameters of an auto-regressive (AR) parametric model of noise contained in the image, according to which a current noise pixel is computed as a linear combination of P previous noise pixels in a causal neighborhood of the current noise pixel, weighted by respective AR model linear combination parameters (φ_1, …, φ_P), combined with a generated noise sample corresponding to an additive Gaussian noise of AR model variance parameter (σ); generating a noise template of noise pixels based on the estimated AR model parameters, wherein the noise template is of a predetermined pixel size smaller than the pixel size of the image; determining an estimate (σ_P) of a variance of the noise template; and, based on a comparison of the estimated variance (σ_P) with a predetermined threshold (T_σ), correcting the AR model variance parameter (σ).
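The AR noise generation and the variance-based correction can be sketched as follows. For brevity the sketch is 1D (the method operates on a 2D causal neighborhood), and the proportional correction rule is an assumption; the patented correction may differ.

```python
import numpy as np

def generate_noise_template(phi, sigma, size, rng=None):
    """Generate a 1D AR(P) noise template: each sample is a linear
    combination of the P previous samples (weights phi = (φ_1, …, φ_P))
    plus additive Gaussian noise of standard deviation sigma (σ)."""
    rng = rng or np.random.default_rng(0)
    P = len(phi)
    n = np.zeros(size + P)                 # P leading zeros as warm-up
    for t in range(P, size + P):
        n[t] = np.dot(phi, n[t - P:t][::-1]) + rng.normal(0.0, sigma)
    return n[P:]

def correct_variance(phi, sigma, size, threshold):
    """Estimate the template variance (σ_P) and, if it exceeds the
    threshold (T_σ), scale sigma down -- a simple proportional rule
    assumed here for illustration."""
    template = generate_noise_template(phi, sigma, size)
    var_p = template.var()
    if var_p > threshold:
        sigma *= np.sqrt(threshold / var_p)
    return sigma
```

For an AR(1) model with φ_1 = 0.5 the stationary variance is σ²/(1 − φ_1²) ≈ 1.33σ², so with σ = 1 and a threshold of 0.1 the correction reduces σ.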
LOAD BALANCING METHOD FOR VIDEO DECODING IN A SYSTEM PROVIDING HARDWARE AND SOFTWARE DECODING RESOURCES
A load balancing method for video decoding. The load balancing includes first determining which hardware devices are suitable for the new decoding process, and determining the current load of each of the suitable hardware devices. From the suitable devices, potential devices are selected having a current load less than a threshold, and overloaded devices are selected having a load greater than or equal to the threshold. If there are no suitable devices, the decoding process is implemented by software decoding. If the list of potential hardware devices includes only one potential hardware device, the decoding process is implemented on that hardware device. If the list includes more than one potential hardware device, it is determined how many decoding processes are currently running on each potential hardware device, and the new decoding process is implemented on the potential hardware device having the fewest processes.
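The decision cascade above maps directly to code. The device fields below are hypothetical, and the fallback when every suitable device is overloaded is an assumption (the abstract does not state that case); software decoding is used here.

```python
def pick_decoder(devices, load_threshold):
    """devices: dicts with hypothetical 'suitable', 'load', and 'processes'
    fields. Returns the chosen device dict, or 'software'."""
    suitable = [d for d in devices if d["suitable"]]
    if not suitable:
        return "software"          # no suitable hardware: software decoding
    potential = [d for d in suitable if d["load"] < load_threshold]
    if len(potential) == 1:
        return potential[0]        # single candidate: use it directly
    if len(potential) > 1:
        # Tie-break on the number of decoding processes currently running.
        return min(potential, key=lambda d: d["processes"])
    # All suitable devices are overloaded -- assumed fallback to software.
    return "software"
```

Using the running-process count as the final tie-breaker spreads new streams across equally loaded devices instead of piling them onto one.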