Patent classifications
G06T9/00
2D UV ATLAS SAMPLING BASED METHODS FOR DYNAMIC MESH COMPRESSION
Method, apparatus, and system for sampling-based dynamic mesh compression are provided. The process may include determining one or more sample positions associated with an input mesh based on one or more sampling rates, and an occupancy status associated respectively with each of the one or more sample positions indicating whether each of the one or more sample positions is within boundaries of one or more polygons defined by the input mesh is determined. The process may include generating a sample-based occupancy map based on the occupancy status associated respectively with each of the one or more sample positions.
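The core loop the abstract describes can be sketched as follows, assuming the mesh's UV atlas is given as 2D triangles and that sample positions lie on a regular grid at the chosen sampling rate; all names here are illustrative, not from the patent.

```python
# Sketch of sample-based occupancy map generation for a 2D UV atlas.
def edge_sign(p, a, b):
    """Signed area test: which side of edge a->b the point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, tri):
    """True if p is inside (or on the boundary of) triangle tri."""
    a, b, c = tri
    d1, d2, d3 = edge_sign(p, a, b), edge_sign(p, b, c), edge_sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def occupancy_map(triangles, width, height, sampling_rate=1):
    """Mark each sample position occupied if it lies inside any mesh polygon."""
    cols, rows = width // sampling_rate, height // sampling_rate
    occ = [[0] * cols for _ in range(rows)]
    for j in range(rows):
        for i in range(cols):
            p = (i * sampling_rate, j * sampling_rate)
            if any(point_in_triangle(p, t) for t in triangles):
                occ[j][i] = 1
    return occ
```

A coarser sampling rate shrinks the occupancy map at the cost of boundary precision, which is the trade-off the "one or more sampling rates" language leaves open.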
Coding Blocks of Pixels
A method and decoding unit for decoding a compressed data structure that encodes a set of Haar coefficients for a 2×2 quad of pixels of a block of pixels. The set of Haar coefficients comprises a plurality of differential coefficients and an average coefficient. A first portion of the compressed data structure encodes the differential coefficients for the 2×2 quad of pixels. A second portion of the compressed data structure encodes the average coefficient for the 2×2 quad of pixels. The first portion of the compressed data structure is used to determine signs and exponents of the differential coefficients which are non-zero. The second portion of the compressed data structure is used to determine a representation of the average coefficient. The result of a weighted sum of the differential coefficients and the average coefficient for the 2×2 quad of pixels is determined using: (i) the determined signs and exponents for the differential coefficients which are non-zero, (ii) the determined representation of the average coefficient, and (iii) respective weights for the differential coefficients. The determined result is used to determine the decoded value, which is outputted.
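The weighted-sum reconstruction can be sketched as below, assuming the standard 2D Haar decomposition of a 2×2 quad into an average plus horizontal, vertical, and diagonal differentials with ±1 per-pixel weights; the entropy-decoding of signs and exponents from the compressed portions is omitted.

```python
# Per-pixel weights for the (horizontal, vertical, diagonal) differentials,
# indexed by (row, col) within the 2x2 quad.
WEIGHTS = {
    (0, 0): (+1, +1, +1),
    (0, 1): (-1, +1, -1),
    (1, 0): (+1, -1, -1),
    (1, 1): (-1, -1, +1),
}

def decode_pixel(avg, diffs, row, col):
    """Weighted sum of the average coefficient and the differentials."""
    wh, wv, wd = WEIGHTS[(row, col)]
    dh, dv, dd = diffs
    return avg + wh * dh + wv * dv + wd * dd

def decode_quad(avg, diffs):
    """Reconstruct all four pixel values of the 2x2 quad."""
    return [[decode_pixel(avg, diffs, r, c) for c in (0, 1)] for r in (0, 1)]
```

Because most differentials are zero in smooth image regions, encoding only the non-zero signs and exponents is where the compression gain comes from.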
DEEP PALETTE PREDICTION
Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
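The quantization step that the trained encoder's palette feeds can be sketched as follows; this shows only the nearest-color mapping used to produce the reduced color depth image, not the ANN that predicts the palette, and all names are illustrative.

```python
# Sketch of palette quantization: map each RGB pixel to its nearest
# palette entry in squared Euclidean distance.
def nearest_index(pixel, palette):
    """Index of the palette color closest to pixel."""
    return min(
        range(len(palette)),
        key=lambda i: sum((p - q) ** 2 for p, q in zip(pixel, palette[i])),
    )

def quantize(image, palette):
    """Replace each pixel with its nearest palette color, yielding the
    reduced color depth image that is compared against the input during
    training of the palette-predicting encoder."""
    return [[palette[nearest_index(px, palette)] for px in row] for row in image]
```

During training, the per-pixel difference between `image` and `quantize(image, palette)` would drive the loss; a differentiable soft-assignment would typically stand in for the hard `min` above.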
CODING SCHEME FOR VIDEO DATA USING DOWN-SAMPLING/UP-SAMPLING AND NON-LINEAR FILTER FOR DEPTH MAP
Methods of encoding and decoding video data are provided. In an encoding method, source video data comprising one or more source views is encoded into a video bitstream. Depth data of at least one of the source views is nonlinearly filtered and downsampled prior to encoding. After decoding, the decoded depth data is up-sampled and nonlinearly filtered.
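The pre- and post-processing chain can be sketched in 1D as below, assuming a median filter as the non-linear filter (the abstract only says "nonlinearly filtered") and factor-2 nearest-neighbour resampling; real depth maps are 2D, but the structure is the same.

```python
# Sketch of the depth-map pre/post-processing around the video codec.
def median_filter_1d(depth, radius=1):
    """Median over a sliding window. Unlike a linear low-pass filter, the
    median preserves sharp depth edges instead of blurring foreground
    depth into background depth."""
    out = []
    for i in range(len(depth)):
        lo, hi = max(0, i - radius), min(len(depth), i + radius + 1)
        window = sorted(depth[lo:hi])
        out.append(window[len(window) // 2])
    return out

def downsample(depth, factor=2):
    return depth[::factor]

def upsample(depth, factor=2):
    return [v for v in depth for _ in range(factor)]

def preprocess(depth):
    """Encoder side: non-linear filter, then downsample before encoding."""
    return downsample(median_filter_1d(depth))

def postprocess(depth):
    """Decoder side: upsample the decoded depth, then filter again."""
    return median_filter_1d(upsample(depth))
```

The motivation for a non-linear filter here is that intermediate depth values created by linear resampling correspond to non-existent surfaces between foreground and background, which produce visible artifacts when views are synthesized.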
METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT FOR DETECTING IMAGE FRAME LOSS
An image frame loss detection method is performed by a computer device, including: acquiring first coded data respectively corresponding to a plurality of first image frames and a color signal corresponding to at least one second image frame; obtaining second coded data corresponding to at least one second image frame generated by a terminal device through image rendering of a color signal based on the coded data respectively corresponding to the plurality of first image frames; and comparing the first coded data respectively corresponding to the plurality of first image frames with the second coded data corresponding to the at least one second image frame to determine whether a frame loss occurs. The first coded data and the second coded data each include color-coded data respectively corresponding to M image blocks of a corresponding image frame, and each of the M image blocks has a color in the image frame.
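The comparison step can be sketched as below, assuming each frame is summarized by a color code per block (here, the mean value of each of M blocks, which is one illustrative choice of color coding); a frame whose code never appears in the rendered output is reported as lost.

```python
# Sketch of frame loss detection by comparing per-block color codes.
def block_codes(frame, m):
    """Split a flat list of pixel values into m blocks and code each
    block by its mean value."""
    size = len(frame) // m
    return tuple(
        sum(frame[i * size:(i + 1) * size]) // size for i in range(m)
    )

def detect_frame_loss(expected_frames, rendered_frames, m):
    """Return the codes of expected frames absent from the rendered
    output, i.e. the frames that were lost."""
    rendered = {block_codes(f, m) for f in rendered_frames}
    return [block_codes(f, m) for f in expected_frames
            if block_codes(f, m) not in rendered]
```

Coding each frame by a handful of block colors keeps the comparison cheap enough to run alongside rendering, which is the point of comparing coded data rather than full frames.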
METHOD FOR COMPRESSING A SEQUENCE OF IMAGES DISPLAYING SYNTHETIC GRAPHICAL ELEMENTS OF NON-PHOTOGRAPHIC ORIGIN
Method for compressing a sequence of images comprising a first image and a second image, the method comprising the steps of: generating a first descriptor comprising parameters for displaying a computer-generated graphical element in the first image, the graphical element being of non-photographic origin, and the display parameters not comprising pixel values; processing the second image so as to determine an event which gave rise to a potential variation in the parameters for displaying the graphical element between the first image and the second image; generating a second descriptor comprising an event code indicating the determined event.
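The two descriptors can be sketched as follows, using a hypothetical event vocabulary (MOVED, UNCHANGED, REMOVED) and hypothetical parameter names; the abstract specifies only that the first descriptor holds display parameters without pixel values and the second holds an event code.

```python
# Sketch of descriptor-based compression for a synthetic on-screen element.

# First descriptor: parameters for displaying the graphical element
# (no pixel values).
first_descriptor = {
    "element_id": "score_overlay",
    "position": (20, 40),
    "scale": 1.0,
    "text": "0 - 0",
}

def detect_event(first_params, second_params):
    """Compare display parameters between the two images and return an
    event code describing what changed."""
    if second_params is None:
        return "REMOVED"
    if second_params["position"] != first_params["position"]:
        return "MOVED"
    return "UNCHANGED"

def second_descriptor(first_params, second_params):
    """Second descriptor: only an event code, far smaller than either a
    full parameter set or the pixels of the second image."""
    return {"element_id": first_params["element_id"],
            "event": detect_event(first_params, second_params)}
```

Sending an event code instead of re-encoding the element's pixels is what makes this effective for synthetic graphics, whose appearance is fully determined by a few parameters.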
NOISE SYNTHESIS FOR DIGITAL IMAGES
Apparatus and methods for providing software and hardware based solutions to the problem of synthesizing noise for a digital image. According to one aspect, a probability image is generated and noise blocks are randomly placed at locations in the probability image where the locations have probability values that are compared to a threshold criterion, creating a synthesized noise image. Embodiments include generating synthesized film grain images and synthesized digital camera noise images.
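The placement loop can be sketched as below, assuming a uniform random probability image and a fixed noise block; details such as block blending and grain shaping for film versus digital camera noise are simplified away.

```python
import random

# Sketch of probability-image-driven noise synthesis: a noise block is
# placed wherever the probability value at a location exceeds the
# threshold criterion; elsewhere the synthesized noise image stays zero.
def synthesize_noise(width, height, block, threshold=0.9, seed=0):
    rng = random.Random(seed)
    noise = [[0.0] * width for _ in range(height)]
    bh, bw = len(block), len(block[0])
    for y in range(height - bh + 1):
        for x in range(width - bw + 1):
            # rng.random() plays the role of the probability image value
            # at this location.
            if rng.random() > threshold:
                for j in range(bh):
                    for i in range(bw):
                        noise[y + j][x + i] += block[j][i]
    return noise
```

Raising the threshold yields sparser grain; the synthesized noise image would then be added to the clean decoded image to restore a film-like or camera-like appearance.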