Patent classifications
H04N19/42
Guaranteed data compression
A method of converting 10-bit pixel data (e.g. 10:10:10:2 data) into 8-bit pixel data involves converting the 10-bit values to 7-bit or 8-bit values and generating error values for each of the converted values. Two of the 8-bit output channels comprise a combination of a converted 7-bit value and one of the bits from the fourth input channel. A third 8-bit output channel comprises the converted 8-bit value, and the fourth 8-bit output channel comprises the error values. In various examples, the bits of the error values may be interleaved when they are packed into the fourth output channel.
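The abstract's bit accounting works out neatly: two channels lose 3 bits each and one loses 2 bits, so the 3+3+2 = 8 error bits exactly fill the fourth output byte. The sketch below illustrates one possible layout; the exact bit positions (and the non-interleaved error packing) are assumptions, not taken from the patent.

```python
def pack_1010102_to_8888(r10, g10, b10, a2):
    """Sketch of the described packing; the exact bit layout is an assumption.
    R and G are reduced to 7 bits (3-bit error each), B to 8 bits (2-bit
    error), and the two alpha bits ride along in the R and G output bytes."""
    r7, er = r10 >> 3, r10 & 0x7
    g7, eg = g10 >> 3, g10 & 0x7
    b8, eb = b10 >> 2, b10 & 0x3
    out0 = (r7 << 1) | (a2 & 0x1)         # 7-bit R value + low alpha bit
    out1 = (g7 << 1) | ((a2 >> 1) & 0x1)  # 7-bit G value + high alpha bit
    out2 = b8                             # 8-bit B value
    out3 = (er << 5) | (eg << 2) | eb     # 3+3+2 error bits (non-interleaved)
    return out0, out1, out2, out3

def unpack_8888_to_1010102(out0, out1, out2, out3):
    """Inverse of the packing above: the error bits restore full precision."""
    er, eg, eb = out3 >> 5, (out3 >> 2) & 0x7, out3 & 0x3
    r10 = ((out0 >> 1) << 3) | er
    g10 = ((out1 >> 1) << 3) | eg
    b10 = (out2 << 2) | eb
    a2 = ((out1 & 0x1) << 1) | (out0 & 0x1)
    return r10, g10, b10, a2
```

Because every dropped bit is preserved in the error channel, the round trip is lossless, which is presumably what makes the compression "guaranteed".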
IMAGE COMPRESSION METHOD AND APPARATUS, IMAGE DISPLAY METHOD AND APPARATUS, AND MEDIUM
The present disclosure provides an image compression method, including steps of: acquiring a human-eye fixation point on an original image, and determining a fixation region and a non-fixation region of the original image according to the human-eye fixation point; and compressing the non-fixation region, and generating a compressed image according to the fixation region and the compressed non-fixation region. The present disclosure also provides an image display method, an image compression apparatus, an image display apparatus, and a computer readable medium.
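A minimal sketch of the fixation/non-fixation split described above, using block averaging as a stand-in for whatever real codec compresses the non-fixation region. The circular fixation region, block size, and function name are all illustrative assumptions.

```python
def foveated_compress(img, fx, fy, radius, block=2):
    """Sketch: keep pixels near the fixation point (fx, fy) untouched;
    elsewhere, replace each block with its mean value (a stand-in for
    any real compression of the non-fixation region)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cy, cx = by + block // 2, bx + block // 2
            if (cx - fx) ** 2 + (cy - fy) ** 2 <= radius ** 2:
                continue  # block lies in the fixation region: leave it intact
            cells = [(y, x) for y in range(by, min(by + block, h))
                            for x in range(bx, min(bx + block, w))]
            mean = sum(img[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = mean
    return out
```

The compressed image is then the composition of the untouched fixation region and the coarsened surround, matching the abstract's final generation step.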
Extensions of inter prediction with geometric partitioning
A method for processing a video includes performing a determination, by a processor, that a first video block is partitioned to include a first prediction portion that is non-rectangular and non-square; adding a first motion vector (MV) prediction candidate associated with the first prediction portion to a motion candidate list associated with the first video block, wherein the first MV prediction candidate is derived from a sub-block MV prediction candidate; and performing further processing of the first video block using the motion candidate list.
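The list-construction step can be pictured with a toy candidate list. The derivation rule below (averaging the sub-block MVs) and the pruning policy are hypothetical; a real codec specifies these precisely, and the abstract only says the candidate is "derived from" a sub-block MV prediction candidate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MV:
    """A motion vector in (x, y) integer units."""
    x: int
    y: int

def derive_from_subblocks(sub_mvs):
    # Hypothetical derivation rule: average the sub-block MVs (rounded down).
    n = len(sub_mvs)
    return MV(sum(m.x for m in sub_mvs) // n, sum(m.y for m in sub_mvs) // n)

def add_candidate(cand_list, mv, max_len=6):
    # Add with duplicate pruning, as merge-list construction typically does.
    if mv not in cand_list and len(cand_list) < max_len:
        cand_list.append(mv)
    return cand_list
```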
METHOD AND APPARATUS FOR DYNAMIC LEARNING RATES OF SUBSTITUTION IN NEURAL IMAGE COMPRESSION
Neural network based substitutional end-to-end (E2E) image compression (NIC) is performed by at least one processor and includes receiving an input image to an E2E NIC framework, determining a step size of the input image indicating a learning rate of a training model, determining a substitute image based on the training model, encoding the substitute image in lieu of the input image to generate a bitstream, and mapping the substitute image to the bitstream to generate a compressed representation. Further, the step size may be determined by a scheduler and may change throughout the training of the training model. The image may also be split into patches, with a scheduler assigned to each patch, and each patch is encoded instead of the entire input image.
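The scheduler's role can be shown with a toy optimization loop: each iteration's step size comes from a schedule rather than being fixed. The exponential-decay schedule and the smooth surrogate loss stand in for the real rate-distortion objective of an E2E NIC model; both are assumptions for illustration.

```python
def step_size_scheduler(t, base=0.5, decay=0.8):
    # Hypothetical schedule: the step size decays exponentially over iterations.
    return base * decay ** t

def optimize_substitute(x0, grad, steps=25):
    """Gradient descent where each step's size is supplied by the scheduler,
    standing in for the per-image substitute optimization in the abstract."""
    x = x0
    for t in range(steps):
        x -= step_size_scheduler(t) * grad(x)
    return x
```

In the patched variant, one such loop would run per patch, each with its own scheduler, and the optimized patches would be encoded in place of the full image.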
Video decoding implementations for a graphics processing unit
Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.
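The "intra prediction using waves" and "loop filtering using waves" items refer to wavefront parallelism: blocks on the same anti-diagonal depend only on blocks from earlier diagonals, so each wave can run in parallel on the GPU. A minimal sketch of that scheduling (the grid/ordering details are illustrative, not from the patent):

```python
def wavefront_schedule(rows, cols):
    """Group a rows x cols block grid into anti-diagonal 'waves'. Every block's
    left and top neighbours fall in earlier waves, so all blocks within one
    wave can be processed concurrently (e.g. intra prediction on a GPU)."""
    waves = [[] for _ in range(rows + cols - 1)]
    for y in range(rows):
        for x in range(cols):
            waves[x + y].append((y, x))
    return waves
```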
LOAD BALANCING METHOD FOR VIDEO DECODING IN A SYSTEM PROVIDING HARDWARE AND SOFTWARE DECODING RESOURCES
A load balancing method for video decoding. The method first determines which hardware devices are suitable for the new decoding process and the current load of each suitable device. From the suitable devices, potential devices are those with a current load less than a threshold, and overloaded devices are those with a load greater than or equal to the threshold. If there are no suitable devices, the decoding process is implemented by software decoding. If the list of potential hardware devices includes only one potential hardware device, the decoding process is implemented on that hardware device. If the list includes more than one potential hardware device, the number of decoding processes currently running on each potential hardware device is determined, and the new decoding process is implemented on the potential hardware device having the fewest processes.
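The selection logic above can be sketched directly. The device-record shape and names are assumptions; the abstract does not state what happens when suitable devices exist but are all overloaded, so the software fallback in that branch is also an assumption.

```python
def choose_decoder(devices, threshold):
    """Sketch of the described selection. `devices` maps a device name to a
    dict with 'suitable' (bool), 'load' (float), and 'processes' (int);
    this record shape is illustrative, not from the patent."""
    suitable = {n: d for n, d in devices.items() if d['suitable']}
    if not suitable:
        return 'software'  # no suitable hardware: decode in software
    potential = [n for n, d in suitable.items() if d['load'] < threshold]
    if len(potential) == 1:
        return potential[0]  # exactly one candidate: use it
    if len(potential) > 1:
        # Several candidates: pick the one running the fewest decode processes.
        return min(potential, key=lambda n: suitable[n]['processes'])
    return 'software'  # all suitable devices overloaded (assumed fallback)
```

For example, with two under-threshold GPUs the one with fewer running decode processes wins, and an empty device list falls through to software decoding.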