Patent classifications
H04N19/65
IMAGE SENSOR MODULE, IMAGE PROCESSING SYSTEM, AND IMAGE COMPRESSION METHOD
Provided are an image sensor module, an image processing device, and an image compression method. The image compression method includes receiving pixel values of a target pixel group to be compressed in image data and reference values of reference pixels to be used for compression of the target pixel group, generating a virtual reference map by applying an offset value to each of the reference values, compressing the pixel values of the target pixel group based on the virtual reference map, and generating a bitstream based on a compression result and compression information based on the virtual reference map.
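The claimed flow can be sketched roughly as follows. This is a minimal illustration, not the patent's actual scheme: the function names, the one-to-one pairing of target and reference pixels, and the plain-residual "compression result" are all assumptions, since the abstract does not specify the predictor or entropy coding.

```python
def compress_group(target_pixels, reference_values, offset):
    """Sketch: offset the reference values to form a virtual reference map,
    predict target pixels against it, and bundle residuals with the
    compression information (here, just the offset)."""
    virtual_map = [r + offset for r in reference_values]  # virtual reference map
    # Predict each target pixel from its paired virtual reference value
    # (a simple pairing; the abstract leaves the predictor unspecified).
    residuals = [t - v for t, v in zip(target_pixels, virtual_map)]
    # "Bitstream" = compression result (residuals) + compression information (offset).
    return {"residuals": residuals, "offset": offset}

def decompress_group(bitstream, reference_values):
    """Inverse of compress_group: rebuild the same virtual reference map
    from the signaled offset and add back the residuals."""
    virtual_map = [r + bitstream["offset"] for r in reference_values]
    return [res + v for res, v in zip(bitstream["residuals"], virtual_map)]
```

The round trip is lossless in this toy form; a real codec would quantize or entropy-code the residuals.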
REDUCING DROPPED FRAMES IN IMAGE CAPTURING DEVICES
Methods, systems, and devices for reducing dropped frames in image capturing devices are described. The method includes receiving, from an optical sensor of the image capturing device, a batch of frames, determining, by a hardware layer or a software layer of the image capturing device, that an error condition exists in relation to the batch of frames, determining, by the hardware layer, a numerical quantity of frames of the batch of frames in a frame buffer based on determining the error condition exists, and sending, by the hardware layer, the determined quantity of frames to the software layer of the image capturing device for processing by the software layer.
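A hypothetical model of the described hardware/software handoff is sketched below. The class and method names are invented for illustration; the abstract only says that on an error condition the hardware layer counts the frames already in the frame buffer and forwards exactly that many to the software layer rather than dropping them.

```python
from collections import deque

class CaptureSession:
    """Toy model: the 'hardware layer' buffers incoming frames; on an
    error condition it determines the numerical quantity of buffered
    frames and sends that many to the 'software layer'."""
    def __init__(self):
        self.frame_buffer = deque()

    def receive_batch(self, frames, error):
        self.frame_buffer.extend(frames)
        if error:
            # Determine how many frames are currently buffered...
            n = len(self.frame_buffer)
            # ...and hand exactly that many over for software processing,
            # so none of them are dropped.
            return [self.frame_buffer.popleft() for _ in range(n)]
        return []  # no error: frames stay buffered for the normal path
```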
Decoding method and apparatuses with candidate motion vectors
An image coding method includes: determining a maximum number of merging candidates, each of which is a combination of a prediction direction, a motion vector, and a reference picture index for use in coding of a current block; deriving a first merging candidate; determining whether or not the total number of first merging candidates is smaller than the maximum number; deriving a second merging candidate when it is determined that the total number of first merging candidates is smaller than the maximum number; selecting a merging candidate for use in the coding of the current block from the first merging candidate and the second merging candidate; and coding, using the maximum number, an index for identifying the selected merging candidate, and attaching the coded index to the bitstream.
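The candidate-list construction described above can be sketched as follows. This is an HEVC-style simplification under stated assumptions: the "second" candidates are drawn from an infinite generator (e.g. zero-vector candidates), and candidate derivation itself is elided.

```python
import itertools

def build_merge_list(first_candidates, max_num, second_candidate_source):
    """Sketch of the claimed construction: take the first merging
    candidates, and only while their total is smaller than the fixed
    maximum, derive second candidates to pad the list. Because the list
    length is always max_num, the selected index can be coded with a
    fixed-size code derived from max_num."""
    merge_list = list(first_candidates)[:max_num]
    while len(merge_list) < max_num:
        merge_list.append(next(second_candidate_source))
    return merge_list

# Usage: two first candidates, maximum of five, zero-vector padding.
zeros = itertools.repeat("ZERO_MV")
candidates = build_merge_list(["SPATIAL_A", "TEMPORAL_B"], 5, zeros)
```

Padding to a fixed maximum is what lets encoder and decoder agree on the index code length without signaling the actual candidate count.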
Systems and methods for selecting resolutions for content optimized encoding of video data
A disclosed computer-implemented method may include receiving a media item for encoding via a content optimized encoding algorithm. The method may also include determining, in accordance with the content optimized encoding algorithm, an overall error model for the media item. The overall error model may include (1) a rate-distortion model for the media item, and (2) a downsampling-distortion model for the media item. The method may also include determining, based on the overall error model, a bitrate cost associated with streaming of the media item within a bitrate lane. The method may further include adjusting the overall error model based on the bitrate cost associated with streaming of the media item within the bitrate lane and encoding the media item for streaming within the bitrate lane based on the adjusted overall error model.
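The two-part error model lends itself to a small sketch. The toy sub-models below are assumptions purely for illustration (the abstract does not give the model forms): rate-distortion error that grows with encode resolution at a fixed lane bitrate, plus downsampling distortion that shrinks as resolution increases.

```python
def overall_error(resolution, bitrate, rd_model, ds_model):
    """Combine the two sub-models from the abstract: (1) rate-distortion
    error at the encode resolution/bitrate, and (2) downsampling
    distortion from shrinking the source to that resolution."""
    return rd_model(resolution, bitrate) + ds_model(resolution)

def pick_resolution(resolutions, lane_bitrate, rd_model, ds_model):
    """Choose the encode resolution minimizing overall error for one
    bitrate lane (a hypothetical use of the adjusted model)."""
    return min(resolutions,
               key=lambda r: overall_error(r, lane_bitrate, rd_model, ds_model))
```

With such a model, low bitrate lanes naturally favor lower resolutions (downsampling loss is outweighed by coding loss), which is the usual motivation for per-lane resolution selection.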
Apparatus, a method and a computer program for video coding and decoding
A method comprising: deriving a first prediction block (608) at least partly based on an output of a neural net (602) using a first set of parameters; deriving a first encoded prediction error block (614-620) through encoding a difference of the first prediction block and a first input block; encoding (620) the first encoded prediction error block into a bitstream; deriving a first reconstructed prediction error block (624) from the first encoded prediction error block; deriving a training signal (628) from one or both of the first encoded prediction error block and the first reconstructed prediction error block (624); retraining (630) the neural net (602) with the training signal (628) to obtain a second set of parameters for the neural net (602); deriving a second prediction block (608) at least partly based on an output of the neural net using the second set of parameters; deriving a second encoded prediction error block (614-620) through encoding a difference of the second prediction block and a second input block; and encoding (620) the second encoded prediction error block into a bitstream. The invention relates to image or video encoding or decoding, especially by online training a neural network (602) that is in the prediction loop.
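The online-training loop can be illustrated with a deliberately simplified stand-in for the neural net: a single learnable weight updated from the reconstructed prediction error. Everything here (the linear predictor, the learning rate, the gradient step) is an assumption for illustration; the patent concerns a real neural network in the prediction loop.

```python
class OnlinePredictor:
    """Toy stand-in for the in-loop neural net: predicts a block as a
    learnable scale of a context signal. Retraining on the prediction
    error mirrors the abstract's online-training step; because the error
    is derivable at the decoder too, both sides can update in lockstep."""
    def __init__(self, weight=1.0, lr=0.01):
        self.weight = weight  # the "set of parameters", reduced to one scalar
        self.lr = lr

    def predict(self, context):
        return [self.weight * c for c in context]

    def retrain(self, context, reconstructed_error):
        # One gradient step minimizing squared prediction error
        # (x - w*c)^2, i.e. w += lr * error * context.
        for c, e in zip(context, reconstructed_error):
            self.weight += self.lr * e * c
```

After retraining, the second prediction uses the updated parameters, so later residuals tend to be smaller and cheaper to encode.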