H04N19/194

Method and device for encoding or decoding image

An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
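The pipeline in this abstract can be sketched end to end. Below is a minimal, illustrative Python model (all names, block sizes, and the uniform quantization step are my assumptions, not the patent's method): a 4x4 block is dequantized, a secondary inverse transform is applied only to the 2x2 low-frequency corner, and then a core inverse transform (orthonormal DCT-II here, purely for illustration) is applied to the whole block.

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis; for an orthonormal T the inverse is T^T."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(r) for r in zip(*m)]

def inverse_2d(block, t):
    # Inverse of Y = T X T^T for orthonormal T is X = T^T Y T.
    return mat_mul(mat_mul(transpose(t), block), t)

def decode_block(quantized, qstep, n=4, low=2):
    # 1) Inverse quantization (uniform step, illustrative only).
    deq = [[c * qstep for c in row] for row in quantized]
    # 2) Secondary inverse transform on the low-frequency corner only.
    corner = inverse_2d([row[:low] for row in deq[:low]], dct_matrix(low))
    for i in range(low):
        deq[i][:low] = corner[i]
    # 3) Core inverse transform on the whole block.
    return inverse_2d(deq, dct_matrix(n))
```

Because both toy transforms are orthonormal, running the matching forward steps (core transform, secondary transform on the corner, division by the step) and then `decode_block` recovers the original block up to floating-point error.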

Region-based image compression and decompression

An apparatus for encoding an image and an apparatus for decoding an image are presented. An image contains one or more regions. For encoding the image, the image is decomposed into one or more regions, and a region is evaluated to determine whether it meets a predetermined compression acceptability criterion. The region is then encoded in response to the transformed and quantized region meeting the predetermined compression acceptability criterion. For decoding the image, a region of the image is selected and the selected region is decoded using metadata associated with the selected region. The metadata includes transformation quantization settings and information describing an aspect ratio used to compress the region.
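A toy version of this encode/decode loop can make the flow concrete. The sketch below is illustrative only (the tiling, the step ladder, the max-error criterion, and the metadata fields are all invented stand-ins, not the patented scheme): each region is quantized with progressively finer steps until it meets an acceptability criterion, and per-region metadata carries the settings the decoder needs.

```python
def decompose(image, size):
    """Split a 2-D image (list of rows) into square regions with origins."""
    h, w = len(image), len(image[0])
    return [((y, x), [row[x:x + size] for row in image[y:y + size]])
            for y in range(0, h, size) for x in range(0, w, size)]

def quantize(region, step):
    return [[round(v / step) for v in row] for row in region]

def dequantize(region, step):
    return [[v * step for v in row] for row in region]

def meets_criterion(region, recon, max_err):
    """Acceptability criterion: every reconstructed pixel within max_err."""
    return all(abs(a - b) <= max_err
               for ra, rb in zip(region, recon) for a, b in zip(ra, rb))

def encode(image, size=4, max_err=2):
    encoded = []
    for origin, region in decompose(image, size):
        for step in (8, 4, 2, 1):           # coarse to fine; step 1 always passes
            q = quantize(region, step)
            if meets_criterion(region, dequantize(q, step), max_err):
                break
        # Metadata stands in for the abstract's "transformation quantization
        # settings" and aspect-ratio information.
        meta = {"origin": origin, "step": step,
                "aspect": (len(region[0]), len(region))}
        encoded.append((meta, q))
    return encoded

def decode_region(meta, q):
    return dequantize(q, meta["step"])
```

A decoder can pick any single `(meta, q)` pair and reconstruct just that region, which mirrors the selective per-region decoding the abstract describes.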

GENERATING ADAPTIVE DIGITAL VIDEO ENCODINGS BASED ON DOWNSCALING DISTORTION OF DIGITAL VIDEO CONTENT
20230096744 · 2023-03-30

Methods, systems, and non-transitory computer readable storage media are disclosed for two-phase encoding a digital video based on downsampling distortion of the digital video and a constant rate factor transition threshold. For example, the disclosed system can determine a downsampling distortion indicating a measure of distortion resulting from downsampling an input digital video. The disclosed systems can utilize the downsampling distortion to determine a constant rate factor transition threshold for selecting sets of encoding parameters. For example, the disclosed systems can select a first set of encoding parameters below the constant rate factor transition threshold and a second set of encoding parameters at or above the constant rate factor transition threshold. Additionally, the disclosed systems can generate first and second sets of digital video encodings of the input digital video by utilizing the first and second sets of encoding parameters, respectively.
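The selection logic can be sketched in a few lines. In this illustrative model (the linear distortion-to-threshold mapping, the floor of 18, and both parameter sets are invented for the example), the downsampling distortion yields a CRF transition threshold, and each rung of an encoding ladder is assigned the first or second parameter set depending on which side of the threshold its CRF falls:

```python
def crf_transition_threshold(downsampling_distortion):
    """Higher downscaling distortion -> transition at a lower CRF
    (illustrative linear mapping with an arbitrary floor)."""
    base, slope = 28.0, 10.0
    return max(18.0, base - slope * downsampling_distortion)

# Hypothetical parameter sets; real encoders would carry presets, tunings, etc.
PARAMS_LOW_CRF = {"preset": "slow", "tune": "quality"}    # below threshold
PARAMS_HIGH_CRF = {"preset": "fast", "tune": "bitrate"}   # at/above threshold

def select_parameters(crf, threshold):
    return PARAMS_LOW_CRF if crf < threshold else PARAMS_HIGH_CRF

def build_encoding_ladder(crf_values, downsampling_distortion):
    t = crf_transition_threshold(downsampling_distortion)
    return [(crf, select_parameters(crf, t)) for crf in sorted(crf_values)]
```

The two resulting groups of rungs correspond to the "first set" and "second set" of digital video encodings in the abstract.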

ENHANCED WI-FI SENSING MEASUREMENT SETUP AND SENSING TRIGGER FRAME FOR RESPONDER-TO-RESPONDER SENSING
20230033468 · 2023-02-02

This disclosure describes systems, methods, and devices related to responder-to-responder Wi-Fi sensing between station devices. A device may cause to send, during a trigger frame sounding phase of a responder-to-responder Wi-Fi sensing, a sensing responder-to-responder sounding trigger frame to the first station device and the second station device, the sensing responder-to-responder sounding trigger frame associated with causing the first station device to send a responder-to-responder null data packet (NDP) to the second station device; cause to send, during a reporting phase of the responder-to-responder Wi-Fi sensing, a sensing report trigger frame to the second station device; and identify, during the reporting phase, a sensing measurement report from the second station device based on the sensing report trigger frame, wherein the sensing measurement report is indicative of measurements of the responder-to-responder NDP.
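The message sequence above can be simulated with a few toy objects. This is not an 802.11 implementation and none of the class or field names come from the specification; it only shows the ordering of the two phases: a sounding trigger causes STA1 to send an NDP to STA2, and a report trigger causes STA2 to report measurements of that NDP.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Station:
    name: str
    received_ndp_from: Optional[str] = None

    def send_ndp(self, peer: "Station") -> None:
        # Triggered by the sounding trigger frame: send an NDP to the peer.
        peer.received_ndp_from = self.name

    def report(self) -> dict:
        # Triggered by the report trigger frame: report NDP measurements.
        return {"reporter": self.name,
                "measured_ndp_from": self.received_ndp_from}

def sensing_session(sta1: Station, sta2: Station) -> dict:
    # Trigger-frame sounding phase: STA1 sends the NDP to STA2.
    sta1.send_ndp(sta2)
    # Reporting phase: STA2 returns its sensing measurement report.
    return sta2.report()
```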

CODING MODE INFORMATION PROPAGATION FOR VIDEO CODING

Coding mode information of a current block is propagated to neighbor blocks even when the coding mode is not selected for the current block. Such propagation may be limited, for example to one block only or to multiple generations of copies. Several propagation modes are proposed: left-to-right direction, top-to-bottom direction, bottom-right diagonal direction, first available information, or last coded information. The coding mode propagation improves further predictions, since a subsequent block benefits from the information of its neighbors.
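The left-to-right propagation mode with a generation limit can be sketched as follows (the data layout and the generation counter are my illustration of the "multiple generations of copies" idea, not the claimed encoding):

```python
def propagate_left_to_right(modes, max_generations=1):
    """modes: one row of per-block coding-mode info (None where unset).
    Each copy carries a generation count; copying stops at the limit, so
    max_generations=1 propagates a mode by at most one block."""
    out = [(m, 0) if m is not None else None for m in modes]
    for i in range(1, len(out)):
        if out[i] is None and out[i - 1] is not None:
            mode, gen = out[i - 1]
            if gen < max_generations:
                out[i] = (mode, gen + 1)
    return [entry[0] if entry else None for entry in out]
```

The other proposed directions (top-to-bottom, bottom-right diagonal) would apply the same rule along a different scan order over the block grid.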

Optimized multipass encoding

An original input video file is encoded using a machine learning approach. The encoder performs a detailed video analysis and selects encoding parameters using a machine learning algorithm that improves over time. The encoding process uses a multi-pass approach. During a first pass, the entire video file is scanned to extract video property information that does not require in-depth analysis. The extracted data is then fed into an encoding engine, which uses artificial intelligence to produce optimized encoder settings. The video file is divided into a set of time-based chunks and, in a second pass, the encoding parameters for each chunk are set and distributed to encoding nodes for parallel processing. These encoder instances probe-encode each chunk to determine its level of complexity and to derive chunk-specific encoding parameters. After the second pass completes, the results of both passes are merged to obtain the information the encoder needs to achieve the best possible result.
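The two-pass flow can be outlined in miniature. In the sketch below, every heuristic is a placeholder I invented (frames are single luma values, the "engine" is a fixed rule rather than a learned model, and complexity is just the value range): pass one scans the whole file for cheap properties, the file is split into time-based chunks, and each chunk is probe-encoded in parallel before the results are merged.

```python
from concurrent.futures import ThreadPoolExecutor

def first_pass(frames):
    # Cheap whole-file properties only; no in-depth analysis.
    return {"mean_luma": sum(frames) / len(frames), "n_frames": len(frames)}

def base_settings(props):
    # Stand-in for the learned mapping from properties to encoder settings.
    return {"crf": 23 if props["mean_luma"] > 100 else 20}

def split_chunks(frames, chunk_len):
    return [frames[i:i + chunk_len] for i in range(0, len(frames), chunk_len)]

def probe_encode(chunk, settings):
    # Probe encode: estimate per-chunk complexity and adjust the parameters.
    complexity = max(chunk) - min(chunk)
    crf = settings["crf"] + (-2 if complexity > 50 else 1)
    return {"crf": crf, "complexity": complexity}

def two_pass_encode(frames, chunk_len=4):
    props = first_pass(frames)
    settings = base_settings(props)
    chunks = split_chunks(frames, chunk_len)
    with ThreadPoolExecutor() as pool:   # parallel "encoding nodes"
        per_chunk = list(pool.map(lambda c: probe_encode(c, settings), chunks))
    # Merge: global properties plus chunk-specific parameters.
    return {"global": props, "chunks": per_chunk}
```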

SHOT-CHANGE DETECTION USING CONTAINER LEVEL INFORMATION
20230060780 · 2023-03-02

The disclosed computer-implemented method may include, for a current frame of a sequence of video frames, determining a frame type label of the current frame. The method may include, in response to determining that the current frame is labeled as an intra frame (I-frame), decoding the current frame and comparing the decoded frame to historical I-frame data. The method may also include, in response to the comparison satisfying a shot-change threshold, flagging the current frame as a shot-change frame, and in response to flagging the current frame as the shot-change frame, storing the current frame for a subsequent shot-change detection. The method may further include updating, based on flagged shot-change frames, shot boundaries for the sequence of video frames. Various other methods, systems, and computer-readable media are also disclosed.
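The detection loop can be summarized in a short sketch. The difference metric, the threshold value, and the one-frame history are my simplifications for illustration (the patent compares against "historical I-frame data" more generally); frames here are flat pixel lists, and the container-level type label lets non-I frames be skipped without decoding.

```python
def frame_difference(a, b):
    """Mean absolute pixel difference between two decoded frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_shot_changes(labeled_frames, threshold=30.0):
    """labeled_frames: (frame_type, decoded_pixels) pairs in display order.
    Only frames labeled 'I' are compared; a flagged frame is stored as the
    reference for subsequent shot-change detection."""
    boundaries, history = [], None
    for idx, (ftype, pixels) in enumerate(labeled_frames):
        if ftype != "I":
            continue                        # skip non-I frames entirely
        if history is None or frame_difference(pixels, history) >= threshold:
            boundaries.append(idx)          # flag as a shot-change frame
            history = pixels                # store for the next comparison
    return boundaries
```

The returned indices would then be used to update the shot boundaries for the sequence, as the abstract's final step describes.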