H03M7/3075

Method and apparatus for content-aware point cloud compression using HEVC tiles
10798389 · 2020-10-06

A method includes receiving a data cloud including a plurality of data points. The method further includes identifying each data point including a region-of-interest (ROI) and dividing the data cloud into a ROI cloud and one or more non-ROI clouds. The method includes performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The method includes performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two-dimensional (2D) map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile of the map, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a single tile.
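The tile check in step (ii)–(iii) can be sketched as follows. This is a minimal illustration, not the claimed method: the tile size, patch representation (top-left corner only), and the relocation policy of packing everything into the first patch's tile are all assumptions.

```python
# Hypothetical sketch of the ROI patch-packing check described above.
# Tile dimensions and the relocation policy are assumptions, not taken
# from the patent claims.

TILE_W, TILE_H = 64, 64  # assumed tile size in the 2D map

def tile_of(patch):
    """Return the (col, row) tile index of a patch's top-left corner."""
    x, y = patch
    return (x // TILE_W, y // TILE_H)

def pack_roi_patches(patches):
    """If ROI patches fall in more than one tile, move them all into
    the tile of the first patch (one possible relocation policy)."""
    tiles = {tile_of(p) for p in patches}
    if len(tiles) <= 1:
        return list(patches)          # already in a single tile
    tx, ty = tile_of(patches[0])      # target tile
    moved = []
    for i, (x, y) in enumerate(patches):
        # place patches in a row inside the target tile (assumed layout)
        moved.append((tx * TILE_W + (i * 8) % TILE_W, ty * TILE_H))
    return moved

patches = [(10, 10), (70, 130)]       # spans two tiles
packed = pack_roi_patches(patches)
assert len({tile_of(p) for p in packed}) == 1
```

Keeping all ROI patches inside one HEVC tile is what lets a decoder extract and decode the region of interest independently of the rest of the frame.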

GENE SEQUENCING DATA COMPRESSION METHOD AND DECOMPRESSION METHOD, SYSTEM AND COMPUTER-READABLE MEDIUM

The invention discloses a gene sequencing data compression method and decompression method, a system, and a computer-readable medium. The compression method includes: comparing a read sequence R with a reference genome to obtain an equal-length gene character sequence CS; coding the read sequence R and the equal-length gene character sequence CS; performing reversible computing by means of a reversible function; compressing, as two data streams, the most approximate position p of the read sequence R in the reference genome and the result of the reversible computing; and outputting the compressed data streams. The data decompression method is the reverse of the compression method. By means of the present invention, the compression ratio can be further reduced, and the algorithm's compression/decompression time is shorter while a better compression ratio is obtained. The present invention is compatible with algorithms that compare read sequences against reference genomes.
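The round trip described above can be illustrated with a toy implementation. The 2-bit base coding and XOR as the "reversible function" are illustrative assumptions (the abstract does not specify the function), and the best-match position p is found by brute force here.

```python
# Toy sketch of the compress/decompress round trip: find position p,
# take the equal-length reference slice CS, and combine read and CS
# through a reversible function (XOR of 2-bit base codes, an assumption).

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def best_position(read, ref):
    """Most approximate position p: fewest mismatches (brute force)."""
    return min(range(len(ref) - len(read) + 1),
               key=lambda p: sum(a != b for a, b in zip(read, ref[p:p + len(read)])))

def compress(read, ref):
    p = best_position(read, ref)
    cs = ref[p:p + len(read)]                       # equal-length sequence CS
    residues = [CODE[r] ^ CODE[c] for r, c in zip(read, cs)]
    return p, residues                              # the two data streams

def decompress(p, residues, ref):
    cs = ref[p:p + len(residues)]
    return "".join(BASE[CODE[c] ^ r] for c, r in zip(cs, residues))

ref = "ACGTACGTTGCA"
read = "GTACGA"
p, res = compress(read, ref)
assert decompress(p, res, ref) == read
```

When the read closely matches the reference, the residue stream is mostly zeros, which a downstream entropy coder compresses far better than the raw bases.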

METHOD AND DEVICE FOR DIGITAL DATA COMPRESSION

The invention relates to a method for compressing an input data set, wherein the coefficients in the input data set are grouped into groups of coefficients; a number of bit planes (GCLI) needed for representing each group is determined; a quantization is applied, keeping a limited number of bit planes; a prediction mechanism is applied to the GCLIs to obtain residues; and an entropy encoding of the residues is performed. The entropy-encoded residues and the bit planes kept allow the decoder to reconstruct the quantized data, at a minimal cost in metadata.
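The GCLI pipeline above can be sketched in a few lines. Horizontal (left-neighbour) DPCM is used as the prediction mechanism here as one plausible choice; the actual predictor and group size are not specified in the abstract.

```python
# Sketch of the GCLI scheme: per-group bit-plane counts, quantization
# by dropping low bit planes, and DPCM prediction of the GCLIs.
# Left-neighbour prediction is an assumption for illustration.

def gcli(group):
    """Bit planes needed to represent the largest magnitude in the group."""
    return max(abs(c) for c in group).bit_length()

def encode(groups, truncated_planes):
    """Quantize by dropping low bit planes, then predict each group's GCLI
    from its left neighbour and keep only the residues."""
    gclis = [max(gcli(g) - truncated_planes, 0) for g in groups]
    quantized = [[c >> truncated_planes for c in g] for g in groups]
    residues = [g - (gclis[i - 1] if i else 0) for i, g in enumerate(gclis)]
    return residues, quantized

groups = [[3, 12, 7, 1], [2, 5, 1, 0], [40, 33, 8, 2]]
residues, q = encode(groups, truncated_planes=1)
# the decoder reconstructs the GCLIs by cumulatively summing the residues
recon = []
for r in residues:
    recon.append(r + (recon[-1] if recon else 0))
assert recon == [max(gcli(g) - 1, 0) for g in groups]
```

Because neighbouring groups usually have similar magnitudes, the residues cluster around zero and entropy-code cheaply, which is where the "minimal cost in metadata" comes from.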

SYSTEM AND METHOD FOR COMPRESSION OF GEOSPATIAL LOCATION DATA
20200192869 · 2020-06-18

Systems and methods for the compression and decompression of geospatial locations are disclosed. The compression and decompression are based on a prediction of the geospatial location and a geometrical projection of the Earth.
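The prediction side of the scheme can be sketched as follows. The fixed-point scale and the previous-point predictor are assumptions for illustration; the Earth-projection step mentioned in the abstract is omitted here.

```python
# Sketch of prediction-based geospatial compression: store the first
# point verbatim, then only the small residuals between each point and
# its prediction. Predictor and scale are illustrative assumptions.

SCALE = 10**5  # ~1 m resolution in degrees, an assumed quantization step

def to_fixed(lat, lon):
    return round(lat * SCALE), round(lon * SCALE)

def compress(track):
    """Predict each point as the previous one and keep integer deltas."""
    fixed = [to_fixed(lat, lon) for lat, lon in track]
    out = [fixed[0]]  # first point stored verbatim
    for (plat, plon), (lat, lon) in zip(fixed, fixed[1:]):
        out.append((lat - plat, lon - plon))  # small residuals
    return out

def decompress(stream):
    lat, lon = stream[0]
    pts = [(lat / SCALE, lon / SCALE)]
    for dlat, dlon in stream[1:]:
        lat += dlat
        lon += dlon
        pts.append((lat / SCALE, lon / SCALE))
    return pts

track = [(48.85837, 2.29448), (48.85840, 2.29455), (48.85846, 2.29467)]
assert decompress(compress(track)) == track
```

Consecutive points on a track differ by metres while absolute coordinates need tens of bits, so the residuals fit in far fewer bits than the raw locations.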

Data processing apparatus, memory system, and method of processing data

A data processing apparatus for compressing physical address values correlated to logical address values includes a first prediction unit that calculates a first predicted address value for a first input address value in input data to be compressed, a determination unit that selects an encoding processing for the first input address value according to the first predicted address value, and a compression unit configured to encode the first input address value according to the encoding processing selected by the determination unit.
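The prediction/mode-selection flow can be sketched as follows. The stride predictor and the one-byte-residual threshold are illustrative assumptions standing in for the apparatus's prediction unit and determination unit.

```python
# Sketch of predictive address compression: predict the next physical
# address as previous + last stride, then select an encoding mode based
# on the prediction residual (thresholds are assumptions).

def compress_addresses(addrs):
    """Emit a short 1-byte-residual code when the prediction is close,
    otherwise fall back to the raw address."""
    out, prev, stride = [], 0, 0
    for a in addrs:
        predicted = prev + stride
        delta = a - predicted
        if -128 <= delta < 128:
            out.append(("short", delta))      # residual fits in one byte
        else:
            out.append(("raw", a))            # raw-address fallback mode
        stride = a - prev
        prev = a
    return out

addrs = [0x1000, 0x1004, 0x1008, 0x100C, 0x9000]
codes = compress_addresses(addrs)
assert [m for m, _ in codes] == ["raw", "raw", "short", "short", "raw"]
```

Mapping tables in flash storage are full of such constant-stride runs, which is why a stride predictor plus a residual mode compresses them well.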

Realtime Multimodel Lossless Data Compression System and Method
20200128307 · 2020-04-23

Methods and systems for processing telemetry data that contains multiple data types are disclosed. Optimum multimodel encoding approaches can achieve data-specific compression performance for heterogeneous datasets by distinguishing data types and their characteristics in real time and applying the most effective compression method to a given data type. Using an optimum encoding diagram for heterogeneous data, a data classification algorithm classifies input data blocks into predefined categories, such as Unicode, telemetry, RCS, and IR for telemetry datasets, plus an "unknown" class for non-studied data types, and then assigns them to the corresponding compression models.
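The classify-then-dispatch structure can be sketched as below. The byte-statistics heuristics and the per-class model choices are toy assumptions standing in for the patent's classifier and model bank (only "unicode", "telemetry", and "unknown" classes are modelled here).

```python
import zlib

def classify(block: bytes) -> str:
    """Toy classifier standing in for the patent's data-type detection:
    route blocks by simple byte statistics (assumed heuristics)."""
    if all(32 <= b < 127 or b in (9, 10, 13) for b in block):
        return "unicode"           # text-like content
    if len(set(block)) <= 4:
        return "telemetry"         # low-cardinality sensor words
    return "unknown"               # non-studied data types

def compress_stream(blocks):
    """Dispatch each block to the compression model for its class."""
    models = {
        "unicode":   lambda b: zlib.compress(b, 9),
        "telemetry": lambda b: zlib.compress(b, 1),
        "unknown":   lambda b: b,          # store uncompressed
    }
    return [(cls := classify(b), models[cls](b)) for b in blocks]

blocks = [b"hello telemetry", bytes([1, 2, 1, 2] * 8), bytes(range(256))]
labels = [cls for cls, _ in compress_stream(blocks)]
assert labels == ["unicode", "telemetry", "unknown"]
```

The payoff is that each block gets a model matched to its statistics rather than one compromise codec for the whole heterogeneous stream.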

METHOD AND APPARATUS FOR CONTENT-AWARE POINT CLOUD COMPRESSION USING HEVC TILES
20200107028 · 2020-04-02

A method includes receiving a data cloud including a plurality of data points. The method further includes identifying each data point including a region-of-interest (ROI) and dividing the data cloud into a ROI cloud and one or more non-ROI clouds. The method includes performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The method includes performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two-dimensional (2D) map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile of the map, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a single tile.

Techniques for compressing floating-point format images
10602183 · 2020-03-24

Disclosed herein are techniques for pre-processing a multiple-channel image for compression. The multiple-channel image can be composed of a collection of pixels that are represented using a floating-point format (e.g., half-precision/16-bit) for display on devices optimized for wide-gamut color space. The techniques can include a first step of quantizing the pixels into a fixed range of values, and applying invertible color-space transformations to the sub-pixels of each pixel (which can include red, green, blue, and alpha sub-pixels) to produce transformed sub-pixels including luma and chroma values. Next, the luma sub-pixels are placed into a luma data stream, the first and second chroma values are placed into a chroma data stream, and the alpha sub-pixels are placed into an alpha data stream. Predictive functions are then applied to the luma and chroma data streams. Finally, the various streams are separated into buffers and compressed to produce a multiple-channel image.
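The quantize / transform / split stages can be sketched as follows. The abstract does not name the invertible transform, so the lossless YCoCg-R transform stands in for it here, and the 12-bit quantization range is also an assumption; the predictive and compression stages are omitted.

```python
# Sketch of the pre-processing pipeline: quantize each float sub-pixel
# to a fixed integer range, apply a reversible colour transform
# (YCoCg-R, an assumed stand-in), and split into three streams.

def quantize(v, bits=12):
    """Map a float in [0, 1] to a fixed integer range."""
    return round(min(max(v, 0.0), 1.0) * ((1 << bits) - 1))

def ycocg_r(r, g, b):
    """Lossless integer RGB -> (Y, Co, Cg) transform."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    return t + (cg >> 1), co, cg      # (luma Y, chroma Co, chroma Cg)

def preprocess(pixels):
    luma, chroma, alpha = [], [], []
    for r, g, b, a in pixels:
        y, co, cg = ycocg_r(quantize(r), quantize(g), quantize(b))
        luma.append(y)
        chroma.extend((co, cg))       # first and second chroma values
        alpha.append(quantize(a))
    return luma, chroma, alpha

luma, chroma, alpha = preprocess([(0.5, 0.25, 0.75, 1.0)])
assert len(luma) == 1 and len(chroma) == 2 and len(alpha) == 1
```

Separating luma, chroma, and alpha into their own streams lets each stream's predictor and entropy coder exploit the very different statistics of the three channel types.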

Coefficient Context Modeling In Video Coding
20200074689 · 2020-03-05

In some embodiments, a method determines a plurality of classes of bins that are used to determine a context model for entropy coding of a current block in a video. The method calculates a first value for a first class of bins in the plurality of classes of bins and calculates a second value for a second class of bins in the plurality of classes of bins. The first value for the first class of bins is weighted by a first weight to generate a weighted first value, and the second value for the second class of bins is weighted by a second weight to generate a weighted second value. The method then selects a context model based on the weighted first value and the weighted second value.
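The weighted selection step can be sketched as follows. The weights, thresholds, and the interpretation of the per-class values (e.g., counts of significant bins among previously coded neighbours) are illustrative assumptions; the abstract only specifies that the weighted values drive the choice.

```python
# Sketch of weighted context-model selection: combine per-class bin
# values with weights, then threshold the score into a model index.
# Weights and thresholds are illustrative assumptions.

def select_context(bin_classes, weights, thresholds):
    """Weight each class's value and pick a context model index by
    thresholding the weighted sum."""
    score = sum(w * v for w, v in zip(weights, bin_classes))
    for i, t in enumerate(thresholds):
        if score < t:
            return i
    return len(thresholds)

# two classes of previously coded bins, e.g. significance counts from
# neighbouring positions (assumed interpretation)
ctx = select_context(bin_classes=[3, 1], weights=[2, 1], thresholds=[2, 5, 9])
assert ctx == 2   # score = 2*3 + 1*1 = 7, which falls in [5, 9)
```

Weighting lets the more predictive class of bins dominate the choice, so the entropy coder's probability model tracks the local statistics more closely than an unweighted count would.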

TECHNIQUES FOR COMPRESSING FLOATING-POINT FORMAT IMAGES
20190373283 · 2019-12-05

Disclosed herein are techniques for pre-processing a multiple-channel image for compression. The multiple-channel image can be composed of a collection of pixels that are represented using a floating-point format (e.g., half-precision/16-bit) for display on devices optimized for wide-gamut color space. The techniques can include a first step of quantizing the pixels into a fixed range of values, and applying invertible color-space transformations to the sub-pixels of each pixel (which can include red, green, blue, and alpha sub-pixels) to produce transformed sub-pixels including luma and chroma values. Next, the luma sub-pixels are placed into a luma data stream, the first and second chroma values are placed into a chroma data stream, and the alpha sub-pixels are placed into an alpha data stream. Predictive functions are then applied to the luma and chroma data streams. Finally, the various streams are separated into buffers and compressed to produce a multiple-channel image.