H03M7/30

Low-latency direct cloud access with file system hierarchies and semantics

Techniques described herein relate to systems and methods of data storage, and more particularly to layering file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, providing access to data in a manner consistent with file-based access, with data organized in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.
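As a rough illustration of layering a name hierarchy onto a flat object interface, the sketch below emulates POSIX-style directories over a plain key-value store. The class and method names are invented for illustration, and a dict stands in for a cloud bucket; real object stores expose prefix listing through their own APIs.

```python
class HierarchicalObjectStore:
    """Toy sketch: file-system hierarchy layered on a flat object store."""

    def __init__(self):
        self.objects = {}  # flat key -> bytes, as in an object store

    def put_file(self, path, data):
        # A POSIX-style path becomes a flat object key.
        self.objects[path.strip("/")] = data

    def get_file(self, path):
        return self.objects[path.strip("/")]

    def list_dir(self, path):
        # Directory semantics are emulated by prefix filtering:
        # the store itself has no hierarchy, only keys.
        prefix = path.strip("/") + "/"
        entries = set()
        for key in self.objects:
            if key.startswith(prefix):
                rest = key[len(prefix):]
                entries.add(rest.split("/", 1)[0])
        return sorted(entries)

store = HierarchicalObjectStore()
store.put_file("/home/a/notes.txt", b"hi")
store.put_file("/home/a/sub/x.bin", b"\x00")
print(store.list_dir("/home/a"))  # -> ['notes.txt', 'sub']
```

Listing a "directory" is a prefix scan, which is why object stores can present hierarchies without storing one.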

Memory system

According to one embodiment, a memory system includes a compressor configured to output second data obtained by compressing input first data, and a non-volatile memory to which third data based on the second data output from the compressor is written. The compressor includes a dictionary coding unit configured to perform dictionary coding on the first data, an entropy coding unit configured to perform entropy coding on the result of the dictionary coding, a first calculation unit configured to calculate the compression efficiencies of the dictionary coding and the entropy coding, and a first control unit configured to control the operation of at least one of the dictionary coding unit and the entropy coding unit based on the compression efficiencies and a power reduction level.
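A minimal sketch of the two-stage idea, assuming a toy run-length coder as the dictionary stage, zlib as a stand-in for the entropy stage, and an invented `power_level` threshold for the control unit's decision to skip a stage. None of these choices come from the abstract itself.

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    # Toy "dictionary" stage: run-length coding as a stand-in for LZ-style coding.
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])  # (length, value) pair per run
        i += run
    return bytes(out)

def compress(data: bytes, power_level: int) -> bytes:
    stage1 = rle_encode(data)
    eff1 = len(stage1) / max(len(data), 1)  # stage-1 compression efficiency
    # Control unit: at a high power-reduction level, skip the entropy stage
    # when the dictionary stage alone already compresses well enough.
    if power_level >= 2 and eff1 < 0.5:
        return stage1
    stage2 = zlib.compress(stage1)  # entropy-stage stand-in
    return stage2 if len(stage2) < len(stage1) else stage1
```

For a run of 1000 zero bytes, the RLE stage emits four (length, value) pairs (three runs of 255 plus one of 235), so at `power_level=2` the entropy stage is skipped and only 8 bytes are produced.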

Channel-parallel compression with random memory access
11716095 · 2023-08-01

A data compressor includes a zero-value remover, a zero bit mask generator, a non-zero values packer, and a row-pointer generator. The zero-value remover receives 2^N bit streams of values and outputs 2^N non-zero-value bit streams having zero values removed from each respective bit stream. The zero bit mask generator receives the 2^N bit streams of values and generates a zero bit mask for a predetermined number of values of each bit stream, in which each zero bit mask indicates a location of a zero value in the predetermined number of values corresponding to the zero bit mask. The non-zero values packer receives the 2^N non-zero-value bit streams and forms a group of packed non-zero values. The row-pointer generator generates a row pointer for each group of packed non-zero values.
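The four components can be sketched in software roughly as follows. `compress_channels` and its `group_size` parameter are invented names, and real hardware would operate on bit streams rather than Python lists; the row pointers here mark where each channel's non-zero values begin in the packed array, which is what makes random access back into a channel possible.

```python
def compress_channels(streams, group_size):
    # streams: list of 2**N lists of integer values (one list per channel)

    # Zero-value remover: drop zeros from each stream.
    nonzero_streams = [[v for v in s if v != 0] for s in streams]

    # Zero bit mask generator: one mask per group of `group_size` values;
    # bit i set -> value i in that group is zero.
    masks = []
    for s in streams:
        for g in range(0, len(s), group_size):
            mask = 0
            for i, v in enumerate(s[g:g + group_size]):
                if v == 0:
                    mask |= 1 << i
            masks.append(mask)

    # Non-zero values packer: concatenate surviving values.
    packed = [v for s in nonzero_streams for v in s]

    # Row-pointer generator: start offset of each stream in the packed array.
    row_ptrs, offset = [], 0
    for s in nonzero_streams:
        row_ptrs.append(offset)
        offset += len(s)
    return packed, masks, row_ptrs
```

For `streams=[[0, 3, 0, 5], [7, 0, 0, 0]]` and `group_size=4`, the packed values are `[3, 5, 7]`, the masks are `0b0101` and `0b1110`, and the row pointers are `[0, 2]`. This mask-plus-pointer layout closely resembles the data/indptr structure of CSR sparse matrices.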

Compressing device and method using parameters of quadtree method

A device configured to compress a tensor including a plurality of cells includes: a quadtree generator configured to generate a quadtree by searching for non-zero cells included in the tensor and to extract at least one parameter value from the quadtree; a mode selector configured to determine a compression mode based on the at least one parameter value; and a bitstream generator configured to generate a bitstream by compressing the tensor based on the compression mode.
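A toy version of the quadtree search and mode selection might look like this. The density threshold of 0.25, the two mode names, and all function names are assumptions; the abstract does not specify which parameters or modes the device actually uses.

```python
def quadtree_nonzero(tensor, r0, c0, size):
    """Nested tuple marking which quadrants contain non-zero cells."""
    block = [row[c0:c0 + size] for row in tensor[r0:r0 + size]]
    if all(v == 0 for row in block for v in row):
        return 0                      # empty leaf: whole block is zero
    if size == 1:
        return 1                      # non-zero leaf (a single cell)
    h = size // 2
    # Recurse into the four quadrants: NW, NE, SW, SE.
    return tuple(quadtree_nonzero(tensor, r0 + dr, c0 + dc, h)
                 for dr in (0, h) for dc in (0, h))

def count_nonzero_leaves(tree):
    # Parameter extraction: number of non-zero cells, read off the quadtree.
    if isinstance(tree, int):
        return tree
    return sum(count_nonzero_leaves(child) for child in tree)

def choose_mode(tensor):
    size = len(tensor)  # assumes a square, power-of-two tensor
    tree = quadtree_nonzero(tensor, 0, 0, size)
    density = count_nonzero_leaves(tree) / (size * size)
    # Hypothetical rule: sparse tensors use quadtree coding, dense ones stay raw.
    return ("quadtree" if density < 0.25 else "dense"), tree
```

A 4x4 tensor with a single non-zero cell in the top-left corner yields the tree `((1, 0, 0, 0), 0, 0, 0)`: three whole quadrants collapse to a single empty-leaf symbol, which is where the compression comes from.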

Guaranteed data compression using intermediate compressed data
11716094 · 2023-08-01

Methods for converting an n-bit number into an m-bit number for situations where n>m and also for situations where n<m, where n and m are integers. The methods use truncation or bit replication followed by the calculation of an adjustment value which is applied to the replicated number.
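Truncation and bit replication can be sketched as below; the patent's adjustment-value step is omitted, so this shows only the base conversion. A useful property of bit replication (widely used for bit-depth conversion in graphics) is that the n-bit maximum maps exactly to the m-bit maximum.

```python
def convert(value, n, m):
    """Convert an n-bit number to an m-bit number (adjustment step omitted)."""
    if n >= m:
        return value >> (n - m)        # truncation: drop low-order bits
    # Bit replication: repeat the n-bit pattern until at least m bits,
    # then drop the excess low-order bits.
    out, bits = 0, 0
    while bits < m:
        out = (out << n) | value
        bits += n
    return out >> (bits - m)
```

For example, `convert(0b1111, 4, 8)` replicates to `0b11111111` (255), and `convert(0b101, 3, 8)` builds `0b101101101` and drops one bit to get `0b10110110` (182); in the other direction, `convert(255, 8, 4)` truncates to 15.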

Time-Series Telemetry Data Compression
20230239224 · 2023-07-27

A system can identify a first group of time-series telemetry data that represents performance metrics of a computing device, wherein the first group of time-series telemetry data identifies respective first values and corresponding respective first timestamps. The system can create a second group of time-series telemetry data that identifies second timestamps. The system can populate the second group of time-series telemetry data with the respective first values at respective first locations of the second group of time-series telemetry data that correspond to the respective first timestamps of the respective first values. The system can create a tensor that identifies third timestamps. The system can populate the tensor with the respective first values at respective second locations of the tensor that correspond to the respective first timestamps of the respective first values, wherein populating the tensor comprises combining two values of the respective first values.
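The step of moving values onto a new timestamp grid and combining collisions might be sketched as follows. The linear scan, the `max` combiner, and the assumption that every sample falls at or after the first target timestamp are all illustrative choices, not details from the abstract.

```python
def regrid(samples, ts_new, combine=max):
    """Place (timestamp, value) samples onto the grid ts_new.

    Each sample lands in the slot of the last grid timestamp not after it;
    when two samples collide in one slot they are combined (here: max).
    Assumes every sample timestamp is >= ts_new[0].
    """
    slots = [None] * len(ts_new)
    for t, v in samples:
        # Find the last grid point <= t (linear scan for clarity;
        # bisect would be the idiomatic choice for sorted grids).
        idx = max(i for i, g in enumerate(ts_new) if g <= t)
        slots[idx] = v if slots[idx] is None else combine(slots[idx], v)
    return slots
```

With `samples=[(0, 1.0), (3, 2.0), (4, 5.0)]` and `ts_new=[0, 2, 4]` each sample gets its own slot, giving `[1.0, 2.0, 5.0]`; with `samples=[(2, 2.0), (3, 7.0)]` and `ts_new=[0, 2]` both samples collide in the second slot and are combined to `7.0`, which mirrors the abstract's "combining two values" when the target grid is coarser than the source data.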

ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD

This encoding device includes a control circuit that controls conversion of the signal form of a sound signal on the basis of information about the capability of a decoding device, which decodes the encoded data of the sound signal, to convert that signal form, and an encoding circuit that encodes the sound signal in accordance with the conversion control.

Parallel processing circuits for neural networks

The present disclosure provides an integrated circuit chip device and a related product. The integrated circuit chip device includes a primary processing circuit and a plurality of basic processing circuits. The primary processing circuit, or at least one of the plurality of basic processing circuits, includes compression mapping circuits configured to compress each piece of data used in a neural network operation. The technical solution provided by the present disclosure has the advantages of a small amount of computation and low power consumption.
