H03M7/30

Storage system and storage control method

A storage system that performs irreversible compression on time-series data using a compressor/decompressor based on machine learning calculates a value for each of one or more kinds of statistics, based on one or more parameters, for the original data (the time-series data input to the compressor/decompressor), and calculates a value for each of the same one or more kinds of statistics for the decompressed data (the time-series data output from the compressor/decompressor) corresponding to the original data. The machine learning of the compressor/decompressor is performed based on the statistic values calculated for the original data and the statistic values calculated for the decompressed data.
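
As a rough illustration of the training signal this describes, the sketch below computes per-series statistic values for the original and the decompressed data and turns their disagreement into a loss term. The choice of statistics (mean, standard deviation, maximum) and the squared-difference loss are assumptions for illustration; the patent leaves the kinds of statistics and their parameters open.

```python
import numpy as np

def stat_values(series: np.ndarray) -> dict:
    """One value per kind of statistic. The kinds used here (mean, std,
    max) and their parameters are illustrative assumptions."""
    return {
        "mean": float(series.mean()),
        "std": float(series.std()),
        "max": float(series.max()),
    }

def stat_loss(original: np.ndarray, decompressed: np.ndarray) -> float:
    """Loss term comparing the statistic values of the original data with
    those of the decompressed data; training the compressor/decompressor
    against it pushes the decompressed series to preserve the statistics."""
    s_orig = stat_values(original)
    s_dec = stat_values(decompressed)
    return sum((s_orig[k] - s_dec[k]) ** 2 for k in s_orig)
```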

Dynamic high-speed high-sensitivity imaging device and imaging method

Either or both of an optical system with a structured illumination pattern and a structured detection system having a plurality of regions with different optical characteristics are used. Optical signals from an object to be observed are detected through one or a small number of pixel detectors while the relative position between the object and either the optical system or the detection system is changed; time-series signal information of the optical signals is obtained, and an image of the object to be observed is reconstructed from the time-series signal information.
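
A toy sketch of this kind of acquisition and reconstruction, assuming the structured illumination can be modeled as a sequence of known random patterns and using a classic correlation (ghost-imaging) estimator in place of the patent's reconstruction; the scene, pattern model, and estimator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: at each relative position, a known structured pattern
# illuminates the object and a single-pixel detector records one total
# intensity, yielding one element of the time-series signal.
h, w, n_steps = 16, 16, 2048
patterns = rng.random((n_steps, h * w))  # structured lighting per step
scene = np.zeros(h * w)
scene[96:160] = 1.0                      # toy object to be observed

signal = patterns @ scene                # time-series detector output

# Correlation-based reconstruction: the covariance of the signal with the
# patterns is proportional to the scene (up to a constant offset).
image = ((signal - signal.mean())[:, None] * patterns).mean(axis=0)
image = image.reshape(h, w)
```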

Method and apparatus with neural network data input and output control
11580393 · 2023-02-14

A neural network deep learning data control apparatus includes: a memory; an encoding circuit configured to receive a data sequence, generate a compressed data sequence in which consecutive invalid bits in a bit string of the data sequence are compressed into a single bit of the compressed data sequence, generate a validity determination sequence indicating a valid bit and an invalid bit in a bit string of the compressed data sequence, and write the compressed data sequence and the validity determination sequence to the memory; and a decoding circuit configured to read the compressed data sequence and the validity determination sequence from the memory, and determine a bit in the bit string of the compressed data sequence set for transmission to a neural network circuit, based on the validity determination sequence, such that the neural network circuit omits an operation with respect to non-consecutive invalid bits.
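
A minimal sketch of the encode/decode pair, assuming a zero bit marks invalid data; restoring run lengths is out of scope here, since the abstract only requires the decoder to determine, from the validity sequence, which bits go on to the neural network circuit. Names and types are illustrative.

```python
from typing import List, Tuple

INVALID = 0  # assumption: a zero bit marks invalid data

def encode(bits: List[int]) -> Tuple[List[int], List[int]]:
    """Collapse each run of consecutive invalid bits into a single bit of
    the compressed sequence, and emit a validity determination sequence
    (1 = valid, 0 = invalid) aligned with the compressed sequence."""
    compressed, validity = [], []
    in_invalid_run = False
    for b in bits:
        if b == INVALID:
            if not in_invalid_run:      # keep one bit per invalid run
                compressed.append(INVALID)
                validity.append(0)
            in_invalid_run = True
        else:
            compressed.append(b)
            validity.append(1)
            in_invalid_run = False
    return compressed, validity

def decode_for_nn(compressed: List[int], validity: List[int]) -> List[int]:
    """Use the validity sequence to pick the bits sent on to the neural
    network circuit, which can then skip the remaining invalid bits."""
    return [b for b, v in zip(compressed, validity) if v == 1]

c, v = encode([1, 0, 0, 0, 1, 0, 1])
print(c, v, decode_for_nn(c, v))  # [1, 0, 1, 0, 1] [1, 0, 1, 0, 1] [1, 1, 1]
```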

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled that receives, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled determines, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled routes the memory access request to a memory device associated with the determined physical address.
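
A toy sketch of the lookup-and-route step, assuming a page-granular map from logical pages to (memory device, physical page) pairs; the page size, map contents, and device identifiers are invented for illustration.

```python
PAGE = 4096  # assumed page granularity of the logical-to-physical map

address_map = {  # logical page -> (memory device id, physical page)
    0x0: ("dimm0", 0x100),
    0x1: ("dimm1", 0x042),
}

def route(logical_addr: int) -> tuple[str, int]:
    """Resolve the logical address in the accelerator's request to a
    physical address, then route to the owning memory device."""
    page, offset = divmod(logical_addr, PAGE)
    device, phys_page = address_map[page]
    return device, phys_page * PAGE + offset

print(route(0x1234))  # logical page 0x1, offset 0x234 -> ('dimm1', 270900)
```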

SYSTEMS AND METHODS OF DATA COMPRESSION

There is provided a computer-implemented method of compressing a baseline dataset comprising a sequence of a plurality of instances of a plurality of unique data elements, the method comprising: providing a weight function that calculates, for each instance of each unique data element in the baseline dataset, a weight whose value increases as a function of the number of previously processed sequential locations of the instances of the respective unique data element relative to a current sequential location of the baseline dataset; computing an encoding for the baseline dataset according to a distribution of the weight function computed for the plurality of unique data elements in the baseline dataset; and creating a compressed dataset according to the encoding.
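
Under one reading of the weight function (instances at locations closer to the current sequential location receive larger weights), the sketch below builds the distribution that would drive the encoder; the specific reciprocal-distance weight is an assumption, since the claim only requires an increasing value.

```python
from collections import defaultdict

def weight(position: int, current: int) -> float:
    """Illustrative weight: grows as the instance's location approaches
    the current sequential location (any increasing function would do)."""
    return 1.0 / (current - position)

def weighted_distribution(data: bytes, current: int) -> dict:
    """Per-element weights over the previously processed locations,
    normalized into the distribution that drives the encoding."""
    totals = defaultdict(float)
    for pos in range(current):  # only locations before `current`
        totals[data[pos]] += weight(pos, current)
    z = sum(totals.values())
    return {sym: w / z for sym, w in totals.items()}

print(weighted_distribution(b"abab", current=3))
# 'a' (seen at 0 and 2) outweighs 'b' (seen at 1): {97: 0.727..., 98: 0.272...}
```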

MULTI-CONTEXT ENTROPY CODING FOR COMPRESSION OF GRAPHS
20230042018 · 2023-02-09

Example embodiments relate to using a multi-context entropy coder for encoding adjacency lists. A system may obtain a graph having data (or multiple graphs) and may compress the data of the graph using a multi-context entropy coder. The multi-context entropy coder may encode adjacency lists within the data such that each integer is assigned to a different probability distribution. For example, operating the multi-context entropy coder may involve using a combination of arithmetic coding, Huffman coding, and ANS. The assignment of integers to the probability distributions may depend on each integer’s role and/or previous values of a similar kind. By using multi-context entropy coding, the computing system may increase the compression ratio while maintaining similar processing speed.
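
A sketch of the context-assignment idea, assuming sorted adjacency lists split into three integer roles (degree, first neighbor, gaps between neighbors), with plain frequency counters standing in for the arithmetic/Huffman/ANS coders named above; the role split is an illustrative assumption.

```python
from collections import Counter, defaultdict

def contexts_for_adjacency_list(adj: list[int]):
    """Assign each integer of a sorted adjacency list to a context by its
    role: list length, first neighbor, and gaps between neighbors each
    feed a separate probability model."""
    yield ("degree", len(adj))
    if adj:
        yield ("first", adj[0])
        for prev, cur in zip(adj, adj[1:]):
            yield ("gap", cur - prev)

# Each context accumulates its own distribution; a real coder would drive
# arithmetic coding, Huffman coding, or ANS from these counts.
models = defaultdict(Counter)
for adj in [[2, 5, 6], [1, 2, 9]]:  # toy graph: two adjacency lists
    for ctx, value in contexts_for_adjacency_list(adj):
        models[ctx][value] += 1

print(models["gap"])  # Counter({1: 2, 3: 1, 7: 1})
```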

NEAR-OPTIMAL TRANSITION ENCODING CODES
20230041347 · 2023-02-09

A method of encoding input data includes dividing the input data into a plurality of data packets, an input packet of the plurality of data packets including a plurality of digits in a first base system, base-converting the input packet from the first base system to generate a base-converted packet including a plurality of converted digits in a second base system, the second base system having a base value lower than that of the first base system, and incrementing the converted digits to generate a coded packet for transmission through a communication channel.
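
A sketch of one packet's path through this pipeline, assuming small fixed packet widths; the concrete bases and packet sizes are illustrative (the claim only requires the second base to have a lower base value than the first).

```python
def encode_packet(digits: list[int], base_in: int, base_out: int, n_out: int) -> list[int]:
    """Interpret the packet as a number in the first base system, rewrite
    it in the lower second base system, then increment every converted
    digit to generate the coded packet for transmission."""
    value = 0
    for d in digits:                   # packet value in the first base
        value = value * base_in + d
    converted = []
    for _ in range(n_out):             # fixed-width output in the second base
        value, r = divmod(value, base_out)
        converted.append(r)
    converted.reverse()
    return [d + 1 for d in converted]  # coded digits lie in 1..base_out

# Two base-4 digits (value 14) become three base-3 digits [1, 1, 2],
# incremented to [2, 2, 3]; widths must satisfy base_in**2 <= base_out**3.
print(encode_packet([3, 2], base_in=4, base_out=3, n_out=3))  # [2, 2, 3]
```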

SYSTEM AND METHOD FOR DATA COMPACTION UTILIZING MISMATCH PROBABILITY ESTIMATION

A system and method for compacting data that use mismatch probability estimation to improve entropy encoding methods so that they account for, and efficiently handle, previously unseen sourceblocks in the data to be compacted. Training data sets are analyzed to determine the frequency of occurrence of each sourceblock in the training data sets. A mismatch probability estimate is calculated, comprising an estimated frequency at which any given data sourceblock received during encoding will not have a codeword in the codebook. Entropy encoding is used to generate codebooks comprising codewords for data sourceblocks based on the frequency of occurrence of each sourceblock. A “mismatch codeword” is inserted into the codebook based on the mismatch probability estimate to represent the cases in which a block of data to be encoded does not have a codeword in the codebook. During encoding, if a mismatch occurs, a secondary encoding process is used to encode the mismatched sourceblock.
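
A toy sketch of the codebook construction and the encode-time fallback, assuming the mismatch probability estimate is folded into the symbol frequencies before codewords are assigned; the rank-order "codewords", the ESCAPE marker, and secondary_encoder are hypothetical stand-ins for a real entropy coder and secondary encoding process.

```python
from collections import Counter

ESCAPE = "MISMATCH"  # hypothetical marker object for the mismatch codeword

def build_codebook(training_blocks, mismatch_probability):
    """Weight observed sourceblock frequencies by (1 - p_mismatch) and
    give the mismatch codeword the estimated mismatch mass; the rank
    order below stands in for real Huffman/arithmetic codeword lengths."""
    freqs = Counter(training_blocks)
    total = sum(freqs.values())
    weighted = {blk: (1 - mismatch_probability) * n / total
                for blk, n in freqs.items()}
    weighted[ESCAPE] = mismatch_probability
    ranked = sorted(weighted, key=weighted.get, reverse=True)
    return {blk: i for i, blk in enumerate(ranked)}

def encode_block(block, codebook, secondary_encoder):
    """Normal path: the block's codeword. Mismatch path: the mismatch
    codeword followed by a secondary encoding of the raw block."""
    if block in codebook:
        return [codebook[block]]
    return [codebook[ESCAPE], secondary_encoder(block)]

book = build_codebook([b"ab", b"ab", b"cd"], mismatch_probability=0.1)
print(encode_block(b"zz", book, secondary_encoder=bytes.hex))  # [2, '7a7a']
```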

Encoding and decoding with differential encoding size
11558066 · 2023-01-17

In accordance with an embodiment, the method includes determining a second sequence of numbers of digits for encoding the respective integer coefficient values of the first sequence, the second sequence including, as first element, a first number of digits for encoding the first integer coefficient value of the first sequence, and as second and subsequent elements, constrained numbers of digits that are greater than or equal to respective minimum required numbers of digits for encoding the second and subsequent integer coefficient values of the first sequence. The constrained numbers of digits are such that any two successive elements of the second sequence do not differ from each other by more than a given threshold value. The method further includes encoding difference values between the successive elements of the second sequence; and encoding the integer coefficient values of the first sequence using the respective numbers of digits of the second sequence.
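
A sketch of how the second sequence could be derived, assuming binary digits and a two-pass smoothing that keeps successive widths within the threshold while still covering each coefficient's minimum required width; the pass structure is an assumption, not the claimed construction.

```python
def min_bits(v: int) -> int:
    """Minimum number of binary digits for a non-negative coefficient."""
    return max(1, int(v).bit_length())

def constrained_widths(coeffs: list[int], threshold: int) -> list[int]:
    """Second sequence: one width per coefficient, each at least the
    minimum required width, with any two successive widths differing
    by at most `threshold`."""
    widths = [min_bits(c) for c in coeffs]
    for i in range(len(widths) - 2, -1, -1):  # ramp up early widths so
        widths[i] = max(widths[i], widths[i + 1] - threshold)  # later wide values are reachable
    for i in range(1, len(widths)):           # bound the downward steps too
        widths[i] = max(widths[i], widths[i - 1] - threshold)
    return widths

def encode(coeffs: list[int], threshold: int):
    widths = constrained_widths(coeffs, threshold)
    diffs = [b - a for a, b in zip(widths, widths[1:])]   # differences to encode
    payload = [format(c, f"0{w}b") for c, w in zip(coeffs, widths)]
    return widths[0], diffs, payload

# Coefficients [1, 2, 200] with threshold 3 force the earlier widths up:
print(encode([1, 2, 200], 3))  # (2, [3, 3], ['01', '00010', '11001000'])
```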