H03M7/60

CLOUD-BASED SCALE-UP SYSTEM COMPOSITION

Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a compute sled having a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
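
A minimal sketch of the coherence idea follows; the CoherenceLogic class, the page-granular dirty tracking, and the send_to_peer transport callback are illustrative assumptions, not the patent's actual interfaces.

    # Illustrative sketch (not the patent's implementation): one sled tracks
    # which pages of its local working data its processors modify, then ships
    # only those pages (the "coherence data") to the second sled of the node.
    PAGE = 4096

    class CoherenceLogic:
        def __init__(self, memory: bytearray, send_to_peer):
            self.memory = memory
            self.send_to_peer = send_to_peer   # hypothetical transport callback
            self.dirty = set()                 # indices of locally modified pages

        def modify(self, offset: int, data: bytes):
            """Local processors update working data; record the dirtied pages."""
            self.memory[offset:offset + len(data)] = data
            first, last = offset // PAGE, (offset + len(data) - 1) // PAGE
            self.dirty.update(range(first, last + 1))

        def flush_coherence_data(self):
            """Send only the modified pages to the peer sled, then reset."""
            for page in sorted(self.dirty):
                start = page * PAGE
                self.send_to_peer(page, bytes(self.memory[start:start + PAGE]))
            self.dirty.clear()

    # Usage: two sleds of one managed node; the peer applies what it receives.
    peer_memory = bytearray(16 * PAGE)

    def apply_on_peer(page, payload):
        peer_memory[page * PAGE:(page + 1) * PAGE] = payload

    sled = CoherenceLogic(bytearray(16 * PAGE), apply_on_peer)
    sled.modify(100, b"workload results")
    sled.flush_coherence_data()
    assert peer_memory[100:116] == b"workload results"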

Parallel Processing of Data Having Data Dependencies for Accelerating the Launch and Performance of Operating Systems and Other Computing Applications

Representative embodiments are disclosed for a rapid and highly parallel decompression of compressed executable and other files, such as executable files for operating systems and applications, having compressed blocks including run length encoded (“RLE”) data having data-dependent references. An exemplary embodiment includes a plurality of processors or processor cores to identify a start or end of each compressed block; to partially decompress, in parallel, a selected compressed block into independent data, dependent (RLE) data, and linked dependent (RLE) data; to sequence the independent data, dependent (RLE) data, and linked dependent (RLE) data from a plurality of partial decompressions of a plurality of compressed blocks; to obtain data specified by the dependent (RLE) data and linked dependent (RLE) data; and to insert the obtained data into a corresponding location in an uncompressed file. The representative embodiments are also applicable to other types of data processing for applications having data dependencies.
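
The two-phase structure can be illustrated with a toy token format (an assumption; the patent targets real RLE-compressed executables): a parallel phase emits literal bytes immediately and leaves placeholders for data-dependent back-references, and a short sequential phase fills each placeholder once the preceding output exists.

    # Toy two-phase decompression (token format is an assumption): phase 1 runs
    # per block in parallel and leaves holes for data-dependent references;
    # phase 2 fills each hole once the preceding output is available.
    from concurrent.futures import ThreadPoolExecutor

    def partial_decompress(block):
        """Phase 1: emit literals now; record ('ref', distance, length) holes."""
        out, holes = bytearray(), []
        for tok in block:
            if tok[0] == 'lit':
                out += tok[1]
            else:                            # ('ref', distance, length)
                holes.append((len(out), tok[1], tok[2]))
                out += b'\x00' * tok[2]      # placeholder for dependent data
        return out, holes

    def decompress(blocks):
        """Phase 2: splice partial outputs in order, then resolve every hole."""
        with ThreadPoolExecutor() as pool:
            parts = list(pool.map(partial_decompress, blocks))
        out, all_holes = bytearray(), []
        for part, holes in parts:
            base = len(out)
            all_holes += [(base + pos, dist, n) for pos, dist, n in holes]
            out += part
        for pos, dist, n in all_holes:       # holes are in output order, so
            for i in range(n):               # linked references resolve too;
                out[pos + i] = out[pos - dist + i]   # byte-wise: refs may overlap
        return bytes(out)

    blocks = [[('lit', b'abcabc')], [('ref', 6, 6), ('lit', b'!')]]
    assert decompress(blocks) == b'abcabcabcabc!'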

Technologies for dividing work across accelerator devices

Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute device is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
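
A minimal sketch of configuration-aware division follows; the Accelerator record, its throughput field, and the proportional split are assumptions standing in for the patent's job analysis.

    # Sketch of configuration-aware division (the capability model is assumed).
    from dataclasses import dataclass

    @dataclass
    class Accelerator:
        name: str
        kind: str           # e.g. 'fpga', 'gpu'
        throughput: float   # relative units of work per second

    def divide_job(job_items, accelerators, kind_needed):
        """Split a job into per-accelerator task lists sized by throughput."""
        capable = [a for a in accelerators if a.kind == kind_needed]
        total = sum(a.throughput for a in capable)
        tasks, start = {}, 0
        for i, acc in enumerate(capable):
            end = len(job_items) if i == len(capable) - 1 \
                else start + round(len(job_items) * acc.throughput / total)
            tasks[acc.name] = job_items[start:end]
            start = end
        return tasks

    accs = [Accelerator('fpga0', 'fpga', 1.0), Accelerator('fpga1', 'fpga', 3.0)]
    print(divide_job(list(range(8)), accs, 'fpga'))   # fpga0: 2 items, fpga1: 6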

IN-MEMORY DATA STORAGE WITH TRANSPARENT COMPRESSION
20170220262 · 2017-08-03 ·

A storage-aware memory controller and method for managing a physical storage system. A described controller includes: a system for mapping physical memory space into a memory region and a storage region; a system for applying different error protection schemes, in which a fine-grained memory fault tolerance scheme is applied to data in the memory region and a coarse-grained memory fault tolerance scheme is applied to data in the storage region; and an in-memory storage filesystem that compresses and stores individual pages of data in the storage region, wherein each page of data is compressed into a set of codewords that are codeword aligned such that no codeword shares compressed data from different pages, and wherein the in-memory storage filesystem stores a compression-aware logical block address (CA-LBA) for each page of data.
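
The codeword-alignment constraint can be sketched as follows, with zlib standing in for the controller's compressor and 256 bytes as an assumed codeword payload; the (codeword, count) tuple plays the role of the compression-aware logical block address (CA-LBA).

    # Sketch of codeword-aligned page storage; zlib stands in for the
    # controller's compressor and 256 bytes is an assumed codeword payload.
    import zlib

    CODEWORD = 256

    def store_page(page: bytes, storage: bytearray, next_codeword: int):
        """Compress one page and pad it to a codeword boundary, so that no
        codeword mixes compressed data from two pages; return the CA-LBA."""
        comp = zlib.compress(page)
        n = -(-len(comp) // CODEWORD)               # ceil division
        padded = comp.ljust(n * CODEWORD, b'\x00')  # pad the final codeword
        off = next_codeword * CODEWORD
        storage[off:off + len(padded)] = padded
        return (next_codeword, n)                   # compression-aware address

    def load_page(ca_lba, storage: bytearray) -> bytes:
        cw, n = ca_lba
        blob = bytes(storage[cw * CODEWORD:(cw + n) * CODEWORD])
        return zlib.decompressobj().decompress(blob)  # pad ends in unused_data

    storage = bytearray(1 << 20)
    page = (b'in-memory storage with transparent compression ' * 100)[:4096]
    ca_lba = store_page(page, storage, next_codeword=0)
    assert load_page(ca_lba, storage) == page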

MODIFYING A DATABASE QUERY
20170277747 · 2017-09-28 ·

Techniques for modifying a database query are disclosed. A source and/or time associated with an initial database query for execution on a database are determined. A modification of the initial database query is determined based on the source and/or time. The modification includes adding a filter to the initial database query. The modified database query is executed to return a set of results. Optionally, partitions of the database that are relevant to the modified database query may be selected, and the modified database query may be executed on only the relevant partitions. The datasets included in the results of the modified database query may be more important, relevant, and/or valuable to a user than the datasets that were excluded based on the filter. The datasets included in the results may be retrieved from faster data storage than the excluded datasets.
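
A minimal sketch of source- and time-based query modification follows; the retention policy table and daily partition layout are invented for the example, and a production system would rewrite the parsed query tree rather than append SQL text.

    # Sketch of source/time-driven query modification with partition pruning.
    from datetime import datetime, timedelta

    RETENTION_DAYS = {'dashboard': 7, 'batch_report': 90}   # assumed policy

    def modify_query(sql: str, source: str, now: datetime):
        """Add a time filter chosen from the query's source."""
        cutoff = now - timedelta(days=RETENTION_DAYS.get(source, 30))
        return f"{sql} WHERE event_time >= '{cutoff:%Y-%m-%d}'", cutoff

    def relevant_partitions(partitions, cutoff):
        """Keep only partitions that can hold rows passing the filter."""
        return [p for p in partitions if p['max_time'] >= cutoff]

    now = datetime(2024, 5, 1)
    sql, cutoff = modify_query("SELECT * FROM events", "dashboard", now)
    parts = [{'name': f'p{i}', 'max_time': now - timedelta(days=i)}
             for i in range(10)]
    print(sql)
    print([p['name'] for p in relevant_partitions(parts, cutoff)])  # p0..p7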

ASR-enhanced speech compression
11398239 · 2022-07-26 ·

A process for compressing an audio speech signal utilizes ASR processing to generate a corresponding text representation and, depending on confidence in the corresponding text representation, selectively applies more, less, or no compression to the audio signal. The result is a compressed audio signal, with corresponding text, that is compact and well suited for searching, analytics, or additional ASR processing.
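
One way to picture the selective compression, with zlib levels standing in for a real lossy speech codec and the confidence thresholds invented for the example: high-confidence segments keep trusted text and heavily compressed audio, while low-confidence segments keep raw audio for later ASR passes.

    # Sketch of confidence-driven selective compression; zlib levels stand in
    # for a real lossy speech codec and the thresholds are invented.
    import zlib

    def compress_segment(audio: bytes, text: str, conf: float):
        """Return (text, audio') with compression chosen by ASR confidence."""
        if conf >= 0.9:
            return text, zlib.compress(audio, 9)   # text trusted: compress hard
        if conf >= 0.6:
            return text, zlib.compress(audio, 3)   # partially trusted: lighter
        return text, audio                         # keep raw audio for re-ASR

    segments = [(b'\x00\x01' * 4000, 'hello world', 0.97),
                (b'\x7f\x80' * 4000, '[inaudible]', 0.41)]
    for audio, text, conf in segments:
        t, a = compress_segment(audio, text, conf)
        print(f'{t!r}: {len(audio)} -> {len(a)} bytes (confidence {conf})')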

INFORMATION PROCESSING SYSTEM AND COMPRESSION CONTROL METHOD
20210406769 · 2021-12-30 ·

A dynamic driving plan generator generates a driving plan representing a dynamic partial driving target of a compressor and a decompressor based on input data input to the compressor. The compressor is partially driven according to the driving plan to generate compressed data of the input data. The decompressor is partially driven according to the driving plan to generate reconstructed data of the compressed data. The dynamic driving plan generator has been trained in advance based on evaluation values obtained for the driving plan. Each of the evaluation values corresponds to a respective one of evaluation indexes for the driving plan, and the evaluation values are obtained when at least the compression, out of the compression and the reconstruction, is executed according to the driving plan. The evaluation indexes include the execution time for one or both of the compression and the reconstruction of the data.
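
The plan-selection loop can be sketched as follows; mapping a driving plan to a zlib compression level is a stand-in for partially driving a learned compressor's stages, and the time-budget scoring is an assumption. Execution time appears explicitly as one of the evaluation indexes.

    # Sketch of plan selection; zlib levels stand in for partial driving and
    # the scoring policy is invented.
    import time, zlib

    PLANS = {'light': 1,   # few stages driven: fast, weaker compression
             'full': 9}    # all stages driven: slow, stronger compression

    def evaluate(plan, data):
        t0 = time.perf_counter()
        comp = zlib.compress(data, PLANS[plan])
        elapsed = time.perf_counter() - t0        # execution-time index
        return elapsed, len(comp) / len(data)     # compression-ratio index

    def choose_plan(data, time_budget):
        """Pick the plan with the best ratio among those within budget."""
        best, best_ratio = None, float('inf')
        for plan in PLANS:
            elapsed, ratio = evaluate(plan, data)
            if elapsed <= time_budget and ratio < best_ratio:
                best, best_ratio = plan, ratio
        return best

    data = bytes(range(256)) * 4000
    print(choose_plan(data, time_budget=0.05))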

Technologies for millimeter wave rack interconnects

Racks and rack pods to support a plurality of sleds are disclosed herein. Switches for use in the rack pods are also disclosed herein. A rack comprises a plurality of sleds and a plurality of electromagnetic waveguides. The plurality of sleds are vertically spaced from one another. The plurality of electromagnetic waveguides communicate data signals between the plurality of sleds.

CIRCUITRY AND METHODS FOR LOW-LATENCY PAGE DECOMPRESSION AND COMPRESSION ACCELERATION
20220206975 · 2022-06-30 ·

Systems, methods, and apparatuses for low-latency page decompression and compression acceleration are described. In one embodiment, a system on a chip (SoC) includes a hardware processor core, and an accelerator circuit coupled to the hardware processor core, the accelerator circuit comprising a decompressor circuit and a direct memory access circuit to: in response to a first descriptor sent from the hardware processor core, cause the decompressor circuit to decompress compressed data from the direct memory access circuit into decompressed data and store the decompressed data in a buffer in the accelerator circuit, and in response to a second descriptor sent from the hardware processor core separately from the first descriptor, cause the decompressed data to be written from the buffer to memory external to the accelerator circuit by the direct memory access circuit.
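
A software model of the two-descriptor flow follows; the descriptor fields and the AcceleratorModel class are invented, since the patent describes SoC circuitry rather than an API. The first descriptor decompresses into the accelerator's internal buffer; the second, submitted separately, writes that buffer back to external memory.

    # Software model of the two-descriptor decompression flow (names invented).
    import zlib

    class AcceleratorModel:
        def __init__(self):
            self.buffer = b''                     # internal accelerator buffer

        def submit(self, desc, memory: bytearray):
            if desc['op'] == 'decompress':        # first descriptor
                src = bytes(memory[desc['src']:desc['src'] + desc['len']])
                self.buffer = zlib.decompress(src)
            elif desc['op'] == 'writeback':       # second descriptor
                memory[desc['dst']:desc['dst'] + len(self.buffer)] = self.buffer

    memory = bytearray(64 * 1024)
    comp = zlib.compress(b'page contents ' * 256)
    memory[0:len(comp)] = comp

    acc = AcceleratorModel()
    acc.submit({'op': 'decompress', 'src': 0, 'len': len(comp)}, memory)
    acc.submit({'op': 'writeback', 'dst': 8192}, memory)
    assert memory[8192:8192 + 14 * 256] == b'page contents ' * 256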