G06F3/0655

Method and tensor traversal engine for strided memory access during execution of neural networks

A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
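The claimed registers, counters, and control logic map naturally onto an address-generation loop. A minimal software sketch of one dimension of such a strided transfer (function and parameter names are illustrative, not taken from the claim):

```python
# Hypothetical software model of a one-dimensional strided transfer:
# the control signal supplies an initial source/destination address,
# a source stride length, and a stride count; the loop plays the role
# of the stride counter stepping the source address register.
def strided_transfer(src, dst, src_addr, dst_addr, stride, count):
    """Copy `count` elements from `src`, starting at `src_addr` and
    advancing by `stride`, into `dst` contiguously from `dst_addr`."""
    for i in range(count):
        dst[dst_addr + i] = src[src_addr + i * stride]
    return dst

memory = list(range(16))   # source memory component
scratch = [0] * 4          # destination memory component
strided_transfer(memory, scratch, src_addr=1, dst_addr=0, stride=4, count=4)
# scratch == [1, 5, 9, 13]: every 4th element starting at index 1
```

A second dimension would simply wrap this loop in another, with its own stride length and stride count.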

System and method for converting input from alternate input devices
11714550 · 2023-08-01

An apparatus for delivering alternate user input between an alternate input device and an output device, the output device configured to receive input from a conventional input device, the output device not configured to receive input from the alternate input device, the apparatus including an input interconversion processor that receives the alternate user input from the alternate input device, a processing pipeline that converts the alternate user input to a conventional user input of a type normally received by the output device from the conventional input device, and an output port that transmits the conventional user input.
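The conversion pipeline can be pictured as a lookup from alternate-device events to conventional events. A toy sketch, assuming a hypothetical foot-pedal device whose presses are converted into keyboard events (all event names here are invented for the example):

```python
# Illustrative input interconversion table: events from an alternate
# input device (a foot pedal) are translated into conventional keyboard
# events of the type the output device already accepts.
CONVERSION_TABLE = {
    "pedal_left":  {"type": "key", "code": "PageUp"},
    "pedal_right": {"type": "key", "code": "PageDown"},
}

def convert(alternate_event):
    """Map an alternate-device event to a conventional input event,
    or None if no conversion is defined for it."""
    return CONVERSION_TABLE.get(alternate_event)
```

The converted event would then be transmitted over the output port exactly as if it had come from the conventional input device.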

Memory device including interface circuit for data conversion according to different endian formats

A memory device includes an interface circuit that performs data conversion according to different endian formats, with hardware in a data transfer path inside the memory device, in accordance with a memory bank, a processing element (PE), and an endian format of a host device. The interface circuit is (i) between a memory physical layer interface (PHY) region and a serializer/deserializer (SERDES) region, (ii) between the SERDES region and the memory bank or the PE, (iii) between the SERDES region and a bank group input/output line coupled to a bank group including a number of memory banks, and (iv) between the PE and bank local input/output lines coupled to the memory bank.
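The conversion itself is a byte-order swap on words crossing the transfer path. A minimal model of what such an interface circuit does in hardware, shown for one 32-bit word using Python's `struct` module:

```python
# Re-pack one 32-bit word from little-endian to big-endian, modeling
# the endian conversion an interface circuit performs on the data path.
import struct

def swap_endianness_32(word_le: bytes) -> bytes:
    """Convert one little-endian 32-bit word to big-endian."""
    (value,) = struct.unpack("<I", word_le)   # interpret as little-endian
    return struct.pack(">I", value)           # emit as big-endian

swap_endianness_32(b"\x78\x56\x34\x12")  # -> b"\x12\x34\x56\x78"
```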

Storage system accommodating varying storage capacities
11714715 · 2023-08-01

A plurality of storage nodes in a single chassis is provided. The plurality of storage nodes in the single chassis is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory for user data storage. The plurality of storage nodes is configured to distribute the user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintain the ability to read the user data, using erasure coding, despite a loss of two of the plurality of storage nodes. A plurality of compute nodes is included in the single chassis, each of the plurality of compute nodes is configured to communicate with the plurality of storage nodes. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided.
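The two-node fault tolerance comes from erasure coding rather than full replication. A didactic Reed-Solomon-style sketch over GF(256), with 2 data shards and 2 parity shards so that any 2 of the 4 shards recover the data; this is an illustration of the property, not the storage cluster's actual coding scheme:

```python
# Toy erasure code: 2 data bytes -> 4 shards; any 2 shards suffice.
def gmul(a, b):
    """Multiply in GF(2^8) with the AES polynomial 0x11b."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return p

def ginv(a):
    """Multiplicative inverse by brute force (fine for a sketch)."""
    for x in range(1, 256):
        if gmul(a, x) == 1:
            return x
    raise ZeroDivisionError("no inverse for 0")

# Shard i holds COEFF[i][0]*d0 + COEFF[i][1]*d1 in GF(256);
# rows 0-1 are the data shards themselves, rows 2-3 are parity.
COEFF = [[1, 0], [0, 1], [1, 1], [1, 2]]

def encode(d0, d1):
    return [gmul(r[0], d0) ^ gmul(r[1], d1) for r in COEFF]

def recover(shards):
    """Recover (d0, d1) from any 2 surviving shards, given as a dict
    mapping shard index -> byte value (Cramer's rule over GF(256))."""
    (i, a), (j, b) = sorted(shards.items())
    (c00, c01), (c10, c11) = COEFF[i], COEFF[j]
    inv = ginv(gmul(c00, c11) ^ gmul(c01, c10))
    d0 = gmul(inv, gmul(c11, a) ^ gmul(c01, b))
    d1 = gmul(inv, gmul(c10, a) ^ gmul(c00, b))
    return d0, d1
```

Scattering the four shards across four storage nodes gives exactly the stated property: the data remains readable despite the loss of any two nodes.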

Complex system and data transfer method

In a complex system including: one or more storage systems, each including a cache and a storage controller; and one or more storage boxes, each including a storage medium, the storage box generates redundant data from write data received from a server and writes the write data and the redundant data to the storage medium. The storage box transmits the write data to the storage system when it is difficult to generate the redundant data or to write the write data and the redundant data to the storage medium. The storage system stores the received write data in the cache.
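The write path and its fallback can be sketched in a few lines. Here the redundant data is simple XOR parity, and class and method names are invented for illustration:

```python
# Sketch of the data path: the storage box tries to generate redundant
# data (XOR parity here) and persist both; on failure it forwards the
# write to the storage system, which stores it in its cache.
class StorageBox:
    def __init__(self, medium_ok=True):
        self.medium = {}          # the storage medium
        self.medium_ok = medium_ok

    def handle_write(self, key, blocks, storage_system_cache):
        if self.medium_ok:
            parity = 0
            for b in blocks:
                parity ^= b       # redundant data from the write data
            self.medium[key] = (blocks, parity)
        else:
            # Difficult to generate or write redundancy: hand off to
            # the storage system, which caches the write data.
            storage_system_cache[key] = blocks
```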

APPARATUS AND METHOD TO SHARE HOST SYSTEM RAM WITH MASS STORAGE MEMORY RAM

A method includes, in one non-limiting embodiment, sending a request from a mass memory storage device to a host device, the request being one to allocate memory in the host device; writing data from the mass memory storage device to allocated memory of the host device; and subsequently reading the data from the allocated memory to the mass memory storage device. The memory may be embodied as flash memory, and the data may be related to a file system stored in the flash memory. The method enables the mass memory storage device to extend its internal volatile RAM to include RAM of the host device, enabling the internal RAM to be powered off while preserving data and context stored in the internal RAM.
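The request/write/read sequence can be modeled as parking the device's volatile RAM contents in a host-allocated buffer and restoring them later. A sketch under invented names (the `Host`/`MassStorageDevice` classes and their methods are hypothetical):

```python
# Model of the claimed sequence: request allocation from the host,
# write internal-RAM contents there, power off internal RAM, then
# read the contents back to restore data and context.
class Host:
    def __init__(self):
        self.allocations = {}

    def allocate(self, size):
        handle = len(self.allocations)
        self.allocations[handle] = bytearray(size)
        return handle

class MassStorageDevice:
    def __init__(self):
        self.internal_ram = bytearray(b"file-system context")

    def park_ram(self, host):
        handle = host.allocate(len(self.internal_ram))    # request allocation
        host.allocations[handle][:] = self.internal_ram   # write to host RAM
        self.internal_ram = bytearray(len(self.internal_ram))  # power off
        return handle

    def restore_ram(self, host, handle):
        self.internal_ram[:] = host.allocations[handle]   # read back
```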

METHODS AND DEVICES FOR FILE READ LATENCY REDUCTION
20230024420 · 2023-01-26

Methods and devices are provided in which a controller of a storage device receives a read command including at least a file pointer of a file, from an application at a host device. The controller retrieves a physical block address (PBA) list associated with file data from a table maintained at the controller using the file pointer. The controller reads data from a memory using the PBA list, and provides the file data to the application at the host device.
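The latency saving comes from the controller resolving the file pointer to physical block addresses itself. A minimal sketch of that read path (the class, the table layout, and the flat `flash` list are illustrative only):

```python
# The controller keeps a table from file pointer to a physical block
# address (PBA) list and serves reads directly from it.
class Controller:
    def __init__(self, flash, pba_table):
        self.flash = flash          # block-addressable memory
        self.pba_table = pba_table  # file pointer -> list of PBAs

    def read_file(self, file_pointer):
        pbas = self.pba_table[file_pointer]   # retrieve the PBA list
        return b"".join(self.flash[pba] for pba in pbas)

flash = [b"xx", b"he", b"ll", b"o!"]
ctrl = Controller(flash, {0x10: [1, 2, 3]})
ctrl.read_file(0x10)  # -> b"hello!"
```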

DATA DEDUPLICATION LATENCY REDUCTION

Aspects of the present disclosure relate to reducing the latency of data deduplication. In embodiments, an input/output (IO) workload received by a storage array is monitored. Further, at least one IO write operation in the IO workload is identified. A space-efficient probabilistic data structure is used to determine if a director board is associated with the IO write. Additionally, the IO write operation is processed based on the determination.
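One common space-efficient probabilistic data structure is a Bloom filter, which can answer "has this board seen this write target?" with no false negatives and a tunable false-positive rate. A minimal sketch, with key format invented for the example (this is an illustration, not the array's actual structure):

```python
# Minimal Bloom filter: membership test with possible false positives,
# no false negatives. Used here to test whether a director board has
# been associated with an IO write's target.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

board_writes = BloomFilter()
board_writes.add("lun7:block42")
board_writes.might_contain("lun7:block42")  # True
```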

Read threshold management and calibration

A system and method for read threshold calibration in a non-volatile memory are provided. Physical dies in the memory are divided into groups based on device-level parameters such as time and temperature parameters. An outlier die may be identified outside of the plurality of groups based on a comparison of a bit error rate (BER) indicator for each die to a threshold. For each group of dies, a read parameter is determined for at least one die, and applied to each of the plurality of dies of the group. The read parameter may be determined based on a threshold measurement of a representative one or more word lines.
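The calibration flow described above can be sketched as: flag outliers by BER, group the rest by device-level parameters, calibrate one representative die per group, and apply its read parameter group-wide. Field names and data layout below are invented for illustration:

```python
# Group dies, flag BER outliers, and propagate one representative
# die's measured read level to every die in its group.
from collections import defaultdict

def calibrate(dies, ber_threshold):
    """dies: list of dicts with 'id', 'temp_bin', 'ber', and
    'measured_read_level'. Returns (read level per die id, outliers)."""
    outliers = [d["id"] for d in dies if d["ber"] > ber_threshold]
    groups = defaultdict(list)
    for d in dies:
        if d["id"] not in outliers:
            groups[d["temp_bin"]].append(d)   # group by device parameter
    levels = {}
    for members in groups.values():
        representative = members[0]           # calibrate one die...
        for d in members:                     # ...apply to the group
            levels[d["id"]] = representative["measured_read_level"]
    return levels, outliers
```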

Methods and Systems for Improved K-mer Storage and Retrieval

Systems and methods of storing and retrieving K-mer data in a data structure are provided. In certain embodiments, a prefix of the K-mer is stored as an integer value that defines the address of a slot in the data structure. In many embodiments, each slot in the data structure stores the remaining portion of the K-mer that is not part of the prefix. Additional embodiments are directed to genetic or genomic analysis using a data structure for storing K-mer data.
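The scheme above can be sketched with 2-bit base packing: the first P bases of the K-mer form an integer that addresses a slot, and the slot stores only the packed suffix. Encoding details are illustrative, not the patent's exact layout:

```python
# Prefix-addressed K-mer storage: prefix -> slot address, slot -> suffix.
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    """Pack a base string into an integer, 2 bits per base."""
    value = 0
    for b in seq:
        value = (value << 2) | BASE[b]
    return value

def store_kmer(table, kmer, prefix_len):
    """table: list of slots, sized 4**prefix_len."""
    address = pack(kmer[:prefix_len])         # prefix -> slot address
    table[address] = pack(kmer[prefix_len:])  # slot holds the suffix only
    return address

table = [None] * (4 ** 2)
addr = store_kmer(table, "GATC", prefix_len=2)
# prefix "GA" -> address 8; slot 8 holds packed "TC" == 13
```

Lookup reverses the process: pack the query's prefix to find the slot, then compare the stored suffix, so the prefix itself never needs to be stored.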