Patent classifications
G06F13/1621
METHOD FOR PERFORMING DATA TRANSMISSION CONTROL OF INTER FIELD PROGRAMMABLE GATE ARRAYS AND ASSOCIATED APPARATUS
A method for data transmission control between field programmable gate arrays (FPGAs) and an associated apparatus are provided. The method includes: utilizing a first register device to latch a set of data from a first FPGA according to a first clock, wherein the set of data is arranged and divided into multiple sets of partial data according to attributes of payloads and pointers; utilizing a time-division multiplexing (TDM) interface to transmit the multiple sets of partial data from the first register device to a second register device according to a TDM clock at multiple time points, respectively; and utilizing the second register device to sequentially receive the multiple sets of partial data, in order to output the set of data to a second FPGA, wherein the second FPGA operates according to a second clock different from the first clock.
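The split/reassemble mechanism above can be sketched in software. This is a minimal model, not the patented circuit: the field widths standing in for "payload" and "pointer" attributes are hypothetical, and the TDM time slots are represented by list positions.

```python
def split_into_partials(data: int, widths: list[int]) -> list[int]:
    """First register device: divide a latched data word into multiple
    sets of partial data, LSB-first, one per TDM time slot."""
    partials = []
    for w in widths:
        partials.append(data & ((1 << w) - 1))
        data >>= w
    return partials

def reassemble(partials: list[int], widths: list[int]) -> int:
    """Second register device: sequentially receive the partial sets
    and rebuild the original data word for the second FPGA."""
    data, shift = 0, 0
    for p, w in zip(partials, widths):
        data |= p << shift
        shift += w
    return data

# One TDM frame: 16-bit payload plus two 8-bit pointer fields (assumed widths).
word = 0xDEADBEEF
widths = [16, 8, 8]
slots = split_into_partials(word, widths)   # one entry per time slot
assert reassemble(slots, widths) == word
```

Because the two FPGAs run on different clocks, the register devices on each side act as the synchronization boundary; the TDM clock only has to be agreed between the two register devices.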
SYSTEMS AND METHODS FOR PROCESSING A SUBMISSION QUEUE
A data storage device includes a memory and a controller coupled to the memory. The controller is configured to select a submission queue from a set of submission queues of an access device based at least in part on availability of space in a completion queue of the access device.
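A minimal sketch of the selection policy described above, assuming a simple pairing of submission and completion queues; the "most headroom" tie-break is an assumption, since the abstract only requires that completion-queue space be considered.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class QueuePair:
    sq: deque = field(default_factory=deque)  # submission queue entries
    cq_depth: int = 4                         # completion queue capacity
    cq_used: int = 0                          # completion entries pending

def select_submission_queue(pairs):
    """Pick a non-empty submission queue whose paired completion queue
    still has space; prefer the pair with the most headroom (assumed)."""
    candidates = [p for p in pairs if p.sq and p.cq_used < p.cq_depth]
    return max(candidates, key=lambda p: p.cq_depth - p.cq_used, default=None)
```

Skipping queues whose completion queue is full prevents the controller from finishing a command it has nowhere to post a completion for.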
Configurable data processing system based on a hybrid data and control driven processing architecture
A data processing system comprising a plurality of data inputs and data outputs for receiving input data and providing processed data to a data output. The system comprises a plurality of data processing hardware units, each configured to process data within a predetermined latency and according to a data processing task of a predetermined type. The system further comprises a memory for storing the predetermined latency of each data processing hardware unit, and a controller configured to determine the type of a data processing task to be executed as a function of the source of the data to be processed or of the destination of the processed data, and further configured to select one data processing hardware unit as a function of the determined type of the task and of the latency constraints associated with it.
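The selection step can be sketched as a lookup over the stored latencies. This is a schematic model only: the unit names and latency units are hypothetical, and "meets the constraint with the lowest latency" is an assumed tie-break.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HardwareUnit:
    name: str
    task_type: str   # predetermined type of task the unit implements
    latency: int     # predetermined latency, stored in system memory

def select_unit(units, task_type, latency_budget):
    """Pick a hardware unit matching the determined task type that
    satisfies the latency constraint for this task."""
    eligible = [u for u in units
                if u.task_type == task_type and u.latency <= latency_budget]
    return min(eligible, key=lambda u: u.latency, default=None)
```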
Data transmission system capable of performing a union task with a plurality of channel control modules
A data transmission system includes a first memory, a second memory, a third memory, and a memory controller. The memory controller includes a first channel control module and a second channel control module. The first channel control module is coupled to the first memory and the second memory. The first channel control module transmits a first set of data between the first memory and the second memory, and transmits a switch signal after the first set of data is transmitted. The second channel control module is coupled to the first channel control module, the first memory, and the third memory. The second channel control module transmits a second set of data between the first memory and the third memory after receiving the switch signal.
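The switch-signal handoff can be sketched with a queue standing in for the signal line between the two channel control modules. This is a behavioral model under that assumption, not the hardware design.

```python
from queue import Queue

def first_module(src, dst, switch_q):
    """First channel control module: move the first set of data from the
    first memory to the second, then emit the switch signal."""
    dst.extend(src)
    switch_q.put("switch")          # switch signal after transfer completes

def second_module(src, dst, switch_q):
    """Second channel control module: act only after the switch signal,
    moving the second set of data from the first memory to the third."""
    switch_q.get()                  # blocks until the switch signal arrives
    dst.extend(src)

mem1, mem2, mem3 = [10, 20, 30], [], []
signal = Queue()
first_module(mem1, mem2, signal)
second_module(mem1, mem3, signal)   # would block if run before the signal
```

Serializing the two transfers through the switch signal lets both modules share the first memory without arbitration conflicts.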
METHOD AND APPARATUS FOR PERFORMING ACCESS MANAGEMENT OF MEMORY DEVICE WITH AID OF UNIVERSAL ASYNCHRONOUS RECEIVER-TRANSMITTER CONNECTION
A method for performing access management of a memory device with aid of a Universal Asynchronous Receiver-Transmitter (UART) connection and associated apparatus are provided. The method may include: utilizing a UART of a memory controller within the memory device to receive a set of intermediate commands corresponding to a set of operating commands through the UART connection between the memory device and a host device, wherein before sending the set of intermediate commands to the controller through the UART connection, the host device converts the set of operating commands into the set of intermediate commands; converting the set of intermediate commands into the set of operating commands according to a command mapping table; and accessing a non-volatile (NV) memory within the memory device with the set of operating commands for the host device, and sending a response to the host device through the UART connection.
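The command mapping table at the heart of the method can be sketched as a simple dictionary. The byte codes and command names below are invented for illustration; the patent does not specify the table's contents.

```python
# Hypothetical command mapping table: intermediate UART byte codes
# to operating commands (names and codes are assumptions).
COMMAND_MAP = {0x01: "READ_PAGE", 0x02: "WRITE_PAGE", 0x03: "ERASE_BLOCK"}

def encode_operating(cmds):
    """Host side: convert operating commands into intermediate commands
    before sending them over the UART connection."""
    reverse = {v: k for k, v in COMMAND_MAP.items()}
    return bytes(reverse[c] for c in cmds)

def decode_intermediate(frame: bytes):
    """Memory controller side: convert received intermediate commands
    back into operating commands per the command mapping table."""
    try:
        return [COMMAND_MAP[b] for b in frame]
    except KeyError as exc:
        raise ValueError(f"unknown intermediate command {exc}") from None
```

The round trip through the table lets a byte-oriented UART link carry a richer command set than the raw wire format suggests.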
Computing system for reducing latency between serially connected electronic devices
A computing system includes a host, a first electronic device connected to the host, and a second electronic device that communicates with the host through the first electronic device. The first electronic device requests a command written in a submission queue of the host based on a doorbell transmitted from the host, stores the command transmitted from the host, requests write data stored in a data buffer of the host, and stores the write data of the data buffer transmitted from the host.
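The prefetching behavior of the first device can be sketched as follows. The class names, the command dictionary shape, and the `data_key` field are all hypothetical; the point is only that the bridge pulls both the command and its write data on the doorbell, before the second device asks.

```python
class Host:
    def __init__(self):
        self.submission_queue = []   # commands written by host software
        self.data_buffer = {}        # write data, keyed per command

    def ring_doorbell(self, device):
        device.on_doorbell(self)

class BridgeDevice:
    """First electronic device: on the doorbell, it fetches the queued
    command and the associated write data so the second device behind it
    can be served from local storage, reducing round-trip latency."""
    def __init__(self):
        self.cached_command = None
        self.cached_data = None

    def on_doorbell(self, host):
        self.cached_command = host.submission_queue.pop(0)
        key = self.cached_command["data_key"]          # hypothetical field
        self.cached_data = host.data_buffer[key]
```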
MEMORY SYSTEM AND OPERATING METHOD THEREOF
Embodiments of the disclosure relate to a memory system and an operating method thereof. The memory system is configured to select, from among a plurality of memory blocks, one or more target memory blocks operable to store user data written at the request of a host, and to determine whether to control the execution time of a command received from the host based on the valid page counts of the respective target memory blocks.
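A minimal sketch of the two decisions named above. The "fewest valid pages first" ordering and the threshold test are assumed heuristics; the abstract only says the decision is based on valid page counts.

```python
def choose_target_blocks(blocks, count=1):
    """Pick target memory blocks for incoming user data; here, blocks
    with the fewest valid pages first (an assumed heuristic)."""
    return sorted(blocks, key=lambda b: b["valid_pages"])[:count]

def should_defer_command(target_blocks, threshold):
    """Decide whether to control (delay) the command's execution time,
    based on the valid page counts of the selected target blocks."""
    return sum(b["valid_pages"] for b in target_blocks) > threshold
```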
Response-based interconnect control
Described apparatuses and methods enable a receiver of requests, such as a memory device, to control the arrival of future requests using a credit-based communication protocol. A transmitter of requests can be authorized to transmit a request across an interconnect responsive to possession of a credit. If the transmitter exhausts its credits, the transmitter waits until a credit is returned before transmitting another request. The receiver can manage credit returns based on how many responses are present in a response queue. The receiver can change a rate at which the credit returns are transmitted by changing a size of an interval of responses that are being transmitted, with one credit being returned per interval. This can slow the rate of credit returns while the response queue is relatively more filled. The rate adjustment can decrease latency by reducing an amount of requests or responses that are pooling in backend components.
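The interval-based credit-return scheme can be sketched directly. This is a behavioral model: the way the interval grows with queue occupancy (`base + len // 4`) is an invented example of the "change the interval size" idea, not a formula from the source.

```python
class CreditReceiver:
    """Receiver side of a credit-based protocol: one credit is returned
    per interval of transmitted responses, and the interval widens as
    the response queue fills, slowing credit returns under load."""
    def __init__(self, base_interval=1):
        self.response_queue = []
        self.base_interval = base_interval
        self.sent_since_credit = 0
        self.credits_returned = 0

    def enqueue_response(self, resp):
        self.response_queue.append(resp)

    def current_interval(self):
        # Wider interval when the queue is fuller (assumed growth rule).
        return self.base_interval + len(self.response_queue) // 4

    def transmit_response(self):
        """Send one response; return (response, credit_returned)."""
        resp = self.response_queue.pop(0)
        self.sent_since_credit += 1
        credit = self.sent_since_credit >= self.current_interval()
        if credit:
            self.sent_since_credit = 0
            self.credits_returned += 1
        return resp, credit
```

When the queue is nearly empty every response carries a credit back, so the transmitter runs at full rate; as responses pool, credits thin out and arrivals slow before backend buffers overflow.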
MEMORY DEVICE FOR AN ARTIFICIAL NEURAL NETWORK
A memory device for an artificial neural network (ANN) includes at least one memory cell array of N columns and M rows; and a memory controller configured to sequentially perform a read or write operation of data of the at least one memory cell array in a burst mode based on predetermined sequential access information. Each of the at least one memory cell array may include a plurality of dynamic memory cells having a leakage current characteristic. The memory device may further include a processor configured to provide the memory controller with the ANN data locality information or information for identifying an input feature map, a kernel, and an output feature map. The memory controller can prepare data of an ANN model processed at a processor-memory level before being requested by the processor, thus enabling a substantial reduction in the delay of memory data being supplied to the processor.
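The burst-mode scheduling driven by predetermined sequential access information can be sketched as grouping a known address order into row-local runs. The row size and the grouping rule are assumptions for illustration.

```python
def burst_schedule(access_order, row_size):
    """Group a predetermined (non-empty) sequential access order into
    bursts: consecutive addresses within the same row are served by one
    burst read/write, avoiding per-access row activation."""
    bursts, current = [], [access_order[0]]
    for addr in access_order[1:]:
        same_row = addr // row_size == current[-1] // row_size
        if same_row and addr == current[-1] + 1:
            current.append(addr)
        else:
            bursts.append(current)
            current = [addr]
    bursts.append(current)
    return bursts
```

Because the ANN's access pattern is known ahead of time, the controller can form these bursts, and refresh leaky dynamic cells, before the processor issues the request.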
MEMORY CONTROLLER, PROCESSOR AND SYSTEM FOR ARTIFICIAL NEURAL NETWORK
A system for an artificial neural network (ANN) includes a processor configured to output a memory control signal including an ANN data locality; a main memory in which data of an ANN model corresponding to the ANN data locality is stored; and a memory controller configured to receive the memory control signal from the processor and to control the main memory based on the memory control signal. The memory controller may be further configured to control, based on the memory control signal, a read or write operation of data of the main memory required for operation of the artificial neural network. Thus, the system optimizes an ANN operation of the processor by utilizing the ANN data locality of the ANN model, which operates at a processor-memory level.
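The locality-driven control described above can be sketched as a controller that walks the model's known access order and stages each tensor before the processor asks for it. The tensor names and the dictionary-backed "main memory" are illustrative stand-ins.

```python
class PrefetchingController:
    """Memory controller that uses the ANN data locality (the model's
    known access order) to read the next tensor ahead of the request."""
    def __init__(self, main_memory, locality):
        self.memory = main_memory     # name -> tensor data
        self.locality = locality      # access order derived from the model
        self.cursor = 0
        self.prefetched = {}

    def tick(self):
        """Run between processor requests: stage the next tensor early."""
        if self.cursor < len(self.locality):
            name = self.locality[self.cursor]
            self.prefetched[name] = self.memory[name]
            self.cursor += 1

    def read(self, name):
        if name in self.prefetched:       # served from the staging buffer
            return self.prefetched.pop(name)
        return self.memory[name]          # fallback: demand read
```

When the processor's requests follow the locality order, every read hits the staging buffer, which is the latency reduction the abstract claims.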