Patent classifications
G06F13/1631
Information processing device and control method issuing commands based on latency information
An information processing device includes: a memory; and a processor coupled to the memory and configured to: receive an access request directed to an access target and set the access request in any one of a plurality of pending entries, each of which includes latency information; issue a command that corresponds to the access request; control issuance of the command on the basis of the latency information of that pending entry; set a value that indicates latency for the access request in the latency information of that pending entry; and subtract a predetermined value from the latency information of that pending entry for each unit of time.
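The countdown mechanism above can be sketched in a few lines of Python. This is a minimal model, not the patent's implementation; the class names, a decrement of 1 per tick, and the issue-at-zero rule are assumptions for illustration.

```python
# Hedged sketch: pending entries carry a latency value that is
# decremented each unit of time; a command issues when it reaches 0.
# Names and tick granularity are illustrative assumptions.

class PendingEntry:
    def __init__(self, latency):
        self.latency = latency  # value indicating latency for the request

class Controller:
    def __init__(self):
        self.pending = []

    def accept(self, latency):
        """Set an access request in a pending entry with its latency value."""
        self.pending.append(PendingEntry(latency))

    def tick(self):
        """Per unit of time: subtract a fixed value (here 1) from each
        entry's latency and issue commands whose latency reached zero."""
        issued = []
        for e in self.pending:
            e.latency -= 1
            if e.latency <= 0:
                issued.append(e)
        self.pending = [e for e in self.pending if e.latency > 0]
        return issued

ctl = Controller()
ctl.accept(2)
ctl.accept(1)
print(len(ctl.tick()))  # 1: only the latency-1 entry issues this tick
```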
Sequential SLC read optimization
Systems and methods for read operations and management are disclosed. More specifically, this disclosure is directed to receiving a first read command directed to a first logical address and receiving, after the first read command, a second read command directed to a second logical address. The method also includes receiving, after the second read command, a third read command directed to a third logical address and determining that the first logical address and the third logical address correspond to a first physical address and a third physical address, respectively. The first physical address and the third physical address can be associated with a first word line of a memory component while the second logical address corresponds to a second physical address associated with a second word line of the memory component. The method includes executing the first read command and the third read command sequentially.
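The reordering above can be sketched as a simple scheduler that groups reads mapping to the same word line so they execute back-to-back. The logical-to-physical table and function names are assumptions for illustration, not the patent's structures.

```python
# Hedged sketch: reads whose logical addresses resolve to the same
# word line are executed sequentially. The L2P table is illustrative.

l2p = {0x10: (0, 0), 0x20: (1, 4), 0x30: (0, 1)}  # logical -> (word_line, offset)

def schedule(reads):
    """Return an execution order with same-word-line reads adjacent."""
    by_wl = {}
    wl_order = []
    for la in reads:
        wl, _ = l2p[la]
        if wl not in by_wl:
            by_wl[wl] = []
            wl_order.append(wl)
        by_wl[wl].append(la)
    return [la for wl in wl_order for la in by_wl[wl]]

# First and third reads share word line 0, so they run sequentially:
print(schedule([0x10, 0x20, 0x30]))  # [16, 48, 32]
```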
Remote memory access using memory address templates
A method comprises receiving a message comprising an identifier for an address template, using the identifier to select the address template from a set of address templates, determining a set of memory addresses for a corresponding set of memory operations using the address template, and executing the memory operations.
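A minimal sketch of template-based address generation follows. The abstract does not define a template encoding, so the base/stride/count form and the identifier values here are assumptions.

```python
# Hedged sketch: a message carries a template identifier; the template
# is selected from a set and expanded into a set of memory addresses.
# The (base, stride, count) encoding is an assumed example.

templates = {
    7: {"base": 0x1000, "stride": 64, "count": 4},
}

def expand(template_id):
    """Determine the set of memory addresses for the selected template."""
    t = templates[template_id]
    return [t["base"] + i * t["stride"] for i in range(t["count"])]

print(expand(7))  # [4096, 4160, 4224, 4288]
```

Each resulting address would then drive one memory operation of the corresponding set.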
Method and apparatus for recovering regular access performance in fine-grained DRAM
A fine-grained dynamic random-access memory (DRAM) includes a first memory bank, a second memory bank, and a dual mode I/O circuit. The first memory bank includes a memory array divided into a plurality of grains, each grain including a row buffer and input/output (I/O) circuitry. The dual-mode I/O circuit is coupled to the I/O circuitry of each grain in the first memory bank, and operates in a first mode in which commands having a first data width are routed to and fulfilled individually at each grain, and a second mode in which commands having a second data width different from the first data width are fulfilled by at least two of the grains in parallel.
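The dual-mode routing can be modeled in software as follows. The grain count, the 16-bit narrow width, and the lane-selection rule are assumptions; the patent describes hardware, and this is only an illustrative analogue.

```python
# Hedged sketch: in the first mode a narrow command is fulfilled by one
# grain individually; in the second mode a wider command is fulfilled by
# several grains in parallel. Widths and grain count are assumptions.

NUM_GRAINS = 4
NARROW_WIDTH = 16  # bits per grain, assumed

def route(command_width, grain):
    """Return the list of grains that fulfill the command."""
    if command_width == NARROW_WIDTH:
        return [grain]                        # first mode: one grain
    lanes = command_width // NARROW_WIDTH     # second mode: split across grains
    return list(range(min(lanes, NUM_GRAINS)))

print(route(16, 2))  # [2]
print(route(64, 0))  # [0, 1, 2, 3]
```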
Sorting memory address requests for parallel memory access using input address match masks
Apparatus identifies a set of M output memory addresses from a larger set of N input memory addresses containing at least one non-unique memory address. A comparator block performs comparisons of memory addresses from the set of N input memory addresses to generate a binary classification dataset that identifies a subset of addresses from the set of input addresses, where each address in the subset identified by the binary classification dataset is unique within that subset. Each combination logic unit receives a predetermined selection of bits of the binary classification dataset and sorts those bits into an intermediary binary string in which the bits are ordered into a first group identifying addresses belonging to the identified subset and a second group identifying addresses not belonging to it. Output generating logic selects between bits belonging to different intermediary binary strings to generate a binary output identifying a set of output memory addresses containing at least one address in the identified subset.
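A software analogue of the comparator-and-sort pipeline is sketched below. The hardware operates on bit selections in parallel; this sequential version, with its first-occurrence rule, is an assumption chosen to make the classification concrete.

```python
# Hedged sketch: build a binary classification marking a unique subset
# of the input addresses, then order addresses so the identified
# subset comes first. A first-occurrence rule is assumed.

def unique_mask(addrs):
    """1 marks the first occurrence of an address, 0 marks a repeat."""
    seen, mask = set(), []
    for a in addrs:
        mask.append(1 if a not in seen else 0)
        seen.add(a)
    return mask

def sort_by_mask(addrs, mask):
    """Group addresses: identified-unique subset first, repeats after."""
    unique = [a for a, m in zip(addrs, mask) if m]
    dupes = [a for a, m in zip(addrs, mask) if not m]
    return unique + dupes

addrs = [5, 3, 5, 9]
m = unique_mask(addrs)
print(m)                       # [1, 1, 0, 1]
print(sort_by_mask(addrs, m))  # [5, 3, 9, 5]
```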
Apparatus and method for operating multiple FPGAs in wireless communication system
The disclosure relates to a 5th Generation (5G) or pre-5G communication system for supporting a higher data rate than a 4th Generation (4G) communication system such as Long Term Evolution (LTE). According to various embodiments of the disclosure, an apparatus of a base station in a wireless communication system is provided. The apparatus includes: a master Field Programmable Gate Array (FPGA); a plurality of slave FPGAs controlled by the master FPGA; and an address masker coupled to the master FPGA and the plurality of slave FPGAs, wherein the address masker is configured to: receive different address bits assigned respectively to the plurality of slave FPGAs by the master FPGA; for the different address bits, mask bit values at a specific position with the same value; and transmit masked address bits corresponding respectively to the plurality of slave FPGAs.
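The address masker's core operation, forcing the bit at one position to the same value across every slave's assigned address bits, can be sketched as follows. The bit position, the chosen value, and the example addresses are assumptions for illustration.

```python
# Hedged sketch: mask the bit at one specific position to the same
# value for each slave FPGA's assigned address bits.

def mask_bit(addr, position, value):
    """Set the bit at `position` to `value` (0 or 1)."""
    if value:
        return addr | (1 << position)
    return addr & ~(1 << position)

slave_addrs = [0b1010, 0b0110, 0b0011]           # assumed per-slave bits
masked = [mask_bit(a, 2, 0) for a in slave_addrs]  # clear bit 2 everywhere
print([bin(a) for a in masked])  # ['0b1010', '0b10', '0b11']
```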
Systems and methods for load balancing memory traffic
An integrated circuit includes first and second memory controller circuits and a load balancing multiplexer circuit that redirects a first read operation from the first memory controller circuit to the second memory controller circuit in response to receiving an indication that the second memory controller circuit has available memory bandwidth. A circuit system includes first and second memory devices, first and second memory controller circuits, and a load balancing multiplexer circuit that sends a write operation to the first memory controller circuit to store data in the first memory device and to the second memory controller circuit to store the data in the second memory device, while performing a number of memory traffic shaping operations.
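A toy model of the load-balancing multiplexer is sketched below. The bandwidth-indication signal, the controller interface, and the logging are assumptions; the circuit's actual traffic-shaping operations are not modeled.

```python
# Hedged sketch: redirect a read to the second controller when it
# signals available bandwidth; mirror a write to both controllers so
# both memory devices hold the data. Interface names are assumptions.

class LoadBalancingMux:
    def __init__(self):
        self.log = []

    def read(self, ctrl0_busy, ctrl1_has_bandwidth):
        """Return the controller index that services the read."""
        target = 1 if (ctrl0_busy and ctrl1_has_bandwidth) else 0
        self.log.append(("read", target))
        return target

    def write(self, data):
        """Send the write to both controllers (one per memory device)."""
        self.log.append(("write", 0, data))
        self.log.append(("write", 1, data))

mux = LoadBalancingMux()
print(mux.read(ctrl0_busy=True, ctrl1_has_bandwidth=True))  # 1
mux.write(b"\xab")
print(len(mux.log))  # 3
```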
Scheduling memory requests for a ganged memory device
Systems, apparatuses, and methods for performing efficient memory accesses for a computing system are disclosed. A computing system includes one or more clients for processing applications. A memory controller transfers traffic between the memory controller and two channels, each connected to a memory device. A client sends a 64-byte memory request with an indication specifying that there are two 32-byte requests targeting non-contiguous data within a same page. The memory controller generates two addresses, and sends a single command and the two addresses to two channels to simultaneously access non-contiguous data in a same page.
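The two-address generation can be sketched as below. The page size and the way the second 32-byte offset is encoded are assumptions; the abstract only states that both accesses land in the same page.

```python
# Hedged sketch: from one 64-byte request, generate two 32-byte
# addresses targeting non-contiguous data within the same page, one
# per channel. Page size and offset encoding are assumptions.

PAGE = 4096

def split_request(base_addr, second_offset):
    """Return the two addresses sent (with a single command) to the
    two channels for simultaneous access."""
    a0 = base_addr
    a1 = (base_addr & ~(PAGE - 1)) | second_offset
    assert a0 // PAGE == a1 // PAGE, "both accesses must fall in one page"
    return a0, a1

print(split_request(0x1040, 0x200))  # (4160, 4608)
```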
Redundancy resource comparator for a bus architecture, bus architecture for a memory device implementing an improved comparison method and corresponding comparison method
Disclosed herein is a redundancy resource comparator for a bus architecture of a memory device for comparing an address signal being received from an address signal bus and a redundancy address being stored in a latch of the memory device. Disclosed is also a corresponding bus architecture and comparison method.
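The comparison at the heart of this entry can be sketched simply: an incoming address from the bus is checked against a redundancy address held in a latch, and a match redirects the access to the spare resource. The enable flag and the exact-equality rule are assumptions for illustration.

```python
# Hedged sketch: compare the incoming address signal with the latched
# redundancy address; a match means the spare resource services the
# access. The enable flag is an assumed detail.

def redundancy_match(incoming_addr, latched_addr, enabled=True):
    """Return True when the redundant (spare) resource should be used."""
    return enabled and incoming_addr == latched_addr

print(redundancy_match(0x3F, 0x3F))  # True: redirect to the spare row
print(redundancy_match(0x3F, 0x40))  # False: normal access proceeds
```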