G06F13/1615

SYSTEM-ON-CHIP DRIVEN BY CLOCK SIGNALS HAVING DIFFERENT FREQUENCIES
20250103521 · 2025-03-27

A system-on-chip includes plural components configured to perform separate functions, separate calculations, or separate operations, and a bus interface configured to support data communication between the plural components according to a point-to-point interconnect protocol. At least one component of the plural components is operatively engaged with a memory device. The at least one component includes: plural memory interfaces configured to access the memory device in an n-way interleaved manner, where n is a positive integer equal to or greater than 2; and at least one slave intellectual property (IP) core configured to distribute and transmit, to the plural memory interfaces, plural commands input through the bus interface.
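The distribution step above can be sketched as follows. The address-modulo selection rule, the `addr` field, and the `distribute` helper are illustrative assumptions, not details taken from the abstract.

```python
def distribute(commands, n):
    """Distribute commands across n memory interfaces.

    Assumption: the slave IP core selects an interface from the low-order
    bits of each command's address (address mod n), a common interleave rule.
    """
    interfaces = [[] for _ in range(n)]
    for cmd in commands:
        interfaces[cmd["addr"] % n].append(cmd)
    return interfaces
```

With eight sequential addresses and n = 4, each interface receives every fourth command, so consecutive accesses proceed in parallel across the interfaces.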

Method of managing requests for access to memories and data storage system
09583158 · 2017-02-28

The method includes, at a first clock cycle: obtaining (202) new requests by the processing stage; supplying (210), by the processing stage, at least one of the new requests; and placing on standby (212), by the processing stage, at least one further new request, hereinafter referred to as a standby request. The method further includes, at a second clock cycle following the first clock cycle: obtaining (202) at least one new request by the processing stage; selecting (208), by the processing stage, at least one request from among the standby request(s) and the new request(s); and supplying (210) the selected request(s) by the processing stage.
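The per-cycle behavior can be sketched as a small function; the `process_cycle` name, the `capacity` limit, and the standby-first selection order are assumptions for illustration.

```python
def process_cycle(new_requests, standby, capacity=1):
    """One clock cycle of the processing stage (illustrative sketch).

    Up to `capacity` requests are supplied, earlier standby requests being
    selected ahead of new ones; excess new requests go on standby.
    """
    candidates = standby + new_requests
    return candidates[:capacity], candidates[capacity:]
```

Running two cycles shows the pattern: a request held on standby at the first cycle is selected ahead of the newly arrived request at the second cycle.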

Self-addressing memory

Techniques are disclosed relating to self-addressing memory. In one embodiment, an apparatus includes a memory and addressing circuitry coupled to or comprised in the memory. In this embodiment, the addressing circuitry is configured to receive memory access requests corresponding to a specified sequence of memory accesses. In this embodiment, the memory access requests do not include address information. In this embodiment, the addressing circuitry is further configured to assign addresses to the memory access requests for the specified sequence of memory accesses. In some embodiments, the apparatus is configured to perform the memory access requests using the assigned addresses.
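A minimal sketch of the idea, assuming the addressing circuitry walks a pre-programmed address sequence (here a plain Python iterator); the class and method names are hypothetical.

```python
class SelfAddressingMemory:
    """Accesses carry no address; the circuitry assigns one per request."""

    def __init__(self, address_sequence):
        self._next_addr = iter(address_sequence)  # specified sequence
        self._cells = {}

    def write(self, value):
        addr = next(self._next_addr)  # circuitry supplies the address
        self._cells[addr] = value
        return addr

    def read(self):
        addr = next(self._next_addr)
        return self._cells.get(addr)
```

Because the requester never names an address, the sequence programmed into the circuitry fully determines which cells the stream of requests touches.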

Data processing apparatus for data-level pipeline

The present disclosure discloses a data processing apparatus, a data processing method, and related products. The data processing apparatus is used as a computing apparatus and is included in a combined processing apparatus. The combined processing apparatus further includes an interface apparatus and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The combined processing apparatus further includes a storage apparatus. The storage apparatus is respectively connected to the computing apparatus and other processing apparatus and is used to store data of the computing apparatus and other processing apparatus. The solution of the present disclosure takes full advantage of parallelism among different storage units to improve utilization of each functional component.

Low latency memory access
12367159 · 2025-07-22

A memory device includes receivers that use CMOS signaling levels (or other relatively large signal swing levels) on its command/address and data interfaces. The memory device also includes an asynchronous timing input that causes the reception of command and address information from the CMOS level receivers to be decoded and forwarded to the memory core (which is self-timed) without the need for a clock signal on the memory device's primary clock input. Thus, an activate row command can be received and initiated by the memory core before the memory device has finished exiting the low power state. Because the row operation is begun before the exit wait time has elapsed, the latency of one or more accesses (or other operations) following the exit from the low power state is reduced.
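The latency benefit can be shown with a back-of-envelope timing model; the function and parameter names are assumptions, not figures from the patent.

```python
def exit_latency(exit_wait, row_activate, async_capture):
    """Illustrative model of overlapping exit wait with row activation.

    With a clocked interface the device must finish exiting the low-power
    state before the activate begins, so the times add. With asynchronous
    command capture the self-timed core starts the row activate during the
    exit wait, so the two intervals overlap.
    """
    if async_capture:
        return max(exit_wait, row_activate)
    return exit_wait + row_activate
```

Whenever the row activation fits inside the exit wait, the activation cost disappears entirely from the access latency.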

Memory Architecture
20250284624 · 2025-09-11

A memory including: an array of memory cells; a memory access logic programmable to generate a write allocation that maps an input comprising elements of data in a first sequence to the memory cells of the array and a read allocation that maps the memory cells of the array to an output comprising elements of data in a second sequence; and a memory controller arranged to write the elements of data at the input to the array based on the write allocation and to read the elements of data stored in the array to the output based on the read allocation.
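The programmable write and read allocations can be sketched as two index maps; `remap` and the list-based cell model are illustrative assumptions.

```python
def remap(data, write_alloc, read_alloc):
    """Write elements to cells per write_alloc, read them per read_alloc.

    write_alloc[i] is the cell that receives input element i;
    read_alloc[j] is the cell that feeds output element j.
    """
    cells = [None] * len(write_alloc)
    for i, element in enumerate(data):
        cells[write_alloc[i]] = element
    return [cells[c] for c in read_alloc]
```

Programming an identity write allocation with a reversed read allocation, for example, makes the memory emit the input stream in reverse order with no explicit reordering step elsewhere.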

TECHNIQUES TO MULTIPLY MEMORY ACCESS BANDWIDTH USING A PLURALITY OF LINKS
20250284647 · 2025-09-11

Examples include techniques to multiply memory access bandwidth using a plurality of links. Example techniques may include generation and use of forwarding tables separately maintained at devices coupled via high speed internal-connect links (HSILs). A forwarding table enables a first device to route a memory request, received by the first device, to access a memory address of a memory at a second device. The memory request is received by the first device from a host compute device via a link between the first device and the host compute device, and is forwarded to the second device via an HSIL coupled between the first and second devices.
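Per-device routing with such a forwarding table might look like the sketch below; the range-to-link table layout and the `route` helper are assumptions for illustration.

```python
def route(addr, local_range, forwarding_table):
    """Serve a request locally or pick the HSIL link that reaches its owner.

    forwarding_table maps an (inclusive-lo, exclusive-hi) address range
    owned by another device to the HSIL link toward that device.
    """
    lo, hi = local_range
    if lo <= addr < hi:
        return "local"
    for (rlo, rhi), link in forwarding_table.items():
        if rlo <= addr < rhi:
            return link
    raise ValueError("address not mapped on any link")
```

Because each device keeps its own table, a host can issue requests over one link while the devices spread the resulting memory traffic across the HSILs.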

MEMORY SUB-SYSTEM AWARE PREFETCHING IN A DISAGGREGATED MEMORY ENVIRONMENT
20250315379 · 2025-10-09

A processing device in a memory sub-system receives a first set of requests to access first data stored at a first set of physical addresses. The processing device identifies, using a physical address table comprising information about (i) a host and (ii) an application assigned to respective sets of physical addresses, a first host identity and a first application identity corresponding to the first set of physical addresses. The processing device further provides the first set of requests, the first host identity and the first application identity to a prefetch prediction engine. The processing device receives an output of the prefetch prediction engine, the output comprising a first memory address for prefetching second data from the first set of physical addresses to fulfill a second set of requests.
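The identity lookup and the prediction step can be sketched separately; the table layout, the `identify`/`predict_prefetch` names, and the stride-based predictor are all illustrative stand-ins for the prefetch prediction engine.

```python
def identify(addr, phys_table):
    """Look up the (host, application) pair assigned to a physical address.

    phys_table maps (lo, hi) physical address ranges to identities,
    mirroring the physical address table in the abstract.
    """
    for (lo, hi), identity in phys_table.items():
        if lo <= addr < hi:
            return identity
    return None

def predict_prefetch(addrs, stride=1):
    """Toy predictor: extend a per-(host, app) access stream by one stride."""
    return addrs[-1] + stride
```

Grouping requests by (host, application) before prediction keeps interleaved streams from different tenants from polluting each other's prefetch patterns.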

MEMORY DEVICE INTERFACE COMMUNICATING WITH SET OF DATA BURSTS CORRESPONDING TO MEMORY DIES VIA DEDICATED PORTIONS FOR COMMAND PROCESSING
20250390448 · 2025-12-25

A first command associated with a first memory die is communicated via a first portion of an interface of the memory sub-system. A second command associated with a second memory die is communicated via the first portion of the interface. A data burst corresponding to the first memory die is caused to be communicated via a second portion of the interface, where the second command is communicated via the first portion of the interface concurrently with the data burst communicated via the second portion of the interface.
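The overlap can be visualized as a slot-by-slot timeline of the two interface portions; the one-slot pipeline depth and the `timeline` helper are simplifying assumptions.

```python
def timeline(commands):
    """Slot t carries command t on the command portion while the data
    burst for command t-1 occupies the data portion, so a command for
    one die overlaps the previous die's data burst."""
    slots = []
    for t in range(len(commands) + 1):
        cmd = commands[t] if t < len(commands) else None
        burst = commands[t - 1] if t > 0 else None
        slots.append((cmd, burst))
    return slots
```

In the middle slot the second die's command and the first die's data burst occupy the two portions at the same time, which is the concurrency the abstract describes.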

Techniques for storing vehicle data
12517673 · 2026-01-06

This disclosure is directed to techniques for storing vehicle data. For instance, system(s) may receive first vehicle data generated by first vehicles operating in a first geographic area. The system(s) may then store the first vehicle data in a first memory. Additionally, the system(s) may determine a type of vehicle data to request from other system(s). The system(s) may then send the other system(s) a request for the type of vehicle data. Based on sending the request, the system(s) may receive second vehicle data that includes the type of vehicle data, where the second vehicle data is generated by second vehicles operating in a second geographic area. The system(s) may then store the second vehicle data in a second memory. The first memory may be a first type of memory, and the second memory may be a second type of memory different from the first type of memory.
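A minimal sketch of the area-to-memory policy; the class name, the `"typeA"`/`"typeB"` memory labels, and the mapping rule are assumptions, since the abstract does not name the memory types.

```python
class VehicleDataStore:
    """Route records from different geographic areas to different memory types."""

    def __init__(self):
        self.memories = {"typeA": [], "typeB": []}  # two distinct memory types

    def store(self, record, area):
        memory = "typeA" if area == "first" else "typeB"
        self.memories[memory].append(record)
        return memory
```

Keeping locally generated data and requested remote data in different memory types lets each pool be sized and managed for its own access pattern.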