Patent classifications
G06F12/0859
Techniques for handling requests for data at a cache
Techniques are disclosed relating to retrieving data from an in-memory cache, such as that for a database system. In various embodiments, an in-memory cache receives a request from an application for data, where the request specifies a class having a function executable to access the data from a location external to the cache in response to a cache miss. The cache handles the request such that the cache miss is not returned to the application. Specifically, the cache, in some embodiments, determines whether it stores the requested data, and in response to determining that it does not store the data, calls the function of the class to access the data from the location external to the cache and receives the data returned by the execution of the function. The cache then stores the received data in the cache and returns the received data in response to the request.
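As a rough illustration, the pattern resembles a read-through cache where the request carries a loader object. A minimal Python sketch, in which `ReadThroughCache`, `UserLoader`, and `query_database` are all hypothetical names, not from the patent:

```python
def query_database(key):
    # Stand-in for the external data source (e.g., a database lookup).
    return f"row-for-{key}"

class UserLoader:
    """Hypothetical loader class: its function knows how to fetch the
    data from the location external to the cache."""
    def load(self, key):
        return query_database(key)

class ReadThroughCache:
    def __init__(self):
        self._store = {}

    def get(self, key, loader):
        # On a miss the cache itself calls the loader's function, stores
        # the result, and returns it -- the application never sees the miss.
        if key not in self._store:
            self._store[key] = loader.load(key)
        return self._store[key]

cache = ReadThroughCache()
print(cache.get("u42", UserLoader()))  # miss: fetched externally, then cached
print(cache.get("u42", UserLoader()))  # hit: served from the cache
```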
Cache memory, cache memory control unit, and method of controlling the cache memory
A cache memory includes: a tag storage section in which one of a plurality of indexes, each index containing a plurality of tag addresses and one suspension-indicating section, is looked up by a first address portion of an accessed address; a data storage section; a tag control section configured to, when the suspension-indicating section contained in the looked-up index indicates suspension, cause access relevant to the accessed address to wait, and when the suspension-indicating section contained in the looked-up index indicates non-suspension, compare a second address portion, different from the first address portion, of the accessed address to each of the plurality of tag addresses contained in the looked-up index and detect a tag address that matches the second address portion; and a data control section.
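A simplified software model of this lookup flow, assuming an illustrative 4-bit index / 6-bit offset address split and a two-way index; the suspension check, tag comparison, and wait/hit/miss outcomes follow the abstract:

```python
from collections import namedtuple

IndexEntry = namedtuple("IndexEntry", ["tags", "suspended"])

INDEX_BITS, OFFSET_BITS = 4, 6  # assumed address split, for illustration only

def lookup(tag_array, address):
    # First address portion selects one index; the remaining portion is
    # compared against the tag addresses held in that index.
    index = (address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    entry = tag_array[index]
    if entry.suspended:
        return ("WAIT", None)      # access must wait while the index is suspended
    for way, stored in enumerate(entry.tags):
        if stored == tag:
            return ("HIT", way)    # tag address matching the second portion
    return ("MISS", None)

tag_array = [IndexEntry(tags=[0, 1], suspended=False) for _ in range(1 << INDEX_BITS)]
print(lookup(tag_array, 1 << (OFFSET_BITS + INDEX_BITS)))  # tag 1 -> ('HIT', 1)
```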
Caching device, cache, system, method and apparatus for processing data, and medium
A caching device, an instruction cache, a system for processing an instruction, a method and apparatus for processing data, and a medium are provided. The caching device includes a first queue, a second queue, a write port group, a read port, a first pop-up port, a second pop-up port and a press-in port. The write port group is configured to write cache data into a set storage address in the first queue and/or the second queue; the read port is configured to read all cache data from the first queue and/or the second queue at one time; the press-in port is configured to press cache data into the first queue and/or the second queue; the first pop-up port is configured to pop up cache data from the first queue; and the second pop-up port is configured to pop up cache data from the second queue.
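A toy Python model of the two-queue device (the write port group's addressed writes are omitted; the queue contents and port behavior otherwise follow the abstract, with method names mirroring the port names):

```python
from collections import deque

class DualQueueDevice:
    """Toy model of the two-queue caching device; the port names follow
    the abstract, everything else is illustrative."""
    def __init__(self):
        self.q1, self.q2 = deque(), deque()

    def press_in(self, data, into_first=True):
        # Press-in port: presses cache data into the first and/or second queue.
        (self.q1 if into_first else self.q2).append(data)

    def pop_up_first(self):
        return self.q1.popleft()   # first pop-up port

    def pop_up_second(self):
        return self.q2.popleft()   # second pop-up port

    def read_all(self):
        # Read port: reads all cache data from both queues at one time.
        return list(self.q1) + list(self.q2)

dev = DualQueueDevice()
dev.press_in("a"); dev.press_in("b", into_first=False)
print(dev.read_all())        # ['a', 'b']
print(dev.pop_up_second())   # 'b'
```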
Profiling Cache Replacement
This document describes profiling cache replacement. Profiling cache replacement is a technique for managing data migration between a main memory and a cache memory to improve overall system performance. Unlike conventional cache replacement techniques, profiling cache replacement employs a profiler to maintain counters that count memory requests for access to not only the pages maintained in the cache memory, but also the pages maintained in the main memory. Based on the information collected by the profiler (e.g., about memory access requests), a mover moves pages between the main and cache memories. By way of example, the mover can swap highly-requested pages of the main memory, such as a most-requested page of the main memory, with little-requested pages of the cache memory, such as a least-requested page of the cache memory. The mover can do so, for instance, when the counters indicate that the number of page access requests for highly-requested pages of the main memory is greater than the number of page access requests for little-requested pages of the cache memory. So as not to impede the operations of memory users (e.g., client applications), the mover performs the page swapping in the background. To do so, the mover is limited to swapping pages at predetermined time intervals, such as once every microsecond (μs).
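A toy model of the profiler and mover, assuming one counter per page; in the patent the swap check runs in the background at a fixed interval (e.g., once per μs), so `maybe_swap` below would be invoked on such a timer:

```python
class ProfilingReplacer:
    """Toy model: one counter per page, covering pages in BOTH the cache
    memory and the main memory, as the profiler in the abstract does."""
    def __init__(self, cache_pages, main_pages):
        self.cache, self.main = set(cache_pages), set(main_pages)
        self.counts = {p: 0 for p in self.cache | self.main}

    def record_access(self, page):
        self.counts[page] += 1

    def maybe_swap(self):
        # Called in the background at a fixed interval (e.g., once per us).
        hot = max(self.main, key=self.counts.__getitem__)    # most-requested in main
        cold = min(self.cache, key=self.counts.__getitem__)  # least-requested in cache
        if self.counts[hot] > self.counts[cold]:
            self.main.remove(hot);   self.cache.add(hot)
            self.cache.remove(cold); self.main.add(cold)
            return (hot, cold)   # pages swapped
        return None

r = ProfilingReplacer(cache_pages=["c0"], main_pages=["m0", "m1"])
for _ in range(3):
    r.record_access("m1")
print(r.maybe_swap())   # ('m1', 'c0')
```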
Method for accessing extended memory, device, and system
In a method for accessing an extended memory, after receiving a first memory access request comprising a first access address from a processor system in a computer, an extended memory controller sends a read request for obtaining to-be-accessed data to the extended memory and returns, to the processor system, a first response message indicating that the to-be-accessed data has not been obtained. The extended memory controller writes the to-be-accessed data into a data buffer after receiving the to-be-accessed data returned by the extended memory. After receiving, from the processor system, a second memory access request comprising a second access address, the extended memory controller returns, to the processor system, the to-be-accessed data in the data buffer in response to the second memory access request, wherein the second access address is different from the first access address and points to the physical address of the to-be-accessed data.
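A sketch of the split transaction, keyed by the physical target of the data for simplicity (the abstract's distinct first and second access addresses both resolve to that target); the asynchronous fetch is modeled as completing between the two requests:

```python
class ExtendedMemoryController:
    """Toy model of the split transaction. The real controller fetches
    asynchronously; here the fetch is modeled as completing between the
    first and second requests."""
    def __init__(self, extended_memory):
        self.mem = extended_memory   # address -> data
        self.buffer = {}             # the data buffer of the abstract

    def access(self, target):
        if target in self.buffer:
            # Second memory access request: data is ready in the buffer.
            return ("DATA", self.buffer.pop(target))
        # First memory access request: start the fetch, answer "not obtained".
        self.buffer[target] = self.mem[target]
        return ("NOT_READY", None)

ctl = ExtendedMemoryController({0x100: "payload"})
print(ctl.access(0x100))   # ('NOT_READY', None)  -- first response message
print(ctl.access(0x100))   # ('DATA', 'payload')  -- served from the data buffer
```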
Arithmetic processing device, information processing device, and control method of arithmetic processing device
An arithmetic processing device connects to a main memory. The arithmetic processing device includes a cache memory which stores data; an arithmetic unit which performs arithmetic operations on data stored in the cache memory; a first control device which controls the cache memory and outputs a first request to read data stored in the main memory; and a second control device which is connected to the main memory, transmits a plurality of second requests into which the first request output from the first control device is divided, receives the data corresponding to the plurality of second requests transmitted from the main memory, and sends each piece of the data to the first control device.
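A sketch of the request division performed by the second control device, assuming an illustrative 64-byte size for each second request:

```python
CHUNK = 64  # assumed size of one second request, in bytes

def divide_first_request(start, length):
    # The second control device divides the first request into a
    # plurality of second requests of at most CHUNK bytes each.
    return [(addr, min(CHUNK, start + length - addr))
            for addr in range(start, start + length, CHUNK)]

def serve_first_request(read_main_memory, start, length):
    # Issue each second request and send each returned piece of data
    # on to the first control device (modeled here as yielding it).
    for addr, size in divide_first_request(start, length):
        yield addr, read_main_memory(addr, size)

fake_memory = lambda addr, size: bytes(size)   # stand-in for the main memory
for addr, data in serve_first_request(fake_memory, 0x1000, 150):
    print(hex(addr), len(data))   # 0x1000 64, 0x1040 64, 0x1080 22
```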
MULTI-CACHE BASED DIGITAL OUTPUT GENERATION
Multi-cache-based digital output generation is provided. A system receives data objects that include fields from a remote data source. The system sorts the data objects based on a field to generate a sorted data set. The system cleans the sorted data set to generate a clean data set based on a policy. The system receives a request for a type of digital output based on the data objects received from the data source and loads a portion of the clean data set to a first level cache. The system selects a machine learning model configured for the type of digital output, and loads a primary cache with a subset of fields stored in the first level cache selected based on the machine learning model. The system generates, based on the first level cache being complete, digital output corresponding to the type of digital output from data in the primary cache.
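A sketch of the pipeline end to end; the `StubModel`, the 100-object portion, and the field names are placeholders standing in for the machine learning model and caches:

```python
class StubModel:
    """Hypothetical model object; a real system would look one up per
    output type from a model registry."""
    def select_fields(self, rows):
        return ["id", "amount"]          # subset of fields this output needs
    def generate(self, rows):
        return {"report": rows}

def generate_digital_output(objects, sort_field, keep, output_type, models):
    sorted_set = sorted(objects, key=lambda o: o[sort_field])   # sort on a field
    clean_set = [o for o in sorted_set if keep(o)]              # clean per policy
    first_level = clean_set[:100]        # load a portion into the first level cache
    model = models[output_type]          # model configured for this output type
    fields = model.select_fields(first_level)
    primary = [{f: o[f] for f in fields} for o in first_level]  # primary cache
    return model.generate(primary)       # first level cache complete -> generate

objs = [{"id": 2, "amount": 5, "note": ""}, {"id": 1, "amount": 7, "note": "x"}]
print(generate_digital_output(objs, "id", lambda o: o["amount"] > 0,
                              "report", {"report": StubModel()}))
```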
Indicating latency associated with a memory request in a system
Methods, systems, and devices for a latency indication in a memory system or sub-system are described. An interface controller of a memory system may transmit an indication of a time delay (e.g., a wait signal) to a host in response to receiving an access command from the host. The interface controller may transmit such an indication when a latency associated with performing the access command is likely to be greater than a latency anticipated by the host. The interface controller may determine a time delay based on a status of a buffer or a status of a memory device, or both. When transmitting a signal including an indication of a time delay, the interface controller may use a pin designated and configured to transmit command or control information to the host. The interface controller may use a quantity, duration, or pattern of pulses to indicate the duration of a time delay.
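A sketch of one of the named encodings (a quantity of pulses), assuming an illustrative 10 ns granularity per pulse:

```python
PULSE_UNIT_NS = 10  # assumed: each pulse indicates 10 ns of delay

def encode_wait(delay_ns):
    # Controller side: express the time delay as a quantity of pulses
    # on the designated pin (pulse count is one of the encodings named).
    return -(-delay_ns // PULSE_UNIT_NS)   # ceiling division

def decode_wait(pulse_count):
    # Host side: recover the indicated duration from the pulse count.
    return pulse_count * PULSE_UNIT_NS

print(encode_wait(35))   # 4 pulses
print(decode_wait(4))    # 40 ns
```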
FORWARD CACHING MEMORY SYSTEMS AND METHODS
Systems, apparatuses, and methods related to memory systems and their operation are described. A memory system may be coupled to a processor, which includes a memory controller. The memory controller may determine whether targeting of first data and second data by the processor to perform an operation results in processor-side cache misses. When targeting of the first data and the second data results in processor-side cache misses, the memory controller may determine a single memory access request that requests return of both the first data and the second data, and may instruct the processor to output the single memory access request to a memory system via one or more data buses coupled between the processor and the memory system. This enables processing circuitry implemented in the processor to perform the operation based at least in part on the first data and the second data when they are returned from the memory system.
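A sketch of the miss-coalescing decision; the request format is invented for illustration, and mixed hit/miss handling is left out:

```python
def maybe_coalesce(cache, addr_a, addr_b):
    """If targeting both operands misses the processor-side cache, build
    the single memory access request the abstract describes; otherwise
    return None (mixed hit/miss handling is out of scope here)."""
    if addr_a in cache or addr_b in cache:
        return None
    return {"op": "read", "targets": sorted({addr_a, addr_b})}  # one request, two targets

cache = {0x10: b"old"}
print(maybe_coalesce(cache, 0x20, 0x30))  # {'op': 'read', 'targets': [32, 48]}
print(maybe_coalesce(cache, 0x10, 0x30))  # None (first operand hits)
```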
Main processor prefetching operands for coprocessor operations
Technology for providing data to a processing unit is disclosed. A computer processor may be divided into a master processing unit and consumer processing units. The master processing unit at least partially decodes a machine instruction and determines whether data is needed to execute the machine instruction. The master processing unit sends a request to memory for the data. The request may indicate that the data is to be sent from the memory to a consumer processing unit. The data sent by the memory in response to the request may be stored in local read storage that is close to the consumer processing unit for fast access. The master processing unit may also provide the machine instruction to the consumer processing unit. The consumer processing unit may access the data from the local read storage and execute the machine instruction based on the accessed data.
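A toy model of the master/consumer split, with dict-based memory and local read storage standing in for the hardware structures:

```python
class ConsumerUnit:
    def __init__(self):
        self.local_read_storage = {}   # small store close to the consumer

    def execute(self, instr):
        # The operand was prefetched into local read storage by the
        # master's memory request, so this read is fast and local.
        operand = self.local_read_storage.pop(instr.get("addr"), None)
        return (instr["op"], operand)

class MasterUnit:
    def __init__(self, memory, consumer):
        self.memory, self.consumer = memory, consumer

    def dispatch(self, instr):
        # Partial decode: does this instruction need data from memory?
        if "addr" in instr:
            # The request names the consumer as the destination, so the
            # memory delivers straight into its local read storage.
            self.consumer.local_read_storage[instr["addr"]] = self.memory[instr["addr"]]
        return self.consumer.execute(instr)   # hand the instruction over

master = MasterUnit({0x40: 7}, ConsumerUnit())
print(master.dispatch({"op": "add", "addr": 0x40}))   # ('add', 7)
```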