G06F2212/70

TECHNOLOGIES FOR REGION-BIASED CACHE MANAGEMENT

Technologies for region-biased cache management include a network computing device. The network computing device is configured to divide an allocated portion of the main memory of the network computing device into a plurality of memory regions, each memory region having a cache block that includes a plurality of cache lines of a cache memory of a processor of the network computing device. The network computing device is further configured to determine whether a cache line selected for eviction from the cache memory corresponds to one of the plurality of memory regions and, if so, retrieve a dynamically adjustable bias value (i.e., a fractional probability) associated with the corresponding memory region. Additionally, the network computing device is configured to generate a bias comparator value for the corresponding memory region, compare the bias value of the corresponding memory region with the bias comparator value generated for that memory region, and determine whether to evict the cache line based on the comparison. Other embodiments are described herein.
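The biased-eviction decision can be sketched in Python. This is a minimal sketch, not the patented implementation: the class name and bias table are invented, and a uniform random draw stands in for however the hardware actually generates the bias comparator value.

```python
import random

class RegionBiasedEvictor:
    """Minimal sketch: each memory region carries a dynamically adjustable
    fractional bias value; a cache line mapped to that region is evicted
    only when a freshly generated comparator value falls below the bias."""

    def __init__(self):
        self.region_bias = {}  # region id -> fractional probability in [0.0, 1.0]

    def set_bias(self, region, bias):
        self.region_bias[region] = bias

    def should_evict(self, region, rng=random.random):
        bias = self.region_bias.get(region)
        if bias is None:
            return True  # line is not in a managed region: evict normally
        comparator = rng()        # bias comparator value for this region
        return comparator < bias  # evict only if the comparison allows it
```

With a bias of 1.0 a line is always evictable, with 0.0 it is effectively pinned, and intermediate values make eviction probabilistic, which is presumably how the dynamically adjustable bias steers cache residency toward favored regions.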

ALTERNATIVE DIRECT-MAPPED CACHE AND CACHE REPLACEMENT METHOD
20170286317 · 2017-10-05 ·

A method includes storing a first block of main memory in a cache line of a direct-mapped cache, storing a first tag in a current tag field of the cache line, wherein the first tag identifies a first memory address for the first block of main memory, and storing a second tag in a previous miss tag field of the cache line in response to receiving a memory reference having a tag that does not match the tag stored in the current tag field. The second tag identifies a second memory address for a second block of main memory, and the first and second blocks are both mapped to the cache line. The method may further include storing a binary value in a last reference bit field to indicate whether the most recently received memory reference was directed to the current tag field or previous miss tag field.
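A minimal Python sketch of the cache-line layout and miss handling described above; the field names, the index/tag split, and the returned status strings are illustrative assumptions, not the claimed method itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheLine:
    current_tag: Optional[int] = None    # tag of the block currently cached
    prev_miss_tag: Optional[int] = None  # tag remembered from the last miss
    last_ref_bit: int = 0                # 0 -> current tag field, 1 -> previous miss tag field

class AltDirectMappedCache:
    def __init__(self, num_lines, block_bits=4):
        self.lines = [CacheLine() for _ in range(num_lines)]
        self.num_lines = num_lines
        self.block_bits = block_bits

    def reference(self, address):
        block = address >> self.block_bits
        index = block % self.num_lines
        tag = block // self.num_lines
        line = self.lines[index]
        if line.current_tag is None:
            line.current_tag = tag  # first block of main memory stored in this line
            line.last_ref_bit = 0
            return "fill"
        if tag == line.current_tag:
            line.last_ref_bit = 0
            return "hit"
        # the reference's tag does not match the current tag field:
        # remember it in the previous miss tag field
        line.prev_miss_tag = tag
        line.last_ref_bit = 1
        return "miss"
```

The previous-miss tag and last-reference bit give the replacement policy a one-entry history per line, so two blocks that conflict in a direct-mapped cache can be told apart on the next reference.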

SAFE TRANSMIT PACKET PROCESSING FOR NETWORK FUNCTION VIRTUALIZATION APPLICATIONS
20170249162 · 2017-08-31 ·

A transmit packet processing system includes a NIC, a memory, one or more processors in communication with the memory, and a device driver. The memory has a first set and a second set of physical memory pages. The device driver is loaded in an OS and is configured to initialize the NIC. The device driver is further configured to assign a plurality of rings to specific physical memory pages. The plurality of rings includes transmit rings and receive rings. The transmit rings are utilized by an application in the application memory space. The transmit rings are assigned to the first set of physical memory pages which are writable by the application. The receive rings are assigned to the second set of physical memory pages which are not writable by the application. The device driver is further configured to initiate a mapping of the transmit rings into the application memory space.
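The writability split can be modeled with a small Python sketch; the `Page` and `Ring` objects below are illustrative stand-ins for physical memory pages and descriptor rings, and the permission check is a software model of what page-table protections enforce in the actual system.

```python
class Page:
    def __init__(self, writable_by_app):
        self.writable_by_app = writable_by_app
        self.contents = None

class Ring:
    def __init__(self, kind, page):
        self.kind = kind  # "tx" or "rx"
        self.page = page

def init_rings(num_tx, num_rx):
    # transmit rings are assigned to application-writable pages;
    # receive rings go on pages the application cannot write
    tx_rings = [Ring("tx", Page(writable_by_app=True)) for _ in range(num_tx)]
    rx_rings = [Ring("rx", Page(writable_by_app=False)) for _ in range(num_rx)]
    return tx_rings, rx_rings

def app_write(ring, data):
    """An application write is only honored on a writable backing page."""
    if not ring.page.writable_by_app:
        raise PermissionError("ring page is not writable by the application")
    ring.page.contents = data
```

Because only the transmit rings are mapped writable into the application memory space, a buggy or malicious application can corrupt at most its own transmit state, never the receive path.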

METHOD AND DEVICE FOR CACHE MANAGEMENT
20170249310 · 2017-08-31 ·

A method, software, and device for managing a cache service layer of an online solution are described. The online solution includes a database, at least one client, a cache service layer having a plurality of nodes which are interconnected with each other and provide processing and caching power for the cache service layer, and a cache manager. The method comprises reading in a business object from the database; assigning, using the cache manager, the business object to a business object group on a first node of the cache service layer; determining, by the cache manager, the effective probability of cache expiration of the business object group; and setting an expiration time for the business object group based on the determination of the effective probability of cache expiration of the business object group.
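One plausible reading of "effective probability of cache expiration" is a combination of per-object expiration probabilities. The independence assumption and the linear TTL mapping below are illustrative guesses for the sketch, not the patent's formula.

```python
def effective_expiration_probability(object_probs):
    # probability that at least one object in the group expires,
    # assuming (illustratively) that objects expire independently
    p_none = 1.0
    for p in object_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def group_expiration_time(object_probs, max_ttl=3600.0):
    # a group that is more likely to expire gets a shorter expiration time
    return max_ttl * (1.0 - effective_expiration_probability(object_probs))
```

Grouping objects and expiring them together means one timer decision per group rather than per object, which is the point of pushing the calculation into the cache manager.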

Write reordering in a hybrid disk drive

A hybrid drive and associated methods increase the rate at which data are transferred to a nonvolatile storage medium in the hybrid drive. By using a large nonvolatile solid state memory device as cache memory for a magnetic disk drive, a very large number of write commands can be cached and subsequently reordered and executed in an efficient manner. In addition, strategic selection and reordering of only a portion of the write commands stored in the nonvolatile solid state memory device increases efficiency of the reordering process.
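The "strategic selection and reordering of only a portion" of the cached writes can be sketched as an elevator-style batch pass. The nearest-first selection and single-direction sweep below are illustrative heuristics standing in for the drive's actual algorithm.

```python
def reorder_batch(cached_lbas, head_pos, batch_size):
    # select only a portion of the cached writes: the ones closest to the
    # current head position
    nearest = sorted(cached_lbas, key=lambda lba: abs(lba - head_pos))[:batch_size]
    # sweep outward in one direction first, then serve the rest on the way back,
    # minimizing seek distance within the batch
    ahead = sorted(lba for lba in nearest if lba >= head_pos)
    behind = sorted((lba for lba in nearest if lba < head_pos), reverse=True)
    return ahead + behind
```

Reordering only a bounded batch keeps the sort cheap even when the solid-state cache holds a very large number of pending writes, which is the efficiency argument the abstract makes.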

METHOD AND APPARATUS FOR CYCLICAL KEY-OFF FILE REPLACEMENT

A system includes a processor configured to erase external working memory and program a target image of an authenticated update file into the erased working memory. The processor is also configured to erase a first internal memory location, containing data to be replaced by an update, and program portions of the target image to the first internal memory location for finite time periods following a plurality of key-offs, until a full target image is programmed in internal memory.
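The key-off loop can be sketched as a resumable chunked copy; the chunk size, method names, and the bytearray standing in for internal flash are all illustrative.

```python
class KeyOffProgrammer:
    def __init__(self, target_image, chunk_size):
        self.target_image = target_image  # authenticated update image in external working memory
        self.chunk_size = chunk_size      # how much fits in one finite key-off window
        self.internal = bytearray()       # erased internal memory location being programmed
        self.offset = 0

    def on_key_off(self):
        """Program one portion of the target image after a key-off; returns
        True once the full image is present in internal memory."""
        end = min(self.offset + self.chunk_size, len(self.target_image))
        self.internal += self.target_image[self.offset:end]
        self.offset = end
        return self.offset >= len(self.target_image)
```

Persisting the offset between cycles is what lets the update span a plurality of key-offs without ever exceeding the finite time window available after each one.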

PROVIDING SCALABLE DYNAMIC RANDOM ACCESS MEMORY (DRAM) CACHE MANAGEMENT USING DRAM CACHE INDICATOR CACHES

Providing scalable dynamic random access memory (DRAM) cache management using DRAM cache indicator caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in high-bandwidth memory. The DRAM cache management circuit comprises a DRAM cache indicator cache, which stores master table entries that are read from a master table in a system memory DRAM and that contain DRAM cache indicators. The DRAM cache indicators enable the DRAM cache management circuit to determine whether a memory line in the system memory DRAM is cached in the DRAM cache of high-bandwidth memory, and, if so, in which way of the DRAM cache the memory line is stored. Based on the DRAM cache indicator cache, the DRAM cache management circuit may determine whether to employ the DRAM cache and/or the system memory DRAM to perform a memory access operation in an optimal manner.
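A minimal Python model of the lookup path: dictionaries stand in for the master table in system-memory DRAM and for the on-chip indicator cache, and the returned strings merely label which memory the access would target.

```python
class DramCacheManager:
    def __init__(self, master_table):
        self.master_table = master_table  # memory line -> DRAM-cache way, or None
        self.indicator_cache = {}         # cached subset of master-table entries

    def lookup(self, line_addr):
        if line_addr in self.indicator_cache:
            return self.indicator_cache[line_addr], "indicator-cache hit"
        # miss: fetch the master-table entry from system-memory DRAM
        way = self.master_table.get(line_addr)
        self.indicator_cache[line_addr] = way
        return way, "indicator-cache miss"

    def access(self, line_addr):
        way, _ = self.lookup(line_addr)
        if way is not None:
            return "read HBM DRAM cache, way %d" % way
        return "read system-memory DRAM"
```

The scalability argument is visible in the model: only a small cached subset of the master table sits near the management circuit, while the full table scales with system-memory DRAM.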

ASSOCIATIVE AND ATOMIC WRITE-BACK CACHING SYSTEM AND METHOD FOR STORAGE SUBSYSTEM

In response to a cacheable write request from a host, physical cache locations are allocated from a free list, and the data blocks are written to those cache locations without regard to whether any read requests to the corresponding logical addresses are pending. After the data has been written, and again without regard to whether any read requests are pending against the corresponding logical addresses, metadata is updated to associate the cache locations with the logical addresses. A count of data access requests pending against each cache location having valid data is maintained, and a cache location is only returned to the free list when the count indicates no data access requests are pending against the cache location.
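The free-list and pending-count discipline can be sketched in Python; the structure names and the exact point at which metadata is updated are illustrative, and real data movement is elided.

```python
class WriteBackCache:
    def __init__(self, num_locations):
        self.free_list = list(range(num_locations))
        self.metadata = {}  # logical address -> cache location
        self.pending = {}   # cache location -> count of pending data accesses

    def write(self, logical_addr, data):
        # allocate without regard to pending reads on the logical address
        loc = self.free_list.pop()
        self.pending.setdefault(loc, 0)
        # ... data blocks written to loc here ...
        # only afterwards is metadata updated to bind the address to loc
        self.metadata[logical_addr] = loc
        return loc

    def begin_access(self, loc):
        self.pending[loc] += 1

    def end_access(self, loc):
        self.pending[loc] -= 1
        self._maybe_free(loc)

    def invalidate(self, logical_addr):
        self._maybe_free(self.metadata.pop(logical_addr))

    def _maybe_free(self, loc):
        # a location returns to the free list only when no accesses are
        # pending against it and no logical address still maps to it
        if self.pending.get(loc, 0) == 0 and loc not in self.metadata.values():
            self.free_list.append(loc)
            self.pending.pop(loc, None)
```

Deferring the metadata update until after the data is written is what makes the operation atomic from a reader's point of view: a pending read either sees the old binding or the complete new one, never a half-written location.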

Caching of service decisions
11431639 · 2022-08-30 ·

Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classification operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations.
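The cache-entry mechanism can be sketched as a dictionary keyed on packet header values; the classifier callback and the identifier values below are illustrative, not the forwarding element's actual flow-cache format.

```python
class ServiceDecisionCache:
    def __init__(self, classify):
        self.classify = classify  # full classification pipeline: headers -> service op id
        self.cache = {}           # header tuple -> cached service operation identifier

    def process(self, headers):
        if headers in self.cache:
            # subsequent packets of the flow skip the classification series
            return self.cache[headers], "cache"
        service_op_id = self.classify(headers)  # run the classification operations once
        self.cache[headers] = service_op_id
        return service_op_id, "classified"
```

Because the identifier travels with the packet, the network service can look up the payload transformations to apply directly, sparing it a second, redundant classification pass.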

MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS

In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces, and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.
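The entry-retention idea can be modeled as an address-space-tagged translation buffer. The identifier tagging and dictionary representation below illustrate the general mechanism under stated assumptions; they are not the patented implementation.

```python
class TaggedTlb:
    def __init__(self):
        self.entries = {}          # (address space id, virtual page) -> physical page
        self.known_spaces = set()  # the "list of address spaces"
        self.current = None

    def switch(self, asid):
        # no flush on a context switch: entries of the first address space
        # stay resident because every entry is tagged with its space
        self.known_spaces.add(asid)
        self.current = asid

    def insert(self, vpage, ppage):
        self.entries[(self.current, vpage)] = ppage

    def lookup(self, vpage):
        return self.entries.get((self.current, vpage))
```

Switching back to a previously seen address space then hits the still-resident entries immediately, which is where the context-switch overhead savings come from.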