
MEMORY SHARING METHOD OF VIRTUAL MACHINES BASED ON COMBINATION OF KSM AND PASS-THROUGH

A memory sharing method for virtual machines that combines KSM and pass-through, including: a virtual machine manager judging whether a guest operating system uses the IOMMU; if not, the guest does not participate in the shared mapping of the KSM technology; if yes, judging each memory page of the guest to determine whether it is a mapping page; if it is, retaining the mapping page in the host; and if it is not, applying the KSM technology to all non-mapping pages, on the premise of keeping the properties of pass-through, to merge memory pages with identical contents among the various virtual machines and to write-protect them. The guest memory pages are thus divided into pages dedicated to DMA and pages for non-DMA use, and the KSM technology is applied selectively only to the non-DMA pages, so that memory resources are saved while the properties of pass-through are preserved.
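A minimal sketch of this page-classification logic, assuming hypothetical guest and page descriptors (uses_iommu, is_dma_mapping, and ksm_merge_and_write_protect are illustrative names, not from the patent):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical guest page descriptor; field names are illustrative. */
struct guest_page {
    bool is_dma_mapping;   /* page is an IOMMU/DMA mapping page       */
    bool write_protected;  /* set once the page is merged by KSM      */
};

struct guest {
    bool uses_iommu;       /* guest OS programs the IOMMU             */
    struct guest_page *pages;
    size_t npages;
};

/* Placeholder for the KSM merge step: deduplicate identical pages
 * across guests and write-protect the shared copy. */
static void ksm_merge_and_write_protect(struct guest_page *pg)
{
    pg->write_protected = true;   /* real KSM would remap + CoW here  */
}

void scan_guest_for_sharing(struct guest *g)
{
    if (!g->uses_iommu)
        return;   /* per the abstract: no KSM shared mapping here     */

    for (size_t i = 0; i < g->npages; i++) {
        struct guest_page *pg = &g->pages[i];
        if (pg->is_dma_mapping)
            continue;   /* retain DMA mapping pages in the host       */
        ksm_merge_and_write_protect(pg);  /* merge non-DMA pages only */
    }
}
```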

REDUCING IDLE RESOURCE USAGE
20180011789 · 2018-01-11

A method, computer program product, and system for reallocating the resources of an idle application or program. A computer runs an application or program and starts a predetermined time interval. The computer increments a number counter for each event triggered during the predetermined time interval, where an event is a predetermined trigger that is activated while the application or program runs. The method and system include comparing the total number of events that occur during the predetermined time interval to a threshold value, the total number of events being the value of the number counter at the end of the interval. In response to the computer determining that the total number of events is below the threshold value, resources allocated to the program are released by activating either: i) a garbage collector application, or ii) a resource release application.
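A compact sketch of the interval/counter/threshold scheme; the threshold value and the function names are assumptions for illustration:

```c
#include <stdio.h>

#define EVENT_THRESHOLD 10u   /* assumed "idle" threshold             */

static unsigned event_counter;

/* Called whenever a predetermined trigger fires while the
 * application or program is running. */
void on_event(void)
{
    event_counter++;
}

/* Called when the predetermined time interval elapses. */
void on_interval_end(void)
{
    if (event_counter < EVENT_THRESHOLD) {
        /* The program looks idle: release its resources by invoking
         * either a garbage collector or a resource-release routine. */
        printf("idle: releasing allocated resources\n");
    }
    event_counter = 0;   /* restart the counter for the next interval */
}
```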

METHODS AND APPARATUS TO FACILITATE READ-MODIFY-WRITE SUPPORT IN A COHERENT VICTIM CACHE WITH PARALLEL DATA PATHS

Methods, apparatus, systems, and articles of manufacture are disclosed to facilitate read-modify-write support in a coherent victim cache with parallel data paths. An example apparatus includes a random-access memory configured to be coupled to a central processing unit via a first interface and a second interface, the random-access memory configured to obtain, via a snoop interface, a read request indicating a first address to read; an address encoder coupled to the random-access memory, the address encoder configured to generate, when the random-access memory indicates a hit of the read request, a second address corresponding to a victim cache based on the first address; and a multiplexer coupled to the victim cache to transmit a response including data obtained from the second address of the victim cache.
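The hit path can be modeled in software roughly as follows; the cache geometry and the modulo-based address encoding are assumptions of this sketch, not details from the patent:

```c
#include <stdbool.h>
#include <stdint.h>

#define VC_LINES 64   /* assumed victim-cache size                    */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint64_t data;
} vc_line_t;

static vc_line_t victim_cache[VC_LINES];

/* Address encoder: derive the second (victim-cache) address from the
 * first address of the read request. */
static uint32_t encode_victim_addr(uint32_t first_addr)
{
    return first_addr % VC_LINES;
}

/* Snoop-interface read: on a hit, the multiplexer path returns the
 * data stored at the second address of the victim cache. */
bool snoop_read(uint32_t first_addr, uint64_t *out)
{
    uint32_t second_addr = encode_victim_addr(first_addr);
    const vc_line_t *line = &victim_cache[second_addr];

    if (line->valid && line->tag == first_addr / VC_LINES) {
        *out = line->data;   /* response taken from the victim cache  */
        return true;
    }
    return false;            /* miss: response comes from elsewhere   */
}
```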

AGGRESSIVE WRITE FLUSH SCHEME FOR A VICTIM CACHE
20230004500 · 2023-01-05

A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes: line type bits configured to store an indication that a corresponding cache line of the second sub-cache is configured to store write-miss data, and an eviction controller configured to evict a cache line of the second sub-cache storing write-miss data based on an indication that the cache line has been fully written.
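One way to model the line type bits and the fully-written eviction trigger; the 64-byte line size and the per-byte write mask are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 64   /* assumed cache-line size                    */

typedef struct {
    bool     is_write_miss;   /* line type bit: holds write-miss data */
    uint64_t written_mask;    /* one bit per byte already written     */
} subcache_line_t;

/* Record a write of len bytes at offset off; returns true once every
 * byte of the line has been written. */
bool record_write(subcache_line_t *l, unsigned off, unsigned len)
{
    for (unsigned i = off; i < off + len && i < LINE_BYTES; i++)
        l->written_mask |= 1ULL << i;
    return l->written_mask == ~0ULL;
}

/* Eviction controller policy: aggressively flush a write-miss line
 * as soon as it has been fully written. */
bool should_evict(const subcache_line_t *l)
{
    return l->is_write_miss && l->written_mask == ~0ULL;
}
```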

System and method for optimizing DRAM bus switching using LLC
11567885 · 2023-01-31

The present disclosure relates to a system and method for optimizing switching of a DRAM bus using the LLC. An embodiment of the disclosure includes sending a first type request from a first type queue to the second memory via the memory bus if the direction setting of the memory bus is in a first direction corresponding to the first type request, and decrementing a current direction credit count by a first type transaction decrement value. If the decremented current direction credit count is greater than zero, another first type request is sent to the second memory via the memory bus and the current direction credit count is decremented again by the first type transaction decrement value. If the decremented current direction credit count is zero, the direction setting of the memory bus is switched to a second direction and the current direction credit count is reset to a second type initial value.
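A sketch of the credit-based turnaround loop; the credit values, the queue model, and the function names are assumptions for illustration:

```c
#include <stdbool.h>

enum bus_dir { DIR_FIRST, DIR_SECOND };

static enum bus_dir direction = DIR_FIRST;
static int credits = 8;                  /* assumed initial credits   */
static const int FIRST_TYPE_DECREMENT = 1;
static const int SECOND_TYPE_INITIAL  = 8;

static int pending_first = 20;           /* stub first-type queue     */

static bool first_queue_has_request(void) { return pending_first > 0; }
static void send_first_to_memory(void)    { pending_first--; }

void service_first_type_queue(void)
{
    /* Drain first-type requests while the bus direction matches and
     * direction credits remain. */
    while (direction == DIR_FIRST && first_queue_has_request()) {
        send_first_to_memory();
        credits -= FIRST_TYPE_DECREMENT;

        if (credits <= 0) {
            /* Credits exhausted: turn the bus around and reload the
             * credit count with the second-type initial value. */
            direction = DIR_SECOND;
            credits = SECOND_TYPE_INITIAL;
        }
    }
}
```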

Feature map and weight selection method and accelerating device

The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on the weights of a neural network to obtain pruned weights, and an operation unit configured to train the neural network according to the pruned weights. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window and, when the M weights meet a preset condition, set all or part of the M weights to 0. The processing device can reduce memory accesses while reducing the amount of computation, thereby achieving an acceleration ratio and reducing energy consumption.
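A minimal sketch of the sliding-window pruning step. The L1-norm condition, the non-overlapping window stride, and zeroing the whole group are assumptions; the abstract only requires that M windowed weights meeting a preset condition be set wholly or partly to 0:

```c
#include <math.h>
#include <stddef.h>

/* Slide a window of m weights across the weight array and zero each
 * group whose L1 norm falls below the threshold (assumed condition). */
void coarse_grained_prune(float *weights, size_t n, size_t m,
                          float threshold)
{
    for (size_t start = 0; start + m <= n; start += m) {
        float l1 = 0.0f;
        for (size_t i = 0; i < m; i++)
            l1 += fabsf(weights[start + i]);

        if (l1 < threshold) {              /* preset condition met    */
            for (size_t i = 0; i < m; i++)
                weights[start + i] = 0.0f; /* prune the whole group   */
        }
    }
}
```

Zeroing whole groups rather than scattered individual weights is what lets hardware skip entire blocks of memory accesses, which is where the claimed reduction in memory traffic comes from.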

Message Object Traversal In High-Performance Network Messaging Architecture
20230027817 · 2023-01-26

A communications system implements instructions including maintaining a message object that includes an array of entries. Each entry of the array includes a field identifier, a data type, and a next entry pointer. The next entry pointers and a head pointer establish a linked list of entries. The instructions include, in response to a request to add a new entry to the message object, calculating an index based on a field identifier of the new entry and determining whether the entry at the calculated index within the array of entries is active. The instructions include, if the entry is inactive, writing a data type, field identifier, and data value of the new entry to the calculated index, and inserting the new entry into the linked list. The instructions include, if the entry is already active, selectively expanding the size of the array and repeating the calculating and determining.
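A sketch of the add-entry path, assuming a modulo index calculation and unique field identifiers; the structure layout and names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    bool     active;
    uint32_t field_id;
    uint8_t  data_type;
    uint64_t value;
    int      next;        /* next-entry index; -1 terminates the list */
} entry_t;

typedef struct {
    entry_t *entries;
    size_t   capacity;    /* must be > 0                              */
    int      head;        /* head of the linked list of entries       */
} message_t;

/* Try to place an entry at its calculated index; fails if the slot
 * at that index is already active. */
static bool place(entry_t *arr, size_t cap, int *head,
                  uint32_t fid, uint8_t type, uint64_t v)
{
    size_t idx = fid % cap;       /* assumed index calculation        */
    if (arr[idx].active)
        return false;
    arr[idx] = (entry_t){ true, fid, type, v, *head };
    *head = (int)idx;             /* insert at the list head          */
    return true;
}

/* Add a new entry; on a collision with an active slot, selectively
 * expand the array (doubling) and re-place every entry, repeating
 * until all entries land cleanly. Assumes field ids are unique. */
bool message_add(message_t *m, uint32_t fid, uint8_t type, uint64_t v)
{
    if (place(m->entries, m->capacity, &m->head, fid, type, v))
        return true;

    for (size_t cap = m->capacity * 2; ; cap *= 2) {
        entry_t *grown = calloc(cap, sizeof(entry_t));
        if (!grown)
            return false;

        int head = -1;
        bool ok = place(grown, cap, &head, fid, type, v);
        for (size_t i = 0; ok && i < m->capacity; i++)
            if (m->entries[i].active)
                ok = place(grown, cap, &head, m->entries[i].field_id,
                           m->entries[i].data_type, m->entries[i].value);

        if (ok) {
            free(m->entries);
            m->entries = grown;
            m->capacity = cap;
            m->head = head;
            return true;
        }
        free(grown);              /* still colliding: grow again      */
    }
}
```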

APPROACH FOR SUPPORTING MEMORY-CENTRIC OPERATIONS ON CACHED DATA
20230021492 · 2023-01-26

A technical solution to the technical problem of how to support memory-centric operations on cached data uses a novel memory-centric memory operation that invokes write back functionality on cache controllers and memory controllers. The write back functionality enforces selective flushing of dirty, i.e., modified, cached data that is needed for memory-centric memory operations from caches to the completion level of the memory-centric memory operations, and updates the coherence state appropriately at each cache level. The technical solution ensures that commands to implement the selective cache flushing are ordered before the memory-centric memory operation at the completion level of the memory-centric memory operation.
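A software model of the selective flush, assuming a simple three-level hierarchy with MESI-like states; the structures and the downgrade-to-shared choice are assumptions of the sketch:

```c
#include <stdint.h>
#include <stdio.h>

enum coh_state { INVALID, SHARED, MODIFIED };

#define LEVELS          3
#define LINES_PER_LEVEL 4

typedef struct {
    uint64_t       addr;
    enum coh_state state;
} cache_line_t;

static cache_line_t hierarchy[LEVELS][LINES_PER_LEVEL];

/* Selectively write back only the dirty (modified) copies of the
 * data a memory-centric operation needs, from every cache level
 * above the operation's completion level, updating coherence state
 * at each level. The write-backs must be ordered before the
 * memory-centric operation itself. */
void flush_for_memory_centric_op(uint64_t addr, int completion_level)
{
    for (int lvl = 0; lvl < completion_level; lvl++) {
        for (int i = 0; i < LINES_PER_LEVEL; i++) {
            cache_line_t *l = &hierarchy[lvl][i];
            if (l->addr == addr && l->state == MODIFIED) {
                printf("write back from L%d: 0x%llx\n",
                       lvl + 1, (unsigned long long)addr);
                l->state = SHARED;   /* downgrade after write-back    */
            }
        }
    }
    /* ...then issue the memory-centric operation at completion_level. */
}
```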

NONVOLATILE MEMORY DEVICE INCLUDING COMBINED SENSING NODE AND CACHE READ METHOD THEREOF

A cache read method is provided for a nonvolatile memory device including a plurality of page buffer units and cache latches, each page buffer unit having a sensing latch and a sensing node line. The method comprises: performing a first on-chip valley search (OVS) read on a selected memory cell using a first sensing node line and a first sensing latch of a first page buffer unit of the plurality of page buffer units; storing first data sensed from the selected memory cell in the first sensing latch, the first data being based on a result of the first OVS read; dumping the first data to the sensing node lines of at least one page buffer unit, excluding the first page buffer unit, from among the plurality of page buffer units; and performing a second OVS read on the selected memory cell using the first sensing latch.
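The sequence lends itself to a short software model; the page-buffer structure below stands in for hardware latches and sensing node lines, and every name is an assumption of the sketch:

```c
#include <stdint.h>

#define NUM_PB 4   /* assumed number of page buffer units             */

typedef struct {
    uint8_t sensing_latch;       /* S-latch                           */
    uint8_t sensing_node_line;   /* combined sensing node line        */
} page_buffer_t;

static page_buffer_t pb[NUM_PB];

/* Stand-in for one on-chip valley search (OVS) sensing operation. */
static uint8_t ovs_read(int pb_idx)
{
    (void)pb_idx;
    return 1;   /* placeholder for the analog sensing result          */
}

void cache_read_sequence(void)
{
    /* 1. First OVS read using the first page buffer unit's sensing
     *    node line and sensing latch. */
    pb[0].sensing_latch = ovs_read(0);

    /* 2. The first sensed data is held in the first sensing latch.   */
    uint8_t first_data = pb[0].sensing_latch;

    /* 3. Dump the first data to the sensing node lines of the other
     *    page buffer units, freeing the first unit for reuse. */
    for (int i = 1; i < NUM_PB; i++)
        pb[i].sensing_node_line = first_data;

    /* 4. Second OVS read on the same cell reuses the first sensing
     *    latch. */
    pb[0].sensing_latch = ovs_read(0);
}
```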

Method and apparatus for using a storage system as main memory
11556469 · 2023-01-17

A data access system including a processor, multiple cache modules for the main memory, and a storage drive. Each cache module includes an FLC controller and a main memory cache, and the multiple cache modules together function as main memory. The processor sends read/write requests (with a physical address) to the cache module. The cache module includes two or more stages, each stage including an FLC controller and DRAM (with an associated controller). If the first stage FLC module does not contain the physical address, the request is forwarded to a second stage FLC module. If the second stage FLC module does not contain the physical address, the request is forwarded to a partition of the storage drive reserved for main memory. The first stage FLC module provides high-speed, lower-power operation, while the second stage FLC is a low-cost implementation. Multiple FLC modules may connect to the processor in parallel.
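A sketch of the cascaded lookup through the FLC stages to the storage drive; the stage interface and all names are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-stage interface: an FLC controller lookup plus an
 * access path into that stage's DRAM cache. */
typedef struct {
    bool (*contains)(uint64_t phys_addr);   /* FLC controller lookup  */
    void (*access)(uint64_t phys_addr);     /* service from DRAM      */
} flc_stage_t;

static void storage_drive_access(uint64_t phys_addr)
{
    /* Final fallback: the partition of the storage drive reserved
     * for main memory. */
    printf("storage drive: 0x%llx\n", (unsigned long long)phys_addr);
}

/* Route a processor read/write request (physical address) through
 * the two-stage FLC hierarchy. */
void handle_request(const flc_stage_t *stage1, const flc_stage_t *stage2,
                    uint64_t phys_addr)
{
    if (stage1->contains(phys_addr))
        stage1->access(phys_addr);       /* fast, lower-power stage   */
    else if (stage2->contains(phys_addr))
        stage2->access(phys_addr);       /* low-cost second stage     */
    else
        storage_drive_access(phys_addr);
}
```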