G06F2212/602

METHOD AND APPARATUS FOR IMPLIED BIT HANDLING IN FLOATING POINT MULTIPLICATION
20230085048 · 2023-03-16

A method is provided that includes performing, by a processor in response to a floating point multiply instruction, multiplication of floating point numbers, wherein determination of the values of the implied bits of the leading bit encoded mantissas of the floating point numbers is performed in parallel with multiplication of the encoded mantissas. The method further includes storing, by the processor, a result of the floating point multiply instruction in a storage location indicated by the floating point multiply instruction.
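
To illustrate the parallelism the abstract describes, the following Python sketch works with binary32 operands: the implied (hidden) mantissa bit depends only on the exponent field (1 for normal encodings, 0 for subnormals), so it can be resolved while the fraction-only multiply proceeds, and the product corrected afterwards. The decomposition and function names are illustrative assumptions, not the patented circuit.

```python
# Field widths below are for IEEE-754 binary32.
FRAC_BITS = 23

def split_fields(bits: int):
    """Split a raw binary32 encoding into (sign, biased_exp, fraction)."""
    sign = (bits >> 31) & 0x1
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    return sign, exp, frac

def implied_bit(exp: int) -> int:
    """Implied mantissa bit: 1 for normal encodings, 0 for subnormals."""
    return 0 if exp == 0 else 1

def multiply_mantissas(bits_a: int, bits_b: int) -> int:
    _, exp_a, frac_a = split_fields(bits_a)
    _, exp_b, frac_b = split_fields(bits_b)

    # "In parallel": the fraction-only product needs neither implied bit,
    # so hardware can start this wide multiply immediately.
    partial = frac_a * frac_b

    # Meanwhile the implied bits are resolved from the exponent fields alone.
    ia, ib = implied_bit(exp_a), implied_bit(exp_b)

    # Correct the partial product:
    # (ia<<23 | fa) * (ib<<23 | fb)
    #   = fa*fb + ia*fb<<23 + ib*fa<<23 + (ia&ib)<<46
    return (partial
            + (ia * frac_b << FRAC_BITS)
            + (ib * frac_a << FRAC_BITS)
            + ((ia & ib) << (2 * FRAC_BITS)))
```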

PREFETCHING DATA IN A DISTRIBUTED STORAGE SYSTEM
20230071111 · 2023-03-09

Data can be prefetched in a distributed storage system. For example, a computing device can receive, from a message queue, a message with metadata associated with at least one request for an input/output (I/O) operation. The computing device can determine, based on the message from the message queue, an additional I/O operation predicted to be requested by a client subsequent to the at least one request for the I/O operation. The computing device can send a notification to a storage node, of a plurality of storage nodes, associated with the additional I/O operation for prefetching data of the additional I/O operation prior to the client requesting the additional I/O operation.
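
A minimal sketch of that flow, assuming a naive sequential-read predictor and a toy chunk-placement function; the names (`Message`, `predict_next_io`, `node_for`, the node's `prefetch` method) are hypothetical stand-ins, not the patented design.

```python
from dataclasses import dataclass

@dataclass
class Message:
    client_id: str
    object_id: str
    offset: int
    length: int

def predict_next_io(msg: Message) -> Message:
    """Assume a sequential read pattern: the predicted next request
    continues immediately after the current one."""
    return Message(msg.client_id, msg.object_id,
                   msg.offset + msg.length, msg.length)

def node_for(object_id: str, offset: int, num_nodes: int) -> int:
    """Toy placement: stripe fixed-size 4 MiB chunks across nodes."""
    chunk = offset // (4 * 1024 * 1024)
    return hash((object_id, chunk)) % num_nodes

def handle_message(msg: Message, nodes: list) -> None:
    """Drain one queue message, predict the next I/O, notify its node."""
    predicted = predict_next_io(msg)
    node = nodes[node_for(predicted.object_id, predicted.offset, len(nodes))]
    # The storage node stages the data into its cache before the client asks.
    node.prefetch(predicted.object_id, predicted.offset, predicted.length)
```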

Cache unit useful for secure execution

A cache unit is configured to retain a plurality of cache blocks, a plurality of owner indicators, and a plurality of validity marks. For each cache block of the plurality of cache blocks, there exists a corresponding owner indicator in the plurality of owner indicators. The owner indicator corresponding to a cache block identifies the entity that caused the cache block to be fetched into the cache unit. For each cache block of the plurality of cache blocks, there also exists a corresponding validity mark in the plurality of validity marks. The validity mark corresponding to a cache block indicates whether a validation process performed on the cache block upon fetching was successful. The cache unit may be useful for secure execution.
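
A minimal software model of that bookkeeping: every block carries the identity of the entity that caused its fill and a mark recording whether fill-time validation succeeded, and a hit is served only when the requester matches the owner and the mark is set. This is an illustrative simulation with assumed names, not the hardware design itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheBlock:
    tag: int
    data: bytes
    owner: int        # entity (e.g. enclave/process id) that fetched it
    valid_mark: bool  # True if fill-time validation succeeded

class SecureCache:
    def __init__(self, num_blocks: int):
        self.blocks: list[Optional[CacheBlock]] = [None] * num_blocks

    def fill(self, index: int, tag: int, data: bytes, owner: int) -> None:
        ok = self._validate(data)  # validation performed upon fetching
        self.blocks[index] = CacheBlock(tag, data, owner, ok)

    def read(self, index: int, tag: int, requester: int) -> Optional[bytes]:
        blk = self.blocks[index]
        if blk is None or blk.tag != tag:
            return None  # miss
        if blk.owner != requester or not blk.valid_mark:
            return None  # deny: wrong owner or failed fill-time validation
        return blk.data

    @staticmethod
    def _validate(data: bytes) -> bool:
        return True  # placeholder for an integrity/authentication check
```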

Method and apparatus for controlling cache line storage in cache memory
11636038 · 2023-04-25

A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low power operation a cache memory controller reduces power consumption of the cache memory containing only clean data. The cache memory controller controls refresh operation so that data refresh does not occur for clean-data-only banks, or so that the refresh rate is reduced for those banks. Partitions that store dirty data can also store clean data; other partitions, however, are designated for storing only clean data so that their refresh rate can be reduced or their refresh stopped for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
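
A sketch of the refresh policy this implies: lines are steered so that some banks hold only clean data, and in low-power operation the controller skips or slows refresh for those banks, since clean data can always be re-fetched from backing store if lost. The bank layout, steering rule, and refresh rates below are illustrative assumptions.

```python
NORMAL_REFRESH_MS = 64   # typical DRAM retention interval
SLOW_REFRESH_MS = 512    # reduced rate for clean-only banks

class Bank:
    def __init__(self, dirty_capable: bool):
        self.dirty_capable = dirty_capable
        self.lines = {}  # addr -> (data, dirty)

class PartitionedCache:
    def __init__(self, clean_banks: int, dirty_banks: int):
        self.banks = ([Bank(dirty_capable=False) for _ in range(clean_banks)]
                      + [Bank(dirty_capable=True) for _ in range(dirty_banks)])

    def insert(self, addr: int, data: bytes, dirty: bool) -> None:
        # Dirty lines must land in a dirty-capable bank; clean lines may
        # go to any bank (a real policy would prefer the clean-only banks).
        candidates = [b for b in self.banks if b.dirty_capable or not dirty]
        candidates[addr % len(candidates)].lines[addr] = (data, dirty)

    def refresh_interval_ms(self, bank: Bank, low_power: bool) -> float:
        # Clean-only banks can be refreshed less often in low power, or
        # not at all if their lines are invalidated on wake instead.
        if low_power and not bank.dirty_capable:
            return SLOW_REFRESH_MS
        return NORMAL_REFRESH_MS
```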

Preloaded content selection graph for rapid retrieval

The described technology is generally directed towards maintaining content selection graphs in an in-memory content selection graph data store in association with respective start times that indicate when the respective graphs become active. When a request to return content selection data is received, the active graph that corresponds to the request and the current time is accessed to obtain the requested content selection data. The response data can be prebuilt, e.g., in a set of active graphs for different client types, so that the response can be returned essentially as-is from the active graph in the set for that particular client type. A Redis cache can be used to maintain the various graph sets, including the active graph sets and graph sets that will become active at a future time.
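
A minimal sketch of that lookup: graph sets are stored with the time at which they become active, a request is served from the newest set whose start time is not in the future, and the per-client-type graph in that set holds a prebuilt response. The in-memory structure below stands in for the Redis-backed store; class and key names are assumptions.

```python
import bisect
import time

class GraphStore:
    def __init__(self):
        self._start_times = []  # sorted activation times
        self._graph_sets = []   # parallel list: client_type -> graph

    def preload(self, start_time: float, graph_set: dict) -> None:
        """Load a graph set ahead of time; it activates at start_time."""
        i = bisect.bisect(self._start_times, start_time)
        self._start_times.insert(i, start_time)
        self._graph_sets.insert(i, graph_set)

    def active_graph(self, client_type: str, now=None) -> dict:
        """Newest graph set whose start time is at or before `now`."""
        now = time.time() if now is None else now
        i = bisect.bisect_right(self._start_times, now) - 1
        if i < 0:
            raise LookupError("no graph set active yet")
        return self._graph_sets[i][client_type]

store = GraphStore()
store.preload(0.0, {"mobile": {"response": "prebuilt-mobile-v1"}})
store.preload(time.time() + 3600, {"mobile": {"response": "prebuilt-mobile-v2"}})
print(store.active_graph("mobile"))  # v1 now; v2 once its start time passes
```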

Using a machine learning module to dynamically determine tracks to prestage from storage to cache

Provided are a computer program product, system, and method for determining tracks to prestage into cache from storage. Information related to determining tracks to prestage from the storage to the cache in a stage group of sequential tracks, including a trigger track comprising a track number in the stage group at which to start prestaging tracks, is provided along with Input/Output (I/O) activity information to a machine learning module. A new trigger track in the stage group at which to start prestaging tracks is received from the machine learning module, which has processed the provided information. The trigger track is set to the new trigger track. Tracks are prestaged in response to processing an access request to the trigger track in the stage group.
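
An illustrative loop for that method: stage-group and I/O-activity features are fed to a machine learning module, which returns a new trigger track number; when an access reaches the trigger track, the remaining tracks of the stage group are prestaged. The model, storage, and cache interfaces here are hypothetical stand-ins.

```python
STAGE_GROUP_SIZE = 28  # illustrative number of sequential tracks per group

class PrestageManager:
    def __init__(self, model, storage, cache):
        self.model = model      # ML module: features -> trigger track number
        self.storage = storage
        self.cache = cache
        self.trigger_track = STAGE_GROUP_SIZE // 2  # initial trigger

    def update_trigger(self, io_features: dict) -> None:
        """Provide stage-group and I/O activity info; adopt the new trigger."""
        features = {"stage_group_size": STAGE_GROUP_SIZE,
                    "current_trigger": self.trigger_track,
                    **io_features}  # e.g. read rate, cache hit ratio
        self.trigger_track = self.model.predict(features)

    def on_access(self, group_start: int, track: int) -> None:
        """Prestage the rest of the group when the trigger track is hit."""
        if track - group_start == self.trigger_track:
            for t in range(track + 1, group_start + STAGE_GROUP_SIZE):
                self.cache.stage(self.storage.read_track(t))
```

An earlier trigger track starts prestaging sooner (helpful under fast sequential reads); a later one avoids polluting the cache when sequential access is unlikely to continue, which is the trade-off the module is learning.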

Non-stalling, non-blocking translation lookaside buffer invalidation
11663141 · 2023-05-30

A method includes receiving, by a memory management unit (MMU) for a processor core, an address translation request from the processor core and providing the address translation request to a translation lookaside buffer (TLB) of the MMU; generating, by matching logic of the TLB, an address transaction that indicates whether a virtual address specified by the address translation request hits the TLB; providing the address transaction to a general purpose transaction buffer; and receiving, by the MMU, an address invalidation request from the processor core and providing the address invalidation request to the TLB. The method also includes, responsive to a virtual address specified by the address invalidation request hitting the TLB, generating, by the matching logic, an invalidation match transaction and providing the invalidation match transaction to either the general purpose transaction buffer or a dedicated invalidation buffer.
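
A simplified model of that transaction flow: translation requests produce hit/miss address transactions into a general purpose buffer, while invalidation requests that hit the TLB produce invalidation match transactions routed to either that buffer or a dedicated invalidation buffer, so invalidations need not stall or block lookups. Structure and field names are illustrative.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:
    kind: str    # "translate" or "invalidate-match"
    vaddr: int
    hit: bool

class MMU:
    def __init__(self, use_dedicated_inval_buffer: bool = True):
        self.tlb = {}              # virtual page -> physical page
        self.txn_buffer = deque()  # general purpose transaction buffer
        self.inval_buffer = deque()  # dedicated invalidation buffer
        self.use_dedicated = use_dedicated_inval_buffer

    def translate(self, vaddr: int) -> None:
        # Matching logic: does the virtual address hit the TLB? (4 KiB pages)
        hit = (vaddr >> 12) in self.tlb
        self.txn_buffer.append(Transaction("translate", vaddr, hit))

    def invalidate(self, vaddr: int) -> None:
        # Only an invalidation that hits generates a match transaction.
        if (vaddr >> 12) in self.tlb:
            txn = Transaction("invalidate-match", vaddr, True)
            (self.inval_buffer if self.use_dedicated
             else self.txn_buffer).append(txn)
```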

Cache replacement mechanisms for speculative execution
11663130 · 2023-05-30

Described herein are systems and methods for cache replacement mechanisms for speculative execution. For example, some systems include a buffer, comprising entries that are each configured to store a cache line of data and a tag that includes an indication of the status of the cache line stored in the entry, in an integrated circuit that is configured to: responsive to a cache miss caused by a load instruction that is speculatively executed by a processor pipeline, load a cache line of data corresponding to the cache miss into a first entry of the buffer and update the tag of the first entry to indicate the status is speculative; responsive to the load instruction being retired by the processor pipeline, update the tag to indicate the status is validated; and, responsive to the load instruction being flushed from the processor pipeline, update the tag to indicate the status is cancelled.
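
A minimal state machine for that tagged buffer: a speculative load's miss fills an entry tagged speculative, retirement promotes it to validated, and a pipeline flush demotes it to cancelled so data fetched down a mispredicted path can be discarded. Entry layout and names are illustrative assumptions.

```python
from enum import Enum, auto

class Status(Enum):
    SPECULATIVE = auto()
    VALIDATED = auto()
    CANCELLED = auto()

class SpecBuffer:
    def __init__(self, num_entries: int):
        self.entries = [None] * num_entries  # each: [line_addr, data, Status]

    def fill_on_miss(self, idx: int, line_addr: int, data: bytes) -> None:
        """Miss caused by a speculatively executed load: tag SPECULATIVE."""
        self.entries[idx] = [line_addr, data, Status.SPECULATIVE]

    def on_retire(self, line_addr: int) -> None:
        """Load retired: the fill is architecturally confirmed."""
        for e in self.entries:
            if e and e[0] == line_addr and e[2] is Status.SPECULATIVE:
                e[2] = Status.VALIDATED  # safe to promote into the cache

    def on_flush(self, line_addr: int) -> None:
        """Load flushed: the fill came from a mispredicted path."""
        for e in self.entries:
            if e and e[0] == line_addr and e[2] is Status.SPECULATIVE:
                e[2] = Status.CANCELLED  # discard without disturbing the cache
```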

ENTITIES, SYSTEM AND METHODS PERFORMED THEREIN FOR HANDLING MEMORY OPERATIONS OF AN APPLICATION IN A COMPUTER ENVIRONMENT

Embodiments herein relate, e.g., to a method performed by a first entity for handling memory operations of an application in a computer environment. The first entity obtains position data associated with data of the application that has been fragmented into a number of positions in a physical memory. The position data indicates one or more positions of the number of positions in the physical memory. The first entity then provides, to a second entity, one or more indications of the one or more positions indicated by the position data, for prefetching data from the second entity using the one or more indications.
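
A sketch of that two-entity exchange, assuming the second entity is something like a remote memory pool: the first entity learns at which physical positions the application's data is fragmented, then hands those position indications to the second entity, which prefetches the fragments ahead of demand. Both interfaces are hypothetical.

```python
class FirstEntity:
    def __init__(self, second_entity):
        self.second = second_entity

    def handle_app(self, app_id: str, page_table: dict) -> None:
        # Position data: the physical positions the app's data occupies.
        positions = sorted(page_table.values())
        self.second.prefetch_hint(app_id, positions)

class SecondEntity:
    def __init__(self, physical_memory: dict):
        self.mem = physical_memory
        self.staged = {}

    def prefetch_hint(self, app_id: str, positions: list) -> None:
        # Pull the indicated fragments into staged storage ahead of demand.
        self.staged[app_id] = [self.mem[p] for p in positions]

mem = {0x1000: b"frag0", 0x8000: b"frag1", 0x3000: b"frag2"}
second = SecondEntity(mem)
first = FirstEntity(second)
first.handle_app("app-42", {"va0": 0x1000, "va1": 0x8000, "va2": 0x3000})
print(second.staged["app-42"])  # fragments staged before the app asks
```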

Cache memory management

Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and unmirrored caches of global memory, and of their respective cache slot pools, are dynamically balanced. Each of the mirrored and unmirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the one or more cache slots of the distinct size based on the anticipated I/O workloads. Cache pools can further be assigned the amount of distinctly sized cache slots based on expected service levels (SLs) of a customer, and also based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes of the anticipated I/O workloads.
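
A sketch of one plausible balancing rule: each pool of distinctly sized slots gets a share of global memory proportional to the predicted frequency of matching I/O request sizes, weighted by service level. The proportional formula and parameter names are illustrative assumptions, not the disclosed algorithm.

```python
SLOT_SIZES_KB = [8, 16, 64, 128]  # distinct slot sizes, one per cache pool

def balance_pools(total_kb: int,
                  size_freq: dict,   # slot size -> predicted request frequency
                  sl_weight: dict) -> dict:
    """Return slot counts per pool given anticipated I/O workloads."""
    weighted = {s: size_freq.get(s, 0.0) * sl_weight.get(s, 1.0)
                for s in SLOT_SIZES_KB}
    total_w = sum(weighted.values()) or 1.0
    counts = {}
    for size in SLOT_SIZES_KB:
        share_kb = total_kb * weighted[size] / total_w  # pool's memory share
        counts[size] = int(share_kb // size)            # slots of this size
    return counts

# E.g. a workload dominated by small reads, with 8 KiB I/O under a high SL:
print(balance_pools(1024 * 1024,
                    size_freq={8: 0.6, 16: 0.2, 64: 0.15, 128: 0.05},
                    sl_weight={8: 1.5}))
```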