Patent classifications
G06F2212/6046
Cache management method using object-oriented manner and associated microcontroller
The present invention provides a microcontroller, wherein the microcontroller includes a processor, a first memory and a cache controller. The first memory includes at least a working space. The cache controller is coupled to the first memory, and is arranged for managing the working space of the first memory, and dynamically loading at least one object from a second memory to the working space of the first memory in an object-oriented manner.
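The abstract describes caching at object granularity rather than at fixed line granularity: whole objects are loaded from backing storage into a managed working space on demand and evicted as units. A minimal sketch of that idea follows; the patent specifies no code, so every class, field, and value here is invented for illustration.

# Minimal sketch of object-granularity cache management: the controller
# tracks whole objects rather than fixed-size lines, loading each object
# from backing storage (the "second memory") into the working space
# (the "first memory") on first use. All names are illustrative.

class ObjectCacheController:
    def __init__(self, working_space_bytes, backing_store):
        self.capacity = working_space_bytes
        self.used = 0
        self.backing = backing_store          # dict: object_id -> bytes
        self.working = {}                     # object_id -> bytes (working space)
        self.lru = []                         # least-recently-used order

    def access(self, object_id):
        if object_id in self.working:         # hit: refresh recency
            self.lru.remove(object_id)
            self.lru.append(object_id)
            return self.working[object_id]
        return self._load(object_id)          # miss: load the whole object

    def _load(self, object_id):
        obj = self.backing[object_id]
        while self.used + len(obj) > self.capacity:   # evict whole objects
            victim = self.lru.pop(0)
            self.used -= len(self.working.pop(victim))
        self.working[object_id] = obj
        self.used += len(obj)
        self.lru.append(object_id)
        return obj

store = {"font_A": b"\x00" * 512, "icon_B": b"\xff" * 256}
ctrl = ObjectCacheController(working_space_bytes=600, backing_store=store)
ctrl.access("font_A")
ctrl.access("icon_B")    # evicts font_A: both objects exceed the working space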
JUST-IN-TIME DATA PROVISION BASED ON PREDICTED CACHE POLICIES
Systems and methods are provided for predicting a cache policy based on application input data. Inputs provided to an application and corresponding to a usage pattern of the application can be received. The inputs can be used with a predictive model to determine a cache policy corresponding to a datastore. The cache policy can include output data to be provisioned in the datastore and subsequently provided to a computing device in a just-in-time manner. The predictive model can be trained to output the cache policy based on input data received from a usage point, a provider point, or a datastore configuration.
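A minimal sketch of the idea shared by this family of abstracts: a predictive model maps usage-pattern inputs to a cache policy, which then drives provisioning of a datastore ahead of client requests. A simple frequency counter stands in for the trained model, and the policy format and all names are assumptions.

# Hypothetical sketch of just-in-time provisioning driven by a predicted
# cache policy. The "predictive model" here is a frequency model over
# usage-pattern inputs; a trained model would take its place.

from collections import Counter

class PolicyPredictor:
    """Maps observed usage patterns to a cache policy: which keys to
    pre-provision in the datastore, and for how long."""

    def __init__(self, top_k=2, ttl_seconds=300):
        self.counts = Counter()
        self.top_k = top_k
        self.ttl = ttl_seconds

    def observe(self, request_key):
        self.counts[request_key] += 1         # usage-point input

    def predict_policy(self):
        hot = [k for k, _ in self.counts.most_common(self.top_k)]
        return {"prefetch_keys": hot, "ttl_seconds": self.ttl}

def provision(datastore, policy, compute_fn):
    # Materialize predicted outputs ahead of the request ("just in time"
    # from the client's view: results are ready when asked for).
    for key in policy["prefetch_keys"]:
        datastore[key] = compute_fn(key)

predictor = PolicyPredictor()
for key in ["report_q1", "report_q1", "report_q2", "dashboard"]:
    predictor.observe(key)
cache = {}
provision(cache, predictor.predict_policy(), compute_fn=lambda k: f"rendered:{k}")
print(cache)    # the two most frequent keys are pre-rendered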
Multi-mode set associative cache memory dynamically configurable to selectively allocate into all or a subset of its ways depending on the mode
A cache that stores 2^J-byte cache lines has an array of 2^N sets, each of which holds 2^W ways and tags of X bits each. An input receives a Q-bit address, MA[(Q-1):0], having a tag MA[(Q-1):(Q-X)] and an index MA[(Q-X-1):J]; Q is at least (N+J+X-1). Set selection logic selects one set using the index and the tag's LSB; comparison logic compares all but the LSB of the address tag with all but the LSB of each tag in the selected set and indicates a hit if there is a match; allocation logic, when the comparison logic indicates there is not a match, allocates into any of the 2^W ways of the selected set when operating in a first mode, and into a subset of the 2^W ways of the selected set when operating in a second mode. The subset is limited based on bits of the tag portion.
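A small model of the two allocation modes. The address-field arithmetic follows the abstract's definitions (tag = MA[(Q-1):(Q-X)], index = MA[(Q-X-1):J], set selection from the index plus the tag LSB), but the parameter values and the particular way-subset rule are stand-ins, since the abstract says only that the subset is limited by tag bits.

# Illustrative model of set selection and mode-dependent allocation.
# Parameter values are invented: 64 B lines, 16 sets, 8-bit tags, 4 ways.

J, N, X, W = 6, 4, 8, 2

def fields(addr):
    q = N + J + X - 1            # minimum address width per the abstract
    tag = (addr >> (q - X)) & ((1 << X) - 1)
    index = (addr >> J) & ((1 << (q - X - J)) - 1)
    return tag, index

def select_set(tag, index):
    # Set selection uses the index together with the tag's LSB.
    return ((tag & 1) << (N - 1)) | index

def allocation_ways(tag, mode):
    if mode == "all-ways":
        return list(range(2 ** W))               # any of the 2^W ways
    # Restricted mode: a tag bit picks a subset (here, half) of the ways.
    half = 2 ** (W - 1)
    return [w + half * ((tag >> 1) & 1) for w in range(half)]

tag, index = fields(0x0001_2345)
print(select_set(tag, index), allocation_ways(tag, "restricted"))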
Fast unaligned memory access
Fast unaligned memory access. In accordance with a first embodiment of the present invention, a computing device includes a load queue memory structure configured to queue load operations and a store queue memory structure configured to queue store operations. The computing device also includes at least one bit configured to indicate the presence of an unaligned address component for an entry of said load queue memory structure, and at least one bit configured to indicate the presence of an unaligned address component for an entry of said store queue memory structure. The load queue memory may also include memory configured to indicate data forwarding of an unaligned address component from said store queue memory structure to said load queue memory structure.
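A sketch of the queue-entry bookkeeping the abstract describes: each load and store queue entry carries a bit marking an unaligned address component, and a load entry can record forwarding of that component from the store queue. Field names, the matching rule, and the boundary size are illustrative assumptions.

# Sketch of load/store queue entries with an "unaligned" flag, plus
# store-to-load forwarding of the unaligned component.

from dataclasses import dataclass

LINE = 8    # illustrative access granularity in bytes

@dataclass
class QueueEntry:
    addr: int
    size: int
    unaligned: bool = False            # access straddles a LINE boundary
    forwarded_from_store: bool = False

def enqueue(queue, addr, size):
    entry = QueueEntry(addr, size)
    # The unaligned bit marks accesses that cross a LINE-byte boundary.
    entry.unaligned = (addr % LINE) + size > LINE
    queue.append(entry)
    return entry

def try_forward(load, store_queue):
    # Forward the unaligned component from a matching older store.
    for store in reversed(store_queue):
        if store.unaligned and store.addr == load.addr and store.size >= load.size:
            load.forwarded_from_store = True
            return True
    return False

loads, stores = [], []
enqueue(stores, addr=0x1006, size=4)           # crosses an 8-byte boundary
ld = enqueue(loads, addr=0x1006, size=4)
print(ld.unaligned, try_forward(ld, stores))   # True True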
Coherency directory entry allocation based on eviction costs
A processor partitions a coherency directory into different regions for different processor cores and manages the number of entries allocated to each region based at least in part on monitored recall costs indicating expected resource costs for reallocating entries. Examples of monitored recall costs include a number of cache evictions associated with entry reallocation, the hit rate of each region of the coherency directory, and the like, or a combination thereof. By managing the entries allocated to each region based on the monitored recall costs, the processor ensures that processor cores associated with denser memory access patterns (that is, memory access patterns that more frequently access cache lines associated with the same memory pages) are assigned more entries of the coherency directory.
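A sketch of cost-driven entry allocation. The recall-cost formula below is made up; it merely combines the two monitored signals the abstract names (evictions caused by entry reallocation and per-region hit rate) so that cores with denser access patterns receive larger directory regions.

# Sketch of directory rebalancing from monitored recall costs.
# The cost weighting is an illustrative assumption.

def rebalance(regions, total_entries):
    # regions: core_id -> {"evictions": int, "hits": int, "lookups": int}
    def cost(stats):
        hit_rate = stats["hits"] / max(stats["lookups"], 1)
        return stats["evictions"] * (1 + hit_rate)   # denser patterns cost more to recall

    costs = {core: cost(s) for core, s in regions.items()}
    total_cost = sum(costs.values()) or 1
    # Cores whose entries are costlier to recall receive more entries.
    return {core: max(1, round(total_entries * c / total_cost))
            for core, c in costs.items()}

stats = {
    0: {"evictions": 120, "hits": 900, "lookups": 1000},   # dense access pattern
    1: {"evictions": 10,  "hits": 200, "lookups": 1000},
}
print(rebalance(stats, total_entries=1024))   # core 0 gets the larger region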
Just-in-time data provision based on predicted cache policies
Systems, methods, and computer readable mediums are provided for predicting a cache policy based on usage patterns. Usage pattern data can be received and used with a predictive model to determine a cache policy associated with a datastore. The cache policy can identify the configuration of predicted output data to be provisioned in the datastore and subsequently provided to a client in a just-in-time manner. The predictive model can be trained to output the cache policy based on usage pattern data received from a usage point, a provider point, or a datastore configuration.
Dynamic cache allocation
One embodiment provides a system. The system includes a processor, a cache memory, a performance monitoring unit (PMU), at least one virtual machine (VM), and cache sensitivity index (CSI) logic. The processor includes at least one core, and the at least one VM is to execute on at least one of the at least one core. The CSI logic is to allocate a cache portion to a selected VM, the allocated cache portion related to a determined cache portion that is determined based, at least in part, on a CSI related to the selected VM.
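A sketch of CSI-based allocation. The abstract does not define how the CSI is computed, so the sensitivity metric below (how much a VM's PMU-observed miss count falls when it is given more cache) is purely illustrative, as are the names and values.

# Sketch of partitioning a shared cache by per-VM cache sensitivity.

def cache_sensitivity_index(misses_small, misses_large):
    # A VM whose miss count drops sharply with more cache is sensitive.
    return (misses_small - misses_large) / max(misses_small, 1)

def allocate_cache(vms, total_ways):
    # vms: vm_id -> (misses with small allocation, misses with large allocation)
    csi = {vm: cache_sensitivity_index(a, b) for vm, (a, b) in vms.items()}
    total = sum(csi.values()) or 1
    return {vm: max(1, round(total_ways * s / total)) for vm, s in csi.items()}

vms = {"vm_db": (9000, 2000), "vm_batch": (5000, 4800)}
print(allocate_cache(vms, total_ways=20))   # the cache-sensitive VM gets more ways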
In-memory dataflow execution with dynamic placement of cache operations and action execution ordering
A dataflow execution environment is provided with dynamic placement of cache operations and action execution ordering. An exemplary method comprises: obtaining a current cache placement plan for a dataflow comprised of multiple operations and a corresponding current cache gain estimate; selecting an action to execute from a plurality of remaining dataflow actions based on a predefined policy; executing one or more operations in a lineage of the selected action and estimating an error as a difference in an observed execution time and an estimated execution time given by a cost model; obtaining an alternative cache placement plan for the dataflow following the execution in conjunction with a predefined new plan determination criteria being satisfied and a corresponding alternative cache gain estimate; implementing the alternative cache placement plan in conjunction with a predefined new plan implementation criteria being satisfied; and selecting a next action to execute from a plurality of remaining actions in the dataflow based on a predefined policy.
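A compressed sketch of the loop the abstract enumerates: choose the next action by a fixed policy, execute it, compare observed against model-estimated time, and adopt an alternative cache placement when the error and gain criteria are met. The runtimes, thresholds, and plan structures are all invented for illustration.

# Sketch of the re-planning loop: cheapest-estimated-first action policy,
# error-triggered re-planning, gain-gated plan adoption.

def run_dataflow(actions, estimate, new_plan, error_threshold=0.5):
    """actions: list of (name, simulated_runtime); estimate: name -> est. time;
    new_plan: callable returning (alternative placement, its gain estimate)."""
    placement, gain = {"cache": ["op_a"]}, 1.0        # current placement plan
    remaining = list(actions)
    while remaining:
        name, runtime = min(remaining, key=lambda a: estimate[a[0]])  # policy
        remaining.remove((name, runtime))
        observed = runtime                            # stand-in for real execution
        error = abs(observed - estimate[name]) / estimate[name]
        if error > error_threshold:                   # new-plan criterion met
            alt_placement, alt_gain = new_plan()
            if alt_gain > gain:                       # implementation criterion
                placement, gain = alt_placement, alt_gain
    return placement

acts = [("act_fast", 1.0), ("act_slow", 9.0)]
est = {"act_fast": 1.1, "act_slow": 3.0}              # model underestimates act_slow
print(run_dataflow(acts, est, new_plan=lambda: ({"cache": ["op_b"]}, 2.0)))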