Patent classifications
G06F2212/62
Mergeable counter system and method
A system includes a first counter configured to increment or decrement in response to a triggering event. The first counter is sized to overflow. The system also includes a second counter configured to increment or decrement in response to a triggering event. The first counter and the second counter are merged to form a third counter in response to detecting an overflow triggering event for the first counter. A merge bit indicative of whether the first counter and the second counter are merged changes value in response to merging the first counter and the second counter.
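As a purely illustrative aid, not part of the patent, the merge mechanism might be sketched in Python as follows; the class name, counter width, and API are all invented for the sketch:

```python
# Minimal sketch (not the patented hardware): two 8-bit counters that merge
# into one wider counter when the first overflows, tracked by a merge bit.
class MergeablePair:
    WIDTH = 8                       # assumed counter width
    MAX = (1 << WIDTH) - 1

    def __init__(self):
        self.first = 0              # counter sized to overflow
        self.second = 0             # supplies the high-order bits on merge
        self.merged = False         # the merge bit
        self.merged_value = 0

    def increment_first(self):      # the triggering event
        if self.merged:
            self.merged_value += 1
        elif self.first == self.MAX:            # overflow triggering event
            # Merge: treat (second, first) as one wider third counter.
            self.merged_value = (self.second << self.WIDTH) | self.first
            self.merged_value += 1
            self.merged = True                  # merge bit changes value
        else:
            self.first += 1

pair = MergeablePair()
for _ in range(300):
    pair.increment_first()
print(pair.merged, pair.merged_value)           # True 300
```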
VIRTUALIZED-IN-HARDWARE INPUT OUTPUT MEMORY MANAGEMENT
Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or hypervisor invocations by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
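The patent describes hardware; as a rough software analogy only, the grant-then-self-service idea might be modeled like this, with every name and structure here assumed rather than taken from the disclosure:

```python
# Illustrative model only: guests maintain device IO tables inside regions
# a hypervisor has granted them, with no hypervisor call on the map path.
class IOMMUModel:
    def __init__(self):
        self.grants = {}      # guest_id -> set of permitted region bases
        self.io_tables = {}   # (guest_id, device_va) -> physical address

    def hypervisor_grant(self, guest_id, region_base):
        self.grants.setdefault(guest_id, set()).add(region_base)

    def guest_map(self, guest_id, region_base, device_va, phys_addr):
        # Guests install mappings directly, but only within granted regions.
        if region_base not in self.grants.get(guest_id, set()):
            raise PermissionError("region not granted to this guest")
        self.io_tables[(guest_id, device_va)] = phys_addr

    def translate(self, guest_id, device_va):
        # Stands in for the IOMMU consulting the guest's IO table.
        return self.io_tables[(guest_id, device_va)]

iommu = IOMMUModel()
iommu.hypervisor_grant(guest_id=1, region_base=0x1000)
iommu.guest_map(1, 0x1000, device_va=0x2000, phys_addr=0x9F000)
print(hex(iommu.translate(1, 0x2000)))   # 0x9f000
```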
Secure firewall configurations
A kernel driver on an endpoint uses a process cache to provide a stream of events associated with processes on the endpoint to a data recorder. The process cache can usefully provide related information about processes, such as the name, type, or path of a process, to the data recorder through the kernel driver. Where a tamper protection cache or similarly secured repository is available, this secure information may also be provided to the data recorder for use in threat detection, forensic analysis, and so forth.
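A hedged sketch of the enrichment step, not the vendor's implementation: a process cache keyed by PID supplies name, type, and path when an event is handed to the recorder. All structures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ProcessInfo:
    name: str
    ptype: str
    path: str

process_cache = {}   # pid -> ProcessInfo (invented structure)
recorder = []        # stands in for the data recorder

def on_process_event(pid, event):
    info = process_cache.get(pid)
    record = {"pid": pid, "event": event}
    if info:                       # enrich with cached process metadata
        record.update(name=info.name, type=info.ptype, path=info.path)
    recorder.append(record)

process_cache[101] = ProcessInfo("sshd", "daemon", "/usr/sbin/sshd")
on_process_event(101, "network_connect")
print(recorder[-1])
```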
Methods and apparatus to facilitate read-modify-write support in a coherent victim cache with parallel data paths
Methods, apparatus, systems, and articles of manufacture are disclosed to facilitate read-modify-write support in a coherent victim cache with parallel data paths. An example apparatus includes a random-access memory configured to be coupled to a central processing unit via a first interface and a second interface, the random-access memory configured to obtain, via a snoop interface, a read request indicating a first address to read; an address encoder coupled to the random-access memory, the address encoder to, when the random-access memory indicates a hit for the read request, generate a second address corresponding to a victim cache based on the first address; and a multiplexer coupled to the victim cache to transmit a response including data obtained from the second address of the victim cache.
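The claimed apparatus is hardware; purely as a software illustration of the hit-to-victim-address-to-response path, with every table and function invented here:

```python
# Sketch: a snoop read that, on a hit, is re-encoded into a victim-cache
# address, whose data is then multiplexed back as the response.
main_tags = {0x40: True}                 # addresses currently tracked as hits
victim_cache = {0x04: b"victim-line"}

def encode_victim_address(addr):
    # Hypothetical address encoder: maps a CPU address to a victim-cache slot.
    return (addr >> 4) & 0xF

def snoop_read(addr):
    if main_tags.get(addr):                  # RAM indicates a hit
        vaddr = encode_victim_address(addr)  # generate the second address
        return victim_cache[vaddr]           # mux selects victim-cache data
    return None                              # miss path omitted in sketch

print(snoop_read(0x40))   # b'victim-line'
```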
PROVIDING ROLLING UPDATES OF DISTRIBUTED SYSTEMS WITH A SHARED CACHE
Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a data item key corresponding to a request from a user profile operating on a media player and receiving a version identifier corresponding to a first version of an application operating on the media player. It is determined that a shared cache includes a first value and a second value for the data item key. A key component corresponding to the user profile is generated. Both the generated key component and the data item key are provided to the shared cache, and the first value of the data item, as stored in the shared cache, is received. The first value of the first version of the data item is then updated.
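A minimal sketch of the versioned-key idea, assuming a dict-backed shared cache; the key format and names are invented, not taken from the patent:

```python
# Two application versions coexist during a rolling update because each
# value is stored under (data item key, version-specific key component).
shared_cache = {
    ("playback_settings", "v1"): {"quality": "hd"},
    ("playback_settings", "v2"): {"quality": "uhd", "hdr": True},
}

def get_for_version(data_item_key, version_id):
    # Compose the key component with the data item key, then look up.
    return shared_cache[(data_item_key, version_id)]

def update_value(data_item_key, version_id, value):
    shared_cache[(data_item_key, version_id)] = value

print(get_for_version("playback_settings", "v1"))
update_value("playback_settings", "v1", {"quality": "hd", "hdr": False})
```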
EFFICIENT WORK UNIT PROCESSING IN A MULTICORE SYSTEM
Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of a buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
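A sketch of the overlap, assuming a thread pool stands in for the prefetch path described in the abstract; the helpers below are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(work_unit):          # stands in for a non-coherent-memory read
    return f"data-for-{work_unit}"

def process(work_unit, data):
    return f"processed {work_unit} with {data}"

work_units = ["wu0", "wu1", "wu2"]
with ThreadPoolExecutor(max_workers=1) as prefetcher:
    pending = prefetcher.submit(fetch, work_units[0])
    for i, wu in enumerate(work_units):
        data = pending.result()                  # prefetched data is ready
        if i + 1 < len(work_units):              # identify the next work unit
            pending = prefetcher.submit(fetch, work_units[i + 1])
        print(process(wu, data))                 # overlaps the next prefetch
```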
CONTENDED LOCK REQUEST ELISION SCHEME
A system and method for network traffic management between multiple nodes are described. A computing system includes multiple nodes connected to one another. When a home node determines a number of nodes requesting read access for a given data block assigned to the home node exceeds a threshold and a copy of the given data block is already stored at a first node of the multiple nodes in the system, the home node sends a command to the first node. The command directs the first node to forward a copy of the given data block to the home node. The home node then maintains a copy of the given data block and forwards copies of the given data block to other requesting nodes until the home node detects a write request or a lock release request for the given data block.
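A rough model of the forwarding decision, not the hardware protocol itself; the threshold value and class layout are assumptions:

```python
THRESHOLD = 2   # assumed contention threshold

class HomeNode:
    def __init__(self):
        self.readers = 0
        self.local_copy = None     # copy held at the home node
        self.owner = None          # first node already holding the block

    def read_request(self, requester, fetch_from_owner):
        self.readers += 1
        if self.local_copy is None and self.readers > THRESHOLD and self.owner:
            # Command the owning node to forward a copy back to home.
            self.local_copy = fetch_from_owner()
        if self.local_copy is not None:
            return self.local_copy      # home forwards copies directly
        return fetch_from_owner()       # low contention: normal path

    def write_or_release(self):
        self.local_copy = None          # stop forwarding on write/lock release

home = HomeNode()
home.owner = "node1"
for r in ("n2", "n3", "n4"):
    print(home.read_request(r, lambda: "block-A"))
home.write_or_release()
```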
Reducing Memory Access Latencies During Ray Traversal
After deferring traversal for a second fiber, a hierarchical data structure is traversed using a first fiber while data for the second fiber is prefetched. Context is then switched to the second fiber, and the hierarchical data structure is traversed using the second fiber while data for another fiber is prefetched.
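Purely as an illustration, with Python generators standing in for fibers and the node names invented, the traverse-while-prefetching pattern might look like this:

```python
def traversal_fiber(name, nodes):
    for node in nodes:
        yield f"{name} visits {node}"   # each yield is a switch point

prefetched = set()
def prefetch(node):                     # stands in for a memory prefetch
    prefetched.add(node)

steps_a = traversal_fiber("A", ["root", "left", "leaf0"])
steps_b = traversal_fiber("B", ["root", "right", "leaf3"])

# Defer fiber B, traverse with fiber A while B's data is prefetched, then
# switch context to B and start prefetching for yet another fiber.
prefetch("B:root")
for step in steps_a:
    print(step)          # fiber A traverses while B's prefetch is in flight
prefetch("C:root")       # prefetch for the next fiber in turn
for step in steps_b:
    print(step)          # context now switched to fiber B
```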
Caching of metadata for deduplicated LUNs
Efficient processing of user data read requests in a deduplicated data storage system places the metadata for the most frequently requested data in data structures and locations in the system hierarchy where the metadata will be most rapidly available. Because the total amount of such metadata makes storing all of it in high-speed memory expensive, the system and method described use both the temporal and the spatial characteristics of user system activity in any epoch to adjust the contents of the metadata cache, responding to the dynamics of a multi-user or multi-application environment in which the storage system learns of the time-changing mix of operations only by observing the individual requests. A history record is used to promote metadata from the slow memory to the fast memory, and the selection process may be adjusted based on the address-space activity.
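A sketch under assumptions: a per-epoch history record counts requests per address range, and metadata crossing an invented threshold is promoted from slow to fast memory, evicting the coldest resident entry:

```python
from collections import Counter

FAST_CAPACITY = 2        # assumed fast-memory capacity
fast_cache = {}          # address range -> metadata (high-speed memory)
slow_store = {r: f"meta-{r}" for r in ("r0", "r1", "r2", "r3")}
history = Counter()      # per-epoch request history record

def read_metadata(addr_range):
    history[addr_range] += 1
    if addr_range in fast_cache:
        return fast_cache[addr_range]
    meta = slow_store[addr_range]
    # Promote frequently requested metadata, evicting the coldest entry.
    if history[addr_range] >= 2:
        if len(fast_cache) >= FAST_CAPACITY:
            coldest = min(fast_cache, key=lambda r: history[r])
            del fast_cache[coldest]
        fast_cache[addr_range] = meta
    return meta

for r in ("r0", "r1", "r0", "r1", "r2"):
    read_metadata(r)
print(sorted(fast_cache))   # ['r0', 'r1']
```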
PARALLEL PROCESSING DEVICE AND MEMORY CACHE CONTROL METHOD
A memory cache control method for a parallel processing device having a plurality of nodes, wherein a first node stores first data as a client cache in a first storage device and switches the use of the stored first data to a server cache; and a second node stores the first data in a second storage device, which is slower than the first storage device, records data management information indicating that the first data is stored in the first storage device of the first node, and, when a transmission request for the first data is received from a third node, refers to the data management information and, when the first data is stored in the first storage device of the first node and has been switched to the server cache, instructs the first node to transmit the first data to the third node.
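A toy model of the client-to-server cache switch; the protocol details below are assumed, not taken from the patent text:

```python
class Cluster:
    def __init__(self):
        self.node_caches = {}     # node -> {data_id: cache role}
        self.management = {}      # data_id -> node holding the fast copy

    def client_cache(self, node, data_id):
        self.node_caches.setdefault(node, {})[data_id] = "client"
        self.management[data_id] = node   # second node records the location

    def switch_to_server_cache(self, node, data_id):
        self.node_caches[node][data_id] = "server"

    def request(self, data_id, requester):
        # The managing node refers to its data management information.
        node = self.management.get(data_id)
        if node and self.node_caches[node].get(data_id) == "server":
            return f"{node} transmits {data_id} to {requester}"
        return f"served from slow storage to {requester}"

c = Cluster()
c.client_cache("node1", "blockX")
print(c.request("blockX", "node3"))        # slow path: still a client cache
c.switch_to_server_cache("node1", "blockX")
print(c.request("blockX", "node3"))        # node1 instructed to transmit
```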