G06F2212/264

Client voting-inclusive in-memory data grid (IMDG) cache management

A client application cache access profile is created that documents accesses over time to data cached within an in-memory data grid (IMDG) cache by each of a set of client applications that utilize the IMDG. A new data request is received from one of the set of client applications that includes a client-application data caching vote that specifies whether the requesting client application wants the newly-requested data cached. In response to an IMDG cache data miss related to the new data request, a determination is made as to whether to cache the newly-requested data based upon analysis of the client application cache access profile of the client application from which the new data request was received, IMDG system performance cache costs of caching the newly-requested data, and the client-application data caching vote. The newly-requested data is cached within the IMDG cache in response to determining to cache the newly-requested data.
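The admission decision described above can be sketched as a small scoring function. This is a minimal illustration, not the patented method: the `AccessProfile` class, the 0.3 vote weight, and the `cost_budget` cutoff are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class AccessProfile:
    """Per-client record of how often data cached for the client was re-read."""
    hits: int = 0       # cache hits attributed to this client's cached data
    requests: int = 0   # total data requests observed from this client

    def reuse_rate(self) -> float:
        return self.hits / self.requests if self.requests else 0.0


def should_cache(profile: AccessProfile, cache_cost: float,
                 client_vote: bool, cost_budget: float = 0.8) -> bool:
    """Decide, on an IMDG cache miss, whether to admit the newly-requested data.

    Combines the client's historical reuse rate, the system-level cost of
    caching the data, and the client's caching vote (weights are illustrative).
    """
    if cache_cost > cost_budget:   # caching is too expensive regardless of vote
        return False
    score = profile.reuse_rate() + (0.3 if client_vote else -0.3)
    return score > 0.5
```

A client whose cached data is rarely re-read can thus be outvoted by its own access history, while a high system cost vetoes caching outright.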

REDUCED PAGE LOAD TIME UTILIZING CACHE STORAGE

A cache that can be stored in a user partitioned region of storage and utilized to reduce the amount of time required to present content responsive to content requests is described. A request for content associated with a region of a user interface can be received, and data corresponding to a list item in a cache can be accessed. Content associated with the data can be presented in the region of the user interface in the same manner as the most recent presentation of the content. At a time subsequent to when the content is initially presented in the region, new data associated with the list item can be retrieved. In examples where the new data corresponds to updated data, the presentation can be modified based partly on the updated data, and the new data can be written to the cache in a location corresponding to the list item.
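The render-then-refresh flow can be sketched as follows; `ListItemCache` and the `fetch` callback are hypothetical names used only for this illustration.

```python
class ListItemCache:
    """Keeps the most recently presented data for each list item in a region."""

    def __init__(self):
        self._slots = {}   # list-item id -> most recently presented data

    def render(self, item_id, fetch):
        cached = self._slots.get(item_id)
        shown = cached             # present the cached content immediately
        fresh = fetch(item_id)     # then retrieve new data for the list item
        if fresh != cached:        # updated data: modify the presentation and
            self._slots[item_id] = fresh   # rewrite the item's cache location
            shown = fresh
        return shown
```

The point of the pattern is that the cached presentation appears without waiting on `fetch`; here the refresh is inlined for brevity, whereas a real UI would run it asynchronously after the initial paint.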

Dynamic structural management of a distributed caching infrastructure

Embodiments of the present invention provide a method, system and computer program product for the dynamic structural management of an n-Tier distributed caching infrastructure. In an embodiment of the invention, a method of dynamic structural management of an n-Tier distributed caching infrastructure includes establishing a communicative connection to a plurality of cache servers arranged in respective tier nodes in an n-Tier cache, collecting performance metrics for each of the cache servers in the respective tier nodes of the n-Tier cache, identifying a characteristic of a specific cache resource in a corresponding one of the tier nodes of the n-Tier cache crossing a threshold, and dynamically structuring a set of cache resources including the specific cache resource to account for the identified characteristic.
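The threshold-crossing step can be sketched as a scan over collected metrics. The nested-dict shapes of `metrics` and `thresholds` are assumptions for the example, not the patent's data model.

```python
def find_stressed_resources(metrics, thresholds):
    """Return (tier, resource) pairs whose collected performance metric
    crosses its threshold; resources without a threshold are unbounded.

    metrics:    {tier_node: {resource_name: measured_value}}
    thresholds: {resource_name: limit}
    """
    return [(tier, resource)
            for tier, resources in metrics.items()
            for resource, value in resources.items()
            if value > thresholds.get(resource, float("inf"))]
```

The pairs this returns would feed the restructuring step, e.g. adding capacity to or rebalancing the flagged tier node.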

FAULT TOLERANT CLUSTER DATA HANDLING
20220179790 · 2022-06-09

The described technology is generally directed towards fault tolerant cluster data handling techniques, as well as devices and computer readable media configured to perform the disclosed fault tolerant cluster data handling techniques. Nodes in a computing cluster can be configured to generate wire format resources corresponding to operating system resources. A wire format resource can comprise a cache key and a hint information to locate data, such as a file, corresponding to the operating system resource. The wire format resource can be stored in a resource cache along with a pointer that points to the operating system resource. The wire format resource can also be provided to client devices. Nodes in the computing cluster can be configured to receive and process client instructions that include wire format resources, as well as to use hint information to re-allocate data associated with a wire format resource.
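A wire format resource and its cache can be sketched as below; the class names and the key/hint contents are illustrative assumptions, not the patent's wire encoding.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class WireFormatResource:
    cache_key: str   # cluster-wide identifier for the operating system resource
    hint: str        # hint information used to locate (or re-allocate) the data


class ResourceCache:
    """Stores each wire format resource with a pointer to the OS resource."""

    def __init__(self):
        self._entries = {}

    def store(self, wire, os_resource):
        # keep the wire form alongside a pointer to the live OS resource
        self._entries[wire.cache_key] = (wire, os_resource)

    def lookup(self, cache_key):
        return self._entries.get(cache_key)
```

Because the wire form carries both the key and the locating hint, a client can hand it back in an instruction, and a surviving node can use the hint to re-allocate the data after a fault.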

Target and initiator mode configuration of tape drives for data transfer between source and destination tape drives

Systems and methods are described that substantially or fully remove a commanding server from a data path (e.g., as part of a data migration, disaster recovery, and/or the like) to improve data movement performance and make additional bandwidth available for other system processes and the like. Broadly, a network interface card (e.g., a host bus adapter (HBA)) of a tape drive may be configured both in a target mode, to allow the tape drive to receive control commands from a server to request and/or otherwise obtain data from one or more source tape drives, and in an initiator mode, to allow the tape drive to send commands to the one or more tape drives specified in the commands received from the server to read data from and/or write data to those tape drives.
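The dual-mode arrangement can be caricatured in a toy model: the destination drive accepts a control command naming the source (target mode) and then pulls the data itself (initiator mode), so no data blocks transit the server. The class and method names are invented for the sketch.

```python
class TapeDrive:
    """Toy model of a drive whose HBA is configured in both target mode
    (accepting a server's control command) and initiator mode (issuing
    reads to a peer drive), keeping the server off the data path."""

    def __init__(self, name, blocks=None):
        self.name = name
        self.blocks = list(blocks or [])

    def read_all(self):
        return list(self.blocks)

    # target mode: the server's command only names the source drive;
    # no data flows through the server itself
    def handle_copy_command(self, source):
        data = source.read_all()   # initiator mode: read from the peer drive
        self.blocks.extend(data)
        return len(data)
```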

Data storage method and apparatus, and server

This disclosure relates to a data storage method and apparatus, and a server. The method includes receiving, by a first server, a write instruction sent by a second server, storing target data in a cache of a controller, detecting a read instruction for the target data, and storing the target data in a storage medium of a non-volatile memory based on the read instruction. In other words, when the second server needs to write the target data to the first server, the target data is not only written to the cache of the first server, but also written to the storage medium of the first server. This can ensure that the data in the cache is written to the storage medium promptly.
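The write-to-cache, persist-on-read behavior can be sketched as follows; `FirstServer` and its dictionary-backed cache and storage are stand-ins assumed for the example.

```python
class FirstServer:
    """Toy model: writes land in the controller cache; detecting a read of
    the target data triggers persisting it to the storage medium."""

    def __init__(self):
        self.cache = {}     # cache of the controller
        self.storage = {}   # storage medium of the non-volatile memory

    def handle_write(self, key, data):
        # the write instruction from the second server fills the cache
        self.cache[key] = data

    def handle_read(self, key):
        if key in self.cache:              # read detected: flush to storage
            self.storage[key] = self.cache[key]
            return self.cache[key]
        return self.storage.get(key)
```

Using the read as the persistence trigger is what gives the promptness guarantee the abstract claims: data that is actually consumed is guaranteed to have reached the storage medium.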

Intelligent control of cache
11275691 · 2022-03-15

A method and system for intelligent control of read-ahead cache for a client application is provided. The method receives an application profile of the client application, the application profile indicating thresholds for determining a plurality of access patterns of the client application. The method also determines a current access pattern of the client application based on the thresholds and a historical access pattern of the client application. The current access pattern and the historical access pattern are each one of the plurality of access patterns. The method further dynamically enables and disables read-ahead cache for the client application based on a transition between the historical access pattern and the current access pattern.
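A minimal sketch of the classify-then-toggle loop, assuming a simple two-pattern scheme (sequential vs. random) with a profile-supplied threshold; the pattern names, the offset-based classifier, and the 0.7 default are all illustrative.

```python
SEQUENTIAL, RANDOM = "sequential", "random"


def classify(offsets, seq_threshold=0.7):
    """Label an access pattern from recent read offsets: the fraction of
    consecutive-offset reads is compared against a profile threshold."""
    if len(offsets) < 2:
        return RANDOM
    steps = sum(1 for a, b in zip(offsets, offsets[1:]) if b == a + 1)
    return SEQUENTIAL if steps / (len(offsets) - 1) >= seq_threshold else RANDOM


def update_read_ahead(enabled, historical, current):
    """Toggle read-ahead only on a transition between access patterns:
    enable for a move into sequential access, disable for a move out."""
    if historical != current:
        return current == SEQUENTIAL
    return enabled
```

Keying the toggle to transitions rather than to each classification avoids thrashing the read-ahead setting on every request.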

Caching assets in a multiple cache system

A computing device includes a volatile memory that includes a first cache, a non-volatile storage that includes a second cache, and a cache service. Responsive to a cache miss for an asset, the cache service retrieves the asset and writes it to the first cache and not the second cache. The cache service reads the asset from the first cache responsive to requests for the asset until the asset is evicted from the first cache or promoted to the second cache. The cache service promotes the asset to the second cache upon determining that a set of one or more criteria is satisfied, including a predefined number of cache hits for the asset while it is in the first cache. The cache service reads the asset from the second cache responsive to requests for the asset until the asset is evicted from the second cache.
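The promotion policy can be sketched with a hit counter per asset; this is a minimal illustration assuming the hit count is the only promotion criterion, and the names `l1`/`l2` and the `fetch` callback are invented for the example.

```python
class TwoLevelCache:
    """Volatile first cache and non-volatile second cache; an asset is
    promoted to the second cache after a set number of first-cache hits."""

    def __init__(self, promote_after=3):
        self.l1, self.l2 = {}, {}   # first (memory) and second (storage) caches
        self._hits = {}
        self.promote_after = promote_after

    def get(self, key, fetch):
        if key in self.l2:                 # served from the second cache
            return self.l2[key]
        if key in self.l1:
            self._hits[key] += 1
            if self._hits[key] >= self.promote_after:   # criteria satisfied
                self.l2[key] = self.l1.pop(key)         # promote the asset
                return self.l2[key]
            return self.l1[key]
        value = fetch(key)      # cache miss: write to the first cache only
        self.l1[key] = value
        self._hits[key] = 0
        return value
```

Requiring repeated first-cache hits before promotion keeps one-off assets out of the non-volatile cache, reserving its write endurance and capacity for demonstrably hot data.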

Computing device
11061566 · 2021-07-13

A computing device includes a first processor; a second processor; a network interface communicably coupling the first and second processors to a network; an interface bus communicably coupling the first processor to the second processor; a first interface communicably coupling the second processor to the interface bus; a second interface communicably coupling the second processor to the interface bus, the second interface being separate from the first interface, wherein the second interface is configured to provide the second processor with management functionality over one or more hardware components of the computing device; and storage means communicably coupled to the second processor, wherein the second processor regulates access of the first processor to the storage means.

Methods and systems for ranking, filtering and patching detected vulnerabilities in a networked system
11075939 · 2021-07-27

Systems and methods for determining priority levels to process vulnerabilities associated with a networked computer system can include a data collection engine receiving a plurality of specification profiles, each defining one or more specification variables of the networked computer system or a respective asset. The data collection engine can receive, from a vulnerability scanner, vulnerability data indicative of a vulnerability associated with the networked computer system. A profiling engine can interrogate a computing device of the networked computer system and receive one or more respective profiling parameters from that computing device. A ranking engine can compute a priority ranking value of the computing device based on the specification variables, the vulnerability data and the profiling parameters. The priority ranking value associated with the computing device can be indicative of a priority level, compared to other computing devices of the networked computer system, for patching a vulnerability affecting that computing device.
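The ranking computation can be sketched as a weighted score over the three inputs the abstract names. The specific variables (`criticality`, `internet_facing`), weights, and score formula are hypothetical; the abstract does not disclose the actual ranking function.

```python
def priority_ranking(spec_vars, vuln_severity, profiling):
    """Illustrative priority score combining specification variables
    (asset criticality), scanner-reported severity, and profiling
    parameters (exposure) obtained by interrogating the device."""
    criticality = spec_vars.get("criticality", 0.5)     # 0..1, from profile
    exposure = 1.0 if profiling.get("internet_facing") else 0.4
    return round(criticality * vuln_severity * exposure, 3)


def patch_order(devices):
    """Order devices for patching, highest priority ranking value first."""
    return sorted(devices, key=lambda d: d["score"], reverse=True)
```

With a formula of this shape, an internet-facing critical asset outranks an internal one carrying the same scanner severity, which is the comparative prioritization the abstract describes.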