
HIERARCHICAL PRE-FETCH PIPELINING IN A HYBRID MEMORY SERVER

A method, hybrid server system, and computer program product prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system is received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.
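The flow the abstract describes can be sketched in a few lines: a server receives prefetch requests from accelerator systems, reads the requested slice of a server-resident dataset from its memory system, and routes the result back to the requesting accelerator. All names and the request tuple shape below are illustrative assumptions, not taken from the patent.

```python
# Stand-in for datasets residing in the server's memory system.
DATASETS = {"weather": list(range(100))}

def prefetch(requests):
    """Serve (accelerator_id, dataset, offset, length) prefetch requests.

    Returns a mapping of accelerator id -> list of prefetched data sets,
    so each accelerator receives the data satisfying its own requests.
    """
    responses = {}
    for accel_id, dataset, offset, length in requests:
        data = DATASETS[dataset][offset:offset + length]
        responses.setdefault(accel_id, []).append(data)
    return responses
```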

METHOD AND SYSTEM FOR DATA PLACEMENT IN A HARD DISK DRIVE BASED ON ACCESS FREQUENCY FOR IMPROVED IOPS AND UTILIZATION EFFICIENCY
20190391748 · 2019-12-26

One embodiment facilitates a write operation in a shingled magnetic recording device. During operation, the system receives, by the storage device, data to be written to the storage device and access-frequency information associated with the data, wherein the storage device includes a plurality of concentric tracks. The system distributes a plurality of spare sector pools among the plurality of concentric tracks. The system selects a track onto which to write the data based on the access-frequency information, wherein data with a highest access-frequency is written to an outer track. The system appends the data at a current write pointer location of the selected track, thereby facilitating an enhanced data placement for subsequent access in the storage device.
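A minimal sketch of the placement rule above, assuming tracks are indexed outermost-first (outer tracks deliver higher IOPS on a disk) and that access frequency is normalized against the hottest data seen. The track count and mapping function are assumptions for illustration.

```python
NUM_TRACKS = 4  # track 0 is the outermost (highest-throughput) track

def select_track(access_freq, max_freq):
    """Map access frequency to a track; the hottest data goes outermost."""
    hotness = access_freq / max_freq              # 1.0 = most frequently accessed
    return min(int((1.0 - hotness) * NUM_TRACKS), NUM_TRACKS - 1)

def append_write(tracks, write_ptrs, data, access_freq, max_freq):
    """Append data at the current write pointer of the selected track."""
    t = select_track(access_freq, max_freq)
    tracks[t].append(data)                        # sequential append, as in SMR
    write_ptrs[t] += 1
    return t
```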

Reduced page load time utilizing cache storage

A cache that can be stored in a user partitioned region of storage and utilized to reduce the amount of time required to present content responsive to content requests is described. A request for content associated with a region of a user interface can be received and data corresponding to a list item in a cache can be accessed. Content associated with the data can be presented in the region of the user interface via a same presentation as a most recent presentation of the content. At a time subsequent to when the content is initially presented in the region, new data associated with the list item can be retrieved. In examples where the new data corresponds to updated data, the presentation can be modified based partly on the updated data and the new data can be written to the cache in a location corresponding to the list item.
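The present-then-refresh pattern described above can be condensed into a small sketch: show the cached item immediately in its most recent form, then fetch new data and write it back to the item's slot only if it changed. The function and cache shape are hypothetical.

```python
def render(cache, key, fetch_fresh):
    """Present cached list-item content, then refresh it in place."""
    shown = cache.get(key)        # same presentation as the most recent one
    fresh = fetch_fresh(key)      # new data retrieved after initial display
    if fresh != shown:            # only updated data modifies the presentation
        cache[key] = fresh        # written to the location for this list item
    return shown, cache[key]
```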

READ CACHING WITH EARLY REFRESH FOR EVENTUALLY-CONSISTENT DATA STORE
20190266097 · 2019-08-29

A technique for managing a read cache in an eventually-consistent data store includes, in response to a read request for a specified data element, receiving the specified data element from the read cache as well as a remaining TTL (time to live) of the data element, as indicated by a timer for that data element in the read cache. If the remaining TTL falls below a predetermined value, the technique triggers an early refresh of the specified data element, prior to its expiration. Consequently, later-arriving read requests to the same data element that arrive before the data element has been refreshed experience cache hits, thus avoiding the need to perform their own time-consuming refresh operations.
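A simple sketch of the early-refresh idea, with an assumed API: each cached entry carries an expiry time, and a read whose remaining TTL falls below a threshold reloads the entry before it expires, so later reads still hit the cache. The class and parameter names are illustrative.

```python
import time

class EarlyRefreshCache:
    """Read cache whose entries are refreshed early, before TTL expiry."""

    def __init__(self, ttl, early, loader):
        self.ttl = ttl          # full time-to-live of an entry (seconds)
        self.early = early      # refresh when remaining TTL drops below this
        self.loader = loader    # fetches a value from the backing data store
        self.store = {}         # key -> (value, expiry_time)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        value, expiry = self.store.get(key, (None, 0.0))
        remaining = expiry - now
        if remaining < self.early:    # miss, expired, or remaining TTL low:
            value = self.loader(key)  # refresh prior to expiration
            self.store[key] = (value, now + self.ttl)
        return value
```

Passing `now` explicitly makes the TTL logic testable without waiting on wall-clock time.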

METHODS AND SYSTEMS FOR RANKING, FILTERING AND PATCHING DETECTED VULNERABILITIES IN A NETWORKED SYSTEM
20190260796 · 2019-08-22

Systems and methods for determining priority levels to process vulnerabilities associated with a networked computer system can include a data collection engine receiving a plurality of specification profiles, each defining one or more specification variables of the networked computer system or a respective asset. The data collection engine can receive, from a vulnerability scanner, vulnerability data indicative of a vulnerability associated with the networked computer system. A profiling engine can interrogate a computing device of the networked computer system, and receive one or more respective profiling parameters from that computing device. A ranking engine can compute a priority ranking value of the computing device based on the profile specification variables, the vulnerability data and the profiling parameters. The priority ranking value associated with the computing device can be indicative of a priority level, compared to other computing devices of the computer network, for patching a vulnerability affecting that computing device.
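One plausible shape for the ranking step is a weighted combination of the three inputs the abstract names: specification variables, scanner vulnerability data, and profiling parameters. The field names, scales (0-10), and weights below are assumptions for illustration, not the patented formula.

```python
def priority_rank(spec, vuln, profile, weights=(0.4, 0.4, 0.2)):
    """Combine the three data sources into one priority ranking value."""
    w_crit, w_sev, w_exp = weights
    score = (w_crit * spec["criticality"]    # from the specification profile
             + w_sev * vuln["cvss"]          # from the vulnerability scanner
             + w_exp * profile["exposure"])  # from interrogating the device
    return round(score, 2)
```

Devices can then be sorted by this value in descending order to obtain a patching priority across the networked system.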

DATA ACCESS MANAGEMENT IN A HYBRID MEMORY SERVER

One or more embodiments manage access to data by accelerator systems in an out-of-core processing environment. In one embodiment, a request from an accelerator system is received for access to a given data set. An access context associated with the given data set is determined. Based on the access context that has been determined, the accelerator system is dynamically configured to one of: access the given data set directly from the server system; locally store a portion of the given data set in a memory; or locally store all of the given data set in the memory.
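The three configurations the abstract enumerates suggest a small decision function. The context fields and thresholds below are hypothetical; the abstract does not say what the access context contains.

```python
def configure_access(context):
    """Pick one of the three access modes from a determined access context."""
    if context["write_heavy"] or not context["cacheable"]:
        return "DIRECT"           # access the data set directly on the server
    if context["working_set"] <= context["local_mem"]:
        return "CACHE_ALL"        # locally store all of the data set
    return "CACHE_PARTIAL"        # locally store a portion of the data set
```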

Memory sharing for working data using RDMA

A server system may include a cluster of multiple computers that are networked for high-speed data communications. Each of the computers has a remote direct memory access (RDMA) network interface to allow high-speed memory sharing between computers. A relational database engine of each computer is configured to utilize a hierarchy of memory for temporary storage of working data, including, in order of decreasing access speed, (a) local main memory, (b) remote memory accessed via RDMA, and (c) mass storage. The database engine uses the local main memory for working data, and additionally uses the RDMA-accessible memory for working data when the local main memory becomes depleted. The server system may include a memory broker to which individual computers report their available or unused memory, and which leases shared memory to requesting computers.
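The tier-selection logic implied by this hierarchy can be sketched as a spillover placement decision: local main memory first, RDMA-accessible remote memory when local memory is depleted, mass storage last. Function and tier names are assumptions for illustration.

```python
def place_working_data(size, local_free, rdma_free):
    """Choose a memory tier for `size` bytes of working data."""
    if size <= local_free:
        return "local"            # fastest: local main memory
    if size <= rdma_free:
        return "rdma"             # remote memory leased via the RDMA interface
    return "mass_storage"         # slowest tier, used only as a last resort
```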