G06F2212/154

CACHE REFRESH SYSTEM AND PROCESSES
20230081780 · 2023-03-16 ·

The present disclosure relates generally to computer systems and, more particularly, to a cache refresh system and related processes and methods of use. The method of refreshing data in cache memory includes: setting, by a computer system, a refresh indicator to “true”; refreshing data in the cache memory, by the computer system, upon a determination that the refresh indicator is set to “true”; and setting, by the computer system, the refresh indicator to “false” after the refreshing of the cache memory.
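A minimal Python sketch of the refresh-indicator flow described in this abstract. The class and member names (`RefreshableCache`, `refresh_indicator`, `load_source`) are illustrative, not taken from the patent:

```python
class RefreshableCache:
    """Cache that reloads its data only when a boolean refresh
    indicator has been set to True, then clears the indicator."""

    def __init__(self, load_source):
        self._load_source = load_source  # callable returning fresh data
        self._data = {}
        self.refresh_indicator = True    # start "true": force initial load

    def get(self, key):
        # Refresh the cached data upon determining the indicator is True...
        if self.refresh_indicator:
            self._data = self._load_source()
            # ...then set the indicator back to False after refreshing.
            self.refresh_indicator = False
        return self._data.get(key)
```

Any producer that invalidates the underlying data simply sets `refresh_indicator = True`; the next access reloads the cache.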

SHARED CACHE FOR MULTIPLE INDEX SERVICES IN NONRELATIONAL DATABASES

A computer-implemented method includes receiving, by a processing unit, from a first tenant, a query to retrieve data from a nonrelational database system. The method further includes determining, by the processing unit, that an index associated with the query is cached in a shared index cache, wherein the shared index cache stores indexes for a plurality of tenants. The method further includes retrieving, by the processing unit, a result of the query based on the index in the shared index cache. The method further includes outputting, by the processing unit, the result of the query.
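A sketch of the shared-index-cache idea, assuming a simple dictionary-backed cache; the index is keyed by name rather than by tenant, so an index built for one tenant's query can serve another tenant (all names here are hypothetical):

```python
class SharedIndexCache:
    """One cache instance holding indexes for a plurality of tenants."""

    def __init__(self):
        self._indexes = {}  # index name -> {key: row}

    def has(self, index_name):
        return index_name in self._indexes

    def put(self, index_name, index):
        self._indexes[index_name] = index

    def lookup(self, index_name, key):
        return self._indexes[index_name].get(key)


def run_query(tenant, query_key, index_name, cache, build_index):
    # The tenant identifies the requester only; the index cache itself
    # is shared, so a cached index is reused across tenants.
    if not cache.has(index_name):
        cache.put(index_name, build_index())  # cache miss: build and share
    return cache.lookup(index_name, query_key)
```

The payoff is that `build_index` (the expensive scan of the nonrelational store) runs once per index, not once per tenant.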

Networked storage system with a remote storage location cache and associated methods thereof

Methods and systems for a networked storage system are provided. One method includes: utilizing, by a first node, a storage location cache to determine if an entry associated with a first read request for data stored using a logical object owned by a second node configured as a failover partner node of the first node exists; transmitting, by the first node, the first read request to the second node; receiving, by the first node, a response to the first read request from the second node with requested data; inserting, by the first node, an entry in the storage location cache indicating the storage location information for the data; and utilizing, by the first node, the inserted entry in the storage location cache to determine storage location of data requested by a second read request received by the first node.
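The lookup/forward/insert cycle above can be sketched as follows, with the partner node and the direct read modeled as callables (all function names are illustrative, not from the patent):

```python
class StorageLocationCache:
    def __init__(self):
        self._entries = {}  # logical block id -> storage location

    def get(self, block_id):
        return self._entries.get(block_id)

    def insert(self, block_id, location):
        self._entries[block_id] = location


def read(block_id, cache, partner_read, direct_read):
    """On a cache miss, forward the read to the failover partner node
    and cache the storage location it returns; on a hit, use the cached
    location to read the data directly."""
    loc = cache.get(block_id)
    if loc is not None:
        return direct_read(loc), "cache-hit"
    data, loc = partner_read(block_id)   # partner returns data + location
    cache.insert(block_id, loc)
    return data, "cache-miss"
```

A second request for the same block then resolves its storage location from the cache instead of crossing to the partner node again.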

Secure Storage of Datasets in a Thread Network Device

Some aspects of this disclosure relate to implementing a thread device that can associate with a thread network. The thread device includes a network processor, a first memory, and a host processor communicatively coupled to the network processor and the first memory. The first memory can be a nonvolatile memory with a first level security protection, and configured to store a first dataset including thread network parameters for the network processor to manage network functions for the thread device associated with the thread network. The host processor is configured to perform various operations associated with the first dataset stored in the first memory. The network processor can be communicatively coupled to a second memory to store a second dataset, where the second dataset has a same content as the first dataset. The network processor is configured to manage the network functions based on the second dataset. The second memory can be a volatile memory with a second level security protection that is less than the first level security protection.
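A toy model of the two-dataset arrangement, assuming the host keeps the authoritative copy in the protected nonvolatile memory and the network processor works from a volatile copy with identical content (the class and method names are invented for illustration):

```python
class ThreadDeviceMemories:
    """First (nonvolatile, more protected) memory holds the authoritative
    thread network parameters; the second (volatile, less protected)
    memory holds a working copy with the same content."""

    def __init__(self, params):
        self.nonvolatile = dict(params)  # first dataset (persists)
        self.volatile = None             # second dataset (working copy)

    def load_working_copy(self):
        # Network processor copies the first dataset into volatile memory
        self.volatile = dict(self.nonvolatile)

    def update_params(self, key, value):
        # Host updates the authoritative dataset, then the copy is resynced
        self.nonvolatile[key] = value
        self.load_working_copy()
```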

PREVENTING UNAUTHORIZED TRANSLATED ACCESS USING ADDRESS SIGNING
20230070125 · 2023-03-09 ·

A host may use address translation to convert virtual addresses to physical addresses for endpoints, which may then submit memory access requests for physical addresses. The host may incorporate the physical address and a signature of the physical address, generated using a private key, into the translated address field of a response to a translation request. An endpoint may treat the combination as a translated address by storing it in an entry of a translation cache and accessing the entry for inclusion in a memory access request. The host may generate a signature of the translated address from the request using the private key and compare the result to the signature included in the request. The memory access request may be verified when the compared values match, and the memory access may be performed using the translated address.
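The sign-then-verify scheme can be sketched with a keyed MAC; HMAC-SHA-256 and the 8-byte truncation here are assumptions for illustration, not details from the patent:

```python
import hashlib
import hmac

PRIVATE_KEY = b"host-private-key"  # illustrative key held only by the host

def sign_address(phys_addr: int) -> bytes:
    # Host: signature over the physical address using the private key,
    # truncated so address + signature fit the translated-address field.
    mac = hmac.new(PRIVATE_KEY, phys_addr.to_bytes(8, "little"),
                   hashlib.sha256)
    return mac.digest()[:8]

def translation_response(phys_addr: int):
    # What the endpoint caches as its "translated address":
    # the physical address combined with its signature.
    return (phys_addr, sign_address(phys_addr))

def verify_access(translated) -> bool:
    # Host: recompute the signature from the address in the memory
    # access request and compare it to the signature in the request.
    phys_addr, sig = translated
    return hmac.compare_digest(sig, sign_address(phys_addr))
```

An endpoint that forges or alters a physical address cannot produce a matching signature without the private key, so the access is rejected.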

Preemptive caching of content in a content-centric network

Preemptive caching within a content/name/information-centric networking environment is contemplated. The preemptive caching may be performed within content/name/information-centric networking environments of the type having a branching structure or other architecture sufficient to facilitate routing data, content, etc., such that one or more nodes other than the node soliciting a content object also receive the content object.
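A minimal sketch of the idea, assuming the content object is cached at every node on the return path rather than only at the soliciting node (the data model is invented for illustration):

```python
def route_content(content, path, caches):
    """As a content object travels back along the routing path toward
    the soliciting node, each intermediate node preemptively caches it,
    so later requests from other branches can be served locally."""
    for node in path:
        caches.setdefault(node, {})[content["name"]] = content["data"]
```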

METHOD AND APPARATUS TO AGGREGATE OBJECTS TO BE STORED IN A MEMORY TO OPTIMIZE THE MEMORY BANDWIDTH
20230129107 · 2023-04-27 ·

A network device performs packet processing operations on packets received from a network and includes a write back cache to store data (for example, counters) used to perform the packet processing operations. The data stored in the write back cache in the network device are evicted to an external memory from time to time using a write-back operation that includes a read-modify-write of a line in the external memory. Instead of performing a separate read-modify-write for each data item stored in the cache line, a single read-modify-write operation is performed for all data stored in the cache line in the write back cache. Aggregating data at relatively close addresses into a single read-modify-write operation reduces the number of memory accesses to the external memory and improves the bandwidth to the external memory.
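The aggregation can be sketched as grouping dirty entries by memory line and issuing one read-modify-write per line; the 64-byte line and 1-byte counters are illustrative parameters, not figures from the patent:

```python
LINE_SIZE = 64  # bytes per external-memory line (illustrative)

def flush(dirty, memory):
    """Evict all dirty entries that fall on the same external-memory
    line with a single read-modify-write, instead of one RMW per entry.

    dirty:  {byte address: value} for 1-byte counters (illustrative)
    memory: bytearray standing in for the external memory
    Returns the number of read-modify-write operations issued."""
    by_line = {}
    for addr, val in dirty.items():
        by_line.setdefault(addr // LINE_SIZE, []).append((addr, val))
    rmw_count = 0
    for line, entries in by_line.items():
        base = line * LINE_SIZE
        buf = memory[base:base + LINE_SIZE]      # read the whole line
        for addr, val in entries:
            buf[addr - base] = val               # modify every entry in it
        memory[base:base + LINE_SIZE] = buf      # write the line back once
        rmw_count += 1
    return rmw_count
```

Three dirty counters on two lines thus cost two memory accesses instead of three.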

NETWORK INTERFACE WITH INTELLIGENCE TO BUILD NULL BLOCKS FOR UN-MAPPABLE LOGICAL BLOCK ADDRESSES
20230130859 · 2023-04-27 ·

An apparatus is described. The apparatus includes a network interface having a system interface, a media access interface and circuitry to construct a block of null values for a logical block address (LBA) in response to a remote storage system having informed the network interface that the LBA was un-mappable.
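A sketch of the null-block behavior, with the remote storage system modeled as a callable that reports whether the LBA is mappable (block size and names are illustrative):

```python
BLOCK_SIZE = 512  # bytes per logical block (illustrative)

def read_block(lba, remote_read):
    """If the remote storage system reports the LBA as un-mappable,
    the network interface constructs a block of null values locally
    instead of receiving block data over the network."""
    data, mappable = remote_read(lba)
    if not mappable:
        return bytes(BLOCK_SIZE)  # locally built block of null values
    return data
```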

PREFETCHING DATA IN A DISTRIBUTED STORAGE SYSTEM
20230071111 · 2023-03-09 ·

Data can be prefetched in a distributed storage system. For example, a computing device can receive, from a message queue, a message with metadata associated with at least one request for an input/output (IO) operation. The computing device can determine, based on the message from the message queue, an additional IO operation predicted to be requested by a client subsequent to the at least one request for the IO operation. The computing device can send a notification to a storage node, of a plurality of storage nodes, associated with the additional IO operation, so that the data of the additional IO operation is prefetched prior to the client requesting the additional IO operation.
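The queue-driven prediction loop can be sketched as follows; the sequential next-block predictor is a stand-in assumption, since the abstract leaves the prediction model open:

```python
from collections import deque

def predict_next(io_request):
    # Toy predictor: assume sequential access, so the additional IO is
    # expected to target the block after the one just requested.
    return io_request + 1

def process_queue(queue, notify_storage_node):
    """Drain IO-request messages from the message queue and notify the
    storage node owning the predicted additional IO so it can prefetch
    the data before the client asks for it."""
    while queue:
        request = queue.popleft()
        notify_storage_node(predict_next(request))
```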

Deserialization of stream objects using multiple deserialization algorithms

Techniques for deserializing stream objects are disclosed. The system may receive data representing a stream object. The data can include an object descriptor, a class descriptor, and stream field values corresponding to the stream object. The system may select a particular deserialization process, from among a plurality of deserialization processes. The selection may be based at least in part on the object descriptor and the class descriptor. The system can deserialize the data representing the stream object using the selected deserialization process, yielding one or more deserialized objects.
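A toy illustration of descriptor-driven selection: the (object descriptor, class descriptor) pair picks one routine out of a table of deserialization processes. The descriptor values and routines here are invented for the sketch:

```python
def deserialize(stream):
    """Select a deserialization process based on the object descriptor
    and class descriptor, then apply it to the stream field values."""
    obj_desc, cls_desc, fields = stream
    # Plurality of deserialization processes, keyed by descriptor pair
    processes = {
        ("plain", "record"): lambda f: dict(f),
        ("list", "record"): lambda f: [dict(item) for item in f],
    }
    process = processes[(obj_desc, cls_desc)]
    return process(fields)  # one or more deserialized objects
```

The practical point is that a cheap, specialized routine can be chosen when the descriptors show the general-purpose path is unnecessary.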