G06F3/0638

ENHANCED NETWORK ATTACHED STORAGE (NAS) INTEROPERATING WITH AND OVERFLOWING TO CLOUD STORAGE RESOURCES

An illustrative storage management appliance is interposed between client computing devices and one or more cloud storage resources. The appliance uses cloud storage resources in conjunction with a network attached storage device configured within the appliance to provide to the client computing devices seemingly unlimited network attached storage on respective network shares. The storage management appliance monitors data objects on the network shares and when a data object meets one or more criteria for archiving, the storage management appliance archives the data object to a cloud storage resource and replaces it with a stub and preview image on the network share. When access to the stub and/or preview image is detected, the storage management appliance restores the data object from the cloud storage resource. The criteria for archiving flexibly allow individual data objects to be archived to cloud storage without archiving frequently-accessed “neighboring” data objects on the same network share.
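The archive-and-stub cycle described above can be sketched in a few lines. This is a minimal illustration, not the appliance's implementation: the share and cloud store are plain dictionaries, and the single archiving criterion (object size) stands in for the flexible per-object criteria the abstract describes.

```python
# Illustrative sketch of archive-to-cloud with stub replacement (hypothetical
# names; a real appliance operates on network shares and cloud object stores).

ARCHIVE_THRESHOLD = 1024  # example criterion: archive objects >= 1 KiB

share = {}   # network share: name -> bytes, or a stub dict after archiving
cloud = {}   # cloud storage resource: name -> bytes

def write_object(name, data):
    share[name] = data

def archive_pass():
    """Archive qualifying objects; replace each with a stub and preview."""
    for name, data in list(share.items()):
        if isinstance(data, bytes) and len(data) >= ARCHIVE_THRESHOLD:
            cloud[name] = data
            share[name] = {"stub": True, "preview": data[:16]}

def read_object(name):
    """Access detection: restore from the cloud if the entry is a stub."""
    entry = share[name]
    if isinstance(entry, dict) and entry.get("stub"):
        share[name] = cloud[name]   # restore full object to the share
    return share[name]
```

Note that only the large object is archived; its frequently-accessed "neighbor" stays on the share untouched, matching the per-object criteria the abstract emphasizes.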

Storage system
11604584 · 2023-03-14

In write processing of a data set group to be written, comprising one or more data sets to be written, a storage system performs encoding processing that includes generating a data model representing the regularity of the data set group, the model taking one or more input values as input and producing the data set group as output. In the write processing, the storage system writes the data model generated in the encoding processing, associated with a key of the data set group to be written.
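The idea above can be sketched concretely: when a data set group is regular, a tiny model keyed by the group's key can replace the raw values. This sketch assumes the simplest regularity (an affine sequence); the model form and store are illustrative, not the patent's encoding.

```python
# Hypothetical sketch: encode a regular (here, affine) data set group as a
# small data model, keyed by the group's key, instead of the raw values.

store = {}  # key -> data model

def encode(key, values):
    """Fit a model with output[i] = slope * i + intercept (assumes regularity)."""
    slope = values[1] - values[0] if len(values) > 1 else 0
    store[key] = {"slope": slope, "intercept": values[0], "n": len(values)}

def decode(key):
    """Run the model with input values 0..n-1 to regenerate the group."""
    m = store[key]
    return [m["slope"] * i + m["intercept"] for i in range(m["n"])]
```

The stored model is a few numbers regardless of the group's length, which is the payoff of modeling regularity rather than storing the data itself.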

Metadata track entry sorting in a data storage system

In one aspect of metadata track entry sorting in accordance with the present description, recovery logic sorts a list of metadata entries as a function of a source data track identification of each metadata entry to provide a second, sorted list of metadata entries, and generates a recovery volume which includes data tracks which are a function of one or more data target tracks identified by the sorted list of metadata entries. Because the metadata entry contents of the sorted list have been sorted as a function of source track identification number, the particular time version of a particular source track may be identified more quickly and more efficiently. As a result, recovery from data loss may be achieved more quickly and more efficiently thereby providing a significant improvement in computer technology. Other features and aspects may be realized, depending upon the particular application.
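The efficiency claim above follows from a standard property: once entries are sorted by source track identification, all versions of one source track are contiguous and can be located by binary search. A minimal sketch, with illustrative entry fields:

```python
import bisect

# Sketch: sort metadata entries by source track id so one source track's
# time versions can be located with binary search (field names are examples).

entries = [
    {"source": 7, "target": 101, "time": 3},
    {"source": 2, "target": 102, "time": 1},
    {"source": 7, "target": 103, "time": 5},
    {"source": 4, "target": 104, "time": 2},
]

# The second, sorted list of metadata entries.
sorted_entries = sorted(entries, key=lambda e: e["source"])
keys = [e["source"] for e in sorted_entries]

def versions_of(source_track):
    """All time versions of one source track, found in O(log n) + matches."""
    lo = bisect.bisect_left(keys, source_track)
    hi = bisect.bisect_right(keys, source_track)
    return sorted_entries[lo:hi]
```

An unsorted list would require a full linear scan per lookup; the sorted list reduces each lookup to a pair of binary searches, which is the "more quickly and more efficiently" of the abstract.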

Target path selection for storage controllers

A RAID controller attached to a storage network can detect the presence of multiple pathways to the same physical storage device. A path collection module can dynamically maintain all valid pathways to all attached storage devices. A path selection module can automatically and dynamically balance and rebalance desired paths to each storage device so as to simultaneously optimize data flow and provide continuity of I/O service throughout the attached storage network.
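A simple way to picture the balancing step is a greedy assignment: each device's preferred path is the currently least-loaded of its valid paths. This sketch uses hypothetical port/device names and a trivial load metric; it is an illustration of the balancing idea, not the controller's algorithm.

```python
from collections import defaultdict

# Sketch of dynamic path balancing across multiple valid pathways
# (hypothetical topology; real controllers weigh live I/O statistics).

paths = defaultdict(list)  # device -> list of valid ports (pathways)

def add_path(device, port):
    """Path collection: dynamically maintain all valid pathways."""
    if port not in paths[device]:
        paths[device].append(port)

def rebalance():
    """Path selection: pick a preferred port per device, spreading load."""
    load = defaultdict(int)
    preferred = {}
    for device in sorted(paths):
        port = min(paths[device], key=lambda p: load[p])
        preferred[device] = port
        load[port] += 1
    return preferred
```

Because every valid path stays in the collection, a failed preferred path can be replaced by rerunning `rebalance()` without the remaining paths, which is what provides the continuity of I/O service.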

MEMORY SYSTEMS AND DEVICES INCLUDING EXAMPLES OF ACCESSING MEMORY AND GENERATING ACCESS CODES USING AN AUTHENTICATED STREAM CIPHER
20230126741 · 2023-04-27 · ·

Examples of systems and methods described herein provide for accessing memory devices and, concurrently, generating access codes using an authenticated stream cipher at a memory controller. For example, a memory controller may use a memory access request to, concurrently, perform translation logic and/or error correction on data associated with the memory access request; while also utilizing the memory address as an initialization vector for an authenticated stream cipher to generate an access code. The error correction may be performed subsequent to address translation for a write operation (or prior to address translation for a read operation) to improve processing speed of memory access requests at a memory controller; while the memory controller also generates the encrypted access code.
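The address-as-IV idea can be sketched in software. Heavily hedged: a hash-derived keystream stands in here for a real authenticated stream cipher, the key and data are hypothetical, and the hardware concurrency is not modeled; only the dataflow (address → IV → access code, independent of the error-correction pass) is illustrated.

```python
import hashlib

# Illustrative stand-in: derive an access code from (key, address-as-IV)
# using a hash-based keystream in place of an authenticated stream cipher.

KEY = b"device-secret"  # hypothetical device key

def access_code(address, data):
    """The memory address serves as the initialization vector (IV)."""
    iv = address.to_bytes(8, "little")
    keystream = hashlib.sha256(KEY + iv).digest()[: len(data)]
    return bytes(d ^ k for d, k in zip(data, keystream))

def correct(data):
    """Placeholder for the error-correction pass (a no-op here)."""
    return bytes(data)

# In hardware the two steps overlap in time; here they simply both
# consume the same memory access request.
req_addr, req_data = 0x1000, b"payload"
code = access_code(req_addr, correct(req_data))
```

Because the IV is the address, the same data written to two different addresses yields two different access codes, which is the property the scheme relies on.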

Database dual-core storage system based on optical disk and method using the system
11474981 · 2022-10-18

A database dual-core storage system based on optical disk comprises a server, a magnetic disk storage device and an optical disk storage device connected to the server via data connections, and a database management system, a data processor and a data connector installed on the server, wherein the database management system is arranged for completing database management and data management of the magnetic disk storage device and the optical disk storage device in response to data requests; the data processor is arranged for configuring fields of a database base core and fields of a database extension core, writing data of corresponding fields into the database base core and the database extension core respectively in response to data requests; and the data connector is arranged for creating data connections between the database base core and the database extension core in response to data requests. The integrity and safety of data are guaranteed.
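The base-core/extension-core split can be pictured with two keyed stores and a connector that reunites them. All field names and the dictionary stores are illustrative assumptions; the sketch shows only the routing of configured fields to the two cores.

```python
# Sketch of the dual-core split: base fields go to the magnetic-disk core,
# extension fields to the optical-disk core; the data connector joins the
# two cores by record key (names and fields are illustrative).

BASE_FIELDS = {"id", "name"}          # configured base-core fields
EXT_FIELDS = {"document", "history"}  # configured extension-core fields

disk_core = {}     # database base core (magnetic disk)
optical_core = {}  # database extension core (optical disk)

def write_record(key, record):
    """Data processor: route each field to its configured core."""
    disk_core[key] = {f: v for f, v in record.items() if f in BASE_FIELDS}
    optical_core[key] = {f: v for f, v in record.items() if f in EXT_FIELDS}

def read_record(key):
    """Data connector: reunite the two cores for one record."""
    return {**disk_core.get(key, {}), **optical_core.get(key, {})}
```

The split lets bulky, rarely-rewritten extension fields live on write-once optical media while small, frequently-queried base fields stay on magnetic disk.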

STORAGE SYSTEM AND DATA PROCESSING METHOD IN STORAGE SYSTEM
20230132037 · 2023-04-27

A storage system includes an interface and a data compression system configured to compress reception data from the interface before the data is stored in a storage device. The data compression system is configured to compress the reception data using a first compression algorithm to generate first compressed data, use the number of appearances of each predetermined code category included in the first compressed data to estimate a decompression time when a second compression algorithm is used, select a second compression method including compression using the second compression algorithm when the decompression time is equal to or less than a threshold value, and select a first compression method that does not include the compression using the second compression algorithm when the decompression time is greater than the threshold value.
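The two-stage choice can be sketched with off-the-shelf codecs. The assumptions are loud: zlib and lzma stand in for the first and second compression algorithms, and a linear size-based cost model stands in for the code-category-count estimator; the threshold and cost constants are arbitrary.

```python
import zlib
import lzma

# Sketch: compress with the first algorithm, estimate the second algorithm's
# decompression time from the first-pass output, and only add the second
# stage when the estimate fits the budget (constants are illustrative).

DECOMP_NS_PER_BYTE = 50      # assumed per-byte decompression cost
THRESHOLD_NS = 5_000_000     # assumed decompression-time budget

def compress(data):
    first = zlib.compress(data)
    # Stand-in for the code-category-count estimate: size-proportional cost.
    estimated_ns = len(first) * DECOMP_NS_PER_BYTE
    if estimated_ns <= THRESHOLD_NS:
        return ("second", lzma.compress(first))   # second compression method
    return ("first", first)                        # first compression method

def decompress(tagged):
    method, blob = tagged
    if method == "second":
        blob = lzma.decompress(blob)
    return zlib.decompress(blob)
```

The point of estimating from the first pass's output is that the system never has to run the second (expensive) decompression just to discover it would have been too slow.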

Memory protection unit using memory protection table stored in memory system
11474956 · 2022-10-18

An apparatus comprises processing circuitry to issue memory access requests specifying a target address identifying a location to be accessed in a memory system; and a memory protection unit (MPU) comprising permission checking circuitry to check whether a memory access request issued by the processing circuitry satisfies access permissions specified in a memory protection table stored in the memory system. The memory protection table comprises memory protection entries each specifying access permissions for a corresponding address region of variable size within an address space, where the variable size can be a number of bytes other than a power of 2.
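The variable-size-region property can be shown with a tiny table-walk sketch. The table layout and permission sets below are illustrative, not the patented in-memory format; the point is that region sizes (6 KiB, 2.5 KiB) need not be powers of two.

```python
# Sketch of a memory protection table with variable-size regions; a check
# walks the table for the entry covering the target address (illustrative).

table = [
    # (base, size_in_bytes, permissions)
    (0x0000, 0x1800, {"r"}),        # 6 KiB region, read-only: not a power of 2
    (0x1800, 0x0A00, {"r", "w"}),   # 2.5 KiB region, read-write
]

def check(address, access):
    """Return True iff some entry covers `address` and grants `access`."""
    for base, size, perms in table:
        if base <= address < base + size:
            return access in perms
    return False  # no covering entry: the access faults
```

A classic MPU with power-of-two-aligned regions would need several entries to cover a 6 KiB range exactly; variable byte-granular sizes let one entry do it.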

METHODS AND SYSTEMS FOR SEAMLESSLY PROVISIONING CLIENT APPLICATION NODES IN A DISTRIBUTED SYSTEM
20230127387 · 2023-04-27

In general, embodiments relate to a method for provisioning a plurality of client application nodes in a distributed system using a management node, the method comprising: creating a file system in a namespace; associating the file system with a scale out volume; mounting the file system on a metadata node in the distributed system, wherein mounting the file system comprises storing a scale out volume record of the scale out volume; storing file system information for the file system in a second file system on the management node, wherein the file system information specifies the file system and the metadata node on which the file system is mounted; wherein storing the file system information triggers distribution of the file system information to at least a portion of a plurality of client application nodes.
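The provisioning flow above can be sketched as one function walking the claimed steps in order. Every name and data structure here (node names, dictionaries standing in for file systems) is a hypothetical stand-in used only to show the sequencing and the distribution trigger.

```python
# Sketch of the provisioning flow: create the file system, associate the
# scale-out volume, mount on the metadata node, record the file-system
# information on the management node, and push it to client nodes.

clients = {"client-a": {}, "client-b": {}}   # client application nodes
management_fs = {}                            # second file system (mgmt node)
metadata_node = {"mounts": {}}

def provision(fs_name, namespace, volume):
    fs = {"name": fs_name, "namespace": namespace, "volume": volume}
    # Mount on the metadata node, storing the scale-out volume record.
    metadata_node["mounts"][fs_name] = {"scale_out_volume": volume}
    # Store file system information on the management node...
    info = {"file_system": fs_name, "metadata_node": "metadata-1"}
    management_fs[fs_name] = info
    # ...which triggers distribution to the client application nodes.
    for state in clients.values():
        state[fs_name] = info
    return fs
```

After `provision(...)` returns, every client already knows which metadata node serves the new file system, which is what makes the provisioning "seamless" from the clients' perspective.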

METHODS AND SYSTEMS FOR STORING DATA IN A DISTRIBUTED SYSTEM USING GPUS
20230126511 · 2023-04-27

In general, embodiments relate to a method for storing data, the method comprising generating, by a memory hypervisor module executing on a client application node, at least one input/output (I/O) request, wherein the at least one I/O request specifies a location in a storage pool and a physical address of the data in a graphics processing unit (GPU) memory in a GPU on the client application node, wherein the location is determined using a data layout, and wherein the physical address is determined using a GPU module and issuing, by the memory hypervisor module, the at least one I/O request to the storage pool, wherein processing the at least one I/O request results in at least a portion of the data being stored at the location.
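The request-building path above can be mocked end to end. Strongly hedged: the storage pool, GPU memory, data layout, and GPU module below are all simulated stand-ins; only the claimed dataflow (layout → location, GPU module → physical address, both carried in the I/O request) is illustrated.

```python
# Sketch of the memory-hypervisor path: the storage-pool location comes from
# a data layout, the source physical address from a (simulated) GPU module,
# and the I/O request carries both (all components are stand-ins).

storage_pool = {}                       # location -> bytes
gpu_memory = {0x4000: b"tensor-bytes"}  # simulated GPU memory

def data_layout(object_id):
    """Map an object to a storage-pool location (trivial example layout)."""
    return ("node-0", object_id)

def gpu_module_lookup(object_id):
    """Resolve the object's physical address in GPU memory (simulated)."""
    return 0x4000

def store_from_gpu(object_id):
    location = data_layout(object_id)
    phys_addr = gpu_module_lookup(object_id)
    io_request = {"location": location, "gpu_phys_addr": phys_addr}
    # Processing the request copies the data from GPU memory to the location.
    storage_pool[location] = gpu_memory[phys_addr]
    return io_request
```

Carrying the GPU physical address directly in the I/O request is what lets the data move from GPU memory to the storage pool without an intermediate staging copy through host memory.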