Patent classifications
G06F12/0813
STORAGE SYSTEM AND METHOD FOR ACCESSING SAME
A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a FLC controller and a cache. The cache is configured as a FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.
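A minimal sketch of the lookup flow this abstract describes, assuming a simple direct-mapped FLC: the FLC controller translates a processor-supplied physical address into an internal cache address, serves hits from the FLC, and falls back to main memory on a miss. All names (FlcController, flc_lines, mapping) are illustrative, not taken from the patent.

```python
# Sketch of the FLC lookup flow: physical address -> internal cache address,
# hit served from the FLC, miss served from main memory and installed in the FLC.

class FlcController:
    def __init__(self, num_lines, main_memory):
        self.num_lines = num_lines
        self.main_memory = main_memory          # backing store, e.g. DRAM contents
        self.mapping = {}                       # physical address -> FLC line index
        self.flc_lines = [None] * num_lines     # the final level cache itself

    def read(self, physical_address):
        line = self.mapping.get(physical_address)
        if line is not None:                    # hit: data already resides in the FLC
            return self.flc_lines[line]
        # Miss: fetch from main memory, then install in the FLC (direct-mapped policy).
        data = self.main_memory[physical_address]
        line = hash(physical_address) % self.num_lines
        # Evict whatever previously occupied this line.
        self.mapping = {pa: l for pa, l in self.mapping.items() if l != line}
        self.mapping[physical_address] = line
        self.flc_lines[line] = data
        return data

# Usage: a miss on the first access, a hit on the second.
ctrl = FlcController(num_lines=4, main_memory={0x1000: b"payload"})
assert ctrl.read(0x1000) == b"payload"
assert ctrl.read(0x1000) == b"payload"
```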
Allocating cache memory in a dispersed storage network
A method for execution by a dispersed storage network (DSN) managing unit includes receiving access information from a plurality of distributed storage and task (DST) processing units via a network. Cache memory utilization data is generated based on the access information. Configuration instructions are generated for transmission via the network to the plurality of DST processing units based on the cache memory utilization data.
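A rough sketch of the described method, under assumed report fields and thresholds: the DSN managing unit turns access information reported by DST processing units into cache utilization data, then derives per-unit configuration instructions. The field names and the grow/shrink policy are assumptions for illustration.

```python
# Sketch: aggregate access reports into utilization data, then emit config instructions.

def generate_utilization(access_reports):
    """access_reports: {unit_id: {"hits": int, "accesses": int, "cache_bytes": int}}"""
    utilization = {}
    for unit_id, report in access_reports.items():
        hit_rate = report["hits"] / max(report["accesses"], 1)
        utilization[unit_id] = {"hit_rate": hit_rate, "cache_bytes": report["cache_bytes"]}
    return utilization

def generate_config_instructions(utilization, grow_factor=1.5, shrink_factor=0.75):
    instructions = {}
    for unit_id, stats in utilization.items():
        if stats["hit_rate"] < 0.5:       # poor hit rate: allocate more cache memory
            instructions[unit_id] = {"cache_bytes": int(stats["cache_bytes"] * grow_factor)}
        elif stats["hit_rate"] > 0.95:    # over-provisioned: release some cache memory
            instructions[unit_id] = {"cache_bytes": int(stats["cache_bytes"] * shrink_factor)}
    return instructions

reports = {"dst-1": {"hits": 40, "accesses": 100, "cache_bytes": 1 << 20},
           "dst-2": {"hits": 99, "accesses": 100, "cache_bytes": 1 << 22}}
print(generate_config_instructions(generate_utilization(reports)))
```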
Cache coherency management for multi-category memories
In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks is associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined, and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in the same memory device, and the memory category of the first memory block is different from the memory category of the second memory block.
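A small sketch of the category-based dispatch this abstract describes, with illustrative category names and handlers: the response to a request is chosen by looking up the memory category of the addressed block, and adjacent blocks of the same device may carry different categories.

```python
# Sketch: per-block memory categories select how a coherency request is handled.

BLOCK_SIZE = 64

# Per-block category table; two blocks of the same device may differ in category.
block_categories = {
    0x0000: "hardware_coherent",   # e.g. tracked by a directory/snooping protocol
    0x0040: "software_managed",    # e.g. coherency left to explicit flush/invalidate
}

def handle_request(address, request):
    block = (address // BLOCK_SIZE) * BLOCK_SIZE
    category = block_categories.get(block, "non_coherent")
    if category == "hardware_coherent":
        return f"{request}: resolved via directory lookup for block {block:#x}"
    if category == "software_managed":
        return f"{request}: forwarded to owner; software must flush block {block:#x}"
    return f"{request}: served directly, no coherency action for block {block:#x}"

print(handle_request(0x0048, "read"))   # falls in the software-managed block
```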
Processing method and device
The application provides a processing method and device. Weights and input neurons are quantized separately, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is then determined from the weight codebook and the neuron codebook, so the two types of quantized data are combined, which facilitates data processing.
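A hedged sketch of such a scheme, assuming nearest-centroid quantization and small example codebooks: weights and neurons are quantized against separate codebooks, and a computational codebook precomputes the product of every centroid pair so that multiplications reduce to table lookups.

```python
# Sketch: separate weight/neuron codebooks plus a precomputed product table.
import numpy as np

def quantize(values, codebook):
    """Return, for each value, the index of the nearest codebook centroid."""
    return np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)

weight_codebook = np.array([-0.5, 0.0, 0.5])       # centroids for quantized weights
neuron_codebook = np.array([0.0, 1.0, 2.0, 3.0])   # centroids for quantized neurons

# Computational codebook: product of every (weight centroid, neuron centroid) pair.
computational_codebook = weight_codebook[:, None] * neuron_codebook[None, :]

weights = np.array([-0.4, 0.1, 0.6])
neurons = np.array([0.9, 2.1, 3.2])
w_idx = quantize(weights, weight_codebook)
n_idx = quantize(neurons, neuron_codebook)

# A dot product becomes a sum of table lookups instead of multiplications.
approx = computational_codebook[w_idx, n_idx].sum()
print(approx)   # approximates the dot product of the quantized values
```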
Low-latency direct cloud access with file system hierarchies and semantics
Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected by legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system's disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.
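A minimal sketch of layering a name hierarchy on a flat object interface, with an illustrative in-memory ObjectStore standing in for cloud storage (not an actual ZFS or cloud API): POSIX-style paths become object keys, and directory listings are reconstructed from key prefixes.

```python
# Sketch: file/directory semantics layered over a flat key -> object namespace.

class ObjectStore:
    def __init__(self):
        self.objects = {}                    # flat key -> bytes

    def put(self, key, data):
        self.objects[key] = data

    def list(self, prefix):
        return [k for k in self.objects if k.startswith(prefix)]

class FileLayer:
    """Exposes file/directory operations over the flat object namespace."""
    def __init__(self, store):
        self.store = store

    def write_file(self, path, data):
        self.store.put(path.lstrip("/"), data)

    def read_dir(self, path):
        prefix = path.strip("/") + "/"
        # Report only the immediate children, preserving hierarchy semantics.
        return sorted({k[len(prefix):].split("/")[0] for k in self.store.list(prefix)})

fs = FileLayer(ObjectStore())
fs.write_file("/projects/report.txt", b"draft")
fs.write_file("/projects/data/rows.csv", b"1,2,3")
print(fs.read_dir("/projects"))   # ['data', 'report.txt']
```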
Systems and methods for distributed scalable ray processing
Ray tracing systems have computation units (“RACs”) adapted to perform ray tracing operations (e.g. intersection testing). There are multiple RACs. A centralized packet unit controls the allocation and testing of rays by the RACs. This allows the RACs to be implemented without content addressable memories (CAMs), which are expensive to implement, while the functionality of CAMs is still achieved by implementing it in the centralized packet unit.
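A simplified sketch of this centralized approach, with assumed names and a round-robin allocation policy: rather than each RAC using a CAM to coalesce rays that target the same node, a central packet unit collects rays per node and dispatches complete packets to RACs.

```python
# Sketch: a central packet unit groups rays by node and hands packets to RACs.
from collections import defaultdict
from itertools import cycle

class PacketUnit:
    def __init__(self, num_racs, packet_size):
        self.pending = defaultdict(list)        # node_id -> rays waiting to test that node
        self.racs = cycle(range(num_racs))      # simple round-robin allocation of work
        self.packet_size = packet_size

    def submit(self, ray_id, node_id):
        """Collect rays centrally; emit a packet once enough rays target one node."""
        self.pending[node_id].append(ray_id)
        if len(self.pending[node_id]) >= self.packet_size:
            return self.dispatch(node_id)
        return None

    def dispatch(self, node_id):
        rays = self.pending.pop(node_id)
        rac = next(self.racs)
        return {"rac": rac, "node": node_id, "rays": rays}   # sent to a RAC for testing

unit = PacketUnit(num_racs=4, packet_size=2)
print(unit.submit(ray_id=0, node_id=7))   # None: packet not yet full
print(unit.submit(ray_id=1, node_id=7))   # packet of rays [0, 1] dispatched to a RAC
```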
VOLATILE MEMORY DATA RECOVERY BASED ON INDEPENDENT PROCESSING UNIT DATA ACCESS
In a server system, a host computing platform can have a processing unit separate from the host processor to detect and respond to failure of the host processor. The host computing platform includes a memory to store data for the host processor. The processing unit has an interface to the host processor and the memory, as well as an interface to a network external to the host processor, and it can access the memory independently of the host processor. In response to detection of failure of the host processor, the processing unit migrates data from the memory to another memory or storage.
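A toy sketch of the described behavior, assuming a heartbeat-based failure detector: a processing unit separate from the host processor watches for missed heartbeats and, on failure, copies the host's volatile memory contents out to another memory or storage over its own interface. All names and the timeout value are illustrative assumptions.

```python
# Sketch: independent unit detects host failure and migrates memory contents.
import time

class RecoveryUnit:
    def __init__(self, host_memory, remote_store, timeout_s=1.0):
        self.host_memory = host_memory        # dict standing in for host DRAM contents
        self.remote_store = remote_store      # dict standing in for remote memory/storage
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the healthy host processor."""
        self.last_heartbeat = time.monotonic()

    def poll(self):
        """Detect host failure and migrate data independently of the host."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.remote_store.update(self.host_memory)   # copy data out over the network
            return "migrated"
        return "healthy"

unit = RecoveryUnit(host_memory={"page0": b"state"}, remote_store={}, timeout_s=0.05)
print(unit.poll())          # "healthy": heartbeat is recent
time.sleep(0.1)             # simulate the host processor failing to respond
print(unit.poll())          # "migrated": data copied to the remote store
```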