Patent classifications
G06F2212/465
MEMORY ARCHITECTURE FOR EFFICIENT SPATIAL-TEMPORAL DATA STORAGE AND ACCESS
Described herein are systems, methods, and non-transitory computer readable media for memory address encoding of multi-dimensional data in a manner that optimizes the storage and access of such data in linear data storage. The multi-dimensional data may be spatial-temporal data that includes two or more spatial dimensions and a time dimension. An improved memory architecture is provided that includes an address encoder that takes a multi-dimensional coordinate as input and produces a linear physical memory address. The address encoder encodes the multi-dimensional data such that two multi-dimensional coordinates close to one another in multi-dimensional space are likely to be stored in close proximity to one another in linear data storage. In this manner, the number of main memory accesses, and thus the overall memory access latency, is reduced, particularly in connection with real-world applications in which the respective probabilities of moving along any given dimension are very close.
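A common locality-preserving encoding of this kind is Morton (Z-order) interleaving; the sketch below illustrates the general idea for an (x, y, t) coordinate, and is not necessarily the patented encoder:

```python
def morton_encode(x: int, y: int, t: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, t) into one linear address.

    Coordinates that are close in 3-D space share their high-order
    bits, so they map to nearby linear addresses -- the locality
    property the abstract describes.
    """
    addr = 0
    for i in range(bits):
        addr |= ((x >> i) & 1) << (3 * i)       # x bit -> position 3i
        addr |= ((y >> i) & 1) << (3 * i + 1)   # y bit -> position 3i+1
        addr |= ((t >> i) & 1) << (3 * i + 2)   # t bit -> position 3i+2
    return addr
```

Because all three dimensions contribute bits at every level, the encoding does not favor any one axis, which matches the stated assumption that the probabilities of moving along each dimension are very close.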
Cloud Database System With Multi-Cache For Reducing Network Cost In Processing Select Query
Disclosed is a back-end node in a cloud database system according to some exemplary embodiments of the present disclosure. The back-end node may include a communication unit; a back-end cache storing buffer cache data and metadata information, wherein the buffer cache data and the metadata information correspond to a data block stored in the database system; and a processor, wherein the metadata information includes information identifying a front-end node that stores the data block in its own front-end cache.
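The key idea is that the back-end's metadata records which front-end node already caches a block, so a select query can be routed there rather than re-shipping the block over the network. A minimal sketch, with class and field names that are illustrative assumptions rather than the patent's terms:

```python
class BackEndCache:
    """Back-end buffer cache plus metadata recording which front-end
    node holds each data block in its own front-end cache."""

    def __init__(self):
        self.buffer = {}    # block_id -> block data
        self.metadata = {}  # block_id -> id of front-end node caching it

    def register(self, block_id, data, front_end_node=None):
        self.buffer[block_id] = data
        if front_end_node is not None:
            self.metadata[block_id] = front_end_node

    def locate(self, block_id):
        """Return ('front_end', node) when a front-end cache already
        holds the block, else ('back_end', data): the query can then
        be served without transferring the block again."""
        if block_id in self.metadata:
            return ('front_end', self.metadata[block_id])
        return ('back_end', self.buffer.get(block_id))
```
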
MULTI-STATE MIDTIER CACHE
A server includes a data cache for storing data objects requested by mobile devices, desktop devices, and server devices, each of which may execute a different configuration of an application. When a cache miss occurs, the cache may begin loading portions of a requested data object from various data sources. Instead of waiting for the entire object to load to change the object state to “valid,” the cache may incrementally update the state through various levels of validity based on the calling application configurations. When a portion of the data object used by a mobile configuration is received, the object state can be upgraded to be valid for mobile devices while data for desktop and other devices continues to load, etc. The mobile portion of the data object can then be sent to the mobile devices without waiting for the rest of the data object to load.
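The incremental-validity idea can be sketched as a cache entry whose state upgrades as portions arrive; the portion names and per-configuration requirements below are illustrative assumptions, not the patent's schema:

```python
class CacheEntry:
    """A data object that becomes valid per application configuration
    as its portions load, instead of waiting for the whole object."""

    # Portions each calling-application configuration needs (assumed).
    REQUIRED = {
        'mobile': {'summary'},
        'desktop': {'summary', 'details'},
        'server': {'summary', 'details', 'audit'},
    }

    def __init__(self):
        self.portions = {}  # portion name -> loaded data

    def load_portion(self, name, data):
        self.portions[name] = data

    def valid_for(self, config):
        """True once every portion that configuration uses is loaded."""
        return self.REQUIRED[config].issubset(self.portions)
```

As soon as `valid_for('mobile')` is true, the mobile portion can be served while the desktop and server portions continue to load.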
TECHNIQUES FOR PROVIDING I/O HINTS USING I/O FLAGS
Techniques for processing I/O operations may include: issuing, by a process of an application on a host, an I/O operation; determining, by a driver on the host, that the I/O operation is a read operation directed to a logical device used as a log to log writes performed by the application, wherein the read operation reads first data stored at one or more logical addresses of the logical device; storing, by the driver, an I/O flag in the I/O operation, wherein the I/O flag has a first flag value denoting an expected read frequency associated with the read operation; sending the I/O operation from the host to the data storage system; and performing first processing of the I/O operation on the data storage system, wherein said first processing includes using the first flag value in connection with caching the first data in a cache of the data storage system.
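A sketch of the two halves of this flow, with flag values and function names that are assumptions for illustration: the host-side driver tags a read against a log device with an expected-read-frequency hint, and the storage system consults the hint when caching the data.

```python
from dataclasses import dataclass

# Illustrative I/O flag values denoting expected read frequency.
READ_ONCE = 0x1  # e.g. sequential log replay: read once, evict quickly


@dataclass
class IOOperation:
    op: str          # 'read' or 'write'
    device: str      # logical device the I/O targets
    address: int     # logical address read or written
    flags: int = 0   # I/O hint flags stored by the driver


def tag_log_read(io: IOOperation, log_devices: set) -> IOOperation:
    """Driver side: mark reads directed to a log device as read-once."""
    if io.op == 'read' and io.device in log_devices:
        io.flags |= READ_ONCE
    return io


def cache_policy(io: IOOperation) -> str:
    """Storage-system side: use the flag value when caching the data."""
    return 'evict-early' if io.flags & READ_ONCE else 'normal'
```
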
Storage optimization of database in volatile and non-volatile storing unit
According to an embodiment, a database device includes a volatile first storing unit, a non-volatile second storing unit, an access processing unit configured to execute an operation corresponding to an access request for each of a plurality of blocks obtained by dividing the data, a backup processing unit configured to write data of each of the plurality of blocks at a backup time to the second storing unit, and a block management unit. The block management unit writes, under certain conditions, data of any block stored in the first storing unit to the second storing unit, and reads data of a block targeted by an access request from the second storing unit to the first storing unit. At a backup time, the backup processing unit writes the data of those blocks, among the plurality of blocks, that have not yet been written to the second storing unit.
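The interplay of the units can be sketched as follows; the capacity-driven spill rule and class layout are assumptions for illustration, since the abstract only says blocks move "under certain conditions":

```python
class BlockManager:
    """Blocks live in a volatile first store; under memory pressure
    they spill to the non-volatile second store, and backup writes
    only the blocks not already persisted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.first = {}         # volatile first storing unit
        self.second = {}        # non-volatile second storing unit
        self.persisted = set()  # blocks already written to second store

    def write(self, block_id, data):
        self.first[block_id] = data
        self.persisted.discard(block_id)  # block is dirty again
        self._evict_if_needed(exclude=block_id)

    def access(self, block_id):
        # Read a targeted block back from the second store if needed.
        if block_id not in self.first and block_id in self.second:
            self.first[block_id] = self.second[block_id]
        self._evict_if_needed(exclude=block_id)
        return self.first.get(block_id)

    def _evict_if_needed(self, exclude):
        while len(self.first) > self.capacity:
            victim = next(k for k in self.first if k != exclude)
            self.second[victim] = self.first.pop(victim)
            self.persisted.add(victim)

    def backup(self):
        """At backup time, write only not-yet-persisted blocks."""
        for block_id, data in self.first.items():
            if block_id not in self.persisted:
                self.second[block_id] = data
                self.persisted.add(block_id)
```

Tracking `persisted` is what keeps the backup pass cheap: blocks already spilled to the non-volatile store are skipped.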
Local database cache
A stored procedure call may be transmitted to a database to execute a database query. As a result of the stored procedure call, results including a result record satisfying the database query and a set of records related to the result record may be received. The results may be stored in a local cache.
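A minimal sketch of this flow, assuming the stored-procedure call returns the matching record together with its related records (the `call_stored_procedure` callable is a hypothetical stand-in for the real database round trip):

```python
class LocalCache:
    """Caches both the result record of a stored-procedure call and
    the related records returned with it, so later lookups for the
    related records need no further database round trip."""

    def __init__(self, call_stored_procedure):
        self.call = call_stored_procedure
        self.cache = {}  # record key -> record

    def query(self, key):
        if key not in self.cache:
            result, related = self.call(key)   # one round trip
            self.cache[key] = result
            for rel_key, rel_record in related:
                self.cache.setdefault(rel_key, rel_record)
        return self.cache[key]
```
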
Smart network interface controller for caching distributed data
A request for data from a distributed table is received at a network interface controller system. The request for data from the distributed table is identified as a request to be processed by the network interface controller system instead of a processor of the host computer system. The requested data is requested and received from a memory of the host computer system via a computer interface of the network interface controller system. The received requested data is caused to be cached in a cache of the network interface controller system.
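The dispatch decision can be sketched as below; the `host_memory` mapping stands in for the NIC's access to host RAM over its computer interface, and the interfaces are assumptions, not the actual NIC firmware API:

```python
class SmartNIC:
    """Serves distributed-table requests from the NIC's own cache
    when possible, bypassing the host CPU; on a miss it fetches the
    data from host memory and caches it on the NIC."""

    def __init__(self, host_memory):
        self.host_memory = host_memory  # stand-in for host-RAM access
        self.cache = {}                 # NIC-resident cache

    def handle(self, key):
        if key in self.cache:
            return self.cache[key], 'nic-cache'   # host CPU untouched
        value = self.host_memory[key]             # fetch via interface
        self.cache[key] = value                   # cache on the NIC
        return value, 'host-memory'
```
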
Seamless resource consumption with efficient caching providing reduced lag time
Disclosed embodiments relate to systems and methods for reducing lag time for progressive consumption of data content. Techniques include receiving an indication of requested data (the indication comprising a data chunk size and a number of data chunks), accessing a data cache, and performing a fetching operation comprising at least one of: if the data cache is empty, obtaining a first portion of the requested data from a database; or, if the data cache is not empty, determining whether at least the first portion of the requested data is available in the data cache. Further techniques include providing the first portion of the data for consumption, identifying that a threshold has been reached, receiving updated values for the data chunk size and the number of data chunks, performing the fetching operation again based on the updated values, and providing a second portion of the requested data for consumption.
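The fetching operation can be sketched as below. For simplicity the data is pre-divided into chunks keyed by index (folding the chunk size into the chunking itself), and the prefetch-next-window step is an illustrative assumption about how the cache is kept warm between progressive requests:

```python
def fetch(cache, database, num_chunks, offset=0):
    """Serve `num_chunks` chunks starting at `offset`: from the cache
    when present, otherwise from the database; then prefetch the next
    window so the following request sees no lag."""
    out = []
    for i in range(offset, offset + num_chunks):
        if i not in cache:          # cache empty / chunk missing
            cache[i] = database[i]  # obtain from the database
        out.append(cache[i])        # serve from the cache
    # Warm the cache for the next progressive request.
    for i in range(offset + num_chunks, offset + 2 * num_chunks):
        if i in database:
            cache.setdefault(i, database[i])
    return out
```

When a threshold is reached and updated values arrive, the caller simply invokes `fetch` again with the new `num_chunks` and the advanced `offset`.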
SERVER SIDE DATA CACHE SYSTEM
In an example embodiment, a system and method to store and retrieve application data from a cache and a database are provided. The example method may comprise receiving location data associated with application data from a user device, using the location data to determine a cache or database on which the application data is stored, and requesting application data from the cache or database. The system and method may further include monitoring requests for application data associated with instructions having a set of characteristics, identifying application data as associated with the instructions having the set of characteristics, and requesting the application data based on receiving subsequent instructions sharing the same set of characteristics.
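A sketch of both halves of the method follows; the routing rule driven by the location data and the representation of instruction characteristics are illustrative assumptions:

```python
class ServerSideDataCache:
    """Routes application-data requests to the cache or the database
    based on accompanying location data, and tracks requests whose
    instructions share characteristics so matching data can be
    requested ahead of subsequent, similar instructions."""

    def __init__(self, cache, database):
        self.cache = cache
        self.database = database
        self.by_characteristics = {}  # characteristics -> data key

    def get(self, key, location):
        """Location data determines which store holds the data."""
        store = self.cache if location == 'cache' else self.database
        return store[key]

    def observe(self, characteristics, key):
        """Record which data a set of instruction characteristics used."""
        self.by_characteristics[characteristics] = key

    def prefetch_for(self, characteristics):
        """On subsequent instructions sharing the same characteristics,
        return the key of the data to request early (None if unseen)."""
        return self.by_characteristics.get(characteristics)
```
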
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
An object is to be able to handle more request information without increasing the load on a peer-to-peer database system. An information processing apparatus is provided, including an acquisition unit that acquires data provided from a P2P database on the basis of request information, and a storage control unit that controls storage of the data by a cache storage unit.
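The division of labor between the acquisition unit and the storage control unit can be sketched as below; `p2p_lookup` is a hypothetical stand-in for the P2P database query:

```python
class CachedP2PAccess:
    """Acquires data from a P2P database on the basis of request
    information and caches it, so repeated requests are answered
    without adding load on the P2P database system."""

    def __init__(self, p2p_lookup):
        self.p2p_lookup = p2p_lookup
        self.cache = {}  # cache storage unit

    def handle(self, request):
        if request in self.cache:          # P2P database not touched
            return self.cache[request]
        data = self.p2p_lookup(request)    # acquisition unit
        self.cache[request] = data         # storage control unit
        return data
```
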