Patent classifications
G06F16/24562
Time Optimized Communications
A time-optimizing communications system and method is provided, on the principle that “loose lips sink ships”. Orders receive “do by” parameters and “deliver by” times, and may be broken into parts according to those parameters and/or by prioritization, for delivery only when the recipient has a need to know. Time-sensitive and most-secret parts are communicated just in time; other data may be sent at randomized times, which can bias traffic on communications infrastructure toward bandwidth optimization and reduce the risk that adversaries decrypt a message quickly enough to frustrate the purposes of its orders. Parts may be broken into data blocks and routed and/or stored randomly. An array of pointers records details of their creation and/or storage locations, providing a key for retrieving the data blocks and/or reconstructing messages; timing is managed according to mission needs and priorities. The approach may also reduce peak demand on communications bandwidth.
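The block-scattering and pointer-array mechanism above can be sketched as follows. This is a minimal illustration, not the patented implementation; the slot space, block size, and function names are invented for the example.

```python
import random

def scatter_message(message, block_size, storage, rng=random.Random(0)):
    """Split a message into data blocks, store each at a random slot,
    and return an array of pointers (slot indices) that serves as the
    key for retrieving the blocks and reconstructing the message."""
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
    pointers = []
    for block in blocks:
        slot = rng.randrange(10**6)
        while slot in storage:           # avoid reusing an occupied slot
            slot = rng.randrange(10**6)
        storage[slot] = block
        pointers.append(slot)
    return pointers

def gather_message(pointers, storage):
    """Reconstruct the message by following the pointer array in order."""
    return "".join(storage[p] for p in pointers)

storage = {}
key = scatter_message("DELIVER AT DAWN", 4, storage)
assert gather_message(key, storage) == "DELIVER AT DAWN"
```

Without the pointer array, an adversary who intercepts individual blocks sees only fragments at unrelated locations; timing of when each block is sent can then be scheduled per the "do by" parameters.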
SYSTEM AND METHOD FOR PROVIDING BOTTOM-UP AGGREGATION IN A MULTIDIMENSIONAL DATABASE ENVIRONMENT
Systems and methods for supporting bottom-up aggregation in a multidimensional database computing environment. A dynamic flow is coupled with a data retrieval layer or data-fetching component which, in some environments, incorporates a kernel-based data structure, referred to herein as an odometer retriever, or odometer, that manages pointers to data blocks, contains control information, or otherwise operates as an array of arrays of pointers to stored members. This enables bottom-up aggregation of cube data which, for example with pure aggregating queries, provides considerable run-time improvement.
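The core idea, aggregating upward from stored cells rather than probing every potential child of each aggregate, can be sketched as below. This is an illustrative toy, assuming a two-dimensional cube and a simplified odometer-like array of arrays of stored members; it is not the claimed kernel structure.

```python
# Stored leaf cells of a tiny 2-D cube, keyed by (product, region).
cells = {("tv", "east"): 5, ("tv", "west"): 3, ("radio", "east"): 2}

# Odometer-like structure: an array of arrays of (pointers to) the
# stored members of each dimension.
odometer = [sorted({k[0] for k in cells}), sorted({k[1] for k in cells})]

def aggregate_bottom_up(cells):
    """Roll stored leaf cells up into every ancestor combination in one
    pass over existing data (bottom-up), instead of expanding each
    aggregate into all potential children (top-down)."""
    totals = {}
    for (prod, region), v in cells.items():
        for key in [(prod, "ALL"), ("ALL", region), ("ALL", "ALL")]:
            totals[key] = totals.get(key, 0) + v
    return totals

totals = aggregate_bottom_up(cells)
assert totals[("ALL", "ALL")] == 10
assert totals[("tv", "ALL")] == 8
```

For a pure aggregating query, the cost is proportional to the number of stored cells, not to the (usually far larger) number of possible member combinations.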
Effective and scalable building and probing of hash tables using multiple GPUs
Described approaches provide for effectively and scalably using multiple GPUs to build and probe hash tables and materialize results of probes. Random memory accesses by the GPUs to build and/or probe a hash table may be distributed across GPUs and executed concurrently using global location identifiers. A global location identifier may be computed from data of an entry and identify a global location for an insertion and/or probe using the entry. The global location identifier may be used by a GPU to determine whether to perform an insertion or probe using an entry, and/or where the insertion or probe is to be performed. To coordinate GPUs in materializing results of probing a hash table, a global offset into the global output buffer may be maintained in memory accessible to each of the GPUs, or the GPUs may compute global offsets using an exclusive sum of the local output buffer sizes.
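The partitioning-by-identifier and exclusive-sum ideas can be illustrated with a single-process sketch in which each "GPU" is simulated by a partitioned table. The identifier layout and buffer handling here are assumptions for the example, not the described CUDA implementation.

```python
def exclusive_prefix_sum(sizes):
    """Exclusive sum: each element is the total of all earlier ones."""
    out, total = [], 0
    for s in sizes:
        out.append(total)
        total += s
    return out

NUM_GPUS = 4

def global_location(key):
    """Global location identifier computed from an entry's data: one
    part selects the owning GPU, the rest selects the slot there."""
    h = hash(key) & 0xFFFF
    return h % NUM_GPUS, h // NUM_GPUS

# Build: each "GPU" performs only the insertions whose identifier maps
# to its partition, so random accesses are distributed and concurrent.
tables = [dict() for _ in range(NUM_GPUS)]
for key, val in [("a", 1), ("b", 2), ("c", 3)]:
    gpu, _slot = global_location(key)
    tables[gpu][key] = val

# Materialize: each GPU fills a local output buffer, then an exclusive
# sum of the local buffer sizes gives each GPU a non-overlapping offset
# into one global output buffer.
local = [[(k, v) for k, v in t.items()] for t in tables]
offsets = exclusive_prefix_sum([len(b) for b in local])
global_out = [None] * sum(len(b) for b in local)
for gpu, buf in enumerate(local):
    for i, row in enumerate(buf):
        global_out[offsets[gpu] + i] = row

assert sorted(r[0] for r in global_out) == ["a", "b", "c"]
```

The exclusive sum lets the GPUs write their results without contending on a single shared atomic offset, which is the coordination alternative the abstract mentions.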
USING SELF-MAINTAINING STRUCTURE INFORMATION FOR FASTER DATA ACCESS
A method, a system, and a computer program product for accessing data. A schema representing a structure of an object in a plurality of objects stored in a storage location is generated. Each object includes one or more data elements. Each schema identifies one or more data elements of the object, an offset location of each data element of the object, and a value of each data element of the object. A query requesting access to one or more data elements is received. A generated schema, in a plurality of generated schemas, representing the queried object is identified. The requested data elements are then accessed and retrieved using the identified schema.
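A minimal sketch of such a self-maintaining schema follows, assuming a flat `name=value;` object encoding invented for the example. The schema records each element's name, byte offset, and value, so a query can seek directly rather than reparsing the whole object.

```python
def build_schema(raw):
    """Generate a schema for one stored object: for each data element,
    record its offset within the raw object and its value."""
    schema = {}
    offset = 0
    for field in raw.split(";"):
        if field:
            name, value = field.split("=")
            schema[name] = (offset, value)
        offset += len(field) + 1   # +1 for the ';' separator
    return schema

raw = "id=7;color=red;size=XL"
schema = build_schema(raw)

# A query for "color" is answered from the schema, and the recorded
# offset points back at the element's location in the stored object.
assert schema["color"] == (5, "red")
assert raw[schema["size"][0]:].startswith("size=XL")
```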
Indirect block containing references to blocks of a persistent fingerprint index
In some examples, a system performs data deduplication using a deduplication fingerprint index in a hash data structure comprising a plurality of blocks, wherein the hash data structure is stored in persistent storage, and a block of the plurality of blocks comprises fingerprints computed based on content of respective data units. The system uses an indirect block in a memory to access a given block of the plurality of blocks in the hash data structure, the indirect block containing references to blocks of the hash data structure containing the deduplication fingerprint index, and the references indicating storage locations of the plurality of blocks in the persistent storage.
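The indirect-block lookup can be sketched as below. The storage layout, block contents, and names are illustrative assumptions; the point is that the in-memory indirect block holds only references (storage locations) to the persistent index blocks, which are read on demand.

```python
import hashlib

def fingerprint(data):
    """Fingerprint computed from the content of a data unit."""
    return hashlib.sha256(data).hexdigest()[:16]

# "Persistent storage": blocks of the fingerprint index, keyed by
# storage location (sketch only).
persistent = {
    "loc0": {fingerprint(b"chunk-a"): "unit-a"},
    "loc1": {fingerprint(b"chunk-b"): "unit-b"},
}

# In-memory indirect block: references indicating the storage
# locations of the index blocks in persistent storage.
indirect_block = ["loc0", "loc1"]

def is_duplicate(data):
    """Follow the indirect block's references to each index block and
    check whether the incoming unit's fingerprint is already present."""
    fp = fingerprint(data)
    for loc in indirect_block:
        block = persistent[loc]      # one persistent read per reference
        if fp in block:
            return True
    return False

assert is_duplicate(b"chunk-a")
assert not is_duplicate(b"chunk-c")
```

Keeping only the small indirect block in memory means the full fingerprint index can stay in persistent storage, with individual blocks fetched as deduplication lookups require them.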
Media Sharing Across Service Providers
Embodiments including methods and apparatus to share files and file recommendations are disclosed. Data is received indicating a particular media item from a first service provider, where the particular media item is accessible from the first service provider according to a first pointer. A second pointer is identified in a database according to which the particular media item is accessible from a second service provider. Data indicating the second pointer is transmitted to a media playback system, via at least one of a WAN or a LAN.
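The pointer-translation step can be sketched as a lookup in a cross-provider mapping database. The provider names, item identifiers, and pointer strings below are invented for illustration.

```python
# Mapping database: (media item, service provider) -> pointer under
# which that provider makes the item accessible.
pointer_db = {
    ("song-123", "provider_a"): "a://track/123",
    ("song-123", "provider_b"): "b://media/9f8e",
}

def translate_pointer(item_id, from_provider, to_provider, db):
    """Given an item known via one provider's pointer, identify the
    pointer under which the same item is accessible from another
    provider; the result is what gets sent to the playback system."""
    if (item_id, from_provider) not in db:
        raise KeyError("item not known on source provider")
    return db[(item_id, to_provider)]

second = translate_pointer("song-123", "provider_a", "provider_b", pointer_db)
assert second == "b://media/9f8e"
```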
Automated materialized view table generation and maintenance
One or more computing devices, systems, and/or methods for automated materialized view table generation and maintenance are provided. A log, comprising queries and latencies of processing the queries, is evaluated to identify a list of combinations of fields that occur greater than a threshold frequency and/or occur in queries having latencies greater than a threshold latency. A materialized view generation script is executed against a main database to generate a materialized view table associated with a combination of one or more fields from the list. A middleware component is configured to selectively direct a query to the main database or to the materialized view table based upon whether the materialized view table comprises preliminary query results for fields specified by the query.
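The log-evaluation and routing logic can be sketched as follows. The thresholds, field names, and latencies are invented example values, and the "middleware" is reduced to a routing function.

```python
from collections import Counter

LAT_THRESHOLD = 100    # ms; illustrative threshold, not from the claims
FREQ_THRESHOLD = 2     # occurrences; illustrative

# Query log: (fields referenced by the query, latency in ms).
log = [
    ({"country", "device"}, 250),
    ({"country", "device"}, 300),
    ({"country", "device"}, 120),
    ({"user_id"}, 10),
]

def candidate_field_combos(log):
    """Evaluate the log for field combinations that occur more than
    FREQ_THRESHOLD times in queries slower than LAT_THRESHOLD --
    candidates for materialized view table generation."""
    counts = Counter(frozenset(f) for f, lat in log if lat > LAT_THRESHOLD)
    return [set(c) for c, n in counts.items() if n > FREQ_THRESHOLD]

def route(query_fields, mv_fields):
    """Middleware sketch: direct the query to the materialized view
    table only when it holds results for every requested field."""
    return "materialized_view" if set(query_fields) <= mv_fields else "main_db"

combos = candidate_field_combos(log)
assert {"country", "device"} in combos
assert route({"country"}, combos[0]) == "materialized_view"
assert route({"user_id"}, combos[0]) == "main_db"
```

In the described system the selected combinations would then drive a view-generation script against the main database; here only the selection and routing decisions are shown.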
Methods for updating reference count and shared objects in a concurrent system
A method for referencing and updating objects in a shared resource environment. A reference counter is incremented for every use of an object subtype in a session and decremented for every release of an object subtype in a session. A session counter is incremented upon the first instance of fetching an object type into a session cache and decremented upon having no instances of the object type in use in the session. When both the reference counter and the session counter are zero, the object type may be removed from the cache. When the object type needs to be updated, it is cloned into a local cache, and changes are made on the local copy. The global cache is then locked to all other users, the original object type is detached, and the cloned object type is swapped into the global cache, after which the global cache is unlocked.
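A compressed sketch of the two counters and the clone-and-swap update follows; the class and function names are invented, and session bookkeeping is reduced to the counters themselves.

```python
import threading

class CachedType:
    """An object type held in a cache, with the two counters above."""
    def __init__(self, payload):
        self.payload = payload
        self.ref_count = 0       # uses of subtypes within sessions
        self.session_count = 0   # sessions currently holding the type

global_cache = {}
cache_lock = threading.Lock()

def evictable(t):
    """Removable from cache only when both counters are zero."""
    return t.ref_count == 0 and t.session_count == 0

def update_type(name, new_payload):
    """Clone into a local copy, apply changes there, then briefly lock
    the global cache, detach the original, and swap in the clone."""
    clone = CachedType(new_payload)       # local-cache copy, modified freely
    with cache_lock:                      # lock out all other users
        global_cache.pop(name, None)      # detach the original type
        global_cache[name] = clone        # swap the clone into place
    # lock released here: global cache unlocked again

t = CachedType("v1")
global_cache["Order"] = t
assert evictable(t)
update_type("Order", "v2")
assert global_cache["Order"].payload == "v2"
```

Readers holding the original detached object keep a consistent (if stale) view, while the lock window covers only the swap, not the edit itself.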
Hardware-protected reference count-based memory management using weak references
A method for managing memory, comprising: maintaining a strong reference count for a first object; establishing a first reference from the first object to a second object; establishing a second reference from the second object to the first object, wherein the second reference is a weak reference that does not increase the strong reference count of the first object; detecting that the strong reference count of the first object has reached zero; in response to detecting that the strong reference count has reached zero, invoking a corresponding action.
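The counting scheme can be sketched in plain Python as below (the hardware protection of the claims is out of scope here; the structure, names, and the choice of "action" are assumptions for the example). The key point is that the weak back-reference does not increase the strong count, so the count can still reach zero despite the cycle.

```python
class Obj:
    def __init__(self, name, on_zero):
        self.name = name
        self.strong_count = 0
        self.on_zero = on_zero   # corresponding action at count zero
        self.strong_refs = []    # outgoing strong references
        self.weak_refs = []      # outgoing weak references

def add_strong(src, dst):
    src.strong_refs.append(dst)
    dst.strong_count += 1        # a strong reference bumps the count

def add_weak(src, dst):
    src.weak_refs.append(dst)    # weak reference: count unchanged

def release_strong(src, dst):
    src.strong_refs.remove(dst)
    dst.strong_count -= 1
    if dst.strong_count == 0:
        dst.on_zero(dst)         # invoke the corresponding action

reclaimed = []
first = Obj("first", on_zero=lambda o: reclaimed.append(o.name))
second = Obj("second", on_zero=lambda o: reclaimed.append(o.name))
root = Obj("root", on_zero=lambda o: None)

add_strong(root, first)
add_strong(first, second)    # first reference: first -> second
add_weak(second, first)      # second reference back to first is weak

release_strong(root, first)
assert first.strong_count == 0 and "first" in reclaimed
```

Had the back-reference been strong, `first` would have held a count of 2 and the cycle would never have let it reach zero.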
Distributed tagging of data in a hybrid cloud environment
A system includes a first application and a storage layer running on a cloud computing device, where the first application includes a service layer to interface over a network with a browser application running on a client computing device to provide the browser application access to the first application and a tagging module to interface over a communication connector with a second application running on a remote computing device having a database. The service layer receives requests for data from the first application and provides the requested data from the database. The tagging module is configured to tag a record of the data in response to tag requests from the first application, where the record of the data is tagged by generating an item reference to the record to enable a customized view of the data. The storage layer is configured to store the item references.
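The tagging path can be sketched as below, with the records, tag names, and reference shape invented for illustration; the cloud/remote split is collapsed into in-process structures.

```python
import uuid

# Records held by the second application's database (sketch).
records = {1: {"name": "invoice-001"}, 2: {"name": "invoice-002"}}

# Storage layer: stores the generated item references.
storage_layer = []

def tag_record(record_id, tag):
    """Tag a record by generating an item reference to it (not a copy),
    which is what enables a customized view of the data."""
    item_ref = {"ref_id": str(uuid.uuid4()), "record_id": record_id, "tag": tag}
    storage_layer.append(item_ref)
    return item_ref

def customized_view(tag):
    """Resolve the stored item references for a tag back to records."""
    return [records[r["record_id"]] for r in storage_layer if r["tag"] == tag]

tag_record(1, "urgent")
tag_record(2, "urgent")
assert [r["name"] for r in customized_view("urgent")] == ["invoice-001", "invoice-002"]
```

Because the view is built from references, a later change to a tagged record is visible through every view that references it.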