Patent classifications
G06F16/24552
RESOURCE PROVISIONING SYSTEMS AND METHODS
A method for a first set of processors and a second set of processors comprises processing a set of queries using the first set of processors and, as a result of a change in utilization of the first set of processors, processing the set of queries using the second set of processors. The change in processors is independent of any change in storage resources, the storage resources being shared by the first set of processors and the second set of processors.
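A hedged Python sketch of the claimed idea: query processing moves from one set of processors to another based on utilization, while both sets read the same shared storage layer. All names and the threshold are illustrative, not from the patent.

```python
# Illustrative model: compute scales independently of shared storage.
SHARED_STORAGE = {"orders": [1, 2, 3]}  # storage shared by both processor sets

class ProcessorSet:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.active = name, capacity, 0

    def utilization(self):
        return self.active / self.capacity

    def run(self, query):
        # Both processor sets resolve queries against the same shared storage.
        return (self.name, SHARED_STORAGE[query])

def route(query, primary, secondary, threshold=0.8):
    # Only the processor set changes on high utilization;
    # SHARED_STORAGE is untouched.
    target = secondary if primary.utilization() >= threshold else primary
    return target.run(query)

first = ProcessorSet("first", capacity=10)
second = ProcessorSet("second", capacity=10)
first.active = 9  # first set is over-utilized
print(route("orders", first, second))  # -> ('second', [1, 2, 3])
```

The key property of the claim is visible in the sketch: `route` switches compute targets without ever touching the storage dictionary.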
METHOD AND APPARATUS TO REDUCE CACHE STAMPEDING
An apparatus comprises a memory having a data cache stored therein and a control circuit operably coupled thereto. The control circuit is configured to update the data cache in accordance with a scheduled update time. In that regard, by one approach, the control circuit computes selected entries for the data cache prior to the scheduled update time pursuant to a prioritization scheme to provide a substitute data cache. At the scheduled update time, the control circuit switches the substitute data cache for the data cache such that data queries made subsequent to the scheduled update time access the substitute data cache and not the data cache.
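A minimal sketch of the swap technique, assuming a key/value cache and a priority-ordered list of hot keys (both illustrative): a substitute cache is built ahead of the deadline and swapped in atomically, so post-deadline queries never hit a cold cache.

```python
import threading

class SwappableCache:
    """Sketch: precompute a substitute cache before the scheduled update
    time, then switch it in so later queries see only warm entries."""
    def __init__(self, loader, hot_keys):
        self.loader = loader          # recomputes the value for a key
        self.hot_keys = hot_keys      # prioritization scheme: hottest first
        self._cache = {k: loader(k) for k in hot_keys}
        self._lock = threading.Lock()

    def prepare_substitute(self):
        # Done ahead of the scheduled update time, in priority order.
        return {k: self.loader(k) for k in self.hot_keys}

    def swap(self, substitute):
        # At the scheduled time, switch the substitute cache in atomically.
        with self._lock:
            self._cache = substitute

    def get(self, key):
        with self._lock:
            return self._cache[key]
```

Because the swap replaces the whole cache reference under a lock, readers see either the old cache or the fully built substitute, never a half-updated one, which is what avoids the stampede of concurrent recomputations.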
Efficient traversal of hierarchical datasets
In one embodiment, a method comprises receiving a request for a particular user identification (ID) to perform a particular operation on a particular data object. An entitlement cache associates each operation that the particular user ID is entitled to perform with a first encoding of a tuple of a plurality of tuples. An object mapping cache associates each tuple of the plurality of tuples with a second encoding of each tuple of the plurality of tuples. An object mapping is used to determine a first tuple. The object mapping cache is used to determine a first vector of one or more left values based on the first tuple. The entitlement cache is used to determine a second vector of one or more value pairs. In response to identifying a match between the first vector and the second vector, the particular user ID is granted access to the particular data object.
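A heavily hedged sketch of the access check. It assumes the "left values" and "value pairs" refer to a nested-set style hierarchy encoding (an assumption; the patent does not spell this out here), where a user is entitled if some left value on the object's path falls inside an interval the user holds for the operation. All cache contents are illustrative.

```python
# Illustrative caches; real systems would populate these from the database.
object_mapping = {"doc42": ("folderA", "doc42")}        # object -> tuple
object_mapping_cache = {("folderA", "doc42"): [3, 4]}   # tuple -> left values
entitlement_cache = {("alice", "read"): [(2, 7)]}       # (user, op) -> (lo, hi) pairs

def is_entitled(user, op, obj):
    tup = object_mapping[obj]                    # object mapping -> first tuple
    lefts = object_mapping_cache[tup]            # first vector of left values
    pairs = entitlement_cache.get((user, op), [])  # second vector of value pairs
    # Grant access on a match between the two vectors.
    return any(lo <= left <= hi for left in lefts for lo, hi in pairs)

print(is_entitled("alice", "read", "doc42"))  # -> True
```

The interval test is what makes the traversal "efficient": containment in the hierarchy is decided with comparisons against cached encodings rather than by walking the tree.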
Cache conscious techniques for generation of quasi-dense grouping codes of compressed columnar data in relational database systems
Herein are techniques for dynamic aggregation of results of a database request, including concurrent grouping of result items in memory based on quasi-dense keys. Each of many computational threads concurrently performs as follows. A hash code is calculated that represents a particular natural grouping key (NGK) for an aggregate result of a database request. Based on the hash code, the thread detects that a set of distinct NGKs that are already stored in the aggregate result does not contain the particular NGK. A distinct dense grouping key for the particular NGK is statefully generated. The dense grouping key is bound to the particular NGK. Based on said binding, the particular NGK is added to the set of distinct NGKs in the aggregate result.
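The per-thread steps above can be sketched as follows. This is a simplified single-table model: a lock stands in for the patent's concurrent-thread machinery, `crc32` stands in for whatever hash function the system uses, and the table keyed by hash code plays the role of the set of distinct NGKs in the aggregate result.

```python
import threading
from zlib import crc32

class DenseGroupingKeys:
    """Sketch: each natural grouping key (NGK) is hashed; if unseen, a new
    small dense integer code is statefully generated and bound to it."""
    def __init__(self):
        self._table = {}              # hash code -> [(NGK, dense key), ...]
        self._next = 0
        self._lock = threading.Lock()

    def code_for(self, ngk):
        h = crc32(repr(ngk).encode())          # hash code for the NGK
        with self._lock:
            bucket = self._table.setdefault(h, [])
            for key, code in bucket:           # already among the distinct NGKs?
                if key == ngk:
                    return code
            code = self._next                  # statefully generate a dense key
            self._next += 1
            bucket.append((ngk, code))         # bind the dense key to the NGK
            return code
```

Dense keys assigned this way are small consecutive integers, so downstream aggregation can index arrays by grouping code instead of probing a hash table per row.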
Method and apparatus for stress management in a searchable data service
The searchable data service may provide a searchable index to a backend data store, and an interface to build and query the searchable index, that enables client applications to search for and retrieve locators for stored entities in the backend data store. Embodiments of the searchable data service may implement a distributed stress management mechanism that may provide functionality including, but not limited to, the automated monitoring of critical resources, analysis of resource usage, and decisions on and performance of actions to keep resource usage within comfort zones. In one embodiment, in response to usage of a particular resource being detected as out of the comfort zone on a node, an action may be performed to transfer at least part of the resource usage for the local resource to another node that provides a similar resource.
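A hedged sketch of the comfort-zone mechanism, assuming utilization is a single number per node and the comfort zone is a fixed band (both illustrative): nodes above the band shed their excess usage to the least-loaded peer.

```python
COMFORT_ZONE = (0.2, 0.8)   # illustrative lower/upper utilization bounds

def rebalance(nodes):
    """nodes: dict of name -> utilization in [0, 1]. Moves excess usage
    from nodes above the comfort zone to the least-loaded peer and
    returns the transfers performed."""
    lo, hi = COMFORT_ZONE
    moves = []
    for name, usage in list(nodes.items()):
        if usage > hi:
            # Pick the peer providing the similar resource with most headroom.
            peer = min((n for n in nodes if n != name), key=nodes.get)
            excess = usage - hi
            nodes[name] -= excess       # bring the hot node back into the zone
            nodes[peer] += excess       # transfer part of the resource usage
            moves.append((name, peer, round(excess, 2)))
    return moves
```

A real implementation would act on measured resources (CPU, disk, query load) and coordinate transfers between nodes; the sketch only shows the decide-and-transfer shape of the mechanism.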
INTERACTIVE ANALYTICS WORKFLOW WITH INTEGRATED CACHING
A data analytics application receives a workflow that includes a sequence of tools. Each tool performs a data analytics function. The data analytics application processes a data file using the sequence of tools to generate a result item representing an outcome of the processing of the data file. The data analytics application stores one or more metadata files, each of which includes data generated by an interactive tool in the sequence during the processing of the data file. The data analytics application receives a user input through an interactive element associated with an interactive tool in the sequence. The interactive element can modify an operation of the interactive tool based on the user input. The data analytics application retrieves the metadata file for the interactive tool and processes the metadata file by using a subset of the sequence of tools and the user input to generate a different result item.
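The caching step above can be sketched as follows. The three-tool workflow, the cached "metadata file" as a plain in-memory dict, and the user parameter are all illustrative: the point is that a user tweak re-runs only the interactive tool and its downstream subset, starting from cached intermediate data.

```python
def run_workflow(tools, data, cache, interactive_index):
    # Run the full sequence once, caching the input of the interactive tool
    # (the "metadata file" for that tool).
    for i, tool in enumerate(tools):
        if i == interactive_index:
            cache[i] = data
        data = tool(data)
    return data

def rerun_from_cache(tools, cache, interactive_index, user_param):
    # Re-run only the interactive tool, modified by the user input, and the
    # downstream subset of tools, starting from the cached metadata.
    data = tools[interactive_index](cache[interactive_index], user_param)
    for tool in tools[interactive_index + 1:]:
        data = tool(data)
    return data

# Illustrative three-tool workflow; the middle tool is interactive.
tools = [lambda xs: [x * 2 for x in xs],
         lambda xs, k=1: [x + k for x in xs],
         sum]
```

The upstream tools never run again after the first pass, which is the interactive-latency win the abstract describes.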
Maintenance of clustered materialized views on a database system
A cluster view method of a database to perform compaction and clustering of database objects, such as database materialized views, is shown. The database can comprise a cache to store changes to storage units of tables of the database objects. The cluster view method can implement compaction to remove data based on the cache and clustering to group the data of the materialized view.
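A hedged sketch of the maintenance pass, assuming the change cache holds `(op, row_id, row)` entries and rows are dicts with an `id` field (both assumptions for illustration): compaction drops rows invalidated by cached changes, then clustering reorders the surviving rows by a clustering key.

```python
def maintain_view(rows, change_cache, cluster_key):
    """Apply cached changes to a materialized view's rows, then re-cluster."""
    deleted = {rid for op, rid, _ in change_cache if op == "delete"}
    upserts = {rid: row for op, rid, row in change_cache if op == "upsert"}
    # Compaction: remove deleted rows and replace updated ones from the cache.
    kept = [upserts.pop(r["id"], r) for r in rows if r["id"] not in deleted]
    kept.extend(upserts.values())        # rows newly inserted via the cache
    # Clustering: group/order the view's data by the clustering key.
    return sorted(kept, key=cluster_key)
```

A production system would rewrite storage units rather than a Python list, but the two phases (cache-driven removal, then key-ordered grouping) mirror the abstract.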
Dashboard loading from a cloud-based data warehouse cache
Dashboard loading from a cloud-based data warehouse cache, including determining that a result for a first query is stored in a cache of a cloud-based data warehouse; sending, in response to the result being stored in the cache, to the cloud-based data warehouse, a request for the result from the cache; and providing, based on the result for the first query, one or more dashboard visualizations.
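The three claimed steps reduce to a small sketch, with the warehouse cache modeled as a dict and `execute` standing in for actually running the query against the warehouse (both illustrative):

```python
def load_dashboard(query, warehouse_cache, execute):
    # Step 1: determine whether the query's result is stored in the cache.
    if query in warehouse_cache:
        result = warehouse_cache[query]   # step 2: request the cached result
    else:
        result = execute(query)           # fall back to running the query
        warehouse_cache[query] = result
    # Step 3: provide dashboard visualizations based on the result.
    return [f"chart:{row}" for row in result]
```

The latency win is entirely in the first branch: a cached result skips query execution in the warehouse.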
Synthesizing disparate database entries for hardware component identification
A device retrieves historical data and new data, each entry including a respective hardware component identifier and a respective associated value. The device creates a synthesized set of data comprising subsets for anomalous data, data that is associated with an attenuation signal, and other data. The device discards the anomalous data and weights the data associated with an attenuation signal. The device generates a searchable database, the searchable database including each hardware component named by an entry of the synthesized set of data, along with an associated value determined based on the weighted value of the entry. The device receives user input of a search query, and outputs search results based on a comparison of the search query to entries of the searchable database.
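A hedged sketch of the synthesis pipeline, with the anomaly and attenuation tests passed in as predicates and a simple substring search (all illustrative; the patent does not specify these):

```python
def synthesize(historical, new, is_anomalous, is_attenuated, weight=0.5):
    """Merge (component_id, value) entries: discard anomalous values,
    down-weight attenuation-affected ones, and build a searchable map."""
    db = {}
    for component, value in historical + new:
        if is_anomalous(value):
            continue                       # discard the anomalous data
        if is_attenuated(value):
            value *= weight                # weight the attenuation-signal data
        db[component] = value
    return db

def search(db, query):
    # Compare the user's search query against the database entries.
    return {c: v for c, v in db.items() if query.lower() in c.lower()}
```

The separation into discard / weight / keep subsets is what lets disparate historical and new entries land in one consistently valued database.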
METHOD AND SYSTEM OF USING A LOCAL HOSTED CACHE AND CRYPTOGRAPHIC HASH FUNCTIONS TO REDUCE NETWORK TRAFFIC
The described method and system enables a client at a branch office to retrieve data from a local hosted cache instead of an application server over a WAN to improve latency and reduce overall WAN traffic. A server at the data center may be adapted to provide either a list of hashes or the requested data based on whether a hosted cache system is enabled. A hosted cache at the client side may provide the data to the client based on the hashes. The hashes may be generated to provide a fingerprint of the data which may be used to index the data in an efficient manner.
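A hedged sketch of the branch-office flow, assuming SHA-256 as the fingerprint (the patent only says "cryptographic hash") and modeling the hosted cache and the WAN fetch as dict lookups: the server sends hashes, the client serves hits locally, and only misses cross the WAN.

```python
import hashlib

def block_hash(block: bytes) -> str:
    # Fingerprint of the data, used to index it in the hosted cache.
    return hashlib.sha256(block).hexdigest()

def fetch(hashes, hosted_cache, fetch_over_wan):
    """Resolve a server-provided hash list against the local hosted cache;
    only cache misses go over the WAN to the application server."""
    blocks, wan_requests = [], 0
    for h in hashes:
        if h in hosted_cache:
            blocks.append(hosted_cache[h])   # served locally at the branch
        else:
            block = fetch_over_wan(h)        # miss: go to the data center
            hosted_cache[h] = block          # warm the hosted cache
            blocks.append(block)
            wan_requests += 1
    return b"".join(blocks), wan_requests
```

WAN traffic shrinks because the hash list is far smaller than the data, and repeated content at the branch is reassembled from the cache by fingerprint alone.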