Patent classifications
G06F16/2433
Data partitioning and parallelism in a distributed event processing system
An event processing system for processing events in an event stream is disclosed. The system is configured for determining a stage for a continuous query language (CQL) query being processed by an event processing system and/or determining a stage type associated with the stage. The system is also configured for determining a transformation to be computed for the stage based at least in part on the stage type and/or determining a classification for the CQL query based at least in part on a plurality of rules. The system can also be configured for generating a transformation in a Directed Acyclic Graph (DAG) of a data transformation pipeline for the stage based at least in part on the partitioning criteria for the stage. In some examples, the system can also be configured for determining a partitioning of the stage based at least in part on the transformation.
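The stage-to-transformation mapping described above can be sketched roughly as follows. The stage types, the rule table, and all names here are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical rules mapping a stage type to partitioning criteria.
# The patent names no concrete stage types; these are assumptions.
STAGE_PARTITIONING_RULES = {
    "group_by": "hash",                 # key-based stages hash-partition on keys
    "window": "key_range",              # window stages partition by key range
    "stateless_filter": "round_robin",  # stateless stages need no key affinity
}

@dataclass
class DagNode:
    """One transformation node in the DAG of the data transformation pipeline."""
    stage_name: str
    transformation: str
    partitioning: str
    children: list = field(default_factory=list)

def build_stage_node(stage_name: str, stage_type: str) -> DagNode:
    # Determine the transformation and partitioning for a stage from its type.
    partitioning = STAGE_PARTITIONING_RULES.get(stage_type, "round_robin")
    return DagNode(stage_name, f"{stage_type}_transform", partitioning)

node = build_stage_node("stage1", "group_by")
```

A real planner would chain such nodes into the full DAG and apply the query-classification rules before choosing a partitioning; this sketch only shows the per-stage decision.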
STORAGE CLUSTER CONFIGURATION
Storage cluster configuration for computing resources of a storage system is disclosed. A cluster configuration can be based on client-indicated cluster criteria. Further, a cluster configuration can be based on non-client-indicated criteria, such as system requirements, regulatory compliance, and industry best practices. Determined candidate cluster configurations that can satisfy client criteria can be organized according to a selection preference, to enable selection of a preferred cluster configuration from the candidate cluster configurations. Candidate cluster configurations can result from recursive combinatorial searching, with pruning, of an entity space resulting from an ontological analysis of storage system computing resources. Pruning can be accelerated based on heuristic selection of a fork attribute. A K-D tree subjected to dimensional normalization can be employed to interpolate an attribute value. Interpolation can be performed from predetermined sets of data, for example from storage system models or historical storage system performance.
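The interpolation step described above can be sketched as follows. For brevity, a brute-force nearest-neighbor search stands in for an actual K-D tree, and the data points (node count, disk count, IOPS) are invented for illustration:

```python
import math

def normalize(points):
    # Dimensional normalization: scale every dimension to [0, 1] so that
    # no single attribute dominates the distance metric.
    dims = len(points[0])
    mins = [min(p[d] for p in points) for d in range(dims)]
    maxs = [max(p[d] for p in points) for d in range(dims)]
    spans = [(mx - mn) or 1.0 for mn, mx in zip(mins, maxs)]
    return [tuple((p[d] - mins[d]) / spans[d] for d in range(dims))
            for p in points]

def interpolate(points, values, query, k=2):
    # Normalize the data points and the query together, find the k nearest
    # neighbors, and interpolate by inverse-distance weighting.
    *norm_pts, norm_q = normalize(points + [query])
    nearest = sorted(
        (math.dist(p, norm_q), v) for p, v in zip(norm_pts, values)
    )[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

# e.g. estimate IOPS for a hypothetical (node count, disk count) point
pts = [(4, 100), (8, 200), (16, 400)]
iops = [10_000, 22_000, 45_000]
est = interpolate(pts, iops, (8, 200))
```

A K-D tree would replace the sorted scan to make the neighbor lookup logarithmic; the normalization and weighting steps are unchanged.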
DATA MIGRATION BY QUERY CO-EVALUATION
Techniques are disclosed to migrate data via query co-evaluation. In various embodiments, input data associated with a source database S and a target schema T to which the input data is to be migrated is received. A set of relational conjunctive queries from target schema T to source database S is received. Query co-evaluation is performed on the received set of relational conjunctive queries to transition data from source database S to target schema T.
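A toy illustration of the idea: a relation of the target schema is defined by a conjunctive query over the source tables, and migration evaluates that query against the source. The schema, table contents, and query below are hypothetical; the patent gives no concrete example:

```python
# Source database S: two relations, stored as lists of tuples.
source_person = [("p1", "Ada"), ("p2", "Alan")]        # Person(id, name)
source_addr = [("p1", "London"), ("p2", "Cambridge")]  # Addr(id, city)

def migrate():
    # Target relation Contact(name, city) defined by the conjunctive query
    #   Contact(n, c) :- Person(i, n), Addr(i, c)
    # evaluated here as an equi-join over the source database.
    return [(name, city)
            for pid, name in source_person
            for aid, city in source_addr
            if pid == aid]

contacts = migrate()
```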
Query method, apparatus, electronic device and storage medium
The present disclosure discloses a query method, an apparatus, an electronic device, and a storage medium, and relates to the technical field of spatio-temporal big data in big data technologies. The specific implementation scheme is as follows: receiving a query request sent by a terminal device, where the query request requests the dynamic association relationships of a target entity and includes an identifier of the target entity and a query start time and end time; determining at least one time bucket to be queried in a query database according to the query start time and end time, where each time bucket stores the dynamic association relationships of the target entity in the time period corresponding to that time bucket; and querying the dynamic association relationships of the target entity in the at least one time bucket according to the identifier of the target entity.
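The bucketed storage and lookup described above can be sketched as follows. The one-day bucket granularity and the in-memory dict standing in for the query database are assumptions for illustration:

```python
from datetime import datetime, timedelta

BUCKET = timedelta(days=1)  # assumed bucket granularity

def bucket_key(ts: datetime) -> datetime:
    # Truncate a timestamp to the start of its time bucket.
    return datetime(ts.year, ts.month, ts.day)

store = {}  # {bucket_start: {entity_id: [relationships]}}

def insert(entity_id, ts, relationship):
    store.setdefault(bucket_key(ts), {}).setdefault(entity_id, []).append(relationship)

def query(entity_id, start: datetime, end: datetime):
    # Determine every bucket overlapping [start, end], then collect the
    # target entity's relationships from each bucket.
    results = []
    b = bucket_key(start)
    while b <= end:
        results.extend(store.get(b, {}).get(entity_id, []))
        b += BUCKET
    return results

insert("e1", datetime(2024, 1, 1, 5), "relA")
insert("e1", datetime(2024, 1, 2, 6), "relB")
rels = query("e1", datetime(2024, 1, 1), datetime(2024, 1, 1, 23))
```

Only the buckets overlapping the requested time range are touched, which is the point of the partitioning: the query cost scales with the range, not the table.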
DATA DISTRIBUTION PROCESS CONFIGURATION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure relates to data distribution process configuration methods and apparatuses, electronic devices, and storage media. The method includes: in response to detecting a trigger operation that represents creating a service process of a data distribution service, displaying a workbench of the service process, where the workbench includes an area for displaying node plug-ins and an area for displaying a canvas; in response to detecting a trigger operation that represents dragging a node plug-in to the canvas, obtaining node plug-ins of the service process; and in response to detecting a trigger operation that represents connecting the node plug-ins, obtaining configuration information of the service process, where the configuration information represents a data distribution process from a data source end to a data receiving end. In the embodiments, the device data at the data source end can be distributed to various receiving ends according to the configuration information, and the rule setting can be completed without requiring the user to focus on the specific implementation logic.
GRAPH OPERATIONS ENGINE FOR TENANT MANAGEMENT IN A MULTI-TENANT SYSTEM
Methods, systems, and computer storage media for providing a multi-tenant system that executes graph language requests using graph operations of a graph language. A graph language request, which configures tenant data for tenants in a multi-tenant system, is executed using a graph operations engine. The graph operations engine receives and parses a graph language request that includes a list of tenants and a definition of data operations. The set of data operations of the definition are executed on a tree of data operation nodes comprising a plurality of leaf nodes and a root node. Executing the data operations is based on graph language actions (e.g., composition, transformation, and aggregation) that support asynchronously returning results data associated with configuring the tenant data. Executing the data operations of the definition causes generation of results data (e.g., root node results or leaf node results) and configuration of the tenant data in the multi-tenant system.
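A minimal sketch of the leaf/root execution pattern above, with results gathered asynchronously: each tenant maps to a leaf node, and the root aggregates the leaf results. The node structure, operation shape, and names are assumptions, not the patent's API:

```python
import asyncio

async def leaf_execute(tenant, operation):
    # Stand-in for an asynchronous per-tenant data operation.
    await asyncio.sleep(0)
    return {"tenant": tenant, "result": operation(tenant)}

async def run_request(tenants, operation):
    # Fan the operation out to one leaf node per tenant, gather the
    # leaf results asynchronously, and aggregate them at the root node.
    leaves = [leaf_execute(t, operation) for t in tenants]
    leaf_results = await asyncio.gather(*leaves)
    return {"root": len(leaf_results), "leaves": leaf_results}

out = asyncio.run(run_request(["tenantA", "tenantB"], lambda t: t.upper()))
```

The `asyncio.gather` call is what allows results to be returned as the leaves complete rather than serially per tenant.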
OPTIMIZED TENANT SCHEMA GENERATION
A system includes a memory and a processor, where the processor is in communication with the memory. The processor is configured to receive a request to create a tenant schema within a database, where the database includes one or more tenant schemas associated with one or more tenants. The tenant schema associated with a tenant of the one or more tenants is created, where the tenant schema is empty. It is determined whether the database includes a template schema. Upon determining that the template schema exists, a command is sent to the database to copy the template schema to the tenant schema associated with the tenant.
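The control flow above can be sketched as follows. The in-memory "database" dict and the naming scheme are illustrative assumptions; a real implementation would issue DDL (schema creation and template cloning) against the database server:

```python
# In-memory stand-in for a database holding per-tenant schemas plus
# an optional template schema. Contents are illustrative.
database = {"template": {"tables": ["users", "settings"]}}

def create_tenant_schema(tenant_id: str) -> dict:
    schema_name = f"tenant_{tenant_id}"
    database[schema_name] = {"tables": []}   # create the empty tenant schema
    if "template" in database:               # does a template schema exist?
        # Copy the template's objects into the new tenant schema, avoiding
        # per-object creation statements for every new tenant.
        database[schema_name]["tables"] = list(database["template"]["tables"])
    return database[schema_name]

schema = create_tenant_schema("42")
```

The optimization claimed is the single copy-from-template step: the tenant schema is populated in one command instead of being built object by object.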
Event prediction
A system includes at least one connector configured to gather at least partially incomplete data from at least one data source. The gathered data is communicated to a model definition module that converts at least a subset of the gathered data into a prediction model in accordance with a received definition. A prediction module receives a prediction query and, in response, supplies an event prediction based on the prediction model.
Executing untrusted commands from a distributed execution model
Systems and methods are disclosed for generating a distributed execution model with untrusted commands. The system can receive a query, and process the query to identify the untrusted commands. The system can use data associated with the untrusted command to identify one or more files associated with the untrusted command. Based on the files, the system can generate a data structure and include one or more identifiers associated with the data structure in the distributed execution model. The system can distribute the distributed execution model to one or more nodes in a distributed computing environment for execution.
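The steps above can be sketched roughly as follows: identify the untrusted commands in a query, resolve the files each one needs, derive a data-structure identifier from those files, and embed it in the execution model sent to the nodes. The command names, file lookup, and hash-based identifier are all assumptions for illustration:

```python
import hashlib

UNTRUSTED = {"extcmd"}  # commands the engine does not natively trust
COMMAND_FILES = {"extcmd": ["extcmd.py", "extcmd.cfg"]}  # illustrative lookup

def build_execution_model(query_commands):
    model = {"commands": [], "bundles": {}}
    for cmd in query_commands:
        if cmd in UNTRUSTED:
            # Identify the files associated with the untrusted command and
            # derive a stable identifier for the resulting data structure.
            files = COMMAND_FILES[cmd]
            bundle_id = hashlib.sha256("|".join(files).encode()).hexdigest()[:12]
            model["bundles"][bundle_id] = files
            model["commands"].append({"cmd": cmd, "bundle": bundle_id})
        else:
            model["commands"].append({"cmd": cmd})
    return model  # distributed as-is to worker nodes for execution

model = build_execution_model(["search", "extcmd"])
```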
SYSTEM AND METHOD FOR SQL SERVER RESOURCES AND PERMISSIONS ANALYSIS IN IDENTITY MANAGEMENT SYSTEMS
Embodiments as disclosed allow identity management with respect to SQL databases by discovering substantially all database objects and their entitlements and associating them with corresponding identities within the identity management system, thus providing insight into such SQL server entitlements and their associated identities, even across multiple SQL servers within an enterprise environment.