Patent classifications
G06F16/24566
Execution of a genetic algorithm with variable evolutionary weights of topological parameters for neural network generation and training
A method includes generating, by a processor of a computing device, an output set of models corresponding to a first epoch of a genetic algorithm and based on an input set of models of the first epoch. The input set and the output set include data representative of a neural network. The method includes determining a particular model of the output set based on a fitness function. A first topological parameter of a first model of the input set is modified to generate the particular model of the output set. The method includes modifying a probability that the first topological parameter is to be changed by a genetic operation during a second epoch of the genetic algorithm that is subsequent to the first epoch. The method includes generating a second output set of models corresponding to the second epoch and based on the output set and the modified probability.
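The adaptive-mutation idea above can be sketched as a toy loop in which each topological parameter carries its own mutation probability, and the parameter whose change produced the fittest model becomes more likely to mutate in the next epoch. This is a minimal illustration, not the patented method: the parameter names, fitness function, and update rule are all hypothetical.

```python
import random

# Hypothetical topological parameters of a candidate neural network.
TOPO_PARAMS = ["num_layers", "nodes_per_layer"]

def fitness(model):
    # Toy stand-in for a real fitness function (illustrative only).
    return model["num_layers"] - 0.1 * model["nodes_per_layer"]

def mutate(model, probs, rng):
    """Mutate each topological parameter with its own probability."""
    child = dict(model)
    changed = None
    for p in TOPO_PARAMS:
        if rng.random() < probs[p]:
            child[p] = max(1, child[p] + rng.choice([-1, 1]))
            changed = p
    return child, changed

def run_epochs(n_epochs=5, seed=0):
    rng = random.Random(seed)
    probs = {p: 0.3 for p in TOPO_PARAMS}   # initial mutation probabilities
    population = [{"num_layers": 2, "nodes_per_layer": 8} for _ in range(10)]
    for _ in range(n_epochs):
        offspring = [mutate(m, probs, rng) for m in population]
        # Identify the fittest child and which parameter change produced it.
        _, best_param = max(offspring, key=lambda mc: fitness(mc[0]))
        if best_param is not None:
            # Boost that parameter's mutation probability for the next epoch.
            probs[best_param] = min(0.9, probs[best_param] + 0.1)
        population = [m for m, _ in offspring]
    return probs
```

The probabilities evolve with the search itself, so epochs that reward a particular topological change steer later epochs toward exploring it.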
Execution of a genetic algorithm having variable epoch size with selective execution of a training algorithm
A method includes generating, by a processor of a computing device, a first plurality of models (including a first number of models) based on a genetic algorithm and corresponding to a first epoch of the genetic algorithm. The method includes determining whether to modify an epoch size for the genetic algorithm during a second epoch of the genetic algorithm based on a convergence metric associated with at least one epoch that is prior to the second epoch. The second epoch is subsequent to the first epoch. The method further includes, based on determining to modify the epoch size, generating a second plurality of models (including a second number of models that is different from the first number) based on the genetic algorithm and corresponding to the second epoch. Each model of the first plurality of models and the second plurality of models includes data representative of a neural network.
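Reading "epoch size" as the number of models generated per epoch, the decision step can be sketched as a small heuristic: compare a convergence metric across recent epochs and shrink or grow the next epoch accordingly. The thresholds and factors below are illustrative assumptions, not values from the patent.

```python
def next_epoch_size(current_size, fitness_history, shrink=0.5, grow=1.5, tol=1e-3):
    """Pick the number of models for the next epoch from a convergence metric.

    Hypothetical heuristic: if the best fitness improved by less than `tol`
    over the previous epoch, the search may be converging, so shrink the
    epoch; otherwise grow it to keep exploring.
    """
    if len(fitness_history) < 2:
        return current_size                     # not enough history yet
    improvement = fitness_history[-1] - fitness_history[-2]
    factor = shrink if improvement < tol else grow
    return max(2, int(current_size * factor))   # never drop below a pair
```

For example, `next_epoch_size(100, [1.0, 1.0005])` shrinks the epoch to 50 models, while `next_epoch_size(100, [1.0, 2.0])` grows it to 150.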
Techniques for optimizing graph database queries
Examples described herein generally relate to executing a received graph database query. The received graph database query can be converted into a recursive common table expression (CTE). Multiple alternative processes for executing the recursive CTE can be generated based on the recursive CTE. A cost associated with each of the multiple alternative processes can be determined. One of the multiple alternative processes can be converted into a multi-step sequence based on the associated cost. The multi-step sequence can be executed on a database to retrieve a set of results in response to the received graph database query.
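The first step above, converting a graph traversal into a recursive common table expression, can be shown concretely with SQLite. The edge table, start node, and depth bound are illustrative; cost-based selection among alternative execution processes is not modeled here.

```python
import sqlite3

# Toy edge list representing a small directed graph.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE edges(src TEXT, dst TEXT);
    INSERT INTO edges VALUES ('a','b'), ('b','c'), ('c','d');
""")

# A reachability query over the graph, expressed as a recursive CTE.
rows = conn.execute("""
    WITH RECURSIVE reachable(node, depth) AS (
        SELECT 'a', 0                            -- anchor: the start node
        UNION ALL
        SELECT e.dst, r.depth + 1                -- recursive step: follow edges
        FROM reachable r JOIN edges e ON e.src = r.node
        WHERE r.depth < 3                        -- bound the traversal
    )
    SELECT node, depth FROM reachable;
""").fetchall()
print(rows)   # [('a', 0), ('b', 1), ('c', 2), ('d', 3)]
```

Once the traversal is a single relational expression like this, an optimizer can cost alternative join orders and evaluation strategies for it just as it would for any other query.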
Recursive data traversal model
One or more embodiments interpret a configuration graph to efficiently and optimally construct requests and fetch data from a datastore. The values of objects of a requested data type are used to generate additional queries for pre-fetching data from the datastore. Specifically, the values are used to query for and retrieve a corresponding subset of objects of another, related data type. Recursively querying for and retrieving objects of related data types based on already retrieved objects builds a data cache of relevant objects. The cached, relevant objects may be useful in subsequent queries that are likely to follow the initial query.
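The recursive pre-fetching loop can be sketched as follows. The configuration graph maps each data type to the related types worth pre-fetching, and values on already-retrieved objects become the keys of the next query. All type names, foreign-key fields, and the in-memory datastore are hypothetical stand-ins.

```python
# Configuration graph: data type -> related types to pre-fetch.
CONFIG = {"order": ["customer"], "customer": ["address"], "address": []}

# Toy datastore: type -> id -> object (objects carry foreign-key values).
DATASTORE = {
    "order":    {1: {"customer_id": 10}},
    "customer": {10: {"address_id": 100}},
    "address":  {100: {"city": "Oslo"}},
}

# Which field on an object holds the key of each related type.
FK = {"customer": "customer_id", "address": "address_id"}

def fetch(dtype, ids, cache):
    """Fetch objects of `dtype`, then recursively pre-fetch related types."""
    objs = {i: DATASTORE[dtype][i] for i in ids if i in DATASTORE[dtype]}
    cache.setdefault(dtype, {}).update(objs)
    for related in CONFIG[dtype]:
        # Values on the just-fetched objects seed the next query.
        related_ids = {o[FK[related]] for o in objs.values()}
        fetch(related, related_ids, cache)
    return cache

cache = fetch("order", {1}, {})
```

A single request for order 1 leaves the customer and address it references already cached, so a likely follow-up query can be served without another round trip.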
GENERATING A DECISION TREE MODEL DURING QUERY EXECUTION VIA A RELATIONAL DATABASE SYSTEM
A database system is operable to execute a request to generate a decision tree model. A training set of rows is determined based on accessing a plurality of rows of a relational database table of a relational database. First query data is generated for execution based on the training set of rows. First query output is generated based on executing the first query data. A first portion of the decision tree model data is built based on the first query output. Additional query data is generated for execution based on the first query output. Additional query output is generated based on executing the additional query data. An additional portion of the decision tree model data is built based on the additional query output. Model output for the decision tree model is generated via processing input data in conjunction with processing the decision tree model data.
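The query-then-build rhythm described above can be sketched with SQLite: a first query summarizes the class distribution around a candidate split, and its output drives additional per-branch queries that fill in the child nodes. The table, columns, fixed threshold, and majority-vote leaves are illustrative simplifications.

```python
import sqlite3

# Toy training table with one feature and a binary label.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE train(x REAL, label INTEGER);
    INSERT INTO train VALUES (0.1,0),(0.2,0),(0.8,1),(0.9,1);
""")

threshold = 0.5
# First query: class distribution on each side of a candidate split.
dist = conn.execute(
    "SELECT x < ? AS side, label, COUNT(*) AS n FROM train GROUP BY side, label",
    (threshold,),
).fetchall()

# Build the root from the first query's output; then, for each side seen
# in that output, issue an additional query to label the child node.
tree = {"split": ("x", threshold), "children": {}}
for side in {row[0] for row in dist}:
    (majority,) = conn.execute(
        "SELECT label FROM train WHERE (x < ?) = ? "
        "GROUP BY label ORDER BY COUNT(*) DESC LIMIT 1",
        (threshold, side),
    ).fetchone()
    tree["children"][side] = majority
print(tree)
```

A real implementation would recurse, generating further query data from each branch's output until the tree is complete, but the data flow is the same: query output in, model portion out.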
HYPER-FOLDING INFORMATION IN A UNIFORM INTERACTION FEED
Embodiments of the present invention provide a computer system, a computer program product, and a method that comprises generating a context data tree for each variable in a plurality of variables based on a received input; determining data folding points for each generated context data tree; conducting a hyper-folding process on the determined data folding points in each context data tree, wherein the hyper-folding process converts each generated context data tree into a single data tree; and automatically loading the single data tree into an application.
EXECUTING A QUERY EXPRESSION VIA A DATABASE SYSTEM BY PROCESSING A PRIOR ROW INDEX IDENTIFIER
A query processing system is operable to receive a query expression that includes a call to a computing window function indicating an expression that includes a column reference that includes a prior row index identifier. The computing window function is executed based on accessing an ordered set of rows of the database indicated in the call to the computing window function. An output column is generated based on generating output for each row of a set of rows in the ordered set of rows by evaluating the expression based on performing at least one operation upon a column value, determined based on applying the column reference, of a previous row in the ordered set of rows. A query resultant for the query expression is generated based on the output column generated for the rows in the ordered set of rows.
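The prior-row column reference behaves much like standard SQL's `LAG` window function, which can stand in for it in a sketch. The table and column names below are illustrative.

```python
import sqlite3

# Toy ordered data: one price per day.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ticks(day INTEGER, price REAL);
    INSERT INTO ticks VALUES (1,10.0),(2,12.0),(3,11.0);
""")

# For each row, the expression combines the current row's price with the
# prior row's price in the ordered set, yielding a day-over-day delta.
rows = conn.execute("""
    SELECT day,
           price - LAG(price) OVER (ORDER BY day) AS delta
    FROM ticks ORDER BY day;
""").fetchall()
print(rows)   # [(1, None), (2, 2.0), (3, -1.0)]
```

The first row has no prior row, so its output is NULL; every other output value is computed from a column value of the previous row in the ordered set, exactly the access pattern the abstract describes.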
NEAR-ZERO DOWNTIME RELOCATION OF A PLUGGABLE DATABASE ACROSS CONTAINER DATABASES
Embodiments minimize downtime involved in moving a pluggable database (PDB) between container databases (CDBs) by allowing read-write access to the PDB through most of the move operation, and by transparently forwarding connection requests for the PDB from the source CDB to the destination CDB. The files of a source PDB are copied from a source CDB to a destination CDB, during which the source PDB may remain in read-write mode. The source PDB is then closed to write operations so that changes to the source PDB cease. A round of recovery is performed on the destination copy (the PDB clone), which applies all changes made to the source PDB during the copy operation, and the PDB clone is then opened for read and write operations. Forwarding information is registered with the source location and is used to automatically forward connection requests, received at the source location for the moved PDB, to the destination location.
Unified data processing across streaming and indexed data sets
Systems and methods are described for unified processing of indexed and streaming data. A system enables users to query indexed data or specify processing pipelines to be applied to streaming data. In some instances, a user may specify a query intended to be run against indexed data, but may specify criteria that includes not-yet-indexed data (e.g., a future time frame). The system may convert the query into a data processing pipeline applied to not-yet-indexed data, thus increasing the efficiency of the system. Similarly, in some instances, a user may specify a data processing pipeline to be applied to a data stream, but specify criteria including data items outside the data stream. For example, a user may wish to apply the pipeline retroactively, to data items that have already exited the data stream. The system can convert the pipeline into a query against indexed data to satisfy the user's processing requirements.
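The routing decision at the heart of this can be sketched as a planner that splits a request's time range at the indexing boundary: the past portion runs as a query against the index, the future portion becomes a pipeline over the stream. The function and engine names are hypothetical, and `now` stands in for the boundary between indexed and not-yet-indexed data.

```python
def plan(query_start, query_end, now):
    """Route a request over [query_start, query_end) to the right engine(s).

    Illustrative sketch: time before `now` is assumed indexed; time at or
    after `now` has not yet been indexed and must be served by a
    streaming pipeline.
    """
    plans = []
    if query_start < now:
        # Historical portion: run as a query against indexed data.
        plans.append(("index", query_start, min(query_end, now)))
    if query_end > now:
        # Future portion: convert into a pipeline over the stream.
        plans.append(("stream", max(query_start, now), query_end))
    return plans
```

A request spanning the boundary, such as `plan(0, 10, 5)`, yields both an index query over `[0, 5)` and a streaming pipeline over `[5, 10)`, so the user sees one unified result regardless of where the data currently lives.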
Time-based query processing in analytics computing system
Various examples are directed to systems and methods for processing queries against a process model. An analytics computing system may access query data describing a query to the analytics computing system. The analytics computing system may access process model code that comprises first function code for evaluating a function having a first input and a second input. The first function code may indicate that the first input is at a first time context corresponding to a first discrete time period, and that the second input is at a second time context corresponding to a second discrete time period adjacent to the first discrete time period. The analytics computing system may execute the process model code using a value for the first input at the first discrete time period and a value for the second input at the second discrete time period.
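One way to picture a function whose two inputs sit at adjacent discrete time periods is a recurrence such as an inventory balance, where each period's value depends on the current period's flows and the prior period's result. This reading of the time contexts, and every name below, is an illustrative assumption.

```python
def run_model(inflow, outflow, initial=0):
    """Evaluate inventory[t] = inventory[t-1] + inflow[t] - outflow[t].

    A hypothetical process-model function: one input (net flow) is at the
    current discrete time period, the other (prior inventory) is at the
    adjacent preceding period.
    """
    inventory = [initial]
    for t in range(len(inflow)):
        # Combine a value at period t with a value at the adjacent period t-1.
        inventory.append(inventory[-1] + inflow[t] - outflow[t])
    return inventory[1:]
```

For instance, `run_model([5, 3], [2, 1])` evaluates the function period by period, carrying each period's result into the next as the adjacent-time-context input.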