Patent classifications
G06F16/2386
COMPUTERIZED SYSTEM FOR PROGRAMMATIC MAPPING OF RECORD LINEAGE BASED ON DATA FLOW THROUGH DATA STORAGE COMPONENTS
An apparatus includes processing circuitry and a memory storing instructions that, when executed by the processing circuitry, cause the apparatus to identify a plurality of components and a data flow that interconnects the plurality of components. The instructions cause the apparatus to determine a lineage of a record generated by the plurality of components based on the data flow. The lineage indicates the data flow from a first component to a second component of the plurality of components to generate the record. The instructions cause the apparatus to present, to a user, a visual depiction of the lineage of the record. The visual depiction indicates the data flow of a query through at least the first component and the second component of the plurality of components to generate the record.
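The lineage determination described above can be sketched as a walk over a data-flow graph. The component names, the dictionary representation of the flow, and the traversal below are illustrative assumptions, not details from the abstract:

```python
def trace_lineage(data_flow, first_component, record_component):
    """Follow data_flow edges from the first component until the
    component that generates the record is reached; return the chain
    of components, which is what the visual depiction would render."""
    path = [first_component]
    seen = {first_component}
    node = first_component
    while node != record_component:
        node = data_flow.get(node)
        if node is None or node in seen:
            return None  # no lineage through these components
        seen.add(node)
        path.append(node)
    return path

# A query flows ingest -> transform -> aggregate to generate the record.
flow = {"ingest": "transform", "transform": "aggregate"}
print(trace_lineage(flow, "ingest", "aggregate"))
```

A real implementation would operate on a general graph with branching and merging flows; the linear chain here only illustrates the first-to-second-component path the abstract describes.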
SYSTEMS AND METHODS FOR DETERMINING THE SHAREABILITY OF VALUES OF NODE PROFILES
The present disclosure relates to determining the shareability of values of node profiles. Record objects and electronic activities of a system of record corresponding to a data source provider may be accessed. Each record object may correspond to a record object type and have one or more object field-value pairs. Node profiles may be maintained. Values of fields of a predetermined field type that are contributed by fewer than a predetermined threshold number of data source providers may be identified. A restriction tag used to restrict populating other node profiles with such a value may be generated. Provision of the value to a second data source provider may be restricted.
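A minimal sketch of the thresholding step described above, assuming a mapping from field-value pairs to the set of contributing providers (the data shapes and tag strings are hypothetical):

```python
def tag_values(value_providers, threshold):
    """value_providers maps (field, value) -> set of data source
    providers that contributed the value. A value backed by fewer than
    `threshold` providers gets a restriction tag, blocking its use
    when populating other node profiles."""
    return {key: ("restricted" if len(providers) < threshold else "shareable")
            for key, providers in value_providers.items()}

providers = {
    ("title", "VP Sales"): {"acme", "globex"},  # corroborated by two providers
    ("phone", "555-0100"): {"acme"},            # single-provider value
}
print(tag_values(providers, threshold=2))
```

The intuition is that a value seen from only one provider may be proprietary to that provider, so it is tagged rather than propagated.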
Data Transfer Resiliency During Bulk To Streaming Transition
An indication to migrate requested data objects stored in a source database environment to a destination database environment is received. Some of the data objects have many-to-one relationships with other data objects in the source database environment. At least one snapshot file generated by the source database environment is transferred to the destination database environment in bulk transfer mode. Subsequent incoming data received during bulk transfer mode, after the indication, is stored in a temporary table. Upon completion of migration of the requested data objects, the system transitions from bulk transfer mode to streaming mode. The subsequent incoming data from the temporary table is transferred to the destination database environment in response to the transition to streaming mode. Additional data received after the temporary table is empty is transferred from the source database environment to the destination database environment without use of the temporary table.
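The buffering behavior during the bulk-to-streaming transition can be modeled as below. The class and method names are illustrative, and the in-memory lists stand in for the snapshot file, temporary table, and destination environment:

```python
class BulkToStreamingMigration:
    """Toy model of the transition: writes arriving during bulk
    transfer are parked in a temporary table and drained once the
    system switches to streaming mode."""

    def __init__(self):
        self.mode = "bulk"
        self.temp_table = []
        self.destination = []

    def incoming(self, row):
        if self.mode == "bulk":
            self.temp_table.append(row)      # defer during bulk transfer
        else:
            self.destination.append(row)     # stream directly

    def complete_bulk(self, snapshot_rows):
        self.destination.extend(snapshot_rows)  # snapshot applied in bulk
        self.mode = "streaming"
        while self.temp_table:                  # drain deferred writes
            self.destination.append(self.temp_table.pop(0))

m = BulkToStreamingMigration()
m.incoming("row-3")                  # arrives mid-transfer, buffered
m.complete_bulk(["row-1", "row-2"])  # snapshot lands, mode flips
m.incoming("row-4")                  # streamed without the temp table
print(m.destination)
```

The key resiliency property is that no write arriving during the bulk phase is lost: it is either in the snapshot or in the temporary table when the mode flips.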
Data digital decoupling of legacy systems
Methods, systems, and computer-readable storage media for: determining, for each query in a set of high-cost queries, an access pattern to data objects accessed by the query in a legacy system; determining, for each query in a set of low-cost queries, an access pattern to each data object accessed by the query in the legacy system; providing a first set of design patterns representative of first data objects of the legacy system to be offloaded to a target system and a second set of design patterns representative of second data objects of the legacy system to remain on the legacy system; and executing at least one design pattern of the first set of design patterns to offload one or more first data objects to the target system.
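One simple way to partition data objects from query access patterns is sketched below. The offload-everything-touched-by-high-cost-queries heuristic is an assumption for illustration; the abstract does not state the actual partitioning criterion:

```python
def partition_data_objects(high_cost_access, low_cost_access):
    """Each argument maps a query to the set of data objects it
    touches in the legacy system. As an assumed heuristic, objects
    reached by any high-cost query are offloaded to the target
    system; objects reached only by low-cost queries remain."""
    offload = set().union(*high_cost_access.values())
    remain = set().union(*low_cost_access.values()) - offload
    return offload, remain

high = {"q1": {"orders", "invoices"}}
low = {"q2": {"invoices", "customers"}}
print(partition_data_objects(high, low))
```

The two returned sets correspond to the first and second sets of design patterns: one drives the offload, the other documents what stays behind.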
METHOD OF PROCESSING DATA IN A DATABASE BY SIMULTANEOUSLY PROCESSING ROW DATA OF KEY PAIRS WHICH INCLUDE A KEY OF A FIRST TABLE IN THE DATABASE, AND A MATCHING KEY AND A UNIQUE ROW OF A SECOND TABLE IN THE DATABASE
A method is provided for processing data in a database, wherein the database includes a first table and a second table. Each of the tables has a plurality of rows of data, wherein a key identifies one or more rows of data in the tables. There is a plurality of matching key pairs among the tables. Each key pair includes a key of a first table, and a matching key and a unique row of a second table. In operation, the method involves simultaneously processing row data of key pairs associated with a first row of the second table, and then simultaneously processing row data of key pairs associated with any remaining rows of the second table in sequential row order.
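The per-row grouping of key pairs can be sketched as follows, with a batched worker call standing in for simultaneous processing (the data shapes and names are assumptions):

```python
def process_by_second_table_row(second_table_rows, key_pairs, worker):
    """second_table_rows: second-table keys in sequential row order.
    key_pairs maps a second-table key to its matching first-table
    keys. All key pairs for one second-table row are handed to
    `worker` in a single batch, standing in for simultaneous
    processing, before the next row is visited."""
    results = []
    for row_key in second_table_rows:
        batch = [(first_key, row_key) for first_key in key_pairs.get(row_key, [])]
        results.append(worker(batch))
    return results

pairs = {"r1": ["k1", "k2"], "r2": ["k3"]}
print(process_by_second_table_row(["r1", "r2"], pairs, worker=len))
```

In a real system the worker would fan the batch out to parallel executors; the ordering guarantee is only across second-table rows, not within a batch.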
METHOD OF BATCH PROCESSING DATA THAT IS STORED IN MULTIPLE TABLES AS A PLURALITY OF ROWS OF DATA BY READING OUT AND BATCH PROCESSING DATA FROM ONLY A PORTION OF A ROW FROM EACH OF TABLES THAT IS TO BE USED IN BATCH PROCESSING LOGIC
A method is provided for batch processing data that is stored in multiple tables and is organized in the tables as a plurality of rows of data and a plurality of columns. Each row is identified by a key, and each column represents a field having a unique field name. The batch processing is performed using batch processing logic. In operation, the batch processing is performed by reading out data from only a portion of a row from each of the tables that is to be used for the batch processing logic by specifying the key of the row from the respective table, and the unique field names in the row of the respective table to be used for the batch processing logic. The remaining portion of the data in the row of the respective table is not read out from the row. Batch processing is then performed on the read out data using the batch processing logic. The batch processed data is then written back into the same row of the respective table that the data was read out from. The resultant row of each of the tables includes the batch processed data, and the remaining portion of the data in the row in each of the tables that was not read out from the row.
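The partial-row read-modify-write cycle described above can be sketched with in-memory dictionaries standing in for tables (field names and the doubling logic are illustrative assumptions):

```python
def batch_process(table, needed_fields, logic):
    """Read only the fields named in needed_fields from each row,
    run the batch processing logic on that subset, and write the
    result back into the same row; fields that were never read out
    are left untouched."""
    for key, row in table.items():
        subset = {f: row[f] for f in needed_fields}  # partial-row read
        row.update(logic(subset))                    # write back in place

table = {
    "row-1": {"balance": 100, "note": "unchanged"},
    "row-2": {"balance": 250, "note": "also unchanged"},
}
batch_process(table, ["balance"], lambda r: {"balance": r["balance"] * 2})
print(table["row-1"])
```

The point of reading only the named fields is to reduce I/O: the untouched remainder of each row never leaves storage, yet the resultant row still combines the processed and unprocessed portions.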
TUNING EXTERNAL INVOCATIONS UTILIZING WEIGHT-BASED PARAMETER RESAMPLING
Techniques are disclosed for tuning external invocations utilizing weight-based parameter resampling. In one example, a computer system determines a plurality of samples, each sample being associated with a parameter value of a plurality of potential parameter values of a particular parameter. The computer system assigns weights to each of the parameter values, and then selects a first sample for processing via a first external invocation based on a weight of the parameter value of the first sample. The computer system then determines feedback data associated with a level of performance of the first external invocation. The computer system adjusts the weights of the parameter values of the particular parameter based on the feedback data. The computer system then selects a second sample of the plurality of samples to be processed via execution of a second external invocation based on the adjustment of weights of the parameter values.
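The weighted selection and feedback-driven adjustment can be sketched as below. The update rule (an exponential moving average toward a 0-to-1 performance score) and the parameter names are assumptions; the abstract does not specify the weighting scheme:

```python
import random

def pick_sample(param_values, weights, rng=random):
    """Choose the next parameter value in proportion to its weight."""
    return rng.choices(param_values, weights=[weights[v] for v in param_values])[0]

def adjust_weight(weights, param_value, performance, rate=0.5):
    """Move the invoked parameter value's weight toward the observed
    performance score fed back from the external invocation."""
    weights[param_value] = (1 - rate) * weights[param_value] + rate * performance
    return weights

weights = {"timeout=1s": 1.0, "timeout=5s": 1.0}
adjust_weight(weights, "timeout=1s", performance=0.2)  # slow invocation observed
print(weights)
```

Over repeated invocations, poorly performing parameter values lose weight and are resampled less often, which is the tuning loop the abstract describes.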
AUTOMATED DATA SET PROCESSING AND VISUALIZATION FOR MULTI-MODULE PRICING INSIGHTS GRAPHICAL USER INTERFACE
A method for data ingestion for a data visualization platform comprises receiving a plurality of data sets, generating and storing a merged data set based on the plurality of received data sets, and receiving, via a graphical user interface, a first input comprising an instruction to perform a data standardization operation. The method comprises, in response to receiving the first input, applying a data standardization operation to the merged data set to process the merged data set to generate a standardized data set. The method comprises receiving, via the interface, a second input comprising an instruction to perform a data analytics operation, and responsively applying the data analytics operation to the standardized data set to generate insights data. The method includes receiving, via the interface, a third input comprising an instruction to perform a data visualization operation, and responsively generating one or more data visualizations based on the insights data.
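The merge, standardize, and analytics stages can be sketched as a chained pipeline. The standardization (normalizing column names) and the analytics (an average price) are toy placeholders for whatever operations the interface would actually trigger:

```python
def ingest_pipeline(data_sets):
    """Merge the received data sets, standardize column names, and
    derive insights data; each stage consumes the previous stage's
    output, mirroring the three user-triggered operations."""
    merged = [row for ds in data_sets for row in ds]
    standardized = [{k.strip().lower(): v for k, v in row.items()} for row in merged]
    prices = [row["price"] for row in standardized if "price" in row]
    insights = {"rows": len(standardized), "avg_price": sum(prices) / len(prices)}
    return insights

# Inconsistent column labels across sources are unified by standardization.
sets = [[{"Price": 10}, {"Price ": 30}], [{"price": 20}]]
print(ingest_pipeline(sets))
```

In the platform described, each stage would instead run in response to a separate GUI input, with the visualization stage rendering charts from the insights data.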
SYSTEMS AND PROCESSES FOR ITERATIVELY TRAINING A REMUNERATION TRAINING MODULE
Systems and processes for iteratively training a training module are described herein. In various embodiments, the process includes: (1) retrieving bulk data comprising a plurality of raw position data elements from a plurality of data sources; (2) transforming the raw position data elements according to preconfigured classification guidelines to generate standardized position data element groups; (3) training a raw training module by iteratively processing each of the standardized position data element groups through a raw training module to generate respective output remuneration values; (4) updating one or more emphasis guidelines based on a comparison of the respective output remuneration values; (5) processing an input position data element set with a trained training module to generate a display remuneration value; and (6) modifying a display based on the display remuneration value.
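The iterative loop of steps (3) and (4) above can be sketched with a toy linear model. The feature representation, the per-group emphasis weight, and the update rule are all illustrative assumptions, not the patented training procedure:

```python
def train_remuneration_module(groups, emphasis, epochs=3, rate=0.1):
    """groups maps a position group name to (features, target
    remuneration). Each epoch processes every standardized group,
    compares its output remuneration value with the target, and
    nudges that group's emphasis guideline accordingly."""
    for _ in range(epochs):
        for name, (features, target) in groups.items():
            output = sum(features) * emphasis[name]
            emphasis[name] += rate * (target - output) / max(sum(features), 1)
    return emphasis

groups = {"engineer": ([2.0, 2.0], 8.0)}
emphasis = train_remuneration_module(groups, {"engineer": 1.0})
print(emphasis["engineer"])  # drifts toward the weight that fits the target
```

The trained emphasis values would then be applied to a new input position data element set to produce the display remuneration value of step (5).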
SYSTEMS AND METHODS FOR GENERATING A FILTERED DATA SET
The present disclosure relates to generating a filtered data set. Data from a plurality of systems of record of a plurality of data source providers may be accessed. A master data set generated using the data accessed from the plurality of systems of record may be maintained. Restriction policies including one or more rules for restricting sharing of data may be maintained. A filtered data set may be generated for a data source provider responsive to an application of restriction policies of other data source providers to the master data set. The filtered data set may be provisioned.
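The application of other providers' restriction policies to the master data set can be sketched as a predicate filter. The record shape, the `owner` tag, and the predicate signature are assumptions made for illustration:

```python
def filtered_data_set(master, restriction_policies, requester):
    """master: records tagged with the 'owner' provider that supplied
    them. restriction_policies maps an owner to a predicate that
    returns True when a record must not be shared with the requester.
    A record is kept unless another provider's policy restricts it."""
    kept = []
    for record in master:
        owner = record["owner"]
        rule = restriction_policies.get(owner)
        if owner != requester and rule is not None and rule(record, requester):
            continue  # restricted by the owning provider's policy
        kept.append(record)
    return kept

master = [
    {"owner": "acme", "field": "email", "value": "a@acme.test"},
    {"owner": "globex", "field": "email", "value": "b@globex.test"},
]
policies = {"globex": lambda record, requester: record["field"] == "email"}
print(filtered_data_set(master, policies, requester="acme"))
```

Each data source provider thus receives a view of the master data set with every other provider's restricted records removed, while its own contributions always remain visible to it.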