Patent classifications: G06F8/4452
System and method for inferencing of data transformations through pattern decomposition
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML), for use in managing a flow of data (dataflow, DF), and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system can provide a service to recommend actions and transformations on input data, based on patterns identified from the functional decomposition of a data flow for a software application, including determining possible transformations of the data flow in subsequent applications. Data flows can be decomposed into a model describing transformations of data, predicates and business rules applied to the data, and attributes used in the data flows.
System and method for metadata-driven external interface generation of application programming interfaces
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML), for use in managing a flow of data (dataflow, DF), and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system provides a programmatic interface, referred to herein in some embodiments as a foreign function interface, by which a user or third party can define a service, functional and business types, semantic actions, and patterns or predefined complex data flows based on functional and business types, in a declarative manner, to extend the functionality of the system.
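A declarative registration interface of this kind could be pictured as a registry that third parties populate with types and semantic actions over those types. The `register_type`/`register_action` helpers and the `Order`/`Revenue` example are hypothetical names, not the claimed foreign function interface:

```python
REGISTRY = {"types": {}, "actions": {}}

def register_type(name, attributes):
    """Declare a functional/business type by name and attribute schema."""
    REGISTRY["types"][name] = attributes

def register_action(name, input_type, output_type, fn):
    """Declare a semantic action mapping one registered type to another."""
    assert input_type in REGISTRY["types"] and output_type in REGISTRY["types"]
    REGISTRY["actions"][name] = {"in": input_type, "out": output_type, "fn": fn}

# A third party extends the system declaratively:
register_type("Order", {"id": int, "amount": float})
register_type("Revenue", {"total": float})
register_action("summarize", "Order", "Revenue",
                lambda orders: {"total": sum(o["amount"] for o in orders)})

action = REGISTRY["actions"]["summarize"]
print(action["fn"]([{"id": 1, "amount": 9.5}, {"id": 2, "amount": 0.5}]))  # → {'total': 10.0}
```

Because registrations name their input and output types, the system can later match registered actions against data whose type it has inferred.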
Generating a platform-agnostic data pipeline via a low code transformation layer systems and methods
Systems and methods for generating a platform-agnostic data pipeline via a low code transformation layer are disclosed. The system receives, via a GUI, one or more user selections of (i) nodes and (ii) links between the nodes, which together indicate a data pipeline architecture for the transfer, management, and flow of data. In response to receiving a user selection to implement the data pipeline, the system automatically identifies or generates a set of code portions based on one or more software objects (e.g., JSON objects) associated with the user selections indicating the data pipeline architecture. The system then identifies a platform identifier associated with a remote server and, using a transformation component, generates a set of executable instructions (e.g., a script, executable program, or other file) associated with the data pipeline architecture. The system then provides the executable instructions to the remote server to host the data pipeline.
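One minimal way to picture the node-and-link JSON, the platform identifier, and the transformation step is a topological walk over the node graph that emits one platform-specific line per node. The JSON shape, the `TEMPLATES` table, and the `spark` snippet strings are assumptions for illustration, not the patented format:

```python
import json
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical software object produced by the GUI's node/link selections.
PIPELINE = json.loads("""{
  "nodes": [{"id": "src", "op": "read_csv"},
            {"id": "clean", "op": "drop_nulls"},
            {"id": "sink", "op": "write_table"}],
  "links": [{"from": "src", "to": "clean"}, {"from": "clean", "to": "sink"}]
}""")

# Per-platform code templates, keyed by a platform identifier.
TEMPLATES = {"spark": {"read_csv": "df = spark.read.csv(path)",
                       "drop_nulls": "df = df.dropna()",
                       "write_table": "df.write.saveAsTable(target)"}}

def generate(pipeline, platform):
    """Order nodes by their links, then map each op to platform-specific code."""
    deps = {n["id"]: set() for n in pipeline["nodes"]}
    for link in pipeline["links"]:
        deps[link["to"]].add(link["from"])  # predecessor must run first
    ops = {n["id"]: n["op"] for n in pipeline["nodes"]}
    order = TopologicalSorter(deps).static_order()
    return "\n".join(TEMPLATES[platform][ops[node]] for node in order)

print(generate(PIPELINE, "spark"))
```

Swapping the platform identifier would select a different template table while leaving the GUI-built graph untouched, which is the sense in which the pipeline stays platform-agnostic.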
Decoupling loop dependencies using buffers to enable pipelining of loops
Decoupling loop dependencies using first in, first out (FIFO) buffers or other types of buffers to enable pipelining of loops is disclosed. By using buffers along with tailored ordering of their writes and reads, loop dependencies can be decoupled. This allows the loop to be pipelined and can lead to improved performance.
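The idea can be illustrated in software: a loop-carried recurrence such as `a[i] = a[i-dist] + x[i]` reads values produced several iterations earlier, and a FIFO of depth `dist`, pre-filled with the initial values, carries exactly those values. This Python model (a hardware design would use an HLS or RTL FIFO) shows the tailored ordering of writes and reads:

```python
from collections import deque

def recurrence_with_fifo(x, dist=4):
    """Compute a[i] = a[i-dist] + x[i] (with a[i] = 0 for i < 0) via a FIFO.

    The FIFO carries the loop dependency: each iteration pops the value
    produced `dist` iterations earlier and pushes its own result, so the
    loop body no longer reads storage written by a nearby prior iteration
    and the loop can be pipelined with initiation interval 1.
    """
    fifo = deque([0] * dist)   # pre-fill: values for i < 0
    out = []
    for xi in x:
        prev = fifo.popleft()  # read: value from `dist` iterations ago
        cur = prev + xi
        fifo.append(cur)       # write: consumed `dist` iterations later
        out.append(cur)
    return out

print(recurrence_with_fifo([1, 2, 3, 4, 5, 6, 7, 8]))  # → [1, 2, 3, 4, 6, 8, 10, 12]
```

The depth of the buffer matches the dependency distance; in hardware this is what lets consecutive iterations overlap instead of stalling on the recurrence.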
System and method for dynamic, incremental recommendations within real-time visual simulation
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML), for use in managing a flow of data (dataflow, DF), and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system can include a software development component and graphical user interface, referred to herein in some embodiments as a pipeline editor, or Lambda Studio IDE, that provides a visual environment for use with the system, including providing real-time recommendations for performing semantic actions on data accessed from an input HUB, based on an understanding of the meaning or semantics associated with the data.
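Such recommendations might work, in caricature, by inferring a semantic type from a sample of the accessed data and mapping that type to candidate actions. The regex detectors and action lists below are invented for illustration, not the system's actual classifiers:

```python
import re

# Hypothetical semantic-type detectors and the actions suggested for each.
DETECTORS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "zip_code": re.compile(r"^\d{5}$"),
}
ACTIONS = {"email": ["mask", "validate_domain"],
           "zip_code": ["geocode", "enrich_with_region"]}

def recommend_actions(sample):
    """Infer the semantic type of sampled values; return candidate actions."""
    for semantic_type, pattern in DETECTORS.items():
        if all(pattern.match(str(v)) for v in sample):
            return ACTIONS[semantic_type]
    return []

print(recommend_actions(["ada@example.com", "bob@example.org"]))
# → ['mask', 'validate_domain']
```

A pipeline editor could re-run this kind of classification on each edit, which is what makes the recommendations incremental and real-time rather than batch.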
System and method for ontology induction through statistical profiling and reference schema matching
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML), for use in managing a flow of data (dataflow, DF), and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system can perform an ontology analysis of a schema definition, to determine the types of data, and datasets or entities, associated with that schema; and generate, or update, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities, and their attributes. A reference HUB including one or more schemas can be used to analyze data flows, and further to classify or make recommendations such as, for example, transformations, enrichments, filtering, or cross-entity data fusion of input data.
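A toy version of statistical profiling against a reference schema: summarize each column's value type and distinct-value ratio, then match the profile to reference attributes. The `REFERENCE_SCHEMA` entries and thresholds are assumed for illustration, not taken from the patent:

```python
def profile(column):
    """Summarize a column: inferred value type and distinct-value ratio."""
    types = {type(v).__name__ for v in column}
    return {"type": types.pop() if len(types) == 1 else "mixed",
            "distinct_ratio": len(set(column)) / len(column)}

# Hypothetical reference schema from a reference HUB.
REFERENCE_SCHEMA = {
    "customer_id": {"type": "int", "distinct_ratio_min": 0.9},
    "country":     {"type": "str", "distinct_ratio_min": 0.0},
}

def classify(column):
    """Match a profiled column to compatible reference-schema attributes."""
    p = profile(column)
    return [name for name, ref in REFERENCE_SCHEMA.items()
            if ref["type"] == p["type"]
            and p["distinct_ratio"] >= ref["distinct_ratio_min"]]

print(classify([101, 102, 103, 104]))      # high-cardinality ints → ['customer_id']
print(classify(["US", "US", "DE", "FR"]))  # low-cardinality strings → ['country']
```

Once a column is matched to a reference attribute, the relationships that attribute participates in can drive the recommended enrichments or cross-entity fusions.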
Machine learning pipeline skeleton instantiation
Operations include obtaining a machine learning (ML) pipeline skeleton that indicates a set of first functional blocks to use to process a new dataset of a new ML project. The operations also include obtaining a relationship mapping that maps dataset features to respective functional blocks, the relationship mapping indicating correspondences between dataset features of existing datasets of existing ML projects and usage of second functional blocks of existing ML pipelines of the existing ML projects. The operations also include mapping the first functional blocks to respective portions of the new dataset based on the relationship mapping. In addition, the operations include instantiating the pipeline skeleton with respective code snippets that each correspond to a respective first functional block of the set of first functional blocks, the respective code snippets each including one or more respective code elements that are based on the mapping of the first functional blocks.
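The skeleton-instantiation flow can be sketched as: a relationship mapping records which feature kind each functional block applies to, and each block's code snippet is filled in with the matching columns of the new dataset. The block names, the mapping, and the pandas-style snippet strings are illustrative assumptions:

```python
SKELETON = ["impute", "encode", "train"]  # first functional blocks, in order

# Relationship mapping learned from existing ML projects: which dataset
# feature kind each functional block was used on.
RELATIONSHIP_MAPPING = {"impute": "numeric", "encode": "categorical", "train": "all"}

# Code snippets corresponding to each functional block.
SNIPPETS = {
    "impute": "X[{cols}] = X[{cols}].fillna(X[{cols}].mean())",
    "encode": "X = pd.get_dummies(X, columns={cols})",
    "train":  "model.fit(X[{cols}], y)",
}

def instantiate(skeleton, feature_kinds):
    """Fill each block's snippet with the new dataset's matching columns."""
    lines = []
    for block in skeleton:
        kind = RELATIONSHIP_MAPPING[block]
        cols = [c for c, k in feature_kinds.items() if kind in ("all", k)]
        lines.append(SNIPPETS[block].format(cols=cols))
    return lines

features = {"age": "numeric", "income": "numeric", "city": "categorical"}
for line in instantiate(SKELETON, features):
    print(line)
```

The output is a concrete pipeline in which each generated code element is grounded in the mapping between the new dataset's features and the blocks' historical usage.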
Methods and systems for nested stream prefetching for general purpose central processing units
A method and hardware system to remove the overhead caused by having stream-handling instructions in nested loops. Where code contains inner loops nested in outer loops, a compiler pass identifies qualified nested streams and generates ISA-specific instructions that transfer stream information, linking an inner-loop stream with an outer-loop stream, to hardware components of a co-designed prefetcher. The hardware components include a frontend able to decode and execute instructions for a stream-linking information transfer mechanism; a stream engine unit with a streams configuration table (SCT) having a field that allows a subordinate stream to stay pending for values from its master stream; and a stream prefetch manager with buffers for storing the values of current elements of a master stream, and with a nested-streams control unit for reconfiguring and iterating the streams.
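The master/subordinate stream linkage resembles, in software terms, a CSR-style nested loop in which the outer (master) stream's current elements supply the inner (subordinate) stream's bounds. This Python model only mimics the access pattern the hardware prefetcher would follow; the array names are illustrative:

```python
# Master stream: per-row extents (as in a CSR row-pointer array).
row_ptr = [0, 2, 5, 6]
# Subordinate stream: the elements whose addresses depend on the master.
col_idx = [3, 7, 1, 4, 8, 2]

def linked_stream_addresses(master, subordinate):
    """Yield subordinate-stream elements in prefetch order.

    The subordinate stream stays "pending" until the master stream supplies
    the (start, end) bounds for each outer iteration - the role played in
    hardware by the SCT field linking a subordinate stream to its master.
    """
    for i in range(len(master) - 1):
        start, end = master[i], master[i + 1]  # values from the master stream
        for j in range(start, end):
            yield subordinate[j]

print(list(linked_stream_addresses(row_ptr, col_idx)))  # → [3, 7, 1, 4, 8, 2]
```

Once the link is described to the prefetcher via the transferred stream information, the hardware can walk this pattern itself, removing the per-iteration stream-handling instructions from the loop body.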
Systems and methods for automatically modifying pipelined enterprise software
Systems and methods for version control of pipelined enterprise software are disclosed. Exemplary implementations may: store information for executable code of software applications that are installed and executable by users; receive first user input from a first user that represents a selection by the first user of a first software pipeline for execution; receive second user input from a second user that represents a second selection by the second user of a second software pipeline for execution, wherein the second software pipeline includes different versions of software applications that are included in the first software pipeline; facilitate execution of the first software pipeline for the first user; and facilitate execution of the second software pipeline for the second user at the same time as the execution of the first software pipeline for the first user.
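Concurrent execution of two pipeline versions can be pictured as a shared store of installed application versions from which each user's selection draws independently. The `(name, version)` keying and the sample applications are assumptions for illustration:

```python
# Shared store of installed, executable application versions.
APPS = {
    ("clean", "1.0"): lambda rows: [v for v in rows if v is not None],
    ("clean", "2.0"): lambda rows: [v for v in rows if v not in (None, "")],
    ("count", "1.0"): len,
}

def run_pipeline(selection, data):
    """Execute the user's selected application versions, in order."""
    for name, version in selection:
        data = APPS[(name, version)](data)
    return data

first_user  = [("clean", "1.0"), ("count", "1.0")]
second_user = [("clean", "2.0"), ("count", "1.0")]  # same apps, newer "clean"

data = [1, None, 2, "", 3]
print(run_pipeline(first_user, data))   # → 4 ("" survives clean 1.0)
print(run_pipeline(second_user, data))  # → 3 ("" dropped by clean 2.0)
```

Because each invocation only reads from the shared store, the two users' pipelines can run at the same time without one version's selection affecting the other's.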
Pipeline management tool
Systems, methods, and non-transitory computer readable media are provided for managing pipelines of operations on data. A system may access data and provide a set of functions for the data. The system may receive a user's selection of one or more functions from the set of functions. The system may generate a pipeline of operations for the data based on the user's selection. The pipeline of operations may include the function(s) selected by the user.
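The generate-a-pipeline-from-selected-functions step might look like simple composition over a function catalog. The catalog entries here are hypothetical:

```python
# Set of functions the system offers for the data.
AVAILABLE = {
    "dedupe": lambda rows: list(dict.fromkeys(rows)),  # order-preserving
    "sort": sorted,
    "head": lambda rows: rows[:3],
}

def build_pipeline(selected):
    """Compose the user-selected functions, in order, into one pipeline of operations."""
    def pipeline(data):
        for name in selected:
            data = AVAILABLE[name](data)
        return data
    return pipeline

# The user's selection becomes a reusable pipeline over the accessed data.
pipeline = build_pipeline(["dedupe", "sort", "head"])
print(pipeline([5, 3, 5, 1, 9, 1, 7]))  # → [1, 3, 5]
```

Keeping the pipeline as an ordered list of catalog names means it can be stored, edited, and re-applied to other data without changing the functions themselves.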