G06F8/44

Systems using computation graphs for flow solvers

An embodiment of a method can create a directed acyclic graph (DAG) from a programmer-specified set of computation units to solve, in a computer program, physics-based simulations of physical systems, and the DAG can be used to analyze and debug the computer program. In this method, the computer program can be created by automatically determining dependency relationships in the set of computation units and automatically scheduling their execution. The method can also automatically allocate memory for the computation units.
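
The dependency inference and scheduling described above can be sketched as follows. This is a minimal illustration, not the patented method: the unit names, the read/write field declarations, and the idea of inferring an edge from each reader to the writer of its input are all assumptions chosen for the example.

```python
from graphlib import TopologicalSorter

# Hypothetical computation units: each declares the fields it reads and writes.
# A dependency is inferred automatically wherever one unit reads a field that
# another unit writes.
units = {
    "compute_flux":   {"reads": {"state"},              "writes": {"flux"}},
    "apply_boundary": {"reads": {"state"},              "writes": {"state_bc"}},
    "update_state":   {"reads": {"flux", "state_bc"},   "writes": {"state_next"}},
}

def build_dag(units):
    # Map each written field to the unit that produces it.
    writers = {field: name for name, u in units.items() for field in u["writes"]}
    # Each unit depends on the writers of the fields it reads.
    return {name: {writers[f] for f in u["reads"] if f in writers}
            for name, u in units.items()}

dag = build_dag(units)
schedule = list(TopologicalSorter(dag).static_order())
print(schedule)  # 'update_state' is always scheduled after its two producers
```

Because the schedule is derived from the DAG rather than from source order, the same graph can also be inspected for debugging, e.g. to explain why a unit ran when it did.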

Management of building of software packages using a trusted execution environment

Systems and methods provide a processing device that receives, by a software build process executing in a trusted execution environment (TEE) of a first computer system, software source code from a second computer system. The processing device generates a software package by compiling the software source code. The processing device also generates a first signature of the software package and sends the first signature to the second computer system. Responsive to receiving, from the second computer system, a second signature comprising the first signature signed by the second computer system, the processing device further deploys the software package on the first computer system.
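
The sign-then-countersign exchange can be illustrated with a toy sketch. The key names and the use of HMAC-SHA256 in place of real attestation signatures are assumptions for illustration; an actual TEE would use sealed asymmetric keys and remote attestation.

```python
import hashlib
import hmac

# Hypothetical keys: in a real TEE the build key would be sealed to the enclave.
TEE_BUILD_KEY = b"tee-build-key"
CUSTOMER_KEY = b"customer-key"

def sign(key, data):
    return hmac.new(key, data, hashlib.sha256).hexdigest()

# 1. The build process in the TEE compiles the source into a package (elided).
package = b"compiled-software-package"

# 2. The TEE produces the first signature and sends it to the source owner.
first_sig = sign(TEE_BUILD_KEY, package)

# 3. The source owner countersigns the first signature (the second signature).
second_sig = sign(CUSTOMER_KEY, first_sig.encode())

# 4. The TEE deploys only after verifying the countersignature chain.
def deploy(package, second_sig):
    expected = sign(CUSTOMER_KEY, sign(TEE_BUILD_KEY, package).encode())
    return "deployed" if hmac.compare_digest(expected, second_sig) else "rejected"

print(deploy(package, second_sig))            # deployed
print(deploy(b"tampered-package", second_sig))  # rejected
```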

SEARCH BASED APPROACH FOR GENERATING CONTROLLER MODELS

A method includes obtaining binary code of a controller. The method also includes decompiling the binary code of the controller to generate source code. The method further includes generating one or more abstract syntax trees based on the source code. The method further includes generating an interpretable model based on the one or more abstract syntax trees. The interpretable model is interpretable by subject matter experts.
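
The AST-to-interpretable-model step can be sketched with Python's `ast` module. The decompiled controller source, the `control` function, and the flattening of an if/elif/else chain into (condition, action) rules are all hypothetical choices for the example.

```python
import ast

# Hypothetical decompiled controller source (the decompilation step is elided).
source = """
def control(temp):
    if temp > 100:
        return -1
    elif temp < 20:
        return 1
    else:
        return 0
"""

tree = ast.parse(source)
fn = tree.body[0]  # the controller's FunctionDef node

def extract(node):
    """Flatten an if/elif/else chain into interpretable (condition, action) rules."""
    rules = []
    while isinstance(node, ast.If):
        rules.append((ast.unparse(node.test), ast.unparse(node.body[0])))
        node = node.orelse[0] if node.orelse else None  # sketch: single-stmt branches
    if node is not None:
        rules.append(("otherwise", ast.unparse(node)))
    return rules

rules = extract(fn.body[0])
print(rules)
# [('temp > 100', 'return -1'), ('temp < 20', 'return 1'), ('otherwise', 'return 0')]
```

A rule list of this form is directly readable by a subject matter expert, unlike the original binary.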

Multi-lingual code generation with zero-shot inference

A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-shot inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
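
One way to picture the file-level plus local context features is as (input, target) pairs for line completion. The split point, the example class, and the helper name are illustrative assumptions, not the patented feature pipeline.

```python
def line_completion_examples(file_context, body_lines):
    """Yield (input, target) pairs where the model must complete each line.

    file_context: hypothetical file-level context (imports, class/method
    signatures); body_lines: the local context (a method body).
    """
    examples = []
    for i, line in enumerate(body_lines):
        split = len(line) // 2                      # a partially typed line
        context = file_context + "\n" + "\n".join(body_lines[:i]) + "\n" + line[:split]
        examples.append((context, line[split:]))    # target: rest of the line
    return examples

file_ctx = "import math\nclass Circle:\n    def area(self, r):"
body = ["        return math.pi * r ** 2"]
inp, target = line_completion_examples(file_ctx, body)[0]
print(repr(target))  # the ordered sequence of code elements to be predicted
```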

MULTISTAGE COMPILER ARCHITECTURE

A system includes a compiler including a plurality of compiler blocks. The compiler blocks of the plurality of compiler blocks are composable. The compiler is configured to identify one or more resources in a hardware to execute a set of low-level instructions that is generated from a high-level function in a high-level code. The compiler is further configured to determine one or more processing operations to be performed that are associated with the high-level function in the high-level code. The determining of the one or more processing operations occurs based on the architecture of the hardware. The compiler is configured to compile the high-level function in the high-level code of the application into the set of low-level instructions to be executed on the hardware.
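
Composable compiler blocks can be sketched as functions from one intermediate representation to the next, chained with ordinary function composition. The three stages, the toy expression format, and the instruction names are assumptions for illustration.

```python
from functools import reduce

# Hypothetical composable compiler blocks: each maps one IR to the next,
# so any compatible blocks can be chained into a multistage compiler.
def parse(high_level):                 # source text -> expression tuple
    a, op, b = high_level.split()
    return (op, a, b)

def lower(expr):                       # expression -> hardware-aware op list
    op, a, b = expr
    ops = {"+": "ADD", "*": "MUL"}     # chosen per the target architecture
    return [("LOAD", "r0", a), ("LOAD", "r1", b), (ops[op], "r0", "r1")]

def emit(op_list):                     # op list -> low-level instruction text
    return [" ".join(map(str, op)) for op in op_list]

def compose(*blocks):
    return lambda x: reduce(lambda acc, block: block(acc), blocks, x)

compiler = compose(parse, lower, emit)
print(compiler("x + y"))  # ['LOAD r0 x', 'LOAD r1 y', 'ADD r0 r1']
```

Because each block only agrees on its input and output IR, blocks can be swapped, e.g. a different `lower` stage for a different hardware architecture, without touching the rest of the pipeline.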

SYSTEMS AND METHODS FOR FACILITATING GENERATION AND DEPLOYMENT OF MACHINE LEARNING SOFTWARE APPLICATIONS

Generally described, one or more aspects of the present application relate to improving the process of generating and deploying software applications in a network environment, particularly software applications that incorporate or rely upon machine learning models. More specifically, the present disclosure provides specific user interface features and associated computer-implemented features that may effectively, from a user's perspective, remove most of the complexities associated with writing and deploying code and developing and improving machine learning models. For example, the present disclosure may provide user-friendly visual building blocks that allow users to build and customize machine learning workflows that can then be turned into a full software application and optimized and deployed at target destinations of the users' choice.

COMBINING MODEL-DRIVEN APPLICATIONS AND CANVAS-TYPE APPLICATIONS WITH APPLICATION LIFECYCLE MANAGEMENT

Systems and methods for generating and storing application metadata corresponding to a plurality of sub-applications, combining model-driven applications and canvas-type applications. Lifecycle components of the plurality of sub-applications are coupled to each other using one or more data relationships defined by an embedding model and the stored metadata. The metadata points to a library associated with the plurality of sub-applications, and the library comprises a newest version of one or more of the lifecycle components. The compiled plurality of sub-applications can then be run.

Software testing in parallel threads with a record-locking database

Test cases written to test a software application can be dynamically distributed among different sets of test cases that can be executed simultaneously in different parallel threads, thereby speeding up testing relative to executing the test cases sequentially in a single thread. To avoid database conflicts that may occur when different test cases in different parallel threads attempt to access the same database simultaneously, testing of the software application can be performed in association with a record-locking database that locks database records individually instead of locking entire database tables or locking data structures that are larger than individual records. Locking individual database records can reduce and/or eliminate the chances that a test case in one parallel thread will be unable to access a record in the database because another test case in another parallel thread is simultaneously accessing the same record.

Automated code generation using analysis of design diagrams

Methods, systems, and computer-readable media for automated code generation using analysis of design diagrams are disclosed. A diagram-to-code system determines one or more security properties of a plurality of components associated with a software product. Relationships between the components are indicated in a software design diagram. At least some of the security properties are determined using input to a user interface. The diagram-to-code system generates one or more secure code packages based (at least in part) on the software design diagram and the one or more security properties. The secure code package(s) implement one or more security controls associated with the software product. The secure code package(s) are provided to a developer. The secure code package(s) and additional program code from the developer are compiled into a compiled software product. Execution of the compiled software product mitigates security vulnerabilities using the one or more security controls.
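
The diagram-to-code idea can be sketched as a generator that reads component security properties and relationships and emits code implementing a security control. The component names, the `handles_pii` property, and the emitted `connect(...)` call are all hypothetical; they stand in for whatever properties and secure code templates a real system would use.

```python
# Hypothetical diagram model: components with security properties, plus the
# relationships (edges) drawn in the software design diagram.
components = {
    "web": {"handles_pii": False},
    "db":  {"handles_pii": True},
}
edges = [("web", "db")]

def generate_secure_package(components, edges):
    """Emit code that applies a security control on sensitive connections."""
    lines = []
    for src, dst in edges:
        if components[dst]["handles_pii"]:
            # Security control: encrypt any channel into a PII-handling component.
            lines.append(f"connect({src!r}, {dst!r}, encrypt=True)")
        else:
            lines.append(f"connect({src!r}, {dst!r})")
    return "\n".join(lines)

print(generate_secure_package(components, edges))
# connect('web', 'db', encrypt=True)
```

The generated package would then be compiled together with the developer's own program code, so the control is enforced at runtime without the developer writing it by hand.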

Interpreter for interpreting a data model algorithm and creating a data schema
11520565 · 2022-12-06

A computing device for interpreting a data model algorithm includes an object searcher, an interpreter, and a translator. The object searcher is configured to search for attributes within datasets generated from at least one method of an instantiation of the data model algorithm in a development mode workflow. The interpreter is configured to evaluate the attributes, identify attributes having a use type, identify the type information of the identified attributes, and create a data schema using the identified attributes and type information. The use type can be determined based on attribute values or an interface type associated with an identified attribute. The translator is configured to compare the data schema with another data schema in response to selecting the data model algorithm for inclusion in a production mode workflow.
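
The interpreter and translator steps can be sketched as schema inference followed by a compatibility check. The rule "skip attributes without a usable value" and the `compatible` helper are illustrative assumptions standing in for the use-type and comparison logic the device would actually apply.

```python
# Hypothetical schema inference: inspect a dataset row produced by the data
# model algorithm, keep attributes with a usable value, and record each
# attribute's type in a schema.
def create_schema(dataset_row):
    schema = {}
    for attr, value in dataset_row.items():
        if value is None:     # attribute without a usable value: skip it
            continue
        schema[attr] = type(value).__name__
    return schema

def compatible(dev_schema, prod_schema):
    """Translator step: the production schema must cover the development one."""
    return all(prod_schema.get(attr) == t for attr, t in dev_schema.items())

dev = create_schema({"id": 1, "score": 0.5, "note": None})
print(dev)  # {'id': 'int', 'score': 'float'}
print(compatible(dev, {"id": "int", "score": "float", "extra": "str"}))  # True
```

A comparison like this lets the device flag, before promotion to the production mode workflow, any attribute whose type no longer matches what the development mode workflow produced.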