G06F8/437

Type inference in dynamic languages

To improve the technological process of programming a computer using a dynamic programming language, generate a first portion of training data that maps types in the dynamic programming language to corresponding functions and methods by performing information retrieval on documentation libraries for the language. Also or instead, generate a second portion of training data that maps program variables to the corresponding functions and methods by performing data flow analysis on a plurality of pre-existing programs written in the language. Train a neural network on the first and/or second portions of training data to infer unknown types in the dynamic programming language. Carry out inference with the trained neural network to infer the unknown types. Facilitate programming in the dynamic programming language based on the inferred types. Optionally, execute a resulting program.
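The data-flow portion described above can be sketched as a toy pass that records which methods each program variable invokes, producing the (variable → methods) pairs the abstract proposes as training examples. This is a minimal illustration using Python's `ast` module; the sample program and the single-pass walk are illustrative assumptions, not from the disclosure.

```python
import ast

def variable_method_map(source: str) -> dict:
    """Map each variable name to the set of methods invoked on it.

    A stand-in for the data-flow analysis: each (variable, method set)
    pair becomes a candidate training example for a type-inference model.
    """
    mapping = {}
    for node in ast.walk(ast.parse(source)):
        # Match calls of the form <name>.<method>(...)
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            mapping.setdefault(node.func.value.id, set()).add(node.func.attr)
    return mapping

program = """
items = load()
items.append(1)
items.sort()
name = fetch()
name.upper()
"""
result = variable_method_map(program)
```

Here `items` maps to `{"append", "sort"}`, a usage signature that strongly suggests a list type even though the program carries no annotations.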

Partial data type promotion to exploit efficient vectorization in microprocessors

Aspects of the invention include a compiler detecting an expression in a loop that includes elements of mixed data types. The compiler promotes the elements of a sub-expression of that expression to a common intermediate data type, then evaluates the sub-expression using the promoted elements.
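The effect of promoting only a sub-expression can be illustrated by simulating fixed-width arithmetic: an 8-bit product overflows, while the same product computed at a 16-bit intermediate width does not. The `wrap` helper and the operand values are illustrative assumptions, not from the disclosure.

```python
def wrap(value: int, bits: int) -> int:
    """Simulate fixed-width two's-complement arithmetic."""
    mask = (1 << bits) - 1
    value &= mask
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

a = [100, -100]   # int8 operands
b = [3, 3]

# Without promotion: the product sub-expression wraps at 8 bits.
naive = [wrap(x * y, 8) for x, y in zip(a, b)]

# With partial promotion: the sub-expression is evaluated at a 16-bit
# intermediate type, so the product survives intact.
promoted = [wrap(wrap(x, 16) * wrap(y, 16), 16) for x, y in zip(a, b)]
```

Promoting only the sub-expression (rather than the whole loop body) keeps the remaining elements at their narrow widths, which is what lets the vector units pack more lanes per register.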

SYSTEMS AND METHODS FOR HANDLING MACRO COMPATIBILITY FOR DOCUMENTS AT A STORAGE SYSTEM

Systems and methods for handling macro compatibility for documents at a storage system are provided. A document to be stored on a network-based storage system is identified. The document was created using a first document processing application whose first programming language is incompatible with the network-based storage system, and the document includes macros in that first programming language. A semantic context is determined for an object included in a macro, where the macro defines a function to be performed with respect to the object. In response to a determination, based on the semantic context of the object, that the object corresponds to multiple object types, a set of candidate object types for the object is identified. The function is converted into multiple sets of operations represented in a second programming language that is compatible with the network-based storage system. Each set of operations is associated with a candidate object type, and one set of operations is to be performed with respect to the object responsive to receiving an indication of a candidate object type for the object during execution of the macro. The document, including the multiple sets of operations represented in the second programming language, is stored on the network-based storage system.
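The "one operation set per candidate type, selected at execution time" idea can be sketched as a dispatch table: the converter emits one converted routine per candidate object type, and the macro runtime picks the routine once the object's type is known. The `range`/`sheet` candidate types and a hypothetical `Sort` macro are illustrative assumptions, not from the disclosure.

```python
# Hypothetical converted macro: the original macro called Sort() on an
# ambiguous object, so conversion produced one operation set per
# candidate object type.
def sort_range(obj):
    return sorted(obj["cells"])

def sort_sheet(obj):
    return {name: sorted(col) for name, col in obj["columns"].items()}

CANDIDATE_OPS = {"range": sort_range, "sheet": sort_sheet}

def run_macro(obj, resolved_type: str):
    """Pick the operation set matching the type resolved at execution time."""
    return CANDIDATE_OPS[resolved_type](obj)

rng = {"cells": [3, 1, 2]}
sheet = {"columns": {"a": [2, 1]}}
```

Keeping all candidate operation sets in the stored document is what lets the type decision be deferred until the macro actually runs.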

Accessing a migrated member in an updated type

Techniques for accessing a migrated method include: identifying a first request to invoke a method defined by a particular type; identifying, in the particular type, an older version of the method that is (a) associated with a method name and (b) configured to return values of a first return type, and a current version of the method that is (a) associated with the same method name and (b) configured to return values of a second return type; determining that the first request specifies the first return type; and, responsive to that determination: executing the current version of the method to obtain a value of the second return type; applying one or more conversion functions to convert the value of the second return type to a value of the first return type; and returning the value of the first return type responsive to the first request.
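A minimal sketch of this shim: always run the current version, then apply a registered conversion when the caller asked for the older return type. The `get_timestamp` API (migrated from an epoch-second `int` to a `datetime`) is a hypothetical example, not from the disclosure.

```python
from datetime import datetime, timezone

# Current version of the migrated method: returns the second return type.
def get_timestamp_current() -> datetime:
    return datetime(2024, 1, 1, tzinfo=timezone.utc)

# Conversion functions keyed by (current type, requested legacy type).
CONVERSIONS = {(datetime, int): lambda dt: int(dt.timestamp())}

def invoke(requested_type):
    value = get_timestamp_current()          # always execute the current version
    if requested_type is not type(value):    # caller asked for the old return type
        value = CONVERSIONS[(type(value), requested_type)](value)
    return value
```

Old callers compiled against the `int`-returning signature keep working, while the type itself carries only the current implementation.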

Executing a parametric method within a specialized context

A parametric constant resolves to different values in different contexts, but a single value within a particular context. An anchor constant is a parametric constant that allows for a degree of parametricity for an API point. The context for the anchor constant is provided by a caller to the API point. The anchor constant resolves to an anchor value that records specialization decisions for the API point within the provided context. Specialization decisions may include type restrictions, memory layout, and/or memory size. The anchor value together with an unspecialized type of the API point result in a specialized type of the API point. A class object representing the specialized type is created. The class object may be accessible to the caller, but the full value of the anchor value is not accessible to the caller. The API point is executed based on the specialization decisions embodied in the anchor value.
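The "resolves to different values in different contexts, but a single value within a particular context" behavior can be sketched with a memoized specialization function: the caller supplies the context, the anchor value records the resulting type and layout decisions, and repeated resolution in the same context yields the same anchor value. The `AnchorValue` fields and the size table are illustrative assumptions, not from the disclosure.

```python
from dataclasses import dataclass
from functools import lru_cache

@dataclass(frozen=True)
class AnchorValue:
    element_type: type      # specialization decision: type restriction
    element_size: int       # specialization decision: memory layout/size

@lru_cache(maxsize=None)    # one anchor value per (API point, context)
def specialize(api_point: str, context: type) -> AnchorValue:
    sizes = {int: 8, float: 8, bool: 1}
    return AnchorValue(context, sizes[context])

a1 = specialize("ArrayList.get", int)
a2 = specialize("ArrayList.get", int)
a3 = specialize("ArrayList.get", bool)
```

Within the `int` context the constant resolves to one identical value (`a1 is a2`), while a different context yields a different specialization; a real implementation would expose only the specialized class object, not the anchor value's full contents.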

USER CUSTOMIZABLE COMPILER ATTRIBUTES FOR CODE CHECKING
20230251837 · 2023-08-10

Disclosed herein is technology to use customized compiler attributes to check source code. An example method may include: accessing, by a processing device executing a compiler, a source code that comprises a compiler attribute associated with a programming construct, wherein the compiler attribute is defined in the source code; executing, by the processing device, a function of the compiler to check the programming construct at a location in the source code, wherein the function checks the programming construct by evaluating the compiler attribute associated with the programming construct; determining, by the processing device executing the compiler, whether to generate a message indicating a status of the check; and generating, by the processing device executing the compiler, object code based on the source code that comprises the compiler attribute.
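The flow above — a user-defined attribute attached to a construct, evaluated by a compiler function that may emit a status message — can be sketched by treating Python decorators as the attributes and an `ast` pass as the checker. The `@max_params(n)` attribute is hypothetical, invented for illustration.

```python
import ast

def check_max_params(source: str) -> list:
    """Evaluate a hypothetical @max_params(n) attribute on each function."""
    messages = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.FunctionDef):
            continue
        for dec in node.decorator_list:
            if (isinstance(dec, ast.Call) and isinstance(dec.func, ast.Name)
                    and dec.func.id == "max_params"):
                limit = dec.args[0].value
                actual = len(node.args.args)
                if actual > limit:   # decide whether to generate a message
                    messages.append(f"{node.name}: {actual} parameters, "
                                    f"attribute allows {limit}")
    return messages

code = """
@max_params(2)
def ok(a, b): ...

@max_params(1)
def too_many(a, b, c): ...
"""
msgs = check_max_params(code)
```

A real compiler would run such checks during semantic analysis and still emit object code carrying the attribute, as the abstract describes.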

Artificial intelligence engine with enhanced computing hardware throughput
11762635 · 2023-09-19

An artificial intelligence (“AI”) engine is disclosed with AI-engine modules and a plurality of learning agents. The AI-engine modules include instructor, learner, and predictor modules. The learner module is configured to train a plurality of AI models in parallel, and the instructor module is configured to coordinate with a plurality of simulators for respectively training the AI models. The learning agents are configured to process training requests from the instructor on data from the simulators for training the AI models. The learner module is further configured to first train the AI models on a first batch of similar data synchronously pooled in a memory of the learner module with a first processor. The learner module is further configured to subsequently train the AI models on a second, different batch of similar data synchronously pooled in the memory of the learner module with the first processor.
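The batching scheme — similar data pooled synchronously in the learner's memory, with the models then trained against each pool in turn on one processor — can be sketched with toy models. The sample structure (per-simulator pools, running-mean "models") is an illustrative assumption, not the AI engine's actual design.

```python
from collections import defaultdict

def pool_batches(samples):
    """Group incoming (simulator_id, datum) pairs into per-kind pools."""
    pools = defaultdict(list)
    for kind, datum in samples:
        pools[kind].append(datum)
    return pools

class Learner:
    def __init__(self, n_models: int):
        self.models = [0.0] * n_models   # toy "models": running means

    def train_on_pool(self, pool):
        mean = sum(pool) / len(pool)
        # Every model consumes the same in-memory pooled batch before
        # the next batch is loaded, maximizing processor throughput.
        self.models = [(m + mean) / 2 for m in self.models]

samples = [("cartpole", 1.0), ("cartpole", 3.0), ("arm", 5.0)]
pools = pool_batches(samples)
learner = Learner(2)
learner.train_on_pool(pools["cartpole"])   # first batch of similar data
```

Pooling similar data keeps the processor fed with one contiguous batch instead of many small, heterogeneous transfers.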

Performance optimization of class instance comparisons

An embodiment includes executing a code interpretation engine such that the engine interprets a first portion of a source code that includes a first comparison between a first pair of operands. The embodiment also includes performing, in memory, a first bitwise comparison between a block A1 and a block B1 of the first portion of the source code. Responsive to the first bitwise comparison producing a negative result, the embodiment speeds up execution of the first portion of the source code by omitting at least one of (i) a second bitwise comparison between a block A2 and a block B2, and (ii) a field-wise comparison between a block A3 and a block B3.
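The short-circuit structure can be sketched directly: compare the instances block by block, and let a mismatch in an early block settle the answer so the later bitwise and field-wise comparisons never run. Representing blocks as `bytes` and counting comparisons are illustrative choices, not from the disclosure.

```python
def instances_equal(a_blocks, b_blocks):
    """Compare two instances block by block, short-circuiting on mismatch.

    Returns (result, blocks_compared) so the saved work is visible.
    """
    compared = 0
    for a, b in zip(a_blocks, b_blocks):
        compared += 1
        if a != b:                    # bitwise mismatch: negative result
            return False, compared    # remaining comparisons are omitted
    return True, compared

x = [b"\x01\x02", b"\x03\x04"]
y = [b"\xff\x02", b"\x03\x04"]

same, n_same = instances_equal(x, x)
diff, n_diff = instances_equal(x, y)
```

Here the unequal pair is decided after one block (`n_diff == 1`); the second block, and any field-wise pass, are skipped entirely.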

SYSTEMS, METHODS AND MEDIA FOR DYNAMICALLY SHAPED TENSORS USING LIQUID TYPES
20220027255 · 2022-01-27

Systems, methods, and processor readable media are described for verifying software. A liquid type system is used by a programming language to allow source code to define tensor variables with dimensionality and/or shape dynamically defined at runtime. The dimensionality and shape of a tensor variable invoked in the source code, as well as the data type of the constituent elements of such a tensor variable, may be defined by a static type that may be verified at compile time.
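A refinement-style check in the spirit of liquid types can be sketched as a shape predicate attached to a tensor variable and enforced at a boundary: the declared rank and predicate play the role of the static type, while the concrete dimensions may only become known at runtime. The nested-list tensor encoding and the `check_tensor` API are illustrative assumptions, not the patented type system.

```python
def check_tensor(tensor, rank: int, shape_pred) -> None:
    """Verify a nested-list tensor against its declared refinement type."""
    dims = []
    t = tensor
    while isinstance(t, list):   # measure the runtime shape
        dims.append(len(t))
        t = t[0]
    if len(dims) != rank or not shape_pred(dims):
        raise TypeError(f"shape {dims} violates declared tensor type")

# Declared type: a rank-2 tensor whose two dimensions are equal (square).
square = lambda d: d[0] == d[1]
check_tensor([[1, 2], [3, 4]], rank=2, shape_pred=square)   # passes
```

A liquid type checker would discharge many such predicates statically at compile time; the runtime check here shows what the predicate constrains.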

HETEROGENEITY-AGNOSTIC AND TOPOLOGY-AGNOSTIC DATA PLANE PROGRAMMING
20210365253 · 2021-11-25

The present disclosure provides a compiler operative to convert computer-executable instructions for a network data plane written in a heterogeneity-agnostic and topology-agnostic programming language into an intermediate representation, then compile the intermediate representation into multiple executable representations according to topological constraints of the network. Users may develop software-defined network functionality for a data center network composed of heterogeneous network devices by writing code in a programming language implementing heterogeneity-agnostic and topology-agnostic abstractions, while the compiler synthesizes heterogeneity-dependent and topology-dependent computer-executable object code implementing the software-defined network functionality across network devices of the data center network by analyzing logical dependencies and network topology to determine dependency constraints and resource constraints.
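The two-stage pipeline — one topology-agnostic program lowered to an intermediate representation, then compiled into a different executable representation per device — can be sketched as follows. The rule format, the `offload` capability flag, and the `tcam`/software output forms are illustrative assumptions, not from the disclosure.

```python
def lower_to_ir(program):
    """Lower heterogeneity-agnostic match/action rules to a flat IR."""
    return [("match", rule["match"], "act", rule["action"])
            for rule in program]

def compile_for_device(ir, device):
    """Emit a device-specific representation of the same IR."""
    if device["offload"]:   # e.g. a programmable switch ASIC
        return [f"tcam {m} -> {a}" for (_, m, _, a) in ir]
    # Fallback: run the rule in software on a general-purpose host.
    return [f"sw_match({m!r}) and sw_act({a!r})" for (_, m, _, a) in ir]

program = [{"match": "dst==10.0.0.1", "action": "fwd(2)"}]
ir = lower_to_ir(program)
asic = compile_for_device(ir, {"offload": True})
host = compile_for_device(ir, {"offload": False})
```

The same IR yields different object code per device, which is the point: the user writes one program, and topology and capability analysis decide where each piece actually executes.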