Patent classifications
G06F8/47
Native emulation compatible application binary interface for supporting emulation of foreign code
A function is compiled against a first application binary interface (ABI) and a second ABI of a native first instruction set architecture (ISA). The second ABI defines context data not exceeding a size expected by a third ABI of a foreign second ISA, and uses a subset of registers of the first ISA that are mapped to registers of the second ISA. Use of the subset of registers by the second ABI results in some functions being foldable when compiled using both the first and second ABIs. First and second compiled versions of the function are identified as foldable, or not, based on whether the compiled versions match. Both the first and second compiled versions are emitted into a binary file when they are not foldable, and only one of the first or second compiled versions is emitted into the binary file when they are foldable.
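The folding decision described in this abstract can be sketched as follows. This is a toy Python model, not real code generation: the function names, instruction strings, and the rule that a leaf function using only the shared register subset compiles identically are all illustrative assumptions.

```python
def compile_fn(name, abi):
    """Toy code generator: a leaf function that touches only the register
    subset shared by both ABIs lowers to identical code under either one
    (illustrative instruction strings, not real codegen)."""
    if name == "leaf_add":
        return ("add x0, x0, x1", "ret")
    if abi == "native":
        return ("stp x29, x30, [sp, #-16]!", "bl callee",
                "ldp x29, x30, [sp], #16", "ret")
    return ("stp x29, x30, [sp, #-16]!", "bl emu_thunk_callee",
            "ldp x29, x30, [sp], #16", "ret")

def emit(name):
    """Emit one copy into the binary when the two compiled versions match
    (foldable); otherwise emit both."""
    native = compile_fn(name, "native")
    emulation = compile_fn(name, "emulation")
    if native == emulation:
        return {name: native}                 # foldable: single copy
    return {name + "$native": native, name + "$emu": emulation}
```

A leaf function folds to one emitted copy; a function whose call sequence differs between ABIs is emitted twice.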
HETEROGENEITY-AGNOSTIC AND TOPOLOGY-AGNOSTIC DATA PLANE PROGRAMMING
The present disclosure provides a compiler operative to convert computer-executable instructions for a network data plane written in a heterogeneity-agnostic and topology-agnostic programming language into an intermediate representation, then compile the intermediate representation into multiple executable representations according to topological constraints of the network. Users may develop software-defined network functionality for a data center network composed of heterogeneous network devices by writing code in a programming language implementing heterogeneity-agnostic and topology-agnostic abstractions, while the compiler synthesizes heterogeneity-dependent and topology-dependent computer-executable object code implementing the software-defined network functionality across network devices of the data center network by analyzing logical dependencies and network topology to determine dependency constraints and resource constraints.
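The topology-aware lowering step can be illustrated with a minimal sketch. The device names, their capability sets, and the greedy placement rule are assumptions for illustration; the actual compiler's constraint analysis is far richer.

```python
# Devices along a forwarding path with their supported primitives (illustrative).
PATH = [("tor",  {"parse", "match"}),
        ("agg",  {"match", "count"}),
        ("core", {"match"})]

def place(stages):
    """Greedy placement: each IR stage goes on the earliest device, at or
    after the previous stage's device, that supports it -- a toy version of
    the dependency and resource constraint analysis."""
    placement, idx = [], 0
    for op in stages:
        while idx < len(PATH) and op not in PATH[idx][1]:
            idx += 1
        if idx == len(PATH):
            raise ValueError(f"no device on the path supports {op}")
        placement.append((op, PATH[idx][0]))
    return placement
```

Placing stages in order preserves the logical dependency that a later stage must execute at or after the device hosting an earlier stage.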
COMPILING DOMAIN-SPECIFIC LANGUAGE CODE TO GENERATE EXECUTABLE CODE TARGETING AN APPROPRIATE TYPE OF PROCESSOR OF A NETWORK DEVICE
Systems and methods for programming a network device using a domain-specific language (DSL) are provided. According to one embodiment, source code in the form of a DSL, describing a slow-path task to be performed by a network device, is received by a processing resource. A determination is made regarding which types of processors are available within the network device to implement the slow-path task. For each portion of the source code, a preferred type of processor is determined by which that portion would be most efficiently implemented. When the preferred type of processor is available within the network device, executable code targeting the preferred type of processor is generated based on the portion of the source code; otherwise, intermediate code in the form of a high-level programming language is generated, targeting a general-purpose processor of the network device.
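The per-portion targeting with fallback can be sketched in a few lines. The portion names, the preferred-target table, and the "NPU"/"crypto_engine" labels are hypothetical placeholders.

```python
# Hypothetical preferred targets for DSL slow-path portions.
PREFERRED = {"header_rewrite": "NPU", "encrypt": "crypto_engine"}

def lower(portions, available):
    """For each source portion, emit executable code for its preferred
    processor when that processor exists in the device; otherwise fall back
    to intermediate high-level code for the general-purpose processor."""
    plan = []
    for portion in portions:
        target = PREFERRED.get(portion, "CPU")
        if target in available:
            plan.append((portion, target, "executable"))
        else:
            plan.append((portion, "CPU", "intermediate C"))
    return plan
```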
CROSS-LANGUAGE COMPILATION METHOD AND DEVICE
A compilation method includes obtaining a source program code. The source program code includes a first function in a first language code and a second function in a second language code. The first language code is a native language. The second language code is a non-native language. The method also includes generating a third language code based on the source program code. The third language code includes a third function, a fourth function and a fifth function. The third function is generated based on the first function. The fourth function is generated based on the second function. The fifth function is generated based on the first function and the second function. Executing the third function invokes the fourth function via the fifth function.
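The third/fourth/fifth function relationship can be modeled with ordinary Python functions. The boxed-value marshalling is an assumed stand-in for whatever data-representation difference separates the native and non-native languages.

```python
def fourth(boxed):
    """Generated from the non-native second function: operates on a boxed
    (foreign) value representation -- an illustrative assumption."""
    return {"value": boxed["value"] * 2}

def fifth(x):
    """Glue generated from both functions: marshals a native int into the
    foreign boxed form, calls the fourth function, and unboxes the result."""
    return fourth({"value": x})["value"]

def third(x):
    """Generated from the native first function; invokes fourth via fifth."""
    return fifth(x) + 1
```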
Method and system for runtime instrumentation of software methods
A computerized system of a computing system implementing a .NET framework and useful for instrumenting virtual-machine-based applications includes a computer store containing data, wherein the data comprises a native library, and a computer processor that: provides a virtual machine with a Just-In-Time (JIT) compilation function; loads the native library into process memory; and redirects the JIT compilation function to a custom implementation, wherein the custom compilation function creates a representation of one or more methods being compiled, compares the one or more methods against a policy of methods to be instrumented, and determines that one or more methods match the policy.
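The hook-and-match behavior can be sketched independently of the CLR. The policy patterns, probe strings, and `original_jit` stand-in are hypothetical; a real implementation would replace the runtime's native JIT entry point.

```python
import re

# Hypothetical instrumentation policy: method-name patterns to intercept.
POLICY = [re.compile(r"^Orders\.")]

def original_jit(method_name, il_body):
    """Stand-in for the VM's original JIT compilation function."""
    return f"native<{il_body}>"

def custom_jit(method_name, il_body):
    """Custom implementation the JIT is redirected to: builds a
    representation of the method being compiled, checks it against the
    policy, and injects probes only on a match."""
    if any(pattern.match(method_name) for pattern in POLICY):
        il_body = "enter_probe;" + il_body + ";exit_probe"
    return original_jit(method_name, il_body)
```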
Using artificial intelligence to optimize software to run on heterogeneous computing resource
Systems and methods are described that implement a tool chain which receives original software source code, analyzes the code, and divides it into modules that run optimally on the available heterogeneous resources. For example, the toolchain system segments the original source code into code segments and determines the specialized processor resource, such as a digital signal processing (DSP) processor, Field-Programmable Gate Array (FPGA), Graphics Processing Unit (GPU), and the like, that most optimally performs the computations of a particular code segment. A parsing engine determines the processor among the heterogeneous resources based on a set of rules and/or a trained classifier (e.g., a trained machine learning model). New code segments can be generated that execute on the determined type of processor. Further, the system enables application programming interfaces (APIs) that can interface the new code segment with other generated code segments and/or portions of the original code.
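The rule-based path of the parsing engine can be sketched as keyword dispatch. The keyword-to-target rules are invented for illustration; the patent also allows a trained classifier in place of these rules.

```python
# Illustrative rules: which specialized processor suits which kind of segment.
RULES = [
    ("matmul", "GPU"),         # dense linear algebra -> GPU
    ("fir_filter", "DSP"),     # signal-processing kernels -> DSP
    ("bit_pipeline", "FPGA"),  # fixed-function bit manipulation -> FPGA
]

def classify(segment_text):
    """Rule-based stand-in for the parsing engine (or trained classifier):
    returns the processor type on which the segment runs most optimally."""
    for keyword, target in RULES:
        if keyword in segment_text:
            return target
    return "CPU"               # default: general-purpose processor
```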
EVALUATING A FLOATING-POINT ACCURACY OF A COMPILER
A mechanism for evaluating the floating-point accuracy of a vehicle-driving-compatible compiler includes testing code compiled by the vehicle-driving-compatible compiler against code compiled by a testing-environment-compatible compiler: executing the vehicle-driving-compatible compiled code performs addition-type floating-point operations to provide a first floating-point result; executing the testing-environment-compatible compiled code performs corresponding addition-type floating-point operations to provide a second floating-point result; the first floating-point result is compared to the second floating-point result to provide a comparison result; and the floating-point accuracy of the vehicle-driving-compatible compiler is determined based on the comparison result.
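The comparison step can be illustrated with two summation orders standing in for the two compilers' builds; real compilers can reassociate floating-point additions, so the results may legitimately differ in the last bits. The tolerance value is an assumption.

```python
import math

def sum_in_order(values):
    """Stands in for the vehicle-driving-compatible build's addition order."""
    acc = 0.0
    for v in values:
        acc += v
    return acc

def sum_reversed(values):
    """Stands in for the testing-environment build, which may reassociate."""
    acc = 0.0
    for v in reversed(values):
        acc += v
    return acc

def accuracy_ok(values, rel_tol=1e-12):
    """Comparison result: the two floating-point results need not be
    bit-identical, only within a relative tolerance (assumed here)."""
    return math.isclose(sum_in_order(values), sum_reversed(values),
                        rel_tol=rel_tol)
```

With `[1e16, 1.0, 1.0]` the two orders produce different bit patterns (the 1.0 terms are absorbed in one order but not the other) yet pass the tolerance check.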
Multi-process model for cross-platform applications
A multi-process model to support compiling applications for multiple platforms is described. In one embodiment, applications designed for execution on a mobile platform can be ported to and/or compiled for execution on a desktop/laptop platform without requiring modification of the core program code of the mobile application. The mobile application is executed using a multi-process (e.g., two or more process) model in which the core mobile application program generates content that is displayed by a host process. The host process enables automatic translation of program calls to generate mobile user interface elements into program calls that generate user interface elements of the host platform.
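The host-process translation can be sketched as a call-mapping table. The mapping entries (mobile UIKit-style names to desktop AppKit-style names) are illustrative; a real host process would intercept live API calls across the process boundary, not strings.

```python
# Hypothetical mapping from mobile UI calls to host-platform equivalents.
CALL_TRANSLATION = {"UIButton": "NSButton", "UILabel": "NSTextField"}

def host_process_render(core_process_calls):
    """Host-process side: translate each mobile UI call emitted by the core
    application process into the host platform's widget call, passing
    through anything with no mapping."""
    return [CALL_TRANSLATION.get(call, call) for call in core_process_calls]
```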
System and method of optimizing instructions for quantum computers
A quantum computing system includes a quantum processor having a plurality of qubits, a classical memory, and a classical processor. The classical processor is configured to: compile a quantum program into logical assembly instructions in an intermediate language; aggregate the logical assembly instructions into a plurality of logical blocks of instructions; generate a logical schedule for the quantum program based on commutativity between the logical blocks; generate a tentative physical schedule based on the logical schedule, the tentative physical schedule including a mapping of the logical assembly instructions onto the qubits of the quantum processor; aggregate instructions within the tentative physical schedule whose aggregation does not reduce parallelism, thereby generating an updated physical schedule; generate optimized control pulses for the aggregated instructions; and execute the quantum program on the quantum processor with the optimized control pulses and the updated physical schedule.
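The commutativity-based logical scheduling can be sketched with a deliberately simple model: assume blocks acting on disjoint qubits commute (a conservative subset of real commutation rules), and greedily pack each block into the latest layer it commutes with.

```python
def commutes(block_a, block_b):
    """Illustrative commutativity test: blocks on disjoint qubits commute.
    Real rules are richer (e.g., diagonal gates on shared qubits commute)."""
    return not (set(block_a["qubits"]) & set(block_b["qubits"]))

def logical_schedule(blocks):
    """Greedy layering: a block joins the latest layer if it commutes with
    every block already there; otherwise it opens a new layer. Layers model
    parallelism in the logical schedule."""
    layers = []
    for block in blocks:
        if layers and all(commutes(block, other) for other in layers[-1]):
            layers[-1].append(block)
        else:
            layers.append([block])
    return layers
```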
System, information processing method, and program for directly executing arithmetic logic on various storages
Provided are a system, an information processing method, and a program capable of improving the speed of information processing, without using intermediate code or the like, even when a plurality of heterogeneous devices are used. A system 1 includes: a source acquisition part 200 that acquires source code; an arithmetic logic identification part 202 that identifies an arithmetic logic from the source code by using a predetermined API; an arithmetic logic supply part 208 that supplies the arithmetic logic to the compiler of a processor designated on the basis of the source code; a correspondence table creation part 210 that, when an object storage 400 stores the result of that compiler's compilation as an execution image, creates a processor correspondence table associating the designated processor with the path to the execution image in the object storage 400, and stores the processor correspondence table in the object storage 400; and a correspondence relation determination part 212 that stores, in the object storage 400, a correspondence relation associating the arithmetic logic supplied by the arithmetic logic supply part 208 with the storage path of the processor correspondence table.
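The storage layout implied by the abstract can be sketched with a dictionary standing in for object storage 400. All paths, the "compiled image" string, and the `/relations` key are invented for illustration.

```python
object_storage = {}   # stand-in for object storage 400

def store_compiled(arithmetic_logic, processor):
    """Simulated pipeline: 'compile' for the designated processor, store the
    execution image, record its path in a processor correspondence table,
    and link the arithmetic logic to that table's storage path."""
    image_path = f"/images/{processor}/{arithmetic_logic}"
    object_storage[image_path] = f"image<{arithmetic_logic}@{processor}>"
    table_path = f"/tables/{arithmetic_logic}"
    table = object_storage.setdefault(table_path, {})
    table[processor] = image_path                    # correspondence table
    relations = object_storage.setdefault("/relations", {})
    relations[arithmetic_logic] = table_path         # logic -> table path
    return image_path
```

Compiling the same logic for a second processor extends the existing table rather than creating a new one, so later lookups can pick any processor with a stored image.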