G06F8/47

Proxy compilation for execution in a foreign architecture controlled by execution within a native architecture

A proxy compiler may be used within a native execution environment to enable execution of non-native instructions from a non-native execution environment as if being performed within the native execution environment. In particular, the proxy compiler coordinates creation of a native executable that is uniquely tied to a particular non-native image at the creation time of the non-native image. This allows a trusted relationship between the native executable and the non-native image, while avoiding a requirement of compilation/translation of the non-native instructions for execution directly within the native execution environment.
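The binding described above can be sketched in a few lines. This is a minimal illustration, not the patented mechanism: it assumes the unique tie is realized as a cryptographic digest computed at image-creation time and embedded in the native proxy executable, and all function and field names are invented.

```python
# Hypothetical sketch: bind a native proxy executable to a non-native image
# at the image's creation time via a content digest, so the native side can
# trust the image without compiling or translating its instructions.
import hashlib

def create_non_native_image(instructions: bytes) -> dict:
    """Package non-native instructions and fingerprint them at creation time."""
    return {"code": instructions,
            "digest": hashlib.sha256(instructions).hexdigest()}

def create_proxy_executable(image: dict) -> dict:
    """Native executable uniquely tied to one particular non-native image."""
    return {"expected_digest": image["digest"]}

def run_via_proxy(proxy: dict, image: dict) -> bool:
    """The native environment dispatches the image only if the tie still holds."""
    return hashlib.sha256(image["code"]).hexdigest() == proxy["expected_digest"]

image = create_non_native_image(b"\x01\x02\x03")
proxy = create_proxy_executable(image)
assert run_via_proxy(proxy, image)            # trusted pairing accepted
assert not run_via_proxy(proxy, {"code": b"\x01\x02\xff"})  # tampered image rejected
```

The key property the sketch demonstrates is that trust travels with the pairing, not with any recompilation of the non-native code.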

Unified intermediate representation

A system decouples the source code language from the eventual execution environment by compiling the source code language into a unified intermediate representation that conforms to a language model allowing both parallel graphical operations and parallel general-purpose computational operations. The intermediate representation may then be distributed to end-user computers, where an embedded compiler can compile the intermediate representation into an executable binary targeted for the CPUs and GPUs available in that end-user device. The intermediate representation is sufficient to define both graphics and non-graphics compute kernels and shaders. At install-time or later, the intermediate representation file may be compiled for the specific target hardware of the given end-user computing system. The CPU or other host device in the given computing system may compile the intermediate representation file to generate an instruction set architecture binary for the hardware target, such as a GPU, within the system.
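The install-time lowering step can be illustrated with a toy backend. This is a sketch under invented assumptions: the "unified IR" is a list of symbolic ops and the per-target instruction tables are made up; a real embedded compiler would emit an instruction set architecture binary rather than mnemonic strings.

```python
# Toy install-time compiler: one device-independent IR, multiple hardware
# targets. The IR ops and target ISAs below are invented for illustration.
IR = ["load a", "load b", "add", "store c"]   # unified intermediate form

BACKENDS = {
    "cpu_x86": {"load": "MOV", "add": "ADD",  "store": "MOV"},
    "gpu_sim": {"load": "LDG", "add": "FADD", "store": "STG"},
}

def compile_ir(ir, target):
    """Lower each IR op to the instruction set of the concrete device."""
    isa = BACKENDS[target]
    out = []
    for op in ir:
        mnemonic, _, operand = op.partition(" ")
        out.append((isa[mnemonic] + (" " + operand if operand else "")))
    return out

assert compile_ir(IR, "cpu_x86") == ["MOV a", "MOV b", "ADD", "MOV c"]
assert compile_ir(IR, "gpu_sim") == ["LDG a", "LDG b", "FADD", "STG c"]
```

The same distributed artifact (`IR`) serves every end-user device; only the backend table chosen at install time differs.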

SOFTWARE DEVELOPMENT DEVICE AND SOFTWARE DEVELOPMENT PROGRAM
20220229642 · 2022-07-21

A software development device enables software to be shared between controllers using semiconductor devices having different specifications. The software development device generates, from a source code, an execution code executed by a controller having one or more pads. The software development device includes an analysis module for analyzing the source code to extract a designation for the one or more pads, and a generation module for generating an execution code including a code corresponding to the extracted designation for the pads with reference to hardware of a target controller.
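The analysis/generation split can be sketched as follows. The pad-designation syntax, controller names, and pin maps are all invented for illustration; the point is only that one source compiles against different controller hardware.

```python
# Sketch: an analysis module extracts pad designations from source code,
# and a generation module resolves them against a target controller's
# hardware. Syntax, controllers, and pin numbers are hypothetical.
import re

SOURCE = """
pad LED as OUTPUT
pad BTN as INPUT
"""

def analyze(source: str):
    """Extract (name, direction) pad designations from the source code."""
    return re.findall(r"pad (\w+) as (\w+)", source)

TARGETS = {  # pin maps for two controller variants with different specs
    "ctrl_a": {"LED": 13, "BTN": 2},
    "ctrl_b": {"LED": 5,  "BTN": 7},
}

def generate(designations, target: str):
    """Emit execution code bound to the target controller's physical pads."""
    pins = TARGETS[target]
    return [f"SET_PIN({pins[name]}, {direction})"
            for name, direction in designations]

pads = analyze(SOURCE)
assert generate(pads, "ctrl_a") == ["SET_PIN(13, OUTPUT)", "SET_PIN(2, INPUT)"]
assert generate(pads, "ctrl_b") == ["SET_PIN(5, OUTPUT)", "SET_PIN(7, INPUT)"]
```

The same source is shared across controllers; only the hardware reference consulted at generation time changes.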

QUANTUM PROCESSING SYSTEM
20210398007 · 2021-12-23

A method, apparatus, system, and computer program product for quantum processing. A target quantum programming language for a process for a quantum computer is identified. A universal gate set is selected based on a computer type. Any operation possible for a particular quantum computer can be performed using the universal gate set. Instructions for the process in a source quantum programming language are sent to a source quantum language translator, which outputs a digital model representation of quantum computer components that are arranged to perform the process using the instructions. The digital model representation of the quantum computer components and the universal gate set are sent to a target quantum language translator, which outputs the instructions for operations for the process in the target quantum programming language using the digital model representation of the quantum computer components and the universal gate set for the computer type for the quantum computer.
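The re-expression into a universal gate set can be sketched with standard gate identities. This is not the patented translator; it only illustrates rewriting a circuit so that every gate comes from one chosen universal set, here {H, T, CNOT}, using the textbook identities S = T·T and Z = S·S.

```python
# Sketch: lower a source circuit into a target machine's universal gate set
# by recursively applying known decomposition identities.
DECOMPOSE = {
    "S": ["T", "T"],   # S = T * T
    "Z": ["S", "S"],   # Z = S * S (expanded recursively to four T gates)
}
UNIVERSAL = {"H", "T", "CNOT"}   # universal set chosen for the computer type

def to_universal(circuit):
    """Rewrite every gate into the universal set, recursing through rules."""
    out = []
    for gate in circuit:
        if gate in UNIVERSAL:
            out.append(gate)
        else:
            out.extend(to_universal(DECOMPOSE[gate]))
    return out

assert to_universal(["H", "Z", "CNOT"]) == ["H", "T", "T", "T", "T", "CNOT"]
```

A different computer type would simply select a different `UNIVERSAL` set and rule table, leaving the source circuit untouched.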

Dataflow graph programming environment for a heterogenous processing system

Examples herein describe techniques for generating dataflow graphs using source code for defining kernels and communication links between those kernels. In one embodiment, the graph is formed using nodes (e.g., kernels) which are communicatively coupled by edges (e.g., the communication links between the kernels). A compiler converts the source code into a bitstream and/or binary code which configures a heterogeneous processing system of a SoC to execute the graph. The compiler uses the graph expressed in source code to determine where to assign the kernels in the heterogeneous processing system. Further, the compiler can select the specific communication techniques to establish the communication links between the kernels and whether synchronization should be used in a communication link. Thus, the programmer can express the dataflow graph at a high level (using source code) without understanding how the graph is implemented using the heterogeneous hardware in the SoC.
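A compiler pass of this kind can be sketched in miniature. The processing-element names ("pl", "aie"), placements, and link-selection rule below are invented; the sketch only shows how a per-edge decision can fall out of where the endpoint kernels were assigned.

```python
# Sketch: after kernels (nodes) are placed on heterogeneous processing
# elements, each edge gets a communication technique chosen from its
# endpoints' placements. Engine names and the rule are hypothetical.
kernels = {"read": "pl", "filter": "aie", "norm": "aie", "write": "pl"}
edges = [("read", "filter"), ("filter", "norm"), ("norm", "write")]

def pick_link(src, dst):
    """Same engine type: shared memory; across engine types: a stream channel."""
    return "shared_memory" if kernels[src] == kernels[dst] else "stream"

links = {edge: pick_link(*edge) for edge in edges}
assert links[("read", "filter")] == "stream"         # crosses pl -> aie
assert links[("filter", "norm")] == "shared_memory"  # stays within aie
assert links[("norm", "write")] == "stream"          # crosses aie -> pl
```

The programmer only wrote the `edges` list; the link mechanics are the compiler's concern.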

Re-targetable interface for data exchange between heterogeneous systems and accelerator abstraction into software instructions

Embodiments herein describe techniques for interfacing a neural network application with a neural network accelerator that operate on two heterogeneous computing systems. For example, the neural network application may execute on a central processing unit (CPU) in a computing system while the neural network accelerator executes on an FPGA. As a result, when moving a software-hardware boundary between the two heterogeneous systems, changes may be made to both the neural network application (using software code) and to the accelerator (using RTL). The embodiments herein describe a software-defined approach where shared interface code is used to express both sides of the interface between the two heterogeneous systems in a single abstraction (e.g., a software class).
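The single-abstraction idea can be sketched as one declarative class that emits both sides of the interface. Everything here (the class, field list, and the C-like/RTL-like output formats) is invented for illustration; the point is that moving the boundary means editing one spec, not two implementations.

```python
# Sketch of a shared-interface abstraction: one description generates the
# software-side prototype and the hardware-side port list. All names invented.
class SharedInterface:
    def __init__(self, name, fields):
        self.name = name
        self.fields = fields              # [(field_name, bit_width), ...]

    def software_side(self):
        """C-like prototype the neural-network application would call."""
        args = ", ".join(f"uint{w}_t {n}" for n, w in self.fields)
        return f"void {self.name}({args});"

    def hardware_side(self):
        """RTL-like port declarations for the accelerator wrapper."""
        return [f"input [{w - 1}:0] {n}" for n, w in self.fields]

iface = SharedInterface("push_tensor", [("addr", 32), ("len", 16)])
assert iface.software_side() == "void push_tensor(uint32_t addr, uint16_t len);"
assert iface.hardware_side() == ["input [31:0] addr", "input [15:0] len"]
```

Adding a field to `fields` updates the CPU-side signature and the FPGA-side ports together, which is the consistency property the abstraction buys.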

USING ARTIFICIAL INTELLIGENCE TO OPTIMIZE SOFTWARE TO RUN ON HETEROGENEOUS COMPUTING RESOURCE
20220206770 · 2022-06-30

Systems and methods are described that implement a tool chain which receives original software source code, analyzes the code, and divides the code into modules that run optimally on the available heterogeneous resources. For example, the toolchain system segments original source code into code segments and determines the specialized processor resource, such as a digital signal processing (DSP) processor, Field-Programmable Gate Array (FPGA), Graphics Processing Unit (GPU), and the like, that best performs the computations of the particular code segment. A parsing engine determines the processor of the heterogeneous resources based on a set of rules and/or a trained classifier (e.g., a trained machine learning model). New code segments can be generated that can be executed on the determined type of processor. Further, the system enables application programming interfaces (APIs) that can interface the new code segment with other generated code segments and/or some portions of the original code.
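The rule-based path of such a parsing engine can be sketched with keyword heuristics. The rules and segments below are invented stand-ins; in the described system a trained classifier could replace or supplement the rule table.

```python
# Sketch of a rule-based parsing engine: each code segment is matched
# against simple heuristics to pick the processor suited to its computation
# pattern. Rules and segment text are hypothetical.
RULES = [
    ("fft",      "DSP"),   # signal-processing kernels favor a DSP
    ("matmul",   "GPU"),   # dense parallel math favors a GPU
    ("bitshift", "FPGA"),  # bit-level pipelines favor an FPGA
]

def assign(segment: str) -> str:
    """Return the processor type for a code segment, defaulting to the CPU."""
    for keyword, processor in RULES:
        if keyword in segment:
            return processor
    return "CPU"

assert assign("y = fft(x)") == "DSP"
assert assign("c = matmul(a, b)") == "GPU"
assert assign("print(result)") == "CPU"
```

Swapping `assign` for a trained model changes the decision mechanism but not the toolchain shape around it.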

Support device and support program
11360462 · 2022-06-14

The objective of the present invention is to simplify the transfer of a program that has been edited. A support device, which assists in the development of a program executed by a target device provided in factory automation (FA), carries out a transfer process for transferring the program to the target device. The program includes a control program for controlling a machine and an HMI program for processing a variable used by the control program. When the control program or the HMI program has been edited, the support device simultaneously transfers the control program and the HMI program respectively to a control device and an HMI device.
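The paired-transfer behavior can be sketched as follows. The class and version strings are invented; the sketch shows only the stated invariant, that a single transfer operation pushes both programs to their respective devices so the control program and the HMI program that processes its variables never go out of step.

```python
# Sketch: editing either program marks the pair for transfer, and one
# transfer call sends both programs to their respective target devices.
class SupportDevice:
    def __init__(self):
        self.programs = {"control": "v1", "hmi": "v1"}   # edited on the PC
        self.devices = {"control": None, "hmi": None}    # deployed copies

    def edit(self, which, new_version):
        self.programs[which] = new_version

    def transfer(self):
        """Send the control and HMI programs together, each to its device."""
        for which, code in self.programs.items():
            self.devices[which] = code

dev = SupportDevice()
dev.edit("control", "v2")     # only the control program changed...
dev.transfer()
assert dev.devices == {"control": "v2", "hmi": "v1"}   # ...both transferred
```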

Load module compiler
11354103 · 2022-06-07

The disclosed invention provides a method for executing a program compiled for a source architecture on a machine having a different target architecture, a non-transitory computer readable medium configured to store instructions for performing such a method, and a system for performing such a method.

LOADER AND RUNTIME OPERATIONS FOR HETEROGENEOUS CODE OBJECTS

Described herein are techniques for executing a heterogeneous code object executable. According to the techniques, a loader identifies a first memory appropriate for loading a first architecture-specific portion of the heterogeneous code object executable, wherein the first architecture-specific portion includes instructions for a first architecture, identifies a second memory appropriate for loading a second architecture-specific portion of the heterogeneous code object executable, wherein the second architecture-specific portion includes instructions for a second architecture that is different than the first architecture, loads the first architecture-specific portion into the first memory and the second architecture-specific portion into the second memory, and performs relocations on the first architecture-specific portion and on the second architecture-specific portion.
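The loader flow above can be sketched in miniature: pick a memory region per architecture portion, load it, then relocate. The memory names, executable layout, and the simplification of relocation to rebasing absolute references are all invented for illustration.

```python
# Sketch of the described loader: choose a memory appropriate to each
# architecture-specific portion, load it, and perform relocations by
# patching recorded references onto the load base. Names are hypothetical.
MEMORIES = {"x86_64": "system_ram", "gfx_gpu": "device_local"}

def load(executable):
    loaded = {}
    for portion in executable["portions"]:
        region = MEMORIES[portion["arch"]]       # memory appropriate to the arch
        base = portion["load_base"]
        # relocation: rebase every recorded absolute reference onto `base`
        patched = [ref + base for ref in portion["relocs"]]
        loaded[portion["arch"]] = {"memory": region, "patched": patched}
    return loaded

exe = {"portions": [
    {"arch": "x86_64",  "load_base": 0x1000, "relocs": [0x10, 0x20]},
    {"arch": "gfx_gpu", "load_base": 0x8000, "relocs": [0x4]},
]}
out = load(exe)
assert out["x86_64"] == {"memory": "system_ram", "patched": [0x1010, 0x1020]}
assert out["gfx_gpu"] == {"memory": "device_local", "patched": [0x8004]}
```

A real loader deals in ELF-style relocation types per architecture; the sketch keeps only the shape of the decision: memory selection, loading, then per-portion relocation.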