Patent classifications
G06F8/452
SYSTEM AND METHOD FOR COMPILING HIGH-LEVEL LANGUAGE CODE INTO A SCRIPT EXECUTABLE ON A BLOCKCHAIN PLATFORM
A computer-implemented method (and corresponding system) is provided that enables or facilitates the execution of a portion of source code, written in a high-level language (HLL), on a blockchain platform. The method and system can include a blockchain compiler arranged to convert a portion of high-level source code into a form that can be used with a blockchain platform. This may be the Bitcoin blockchain or an alternative. The method can include: receiving the portion of source code as input; and generating an output script comprising a plurality of op_codes. The op_codes are a subset of the op_codes that are native to a functionally restricted blockchain scripting language. The output script is arranged and/or generated such that, when executed, it provides, at least in part, the functionality specified in the source code. The blockchain scripting language is restricted such that it does not natively support complex control-flow constructs or recursion via jump-based loops or other recursive programming constructs. The step of generating the output script may comprise unrolling at least one looping construct provided in the source code. The method may further comprise providing or using an interpreter or virtual machine arranged to convert the output script into a form that is executable on the blockchain platform.
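The unrolling step can be illustrated with a minimal sketch, assuming a loop whose trip count is known at compile time: the loop body is emitted as straight-line script repeated once per iteration, so no jump instruction is needed. The op_code names and the tiny stack-machine evaluator below are illustrative stand-ins, not the actual compiler.

```python
def unroll_loop(body_ops, trip_count):
    """Replace a loop with a compile-time-known trip count by
    trip_count literal copies of its body (no jumps required)."""
    return [op for _ in range(trip_count) for op in body_ops]

def run(script, stack):
    """Minimal stack-machine evaluator for two illustrative op_codes."""
    for op in script:
        if op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_1ADD":
            stack.append(stack.pop() + 1)
    return stack

# Source-level intent "for i in range(4): x += 1" as straight-line script:
script = unroll_loop(["OP_1ADD"], 4)
print(script)               # ['OP_1ADD', 'OP_1ADD', 'OP_1ADD', 'OP_1ADD']
print(run(script, [10]))    # [14]
```

Because every iteration is materialized, the output script's length grows with the trip count, which is the usual trade-off when targeting a language without jump-based control flow.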
PROCESSING DEVICE FOR A PARALLEL COMPUTING SYSTEM AND METHOD FOR PERFORMING COLLECTIVE OPERATIONS
The disclosure relates to a parallel computing system comprising a plurality of processing devices for performing an application. Each processing device is configured to obtain a local result, wherein a global result of a collective operation depends on the local results of the plurality of processing devices. Each processing device is further configured to distribute its local result to one or more of the other processing devices in response to determining that: the global result is based only on the local result of the processing device; a likelihood that the global result is based only on the local result of the processing device is greater than a likelihood threshold value; or the global result is based only on the local result of the processing device and a further local result of a further processing device of the plurality of processing devices.
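A much-simplified sketch of the idea, assuming a global-maximum collective: a device distributes its local result only when that result could still determine the global result (here modelled by comparison against a bound assumed to be agreed among the devices in advance). The device names, values, and bound are illustrative.

```python
def should_distribute(local_result, agreed_bound):
    """Distribute only if the local result may determine the global max."""
    return local_result >= agreed_bound

# local results of three hypothetical processing devices
local_results = {"dev0": 3.1, "dev1": 9.7, "dev2": 0.4}
agreed_bound = 5.0   # assumed known to every device beforehand

# only devices whose result can matter send a message
messages = {d: v for d, v in local_results.items()
            if should_distribute(v, agreed_bound)}
global_result = max(messages.values())   # computable by every receiver
print(messages, global_result)           # {'dev1': 9.7} 9.7
```

Devices whose local results cannot affect the global result stay silent, which is the communication saving the abstract is after.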
Computer Implemented Program Specialization
A computerized technique for program simplification and specialization combines a partial interpretation of the program based on a subset of program functions to obtain variable states with concrete values at a program “neck.” These concrete values are then propagated as part of an optimization transformation that simplifies the program based on these constant values, for example, by eliminating branches that are never taken based on the constant values.
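The branch-elimination step can be sketched as follows, assuming the partial interpretation up to the "neck" has already yielded a concrete value for a configuration variable. The toy program, the variable name `mode`, and the single supported branch shape are illustrative simplifications.

```python
import ast

SOURCE = """
if mode == "fast":
    result = x + 1
else:
    result = heavy_fallback(x)
"""

def specialize(source, constants):
    """Fold top-level branches whose condition is decided by the
    concrete values captured at the program neck."""
    tree = ast.parse(source)
    new_body = []
    for node in tree.body:
        if isinstance(node, ast.If):
            test = node.test
            # handle the one illustrative shape:  <name> == <constant>
            if (isinstance(test, ast.Compare) and isinstance(test.left, ast.Name)
                    and test.left.id in constants
                    and isinstance(test.ops[0], ast.Eq)
                    and isinstance(test.comparators[0], ast.Constant)):
                taken = constants[test.left.id] == test.comparators[0].value
                new_body.extend(node.body if taken else node.orelse)
                continue
        new_body.append(node)
    tree.body = new_body
    return ast.unparse(tree)

# partial interpretation concretized: mode == "fast"
print(specialize(SOURCE, {"mode": "fast"}))   # result = x + 1
```

The never-taken `else` branch, and with it the call to `heavy_fallback`, disappears from the specialized program.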
COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM AND INFORMATION PROCESSING METHOD
A recording medium stores a program for generating a source code that indicates processing on a sparse matrix and for causing a computer to execute a process including: acquiring second codes by optimizing, with a convex polyhedral model, a first code in which loop processing on a matrix is written in a static control part format; converting the second codes into source code candidates, based on sparse matrix information that indicates a variable that represents a non-zero element of the sparse matrix, expression information that indicates an operation expression that corresponds to a function included in the second codes, and data type information that indicates a type to be used for the variable; and selecting the source code from among the source code candidates in accordance with evaluation of processing performance for the sparse matrix in a case where each of the source code candidates is used.
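The final selection step can be sketched as follows: two hand-written stand-ins for compiler-generated source code candidates (a CSR traversal using the sparse-matrix information, and the unspecialized dense loop nest) are evaluated on the actual data and the better-performing one is selected. The candidate bodies and the tiny matrix are illustrative.

```python
import timeit

def spmv_csr(indptr, indices, data, x):
    """Candidate 1: CSR traversal touching non-zero elements only."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def spmv_dense(dense, x):
    """Candidate 2: dense loop nest (the pre-specialization shape)."""
    return [sum(a * b for a, b in zip(row, x)) for row in dense]

# the same matrix [[0, 0, 5], [1, 2, 0]] in both representations
indptr, indices, data = [0, 1, 3], [2, 0, 1], [5.0, 1.0, 2.0]
dense = [[0.0, 0.0, 5.0], [1.0, 2.0, 0.0]]
x = [1.0, 2.0, 3.0]

candidates = {
    "csr":   lambda: spmv_csr(indptr, indices, data, x),
    "dense": lambda: spmv_dense(dense, x),
}
# evaluate processing performance for each candidate, pick the fastest
timings = {name: timeit.timeit(fn, number=1000) for name, fn in candidates.items()}
best = min(timings, key=timings.get)
print(best, candidates[best]())
```

Both candidates compute the same product; only the measured performance on the given sparse matrix decides which source code is emitted.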
TECHNIQUES FOR PARALLEL EXECUTION
Apparatuses, systems, and techniques to identify instructions for advanced execution. In at least one embodiment, a processor performs one or more instructions that have been identified by a compiler to be speculatively performed in parallel.
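A rough, purely illustrative sketch of the runtime side: iterations that a compiler has marked as independent are performed speculatively in parallel, and a cheap post-check validates the speculation (here, that no two iterations wrote the same location) before the results are committed. All names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def iteration(i):
    """One compiler-marked iteration: writes location i with i*i."""
    return (i, i * i)                     # (location, value)

def run_speculative(n):
    # speculatively perform the marked iterations in parallel
    with ThreadPoolExecutor() as pool:
        writes = list(pool.map(iteration, range(n)))
    # validate the speculation: every write hit a distinct location
    locations = [loc for loc, _ in writes]
    assert len(set(locations)) == len(locations)
    # commit the validated results
    out = [0] * n
    for loc, val in writes:
        out[loc] = val
    return out

print(run_speculative(5))   # [0, 1, 4, 9, 16]
```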
Optimizing memory bandwidth in spatial architectures
A technique to facilitate efficient, parallelized execution of a program using a multiprocessor system having two or more processors includes detecting and, optionally, minimizing broadcast data communication between a shared memory and two or more processors. To this end, the broadcast space of a data structure is generated as an intersection of the reuse space of the data structure and the placement space of a statement accessing the data structure. A non-empty broadcast space implies broadcast data communication that can be minimized by rescheduling the statement accessing the data structure.
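A deliberately simplified sketch of the intersection test: the real technique works with linear subspaces of the iteration space, but modelling the reuse space and placement space as sets of loop dimensions is enough to show the rule. A non-empty intersection means the same data reaches several processors; rescheduling the statement can empty it.

```python
def broadcast_space(reuse_dims, placement_dims):
    """Broadcast space of a data structure at a statement: the
    intersection of its reuse space and the statement's placement space."""
    return reuse_dims & placement_dims

# matmul C[i][j] += A[i][k] * B[k][j]: the access B[k][j] does not
# depend on i, so B is reused along dimension i.
reuse_of_B = {"i"}

# iterations distributed over i -> B rows must be broadcast
print(broadcast_space(reuse_of_B, {"i"}))   # {'i'}

# rescheduled placement over j -> broadcast space is empty
print(broadcast_space(reuse_of_B, {"j"}))   # set()
```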
PARTIAL DATA TYPE PROMOTION TO EXPLOIT EFFICIENT VECTORIZATION IN MICROPROCESSORS
Aspects of the invention include a compiler detecting an expression in a loop that includes elements of mixed data types. The compiler then promotes elements of a sub-expression of the expression to a same intermediate data type. The compiler then calculates the sub-expression using the elements of the same intermediate data type.
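The type decision alone can be sketched as follows, assuming an illustrative width lattice: only the mixed sub-expression is promoted to a common intermediate type, rather than promoting every element to the widest type in the whole expression, which keeps more lanes per vector register for the sub-expression.

```python
# illustrative integer-type lattice (bit widths)
WIDTH = {"int8": 8, "int16": 16, "int32": 32, "int64": 64}

def promote(*types):
    """Smallest common intermediate type among the given element types."""
    return max(types, key=WIDTH.get)

# expression: c[i] = a16[i] * b8[i] + d32[i]
# partial promotion: the sub-expression a16[i] * b8[i] is computed in
# int16 (not int32), so it vectorizes with twice as many lanes.
sub_type = promote("int16", "int8")
print(sub_type)                       # int16
# only the final addition is carried out in the wider type
full_type = promote(sub_type, "int32")
print(full_type)                      # int32
```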
OFFLOAD SERVER, OFFLOAD CONTROL METHOD, AND OFFLOAD PROGRAM
An offload server includes: an application code analysis section configured to analyze source code of an application; a data transfer designation section configured to, on the basis of a result of the code analysis, designate GPU processing for a loop statement by using at least one selected from the group of OpenACC directive clauses consisting of a ‘kernels’ directive clause, a ‘parallel loop’ directive clause, and a ‘parallel loop vector’ directive clause; and a parallel processing designation section configured to identify loop statements in the application and, for each of the identified loop statements, specify a statement specifying application of parallel processing by the GPU and perform compilation.
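The designation step can be sketched as annotating each loop statement in the C source with one of the three OpenACC directive clauses named above, producing a candidate to hand to the compiler. The regex-based "code analysis" below is a deliberate simplification of what a real analysis section would do.

```python
import re

DIRECTIVES = ["kernels", "parallel loop", "parallel loop vector"]

def designate(source, directive):
    """Insert an OpenACC pragma line before every for-loop."""
    if directive not in DIRECTIVES:
        raise ValueError(directive)
    pragma = f"#pragma acc {directive}\n"
    return re.sub(r"(?m)^\s*for\b", lambda m: pragma + m.group(0), source)

src = "for (int i = 0; i < n; i++) y[i] += a * x[i];\n"
print(designate(src, "kernels"))
# #pragma acc kernels
# for (int i = 0; i < n; i++) y[i] += a * x[i];
```

In the abstract's scheme, each directive choice yields a different compiled variant, and measurement then decides which designation to keep.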
OFFLOAD SERVER, OFFLOAD CONTROL METHOD, AND OFFLOAD PROGRAM
An offload server (1) includes: an application code analysis section (112) configured to analyze source code of an application; a data transfer designation section (113) configured to, on the basis of a result of the code analysis, designate a data transfer to be collectively performed, before starting GPU processing and after finishing the GPU processing, on those variables needing transfer between a CPU and a GPU which are neither mutually referenced nor mutually updated between CPU processing and the GPU processing and which are only to be returned to the CPU as a result of the GPU processing; and a parallel processing designation section (114) configured to identify loop statements in the application and, for each of the identified loop statements, specify a statement specifying application of parallel processing by the GPU and perform compilation.
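The transfer-designation rule can be sketched as a classification over the analysis result: a variable that the CPU neither references nor updates between GPU regions, and that only needs to be returned to the CPU afterwards, is eligible for a single collective transfer instead of per-loop transfers. The variable names and attribute flags below are illustrative.

```python
def batchable_copyout(vars_info):
    """Select variables eligible for one collective GPU-to-CPU transfer."""
    return [v for v, info in vars_info.items()
            if not info["cpu_refs_between"]
            and not info["cpu_updates_between"]
            and info["returned_to_cpu"]]

vars_info = {
    # only read back at the end -> one collective copyout suffices
    "result": {"cpu_refs_between": False, "cpu_updates_between": False,
               "returned_to_cpu": True},
    # referenced by the CPU between GPU loops -> must move per loop
    "shared": {"cpu_refs_between": True, "cpu_updates_between": False,
               "returned_to_cpu": True},
}
print(batchable_copyout(vars_info))   # ['result']
```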
SYSTEMS AND METHODS FOR DISTRIBUTED DECISION-MAKING AND SCHEDULING
An embodiment of the disclosed invention is a computer-implemented method for performing automated decision-making, which includes operating one or more loop(s) of sequential steps that receive data from the environment or from another source, interpret the data, decide on a course of action, and then execute the course of action. During the operation of the one or more loop(s), the method includes a self-monitor function that detects and corrects errors. Another embodiment is a loop architecture for performing automated decision-making that includes an API, three support modules, a receive module, an interpret module, a decide module, an execute module, and an orchestration layer. Another embodiment is a method for implementing a loop architecture to perform a task, wherein the method includes implementing handlers to perform the receive, interpret, decide, and execute functions, and implementing a topology definition.
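One pass of such a loop can be sketched as follows: receive, interpret, decide, and execute handlers run in sequence, wrapped by a self-monitor that detects a handler error and corrects it (here by substituting a safe default action). The handler names follow the abstract; their bodies and the error-correction policy are illustrative.

```python
def receive(source):       return source.pop(0)
def interpret(raw):        return {"temp": float(raw)}
def decide(state):         return "cool" if state["temp"] > 30 else "idle"
def execute(action, log):  log.append(action)

def run_loop(source, log):
    while source:
        try:
            execute(decide(interpret(receive(source))), log)
        except (ValueError, TypeError):   # self-monitor: detect the error...
            execute("idle", log)          # ...and correct with a safe default

log = []
run_loop(["35.0", "garbage", "21.5"], log)
print(log)   # ['cool', 'idle', 'idle']
```

The malformed reading trips the interpret handler, the self-monitor absorbs the failure, and the loop keeps operating on subsequent data.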