Patent classifications
G06F8/4452
Information processing device and method for assigning task
A computer calculates memory access rates for respective tasks on the basis of hardware monitor information obtained by monitoring the operating states of hardware during execution of an application program. The tasks correspond to respective syntax units specified in the application program. On the basis of the calculated memory access rates, the computer assigns a first task to a socket in a processor in response to an instruction for executing the first task.
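The placement policy described in this abstract can be sketched as follows; the rate formula, socket model, and all names are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of memory-access-aware task placement.

def memory_access_rate(mem_accesses, instructions):
    """Rate for one task, derived from hardware-monitor counters
    (here simply memory accesses per instruction)."""
    return mem_accesses / instructions

def assign_task(task_rate, socket_loads):
    """Place the task on the socket with the lowest aggregate
    memory access rate, then record the new load.

    socket_loads: mutable list of per-socket aggregate rates."""
    target = min(range(len(socket_loads)), key=lambda s: socket_loads[s])
    socket_loads[target] += task_rate
    return target
```

A balancing policy like this spreads memory-bound tasks across sockets so no single memory controller saturates.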
SYSTEMS AND METHODS FOR AUTOMATICALLY MODIFYING PIPELINED ENTERPRISE SOFTWARE
Systems and methods for version control of pipelined enterprise software are disclosed. Exemplary implementations may: store information for executable code of software applications that are installed and executable by users; receive first user input from a first user that represents selection by the first user of a first software pipeline for execution; receive second user input from a second user that represents a second selection by the second user of a second software pipeline for execution, wherein the second software pipeline includes different versions of software applications that are included in the first software pipeline; facilitate execution of the first software pipeline for the first user; and facilitate execution of the second software pipeline for the second user at the same time as the execution of the first software pipeline for the first user.
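A minimal sketch of the idea of per-user pipeline version selection; the registry layout, pipeline names, and application versions are all invented for illustration:

```python
# Two pipelines that share applications but pin different versions.
PIPELINES = {
    "pipeline-A": {"etl": "1.0", "report": "2.1"},
    "pipeline-B": {"etl": "1.1", "report": "2.1"},  # newer etl version
}

def select_pipeline(user_choices, user, pipeline_id):
    """Record which pipeline the user selected for execution."""
    user_choices[user] = pipeline_id

def execute(user_choices, user):
    """Resolve the user's selection to concrete application versions;
    because resolution is per user, two users can run different
    versions of the same application concurrently."""
    return PIPELINES[user_choices[user]]
```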
SYSTEM AND METHOD FOR DYNAMIC LINEAGE TRACKING, RECONSTRUCTION, AND LIFECYCLE MANAGEMENT
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML) in managing a flow of data (dataflow, DF) and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system can provide data governance functionality such as, for example, provenance (where particular data came from), lineage (how the data was acquired/processed), security (who was responsible for the data), classification (what the data is about), impact (how impactful the data is to a business), retention (how long the data should live), and validity (whether the data should be excluded/included for analysis/processing), for each slice of data pertinent to a particular snapshot in time; this can then be used in making lifecycle decisions and dataflow recommendations.
Program optimization method, program optimization program, and program optimization apparatus
A program optimization method, executed by an arithmetic processing device, includes collecting profile information, including a runtime analysis result, by causing a computer to execute an original program to be optimized; calculating a calculation wait time based on the profile information; and, when the calculation wait time is longer than a first threshold and an SIMD instruction ratio in a loop in the original program is lower than a second threshold, generating a tuned-up program by inserting an SIMD operation control line that performs an SIMD operation for an instruction in an IF statement in the loop.
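The tuning decision can be condensed to a sketch like the one below; the threshold values and the directive text are placeholders, not the patent's:

```python
# Hedged sketch of the profile-guided SIMD tuning decision.

def tune(loop_src, calc_wait, simd_ratio, t_wait=0.3, t_simd=0.5):
    """Insert an SIMD operation control line ahead of the loop when the
    profile shows a long calculation wait (> t_wait) and a low SIMD
    instruction ratio (< t_simd); otherwise leave the source alone."""
    if calc_wait > t_wait and simd_ratio < t_simd:
        return "!OCL SIMD\n" + loop_src  # placeholder directive line
    return loop_src
```

The point of the two-threshold guard is to pay the directive's cost only where the profile says vectorization was both missing and worth having.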
Method and system for converting a single-threaded software program into an application-specific supercomputer
The invention comprises (i) a compilation method for automatically converting a single-threaded software program into an application-specific supercomputer, and (ii) the supercomputer system structure generated as a result of applying this method. The compilation method comprises: (a) Converting an arbitrary code fragment from the application into customized hardware whose execution is functionally equivalent to the software execution of the code fragment; and (b) Generating interfaces on the hardware and software parts of the application, which (i) Perform a software-to-hardware program state transfer at the entries of the code fragment; (ii) Perform a hardware-to-software program state transfer at the exits of the code fragment; and (iii) Maintain memory coherence between the software and hardware memories. If the resulting hardware design is large, it is divided into partitions such that each partition can fit into a single chip. Then, a single union chip is created which can realize any of the partitions.
BOTTLENECK DETECTION DEVICE AND COMPUTER READABLE MEDIUM
A target apparatus (20) includes a bottleneck term calculation unit (22) and a running function recording scheduler (24). The bottleneck term calculation unit (22) acquires a performance graph, generated for a run of a single program or of a plurality of programs, that indicates the correspondence between elapsed time and a load quantity. Using the performance graph, the bottleneck term calculation unit (22) calculates a bottleneck term, i.e., a period during which the load quantity remains in a limit status. During the next run of the single program or the plurality of programs whose earlier run produced the performance graph, the running function recording scheduler (24) uses a running function recording module (23) to record the functions that run during the bottleneck term.
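The bottleneck-term calculation amounts to finding maximal runs in the time/load series where the load stays at its limit; a simplified sketch, with the sample format and limit value as assumptions:

```python
# Illustrative bottleneck-term extraction from a performance graph.

def bottleneck_terms(samples, limit):
    """samples: list of (time, load) points in time order.
    Returns [(start, end), ...] for each maximal run with load >= limit."""
    terms, start = [], None
    for t, load in samples:
        if load >= limit and start is None:
            start = t                      # a bottleneck term begins
        elif load < limit and start is not None:
            terms.append((start, t))       # the term ends here
            start = None
    if start is not None:                  # still at the limit at the end
        terms.append((start, samples[-1][0]))
    return terms
```

A scheduler could then arm function recording only inside the returned intervals on the next run.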
AUTOMATED GENERATION OF MACHINE LEARNING MODEL PIPELINE COMBINATIONS
A computer receives a dataset and a set of machine learning (ML) pipeline components to generate a preferred ensemble of ML pipelines. An Automated Machine Learning (AutoML) tool is applied to generate a plurality of ML pipelines. A performance value is determined for each pipeline, and a set of candidate pipelines is identified based on the performance values. The candidate pipelines are combined into candidate ensembles. A database provides historic performance data for a plurality of historic ensembles applied to a plurality of historic datasets. A metamodel is trained to identify patterns within the historic performance data and applies the patterns to generate predicted ensemble performance values for the candidate ensembles. A preferred ensemble is selected based on the predicted performance value rankings.
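The candidate-ensemble step can be sketched as below; the stand-in "metamodel" here is just a callable scoring function supplied by the caller, not the trained model the abstract describes:

```python
# Illustrative candidate-ensemble generation and selection.
from itertools import combinations

def candidate_ensembles(pipeline_scores, k=3, size=2):
    """Keep the top-k pipelines by performance value, then form all
    size-member ensembles from them."""
    top = sorted(pipeline_scores, key=pipeline_scores.get, reverse=True)[:k]
    return list(combinations(top, size))

def select_ensemble(ensembles, predict):
    """predict: callable mapping an ensemble to a predicted score
    (standing in for the trained metamodel)."""
    return max(ensembles, key=predict)
```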
Automatic compiler dataflow optimization to enable pipelining of loops with local storage requirements
Systems, apparatuses and methods may provide for technology that detects one or more local variables in source code, wherein the local variable(s) lack dependencies across iterations of a loop in the source code, automatically generates pipeline execution code for the local variable(s), and incorporates the pipeline execution code into an output of a compiler. In one example, the pipeline execution code includes an initialization of a pool of buffer storage for the local variable(s).
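The buffer-pool idea can be illustrated in a small sketch: giving each in-flight iteration its own copy of a loop-local buffer removes the false dependency that would otherwise serialize overlapped iterations. The pipeline depth, buffer size, and computation are invented for illustration:

```python
# Sketch of a buffer pool enabling loop pipelining for a local variable.

def run_pipelined(inputs, depth=2):
    # Initialization of a pool of buffer storage for the local variable:
    # one private buffer per in-flight iteration.
    pool = [[0] * 4 for _ in range(depth)]
    results = []
    for i, x in enumerate(inputs):
        buf = pool[i % depth]          # iteration i uses its own buffer
        for j in range(4):             # stage 1: fill the local buffer
            buf[j] = x + j
        results.append(sum(buf))       # stage 2: consume the buffer
    return results
```

Because no two overlapped iterations share a buffer, a pipelining compiler is free to run stage 1 of iteration i+1 while stage 2 of iteration i is still reading its own copy.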
Method and apparatus for predicting and scheduling copy instruction for software pipelined loops
A method for scheduling instructions for execution on a computer system includes scanning a plurality of loop instructions that are modulo scheduled to identify a first instruction and a second instruction that both utilize a register of the computer system upon execution of the plurality of instructions. The loop has a first initiation interval. The first instruction defines a first value of the register in a first iteration of the loop, and the second instruction redefines the value of the register to a second value in a subsequent iteration of the loop, prior to a use of the first value in the first iteration of the loop. A copy instruction is inserted in the loop instructions to copy the first value prior to execution of the second instruction. A schedule is determined after the insertion of the copy instruction, giving a second initiation interval.
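A much-simplified sketch of the copy-insertion idea: in a modulo schedule with initiation interval `ii`, a value defined at cycle `c` is clobbered by the overlapped next iteration's redefinition at cycle `c + ii`, so a use after that point needs the value copied to a temporary first. The instruction representation and placement rule are illustrative, and redirecting the late use to the copy is omitted:

```python
# Illustrative copy insertion for a modulo-scheduled register lifetime.

def insert_copies(schedule, ii):
    """schedule: list of (cycle, op, reg) with op in {"def", "use"}.
    ii: initiation interval of the modulo schedule.
    Inserts a copy just before the overlapped redefinition whenever a
    use of the register falls after that redefinition."""
    out = []
    for cycle, op, reg in schedule:
        if op == "def":
            clobber = cycle + ii       # next iteration redefines here
            needs_copy = any(c2 > clobber
                             for c2, op2, r2 in schedule
                             if op2 == "use" and r2 == reg)
            if needs_copy:
                out.append((clobber - 1, "copy", reg))
        out.append((cycle, op, reg))
    return sorted(out)
```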
OFF-LOAD SERVER, SOFTWARE OPTIMAL PLACEMENT METHOD, AND PROGRAM
[Problem] An application is adapted to an environment and operated with high performance.
[Solution] An off-load server repeatedly runs an off-load pattern creation section 115, configured to perform code conversion according to a deployment destination environment, and a verification environment performance measurement section 119, configured to compile the code-converted application, deploy the compiled application to a verification machine 14, and measure the performance of the application when processing is off-loaded to the verification machine 14. The server further includes an actual environment performance measurement test execution section 123, configured to automatically execute, after an execution file has been deployed, performance test items extracted by an actual environment performance measurement test extraction section 122, by using an operation device, and a control section 11, configured to perform an environment adaptation process that executes one or more of a code conversion step, a resource amount setting step, a deployment place selection step, a performance measurement step, and a performance measurement test step.
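The repeated convert-deploy-measure loop reduces to a search over off-load patterns; a condensed sketch, where the pattern list and the caller-supplied `measure` function (standing in for compile, deploy, and timed execution on the verification machine) are placeholders:

```python
# Condensed sketch of the measure-and-repeat adaptation loop.

def adapt(patterns, measure):
    """Try each off-load pattern on the verification environment via
    the caller-supplied measure function and keep the fastest."""
    best, best_time = None, float("inf")
    for pattern in patterns:
        elapsed = measure(pattern)  # compile, deploy, run, time it
        if elapsed < best_time:
            best, best_time = pattern, elapsed
    return best, best_time
```

The winning pattern would then go through the actual-environment performance tests before final deployment.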