Patent classifications
G06F8/4435
Eliminating dead stores
Dataflow optimization by dead store elimination focusing on logically dividing a contiguous storage area into different portions by usage, allowing a different number and type of dataflow and dead-store techniques on each portion. A first storage portion, containing the storage for control-flow-related metadata, is split from a remaining storage portion. Liveness analysis is executed on the first storage portion using bitvectors, with each bit representing four bytes. The remaining storage portion, containing the temporary storage for computational values, is processed using a deadness-range-based dataflow analysis. IN and OUT sets for each basic block are generated from the blocks' GEN and KILL sets by a backwards intersection dataflow analysis. Stores that write to the set of dead ranges in the IN sets of blocks are eliminated as dead stores.
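The backwards intersection analysis described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the patented implementation: dead ranges are modeled as sets of byte offsets, the block names and GEN/KILL contents are invented for the example, and OUT at each block is the intersection of its successors' IN sets.

```python
def deadness_analysis(blocks, succs, all_locs):
    """Backwards intersection dataflow over dead byte ranges.

    blocks: {name: (gen, kill)} where gen = offsets written before any
    read within the block, kill = offsets read before any write.
    succs: {name: [successor names]}.
    Returns the IN set per block: offsets dead on entry to that block.
    """
    IN = {b: set(all_locs) for b in blocks}  # optimistic initialization
    changed = True
    while changed:
        changed = False
        for b, (gen, kill) in blocks.items():
            # OUT[b] = intersection of successors' IN; empty at exit blocks
            outs = [IN[s] for s in succs.get(b, [])]
            out = set.intersection(*outs) if outs else set()
            new_in = gen | (out - kill)
            if new_in != IN[b]:
                IN[b] = new_in
                changed = True
    return IN
```

With blocks `{"A": (set(), {0}), "B": ({4}, set()), "C": ({0, 4}, set())}` and `succs = {"A": ["B", "C"]}`, offset 4 is overwritten on every path out of A, so a store to offset 4 at the end of A lands in A's dead OUT ranges and can be eliminated.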
Editor for generating computational graphs
Techniques for generating a dataflow graph include generating a first dataflow graph with a plurality of first nodes representing first computer operations in processing data, with at least one of the first computer operations being a declarative operation that specifies one or more characteristics of one or more results of processing of data, and transforming the first dataflow graph into a second dataflow graph for processing data in accordance with the first computer operations, the second dataflow graph including a plurality of second nodes representing second computer operations, with at least one of the second nodes representing one or more imperative operations that implement the logic specified by the declarative operation, where the one or more imperative operations are unrepresented by the first nodes in the first dataflow graph.
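The declarative-to-imperative transformation can be illustrated with a toy lowering pass. This sketch is not the patented editor: the graph is flattened to an ordered node list, and the expansion table (e.g. a declarative `distinct` lowered to hypothetical `hash_partition` and `dedup_partition` operations) is invented for the example.

```python
def lower_graph(nodes, expansions):
    """Transform a declarative dataflow graph into an imperative one.

    nodes: ordered list of operation names (first dataflow graph).
    expansions: declarative name -> list of imperative operations that
    implement its logic; names absent from the table pass through
    unchanged, yielding the second dataflow graph.
    """
    lowered = []
    for op in nodes:
        lowered.extend(expansions.get(op, [op]))
    return lowered
```

The imperative nodes produced by the expansion are, as the abstract puts it, unrepresented in the first graph: they appear only after lowering.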
Method and apparatus for retaining optimal width vector operations in arbitrary/flexible vector width architecture
A method and apparatus to optimize a list of vector instructions using dynamic programming, in particular memoization, by generating a table containing instruction subvectors having individual (parts), contiguous (superparts) and repeated (broadcasts) lanes. Because the instructions in the table are subvectors selected to have individual, contiguous and repeated lanes in the registers, compiler optimizations can be enhanced. Introduction of such dynamic programming allows for speculative lane optimizations, as well as improved analysis-guided optimizations, either of which can be performed alone or in combination with other optimizations, whether or not they make use of dynamic programming.
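The memoization table over lane subranges can be sketched as follows. This is an illustrative stand-in, not the patented method: lanes are plain values, and each contiguous subrange is classified as a single lane (part), a run of identical lanes (broadcast), or a contiguous group (superpart), with results memoized so repeated subvectors are analyzed once.

```python
from functools import lru_cache

def classify_lanes(lanes):
    """Build a memo table classifying every contiguous lane subrange."""
    lanes = tuple(lanes)

    @lru_cache(maxsize=None)  # the dynamic-programming memoization
    def classify(lo, hi):
        sub = lanes[lo:hi]
        if hi - lo == 1:
            return ("part", sub[0])
        if len(set(sub)) == 1:          # all lanes repeat one value
            return ("broadcast", sub[0], hi - lo)
        return ("superpart", classify(lo, lo + 1), classify(lo + 1, hi))

    return {(lo, hi): classify(lo, hi)
            for lo in range(len(lanes))
            for hi in range(lo + 1, len(lanes) + 1)}
```

A subsequent optimization pass could consult this table to, for instance, replace a subvector classified as a broadcast with a single splat instruction.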
METHOD FOR GENERATING SOURCE CODE
A method for generating source code from one or more blocks of a block diagram that comprises at least two non-virtual blocks and at least one signal link between two non-virtual blocks includes: transforming the block diagram into an intermediate representation, wherein transforming the block diagram into the intermediate representation comprises transforming a first block having access to a multi-component variable; successively optimizing the intermediate representation; and translating the optimized intermediate representation into source code. Transforming the first block comprises: testing whether a block pair made up of the first block and an adjacent block comprises an equal assignment; and removing any assignments in which a reference to the same variable exists on both sides.
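The final removal step — dropping assignments where the same variable appears on both sides — can be sketched directly. A minimal illustration, assuming the intermediate representation exposes assignments as (lhs, rhs) variable pairs:

```python
def remove_identity_assignments(assignments):
    """Drop assignments that reference the same variable on both
    sides (e.g. the copy `out := out` left over when a block pair
    is found to contain an equal assignment)."""
    return [(lhs, rhs) for lhs, rhs in assignments if lhs != rhs]
```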
SYSTEM AND METHOD TO COMPARE MODULES FOR THE COMMON CODE, REMOVE THE REDUNDANCY AND RUN THE UNIQUE WORKFLOWS
One example method includes receiving a code request, parsing code associated with the code request, storing, in a staging database, a portion of the code, traversing a codebase to identify any code in the codebase that matches the portion of the code, and when code is found in the codebase that matches the portion of the code, incrementing a green counter and pushing the portion of the code to a redundant code bin, and when no code is found in the codebase that matches the portion of the code, incrementing a red counter and updating the codebase to include the portion of the code.
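The match-and-count flow can be illustrated with a content-hash lookup. This sketch is not the patented system: matching by SHA-256 digest is an assumption (the patent only says the codebase is traversed for matching code), and the counter and bin structures are invented for the example.

```python
import hashlib

def triage_code(portion, codebase, counters, redundant_bin):
    """Match a staged code portion against the codebase.

    On a match: increment the green counter and push the portion to
    the redundant-code bin. On a miss: increment the red counter and
    update the codebase to include the portion.
    """
    digest = hashlib.sha256(portion.encode()).hexdigest()
    if digest in codebase:
        counters["green"] += 1
        redundant_bin.append(portion)
    else:
        counters["red"] += 1
        codebase[digest] = portion
```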
Function-level redundancy detection and optimization
The present disclosure provides computer-executable tools which, implemented in a programming language library, may enable source code written using the library to be compiled to object code instrumented for function-level dynamic analysis of memory allocation functions. By tracking heap reads and writes of each target function, symbols may be mapped to memory addresses allocated therefor, and values of input arguments of functions may be mapped to values of output returns. Based on this information, pure functions which embody redundant computations across multiple executions thereof may be identified, while non-pure functions may be screened out. Among pure functions, candidate functions which are executed having the same arguments and returns across multiple executions thereof may be identified, and these functions may be re-compiled to generate object code wherein redundant subsequent executions are avoided, and return values from a first execution thereof are reused across subsequent executions, reducing computational cost.
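The reuse of return values across executions can be sketched with a wrapper over a function already screened as pure. This is only an illustration of the end state, not the library's instrumentation: the call log standing in for the tracked argument-to-return mapping is an invented structure.

```python
def memoize_pure(fn, call_log):
    """Reuse return values of a function observed to be pure.

    call_log maps argument tuples to results from prior executions;
    a repeated call returns the logged value instead of re-executing,
    mimicking the recompiled object code in which redundant
    subsequent executions are avoided.
    """
    def wrapper(*args):
        if args in call_log:
            return call_log[args]          # reuse first execution's result
        result = fn(*args)
        call_log[args] = result
        return result
    return wrapper
```

Non-pure functions (those whose tracked heap writes or returns vary for the same arguments) must be screened out before applying such a wrapper, which is exactly the screening the abstract describes.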
Storage structure for pattern mining
A computer implemented method includes obtaining an original graph data structure including multiple stored nodes connected by multiple edges. The stored nodes include multiple operation stored nodes and multiple data stored nodes. The method further includes generating an auxiliary graph data structure from the original graph data structure. The auxiliary graph data structure includes the operation stored nodes. The method further includes executing a pattern mining tool on the auxiliary graph data structure to obtain a pattern list, traversing the auxiliary graph data structure to identify multiple instances of identified patterns in the pattern list, and presenting the instances.
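The auxiliary-graph construction can be sketched for the common case where operation and data nodes alternate. A minimal illustration, not the patented structure: operation pairs connected through a data node become auxiliary edges, and repeated operation-label pairs form the pattern list, with the concrete node pairs as instances.

```python
from collections import Counter

def mine_op_patterns(edges, nodes):
    """Mine two-operation patterns from an operation/data graph.

    edges: (src, dst) pairs of the original graph.
    nodes: node -> (kind, label), kind in {"op", "data"}.
    Returns (pattern counts keyed by (label, label), instance list).
    """
    out = {}
    for s, d in edges:
        out.setdefault(s, []).append(d)
    patterns, instances = Counter(), []
    for s, d in edges:
        if nodes[s][0] == "op" and nodes[d][0] == "data":
            for d2 in out.get(d, []):       # hop over the data node
                if nodes[d2][0] == "op":
                    instances.append((s, d2))
                    patterns[(nodes[s][1], nodes[d2][1])] += 1
    return patterns, instances
```

A real pattern-mining tool would search for larger subgraphs, but the two-step version shows why the auxiliary graph helps: data nodes no longer inflate the search space.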
SYSTEM AND METHOD FOR DYNAMIC DEAD CODE ANALYSIS
Various methods, apparatuses/systems, and media for dynamic code analysis using aspect oriented programming (AOP) are disclosed. A processor (i) creates a list of all method names associated with an application before launching the application and writes it into a file; (ii) at runtime, reads the method names from the file into a hash set and, using AOP load-time weaving, each time a method is invoked, applies a pointcut around the execution of the method to remove the method name from the hash set in memory; (iii) periodically overwrites the file with a dump of the current entries in the hash set for fault tolerance; (iv) for every subsequent restart of the application, repeats processes (ii) and (iii). After running processes (i)-(iv) for a predetermined time period (a month, a quarter, etc.), the processor creates a final list of methods that have never been invoked, as candidates for deletion.
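The core bookkeeping — a hash set of never-invoked methods, pruned on each invocation — can be sketched with a decorator standing in for the AOP around-advice. This is an illustration of the data flow only; the patent's mechanism is Java-style load-time weaving, and the file-dump step for fault tolerance is elided here.

```python
def track_invocations(unused_methods):
    """Return a decorator that, like an around-advice pointcut,
    removes the wrapped method's name from the set of never-invoked
    methods the first time it executes."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            unused_methods.discard(fn.__name__)
            return fn(*args, **kwargs)
        return wrapper
    return decorate
```

After running long enough, whatever names remain in `unused_methods` are the dead-code candidates.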
TRAINING DATA AUGMENTATION VIA PROGRAM SIMPLIFICATION
Techniques regarding augmenting one or more training datasets for training one or more AI models are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, that can execute the computer executable components stored in the memory. The computer executable components can comprise a training augmentation component that can generate an augmented training dataset for training an artificial intelligence model by extracting a simplified source code sample from a source code sample comprised within a training dataset.
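The augmentation loop can be sketched with a deliberately trivial simplifier. This is not the patented component: the abstract does not specify the simplification, so comment stripping is used here purely as a placeholder transformation that preserves the sample's label.

```python
def augment_with_simplified(dataset, simplify):
    """Augment a training set with simplified variants of each sample.

    dataset: list of (source_code, label) pairs. Each sample whose
    simplified form differs from the original contributes one extra
    labeled sample to the augmented dataset.
    """
    augmented = list(dataset)
    for code, label in dataset:
        simple = simplify(code)
        if simple != code:
            augmented.append((simple, label))
    return augmented

def strip_comments(code):
    """Placeholder simplifier: drop full-line '#' comments."""
    return "\n".join(line for line in code.splitlines()
                     if not line.lstrip().startswith("#"))
```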
Determining when to perform and performing runtime binary slimming
Multiple execution traces of an application are accessed. The multiple execution traces have been collected at a basic block level. Basic blocks in the multiple execution traces are scored. Scores for the basic blocks represent benefits of performing binary slimming at the corresponding basic blocks. Runtime binary slimming is performed of the application based on the scores of the basic blocks.
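The scoring step can be illustrated with one simple benefit metric. This is an invented stand-in, not the patented score: a block absent from every trace scores highest (most safely slimmed), and a block present in every trace scores zero.

```python
def score_blocks(traces, all_blocks):
    """Score basic blocks from execution traces.

    traces: list of sets of basic-block ids, one set per execution.
    Score = 1 - (fraction of traces containing the block), so cold
    blocks are the best candidates for runtime binary slimming.
    """
    n = len(traces)
    return {b: 1 - sum(b in t for t in traces) / n for b in all_blocks}
```

A slimming pass would then remove or page out blocks whose score exceeds a chosen threshold.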