G06F8/443

Computer Processing and Outcome Prediction Systems and Methods
20230127715 · 2023-04-27

Computer processing and outcome prediction systems and methods that generate algorithm time prediction polynomials and inverse algorithm time prediction polynomials, detect race conditions, determine when a non-linear algorithm can be treated as if it were linear, and automatically generate parallel and quantum solutions from classical software or from the relationship between monotonic attribute values.
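As a rough illustration of the time-prediction idea (not the patented method), a power-law model fit to measured runtimes can both predict execution time for a new input size and be inverted to find the largest input that fits a time budget. The function names and the two-point fit below are illustrative assumptions:

```python
import math

def fit_power_law(sizes, times):
    """Estimate t ~ c * n^k from the first and last (size, time) measurements."""
    k = (math.log(times[-1]) - math.log(times[0])) / \
        (math.log(sizes[-1]) - math.log(sizes[0]))
    c = times[0] / sizes[0] ** k
    return c, k

def predict_time(c, k, n):
    """Forward prediction: expected runtime for input size n."""
    return c * n ** k

def invert_time(c, k, budget):
    """Inverse prediction: largest input size finishing within the budget."""
    return (budget / c) ** (1 / k)

# Measured runtimes that happen to follow t = 2e-6 * n^2 exactly:
c, k = fit_power_law([100, 1000], [0.02, 2.0])
```

With these measurements the fit recovers an exponent of 2, so `predict_time(c, k, 10000)` yields 200 seconds and `invert_time(c, k, 200)` yields 10000. An exponent close to 1 over the measured range is one way to justify treating a non-linear algorithm as effectively linear.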

Generating closures from abstract representation of source code
11474797 · 2022-10-18

A device may receive source code and identify, based on the source code, an abstract syntax tree representing an abstract syntactic structure of the source code. Based on the abstract syntax tree, the device may identify a closure, the closure implementing a function based on at least a portion of the abstract syntax tree. In addition, the device may perform an action based on the closure.
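A minimal sketch of the idea using Python's own `ast` module (an assumption; the claim is not tied to any particular language): parse the source into an abstract syntax tree, walk its nodes for function definitions, and compile each such subtree into a callable closure.

```python
import ast

def closures_from_source(source):
    """Parse source into an AST and wrap each top-level function as a callable."""
    tree = ast.parse(source)
    closures = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # Compile just this subtree into its own module namespace;
            # functions referencing outside globals would need more care.
            module = ast.Module(body=[node], type_ignores=[])
            namespace = {}
            exec(compile(module, "<ast>", "exec"), namespace)
            closures[node.name] = namespace[node.name]
    return closures

fns = closures_from_source("def double(x):\n    return 2 * x\n")
result = fns["double"](21)  # -> 42
```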

Methods and devices for modifying a runtime environment of imaging applications on a medical device
11599343 · 2023-03-07

A method, an improvement node, a system and a computer program for computing an improvement result for a runtime environment of at least one application on a device in a medical context. An embodiment of the method includes detecting a state of the runtime environment on the device; accessing a database with the detected state to retrieve at least one corresponding candidate improvement result; executing each retrieved candidate improvement result test-wise on a test infrastructure in which the detected state of the runtime environment is reproduced identically; measuring improvement parameters of the test-wise execution; and adding those candidate improvement results for which the measured improvement parameters meet defined requirements.
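The detect–retrieve–test–accept loop can be sketched as follows (a toy sketch: the `database` dictionary, the `test_run` callable, and the speedup threshold are illustrative assumptions, and the real system reproduces the full runtime state on a test infrastructure rather than calling a function):

```python
def select_improvements(state, database, test_run, threshold):
    """Keep only the candidate improvement results whose measured
    improvement parameter (here, a speedup factor) meets the requirement."""
    accepted = []
    for candidate in database.get(state, []):  # retrieve candidates for the detected state
        speedup = test_run(candidate)          # test-wise execution + measurement
        if speedup >= threshold:
            accepted.append(candidate)
    return accepted

db = {"imaging-runtime-v1": ["enable_cache", "reduce_precision"]}
measured = {"enable_cache": 1.6, "reduce_precision": 1.05}
kept = select_improvements("imaging-runtime-v1", db, measured.get, 1.2)
```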

Reduced instructions to generate global variable addresses

In order to reduce the number of instructions that the compiler generates to load the address of a global variable into a register, the compiler analyzes the global variables used in each function to estimate which global variables will be located within the same memory page and can share a common base address. For each function, a base global variable is selected and its address is fully resolved. The address of each subsequent global variable is then constructed as an offset relative to the address of the base global variable, the offset being derived from the subsequent global variable's position in a global variable order list.
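A toy sketch of the resulting addressing plan (the 4 KiB page size, function names, and instruction labels are assumptions): the first global a function uses becomes the base and is loaded with a fully resolved address, and every other global on the same page is addressed as base plus offset.

```python
PAGE = 4096  # assumed page size

def plan_addressing(func_globals, layout):
    """layout maps global name -> absolute address. The first global used by
    the function becomes the base; globals on the same page are addressed
    relative to it, avoiding a full (multi-instruction) address load."""
    base = func_globals[0]
    base_addr = layout[base]
    plan = {base: ("load_full", base_addr)}
    for g in func_globals[1:]:
        addr = layout[g]
        if addr // PAGE == base_addr // PAGE:
            plan[g] = ("base_plus_offset", addr - base_addr)
        else:
            plan[g] = ("load_full", addr)
    return plan

layout = {"a": 0x1000, "b": 0x1010, "c": 0x3000}
plan = plan_addressing(["a", "b", "c"], layout)
```

Here `b` shares a page with the base `a` and needs only an offset, while `c` lives on a different page and still requires a full address load.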

Systems and methods for mapping software applications interdependencies
11635948 · 2023-04-25

Systems and methods for mapping between function calls and entities of a computer program. The method includes executing a computer program in a first computing environment; determining a first entity of the computer program to track; assigning an identifier to the first entity; determining that the first entity has been accessed by at least one function call; mapping the at least one function call to the identifier of the first entity; and generating a cluster including the at least one function, wherein the cluster may be executed independently from the rest of the computer program.
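One way to sketch the tracking and mapping in Python (the `TrackedEntity` class and its property-based access interception are illustrative assumptions, not the claimed mechanism):

```python
import inspect
from collections import defaultdict

class TrackedEntity:
    """An entity assigned an identifier; records every function that accesses it."""
    _access_map = defaultdict(set)  # identifier -> names of accessing functions

    def __init__(self, identifier, value):
        self.identifier = identifier
        self._value = value

    @property
    def value(self):
        caller = inspect.stack()[1].function  # the function performing the access
        TrackedEntity._access_map[self.identifier].add(caller)
        return self._value

def reader(e):
    return e.value

def scaler(e):
    return e.value * 2

e = TrackedEntity("entity-1", 10)
reader(e)
scaler(e)
# The cluster: functions that touched the entity and could run as a unit.
cluster = TrackedEntity._access_map["entity-1"]
```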

Computer architecture for executing quantum programs
11599344 · 2023-03-07

A computer system, designed according to a particular architecture, compiles and executes a general quantum program. Computer systems designed in accordance with the architecture are suitable for use with a variety of programming languages and a variety of hardware backends. The architecture includes a classical computer and a quantum device (which may be remote from the classical computer) that includes both classical execution units and a quantum processing unit (QPU).
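The division of labor in such an architecture can be caricatured in a few lines (a toy single-qubit statevector stands in for real quantum hardware; every class name, gate name, and the whitespace-token "compiler" below are assumptions):

```python
import math

class QPU:
    """Toy single-qubit statevector unit, a stand-in for real hardware."""
    def __init__(self):
        self.state = [1.0, 0.0]  # |0>

    def h(self):
        # Hadamard gate on the single qubit.
        a, b = self.state
        s = 1 / math.sqrt(2)
        self.state = [s * (a + b), s * (a - b)]

    def prob0(self):
        return self.state[0] ** 2

class QuantumDevice:
    """Holds classical execution units plus a QPU, as in the architecture."""
    def __init__(self):
        self.qpu = QPU()

    def run(self, program):
        for op in program:           # classical unit interprets the program
            getattr(self.qpu, op)()  # quantum ops are dispatched to the QPU
        return self.qpu.prob0()

class ClassicalHost:
    """Compiles a trivial source program and submits it to the device."""
    def __init__(self, device):
        self.device = device

    def execute(self, source):
        program = source.split()     # "compilation": tokenize gate names
        return self.device.run(program)

host = ClassicalHost(QuantumDevice())
p0 = host.execute("h")  # probability of measuring |0> after a Hadamard
```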

Systems and methods for legacy source code optimization and modernization

Disclosed herein are embodiments of systems, methods, and products for modernizing and optimizing legacy software. A computing device may perform an automated runtime performance profiling process. The performance profiler may automatically profile the legacy software at runtime, monitor the memory usage and module activities of the legacy software, and identify a subset of functions in the legacy software that scale poorly or are otherwise inefficient. The computing device may further perform a source code analysis and refactoring process. The computing device may parse the source code of the subset of inefficient functions and identify code violations within the source code. The computing device may provide one or more refactoring options to optimize the source code. Each refactoring option may comprise a change to the source code configured to correct the code violations. The computing device may refactor the source code based on a selected refactoring option.
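The profiling step can be sketched with Python's standard `cProfile` and `pstats` modules (illustrative only; the described system targets legacy software and also proposes refactorings, which for the example below would amount to replacing the quadratic string concatenation with `''.join`):

```python
import cProfile
import pstats

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)  # quadratic string building: a classic inefficiency
    return s

def profile_hotspots(fn, *args, top=5):
    """Run fn under the profiler and return the costliest function names."""
    pr = cProfile.Profile()
    pr.enable()
    fn(*args)
    pr.disable()
    stats = pstats.Stats(pr).sort_stats("cumulative")
    # fcn_list holds (filename, lineno, funcname) keys, costliest first.
    return [name for (_, _, name) in stats.fcn_list[:top]]

hot = profile_hotspots(slow_concat, 1000)
```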

Static safety analysis for control-flow linearization

A static safety analysis for control-flow linearization receives a control flow graph (CFG) and an intermediate representation of a computer program and identifies, for a given loop, all memory load instructions belonging to one side of a diamond-shaped structure in the CFG. For each address representation of each identified memory load instruction, the analysis determines whether it is used on all other sides of the diamond-shaped structure. Responsive to determining that each such address representation is used on all other sides, the analysis determines whether an immediate predecessor of the top of the diamond-shaped structure for the given loop post-dominates the header of the given loop. Responsive to determining that it does, the analysis affirms the safety of linearization.
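The core side condition reduces to a set check, sketched below (a deliberately simplified sketch over abstract address sets; the real analysis works on IR instructions and additionally verifies the post-dominance condition on the CFG, which is omitted here):

```python
def loads_safe_to_linearize(loads_one_side, uses_on_other_sides):
    """Every address loaded on one side of the diamond must be used on all
    other sides; otherwise linearized (speculative) execution could
    dereference an address the original program never touched on the
    actually taken path."""
    return all(
        all(addr in uses for uses in uses_on_other_sides)
        for addr in loads_one_side
    )

# 'p' is loaded on the then-side and also used on the else-side: safe.
safe = loads_safe_to_linearize({"p"}, [{"p", "q"}])
# 'p' is never used on the else-side: linearization could fault, not safe.
unsafe = loads_safe_to_linearize({"p"}, [{"q"}])
```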

TECHNIQUES FOR INFERRING INFORMATION
20230123811 · 2023-04-20

Apparatuses, systems, and techniques to infer information from one or more sets of data. In at least one embodiment, a processor uses one or more neural networks to infer information from one or more sets of data based, at least in part, on one or more dynamically configurable dimensions of the one or more sets of data.

SYSTEM AND METHOD FOR IMPLEMENTING A PLATFORM AND LANGUAGE AGNOSTIC SMART SDK UPGRADE MODULE

Various methods, apparatuses/systems, and media for automatically upgrading an application are disclosed. A processor creates a dynamic machine learning (ML) model, trains it, and scans the application for SDK upgrades against the dynamic ML model by implementing an ML algorithm for predictions. In response to detecting that training of the dynamic ML model is complete, the processor executes the SDK upgrade, triggering the following automated processes: implementing the ML algorithm against the trained dynamic ML model to generate predictive results data for deprecated references corresponding to the application; evaluating the predictive results data to determine whether there is a match for a deprecated reference; and, when a match for the deprecated reference is determined, automatically replacing code and upgrading the application to a newer version of the programming language specification.
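The replace-on-match step can be sketched with a static lookup table (a loud assumption: in the described system the deprecated-reference predictions come from the trained dynamic ML model, not a hand-written mapping, and the `sdk.*` names below are hypothetical):

```python
import re

# Hypothetical deprecated-reference table standing in for ML predictions.
DEPRECATED = {
    r"\bsdk\.old_client\(": "sdk.Client(",
    r"\bsdk\.legacy_auth\b": "sdk.auth",
}

def upgrade_source(source):
    """Replace every matched deprecated reference and report what changed."""
    changes = []
    for pattern, replacement in DEPRECATED.items():
        source, count = re.subn(pattern, replacement, source)
        if count:
            changes.append((pattern, replacement, count))
    return source, changes

upgraded, changes = upgrade_source(
    "client = sdk.old_client()\ntoken = sdk.legacy_auth\n"
)
```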