Patent classifications
G06F8/43
Systems and methods for controlling access to secure debugging and profiling features of a computer system
The present disclosure describes systems and methods for controlling access to secure debugging and profiling features of a computer system. Some illustrative embodiments include a system that includes a processor, and a memory coupled to the processor (the memory used to store information and an attribute associated with the stored information). At least one bit of the attribute determines a security level, selected from a plurality of security levels, of the stored information associated with the attribute. Asserting at least one other bit of the attribute enables exportation of the stored information from the computer system if the security level of the stored information is higher than at least one other security level of the plurality of security levels.
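The attribute scheme above can be sketched in a few lines. This is a hypothetical illustration, not the patent's actual encoding: here the low two bits of the attribute select one of four security levels, and a separate bit, when asserted, permits export of information whose level exceeds a given threshold.

```python
# Illustrative attribute layout (bit positions are assumptions, not from
# the disclosure): bits 0-1 encode the security level, bit 2 is the
# export-enable bit.
LEVEL_MASK = 0b011      # bits 0-1: security level (0 = lowest, 3 = highest)
EXPORT_BIT = 0b100      # bit 2: asserting it permits export of secure data

def security_level(attribute: int) -> int:
    """Return the security level encoded in the attribute."""
    return attribute & LEVEL_MASK

def may_export(attribute: int, threshold_level: int) -> bool:
    """Information above the threshold level is exportable only if the
    export bit is asserted; information at or below it is freely exportable."""
    if security_level(attribute) <= threshold_level:
        return True
    return bool(attribute & EXPORT_BIT)
```

With a threshold of 1, data at level 3 is blocked unless the export bit is set, while level-1 data is always exportable.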
LOGICALLY SPLITTING OBJECT CODE INTO MODULES WITH LAZY LOADING OF CONTENT
A method includes receiving a first portion of object code; analyzing the first portion of object code in a static manner to determine a call tree hierarchy; dividing, by a synthetic compiler, the first portion of object code into a plurality of modules; and starting to run the first portion of object code to begin a runtime phase, with the running of the first portion of the object code including: (i) lazy loading of the modules of the plurality of modules of the first portion of object code, and/or (ii) eager unloading of the modules of the plurality of modules of the first portion of object code.
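The lazy-loading and eager-unloading behavior described above can be modeled with a small sketch. The `LazyModule` class and its loader callable are illustrative assumptions: a module's content is materialized only on first use and can be dropped afterwards to reclaim memory.

```python
# Minimal sketch of lazy loading / eager unloading of code modules.
class LazyModule:
    def __init__(self, name, loader):
        self.name = name
        self._loader = loader     # produces the module content on demand
        self._content = None
        self.load_count = 0       # for observing when loads actually happen

    def get(self):
        """Lazy load: materialize the content only on first access."""
        if self._content is None:
            self._content = self._loader()
            self.load_count += 1
        return self._content

    def unload(self):
        """Eager unload: drop the content so memory can be reclaimed."""
        self._content = None

# Modules produced by splitting the object code (names are invented).
modules = {name: LazyModule(name, lambda n=name: f"<code for {n}>")
           for name in ("main", "parser", "codegen")}
```

Nothing is loaded until a module is first called into; unloading a module simply forces the next access to reload it.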
Machine-learning models to assess coding skills and video performance
A method includes receiving uncompilable code from a candidate. The method further includes extracting features from the uncompilable code. The method further includes outputting, with a coding machine-learning model, compilable code based on the uncompilable code and the extracted features. The method further includes generating a coding score based on the uncompilable code and the compilable code. The method further includes receiving first media of one or more answers to questions provided by the candidate during an interview. The method further includes outputting, with a media machine-learning model, one or more corresponding ratings for the one or more answers. The method further includes generating a media score based on the one or more corresponding ratings. The method further includes generating a total score based on the coding score and the media score.
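The scoring pipeline's final steps can be sketched as follows. All specifics here are assumptions for illustration: the coding score is approximated by textual similarity between the candidate's uncompilable code and the model's repaired compilable code, the media score is a mean of per-answer ratings, and the weights are invented.

```python
import difflib

def coding_score(uncompilable: str, compilable: str) -> float:
    """Submissions needing fewer repairs score higher (0..1). A stand-in
    for the patent's score over the two code versions."""
    return difflib.SequenceMatcher(None, uncompilable, compilable).ratio()

def media_score(ratings: list[float]) -> float:
    """Mean of the per-answer ratings from the media model (0..1)."""
    return sum(ratings) / len(ratings)

def total_score(code_s: float, media_s: float,
                code_weight: float = 0.6) -> float:
    """Combine coding and media scores; the 0.6 weight is illustrative."""
    return code_weight * code_s + (1 - code_weight) * media_s
```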
Instrumentation for nested conditional checks
Aspects include executing a first phase that includes injecting instrumentation into program code in response to identifying an inner conditional check in the program code and running the instrumented program with a representative workload. The injecting includes duplicating the inner conditional check and placing a duplicate of the inner conditional check before a respective original nested conditional check in the program code to create an instrumented program. The instrumented program includes a plurality of basic blocks including original basic blocks and a newly added basic block that includes the duplicate of the inner conditional check. The method also includes executing a second phase that includes collecting execution frequency values from counters associated with the basic blocks to form metadata used to make optimization decisions for the program code.
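The two phases can be illustrated with a toy instrumented function. The conditionals, counters, and workload below are invented: phase one hoists a duplicate of the inner conditional check into a new basic block ahead of the original nested check, each block bumping a counter; phase two reads the counters as frequency metadata.

```python
from collections import Counter

counters = Counter()  # execution-frequency counters, one per basic block

def instrumented(x, y):
    # Newly added basic block: duplicate of the inner check, placed
    # before the original nested conditional.
    if x > 0:                 # duplicate of the inner conditional
        counters["dup_inner"] += 1
    if y > 10:                # original outer conditional
        counters["outer"] += 1
        if x > 0:             # original (nested) inner conditional
            counters["inner"] += 1

def run_representative_workload():
    """Phase one: run the instrumented program on a representative workload.
    Phase two: return the counter values as optimization metadata."""
    for x, y in [(1, 20), (-1, 20), (1, 5)]:
        instrumented(x, y)
    return dict(counters)
```

Comparing `dup_inner` against `inner` tells the optimizer how often the inner condition holds independently of the outer one, which is the kind of decision data the second phase collects.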
Static versioning in the polyhedral model
An approach is presented to enhance the optimization process in a polyhedral compiler by introducing compile-time versioning, i.e., the production of several versions of optimized code under varying assumptions on its run-time parameters. We illustrate this process by enabling versioning in the polyhedral processor placement pass. We propose an efficient code generation method and validate that versioning can be useful in a polyhedral compiler by performing benchmarking on a small set of deep learning layers defined for dynamically-sized tensors.
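Compile-time versioning, at its simplest, means emitting several variants of the same code specialized under different assumptions about a run-time parameter, plus a guard that dispatches to the variant whose assumption holds. The sketch below is a loose analogy in Python, not polyhedral code generation: the specialization (a small-size variant versus a general fallback) is invented for illustration.

```python
def make_versions():
    """Return a dispatcher over two 'compiled' versions of a reduction."""
    def small_n(xs):
        # Version produced under the assumption N <= 4 (stands in for
        # code specialized, e.g. fully unrolled, for small sizes).
        total = 0
        for x in xs:
            total += x
        return total

    def general(xs):
        # Fallback version with no assumption on N.
        return sum(xs)

    def dispatch(xs):
        # Run-time guard selects the version whose assumption holds.
        return small_n(xs) if len(xs) <= 4 else general(xs)

    return dispatch
```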
Automatically mapping binary executable files to source code by a software modernization system
Techniques are described for enabling a software modernization system to automatically map binary executable files and other runtime artifacts (e.g., application binaries, Java ARchive (JAR) files, .NET Dynamic Link Library (DLL) files, process identifiers, etc.) to source code associated with the binary executable files, e.g., as part of modernization processes aimed at migrating users' applications to a cloud service provider's infrastructure. A software modernization service of a cloud provider network provides discovery agents and other tools that are capable of creating an inventory of users' software applications and collecting profile data about the software applications. Various techniques are described for automatically identifying the source code associated with software applications identified by a discovery agent in a user's computing environment, thereby improving the efficiency of various software modernization analyses and other modernization processes.
MAPPING NATURAL LANGUAGE AND CODE SEGMENTS
Techniques are provided for mapping natural language to code segments. In one embodiment, the techniques involve receiving a document and software code, wherein the document comprises a natural language description of a use of the code, generating, via a vectorization process performed on the document, at least one vector or word embedding, generating, via a natural language processing technique performed on the at least one vector or word embedding, a first label set, generating, via a machine learning analysis of the software code, a second label set, determining, based on a comparison of the first label set and the second label set, a match confidence between the document and the software code, wherein the match confidence indicates a measure of similarity between the first label set and the second label set, and upon determining that the match confidence exceeds a predefined threshold, mapping the document to the software code.
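The matching step at the end of that pipeline can be sketched simply. The similarity measure and threshold below are assumptions: here the two label sets are compared with Jaccard similarity, and the document is mapped to the code only when the resulting match confidence exceeds the threshold.

```python
def match_confidence(doc_labels: set[str], code_labels: set[str]) -> float:
    """Jaccard similarity between the document-derived and code-derived
    label sets (0..1); a stand-in for the patent's match confidence."""
    if not doc_labels and not code_labels:
        return 0.0
    return len(doc_labels & code_labels) / len(doc_labels | code_labels)

def should_map(doc_labels, code_labels, threshold=0.5):
    """Map the document to the code only above the confidence threshold."""
    return match_confidence(doc_labels, code_labels) > threshold
```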
Multi-core I/O trace analysis
Improved mechanisms and techniques for recording and aggregating trace information from multiple computing modules of a storage system may be provided. On a storage system having multiple computing modules, where each computing module has multiple processing cores, processing cores may record trace information for I/O operations in dedicated local memory, i.e., memory that resides in the same computing module as the processing core and is dedicated to that computing module. One of the processing cores may be configured to aggregate trace information from across multiple computing modules into its dedicated local memory by accessing trace information from the dedicated local memories of the other computing modules in addition to its own. The aggregated information in one dedicated local memory then may be analyzed for functionality and/or performance, and additional action taken based on the analysis.
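A toy model of the aggregation scheme: each module's cores append trace entries to that module's local memory, and one designated core merges every module's traces (including its own) into a single ordered trace for analysis. The module layout and trace tuples are illustrative assumptions.

```python
# Per-module local memories holding (core, operation, timestamp) entries.
local_memories = {
    "module0": [("core0", "read", 0.1), ("core1", "write", 0.2)],
    "module1": [("core2", "read", 0.3)],
    "module2": [("core3", "write", 0.4)],
}

def aggregate(aggregator_module: str) -> list:
    """The designated core copies remote traces into its own local memory,
    alongside the entries it already holds."""
    merged = list(local_memories[aggregator_module])
    for module, traces in local_memories.items():
        if module != aggregator_module:
            merged.extend(traces)
    # Order by timestamp so analysis sees one coherent I/O trace.
    return sorted(merged, key=lambda t: t[2])
```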
Automated system capacity optimization
A method, system, and computer program product for implementing automated system capacity optimization is provided. The method includes retrieving, from plug-in components running on a plurality of hardware and software sources, metrics data associated with the plug-in components. The metrics data is cross-referenced with respect to operational sizing recommendations for each plug-in component based on aggregated disparate sizing guidelines, and resulting software code modules are generated. Software and hardware requirements for enabling target computing components are determined based on results of executing the software code modules, and operational functionality of the target computing components is enabled in accordance with the software and hardware requirements.
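The cross-referencing step can be sketched as a lookup of observed metrics against aggregated sizing guidelines to yield per-component requirements. The plug-in names, guideline figures, and metric units below are invented for illustration.

```python
# Aggregated disparate sizing guidelines, keyed by plug-in component.
GUIDELINES = {
    "db":  {"cpu_per_1k_ops": 2, "mem_mb_per_1k_ops": 512},
    "web": {"cpu_per_1k_ops": 1, "mem_mb_per_1k_ops": 256},
}

def size_requirements(metrics: dict) -> dict:
    """Cross-reference observed ops/sec per plug-in with the guidelines
    to derive software/hardware requirements for each component."""
    reqs = {}
    for plugin, ops_per_sec in metrics.items():
        g = GUIDELINES[plugin]
        k_ops = ops_per_sec / 1000
        reqs[plugin] = {
            "cpu": g["cpu_per_1k_ops"] * k_ops,
            "mem_mb": g["mem_mb_per_1k_ops"] * k_ops,
        }
    return reqs
```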