Patent classifications
G06F11/3624
Identifying root causes of software defects
Root cause identification of a software defect includes identifying, in program code of a software feature, hedge code of the software feature based on errors induced by temporarily substituting program code of the software feature with substitute program code, and obtaining an error graph for the hedge code; obtaining error logs of an application that incorporates the software feature, the error logs indicating errors with the software feature of the application; automatically generating an application error graph reflective of the errors with the software feature of the application; mapping the application error graph to the error graph for the hedge code; and, based on the mapping aligning one or more errors reflected in the application error graph to error(s) reflected in the error graph for the hedge code, identifying the hedge code as inducing a root error identified in the application error graph.
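The graph-alignment step described above can be illustrated with a minimal Python sketch. All names and data structures here are hypothetical; the patent publishes no code, and a real implementation would build both graphs from instrumentation and logs rather than literals.

```python
# Illustrative sketch: align an application error graph with a hedge-code
# error graph by matching error signatures, then flag the hedge code when
# the application's root error is among the aligned errors.

def align_errors(app_graph, hedge_graph):
    """Return the error signatures present in both graphs."""
    return set(app_graph["errors"]) & set(hedge_graph["errors"])

def hedge_code_induces_root(app_graph, hedge_graph):
    """True when the application's root error aligns with a hedge-code error."""
    return app_graph["root"] in align_errors(app_graph, hedge_graph)

# Error graph obtained by temporarily substituting the feature's code:
hedge_graph = {"errors": {"E_NULL_REF", "E_TIMEOUT"}, "root": "E_NULL_REF"}
# Error graph generated automatically from the application's error logs:
app_graph = {"errors": {"E_NULL_REF", "E_IO"}, "root": "E_NULL_REF"}

print(hedge_code_induces_root(app_graph, hedge_graph))  # → True
```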
Cross jobs failure dependency in CI/CD systems
A build fail of a job in a development pipeline of an application development system is analyzed. A determination is made as to whether the build fail affects other jobs in the development pipeline. In response to determining that the build fail affects at least one of the other jobs, an alert identifying the at least one affected job is generated.
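Determining which jobs a build failure affects amounts to a transitive walk over the pipeline's dependency graph. The sketch below assumes a hypothetical dictionary model of the pipeline; real CI/CD systems expose job dependencies through their own configuration formats.

```python
# Sketch: walk downstream dependencies from a failed job and alert on
# every job transitively affected by the failure.
from collections import deque

def affected_jobs(deps, failed_job):
    """deps maps a job to the jobs that depend on it; walk transitively."""
    affected, queue = set(), deque([failed_job])
    while queue:
        for downstream in deps.get(queue.popleft(), []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

deps = {"build": ["unit-tests", "package"], "package": ["deploy"]}
hit = affected_jobs(deps, "build")
if hit:  # generate an alert naming the affected jobs
    print(f"ALERT: build fail of 'build' affects {sorted(hit)}")
```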
Verifying data structure consistency across computing environments
A technique for verifying data structure consistency across computing environments includes computing a first signature for a data structure of an application subject to checkpointing corresponding to a first computing environment residing on a first computer. A second signature for the data structure of the application corresponding to a second computing environment residing on a second computer is computed. The first and second signatures are compared to determine whether a change to the data structure exists. Responsive to a lack of change to the data structure based on the comparison, a mobility operation is enabled for the application between the first computer and the second computer.
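The signature comparison can be sketched as hashing a canonical serialization of the data structure's layout in each environment. The field representation below is hypothetical; a real implementation would derive it from the application's checkpoint metadata.

```python
# Sketch: compute a signature per environment for a checkpointed data
# structure and enable mobility only when the signatures match.
import hashlib, json

def structure_signature(struct):
    """Hash a canonical serialization of the structure's layout."""
    canonical = json.dumps(struct, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

env_a = {"fields": [["id", "u64"], ["name", "str"]]}  # first computer
env_b = {"fields": [["id", "u64"], ["name", "str"]]}  # second computer

# Matching signatures indicate no change, so the application may move.
mobility_enabled = structure_signature(env_a) == structure_signature(env_b)
print(mobility_enabled)  # → True
```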
Debugging deep neural networks
A method, computer system, and computer program product for debugging a deep neural network are provided. The present invention may include identifying, automatically, one or more debug layers associated with a deep learning (DL) model design/code, wherein the identified one or more debug layers include one or more errors, and wherein a reverse operation is introduced for the identified one or more debug layers. The present invention may then include presenting, to a user, a debug output based on at least one break condition, wherein, in response to determining that the at least one break condition is satisfied, the debug output is triggered to be presented to the user, and wherein the presented debug output includes a fix for the identified one or more debug layers in the DL model design/code and at least one actionable insight.
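One concrete break condition for flagging debug layers is the presence of non-finite activations. The sketch below uses plain Python lists in place of real tensors, and the suggested fix is illustrative, not the patent's actual remediation logic.

```python
# Sketch: scan layer activations for a break condition (any NaN/Inf
# value) and surface a debug output naming the offending layers.
import math

def find_debug_layers(activations):
    """activations: {layer_name: [floats]}; return layers whose
    activations trip the break condition (any non-finite value)."""
    return [name for name, vals in activations.items()
            if any(not math.isfinite(v) for v in vals)]

activations = {"conv1": [0.2, 0.7], "dense2": [float("nan"), 1.0]}
debug_layers = find_debug_layers(activations)
if debug_layers:  # break condition satisfied -> present debug output
    print(f"Debug output: inspect layers {debug_layers}; "
          "suggested fix: lower the learning rate or clip gradients")
```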
System and methods for live debugging of transformed binaries
A method, system, or apparatus to debug software that is reorganized in memory is presented. An interactive debugging session is established with an executable code component corresponding to a packed binary file that includes machine code corresponding to blocks of original source code. A randomly reorganized layout of the machine code is generated in memory based on a transformation defined in a function randomization library. An in-memory object file is created by using a debug data component corresponding to the packed binary file. The debug data component includes symbol table information, generated prior to the randomly reorganized layout, for debugging the blocks of the original source code. The symbol table information is updated based on the randomly reorganized layout of the machine code, and a debugger program is instructed to load the in-memory object file with the updated symbol information to debug the blocks of the original source code.
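The symbol-table update step can be sketched as rebasing each symbol's address according to where its code block landed after randomization. The offsets and names below are hypothetical; real debug data would come from a format such as DWARF.

```python
# Sketch: rebase symbol-table entries after machine-code blocks are
# randomly reorganized in memory.
def update_symbols(symbols, new_layout):
    """symbols: {name: original_offset}; new_layout maps each block's
    original offset to its randomized in-memory address."""
    return {name: new_layout[offset] for name, offset in symbols.items()}

symbols = {"main": 0x1000, "helper": 0x2000}       # pre-randomization
new_layout = {0x1000: 0x7F3000, 0x2000: 0x7F1000}  # randomized placement

# The debugger would load these updated addresses from the in-memory
# object file to map breakpoints back to the original source blocks.
print(update_symbols(symbols, new_layout))
```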
Providing for multi-process log debug without running on a production environment
Methods, computer program products, and/or systems are provided that perform the following operations: determining that a log multi-process debug mode is specified; obtaining a log file for debugging source code, wherein the log file includes a plurality of log records; inserting a plurality of process identifier fields into each current log record in the log file; inserting a new log record into the log file for a newly created process; and providing for performance of debugging of the source code based in part on the plurality of process identifier fields inserted into each current log record.
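The record-annotation step can be sketched as prefixing each log record with process-identifier fields so that interleaved multi-process output can be untangled offline. The field names and record format below are assumptions, not the patent's actual log schema.

```python
# Sketch: annotate each log record with process-identifier fields so a
# multi-process run can be debugged from the log file alone.
def insert_pid_fields(records, pid, parent_pid):
    """Prefix every record with pid/ppid identifier fields."""
    return [f"pid={pid} ppid={parent_pid} {rec}" for rec in records]

log = ["open config.yaml", "fork child"]
annotated = insert_pid_fields(log, pid=1200, parent_pid=1)

# A new record is inserted when a child process is created:
annotated.append("pid=1201 ppid=1200 process created")
print("\n".join(annotated))
```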
Realization of functional verification debug station via cross-platform record-mapping-replay technology
An efficient and cost-effective method for using an emulation machine is disclosed, in which a new concept and use model called a debug station is described. The debug station methodology lets users run emulation on a machine from one vendor and debug designs on a machine from another vendor, so long as these machines meet certain criteria. The methodology and its associated hardware are hence called a 'platform-neutral debug station.' The debug station methodology decouples usage of emulation machines: users can choose the best machine for running a design and the best machine for debugging, and the two need not be the same. Unlike past practice, where a design had to be run and debugged on the same emulator from beginning to end, the mix-and-match method described herein allows users to use emulators in the most efficient, and often most cost-effective, way.
Dynamic system for active detection and mitigation of anomalies in program code construction interfaces
Embodiments of the invention are directed to active detection and mitigation of anomalies in program code construction interfaces. The system provides a proactive plug-in with a dynamic machine learning (ML) anomaly detection model cloud component structured to dynamically detect architectural flaws in program code in real time in a user coding interface. In particular, the system activates an ML anomaly detection plug-in for dynamically analyzing first technology program code being constructed in the user coding interface. Moreover, the system modifies, via the ML anomaly detection plug-in, the user coding interface to embed interface elements associated with one or more flaws in the first technology program code detected by the ML anomaly detection model cloud component.
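The plug-in's flow can be sketched as: send code under construction to a detection component, then embed interface elements at flagged lines. The toy string rule below merely stands in for the ML cloud model, and the inline-marker "interface element" is an assumption about the UI.

```python
# Sketch: a coding-interface plug-in hook that forwards code under
# construction to an anomaly detector and embeds markers at flagged lines.
def detect_flaws(code):
    """Stand-in for the ML anomaly-detection cloud component: returns
    the 1-based line numbers it considers flawed (toy rule only)."""
    return [i for i, line in enumerate(code.splitlines(), 1)
            if "eval(" in line]

def annotate_interface(code):
    """Embed interface elements (here, inline markers) at flagged lines."""
    flagged = set(detect_flaws(code))
    return "\n".join(line + ("  # FLAGGED" if i in flagged else "")
                     for i, line in enumerate(code.splitlines(), 1))

print(annotate_interface("x = 1\ny = eval(user_input)"))
```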
Runtime Error Prediction System
During a software development lifecycle of a software application, application code is modified and multiple versions are built and packaged to be installed on different computing systems, such as a software development computing system, a software testing computing system, and/or production or end-user computing systems. A runtime error optimization engine analyzes, using a first artificial intelligence model, a build package to predict whether it may encounter runtime errors causing an installation to fail. When an error is identified, a runtime error orchestration engine may utilize a second artificial intelligence model to identify a solution, where the runtime error orchestration engine rebuilds the build package based on the identified solution and initiates installation via a deployment pipeline.
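The two-stage predict-then-remediate flow can be sketched as below. Both "models" are stand-in functions using a toy missing-dependency rule; the patent's actual AI models and package format are not published.

```python
# Sketch of the two-stage flow: a predictor scores a build package for
# likely runtime errors; on a positive prediction, an orchestrator picks
# a remediation and produces a rebuilt package for the pipeline.
def predict_runtime_error(package):
    """Stand-in for the first AI model: flag dependencies that the
    package requires but does not bundle."""
    return [d for d in package["requires"] if d not in package["bundled"]]

def orchestrate_fix(package, errors):
    """Stand-in for the second AI model: bundle the missing deps and
    return the rebuilt package, ready for the deployment pipeline."""
    return dict(package, bundled=package["bundled"] | set(errors))

pkg = {"requires": ["libssl", "libz"], "bundled": {"libz"}}
errors = predict_runtime_error(pkg)
if errors:
    pkg = orchestrate_fix(pkg, errors)
print(predict_runtime_error(pkg))  # → []
```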
Automated distributed computing test execution
In a computer-implemented method, computer system, and/or computer program product, a processor(s) obtains a test (of one or more steps) to verify program code for deployment in a distributed computing system. The processor(s) determines pre-defined operations correlating to the step(s). The processor(s) automatically distributes the pre-defined operations to a portion of one or more resources of the distributed computing system for execution. The processor(s) monitors the execution and saves at least one screenshot at each step. The processor(s) generates a user interface with a status indicator. The processor(s) continuously updates the user interface, based on the monitoring, to reflect a progression of the portion of the one or more resources through the step(s).
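The distribute-monitor-update loop can be sketched with threads standing in for distributed resources. The step names, artifact naming, and status map below are all illustrative assumptions.

```python
# Sketch: distribute pre-defined operations across resources (threads
# here, stand-ins for distributed nodes), record an artifact per step,
# and keep a status map current as execution progresses.
from concurrent.futures import ThreadPoolExecutor

def run_step(step, status, artifacts):
    status[step] = "running"
    artifacts[step] = f"screenshot-{step}.png"  # placeholder capture
    status[step] = "passed"

status, artifacts = {}, {}
steps = ["login", "create-record", "verify-record"]
with ThreadPoolExecutor(max_workers=3) as pool:
    for step in steps:
        pool.submit(run_step, step, status, artifacts)

# A UI layer would poll `status` to refresh its indicators; here we
# just print the final state after all steps complete.
print(status)
```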