G06F11/3688

Pre-migration detection and resolution of issues in migrating database systems
11550765 · 2023-01-10

Implementations include: providing, by a computer-executed migration advisor executing within a run-time of a source database system, a query data set including queries processed by the source database system during production use; providing, by the migration advisor, an object data set including data representative of database objects stored within a database of the source database system; generating, by the migration advisor, a list of query-level features and a list of object-level features, where each feature in either list is a feature that is deprecated in a target database system; resolving one or more issues represented by features of one or more of the lists; and executing migration of the database of the source database system to a database of the target database system.
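The advisor's flow can be sketched as follows. All names, the deprecated-feature catalog, and the substring matching are illustrative stand-ins for the claimed method, which would use real query parsing and a catalog derived from the target system:

```python
# Hypothetical sketch of the migration-advisor flow; the feature catalog and
# naive substring matching are assumptions, not the patented technique.
DEPRECATED_IN_TARGET = {"CONNECT BY", "ROWNUM"}  # assumed deprecated query features

def query_level_features(query_data_set):
    """Collect deprecated features appearing in production queries."""
    found = []
    for query in query_data_set:
        for feature in DEPRECATED_IN_TARGET:
            if feature in query.upper():
                found.append((feature, query))
    return found

def object_level_features(object_data_set):
    """Collect database objects whose type is deprecated in the target system."""
    return [(obj["type"], obj["name"]) for obj in object_data_set
            if obj["type"] in {"SNAPSHOT", "LONG_RAW_COLUMN"}]  # assumed types

def resolve_issues(features):
    """Placeholder resolution step: report each flagged feature as resolved."""
    return [f"resolved: {feature}" for feature, *_ in features]

queries = ["SELECT * FROM emp WHERE ROWNUM < 10", "SELECT 1 FROM dual"]
objects = [{"name": "mv_sales", "type": "SNAPSHOT"}]
issues = query_level_features(queries) + object_level_features(objects)
print(resolve_issues(issues))
```

Only after the issue list is empty would migration of the database itself proceed.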

MONITORING STACK MEMORY USAGE TO OPTIMIZE PROGRAMS

A computer system determines stack usage. An intercept function is executed to store a stack marker in a stack, wherein the intercept function is invoked when a program enters or exits each function of a plurality of functions of the program. A plurality of stack markers are identified in the stack and a memory address is determined for each stack marker during execution of the program to obtain a plurality of memory addresses. The plurality of memory addresses are analyzed to identify a particular memory address associated with a greatest stack depth. A stack usage of the program is determined based on the greatest stack depth. Embodiments of the present invention further include a method and program product for determining stack usage in substantially the same manner described above.
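The patent writes markers into a native stack and reads back their memory addresses; Python exposes neither raw stack memory nor marker addresses, so the sketch below is an assumed analogue that registers an intercept via `sys.setprofile` and tracks call depth instead of addresses:

```python
import sys

depth = 0       # current call depth (analogue of the stack pointer position)
max_depth = 0   # greatest depth seen (analogue of the deepest marker address)

def intercept(frame, event, arg):
    # Invoked on every Python function entry and exit, like the patent's
    # intercept function; instead of storing a stack marker we count nesting.
    global depth, max_depth
    if event == "call":
        depth += 1
        max_depth = max(max_depth, depth)
    elif event == "return":
        depth -= 1

def recurse(n):
    return 0 if n == 0 else 1 + recurse(n - 1)

sys.setprofile(intercept)
recurse(10)           # 11 nested frames: recurse(10) down to recurse(0)
sys.setprofile(None)
print("greatest call depth:", max_depth)
```

In a native implementation the analogous quantity would be the distance between the shallowest and deepest marker addresses observed.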

System and method for test selection according to test impact analytics

A system and method for determining a relative importance of a selected test in a plurality of tests, comprising a computational device for receiving one or more characteristics relating to an importance of the code, an importance of each of the plurality of tests, or both; and for determining the relative importance of the selected test according to said characteristics.
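One plausible reading of the scoring step is sketched below; the weighting formula, the coverage map, and all names are invented for illustration, since the abstract leaves the exact characteristics unspecified:

```python
# Hypothetical test-impact scoring: weight each test by the importance of the
# code it covers, scaled by the test's own importance. Data shapes are assumed.
def relative_importance(test, coverage, code_importance, test_importance):
    """Score a test by the importance of the code units it touches."""
    covered = coverage.get(test, [])
    code_score = sum(code_importance.get(unit, 0.0) for unit in covered)
    return code_score * test_importance.get(test, 1.0)

coverage = {"test_login": ["auth.py", "session.py"], "test_footer": ["footer.py"]}
code_importance = {"auth.py": 0.9, "session.py": 0.7, "footer.py": 0.1}
test_importance = {"test_login": 1.0, "test_footer": 0.5}

scores = {t: relative_importance(t, coverage, code_importance, test_importance)
          for t in coverage}
print(max(scores, key=scores.get))  # the test to prioritize
```

A selection policy would then run tests in descending score order, or drop tests below a cutoff.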

Deployment strategies for continuous delivery of software artifacts in cloud platforms

Computing systems, for example, multi-tenant systems, deploy software artifacts in data centers created in a cloud platform using a cloud platform infrastructure language that is cloud platform independent. The system receives an artifact version map that identifies versions of software artifacts for data center entities of the data center, and a cloud platform independent master pipeline that includes instructions for performing operations related to services on the data center, for example, deploying software artifacts, provisioning computing resources, and so on. The system receives a deployment manifest that provides a declarative specification of deployment strategies for deploying software artifacts in data centers. The system implements a deployment operator that executes on a cluster of computing systems of the cloud platform to implement the deployment strategies.
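The role of the artifact version map can be illustrated with a small sketch. The map layout, datacenter names, and artifact names below are assumptions, not the system's actual declarative format:

```python
# Illustrative version-map resolution; the real system consumes a declarative,
# cloud-platform-independent specification rather than Python dicts.
version_map = {
    "defaults": {"billing-service": "1.4.0", "search-service": "2.1.0"},
    "overrides": {("dc-eu", "billing-service"): "1.5.0-rc1"},  # canary in one DC
}

def resolve_version(datacenter, artifact):
    """Pick the per-datacenter override if present, else the default version."""
    return version_map["overrides"].get((datacenter, artifact),
                                        version_map["defaults"][artifact])

plan = [(dc, svc, resolve_version(dc, svc))
        for dc in ("dc-us", "dc-eu")
        for svc in ("billing-service", "search-service")]
for step in plan:
    print(step)
```

A master pipeline would consume a resolved plan like this, while the deployment manifest decides strategy (e.g., canary versus all-at-once) per datacenter entity.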

Anti-pattern detection in extraction and deployment of a microservice

Disclosed are various embodiments for anti-pattern detection in extraction and deployment of a microservice. A software modernization service is executed to analyze a computing application to identify various application components. When one or more of the application components are specified to be extracted as an independently deployable subunit, anti-patterns associated with deployment of the independently deployable subunit are determined prior to extraction. Anti-patterns may include increases in execution time, bandwidth, network latency, central processing unit (CPU) usage, and memory usage, among other anti-patterns. The independently deployable subunit is selectively deployed separate from the computing application based on the identified anti-patterns.
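A minimal sketch of the comparison step follows; the metric names and threshold ratios are invented, and a real implementation would project post-extraction metrics by static analysis or simulation rather than take them as inputs:

```python
# Hedged sketch: flag an anti-pattern when a projected post-extraction metric
# grows beyond an allowed ratio of the monolith baseline. Thresholds assumed.
THRESHOLDS = {"latency_ms": 1.2, "cpu": 1.5, "memory_mb": 1.5}  # allowed ratios

def detect_anti_patterns(baseline, projected):
    """Return metrics whose projected/baseline ratio exceeds its threshold."""
    return [m for m, limit in THRESHOLDS.items()
            if projected[m] / baseline[m] > limit]

def should_deploy_separately(baseline, projected):
    """Deploy the subunit on its own only when no anti-pattern is detected."""
    return not detect_anti_patterns(baseline, projected)

baseline = {"latency_ms": 40, "cpu": 0.30, "memory_mb": 512}
projected = {"latency_ms": 90, "cpu": 0.35, "memory_mb": 600}  # extra network hop
print(detect_anti_patterns(baseline, projected))
```

Here the added network hop pushes latency past its threshold, so the component would stay in the monolith.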

Green artificial intelligence implementation

A model designer creates models for machine learning applications while focusing on reducing the carbon footprint of the machine learning application. The model designer can automatically extract features of a machine learning application from requirements documents and automatically generate source code to implement that machine learning application. The model designer then uses computing statistics of previous models and machine learning applications to determine hardware limitations or restrictions to be placed on the machine learning application or model. The designer then adds or adjusts the source code to enforce these hardware limitations and restrictions.
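The limit-derivation step might look like the following; the use of a median with a headroom factor, and the telemetry values, are purely illustrative assumptions:

```python
# Assumed sketch: derive a memory budget from statistics of prior model runs
# and enforce it before training begins. The headroom policy is invented.
import statistics

prior_peak_memory_gb = [3.1, 2.8, 3.5, 2.9, 3.2]  # assumed telemetry from past runs

def memory_budget(history, headroom=1.1):
    """Budget = median of historical peak usage plus a small headroom factor."""
    return statistics.median(history) * headroom

def check_model_fits(estimated_gb, history):
    """Refuse to run a model whose estimated memory exceeds the derived budget."""
    budget = memory_budget(history)
    if estimated_gb > budget:
        raise MemoryError(f"model needs {estimated_gb} GB, budget is {budget:.2f} GB")
    return True

print(check_model_fits(3.0, prior_peak_memory_gb))
```

The generated source code would embed a guard of this kind so the restriction is enforced at run time, not just recommended.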

Techniques for large-scale functional testing in cloud-computing environments

Techniques are disclosed for generating an execution plan for performing functional tests in a cloud-computing environment. Infrastructure resources and capabilities (e.g., system requirements) may be defined within an infrastructure object (e.g., a resource of a declarative infrastructure provisioner) that stores a code segment that implements the resource or capability. Metadata may be maintained that indicates what particular capabilities are applicable to each infrastructure resource. Using the metadata, the system can generate an execution plan by combining code segments for each resource with code segments defining each capability in accordance with the metadata. The execution plan may include programmatic instructions that, when executed, generate a set of test results. The system can execute instructions that cause the set of test results to be presented at a user device.
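The plan-generation step can be sketched as below; the resource names, capability names, and code segments are invented placeholders, and the real segments live inside declarative infrastructure objects rather than Python dicts:

```python
# Illustrative composition of an execution plan from metadata: each resource's
# code segment is paired with the segments of its applicable capabilities.
resource_segments = {
    "vm": "provision_vm()",
    "db": "provision_db()",
}
capability_segments = {
    "snapshot": "test_snapshot()",
    "failover": "test_failover()",
}
# Metadata indicating which capabilities apply to which infrastructure resource.
applicability = {"vm": ["snapshot"], "db": ["snapshot", "failover"]}

def build_execution_plan():
    """Emit, per resource, its segment followed by applicable capability tests."""
    plan = []
    for resource, caps in applicability.items():
        plan.append(resource_segments[resource])
        plan.extend(capability_segments[c] for c in caps)
    return plan

print(build_execution_plan())
```

Executing the plan's instructions would produce the set of test results to present at a user device.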

System and method for detecting errors in a task workflow from a video stream

A system for detecting errors in task workflows records a real-time video feed that shows a plurality of steps being performed to accomplish a plurality of tasks through an automation process system. The system splits the video feed into a plurality of video recordings at valid breakpoints determined through a cognitive machine learning engine, where each video recording shows a single task. For each task from among the plurality of tasks, the system determines whether the task fails and the exact point of failure for that task. If the system determines that the task fails, the system determines a particular step where the task fails, flags the particular step as a failed step, and reports the flagged step for troubleshooting.
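The split-then-flag logic can be sketched over pre-labeled data; the frame records and breakpoint indices below are stand-ins for what the vision pipeline and the cognitive ML engine would actually produce:

```python
# Assumed data shapes: each "frame" is a step record, and breakpoints are frame
# indices separating tasks, as a stand-in for ML-detected task boundaries.
def split_at_breakpoints(frames, breakpoints):
    """Split the feed into one recording per task at the given frame indices."""
    bounds = [0] + sorted(breakpoints) + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]

def first_failed_step(recording):
    """Return the step id of the first failing step, or None if the task passed."""
    for step in recording:
        if not step["ok"]:
            return step["step"]
    return None

feed = [{"step": 1, "ok": True}, {"step": 2, "ok": True},   # task A
        {"step": 1, "ok": True}, {"step": 2, "ok": False}]  # task B fails at step 2
tasks = split_at_breakpoints(feed, [2])
flags = [first_failed_step(t) for t in tasks]
print(flags)
```

Each non-`None` entry corresponds to a flagged step that would be reported for troubleshooting.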

Auto-intrusive data pattern and test case generation for system validation

Techniques for auto-intrusive data pattern and test case generation for negative service testing are described. A test engine obtains negative test information specifying negative test input examples or schemas associated with tests that are expected to fail. A test generator generates multiple test cases based on the negative test information. A test execution orchestrator splits each test case up into actions that are inserted into queues, where workflow execution agents perform the tests by reading from the queues and interacting with services. The tests may also include adjusting a rate of transactions allowed between top-level services and/or downstream services. Results from the testing are analyzed by a test analysis engine and used to inform the services or the test originator of test cases where the expected failures did not arise.
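The queue-driven execution model can be sketched with a single worker thread; the service is mocked, and the naming convention for negative cases (`neg-` prefix) is an assumption for illustration:

```python
# Minimal sketch: an agent thread drains an action queue, records pass/fail per
# test case, and analysis then checks that expected failures actually arose.
import queue
import threading

actions = queue.Queue()
results = {}

def fake_service(payload):
    # Stand-in for a real service call: reject malformed payloads.
    if payload is None:
        raise ValueError("malformed payload")
    return "ok"

def agent():
    while True:
        item = actions.get()
        if item is None:          # sentinel: shut the agent down
            break
        case_id, payload = item
        try:
            fake_service(payload)
            results[case_id] = "passed"
        except ValueError:
            results[case_id] = "failed"
        actions.task_done()

worker = threading.Thread(target=agent)
worker.start()
for case in [("neg-1", None), ("pos-1", {"field": 1})]:  # neg-1 expects failure
    actions.put(case)
actions.put(None)
worker.join()

# Analysis step: negative cases that passed are flagged back to the originator.
unexpected = [c for c, r in results.items() if c.startswith("neg") and r != "failed"]
print("unexpected passes:", unexpected)
```

A real orchestrator would run many agents against many queues and additionally throttle transaction rates between top-level and downstream services.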

DEFECT REPORTING IN APPLICATION TESTING
20180004648 · 2018-01-04

The present subject matter relates to defect reporting in application testing. In an implementation, a category of application testing is determined based on a testing instance of an application. The category of application testing is indicative of an aspect of the application being tested. A list of previously reported defects associated with the determined category of application testing is displayed in a display layer over the testing instance of the application. A first user-input indicative of one of acceptance and rejection of a previously reported defect, from the list, with respect to the testing instance of the application is received. The first user-input is aggregated with previous user-inputs indicative of one of acceptance and rejection of the previously reported defect. It is determined whether the previously reported defect is irrelevant to the testing instance of the application based on the aggregation.
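The aggregation step admits many rules; the majority-of-rejections rule below is an illustrative stand-in, since the abstract does not fix a formula:

```python
# Hypothetical aggregation: a defect becomes irrelevant once rejections
# dominate the accumulated accept/reject user inputs. Threshold assumed.
def aggregate_vote(previous_votes, new_vote):
    """Append the new accept/reject input to the stored votes for a defect."""
    return previous_votes + [new_vote]

def is_irrelevant(votes, reject_ratio=0.5):
    """Mark a previously reported defect irrelevant once rejections dominate."""
    if not votes:
        return False
    rejections = sum(1 for v in votes if v == "reject")
    return rejections / len(votes) > reject_ratio

votes = ["accept", "reject", "reject"]
votes = aggregate_vote(votes, "reject")
print(is_irrelevant(votes))  # 3 of 4 inputs rejected the defect
```

Defects judged irrelevant could then be filtered from the display layer for future testing instances in that category.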