Patent classifications
G06F11/328
GREEN CLOUD COMPUTING RECOMMENDATION SYSTEM
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating customized recommendations for environmentally-conscious cloud computing frameworks for replacing computing resources of existing datacenters. One of the methods involves receiving, through a user interface presented on a display of a computing device, data regarding a user's existing datacenter deployment and the user's preferences for the new cloud computing framework, generating one or more recommendations for environmentally-conscious cloud computing frameworks based on the received data, and presenting such recommendations through the user interface for the user's review and consideration.
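The recommendation step described above can be sketched as a simple scoring function. The catalog fields (carbon intensity, region) and preference weights below are illustrative assumptions; the abstract does not specify the criteria used to rank frameworks.

```python
def recommend_frameworks(deployment, preferences, catalog):
    """Rank candidate cloud frameworks against the user's existing
    deployment and stated preferences.

    `deployment`, `preferences`, and `catalog` shapes are hypothetical:
    each catalog entry is assumed to carry a normalized carbon_intensity
    in [0, 1] (lower is greener) and a hosting region.
    """
    def score(fw):
        s = 0.0
        # Reward low carbon intensity, scaled by the user's stated weight.
        s += preferences.get("carbon_weight", 1.0) * (1.0 - fw["carbon_intensity"])
        # Reward frameworks in the same region as the existing datacenter.
        if fw["region"] == deployment.get("region"):
            s += preferences.get("locality_weight", 0.5)
        return s

    # Highest-scoring (greenest, best-matching) frameworks first.
    return sorted(catalog, key=score, reverse=True)
```

A framework with low carbon intensity in the user's current region would rank first under these assumed weights.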
SMART TEST DATA WORKLOAD GENERATION
In an approach for smart test data workload generation, a processor receives a plurality of expected image frames for a user interface application to be tested. The plurality of expected image frames is pre-defined and represents a series of workflows and operations of the user interface application expected based on a design requirement. A processor calculates a first set of hash-values, one for each corresponding expected image frame. A processor samples the user interface application at a frequency to capture a plurality of testing image frames during a test run of the user interface application. A processor calculates a second set of hash-values, one for each sampled testing image frame. A processor compares the first set of hash-values to the second set of hash-values. A processor verifies that the second set of hash-values matches the first set of hash-values.
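The hash-and-compare verification above can be sketched as follows. SHA-256 over raw frame bytes is a stand-in assumption; a production system would more likely use a perceptual hash so that minor rendering differences do not break the match.

```python
import hashlib


def frame_hash(frame_bytes: bytes) -> str:
    # SHA-256 of the raw frame bytes; an assumed stand-in for whatever
    # hash function the described approach actually uses.
    return hashlib.sha256(frame_bytes).hexdigest()


def verify_frames(expected_frames, sampled_frames):
    """Compare the hash set of expected frames against sampled frames."""
    expected_hashes = {frame_hash(f) for f in expected_frames}
    sampled_hashes = {frame_hash(f) for f in sampled_frames}
    missing = expected_hashes - sampled_hashes      # expected UI states never observed
    unexpected = sampled_hashes - expected_hashes   # observed states not in the design
    return {
        "match": not missing and not unexpected,
        "missing": missing,
        "unexpected": unexpected,
    }
```

Because sets are compared, the check tolerates oversampling (the same frame captured repeatedly) while still flagging any expected state that never appeared.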
DATABASE OBSERVATION SYSTEM
Systems, methods, and storage media are provided that are useful in a computing environment for receiving, modifying, and transforming service level information from database servers and entities in a hosted database environment. Multiple application programming interface (API) calls are made by a database observation system to request information for multiple service level indicators from database servers belonging to multiple different entities. The database observation system receives and aggregates the information for the multiple service level indicators from each of the database servers belonging to the multiple different entities. The database observation system provides, within a dashboard interface, the aggregated information for each of the multiple service level indicators, individual service level indicator scores, and aggregated service level indicator scores for each of the database servers for each of the multiple entities.
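The aggregation step can be sketched in a few lines. The response shape and the unweighted mean are assumptions; the abstract specifies only that per-indicator information is aggregated into individual and aggregate scores per server per entity.

```python
from statistics import mean


def aggregate_slis(responses):
    """Aggregate service level indicator (SLI) information per server.

    `responses` is a hypothetical wire format: a list of dicts like
    {"entity": "...", "server": "...", "slis": {"availability": 0.999, ...}}.
    Returns, per (entity, server), the individual SLI scores and a
    simple aggregate score (unweighted mean, an illustrative choice).
    """
    dashboard = {}
    for r in responses:
        key = (r["entity"], r["server"])
        dashboard[key] = {
            "indicators": dict(r["slis"]),
            "aggregate": mean(r["slis"].values()),
        }
    return dashboard
```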
DETERMINING METRICS FOR DATA RECORDS
A computer-based system may be configured to collect metadata and/or the like indicative of all the metrics exposed from a data pipeline (e.g., an ETL pipeline, etc.) and transform the metrics into a single group of user-facing, user-specific, user-configured, and/or the like metrics that allows the maturity and quality of data and/or data records to be analyzed and/or displayed. Collected metrics can be agnostic of a source data flow component of a data pipeline and/or resource technology (e.g., API, etc.). Collected metrics may indicate a measure of data freshness, data duplication, new data records, updated data records, data errors, and/or the like.
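A minimal sketch of computing such source-agnostic quality metrics over a batch of records follows. The record schema (`id`, ISO 8601 `updated_at`) is an assumption; the abstract names only the kinds of metrics (freshness, duplication, errors), not a data model.

```python
from datetime import datetime, timezone


def record_metrics(records, now=None):
    """Compute quality metrics over a list of record dicts.

    Each record is assumed to carry an 'id' and an ISO 8601
    'updated_at' timestamp; records that violate this are counted
    as errors rather than raising.
    """
    now = now or datetime.now(timezone.utc)
    seen, duplicates, errors, ages = set(), 0, 0, []
    for rec in records:
        rid = rec.get("id")
        if rid in seen:
            duplicates += 1          # same id seen more than once
        seen.add(rid)
        try:
            ts = datetime.fromisoformat(rec["updated_at"])
            ages.append((now - ts).total_seconds())
        except (KeyError, ValueError):
            errors += 1              # missing or malformed timestamp
    return {
        "record_count": len(records),
        "duplicate_count": duplicates,
        "error_count": errors,
        # Age in seconds of the newest valid record; None if none parsed.
        "freshness_seconds": min(ages) if ages else None,
    }
```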
Visualization of outliers in a highly-skewed distribution of telemetry data
Systems and methods for enhancing the representation of outliers in a distribution of telemetry data of a monitored system are provided. According to one embodiment, telemetry data of the monitored system may be continuously collected. Frequency values representing a frequency of occurrence of corresponding telemetry data of the collected telemetry data may be generated by aggregating the collected telemetry data. As the vast majority of telemetry data is expected to represent a normal operating state of the system and relatively few, if any, of the telemetry data (e.g., outliers) will be indicative of one or more events of significance, the resulting distribution of the frequency values is highly skewed. In order to facilitate visualization of the distribution that accentuates the outliers, display characteristics may be calculated for the frequency values by applying a visualization model based on a weighted combination of multiple data transformations to each of the frequency values.
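One way to realize "a weighted combination of multiple data transformations" is to blend a logarithmic and a linear transform of each frequency value into a display intensity. The specific transforms and the 0.7/0.3 weights below are illustrative assumptions, not the embodiment's actual model.

```python
import math


def display_intensity(freq, max_freq, w_log=0.7, w_linear=0.3):
    """Map a frequency value to a [0, 1] display intensity.

    The log component compresses the dominant bulk of the distribution,
    so low-frequency outliers remain visually distinguishable; the
    linear component preserves relative magnitude among frequent values.
    Weights are hypothetical defaults.
    """
    if max_freq <= 0:
        return 0.0
    log_part = math.log1p(freq) / math.log1p(max_freq)
    lin_part = freq / max_freq
    return w_log * log_part + w_linear * lin_part
```

Under a purely linear mapping, a value occurring once among a peak of 1000 would render at 0.1% intensity and be invisible; the blended mapping lifts it well above that floor.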
SYSTEMS AND METHODS FOR ANALYZING SOFTWARE AND TESTING INTEGRATION
An assessment system can generate a software quality value based on testing results and analysis of a multitude of factors that impact a readiness evaluation. For example, the system generates a software quality score (e.g., an Applause Quality Score “AQS”) that enables development teams to understand the level of quality they are achieving for a given release and build-over-build. In various examples, the system generates a data-driven score to enable development teams or quality assurance teams to make decisions for when a build is ready for release. In further embodiments, the system can integrate user interfaces that present a software quality score in a user dashboard that is linked to version control systems. On review and acceptance of the score, a user can trigger the release of their new code or product.
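A data-driven quality score of this kind is often a weighted combination of normalized factors. The factor names and weights below are purely illustrative assumptions and do not reflect Applause's actual AQS formula.

```python
def quality_score(factors, weights):
    """Combine normalized factor scores (each in [0, 100]) into a
    single software quality score via a weighted average.

    `factors` and `weights` are dicts keyed by factor name; the names
    used in the example are hypothetical.
    """
    total_w = sum(weights[f] for f in factors)
    return sum(factors[f] * weights[f] for f in factors) / total_w


# Hypothetical factors for one build of a release.
factors = {"test_pass_rate": 92.0, "open_defects": 70.0, "coverage": 85.0}
weights = {"test_pass_rate": 0.5, "open_defects": 0.3, "coverage": 0.2}
```

Tracking this score build-over-build gives a release team a single threshold to gate on, while the per-factor inputs remain inspectable when the score dips.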
SYSTEMS AND METHODS FOR NARROWING THE SCOPE OF A PROBLEM WHEN A MODEM IS BRICKED
Embodiments of the systems and methods disclosed herein relate to a modem having a processor including a Unified Extensible Firmware Interface (UEFI) driver. The UEFI driver can be configured to provide a software interface between an operating system for the modem and firmware for the modem. The modem can include a boot diagnostic driver configured to run from the UEFI driver and execute a diagnostic test when the modem is booting up. The boot diagnostic driver can be configured to generate a signal based on a result of the diagnostic test.
MANAGING DATABASE QUOTAS WITH A SCALABLE TECHNIQUE
A method and system for providing a scaling quota for a database system have been developed. In the method, a product is defined by a client using a quota application programming interface (API). A report, unique to the defined product, is created with the quota API and specifies a product quota and a limit endpoint for the report. The product quota is managed with a message broker by keeping an updated quota count for each report and product quota. An approval or rejection message is generated by the message broker for the client once the updated quota count reaches the limit endpoint. Finally, a response to the approval or rejection message from the client is generated for the database client by a limit provider API.
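The broker's bookkeeping can be sketched as an in-memory counter per report. The class and method names are illustrative, and the approve-until-limit semantics is an assumption about how the limit endpoint is interpreted.

```python
class QuotaBroker:
    """Minimal in-memory stand-in for the message broker that keeps an
    updated quota count per report and emits approval or rejection
    relative to the report's limit endpoint."""

    def __init__(self):
        self.reports = {}  # report_id -> {"count": int, "limit": int}

    def define_report(self, report_id, limit):
        # A report is unique to a defined product and specifies its
        # product quota limit endpoint.
        self.reports[report_id] = {"count": 0, "limit": limit}

    def record_usage(self, report_id, amount=1):
        """Increment the quota count and return the broker's message:
        'approved' while the count is within the limit, 'rejected' once
        the limit endpoint is exceeded (assumed semantics)."""
        rep = self.reports[report_id]
        rep["count"] += amount
        return "rejected" if rep["count"] > rep["limit"] else "approved"
```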
SCANNING A COMPUTING SYSTEM TO DETERMINE COMPUTING PROFILE AND PROBLEMS TO RECOMMEND AN ACTION TO INITIATE WITH THE COMPUTING SYSTEM
Provided are a computer program product, system, and method for scanning a computing system to determine a computing system profile and problems to recommend actions to initiate with the computing system. A package is transmitted to the computing system including package code to scan the computing system to determine a computing system profile comprising a computing architecture and installed applications at the computing system. The computing system profile is processed to determine a recommended action to perform with respect to the computing system to improve operations of the computing system based on the computing system profile. A display element is generated in a user interface with information on the recommended action to enable a user of the computing system to implement the recommended action. The package code executes within the computing system without communicating over a network to an external system outside of a computing environment of the computing system.
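The profile-to-recommendation step can be sketched as rule matching over the scanned profile. The profile fields and the two example rules are hypothetical; the abstract describes the mechanism (local scan, recommendation, display element), not a rule set.

```python
def recommend_actions(profile, rules=None):
    """Map a scanned computing system profile to recommended actions.

    `profile` is an assumed dict of scan results; each rule is a
    (predicate, action) pair, and every rule whose predicate matches
    contributes its action to the recommendations.
    """
    rules = rules or [
        # Illustrative rules only.
        (lambda p: p.get("free_disk_gb", 100) < 10,
         "Free up disk space or expand storage"),
        (lambda p: "antivirus" not in p.get("installed_apps", []),
         "Install endpoint protection"),
    ]
    return [action for predicate, action in rules if predicate(profile)]
```

Because the rules run over an already-collected local profile, the evaluation itself needs no network access, matching the abstract's constraint that the package code not communicate with external systems.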
Multiple Application Smoke Test
A collection of multiple software applications may be tested when a patch is issued. However, an emergency patch for a time-sensitive incident may not allow for full regression or functional testing. Provided herein are techniques for performing a multiple application smoke test. Access information, login information, and a success indicator are obtained for each of a plurality of software applications. A test plan including two or more test packages is determined. Each test package indicates a subset of the applications and includes the access information, login information, and success indicators corresponding to that subset. The test packages are executed in parallel, including authenticating, loading, and validating. Logs are generated, and a user interface is provided to present the logs, whether validation of each application interface passed or failed, and a failure reason for failed tests.
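The parallel execution and logging described above can be sketched with a thread pool. The package shape and the `check_app` callable (standing in for the authenticate/load/validate sequence against each app's access information, login information, and success indicator) are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor


def run_package(package, check_app):
    """Execute one test package: check each application in the
    package's subset and log a pass/fail plus failure reason."""
    logs = []
    for app in package["apps"]:
        ok, reason = check_app(app)  # stands in for authenticate/load/validate
        logs.append({
            "app": app["name"],
            "result": "passed" if ok else "failed",
            "reason": None if ok else reason,
        })
    return logs


def run_smoke_test(packages, check_app):
    """Run all test packages in parallel and merge their logs."""
    with ThreadPoolExecutor(max_workers=len(packages)) as pool:
        results = list(pool.map(lambda p: run_package(p, check_app), packages))
    return [entry for logs in results for entry in logs]
```

Running packages concurrently keeps the wall-clock cost of an emergency-patch smoke test close to that of the slowest package rather than the sum of all of them.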