Patent classifications
G06F11/0709
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
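The routing flow described above can be sketched as a small lookup structure. This is an illustrative sketch, not the patented implementation; the names (`AddressMap`, `route_request`) and the dictionary representation are assumptions.

```python
class AddressMap:
    """Map of logical addresses to (memory_device, physical_address) pairs."""

    def __init__(self):
        self._map = {}

    def add(self, logical, device, physical):
        self._map[logical] = (device, physical)

    def resolve(self, logical):
        if logical not in self._map:
            raise KeyError(f"no mapping for logical address {logical:#x}")
        return self._map[logical]

def route_request(address_map, logical_address):
    """Determine the physical address for the requested region and return
    the memory device that backs it, as the memory controller would when
    routing the access request."""
    device, physical = address_map.resolve(logical_address)
    return device, physical

amap = AddressMap()
amap.add(0x1000, "dimm0", 0x8000)
print(route_request(amap, 0x1000))  # ('dimm0', 32768)
```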
ANOMALY DETECTION USING TENANT CONTEXTUALIZATION IN TIME SERIES DATA FOR SOFTWARE-AS-A-SERVICE APPLICATIONS
A system may include a historical time series data store that contains electronic records associated with Software-as-a-Service (“SaaS”) applications in a multi-tenant cloud computing environment (including time series data representing execution of the SaaS applications). A monitoring platform may retrieve time series data for the monitored SaaS application from the historical time series data store and create tenant vector representations associated with the retrieved time series data. The monitoring platform may then provide the retrieved time series data and tenant vector representations together as final input vectors to an autoencoder to produce an output including at least one of a tenant-specific reconstruction loss and tenant-specific thresholds for the monitored SaaS application. The monitoring platform may utilize the output of the autoencoder to automatically detect an anomaly associated with the monitored SaaS application.
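The tenant-contextualization step above can be sketched by concatenating a one-hot tenant vector onto each time-series window and thresholding the reconstruction loss per tenant. This is an illustrative sketch, not the patented implementation: the autoencoder is stubbed with an identity function, and all names and the mean-plus-k-sigma threshold are assumptions.

```python
import random
import statistics

def tenant_vector(tenant_id, num_tenants):
    """One-hot tenant representation appended to each input window."""
    return [1.0 if i == tenant_id else 0.0 for i in range(num_tenants)]

def final_input(window, tenant_id, num_tenants):
    """'Final input vector': time-series window plus tenant vector."""
    return list(window) + tenant_vector(tenant_id, num_tenants)

def reconstruction_loss(autoencoder, x):
    """Mean squared error between input and its reconstruction."""
    recon = autoencoder(x)
    return sum((r - v) ** 2 for r, v in zip(recon, x)) / len(x)

def tenant_threshold(losses, k=3.0):
    """Tenant-specific threshold from that tenant's historical losses."""
    return statistics.mean(losses) + k * statistics.pstdev(losses)

def is_anomaly(autoencoder, window, tenant_id, num_tenants, threshold):
    x = final_input(window, tenant_id, num_tenants)
    return reconstruction_loss(autoencoder, x) > threshold

# A trained autoencoder would go here; identity stands in for the sketch.
identity = lambda x: x
history = [reconstruction_loss(identity,
                               final_input([random.random() for _ in range(8)], 0, 4))
           for _ in range(50)]
threshold = tenant_threshold(history)
print(is_anomaly(identity, [0.5] * 8, 0, 4, threshold))  # False for the identity stub
```

Because the threshold is derived from each tenant's own loss history, a workload pattern that is normal for one tenant is not flagged just because it would be unusual for another.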
Scalable runtime validation for on-device design rule checks
An apparatus to facilitate scalable runtime validation for on-device design rule checks is disclosed. The apparatus includes a memory to store a contention set, one or more multiplexers, and a validator communicably coupled to the memory. In one implementation, the validator is to: receive design rule information for the one or more multiplexers, the design rule information referencing the contention set; analyze, using the design rule information, a user bitstream against the contention set at a programming time of the apparatus, the user bitstream for programming the one or more multiplexers; and provide an error indication responsive to identifying a match between the user bitstream and the contention set.
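The programming-time check above can be sketched as a membership test. This is a hedged sketch under assumptions not in the abstract: a contention entry is modeled here as a set of mux-select bit positions that must never all be enabled together, which is an illustrative encoding rather than the patented one.

```python
def check_bitstream(bitstream, contention_set):
    """Return the first contention entry fully matched by the user
    bitstream (the error indication), or None if it is safe to program."""
    enabled = {i for i, bit in enumerate(bitstream) if bit == 1}
    for entry in contention_set:
        if entry <= enabled:        # every contended select bit is set
            return entry
    return None

contention_set = [{0, 3}, {1, 2, 5}]
print(check_bitstream([1, 0, 0, 1, 0, 0], contention_set))  # {0, 3}
print(check_bitstream([1, 1, 0, 0, 0, 0], contention_set))  # None
```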
Predicting and managing requests for computing resources or other resources
Requests for computing resources and other resources can be predicted and managed. For example, a system can determine a baseline prediction indicating a number of requests for an object over a future time period. The system can then execute a first model to generate a first set of values based on seasonality in the baseline prediction, a second model to generate a second set of values based on short-term trends in the baseline prediction, and a third model to generate a third set of values based on the baseline prediction. The system can select a most accurate model from among the three models and generate an output prediction by applying the set of values output by the most accurate model to the baseline prediction. Based on the output prediction, the system can cause an adjustment to be made to a provisioning process for the object.
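The select-then-apply step above can be sketched as follows. This is a minimal sketch, not the patented method: the three models are stubbed as precomputed value sets, accuracy is scored with mean absolute error against a holdout, and additive application of the values to the baseline is an assumption.

```python
def mae(pred, actual):
    """Mean absolute error, used here as the accuracy criterion."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def apply_adjustment(baseline, values):
    """Apply a model's set of values to the baseline prediction."""
    return [b + v for b, v in zip(baseline, values)]

def best_output(baseline, candidate_values, holdout_actuals):
    """Score each model's values against the holdout and return the
    output prediction from the most accurate one."""
    scored = [(mae(apply_adjustment(baseline, v), holdout_actuals), v)
              for v in candidate_values]
    _, best_values = min(scored, key=lambda t: t[0])
    return apply_adjustment(baseline, best_values)

baseline = [100, 110, 120]
seasonal = [5, -5, 5]    # first model: seasonality-based values
trend = [2, 4, 6]        # second model: short-term-trend values
plain = [0, 0, 0]        # third model: baseline-only values
actual = [103, 113, 125]
print(best_output(baseline, [seasonal, trend, plain], actual))  # [102, 114, 126]
```

Here the trend model wins (error 1.0 versus roughly 3.3 and 3.7), so its values produce the output prediction that drives provisioning.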
Transportation of configuration data with error mitigation
A method for mitigating errors in the transportation of configuration data may include identifying, at a development system, dependent configuration data associated with a first transport request. The dependent configuration data may implement a customization to a software application hosted at a production system. A reference table identifying the dependent configuration data may be sent to the production system. A missing object list identifying dependent configuration data absent from the production system may be generated at the production system based on the reference table. The missing object list may be sent to the development system where a corrective action may be performed such that the dependent configuration data identified by the missing object list as being absent from the production system is sent to the production system in the first transport request and/or a second transport request. Related systems and articles of manufacture, including computer program products, are also provided.
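The reference-table/missing-object handshake above can be sketched in a few functions. This is an illustrative sketch; the function names, object names, and record shapes are assumptions, not SAP transport internals.

```python
def build_reference_table(dependent_objects):
    """Development system: list the dependent configuration objects the
    transport request relies on."""
    return sorted(dependent_objects)

def build_missing_object_list(reference_table, production_objects):
    """Production system: report which referenced objects it lacks."""
    return [obj for obj in reference_table if obj not in production_objects]

def corrective_transport(missing_object_list, repository):
    """Development system: gather the missing objects so they can be sent
    in the first (amended) or a second transport request."""
    return {obj: repository[obj] for obj in missing_object_list}

dev_repo = {"VIEW_A": "...", "TABLE_B": "...", "ROLE_C": "..."}
ref = build_reference_table(dev_repo)
missing = build_missing_object_list(ref, production_objects={"TABLE_B"})
print(corrective_transport(missing, dev_repo))  # objects to resend
```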
ENHANCED PERFORMANCE DIAGNOSIS IN A NETWORK COMPUTING ENVIRONMENT
Embodiments provide enhanced performance diagnosis in a network computing environment. In response to an occurrence of a performance issue for a node while under operating conditions, common logs for applications on the node are analyzed. The applications are respectively registered in advance for diagnosis services. The applications each register rules in advance for the diagnosis services. At a time of the performance issue, debug programs are automatically issued to generate debug level logs respectively for the applications. Debug level logs are analyzed according to the rules to determine a root cause of the performance issue. A potential solution to the root cause of the performance issue is determined using the rules, without having to recreate the operating conditions occurring during the performance issue. The potential solution to rectify the root cause of the performance issue is executed without having to recreate the operating conditions occurring during the performance issue.
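The rule-driven diagnosis above can be sketched as pattern rules registered per application and applied to debug-level logs. The rule format, application name, patterns, and remediations below are illustrative assumptions; the point is that a match yields a root cause and candidate solution without recreating the failing conditions.

```python
import re

# Rules each application registers in advance for the diagnosis service.
RULES = {
    "payments-app": [
        {"pattern": r"connection pool exhausted",
         "root_cause": "db-pool-exhaustion",
         "solution": "increase pool size"},
        {"pattern": r"GC pause \d{4,} ms",
         "root_cause": "long-gc-pause",
         "solution": "tune heap settings"},
    ],
}

def diagnose(app, debug_log_lines):
    """Apply an application's registered rules to its debug-level log,
    returning (root_cause, potential_solution) on the first match."""
    for rule in RULES.get(app, []):
        for line in debug_log_lines:
            if re.search(rule["pattern"], line):
                return rule["root_cause"], rule["solution"]
    return None

log = ["2024-01-01 WARN GC pause 1532 ms on node-7"]
print(diagnose("payments-app", log))  # ('long-gc-pause', 'tune heap settings')
```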
Role-based failure response training for distributed systems
Methods, systems, and computer-readable media for role-based failure response training for distributed systems are disclosed. A failure response training system determines a failure mode associated with an architecture for a distributed system comprising a plurality of components. The training system generates a scenario based at least in part on the failure mode. The scenario comprises an initial state of the distributed system which is associated with one or more initial metrics indicative of a failure. The training system provides, to a plurality of users, data describing the initial state. The training system solicits user input representing modification of a configuration of the components. The training system determines a modified state of the distributed system based at least in part on the input. The performance of the distributed system in the modified state is indicated by one or more modified metrics differing from the one or more initial metrics.
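The training loop above can be sketched as: generate a scenario from a failure mode, show trainees the failing metrics, apply their configuration changes, and compute the modified metrics. The failure mode, configuration keys, and metric model below are simplified assumptions for illustration only.

```python
def generate_scenario(failure_mode):
    """Produce an initial state (configuration plus failing metrics)
    for a given failure mode."""
    scenarios = {
        "cache-node-loss": {
            "config": {"cache_replicas": 1},
            "metrics": {"p99_latency_ms": 900, "error_rate": 0.08},
        },
    }
    return scenarios[failure_mode]

def apply_user_input(scenario, config_changes):
    """Determine the modified state from the trainees' configuration
    edits; here, restoring redundancy recovers the service metrics."""
    config = {**scenario["config"], **config_changes}
    metrics = dict(scenario["metrics"])
    if config.get("cache_replicas", 1) >= 2:
        metrics = {"p99_latency_ms": 45, "error_rate": 0.001}
    return {"config": config, "metrics": metrics}

scenario = generate_scenario("cache-node-loss")
modified = apply_user_input(scenario, {"cache_replicas": 3})
print(modified["metrics"])  # modified metrics differing from the initial ones
```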
Remote debug for scaled computing environments
Techniques and apparatus for remotely accessing debugging resources of a target system are described. A target system including physical compute resources, such as processors and a chipset, can be coupled to a controller remotely accessible over a network. The controller can be arranged to facilitate remote access to debug resources of the physical compute resources. The controller can be coupled to debug pins, such as those of a debug port, and arranged to assert control signals on the pins to access debug resources. The controller can also be arranged to exchange information elements with a remote debug host, including indications of debug operations and/or debug results.
Anti-pattern detection in extraction and deployment of a microservice
Disclosed are various embodiments for anti-pattern detection in extraction and deployment of a microservice. A software modernization service is executed to analyze a computing application to identify various application components. When one or more of the application components are specified to be extracted as an independently deployable subunit, anti-patterns associated with deployment of the independently deployable subunit are determined prior to extraction. Anti-patterns may include increases in execution time, bandwidth, network latency, central processing unit (CPU) usage, and memory usage, among other anti-patterns. The independently deployable subunit is selectively deployed separate from the computing application based on the identified anti-patterns.
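The pre-extraction gate above can be sketched as a comparison of projected metrics against the in-process baseline. This is a hedged sketch: the metric names, relative-increase thresholds, and additive gating rule are illustrative assumptions, not the service's actual heuristics.

```python
THRESHOLDS = {              # max tolerated relative increase per metric
    "execution_time": 0.20,
    "network_latency": 0.50,
    "cpu_usage": 0.25,
    "memory_usage": 0.25,
}

def detect_anti_patterns(baseline, projected):
    """Return the metrics whose projected increase after extraction
    exceeds its threshold (the anti-patterns)."""
    flagged = []
    for metric, limit in THRESHOLDS.items():
        increase = (projected[metric] - baseline[metric]) / baseline[metric]
        if increase > limit:
            flagged.append(metric)
    return flagged

def should_deploy(baseline, projected):
    """Selectively deploy the subunit only if no anti-pattern is found."""
    return not detect_anti_patterns(baseline, projected)

baseline = {"execution_time": 100, "network_latency": 10,
            "cpu_usage": 40, "memory_usage": 512}
projected = {"execution_time": 115, "network_latency": 25,
             "cpu_usage": 44, "memory_usage": 540}
print(detect_anti_patterns(baseline, projected))  # ['network_latency']
```

In this example the 2.5x latency projection is flagged while the other increases stay under their limits, so deployment as a separate subunit would be withheld.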
Method and system for auto live-mounting database golden copies
A method and system for auto live-mounting database golden copies. Specifically, the disclosed method and system entail reactively auto live-mounting golden copy databases on hosts or proxy hosts based on the operational state of one or more database hosts and/or one or more assets (or databases) residing on the database host(s). Should a database host prove unresponsive through periodic monitoring, databases residing on that host may be brought back online on a proxy database host using the stored golden copies of those databases. Alternatively, should a given database on any database host exhibit an operational abnormality (e.g., an error, failure, etc.), the given database may be brought back online on the database host or a proxy database host using a stored golden copy of that database. Accordingly, through the disclosed method and system, database outages may be minimized.
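One pass of the monitoring logic above can be sketched as follows. This is an illustrative sketch under assumptions: the host-state dictionary, golden-copy identifiers, and `monitor_and_remount` name are invented for the example, and the function returns remount actions rather than performing them.

```python
def monitor_and_remount(hosts, golden_copies, proxy_host):
    """hosts: {host: {"responsive": bool, "databases": {db: "ok"|"error"}}}.
    Returns (database, target_host, golden_copy) remount actions."""
    actions = []
    for host, state in hosts.items():
        if not state["responsive"]:
            # Host down: bring every resident database online on the proxy.
            for db in state["databases"]:
                actions.append((db, proxy_host, golden_copies[db]))
        else:
            # Host up: remount only databases showing an abnormality,
            # in place on the same host.
            for db, status in state["databases"].items():
                if status != "ok":
                    actions.append((db, host, golden_copies[db]))
    return actions

hosts = {
    "db-host-1": {"responsive": False, "databases": {"sales": "ok"}},
    "db-host-2": {"responsive": True, "databases": {"hr": "error", "crm": "ok"}},
}
golden = {"sales": "gc-sales-0412", "hr": "gc-hr-0412", "crm": "gc-crm-0412"}
print(monitor_and_remount(hosts, golden, "proxy-1"))
```

The unresponsive host's database lands on the proxy from its golden copy, while the abnormal database on the healthy host is remounted in place; healthy databases are untouched.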