Patent classifications
G06F11/3055
Device telemetry control
Various example embodiments for supporting device telemetry control are presented. Various example embodiments may provide a customer of a device with control over device telemetry of the device, where the customer monitors the device based on device telemetry and the device exposes device data based on device telemetry control information of the device such that the device data may be accessed by the customer. Various example embodiments may provide the customer with additional control over access to the device data via device telemetry by enabling the customer to insert customer device telemetry control information into the device telemetry control information of the device that controls device telemetry on the device.
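As a rough illustration of this merge of control information, the sketch below models telemetry control information as a per-metric rule set into which customer rules are inserted. All names (TelemetryControl, insert_customer_rules, exposed_metrics) and the rule format are invented here, not taken from the embodiments.

```python
# Hypothetical sketch: customer telemetry control rules are inserted into the
# device's telemetry control information, overriding matching entries, and the
# merged rules decide which device data is exposed.

class TelemetryControl:
    def __init__(self):
        # Device-side rules: metric name -> whether it is exposed.
        self.rules = {"cpu_util": True, "mem_util": True, "fan_speed": False}

    def insert_customer_rules(self, customer_rules: dict[str, bool]) -> None:
        # Customer device telemetry control information is inserted into the
        # device's control information.
        self.rules.update(customer_rules)

    def exposed_metrics(self, telemetry: dict[str, float]) -> dict[str, float]:
        # Only expose device data permitted by the merged control information.
        return {k: v for k, v in telemetry.items() if self.rules.get(k, False)}

ctrl = TelemetryControl()
ctrl.insert_customer_rules({"fan_speed": True, "mem_util": False})
print(ctrl.exposed_metrics({"cpu_util": 0.42, "mem_util": 0.77, "fan_speed": 1200.0}))
# -> {'cpu_util': 0.42, 'fan_speed': 1200.0}
```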
Policy enforcement and performance monitoring at sub-LUN granularity
Techniques are provided for enforcing policies at a sub-logical unit number (LUN) granularity, such as at a virtual disk or virtual machine granularity. A block range of a virtual disk of a virtual machine stored within a LUN is identified. A quality of service policy object is assigned to the block range to create a quality of service workload object. A target block range targeted by an operation is identified. A quality of service policy of the quality of service policy object is enforced upon the operation using the quality of service workload object based upon the target block range being within the block range of the virtual disk.
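The block-range matching and enforcement step might look like the following sketch, which uses a hypothetical per-second IOPS budget as the quality of service policy; the names and the simple counting scheme are assumptions for illustration, not the patented mechanism.

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    max_iops: int  # hypothetical policy attribute

@dataclass
class QosWorkload:
    # A QoS policy object assigned to a block range of a virtual disk
    # creates a QoS workload object (names are illustrative).
    start_block: int
    end_block: int
    policy: QosPolicy
    ops_this_second: int = 0

def find_workload(workloads, target_start, target_end):
    # Identify the workload whose block range contains the operation's targets.
    for w in workloads:
        if w.start_block <= target_start and target_end <= w.end_block:
            return w
    return None

def admit(workload: QosWorkload) -> bool:
    # Enforce the policy: throttle once the per-second budget is exhausted.
    if workload.ops_this_second >= workload.policy.max_iops:
        return False
    workload.ops_this_second += 1
    return True

lun = [QosWorkload(0, 8191, QosPolicy(max_iops=100)),      # virtual disk A
       QosWorkload(8192, 16383, QosPolicy(max_iops=500))]  # virtual disk B
w = find_workload(lun, 8200, 8207)
print(admit(w))  # True until virtual disk B's budget for this second is spent
```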
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
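A minimal sketch of the translation-and-routing step, assuming a page-granular map and invented device identifiers:

```python
# Translate a logical address from an accelerator's memory access request and
# route it to the memory device owning the physical address. The page size and
# map layout are assumptions, not taken from the abstract.

PAGE = 4096

# Map of logical page -> (memory device id, physical page).
address_map = {0: ("mem-dev-0", 7), 1: ("mem-dev-1", 2), 2: ("mem-dev-0", 9)}

def route_access(logical_addr: int) -> tuple[str, int]:
    page, offset = divmod(logical_addr, PAGE)
    device, phys_page = address_map[page]     # determine the physical address
    return device, phys_page * PAGE + offset  # route request to that device

print(route_access(4100))  # -> ('mem-dev-1', 8196)
```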
DYNAMIC RESOURCE PROVISIONING FOR USE CASES
A computer-implemented method, according to one embodiment, includes: receiving, at a computer, a request to facilitate a testing environment, where the request specifies a number and type of resources to be included in the testing environment. A database that lists available resources in systems and/or devices in communication with the computer is inspected, and the available resources are compared to the number and type of resources specified in the request. In response to determining that a valid combination of the available resources meets the number and type of resources specified in the request, the database is updated to indicate that each of the resources in the valid combination is in use. Moreover, the request is satisfied by returning information about the resources in the valid combination.
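The matching and bookkeeping steps could be sketched as follows, with an in-memory list standing in for the database and all field names assumed for illustration:

```python
resources = [
    {"id": "r1", "type": "gpu", "in_use": False},
    {"id": "r2", "type": "gpu", "in_use": False},
    {"id": "r3", "type": "ssd", "in_use": False},
]

def provision(request: dict[str, int]):
    # Compare available resources against the number and type requested.
    chosen = []
    for rtype, count in request.items():
        free = [r for r in resources if r["type"] == rtype and not r["in_use"]]
        if len(free) < count:
            return None  # no valid combination meets the request
        chosen.extend(free[:count])
    for r in chosen:
        r["in_use"] = True  # update the database: combination now in use
    return [r["id"] for r in chosen]  # satisfy the request with resource info

print(provision({"gpu": 2, "ssd": 1}))  # -> ['r1', 'r2', 'r3']
print(provision({"gpu": 1}))            # -> None (all GPUs now in use)
```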
TOOL FOR BUSINESS RESILIENCE TO DISASTER
Methods, systems, and computer programs are presented for estimating downtime and recovery time after a disaster. One method includes an operation for calculating component fragility functions for components of a facility that are vulnerable to damage in a disaster. Further, the method includes calculating component recovery functions for the components of the facility; the component recovery functions indicate a probability of recovery over time following the disaster. The method further includes operations for calculating a facility fragility function and a facility recovery function based on the component fragility functions and the component recovery functions, and for determining a downtime for the facility for a given intensity associated with the disaster. Further, the method includes an operation for causing presentation of the downtime for the facility on a user interface (UI).
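As a worked illustration of combining component-level functions into a facility-level downtime estimate, the sketch below assumes lognormal fragility and recovery curves and treats the facility as operational once every component is either undamaged or recovered. Both assumptions, and all the numbers, are mine, not the method's.

```python
import math

def lognormal_cdf(x, median, beta):
    return 0.5 * (1 + math.erf(math.log(x / median) / (beta * math.sqrt(2))))

def p_damage(intensity, median, beta=0.5):
    # Component fragility: probability of damage at a given disaster intensity.
    return lognormal_cdf(intensity, median, beta)

def p_recovered(t, median_days, beta=0.6):
    # Component recovery: probability the component is restored by day t.
    return lognormal_cdf(t, median_days, beta)

components = [  # (name, fragility median intensity, recovery median days)
    ("power", 0.4, 3.0),
    ("servers", 0.6, 7.0),
]

def facility_up_probability(intensity, t):
    # Facility is up when every component is undamaged or already recovered.
    p = 1.0
    for _name, frag_med, rec_med in components:
        pd = p_damage(intensity, frag_med)
        p *= (1 - pd) + pd * p_recovered(t, rec_med)
    return p

def downtime(intensity, threshold=0.95):
    # Downtime: first day the facility is up with probability >= threshold.
    t = 0.5
    while facility_up_probability(intensity, t) < threshold:
        t += 0.5
    return t

print(f"estimated downtime at intensity 0.7: {downtime(0.7):.1f} days")
```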
Communication between independent containers
Techniques related to communication between independent containers are provided. In an embodiment, a first programmatic container includes one or more first namespaces in which an application program is executing. A second programmatic container includes one or more second namespaces in which a monitoring agent is executing. The one or more first namespaces are independent of the one or more second namespaces. A monitoring agent process hosts the monitoring agent. The monitoring agent is programmed to receive an identifier of the application program. The monitoring agent is further programmed to switch the monitoring agent process from the one or more second namespaces to the one or more first namespaces. After the switch, the monitoring agent process continues to execute in the second programmatic container, but communication is enabled between the application program and the monitoring agent via the monitoring agent process.
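On Linux, the namespace switch described above can be sketched with os.setns (available in Python 3.12+). How the agent learns the application's PID, and how privileges are arranged, are assumptions left out of the sketch.

```python
import os

def join_namespaces(target_pid: int, ns_names=("net",)) -> None:
    # Switch this process into the listed namespaces of the target process
    # (e.g. "net", "ipc", "uts") by opening /proc/<pid>/ns/<name>.
    for ns in ns_names:
        fd = os.open(f"/proc/{target_pid}/ns/{ns}", os.O_RDONLY)
        try:
            os.setns(fd, 0)  # 0: let the kernel infer the namespace type
        finally:
            os.close(fd)

# After join_namespaces(app_pid), the agent process can reach the application
# through interfaces visible only inside the application's namespaces, while
# still running as a process of its own container.
```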
Role-based failure response training for distributed systems
Methods, systems, and computer-readable media for role-based failure response training for distributed systems are disclosed. A failure response training system determines a failure mode associated with an architecture for a distributed system comprising a plurality of components. The training system generates a scenario based at least in part on the failure mode. The scenario comprises an initial state of the distributed system which is associated with one or more initial metrics indicative of a failure. The training system provides, to a plurality of users, data describing the initial state. The training system solicits user input representing modification of a configuration of the components. The training system determines a modified state of the distributed system based at least in part on the input. The performance of the distributed system in the modified state is indicated by one or more modified metrics differing from the one or more initial metrics.
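A toy sketch of that loop, with invented failure modes and a deliberately simple metric model (real training systems would simulate the architecture rather than scale numbers):

```python
import random

FAILURE_MODES = {
    "replica_loss": {"error_rate": 0.30, "p99_latency_ms": 900.0},
    "cache_outage": {"error_rate": 0.05, "p99_latency_ms": 2500.0},
}

def generate_scenario(architecture: list[str]) -> dict:
    # Pick a failure mode and derive an initial state whose metrics indicate it.
    mode = random.choice(list(FAILURE_MODES))
    return {"mode": mode,
            "components": {c: {"replicas": 2} for c in architecture},
            "metrics": dict(FAILURE_MODES[mode])}

def apply_user_input(scenario: dict, component: str, replicas: int) -> dict:
    # User input modifies a component's configuration; the modified state's
    # metrics improve with added capacity (toy model).
    scenario["components"][component]["replicas"] = replicas
    scale = 2 / max(replicas, 1)
    scenario["metrics"] = {k: round(v * scale, 3)
                           for k, v in scenario["metrics"].items()}
    return scenario

s = generate_scenario(["api", "db", "cache"])
print(s["mode"], s["metrics"])
print(apply_user_input(s, "db", 4)["metrics"])  # modified metrics differ
```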
Method and system for auto live-mounting database golden copies
A method and system for auto live-mounting database golden copies. Specifically, the disclosed method and system entail reactively auto live-mounting golden copies of databases on hosts or proxy hosts based on the operational state of one or more database hosts and/or one or more assets (or databases) residing on the database host(s). Should periodic monitoring reveal a database host to be unresponsive, the databases residing on that host may be brought back online on a proxy database host using stored golden copies of the respective databases. Alternatively, should a given database on any database host exhibit an operational abnormality (e.g., an error, a failure, etc.), the given database may be brought back online on the database host or a proxy database host using a stored golden copy of that database. Accordingly, through the disclosed method and system, database outages may be minimized.
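The reactive flow might be sketched as below, with the probe, health check, and live-mount operation stubbed out; a real system would call backup-storage and database APIs at those points.

```python
golden_copies = {"orders_db": "/backups/orders_db.golden",
                 "users_db": "/backups/users_db.golden"}

def host_responsive(host: str) -> bool:
    return host != "db-host-1"          # stub: pretend db-host-1 is down

def database_healthy(host: str, db: str) -> bool:
    return True                          # stub health check

def live_mount(db: str, target_host: str) -> None:
    print(f"live-mounting golden copy {golden_copies[db]} of {db} on {target_host}")

def monitor_once(hosts: dict[str, list[str]], proxy: str) -> None:
    for host, dbs in hosts.items():
        if not host_responsive(host):
            for db in dbs:               # whole host down: fail over its databases
                live_mount(db, proxy)
        else:
            for db in dbs:
                if not database_healthy(host, db):
                    live_mount(db, host)  # single-database abnormality

monitor_once({"db-host-1": ["orders_db"], "db-host-2": ["users_db"]},
             proxy="proxy-host-1")
```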
Adaptive time window-based log message deduplication
Example techniques for adaptive time window-based log message deduplication are described. In an example, message values are obtained from received log messages. Further, the number of log messages received in a time window that carry a given message value is counted. Upon expiry of the time window, a log message from which the message value was obtained is transmitted together with the counted number. The length of the time window in which a subsequent counting of log messages is to be performed is then determined based on various parameters.
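A sketch of the windowed counting and adaptive length choice follows. The halving/doubling rule and the message-value extraction are assumptions; the technique only says the next window length is derived from various parameters.

```python
import time

def dedup_stream(messages, window=2.0, min_w=0.5, max_w=16.0):
    # Deduplicate a live stream: emit one representative message plus a count
    # per distinct message value per time window.
    counts, sample, start = {}, {}, time.monotonic()
    for msg in messages:
        now = time.monotonic()
        if now - start >= window:                  # window expired
            for value, n in counts.items():
                yield sample[value], n             # transmit message + count
            total = sum(counts.values())
            # Adapt the next window: busy windows shrink, quiet windows grow.
            window = (max(min_w, window / 2) if total > 100
                      else min(max_w, window * 2))
            counts, sample, start = {}, {}, now
        value = msg.split(":", 1)[0]               # message-value extraction (assumed format)
        counts[value] = counts.get(value, 0) + 1
        sample.setdefault(value, msg)
    for value, n in counts.items():                # flush the final partial window
        yield sample[value], n
```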
Patient assurance system and method
In one example, an ambulatory medical device is provided. The ambulatory medical device includes a plurality of subsystems, at least one sensor configured to acquire data descriptive of a patient, a user interface, and at least one processor coupled to the at least one sensor and the user interface. The at least one processor is configured to identify subsystem status information descriptive of an operational status of each subsystem of the plurality of subsystems and to provide, via the user interface, a device health report for the ambulatory medical device, the device health report being based on the operational status of each subsystem.
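A toy aggregation of subsystem statuses into a device health report; the subsystem names and status values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    status: str  # e.g. "ok", "degraded", "fault" (assumed vocabulary)

def device_health_report(subsystems: list[Subsystem]) -> dict:
    # Roll the per-subsystem operational statuses up into an overall status.
    worst = "ok"
    for s in subsystems:
        if s.status == "fault":
            worst = "fault"
        elif s.status == "degraded" and worst == "ok":
            worst = "degraded"
    return {"overall": worst,
            "subsystems": {s.name: s.status for s in subsystems}}

report = device_health_report([Subsystem("battery", "ok"),
                               Subsystem("ecg_sensor", "degraded"),
                               Subsystem("alarm", "ok")])
print(report)  # the report would be presented via the device's user interface
```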