Patent classifications
G06F9/5055
CLOUD FEDERATION AS A SERVICE
A Cloud federator may be used to allow seamless and transparent access by a Cloud Client to Cloud services. Federation may be provided on various terms, including as a subscription-based, real-time online service to Cloud Clients. The Cloud federator may automatically and transparently effect communication between the Cloud Client and Clouds and desired services of the Clouds, and automatically perform identity federation. A Service Abstraction Layer (SAL) may be implemented to simplify Client communication, and Clouds/Cloud services may elect to support the SAL to facilitate federation of their services.
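The SAL idea above can be sketched as an adapter pattern: the federator exposes one uniform call interface and dispatches to per-cloud adapters behind it. All class and method names in this sketch are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of a Service Abstraction Layer (SAL): the federator
# exposes one uniform call interface and dispatches to per-cloud adapters.

class CloudAdapter:
    """Adapter translating SAL calls into one cloud's native API."""
    def __init__(self, name):
        self.name = name

    def invoke(self, service, payload):
        # A real adapter would call the provider's native API here.
        return {"cloud": self.name, "service": service, "result": payload}

class CloudFederator:
    """Routes client requests to a registered cloud, hiding provider details."""
    def __init__(self):
        self._adapters = {}

    def register(self, adapter):
        self._adapters[adapter.name] = adapter

    def call(self, cloud, service, payload):
        # Identity federation and service discovery would also happen here.
        return self._adapters[cloud].invoke(service, payload)

federator = CloudFederator()
federator.register(CloudAdapter("cloud-a"))
response = federator.call("cloud-a", "storage.put", {"key": "k", "value": "v"})
```

The client only ever talks to `CloudFederator.call`, so clouds that support the SAL can be added or swapped without changing client code.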
DETERMINING AVAILABILITY OF NETWORK SERVICE
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining availability of network service. In some implementations, a request indicating a location and a communication service level is received. A first subset of service providers or communication technologies is determined based on outputs generated by multiple first machine learning models each trained to predict service availability for different service providers or communication technologies. A second subset is selected from the first subset based on outputs generated by multiple second machine learning models trained to predict availability of different communication service levels for different service providers or communication technologies. At least one service provider or communication technology is selected from the second subset based on output generated by a third machine learning model. A response to the request indicating the selected service provider or communication technology is provided.
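The three-stage selection described above can be sketched as a model cascade: stage-one models filter candidates by predicted availability, stage-two models filter by predicted service-level availability, and a third model ranks the survivors. The threshold value and the stub "models" below are assumptions standing in for trained predictors.

```python
# Illustrative three-stage cascade: filter by availability, filter by
# service-level availability, then rank survivors with a third model.

def cascade_select(candidates, stage1_models, stage2_models, stage3_model,
                   location, service_level, threshold=0.5):
    # First subset: candidates whose availability model clears the threshold.
    first = [c for c in candidates if stage1_models[c](location) >= threshold]
    # Second subset: those also predicted to meet the requested service level.
    second = [c for c in first
              if stage2_models[c](location, service_level) >= threshold]
    # Final pick: the highest-scoring candidate under the third model.
    return max(second, key=lambda c: stage3_model(c, location, service_level))

# Stub predictors standing in for trained machine learning models.
s1 = {"fiber": lambda loc: 0.9, "lte": lambda loc: 0.8, "dsl": lambda loc: 0.2}
s2 = {"fiber": lambda loc, lvl: 0.95, "lte": lambda loc, lvl: 0.6,
      "dsl": lambda loc, lvl: 0.1}
s3 = lambda c, loc, lvl: {"fiber": 0.9, "lte": 0.7, "dsl": 0.1}[c]

best = cascade_select(["fiber", "lte", "dsl"], s1, s2, s3,
                      location="90210", service_level="100mbps")
```

Here "dsl" is eliminated in stage one, and "fiber" outranks "lte" in the final stage.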
CONTROL METHOD AND APPARATUS OF CLUSTER RESOURCE, AND CLOUD COMPUTING SYSTEM
This disclosure relates to a control method and apparatus of cluster resources, and a cloud computing system, and relates to the field of computer technologies. The method includes: in the case where a to-be-controlled resource is a to-be-expanded resource, determining a binding relationship between the to-be-expanded resource and an application; adding the initialized to-be-expanded resource into a resource pool of the corresponding application having the binding relationship with the to-be-expanded resource; generating a to-be-executed data packet of a to-be-processed application according to a deployment type of the to-be-processed application; and deploying the to-be-executed data packet on the to-be-expanded resource in the resource pool of the to-be-processed application for execution.
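The expansion path above can be sketched in a few steps: bind the new resource to an application, add it to that application's pool, build a deployment package keyed by the application's deployment type, and deploy it onto the new resource. All names and data shapes here are illustrative assumptions.

```python
# Minimal sketch of the expansion workflow: bind, pool, package, deploy.

def expand(resource, app, pools, deploy_types):
    resource["bound_app"] = app             # binding relationship with the app
    resource["initialized"] = True          # initialization step
    pools.setdefault(app, []).append(resource)  # add to the app's resource pool
    # Build the to-be-executed packet according to the app's deployment type.
    package = {"app": app, "type": deploy_types[app]}
    resource["deployed"] = package          # deploy onto the expanded resource
    return package

pools = {}
pkg = expand({"id": "node-7"}, "web", pools, {"web": "container"})
```
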
CPU Resource Reservation Method and Apparatus, and Related Device Thereof
Provided are a Central Processing Unit (CPU) resource reservation method, apparatus, and device, and a computer-readable memory medium. The method includes: selecting a target working node according to a received Virtual Machine (VM) startup request; statistically obtaining a total number of virtual cores and a number of allocatable physical cores in the target working node; performing calculation to obtain an available CPU quota according to the total number of virtual cores and the number of allocatable physical cores; and performing CPU resource reservation configuration on the target working node by use of the available CPU quota. According to the CPU resource reservation method, the reservation of CPU resources in a VM system may be implemented more flexibly and efficiently.
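The abstract does not give the quota formula, so the sketch below assumes a simple overcommit model: each allocatable physical core can host `overcommit` virtual cores, and the available quota is whatever capacity remains after the node's current virtual cores are accounted for. The `overcommit` ratio is a hypothetical parameter.

```python
# Hypothetical quota calculation under a fixed overcommit ratio.

def available_cpu_quota(total_vcores, allocatable_pcores, overcommit=2.0):
    capacity = allocatable_pcores * overcommit  # total virtual-core capacity
    return max(0.0, capacity - total_vcores)    # quota left to reserve

# A node with 8 allocatable physical cores and 12 virtual cores already
# committed has 8 * 2.0 - 12 = 4 virtual cores of quota remaining.
quota = available_cpu_quota(total_vcores=12, allocatable_pcores=8)
```
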
Digital-Twin-Enabled Artificial Intelligence System for Distributed Additive Manufacturing
An information technology system for a distributed manufacturing network includes an additive manufacturing platform configured to manage workflows for a set of distributed manufacturing network entities associated with the distributed manufacturing network. The information technology system includes a set of digital twins generated by the additive manufacturing platform. The information technology system includes an artificial intelligence system configured to be executed by a data processing system in communication with the additive manufacturing platform. The artificial intelligence system is trained to generate process parameters for the workflows managed by the additive manufacturing platform using data collected from the set of distributed manufacturing network entities. The information technology system includes a control system configured to adjust the process parameters during an additive manufacturing process performed by at least one of the set of distributed manufacturing network entities.
METHODS AND APPARATUS TO HANDLE DEPENDENCIES ASSOCIATED WITH RESOURCE DEPLOYMENT REQUESTS
An example apparatus includes a dependency graph generator to generate a dependency graph based on a resource request file specifying a first resource and a second resource to deploy to a resource-based service, the dependency graph representative of the first resource being dependent on the second resource, a verification controller to generate a status indicator after a determination that a time-based ordering of a first request relative to a second request satisfies the dependency graph, and a resource controller to cause transmission of the first request and the second request to the resource-based service based on the dependency graph, and, after determining that the time-based ordering of the first request relative to the second request satisfies the dependency graph, cause transmission of the status indicator to a user device.
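The verification step above can be sketched as a timestamp check over the dependency graph: for every edge "A depends on B", B's request must have been sent no later than A's. The data shapes and names below are illustrative assumptions.

```python
# Check that request send times respect a dependency graph.

def ordering_satisfies(dependencies, send_times):
    """dependencies: dict mapping resource -> list of resources it depends on.
    send_times: dict mapping resource -> timestamp of its deployment request."""
    return all(send_times[dep] <= send_times[res]
               for res, deps in dependencies.items()
               for dep in deps)

deps = {"vm": ["network"], "network": []}   # the VM depends on the network
ok = ordering_satisfies(deps, {"network": 1, "vm": 2})   # network sent first
bad = ordering_satisfies(deps, {"network": 3, "vm": 2})  # VM sent too early
```

Only when this check passes would the verification controller emit the status indicator to the user device.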
METHOD AND SYSTEM TO PLACE RESOURCES IN A KNOWN STATE TO BE USED IN A COMPOSED INFORMATION HANDLING SYSTEM
In general, the invention relates to providing computer implemented services using information handling systems. One or more embodiments of the invention includes receiving a request to decompose a composed information handling system, wherein the composed information handling system comprises a hardware resource, obtaining a cleaning requirement for the hardware resource, initiating, based on the cleaning requirement, a cleaning operation on the hardware resource, receiving a confirmation that the cleaning operation is complete, and after receiving the confirmation, setting a state of the hardware resource to allocatable.
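The decomposition flow above amounts to a small state transition: look up the resource's cleaning requirement, run the cleaning operation, and only on confirmation mark the resource allocatable. Function and field names in this sketch are assumptions.

```python
# Minimal sketch of the decompose -> clean -> allocatable flow.

def decompose(resource, cleaning_requirements, clean):
    # Obtain the cleaning requirement for this type of hardware resource.
    requirement = cleaning_requirements[resource["type"]]
    # Initiate the cleaning operation; `clean` returns True on confirmation.
    confirmed = clean(resource, requirement)
    if confirmed:
        # Only after confirmation is the resource returned to the free pool.
        resource["state"] = "allocatable"
    return resource["state"]

res = {"id": "gpu-3", "type": "gpu", "state": "in-use"}
state = decompose(res, {"gpu": "zero-memory"}, clean=lambda r, req: True)
```

Gating the state change on the confirmation is what guarantees a later composed system never receives a resource holding stale data.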
METHODS AND DECENTRALIZED SYSTEMS THAT EMPLOY DISTRIBUTED MACHINE LEARNING TO AUTOMATICALLY INSTANTIATE AND MANAGE DISTRIBUTED APPLICATIONS
The current document is directed to methods and systems that automatically instantiate complex distributed applications by deploying distributed-application instances across the computational resources of one or more distributed computer systems and that automatically manage instantiated distributed applications. Automatic deployment of multiple instances of a distributed application across computational resources, such as distribution of microservices of a microservice-based application across one or more distributed computer systems, and scaling of instantiated distributed applications are computationally difficult optimization problems that are not amenable to traditional centralized approaches. The current document discloses decentralized, distributed automated methods and systems that instantiate and manage distributed applications. Reinforcement-learning-based agents are installed within the computational resources of one or more distributed computer systems. Distributed-application instances are initially distributed to one or more agents. The agents then exchange distributed-application instances among themselves in order to locally optimize the set of distributed-application instances that they each manage.
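The exchange step above can be illustrated with a toy local-optimization move: two agents each hold application instances with a resource cost, and an instance is traded only when the trade strictly reduces their combined overload. The cost model and all names are assumptions; the patent's agents are reinforcement-learning based, which this greedy sketch does not attempt to reproduce.

```python
# Toy greedy exchange between two agents holding application instances.

def overload(agent):
    # How far the agent's instances exceed its local capacity.
    return max(0, sum(agent["instances"].values()) - agent["capacity"])

def try_exchange(a, b):
    """Move one instance from a to b if that strictly reduces total overload."""
    for name, cost in list(a["instances"].items()):
        before = overload(a) + overload(b)
        a["instances"].pop(name)
        b["instances"][name] = cost
        if overload(a) + overload(b) < before:
            return True                       # keep the improving move
        b["instances"].pop(name)              # otherwise roll the move back
        a["instances"][name] = cost
    return False

a = {"capacity": 4, "instances": {"svc1": 3, "svc2": 3}}  # overloaded by 2
b = {"capacity": 4, "instances": {}}
moved = try_exchange(a, b)
```

Repeating such pairwise moves across many agents yields the decentralized, locally optimized placement the document describes, without any central scheduler.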
CONFIGURABLE DEPLOYMENT OF DATA SCIENCE ENVIRONMENTS
An example computing platform is configured to (i) cause a client device to display an interface for deploying a new data science environment, where the interface presents (a) a list of data science applications and (b) a set of user-defined configuration parameters, (ii) receive, from the client device, data indicating (a) a user selection of a given data science application from the list and (b) a user selection of one or more user-defined configuration parameters from the set, (iii) based on the user selection of the given data science application, determine a deployment template for use in deploying the new data science environment, the deployment template specifying (a) an executable environment package and (b) a set of predefined configuration parameters, and (iv) use the given executable environment package, the set of predefined configuration parameters, and the one or more user-defined configuration parameters to deploy the new data science environment.
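Step (iv) above boils down to merging the template's predefined parameters with the user's selections, with user-defined values taking precedence. The template contents and key names in this sketch are illustrative assumptions.

```python
# Combine a deployment template's predefined parameters with user selections.

def build_deployment(template, user_params):
    config = dict(template["predefined_params"])  # start from the template
    config.update(user_params)                    # user selections override
    return {"package": template["package"], "config": config}

template = {"package": "jupyter-env-1.2",
            "predefined_params": {"cpu": 2, "memory_gb": 8}}
deployment = build_deployment(template, {"memory_gb": 16, "gpu": True})
```

Here the user's `memory_gb` choice overrides the template default, while the predefined `cpu` value carries through unchanged.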
INCREMENTAL ANALYSIS OF LEGACY APPLICATIONS
A method, system, and computer program product for automated incremental analysis of legacy applications are provided. The method receives a set of service properties for a service to be generated from a set of applications. The set of applications is associated with a set of resources. A subset of resources is determined based on the set of service properties. The subset of resources is to be included in the service. A resource graph of the subset of resources is generated based on the subset of resources and the set of service properties. The method generates a service increment including at least a portion of the subset of resources based on the resource graph and the set of service properties.
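The increment step above can be sketched as: select the resources matching the service properties, restrict the resource graph to that subset, and emit one increment in dependency order. The data shapes (property tags on resources, an edge map) are assumptions for illustration.

```python
# Sketch: filter resources by service properties, build the restricted
# resource graph, and emit a service increment in dependency order.

def service_increment(resources, properties, edges):
    # Subset: resources whose tags satisfy every requested service property.
    subset = {r for r, tags in resources.items() if properties <= tags}
    # Resource graph restricted to the subset.
    graph = {r: [d for d in edges.get(r, []) if d in subset] for r in subset}
    # Depth-first walk so dependencies appear before their dependents.
    ordered, seen = [], set()
    def visit(r):
        if r in seen:
            return
        seen.add(r)
        for d in graph[r]:
            visit(d)
        ordered.append(r)
    for r in sorted(subset):
        visit(r)
    return ordered

resources = {"db": {"billing"}, "api": {"billing", "rest"}, "ui": {"web"}}
increment = service_increment(resources, {"billing"}, {"api": ["db"]})
```

Here only the "billing"-tagged resources enter the increment, and "db" precedes the "api" that depends on it.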