Patent classifications
H04L12/923
System and method for device configuration update
A configurable device for use in a solution architecture includes computing resources. The configurable device further includes a computing resources state manager. The computing resources state manager obtains an out-of-band modification to the computing resources. The computing resources state manager, in response to obtaining the out-of-band modification, generates an out-of-band configuration based on the out-of-band modification. The computing resources state manager further, in response to obtaining the out-of-band modification, updates restoration information for the computing resources based on the out-of-band configuration.
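As an illustrative sketch only (all class and field names here are invented, not taken from the patent), the state manager's response to an out-of-band modification might look like the following, where generating the out-of-band configuration and refreshing the restoration information happen together so a later restore reproduces the modified state:

```python
from dataclasses import dataclass, field

@dataclass
class StateManager:
    """Hypothetical computing-resources state manager: tracks the
    current configuration and a restoration snapshot of a device."""
    configuration: dict = field(default_factory=dict)
    restoration_info: dict = field(default_factory=dict)

    def on_out_of_band_modification(self, modification: dict) -> dict:
        # Generate an out-of-band configuration from the modification...
        oob_config = {**self.configuration, **modification}
        self.configuration = oob_config
        # ...then update restoration information so a later restore
        # reproduces the modified state rather than the stale one.
        self.restoration_info = dict(oob_config)
        return oob_config

mgr = StateManager(configuration={"cpu_cores": 4, "ram_gb": 16})
mgr.on_out_of_band_modification({"ram_gb": 32})
print(mgr.restoration_info)  # {'cpu_cores': 4, 'ram_gb': 32}
```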
Techniques for excess resource utilization
Techniques to utilize excess resources in a cloud system, such as by enabling an auxiliary resource utilizer to use resources while they are not needed to support primary resource utilizers, are described herein. Some embodiments are directed to identifying and allocating excess capacity of resources in a cloud system to auxiliary resource utilizers based on one or more policies. In various embodiments, excess resources in one or more of the set of resources in the cloud system, or cloud resources, may be determined based on monitoring utilization of the cloud resources by the primary resource utilizers. In many embodiments, an auxiliary resource utilizer that is in compliance with a set of utilization policies may be identified and the excess resources may be allocated to the auxiliary resource utilizer.
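The allocation step described above can be sketched as follows; this is a minimal illustration under invented names and policies, not the patented method:

```python
def allocate_excess(capacity, primary_usage, auxiliaries, policies):
    """Hypothetical helper: grant the monitored headroom to the first
    auxiliary utilizer that complies with every utilization policy."""
    excess = capacity - primary_usage          # from monitoring primary use
    if excess <= 0:
        return None, 0
    for aux in auxiliaries:
        if all(policy(aux) for policy in policies):
            return aux, excess
    return None, 0

# Example policies: the auxiliary workload must be preemptible and
# must not request more than 8 units at a time.
policies = [lambda a: a["preemptible"], lambda a: a["max_request"] <= 8]
aux, granted = allocate_excess(
    capacity=32, primary_usage=20,
    auxiliaries=[{"name": "batch-job", "preemptible": True, "max_request": 8}],
    policies=policies)
print(aux["name"], granted)  # batch-job 12
```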
User defined quality-control resource usage
Techniques for user-defined quality control of resource usage are provided. In one technique, resource split data is received that indicates a split of a total resource amount that is associated with a content delivery campaign. In response, based on the split, a first resource amount and a second resource amount are determined, each a subset of the total resource amount. The first resource amount is associated with a first utilization strategy and a first mapping function and the second resource amount is associated with a second utilization strategy and a second mapping function. In response to receiving a request from a client device, an entity of the client device is determined and associated with the first mapping function. A quality score of the entity is determined and, based on the first mapping function and the quality score, an adjustment factor is determined. A content item selection event is conducted based on the adjustment factor.
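A minimal sketch of the split and mapping-function flow might look like the following. The split ratio, the particular mapping functions, and the way the adjustment factor enters the selection event are all assumptions for illustration, not details from the abstract:

```python
def split_budget(total, split):
    """Split a total resource amount per user-supplied split data."""
    first = total * split
    return first, total - first

# Hypothetical mapping functions from a quality score to an adjustment factor.
linear_map = lambda q: 0.5 + q                 # first utilization strategy
step_map = lambda q: 1.5 if q >= 0.7 else 0.8  # second utilization strategy

first_amount, second_amount = split_budget(1000.0, 0.6)

quality = 0.82                    # quality score of the requesting entity
adjustment = linear_map(quality)  # entity is associated with the first mapping
# The content item selection event then uses the adjusted value,
# e.g. scaling a base bid (assumed mechanism):
effective_bid = 2.0 * adjustment
```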
METHODS AND APPARATUS TO PROVIDE A CUSTOM INSTALLABLE OPEN VIRTUALIZATION APPLICATION FILE FOR ON-PREMISE INSTALLATION VIA THE CLOUD
Methods, apparatus, systems and articles of manufacture to provide a custom installable open virtualization application file for on-premise installation via the cloud are disclosed. An example apparatus includes a resource processor to determine a resource capacity for an agent in a private cloud network; a file manipulator to modify an open virtualization appliance (OVA) file by modifying a descriptor file of the OVA file to configure the resource capacity for the agent in the private cloud network, the OVA file being deployed in a public cloud network; and a first interface to transmit, to a user device, an indication of a location of the modified OVA file, the location of the modified OVA file being the same location as the OVA file.
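An OVA package wraps an OVF descriptor, which is XML; the file-manipulator step can therefore be pictured as rewriting a hardware item in that descriptor. The snippet below is a toy illustration against a heavily simplified stand-in descriptor (real OVF descriptors are far richer); `ResourceType` 3 denotes a processor in the OVF/CIM resource-type numbering:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for part of an OVF descriptor (illustrative only).
descriptor = """<Envelope><VirtualHardwareSection>
<Item><ResourceType>3</ResourceType><VirtualQuantity>2</VirtualQuantity></Item>
</VirtualHardwareSection></Envelope>"""

def set_cpu_capacity(xml_text: str, cores: int) -> str:
    """Hypothetical file-manipulator step: rewrite the CPU quantity
    to match the resource capacity determined for the agent."""
    root = ET.fromstring(xml_text)
    for item in root.iter("Item"):
        if item.findtext("ResourceType") == "3":   # 3 = processor in OVF
            item.find("VirtualQuantity").text = str(cores)
    return ET.tostring(root, encoding="unicode")

modified = set_cpu_capacity(descriptor, 8)
```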
Managing planned adjustment of allocation of resources in a virtualised network
A Network Functions Virtualisation Management and Orchestration system, NFV-MANO, for managing resources in a Network Function Virtualisation Infrastructure, NFVI, has elements (NFVO, VNFM) for orchestrating and managing virtual resources to provide a network service and has an allocation element (VIM) for managing an allocation of physical resources for the virtual resources. One of the elements for orchestrating (NFVO, VNFM), obtains (210) information about which of the virtual resources could be affected by a planned adjustment of the allocation, and determines an impact (220) of the planned adjustment on a network service, based on this information. An indication based on the impact on the network service is sent (230) to the allocation element (VIM), which implements the planned allocation according to the indication. This can help enable a better trade-off between allocation efficiency and quality of network service.
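The impact-determination step (220) can be pictured as checking which network services would fall below their required virtual-resource count if the planned adjustment went ahead. The following is a sketch under invented names and a simplified model of allocations:

```python
def plan_impact(planned_removal, vnf_allocations, min_instances):
    """Sketch: which network services would drop below their required
    instance count if the planned adjustment removes one host?"""
    impacted = []
    for service, hosts in vnf_allocations.items():
        remaining = [h for h in hosts if h != planned_removal]
        if len(remaining) < min_instances[service]:
            impacted.append(service)
    return impacted

# Hypothetical allocations: service -> hosts carrying its virtual resources.
vnf_allocations = {"vEPC": ["host1", "host2"], "vIMS": ["host2", "host3"]}
# The indication sent to the VIM would be based on this impact.
indication = plan_impact("host2", vnf_allocations, {"vEPC": 2, "vIMS": 1})
print(indication)  # ['vEPC']
```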
A System in a Data Processing Network and a Method Therein for Enabling Routing of Data Flows To or From a Service in the Data Processing Network
Embodiments herein relate to a method performed by a network controller node (130) in a data processing network (100) for enabling routing of data flows to or from a service (150) in the data processing network (100). The network controller node (130) receives information indicating network requirements on the data processing network (100) by a service (150) to be initiated in the data processing network (100). Also, the network controller node (130) determines a network identifier for the service (150) in the data processing network (100) based on the received network requirements. Embodiments herein also relate to a method performed by a resource controller node (140) in a data processing network (100) for enabling routing of data flows to or from a service (150) in the data processing network (100). The resource controller node (140) obtains information indicating network requirements on the data processing network (100) by a service (150) to be initiated in the data processing network (100). Also, the resource controller node (140) determines a network identifier for the service (150) in the data processing network (100) based on the obtained network requirements. Furthermore, embodiments herein also relate to a network controller node (130) and a resource controller node (140) for enabling routing of data flows to or from a service (150) in the data processing network (100).
Minimizing overhead of applications deployed in multi-clouds
A computer-readable storage medium and methods for distributing an application among computing nodes in a distributed processing system. A method estimates a cost of storing information pertaining to the application on different computing nodes; estimates a cost for computing resources required to execute the application on different computing nodes; estimates a cost of inter-node communication required to execute the application on different computing nodes; and selects at least one computing node to execute the application based on minimizing a total of at least one of the cost estimates.
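The selection step reduces to picking the node with the smallest sum of the three estimates. A minimal sketch, with invented node names and cost figures:

```python
def select_node(nodes, storage_cost, compute_cost, comm_cost):
    """Pick the node minimizing total estimated cost (storage +
    compute + inter-node communication), per the method's final step."""
    return min(nodes,
               key=lambda n: storage_cost[n] + compute_cost[n] + comm_cost[n])

nodes = ["cloud-a", "cloud-b", "cloud-c"]
best = select_node(nodes,
                   storage_cost={"cloud-a": 3, "cloud-b": 1, "cloud-c": 2},
                   compute_cost={"cloud-a": 5, "cloud-b": 6, "cloud-c": 4},
                   comm_cost={"cloud-a": 1, "cloud-b": 4, "cloud-c": 2})
print(best)  # cloud-c  (total 8, vs 9 and 11)
```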
ORDER PROCESSING
An order processing system is provided, including a first client (100), a second client (110), an order server (120), a resource manager (130), and a resource transfer server (140). The order server communicates with the resource transfer server through the resource manager according to a received order preprocessing request to complete transfer of a prepaid resource. Then the order server performs order settlement according to a received order settlement request or order cancellation request.
Quota-based resource scheduling
The present disclosure relates to dynamically scheduling resource requests in a distributed system based on usage quotas. One example method includes identifying usage information for a distributed system including atoms, each atom representing a distinct item used by users of the distributed system; determining that a usage quota associated with the distributed system has been exceeded based on the usage information, the usage quota representing an upper limit for a particular type of usage of the distributed system; receiving a first request for a particular atom requiring invocation of the particular type of usage represented by the usage quota; determining that a second request for a different type of usage of the particular atom is waiting to be processed; and processing the second request for the particular atom before processing the first request.
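The reordering described above can be sketched as a queue that defers requests whose usage type has exhausted its quota whenever a request of a different usage type is still waiting. Names and the queue discipline are assumptions for illustration:

```python
from collections import deque

def schedule(requests, quota_exceeded):
    """Sketch: when the usage quota for a request's usage type has been
    exceeded, process pending requests of other usage types first."""
    pending = deque(requests)
    order = []
    while pending:
        req = pending.popleft()
        if quota_exceeded(req["type"]) and any(
                r["type"] != req["type"] for r in pending):
            pending.append(req)       # defer the quota-limited request
        else:
            order.append(req["id"])
    return order

# Request 1 needs the "write" usage type, whose quota is exceeded, so the
# later "read" request for the atom is processed first.
reqs = [{"id": 1, "type": "write"}, {"id": 2, "type": "read"}]
order = schedule(reqs, lambda t: t == "write")
print(order)  # [2, 1]
```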
Distributed catalog service for data processing platform
An apparatus in one embodiment comprises at least one processing device having a processor coupled to a memory. The at least one processing device is operative to configure a plurality of distributed processing nodes to communicate over a network, to abstract content locally accessible in respective data zones of respective ones of the distributed processing nodes into respective catalogs of a distributed catalog service in accordance with a layered extensible data model, and to provide in the distributed processing nodes a plurality of microservices for performing processing operations on at least one of the layered extensible data model and the catalogs. The layered extensible data model comprises a plurality of layers including a core data model layer and at least one extensions layer. The microservices may comprise at least one microservice to alter the layered extensible data model and at least one microservice to query one or more of the catalogs.