Patent classifications
G06F2209/508
Method and apparatus for cloud service
Aspects of the disclosure provide methods and apparatuses for cloud service. For example, an apparatus in a cloud for providing a cloud service includes processing circuitry. The processing circuitry receives a request including at least first characteristics associated with a variable. In an example, the first characteristics include complete information for describing the variable. The processing circuitry generates a message including the first characteristics associated with the variable and an updated value of the variable, and sends the message to a recipient.
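As a rough illustration of the request-to-message flow described above, here is a minimal Python sketch. The field names inside `VariableCharacteristics`, the dictionary message shape, and the `send` transport are all assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, asdict

@dataclass
class VariableCharacteristics:
    # Hypothetical fields standing in for the "complete information
    # for describing the variable" mentioned in the abstract.
    name: str
    data_type: str
    unit: str

def handle_request(characteristics: VariableCharacteristics,
                   updated_value: float) -> dict:
    """Build a message carrying both the characteristics that arrived
    in the request and the variable's updated value."""
    return {"characteristics": asdict(characteristics), "value": updated_value}

def send(message: dict, recipient: str) -> None:
    # Stand-in for whatever transport the cloud service actually uses.
    print(f"-> {recipient}: {message}")

msg = handle_request(VariableCharacteristics("temperature", "float", "C"), 21.5)
send(msg, "subscriber-1")
```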
METHOD FOR DYNAMIC RESOURCES ALLOCATION AND APPARATUS FOR IMPLEMENTING THE SAME
A computer-implemented resource allocation method is provided, which comprises, in a computing environment comprising a resource management unit and a cluster comprising a cluster management node and a cluster node running an application program: receiving, by the resource management unit, a request for allocating one or more system resources to the application program; retrieving, by the resource management unit, from the cluster management node, an identifier of the cluster node running the application program; and dynamically updating system physical resources allocated to the cluster node by updating a resource allocation file managed by an operating system of a computing machine on which the cluster is running, based on the identifier of the cluster node and the received request.
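A resource allocation file managed by the operating system reads like a Linux cgroup interface, so the sketch below assumes a cgroup-v2-style `cpu.max` file. The file layout, the node identifier, and the application name are illustrative assumptions.

```python
import pathlib

# Hypothetical cgroup-v2-style layout; the real file names and units
# depend on the operating system hosting the cluster.
CGROUP_ROOT = pathlib.Path("/sys/fs/cgroup")

def get_node_id(cluster_manager: dict, app_name: str) -> str:
    """Ask the cluster management node which cluster node runs the app."""
    return cluster_manager[app_name]

def allocate_cpu(node_id: str, cpu_quota_us: int, period_us: int = 100_000) -> None:
    """Dynamically update the CPU share of a cluster node by rewriting
    its resource allocation file (here, cgroup v2 'cpu.max')."""
    path = CGROUP_ROOT / node_id / "cpu.max"
    path.write_text(f"{cpu_quota_us} {period_us}\n")

# Example request: give the application's node 2 CPUs' worth of quota.
manager = {"analytics-app": "node-42"}
node = get_node_id(manager, "analytics-app")
# allocate_cpu(node, cpu_quota_us=200_000)  # requires root on a real host
```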
MULTIPLE METRIC-BASED WORKLOAD BALANCING BETWEEN STORAGE RESOURCES
An apparatus comprises a processing device configured to determine a workload level of each storage resource in a set of two or more storage resources, the workload levels being based at least in part on a processor performance metric, a memory performance metric, and a load performance metric. The processing device is also configured to identify a performance imbalance rate for the set of two or more storage resources, and to perform workload balancing for the set of two or more storage resources responsive to (i) the performance imbalance rate for the set of two or more storage resources exceeding a designated imbalance rate threshold and (ii) at least one storage resource in the set of two or more storage resources having a workload level exceeding a designated threshold workload level.
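The two-condition trigger above is concrete enough to sketch. In the Python below, the averaging of the three metrics, the imbalance-rate formula, and both threshold values are assumptions; the patent specifies the conditions but not the formulas.

```python
from statistics import mean

def workload_level(cpu: float, memory: float, load: float) -> float:
    # One plausible combination: a simple average of the three
    # normalized performance metrics; the abstract fixes no formula.
    return mean((cpu, memory, load))

def imbalance_rate(levels: list[float]) -> float:
    # Spread between the busiest resource and the average, as a fraction.
    avg = mean(levels)
    return (max(levels) - avg) / avg if avg else 0.0

def should_rebalance(levels, imbalance_threshold=0.25, workload_threshold=0.8):
    # Both conditions from the abstract must hold: (i) the imbalance
    # rate exceeds its threshold, and (ii) at least one resource has a
    # workload level exceeding the designated threshold workload level.
    return (imbalance_rate(levels) > imbalance_threshold
            and any(lvl > workload_threshold for lvl in levels))

levels = [workload_level(0.9, 0.8, 0.95), workload_level(0.2, 0.3, 0.1)]
print(should_rebalance(levels))  # True: imbalanced and one resource is hot
```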
METHOD AND SYSTEM FOR PROVIDING HIGH EFFICIENCY, BIDIRECTIONAL MESSAGING FOR LOW LATENCY APPLICATIONS
A system and a method for routing a message to an application over a connection oriented session in a Kafka messaging platform environment are provided. The method includes: acquiring a plurality of partitions from the Kafka messaging platform; designating a first partition from among the plurality of partitions as a sticky partition; generating a plurality of routing keys that are configured to route to the sticky partition; receiving a subscription from a service that corresponds to a first application; transmitting, to the first application, a first routing key that identifies the subscription from among the plurality of routing keys; and receiving messages from Kafka services that are routed by the first routing key to the first application. For any particular application or set of applications, a plurality of connection oriented sessions may be used to achieve load balancing and high availability.
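The key mechanism is finding routing keys that all land on one sticky partition. Kafka's default partitioner hashes the key (murmur2) modulo the partition count; the sketch below substitutes CRC32 as a stand-in partitioner, and the key format and partition counts are assumptions.

```python
import zlib

NUM_PARTITIONS = 12
STICKY_PARTITION = 3  # the partition designated as "sticky"

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Stand-in partitioner: Kafka's default uses murmur2 on the key
    # bytes; CRC32 modulo the partition count illustrates the same idea.
    return zlib.crc32(key.encode()) % num_partitions

def generate_routing_keys(count: int) -> list[str]:
    """Search candidate keys, keeping only those the partitioner maps
    to the sticky partition."""
    keys, i = [], 0
    while len(keys) < count:
        candidate = f"route-{i}"  # hypothetical key format
        if partition_for(candidate) == STICKY_PARTITION:
            keys.append(candidate)
        i += 1
    return keys

routing_keys = generate_routing_keys(5)
# Hand one key to each subscribing application; every message sent with
# that key lands on the sticky partition and reaches that subscriber.
print(routing_keys, [partition_for(k) for k in routing_keys])
```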
CLOUD NODE ROUTING
A router architecture that facilitates cloud exchange point routing is disclosed. The architecture relies upon B-nodes to connect branch networks to the cloud, S-nodes to connect services, and V-nodes to connect cloud to cloud. The nodes can be essentially stateless, with node configuration stored outside a router, which facilitates ripping and replacement of nodes. Multiple virtual private clouds can be implemented with respective pluralities of Kubernetes pods and a master Kubernetes cluster. Consumer premises equipment is coupled to a first virtual private cloud that forms a virtual extensible local area network with a second virtual private cloud.
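The stateless rip-and-replace property can be shown with a small sketch: node instances hold nothing of their own and rehydrate from an external configuration store. The node identifiers, store layout, and attachment names below are invented for illustration.

```python
from enum import Enum

class NodeType(Enum):
    B = "branch-to-cloud"
    S = "service"
    V = "cloud-to-cloud"

# Node configuration lives outside the router, so a node instance
# carries no state of its own and can be ripped and replaced freely.
CONFIG_STORE = {
    "b-node-1": {"type": NodeType.B, "attaches": ["branch-east", "vpc-1"]},
    "v-node-1": {"type": NodeType.V, "attaches": ["vpc-1", "vpc-2"]},
}

class Node:
    def __init__(self, node_id: str):
        # Rehydrate everything from the external store on startup.
        self.node_id = node_id
        self.config = CONFIG_STORE[node_id]

def replace(node: Node) -> Node:
    """'Rip and replace': discard the instance; the successor picks up
    the identical configuration from the external store."""
    return Node(node.node_id)

fresh = replace(Node("v-node-1"))
print(fresh.config["attaches"])  # ['vpc-1', 'vpc-2']
```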
Methods and apparatus to determine container priorities in virtualized computing environments
An example apparatus includes memory, and at least one processor to execute instructions to assign first containers to a first cluster and second containers to a second cluster based on the first containers including first allocated resources that satisfy a first threshold number of allocated resources and the second containers including second allocated resources that satisfy a second threshold number of allocated resources, determine a representative interaction count value for a first one of the first containers, the representative interaction count value based on a first network interaction metric corresponding to an interaction between the first one of the first containers and a combination of at least one of the first containers and at least one of the second containers, and generate a priority class for the first one of the first containers based on the representative interaction count value.
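Here is a minimal Python reading of that pipeline: cluster assignment by allocated-resource thresholds, an interaction count across both clusters, and a priority class derived from it. The threshold values, the counting rule, and the two-class scheme are assumptions.

```python
def assign_clusters(containers, threshold_a=8, threshold_b=2):
    """Split containers into two clusters by allocated resource count.
    The threshold numbers here are illustrative, not from the patent."""
    first = [c for c in containers if c["allocated"] >= threshold_a]
    second = [c for c in containers if threshold_b <= c["allocated"] < threshold_a]
    return first, second

def representative_interactions(container, interactions):
    # Count network interactions between this container and any other
    # container, whichever cluster the peer belongs to.
    return sum(n for (a, b), n in interactions.items()
               if container["name"] in (a, b))

def priority_class(count, high=100):
    # One plausible mapping from interaction count to a priority class.
    return "high" if count >= high else "low"

containers = [{"name": "web", "allocated": 10}, {"name": "db", "allocated": 4}]
first, second = assign_clusters(containers)
interactions = {("web", "db"): 150}  # hypothetical interaction metric
count = representative_interactions(first[0], interactions)
print(priority_class(count))  # 'high'
```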
RELATIVE DISPLACEABLE CAPACITY INTEGRATION
A system may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include analyzing a host system, detecting one or more specifications of the host system, and determining a displaceable capacity of the host system. The determining a displaceable capacity of the host system may include identifying a workload on the host system, establishing a workload priority for the workload, and defining a task priority of a task. The operations may include computing service metrics of the host system. The operations may include displacing a portion of the workload using the displaceable capacity.
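One plausible reading of "displaceable capacity" is the sum of idle headroom plus usage held by workloads whose priority falls below the incoming task's priority; the sketch below encodes that reading. The numeric priorities and usage figures are invented.

```python
def displaceable_capacity(host_capacity: float, workloads: list[dict],
                          task_priority: int) -> float:
    """Capacity that lower-priority workloads could yield to a task,
    plus whatever the host already has idle. The priority comparison
    is one plausible reading of the abstract, not its stated formula."""
    displaceable = sum(w["usage"] for w in workloads
                       if w["priority"] < task_priority)
    idle = host_capacity - sum(w["usage"] for w in workloads)
    return displaceable + idle

def displace(workloads: list[dict], needed: float, task_priority: int) -> float:
    """Shed lower-priority workload until the requested capacity frees up."""
    freed = 0.0
    for w in sorted(workloads, key=lambda w: w["priority"]):
        if freed >= needed:
            break
        if w["priority"] < task_priority:
            freed += w["usage"]
            w["displaced"] = True
    return freed

workloads = [{"name": "batch", "priority": 1, "usage": 30.0},
             {"name": "db", "priority": 9, "usage": 50.0}]
print(displaceable_capacity(100.0, workloads, task_priority=5))  # 50.0
```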
SCALABLE SOFTWARE DEPLOYMENT ON AUTONOMOUS MOBILE ROBOTS
Various aspects related to methods, systems, and computer readable media for scalable software deployment on autonomous mobile robots are described herein. A mobile robotics system can include a storage component configured to store a containerized software package, a server in operative communication with the storage component, and an autonomous mobile robot (AMR) in operative communication with the server. The containerized software package is configured to direct the AMR to maneuver to perform at least one robotic task, monitor computational resource usage of resources of the AMR associated with the at least one robotic task, and, responsive to a determination that computational resource usage at the AMR is or will be above a threshold, send a request to the server to perform a portion of processing tasks such that resource usage at the AMR is reduced to below the threshold or maintained below the threshold.
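The "is or will be above a threshold" test suggests projecting queued work into the usage estimate before deciding to offload. A minimal sketch, with the threshold value, per-task cost, and task identifiers assumed for illustration:

```python
THRESHOLD = 0.85  # illustrative fraction of the AMR's compute budget

def projected_usage(current: float, pending_tasks: int,
                    per_task: float = 0.05) -> float:
    # "is or will be above a threshold": include the load the queued
    # processing tasks are expected to add on top of current usage.
    return current + pending_tasks * per_task

def maybe_offload(current: float, pending: int) -> list[str]:
    """Ask the server to take processing tasks until the robot's
    projected usage drops back under the threshold."""
    offloaded = []
    while projected_usage(current, pending) > THRESHOLD and pending:
        pending -= 1
        offloaded.append(f"task-{pending}")  # hypothetical task ids
    return offloaded

print(maybe_offload(0.8, pending=4))  # offloads just enough tasks to fit
```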
COMPUTATIONAL STORAGE WITH PRE-PROGRAMMED SLOTS USING DEDICATED PROCESSOR CORE
The technology disclosed herein provides a method including determining one or more dedicated computational storage programs (CSPs) used in a target market for a computational storage device, storing the dedicated CSPs in one or more pre-programmed computing instruction set (CIS) slots in the computational storage device, translating one or more instructions of the dedicated CSPs for processing using a native processor, loading one or more instructions of programmable CSPs to a CSP processor implemented within an application-specific integrated circuit (ASIC) of the computational storage device, and processing the one or more instructions of the programmable CSPs using the CSP processor.
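The split between pre-programmed slots (dedicated CSPs, translated for the native processor) and runtime-loaded slots (programmable CSPs on the on-ASIC CSP processor) can be modeled as a dispatch table. The slot numbers and program names below are made up for illustration.

```python
# Hypothetical slot table: dedicated programs for a target market are
# burned into pre-programmed CIS slots; remaining slots stay free for
# programmable CSPs loaded at runtime.
PREPROGRAMMED_SLOTS = {0: "compress-lz4", 1: "encrypt-aes", 2: "filter-scan"}
free_slots = {3: None, 4: None}

def dispatch(slot: int, payload: bytes) -> str:
    """Route an instruction either to the native processor (dedicated,
    translated CSPs) or to the on-ASIC CSP processor (programmable)."""
    if slot in PREPROGRAMMED_SLOTS:
        return f"native: {PREPROGRAMMED_SLOTS[slot]}({len(payload)} bytes)"
    if free_slots.get(slot):
        return f"csp-processor: {free_slots[slot]}({len(payload)} bytes)"
    raise ValueError(f"slot {slot} is empty")

free_slots[3] = "custom-dedupe"  # load a programmable CSP at runtime
print(dispatch(0, b"abc"))   # pre-programmed, native-processor path
print(dispatch(3, b"abc"))   # programmable, CSP-processor path
```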
EDGE FUNCTION-GUIDED ARTIFICIAL INTELLIGENCE REQUEST ROUTING
Edge function-guided artificial intelligence (AI) request routing is provided by: applying a machine learning model to predictors of cloud endpoint hydration to determine hydration levels of cloud endpoints, of a hybrid cloud environment, that provide AI processing; determining, for each edge component of a plurality of edge components of the hybrid cloud environment and each cloud endpoint of the cloud endpoints, alternative flow paths between the edge component and the cloud endpoint, the alternative flow paths being differing routes for routing data between the edge component and the cloud endpoint and being of varying flow rates determined based on the hydration levels of the cloud endpoints; and dynamically deploying edge functions on edge component(s), the edge functions configuring the edge component(s) to alternate among the alternative flow paths available in routing AI processing requests from the edge component(s) to target cloud endpoints of the cloud endpoints.
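The three steps line up with three small functions. In this Python sketch, the averaging stand-in for the machine learning model, the flow-rate formula, and the pick-the-fastest-path policy are all assumptions; the patent only says flow rates derive from hydration levels and that edge functions alternate among paths.

```python
def hydration_levels(predictors: dict[str, list[float]]) -> dict[str, float]:
    # Stand-in for the machine learning model: average each endpoint's
    # hydration predictors into a single hydration level.
    return {ep: sum(p) / len(p) for ep, p in predictors.items()}

def flow_paths(edge: str, endpoint: str,
               hydration: float) -> list[tuple[str, float]]:
    """Alternative routes between an edge component and a cloud
    endpoint; flow rates here scale with the endpoint's hydration."""
    return [(f"{edge}->{endpoint} via path-{i}", hydration / i)
            for i in (1, 2, 3)]

def route_request(paths: list[tuple[str, float]]) -> str:
    # A deployed edge function alternates among the paths; this sketch
    # simply picks the path with the highest available flow rate.
    return max(paths, key=lambda p: p[1])[0]

levels = hydration_levels({"endpoint-a": [0.9, 0.7], "endpoint-b": [0.2, 0.4]})
paths = flow_paths("edge-1", "endpoint-a", levels["endpoint-a"])
print(route_request(paths))
```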