Patent classifications
G06F2209/505
Method and Apparatus for Creating Container, Device, Medium, and Program Product
A method for creating a container, an apparatus for creating a container, a device, a medium, and a program product are provided. The method includes: acquiring a description file of a to-be-scheduled container group (Pod), where the description file of the Pod is used for describing resource demand information; determining, based on the description file of the to-be-scheduled Pod and idle resource information of each of the work nodes, a target work node from the work nodes, and binding the to-be-scheduled Pod to the target work node; and sending a container runtime interface (CRI) request to a container engine, where the CRI request is used for instructing the target work node to create a target container based on configuration information in the CRI request, and the configuration information is used for limiting an authority of the target container.
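The scheduling step described above can be sketched as a simple fit check of the Pod's resource demand against each work node's idle resources. This is an illustrative sketch only; the function and field names are assumptions, not the patent's actual data model.

```python
# Hypothetical sketch of the scheduling step: pick the first work node
# whose idle resources cover the Pod's resource demand.

def select_target_node(pod_demand, nodes):
    """Return the name of the first work node whose idle CPU and memory
    satisfy the Pod's demand, or None if no node fits."""
    for node in nodes:
        if (node["idle_cpu"] >= pod_demand["cpu"]
                and node["idle_mem"] >= pod_demand["mem"]):
            return node["name"]
    return None

demand = {"cpu": 2, "mem": 4096}          # from the Pod description file
nodes = [
    {"name": "node-a", "idle_cpu": 1, "idle_mem": 8192},
    {"name": "node-b", "idle_cpu": 4, "idle_mem": 8192},
]
print(select_target_node(demand, nodes))   # node-b
```

After a node is selected, the Pod would be bound to it and the CRI request sent to that node's container engine.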
DYNAMICALLY SCALING CONTROL PLANE FOR INGRESS SERVICES FOR LARGE NUMBERS OF APPLICATIONS WITH MINIMAL TRAFFIC DISRUPTION
Dynamically scaling control plane for ingress services for large numbers of applications with minimal traffic disruption includes receiving an estimate of a number of applications to be executed by multiple clusters implemented by an orchestrator platform. Each cluster includes multiple containers. The multiple clusters implement a centralized controller that controls execution of the applications by the multiple clusters. The centralized controller is sharded into a variable number of controllers that collectively control the estimated number of applications, based on the estimate of the number of applications and a pre-determined number of applications that each controller can control. Each controller of the variable number of controllers controls an execution of a respective subset of the applications. In response to a change in the number of applications over time, the number of controllers is modified based on a number of applications to be executed by the multiple clusters at any given time.
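The sharding arithmetic implied above reduces to a ceiling division of the estimated application count by each controller's capacity. A minimal sketch, with illustrative numbers:

```python
import math

def shard_count(estimated_apps, apps_per_controller):
    """Number of controller shards needed so that no controller exceeds
    its pre-determined per-controller application capacity."""
    return max(1, math.ceil(estimated_apps / apps_per_controller))

# Scaling up and down as the application count changes over time:
print(shard_count(950, 100))   # 10
print(shard_count(40, 100))    # 1
```

As the application count changes, re-evaluating this count gives the new number of controllers to run.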
CLOUD INFRASTRUCTURE RECOMMENDATIONS TO DEPLOY PODS
In an example, a computer-implemented method may include receiving a request including a set of workload descriptors. Further, the method may include parsing the set of workload descriptors to determine a set of pods and a set of constraints associated with the set of pods and determining a relationship between the set of pods based on the set of constraints. Furthermore, the method may include categorizing the set of pods into a set of resource clusters based on the determined relationship and determining a cloud infrastructure to deploy the set of resource clusters based on an optimization parameter. Upon determining the cloud infrastructure, the determined cloud infrastructure may be recommended to deploy the set of resource clusters.
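The categorization step above can be illustrated by grouping pods that are linked by pairwise constraints, for example co-location affinities. The sketch below uses a simple union-find for the grouping; all names and the affinity-pair representation are assumptions, not the patent's actual constraint model.

```python
# Illustrative sketch: categorize pods into resource clusters from
# affinity-style constraints using union-find.

def cluster_pods(pods, affinities):
    parent = {p: p for p in pods}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in affinities:               # pods constrained to co-locate
        parent[find(a)] = find(b)

    clusters = {}
    for p in pods:
        clusters.setdefault(find(p), []).append(p)
    return sorted(sorted(c) for c in clusters.values())

pods = ["web", "cache", "db", "batch"]
constraints = [("web", "cache")]          # web and cache must co-locate
print(cluster_pods(pods, constraints))    # [['batch'], ['cache', 'web'], ['db']]
```

Each resulting cluster could then be matched to a cloud infrastructure option under the chosen optimization parameter.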
CONTAINER RUNTIME OPTIMIZATION
A method, system, and computer program product for implementing container runtime optimization and execution is provided. The method includes enabling a container management instance and a container runtime comprising specified operational attributes associated with a container. Supervisor tree code is embedded within the container runtime and definition software is executed. The definition software describes specified digital endpoints of an associated application process being executed by the container. The container is enabled for operational functionality and an external interface is enabled for communications with the supervisor tree code. The container management instance is executed in response to a command received via the external interface.
EXTENDABLE CONTAINER-ORCHESTRATION SYSTEM CONTROLLERS
In an example embodiment, a solution is provided for a container-orchestration service that allows a custom resource to reflect an entire software application while still splitting the actual work out into independent microservices. Specifically, the concepts of an extendable controller and controller extensions are introduced. An extendable controller defines an extendable custom resource. This custom resource is still the main resource describing the entire application, but does so in a way that extensions are referenced that can be defined in their own resources (called extension resources). The extendable controller itself is surrounded by extension controllers, which are responsible for certain aspects of the system that need to be considered in atomic transactions, such as high-availability configuration or scale-out.
Cost-savings using ephemeral hosts in infrastructure as a service environments based on health score
Various examples are disclosed for placing virtual machine (VM) workloads in a computing environment. Ephemeral workloads can be placed onto reserved instances or reserved hosts in a cloud-based VM environment. If a request to place a guaranteed workload is received, ephemeral workloads can be evacuated to make way for the guaranteed workload.
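The evacuation behavior described above can be sketched as a placement routine that prefers free hosts and, failing that, evicts an ephemeral workload. All names and structures here are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch: when a guaranteed workload arrives and no host is free,
# evacuate an ephemeral workload to make way for it.

def place_guaranteed(hosts, workload):
    """hosts: {host: current workload dict or None}. Returns the evicted
    ephemeral workload, or None if a free host was available."""
    for host, current in hosts.items():
        if current is None:
            hosts[host] = workload
            return None
    for host, current in hosts.items():
        if current and current.get("ephemeral"):
            evicted = current
            hosts[host] = workload          # guaranteed workload takes the slot
            return evicted
    raise RuntimeError("no capacity for guaranteed workload")

hosts = {"h1": {"name": "batch-job", "ephemeral": True}}
evicted = place_guaranteed(hosts, {"name": "prod-db", "ephemeral": False})
print(evicted["name"])   # batch-job
```

The evicted ephemeral workload would then be re-placed elsewhere or re-queued.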
Namespaces as units of management in a clustered and virtualized computer system
An example method of managing an application in a virtualized computing system that includes a cluster of hosts managed by a virtualization management server, the hosts including a virtualization layer executing on hardware platforms is described. The method includes: receiving a specification for a namespace at the virtualization management server, the specification defining resource constraints and authorization constraints for the namespace; preparing an environment within the virtualized computing system for the namespace in response to the specification, the environment including: a resource pool implementing at least a portion of the resource constraints as reservations and limits of resources in the virtualized computing system; and a user access policy implementing the authorization constraints within the virtualized computing system for the namespace; and managing, by the virtualization management server as a single unit, workloads of the application, the workloads deployed on the virtualization layer within the resource pool consistent with the user access policy.
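The two-part environment described above (a resource pool carrying the resource constraints, plus a user access policy carrying the authorization constraints) can be sketched as follows. The classes and field names are illustrative assumptions, not the virtualization management server's real API.

```python
# Minimal sketch, assuming a namespace specification that carries
# resource and authorization constraints.

from dataclasses import dataclass, field

@dataclass
class NamespaceSpec:
    name: str
    cpu_limit_mhz: int
    mem_limit_mb: int
    allowed_users: list = field(default_factory=list)

def prepare_environment(spec):
    """Translate the spec into a resource pool plus a user access policy,
    mirroring the two-part environment in the abstract."""
    resource_pool = {"limits": {"cpu_mhz": spec.cpu_limit_mhz,
                                "mem_mb": spec.mem_limit_mb}}
    access_policy = {user: "edit" for user in spec.allowed_users}
    return {"namespace": spec.name,
            "resource_pool": resource_pool,
            "access_policy": access_policy}

env = prepare_environment(
    NamespaceSpec("team-a", cpu_limit_mhz=4000, mem_limit_mb=8192,
                  allowed_users=["alice"]))
print(env["access_policy"])   # {'alice': 'edit'}
```

Workloads deployed into this namespace would then be managed as a single unit within the resource pool and subject to the access policy.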
METHOD AND APPARATUS OF DEPLOYING A CLUSTER, AND STORAGE MEDIUM
Provided are a method and an apparatus of deploying a cluster, and a storage medium, which relate to the field of computer technologies and may be applied to the cloud computing technology. The method includes steps described below. Master node configuration information of a cluster master node and slave node configuration information of each cluster slave node are determined. Independent configuration is performed on the cluster master node according to the master node configuration information. The slave node configuration information is sent to each cluster slave node, so that each cluster slave node independently configures itself according to the slave node configuration information.
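The deployment flow above can be sketched as configuring the master locally and dispatching each slave its own configuration for independent application. Function names and the configuration shape are illustrative assumptions.

```python
# Illustrative sketch of the deployment flow: master configured first,
# then each slave node applies its own configuration independently.

def deploy_cluster(master_config, slave_configs):
    log = []
    log.append(("master", "configured", master_config["role"]))
    for node, cfg in slave_configs.items():
        # in the described method, each slave configures itself from
        # the configuration it receives
        log.append((node, "configured", cfg["role"]))
    return log

result = deploy_cluster({"role": "master"},
                        {"slave-1": {"role": "worker"},
                         "slave-2": {"role": "worker"}})
print(len(result))   # 3
```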
SYSTEMS AND METHODS FOR UNIVERSAL DATA INGESTION
Systems and methods for ingesting different data types using a multi-layer pod are disclosed. According to one embodiment, a method for universal data ingestion may include: (1) receiving, at a data ingestion layer in a multi-layer pod, data from a data producer, wherein the data may be in any format; (2) ingesting, by the data ingestion layer, the data using a producer proxy agent or an ingestion application programming interface (API); (3) staging, by a data messaging/staging layer in the multi-layer pod, the ingested data; (4) enriching or transforming, by a data enrichment/transformation layer in the multi-layer pod, the staged data based on at least one customer requirement; and (5) routing, by a data connection layer in the multi-layer pod, the enriched or transformed data from the data messaging/staging layer to a data store at an appropriate velocity.
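The four layers enumerated above can be sketched as a toy pipeline. The layer names follow the abstract; the implementations are illustrative assumptions with no relation to the actual proxy agent or API.

```python
# Toy pipeline mirroring the multi-layer pod: ingestion -> staging ->
# enrichment/transformation -> connection/routing.

def ingest(raw):                       # (1)-(2) data ingestion layer
    return {"payload": raw}

def stage(record, queue):              # (3) data messaging/staging layer
    queue.append(record)
    return queue

def enrich(record, requirement):       # (4) data enrichment/transformation layer
    record["tag"] = requirement
    return record

def route(record, store):              # (5) data connection layer
    store.append(record)
    return store

queue, store = [], []
record = ingest("raw bytes in any format")
stage(record, queue)
route(enrich(queue.pop(0), requirement="customer-x"), store)
print(store)   # [{'payload': 'raw bytes in any format', 'tag': 'customer-x'}]
```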
SERVICES THREAD SCHEDULING BASED UPON THREAD TRACING
One embodiment provides a method, including: producing, for each of a plurality of containers, a resource profile for each thread in each of the plurality of containers; identifying, for each of the plurality of containers and from, at least in part, the resource profiles, container dependencies between threads on a single one of the plurality of containers; determining service dependencies between threads across different ones of the plurality of containers; scheduling, based upon the container dependencies and the service dependencies, threads to cores, wherein the scheduling is based upon minimizing thread processing times; and publishing the container dependencies and the service dependencies on a registry of the node clusters.
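The thread-to-core scheduling step can be illustrated with a greedy longest-processing-time-first assignment, a common heuristic for minimizing overall processing time. This sketch covers only the load-balancing aspect, not the dependency handling; the profile format and all names are assumptions.

```python
# Hedged sketch: greedy LPT assignment of profiled threads to cores,
# always placing the heaviest remaining thread on the least-loaded core.

import heapq

def schedule(thread_profiles, num_cores):
    """thread_profiles: {thread: estimated_cpu_time}. Returns
    {core_index: [threads]}."""
    cores = [(0.0, i, []) for i in range(num_cores)]  # (load, idx, threads)
    heapq.heapify(cores)
    for thread, cost in sorted(thread_profiles.items(),
                               key=lambda kv: -kv[1]):
        load, idx, threads = heapq.heappop(cores)     # least-loaded core
        threads.append(thread)
        heapq.heappush(cores, (load + cost, idx, threads))
    return {idx: threads for _, idx, threads in cores}

profiles = {"t1": 5.0, "t2": 3.0, "t3": 2.0, "t4": 2.0}
print(schedule(profiles, 2))   # {0: ['t1', 't4'], 1: ['t2', 't3']}
```

A fuller implementation would first merge threads linked by container or service dependencies so that co-dependent threads land on the same core.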