Patent classifications
H04L67/1012
Inline SPF service provider designation
Sender Policy Framework (SPF) is one of the most widely used methods of distinguishing electronic mail that is authorized by the purported sending domain from unauthorized mail. SPF policies are published into a domain's DNS and then looked up and evaluated by mail receivers. Due to the complexity and limitations of the SPF specification, implementation mistakes are widespread. This problem is compounded by the common practice of nesting SPF policies, which introduces hidden risks, particularly exceeding DNS lookup limits. To address these issues, inline service provider designation may be configured to capture the benefits of existing techniques without their associated costs. Additionally, the domain owner may enjoy simplified SPF service provider onboarding and policy failover redundancy to protect against SPF service provider disruptions, thus improving policy availability uptime.
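The DNS lookup limit mentioned above can be sketched concretely. The following is an illustrative Python fragment, not the patented mechanism: it counts the DNS-querying mechanisms an SPF evaluation would trigger, recursing through nested include: policies. The domain names and policy contents are invented for the example, and the parser is deliberately minimal.

```python
# Hypothetical sketch: count DNS lookups an SPF evaluation would trigger,
# recursing into nested include: policies via the supplied resolve() callback.
# RFC 7208 permits at most 10 such lookups per evaluation.
def count_spf_lookups(record: str, resolve) -> int:
    total = 0
    for term in record.split():
        term = term.lstrip("+-~?")          # strip qualifier, if any
        if term.startswith("include:"):
            # An include costs one lookup plus whatever its target policy costs.
            total += 1 + count_spf_lookups(resolve(term[len("include:"):]), resolve)
        elif term in ("a", "mx", "ptr") or term.startswith(("a:", "mx:", "exists:")):
            total += 1
    return total

# Invented nested policies illustrating how includes compound toward the limit.
POLICIES = {
    "example.com": "v=spf1 include:_spf.provider-a.test include:_spf.provider-b.test -all",
    "_spf.provider-a.test": "v=spf1 a mx include:_spf.shared.test ~all",
    "_spf.provider-b.test": "v=spf1 exists:%{i}.check.test -all",
    "_spf.shared.test": "v=spf1 mx -all",
}

lookups = count_spf_lookups(POLICIES["example.com"], POLICIES.__getitem__)
print(lookups, "of 10 permitted DNS lookups used")
```

Even this small invented policy tree consumes most of the budget, which is the hidden risk the abstract attributes to nesting.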
Methods and apparatus to manage compute resources in a hyperconverged infrastructure computing environment
Methods, apparatus, systems and articles of manufacture are disclosed for managing compute resources in a computing environment. Disclosed examples are to select an offering workload in a computing environment to lend at least one resource to a needy workload in the computing environment. Disclosed examples are also to cause a host associated with the offering workload to at least one of (i) instantiate a first virtual machine when the host is implemented with a second virtual machine or (ii) instantiate a first container when the host is implemented with a second container. Disclosed examples are further to assign the first virtual machine or the first container to the needy workload.
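The lend-and-assign flow can be illustrated with a small sketch. This is an assumed data model, not the patented implementation: workload names, the surplus-based selection rule, and the naming scheme are all invented, but the structure mirrors the abstract, where a host that already runs VMs lends a VM and a host that runs containers lends a container.

```python
# Minimal sketch (assumed data model) of lending spare capacity from an
# "offering" workload to a "needy" workload in the same environment.
from dataclasses import dataclass, field

@dataclass
class Host:
    kind: str                               # "vm" or "container": what it already runs
    instances: list = field(default_factory=list)

    def instantiate(self, owner: str) -> str:
        # A host implemented with VMs lends a VM; one with containers, a container.
        name = f"{self.kind}-{len(self.instances)}-for-{owner}"
        self.instances.append(name)
        return name

@dataclass
class Workload:
    name: str
    demand: int        # resource units required
    allocated: int     # resource units currently held
    host: Host

def select_offering(workloads, needy):
    # Pick the workload with the largest surplus to lend to the needy one.
    candidates = [w for w in workloads if w is not needy and w.allocated > w.demand]
    return max(candidates, key=lambda w: w.allocated - w.demand, default=None)

vm_host = Host("vm")
workloads = [
    Workload("analytics", demand=4, allocated=8, host=vm_host),
    Workload("ingest", demand=6, allocated=6, host=Host("container")),
    Workload("reports", demand=5, allocated=2, host=Host("vm")),
]
needy = workloads[2]
offer = select_offering(workloads, needy)
unit = offer.host.instantiate(needy.name)   # the offering host runs VMs, so a VM is lent
print(offer.name, "->", unit)
```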
CLOUD PROVISIONING READINESS VERIFICATION
Technology described herein can verify the readiness of a customer tenant for cloud provisioning. An example method comprises collecting, by a system comprising a processor, data based on user input from the customer tenant via a user interface, the data comprising information of a configuration setting that is to be used for cloud provisioning of a device. The method comprises, in response to collecting the configuration setting, analyzing, by the system, the configuration setting by comparing it with a selected rules check. The method comprises, in response to the analyzing of the configuration setting, determining, by the system, whether the configuration setting is acceptable for use in the cloud provisioning.
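The collect-analyze-determine sequence can be sketched as a rules check over collected settings. The rule names, setting names, and threshold values below are invented for illustration; only the shape of the check (compare each collected setting against a selected rule, then report acceptability) comes from the abstract.

```python
# Illustrative readiness check: collected configuration settings are compared
# against a selected set of rules before cloud provisioning proceeds.
# All rule and setting names here are assumptions, not the patented set.
RULES = {
    "dns_server": lambda v: bool(v),            # must be non-empty
    "ntp_server": lambda v: bool(v),            # must be non-empty
    "mtu": lambda v: 1280 <= int(v) <= 9000,    # plausible MTU range
}

def verify_readiness(settings: dict, selected_rules) -> dict:
    """Return {setting: True/False}: a setting is acceptable only if the
    selected rule that applies to it passes on the collected value."""
    results = {}
    for name in selected_rules:
        value = settings.get(name)
        try:
            results[name] = value is not None and RULES[name](value)
        except (TypeError, ValueError):
            results[name] = False               # unparseable input fails the check
    return results

tenant_settings = {"dns_server": "10.0.0.53", "ntp_server": "", "mtu": "1500"}
report = verify_readiness(tenant_settings, ["dns_server", "ntp_server", "mtu"])
print(report)   # dns_server and mtu pass; the empty ntp_server fails
```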
Edge computing platform capability discovery
Systems and methods for establishing a connection with an edge application server are provided. A user equipment (UE) in a wireless communication network establishes a connection with an edge application server to offload the data processing of an application executing on the UE to the edge application server. The UE communicates key performance indicators (KPIs) associated with the application to the edge data network. The KPIs indicate the resources that the application uses to process the data. In response, the UE receives edge application server parameters from multiple servers in the edge data network that meet or exceed the KPIs. The parameters include compute, graphical compute, memory and storage parameters with various levels of specificity. The UE selects one of the edge application servers to process the data on behalf of the application based on the parameters.
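The UE-side selection step can be sketched in a few lines. The parameter field names (compute, gpu, memory_gb, storage_gb), the server values, and the tie-breaking rule (prefer the eligible server with the least headroom, so larger servers stay free) are all assumptions; the abstract only says the UE selects among servers whose parameters meet or exceed the KPIs.

```python
# Hedged sketch of UE-side edge application server selection.
# KPI keys and server parameters are invented for illustration.
app_kpis = {"compute": 8, "gpu": 2, "memory_gb": 16, "storage_gb": 100}

servers = {
    "eas-1": {"compute": 16, "gpu": 4, "memory_gb": 32, "storage_gb": 200},
    "eas-2": {"compute": 8,  "gpu": 2, "memory_gb": 16, "storage_gb": 500},
    "eas-3": {"compute": 4,  "gpu": 0, "memory_gb": 64, "storage_gb": 100},
}

def meets_kpis(params, kpis):
    # A server is eligible only if every advertised parameter meets or exceeds the KPI.
    return all(params[k] >= v for k, v in kpis.items())

eligible = {n: p for n, p in servers.items() if meets_kpis(p, app_kpis)}
# Assumed tie-break: least total headroom above the KPIs.
chosen = min(eligible,
             key=lambda n: sum(eligible[n][k] - v for k, v in app_kpis.items()))
print(chosen)
```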
Abstraction layer to cloud services
Aspects of the disclosure relate to providing cloud computing resources from one or more cloud service providers for a client computing device through a computing platform. The client computing device may benefit from an economy of scale while being able to obtain different types of cloud services over a plurality of cloud providers. The client computing device may request an initial amount of cloud services and subsequently may request cloud services that utilize a requested amount of cloud resources. The requested amount of cloud resources may be apportioned among the plurality of cloud service providers to provide the requested cloud service. The computing platform may also support a cloud abstraction layer interacting between the client computing device and one or more cloud providers so that the client computing device can obtain cloud services in a transparent manner.
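The apportioning step can be sketched with one assumed policy. The abstract does not say how the request is split, so the greedy largest-capacity-first strategy, the provider names, and the capacities below are all invented; only the idea of splitting one requested amount across several providers through the platform comes from the text.

```python
# Sketch of apportioning a requested amount of cloud resources across
# providers behind an abstraction layer. The greedy split is an assumption.
def apportion(requested: int, capacities: dict) -> dict:
    """Greedily fill the request from providers in order of spare capacity."""
    allocation = {}
    remaining = requested
    for provider, cap in sorted(capacities.items(), key=lambda kv: -kv[1]):
        take = min(cap, remaining)
        if take:
            allocation[provider] = take
            remaining -= take
    if remaining:
        raise RuntimeError("request exceeds combined provider capacity")
    return allocation

# The client asks the platform for 120 units; the layer splits them
# transparently across three hypothetical providers.
split = apportion(120, {"provider-a": 50, "provider-b": 100, "provider-c": 40})
print(split)
```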
Internet of things solution deployment in hybrid environment
Example methods are provided to deploy an Internet of Things (IoT) solution in a hybrid environment. The methods include deploying a first agent application on a first edge gateway of a first vendor by the first edge gateway. The first agent application is configured to collect a first set of information associated with the first edge gateway. The methods include deploying a second agent application on a second edge gateway of a second vendor by the second edge gateway. The second agent application is configured to collect a second set of information associated with the second edge gateway. In response to a determination that a first virtualized computing environment on the first edge gateway or a second virtualized computing environment on the second edge gateway fulfils a first requirement of a template to deploy the IoT solution, the methods include deploying the IoT solution in the first virtualized computing environment, the second virtualized computing environment, or both.
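The template-matching decision can be sketched as follows. The template fields, gateway names, and reported values are invented; the structure (per-vendor agents report gateway information, and the solution is deployed wherever the template requirement is fulfilled) follows the abstract.

```python
# Illustrative sketch: agent applications report each edge gateway's
# virtualized environment, and the IoT solution is deployed on every gateway
# whose report fulfils the template requirement. All values are assumptions.
template = {"min_memory_mb": 2048, "hypervisor": True}

gateway_reports = {
    "vendor1-gw": {"memory_mb": 4096, "hypervisor": True},   # from first agent
    "vendor2-gw": {"memory_mb": 1024, "hypervisor": True},   # from second agent
}

def fulfils(report, req):
    # The gateway qualifies only if it has enough memory and the right environment.
    return (report["memory_mb"] >= req["min_memory_mb"]
            and report["hypervisor"] == req["hypervisor"])

deploy_targets = [gw for gw, rep in gateway_reports.items() if fulfils(rep, template)]
print(deploy_targets)
```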
Optimizing device-to-device communication protocol selection in an edge computing environment
A method for optimizing device-to-device communication protocol selection in an edge computing environment is provided. The method includes: receiving, by a computing system, a request for a service from a user device, wherein the computing system is one of plural edge computing devices in an edge computing environment; determining computational tasks performed in providing the service; selecting, using a machine learning model, a set of the edge computing devices to perform the computational tasks and communication protocols for the set of the edge computing devices to use while performing the computational tasks, wherein the machine learning model is configured to select the set of the edge computing devices and the communication protocols based on minimizing a time to perform the computational tasks; and sending instructions to perform the computational tasks, thereby causing the set of the edge computing devices to perform the service in response to the request from the user device.
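The objective the model optimizes can be made concrete with a toy search. Here an exhaustive search stands in for the trained machine learning model, and all task names, device names, protocols, and latency figures are invented; the point is only the shape of the decision: jointly choose a device and a protocol per task so that total completion time is minimized.

```python
# Sketch of the selection step, with exhaustive search standing in for the
# machine learning model. All latency numbers are invented measurements.
from itertools import product

tasks = ["decode", "infer"]
devices = ["edge-1", "edge-2"]
protocols = ["ble", "wifi-direct"]

# time_ms[(task, device, protocol)]: assumed completion times
time_ms = {
    ("decode", "edge-1", "ble"): 40, ("decode", "edge-1", "wifi-direct"): 15,
    ("decode", "edge-2", "ble"): 35, ("decode", "edge-2", "wifi-direct"): 20,
    ("infer", "edge-1", "ble"): 90, ("infer", "edge-1", "wifi-direct"): 60,
    ("infer", "edge-2", "ble"): 80, ("infer", "edge-2", "wifi-direct"): 45,
}

best = None
# Try every (device, protocol) assignment for every task; keep the fastest.
for assignment in product(product(devices, protocols), repeat=len(tasks)):
    total = sum(time_ms[(t, d, p)] for t, (d, p) in zip(tasks, assignment))
    if best is None or total < best[0]:
        best = (total, dict(zip(tasks, assignment)))

print(best)
```

A trained model would approximate this minimization without enumerating every combination, which matters once the task and device counts grow.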
Technologies for assigning workloads to balance multiple resource allocation objectives
Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustment to the assignment of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
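The "improve one objective without decreasing another" condition is a Pareto-improvement test, and can be sketched as follows. The two objective functions (peak-load balance and data locality), the telemetry fields, and the one-move search are all invented for illustration; only the adjustment rule itself comes from the abstract.

```python
# Hedged sketch of the adjustment step: try moving one workload between
# managed nodes and keep the move only if it is a Pareto improvement over
# the current objectives. Objective functions and telemetry are assumptions.
def objectives(assignment, telemetry):
    loads = {}
    for wl, node in assignment.items():
        loads[node] = loads.get(node, 0) + telemetry[wl]["cpu"]
    balance = -max(loads.values())                      # objective 1: lower peak load
    locality = sum(1 for wl, node in assignment.items()
                   if telemetry[wl]["prefers"] == node) # objective 2: data locality
    return balance, locality

def adjust(assignment, telemetry, nodes):
    base = objectives(assignment, telemetry)
    for wl in assignment:
        for node in nodes:
            trial = dict(assignment, **{wl: node})
            new = objectives(trial, telemetry)
            # Pareto test: no objective decreases, at least one increases.
            if all(n >= b for n, b in zip(new, base)) and new != base:
                return trial
    return assignment                                   # no safe improvement found

telemetry = {
    "db":  {"cpu": 6, "prefers": "node-a"},
    "web": {"cpu": 3, "prefers": "node-b"},
    "etl": {"cpu": 3, "prefers": "node-b"},
}
initial = {"db": "node-a", "web": "node-a", "etl": "node-b"}
adjusted = adjust(initial, telemetry, ["node-a", "node-b"])
print(adjusted)   # moving "web" improves both balance and locality
```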