Patent classifications
G06F9/5077
ACCELERATING TABLE LOOKUPS USING A DECOUPLED LOOKUP TABLE ACCELERATOR IN A SYSTEM ON A CHIP
In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two point and two by two point lookups, and per memory bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region based data movement operations.
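The two-point lookup mentioned above fetches the pair of adjacent table entries that bracket an index and blends them. As a rough software sketch of that access pattern (the patent describes dedicated hardware and memory-bank layout, not this code; the function name and signature are illustrative):

```python
def two_point_lookup(table, x):
    """Sketch of a two-point table lookup: fetch the two adjacent
    entries bracketing the fractional index x, then linearly
    interpolate between them."""
    lo = int(x)                          # index of the lower entry
    lo = max(0, min(lo, len(table) - 2)) # clamp so lo+1 stays in range
    frac = x - lo                        # fractional distance to the upper entry
    return table[lo] * (1.0 - frac) + table[lo + 1] * frac
```

In hardware, the point of the decoupled accelerator and the bank layout is that both entries can be fetched in one cycle without conflicting with other lanes; the arithmetic itself is the trivial part.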
NOISY-NEIGHBOR DETECTION AND REMEDIATION
Noisy-neighbor detection and remediation is provided by performing real-time monitoring of workload processing and associated resource consumption of application components that use one or more shared resources of a computing environment; determining workload and shared-resource consumption patterns for each of the application components; for each application, of a plurality of applications, that includes at least one of the application components, correlating the determined workload and shared-resource consumption patterns of those application component(s) and determining a correlated shared-resource usage pattern for that application; performing impact analysis to determine the impact of the applications on each other; and identifying noisy neighbor(s) that use the one or more shared resources and automatically raising an alert indicating those noisy neighbor(s).
ENVOY FOR MULTI-TENANT COMPUTE INFRASTRUCTURE
A data management and storage (DMS) cluster of peer DMS nodes manages data of a tenant of a multi-tenant compute infrastructure. The compute infrastructure includes an envoy connecting the DMS cluster to virtual machines of the tenant executing on the compute infrastructure. The envoy provides the DMS cluster with access to the virtual tenant network and the virtual machines of the tenant connected via the virtual tenant network for DMS services such as data fetch jobs to generate snapshots of the virtual machines. The envoy sends a snapshot from a virtual machine to a peer DMS node via the connection for storage within the DMS cluster. The envoy provides the DMS cluster with secure access to authorized tenants of the compute infrastructure while maintaining data isolation of tenants within the compute infrastructure.
APPARATUSES AND METHODS FOR SCHEDULING COMPUTING RESOURCES
Apparatus and methods for scheduling computing resources are disclosed that facilitate cooperation between resource managers in the resource layer and workload schedulers in the workload layer, so that resource managers can efficiently manage and schedule resources for horizontally and vertically scaling resources on physical hosts shared among workload schedulers to run workloads.
MULTILAYER PROCESSING ENGINE IN A DATA ANALYTICS SYSTEM
Methods, systems, and computer storage media for providing a multilayer processing engine of a multilayer processing system. The multilayer processing engine supports an event layer, a metadata layer, and a multi-tier processing layer. The metadata layer can refer to a functional layer that operates via a sequential hierarchy of functional layers (i.e., event layer and multi-tier processing layer) to analyze incoming event streams and configure a downstream processing configuration. The metadata layer provides for dynamic metadata-based configuration of downstream processing of data associated with the event layer and the multi-tier processing layer. The multilayer processing system can be a data analytics system—operating via a serverless distributed computing system. The data analytics system implements the multilayer processing engine as a serverless data analytics management engine for processing high frequency data at scale based on dynamically-generated processing code—generated based on a downstream processing configuration—that supports automatically processing the data.
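The "dynamically-generated processing code" idea (a downstream processing configuration driving what gets executed) can be sketched in-process as a config-driven pipeline builder. This is a loose illustration under assumed names, not the patent's serverless implementation:

```python
# Illustrative registry of downstream operations; names are assumptions.
OPS = {"sum": sum, "max": max, "count": len}

def build_processor(config):
    """Sketch of metadata-driven processing: the downstream processing
    configuration (a list of op names) determines the pipeline that is
    generated to process each incoming batch of events."""
    ops = [OPS[name] for name in config]  # resolve config to callables
    return lambda events: {name: op(events) for name, op in zip(config, ops)}
```

In the described system the configuration would instead drive code generated and deployed to a serverless distributed computing system, but the control flow (metadata layer configures, processing layer executes) is the same.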
CLOUD-BASED SYSTEMS FOR OPTIMIZED MULTI-DOMAIN PROCESSING OF INPUT PROBLEMS USING MACHINE LEARNING SOLVER TYPE SELECTION
Various embodiments of the present disclosure provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for determining optimized solutions to input problems in a containerized, cloud-based (e.g., serverless) manner. In one embodiment, an example method is provided. The method comprises: receiving a problem type of an input problem originating from a client computing entity; mapping the problem type to one or more selected solver types; generating one or more container instances of one or more compute containers, each compute container corresponding to a selected solver type; generating a problem output using the one or more container instances; and providing the problem output comprising a solution to the input problem to the client computing entity. In various embodiments, optimized solutions for input problems are determined using a cloud-based multi-domain solver system configured to dynamically allocate computing and processing resources between different solution-determining tasks.
SYSTEMS AND METHODS FOR PERFORMANCE-AWARE CONTROLLER NODE SELECTION IN HIGH AVAILABILITY CONTAINERIZED ENVIRONMENT
Embodiments described herein provide for an election procedure, in a high availability (“HA”) environment, for a backup controller to assume operations performed by a master controller in the event that the master controller becomes unreachable. The master controller may be associated with (e.g., provisioned on) the same set of hardware as one or more worker nodes, and may control operation of the one or more worker nodes. The election procedure may be performed based on performance metrics, location, or efficiency metrics associated with candidate backup controllers (e.g., cloud-based backup controllers), including performance of communications between particular backup controllers and the one or more worker nodes.
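An election over candidate backup controllers scored on performance and proximity to the worker nodes can be sketched as follows. The metric fields and weights are assumptions chosen for illustration, not values from the disclosure:

```python
def elect_backup(candidates):
    """Elect the backup controller with the best weighted score.
    Each candidate dict carries illustrative metrics: higher CPU
    headroom and lower latency to the worker nodes score better."""
    def score(c):
        # Weights (0.6 / 0.4) are arbitrary assumptions for the sketch.
        return 0.6 * c["cpu_headroom"] - 0.4 * c["worker_latency_ms"] / 100.0
    return max(candidates, key=score)["name"]
```

With such a score, a co-located candidate with modest headroom can beat a distant cloud candidate with more headroom, matching the performance-aware (rather than capacity-only) selection the abstract describes.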
Emulated edge locations in cloud-based networks for testing and migrating virtualized resources
Various techniques for emulating edge locations in cloud-based networks are described. An example method includes generating an emulated edge location in a region. The emulated edge location can include one or more first computing resources in the region. A host in the region may launch a virtualized resource using a portion of the one or more first computing resources. Output data that was output by the virtualized resource in response to input data can be received and reported to a user device, which may provide a request to migrate the virtualized resource to a non-emulated edge location. The non-emulated edge location may include one or more second computing resources that are connected to the region by an intermediary network. The virtualized resource can be migrated from the first computing resources to at least one second computing resource in the non-emulated edge location.
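The test-in-emulation-then-migrate flow reduces to: run the virtualized resource against known inputs in the emulated edge location and approve migration only if the outputs check out. A minimal sketch with invented names, treating the resource as a callable for simplicity:

```python
def validate_before_migration(virtualized_resource, test_inputs, expected_outputs):
    """Run the virtualized resource against test inputs in the emulated
    edge location; approve migration to the non-emulated edge location
    only if every output matches what was expected."""
    outputs = [virtualized_resource(x) for x in test_inputs]
    return outputs == list(expected_outputs)
```

In the described system the outputs would be reported to a user device, with the user then requesting migration; the gate shown here is the automated equivalent of that decision.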
Generation of cloud service inventory
A data model characterizing a plurality of resources is received. The data model associates a first resource within a first remote computing environment with a first tag and a second resource within a second remote computing environment with a second tag. The data model is received from a database that is separate from the first remote computing environment and the second remote computing environment. The plurality of resources is grouped based on the first tag and the second tag. The grouping can form a first group associated with the first tag and a second group associated with the second tag. A first list of resources characterizing the first group and a second list of resources characterizing the second group is provided. Related apparatus, systems, techniques and articles are also described.
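The grouping step is a tag-keyed aggregation across environments. A minimal sketch, assuming resources arrive as (resource_id, environment, tag) tuples (this shape is an assumption; the patent's data model is a database-backed structure):

```python
from collections import defaultdict

def group_by_tag(resources):
    """Group resources by tag, regardless of which remote computing
    environment hosts each one, yielding one inventory list per tag."""
    groups = defaultdict(list)
    for resource_id, environment, tag in resources:
        groups[tag].append(resource_id)
    return dict(groups)
```

Note that resources from different environments land in the same group when they share a tag, which is exactly what makes the resulting lists a cross-cloud inventory.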
Dynamic resource allocation of cloud instances and enterprise application migration to cloud architecture
Cloud migration may be performed by identifying applications that are currently operating in the enterprise and determining whether those applications are proper candidates for migration to the cloud. One example method of operation may include identifying at least one application operating on an enterprise network, retrieving current usage data of the at least one application, and comparing the current usage data of the at least one application to a threshold amount of usage data to determine whether the application has exceeded that threshold. Next, an instance process may be created on an entity operating outside the enterprise network, and the application may be operated via the instance process and terminated in the enterprise network to free resources.
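The candidate-selection step above is a threshold comparison over per-application usage. A minimal sketch under assumed names (normalized usage values and the threshold are illustrative):

```python
def migration_candidates(app_usage, threshold):
    """Return applications whose current usage exceeds the threshold,
    making them candidates for migration to a cloud instance process."""
    return sorted(app for app, usage in app_usage.items() if usage > threshold)
```

Each returned application would then get an instance process created outside the enterprise network and be terminated locally once the instance takes over.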