Patent classifications
G06F9/5044
MEMORY BARRIER ELISION FOR MULTI-THREADED WORKLOADS
A system includes a memory, at least one physical processor in communication with the memory, and a plurality of threads executing on the at least one physical processor. A first thread of the plurality of threads is configured to execute a plurality of instructions that includes a restartable sequence. Responsive to a different second thread in communication with the first thread being pre-empted while the first thread is executing the restartable sequence, the first thread is configured to restart the restartable sequence prior to reaching a memory barrier.
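The restart-before-barrier idea above can be illustrated with a simplified sketch (not the patented implementation; Linux's rseq facility is the closest real-world analogue). All names here are invented for illustration:

```python
class RestartableSequence:
    """Toy model of a restartable sequence: if a peer thread is
    pre-empted mid-sequence, the sequence restarts from the beginning
    instead of paying for a memory barrier on the common path."""

    def __init__(self):
        # In a real system, a scheduler hook would flip this flag
        # when the peer thread loses the CPU.
        self.peer_preempted = False

    def run(self, steps):
        while True:
            results = []
            for step in steps:
                if self.peer_preempted:
                    # Peer was pre-empted: discard partial work and
                    # restart prior to reaching any barrier.
                    self.peer_preempted = False
                    results = None
                    break
                results.append(step())
            if results is not None:
                return results
```

The fast path (no pre-emption) completes with no synchronization at all; only the rare restart pays any cost.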
VEHICULAR COMPUTATIONAL TASK ALLOCATION
A computer-implemented method for planning an allocation of at least one computational task from a computational resource comprised in at least one vehicle to one or more of a plurality of external computational resources in a vehicular communications network. The method comprises: obtaining a spatial representation of a region characterising at least one route of a vehicle from a first location to a second location, and data characterising an availability of external computational resources at a plurality of locations in the region; providing at least one computational requirement indication of at least one atomic computational task required by the vehicle during a prospective movement of the vehicle from the first location to the second location; and comparing the at least one computational requirement indication to the data characterising the availability of external computational resources at the plurality of locations in the region.
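The comparison step might be sketched as follows, with invented names (`plan_allocation`, per-location capacity maps) standing in for the abstract's requirement indications and availability data:

```python
def plan_allocation(route, availability, task_demand):
    """For each location on the route, pick an external resource whose
    spare capacity covers the task's demand; illustrative only.

    route:        ordered list of location names along the vehicle's path
    availability: {location: {resource_name: spare_capacity}}
    task_demand:  capacity needed by one atomic computational task
    """
    plan = {}
    for loc in route:
        candidates = [(cap, res)
                      for res, cap in availability.get(loc, {}).items()
                      if cap >= task_demand]
        if candidates:
            # Prefer the resource with the most spare capacity.
            plan[loc] = max(candidates)[1]
    return plan
```

Locations with no sufficient resource are simply left out of the plan, mirroring the abstract's comparison of requirements against availability at each location.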
SYSTEMS AND METHODS FOR END-TO-END WORKLOAD MODELING FOR SERVERS
An information handling system may include a processor and non-transitory computer-readable media communicatively coupled to the processor and having stored thereon a program of instructions configured to, when read and executed by the processor, perform data collection to retrieve hardware information regarding a second information handling system and analyze the hardware information to determine one or more recommended purposes for the second information handling system.
CLOUD APPLICATION THRESHOLD BASED THROTTLING
Systems and methods are provided for intercepting computing requests and modifying their execution timing based on thresholds and minimum performance criteria, and/or adjusting hosted services plans, in order to monitor and control the costs of hosting software applications on hosting provider computing resources.
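A minimal sketch of the intercept-and-retime idea, assuming a sliding-window request threshold (the class and field names are illustrative, not from the patent):

```python
class ThresholdThrottler:
    """Flags requests for delay once a per-window threshold is exceeded,
    so execution timing can be modified to control hosting costs."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.stamps = []  # arrival times of recent requests

    def should_delay(self, now):
        # Drop arrivals that have aged out of the window.
        self.stamps = [t for t in self.stamps if now - t < self.window]
        self.stamps.append(now)
        # Delay this request if the threshold is exceeded.
        return len(self.stamps) > self.threshold
```

An interceptor would call `should_delay` per request and reschedule any request that trips the threshold rather than rejecting it outright.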
SYSTEM AND METHOD FOR SUBSCRIPTION MANAGEMENT USING COMPOSED SYSTEMS
Methods, systems, and devices for providing computer implemented services using managed systems are disclosed. To provide the computer implemented services, the managed systems may need to operate in a predetermined manner conducive to, for example, execution of applications that provide the computer implemented services. Similarly, the managed systems may need access to certain hardware resources (e.g., and also software resources such as drivers, firmware, etc.) to provide the desired computer implemented services. To improve the likelihood of the computer implemented services being provided, the managed systems may be managed using a subscription based model. The subscription model may utilize a highly accessible service to obtain information regarding desired capabilities (e.g., a subscription) of a managed system, and use the acquired information to automatically instantiate and/or retire composed systems to manage resource presentation and/or use.
CONSENSUS-BASED DISTRIBUTED SCHEDULER
Methods and systems for managing workload performance in distributed systems are disclosed. The distributed system may include any number of data processing systems that may perform workloads. To manage workload performance, the distributed system may include a distributed control plane. The distributed control plane may include any number of data processing systems that both receive and service workload requests. When a workload request is received by one of the data processing systems of the control plane, a consensus-based process for selecting one of the data processing systems to perform the workload may be performed. Consequently, the data processing system that received the workload request may or may not perform the workload to service the workload request, depending on the outcome of the consensus-based process.
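One way the control-plane nodes could agree on an executor without extra messaging is a deterministic selection rule that every node evaluates identically; the rendezvous-hashing sketch below is an illustrative stand-in for whatever consensus process the patent actually claims:

```python
import hashlib

def select_executor(workload_id, nodes):
    """Every control-plane node runs this same deterministic rule on the
    same inputs, so all nodes reach the same answer for which node
    performs the workload (highest-random-weight / rendezvous hashing)."""
    def score(node):
        digest = hashlib.sha256(f"{workload_id}:{node}".encode()).hexdigest()
        return int(digest, 16)
    return max(sorted(nodes), key=score)
```

Because the choice depends only on the workload and the node set, the node that received the request may or may not be the one selected, matching the abstract's observation.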
APPLICATION DELIVERY CONTROLLER PERFORMANCE CAPACITY ADVISOR
A method, a system, and a computer program product for executing a performance capacity analysis in a cloud application delivery controller computing environment and generating one or more recommendations for deployment of a computing solution. One or more deployment parameters associated with deploying a first computing system in a plurality of first computing systems in a second computing system are received. The deployment parameters are defined by at least the second computing system. A list of first computing systems is generated using the received deployment parameters. Each first computing system in the generated list is executed in a test environment, and test results associated with the execution of each first computing system in the generated list are determined. At least one first computing system in the generated list is selected for deployment upon determining that the test results associated with execution of that first computing system match the deployment parameters.
Automated distribution of models for execution on a non-edge device and an edge device
Techniques for generating and executing an execution plan for a machine learning (ML) model using one of an edge device and a non-edge device are described. In some examples, a request for the generation of the execution plan includes at least one objective for the execution of the ML model and the execution plan is generated based at least in part on comparative execution information and network latency information.
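The core comparison could look roughly like the following, where the function name and latency inputs are invented placeholders for the abstract's comparative execution information and network latency information:

```python
def choose_device(edge_exec_ms, non_edge_exec_ms, network_latency_ms):
    """Pick where to run the ML model by comparing end-to-end latency:
    the edge device avoids the network hop, while the non-edge device
    pays network latency on top of its (typically faster) execution."""
    edge_total = edge_exec_ms
    non_edge_total = non_edge_exec_ms + network_latency_ms
    return "edge" if edge_total <= non_edge_total else "non-edge"
```

A real execution plan would fold in more objectives (cost, accuracy, device load), but the latency trade-off above is the comparison the abstract highlights.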
Data set and node cache-based scheduling method and device
Disclosed is a data set and node cache-based scheduling method, which includes: obtaining storage resource information of each host node; in response to receiving a training task, obtaining operation information of the training task and, according to the operation information and the storage resource information, screening host nodes that satisfy a space required by the training task; in response to no host node satisfying the space required by the training task, scoring each host node according to the storage resource information; according to the scoring results, selecting, from among all of the host nodes, a target host node on which to execute the training task; and obtaining and deleting an obsolete data set cache on the target host node, and executing the training task on the target host node.
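The screen-then-score-then-evict flow can be sketched as below; the dictionary field names (`free`, `obsolete_cache`) are illustrative, not taken from the disclosure:

```python
def pick_host(hosts, required_space):
    """hosts: {name: {"free": int, "obsolete_cache": int}}.
    First screen for hosts whose free space already satisfies the task;
    failing that, score hosts by free plus reclaimable (obsolete cache)
    space, pick the best, and delete its obsolete data set cache."""
    fits = [h for h, info in hosts.items() if info["free"] >= required_space]
    if fits:
        # Screening succeeded: take the host with the most free space.
        return max(fits, key=lambda h: hosts[h]["free"])
    # No host fits: score each by space reclaimable after cache eviction.
    scores = {h: info["free"] + info["obsolete_cache"]
              for h, info in hosts.items()}
    best = max(scores, key=scores.get)
    hosts[best]["free"] += hosts[best]["obsolete_cache"]  # evict cache
    hosts[best]["obsolete_cache"] = 0
    return best
```

Eviction happens only on the fallback path, mirroring the method's "obtain and delete an obsolete data set cache" step on the selected node.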
Cloud resources management
Techniques discussed herein relate to managing service provider resources. The techniques may include receiving a first request to organize a first workload and a second workload into a space. The first workload may be associated with a first computing resource of a first service provider and the second workload may be associated with a second computing resource of a second service provider. The techniques may import data associated with the first workload and the second workload into the space. The techniques may cause an action to be performed for the first workload and the second workload by implementing a first workflow for the first workload and implementing a second workflow for the second workload.