
SERVICE PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM
20230049501 · 2023-02-16

A service processing method, performed by a cloud application management server, includes: upon receiving an allocation request from a target terminal, acquiring N pieces of selection reference information that correspond to a pending edge server and relate to the target terminal and to running reference information, the pending edge server being one of P edge servers connected to the cloud application management server; upon determining that the pending edge server meets a requirement for providing a running service of a target cloud application to the target terminal, determining a connection reference score for the pending edge server; storing the connection reference score and identification information about the pending edge server into a candidate set; and transmitting the candidate set to the target terminal.
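As a rough illustration only (not the patented method), the eligibility check and connection-reference scoring might look like the sketch below. The metric names (`latency_ms`, `cpu_load`, `bandwidth_mbps`), the weights, and the load cutoff are all invented for the example:

```python
def connection_reference_score(latency_ms, cpu_load, bandwidth_mbps):
    """Combine hypothetical selection reference metrics into one score
    (higher is better); the weights are illustrative assumptions."""
    return (0.5 * (1.0 / (1.0 + latency_ms))
            + 0.3 * (1.0 - cpu_load)
            + 0.2 * min(bandwidth_mbps / 1000.0, 1.0))

def build_candidate_set(servers, max_load=0.9):
    """Keep only pending edge servers that meet the (invented)
    running-service requirement and store (id, score) pairs."""
    candidate_set = {}
    for server_id, m in servers.items():
        if m["cpu_load"] < max_load:  # eligibility requirement
            candidate_set[server_id] = connection_reference_score(
                m["latency_ms"], m["cpu_load"], m["bandwidth_mbps"])
    return candidate_set

servers = {
    "edge-1": {"latency_ms": 10, "cpu_load": 0.40, "bandwidth_mbps": 800},
    "edge-2": {"latency_ms": 50, "cpu_load": 0.95, "bandwidth_mbps": 900},
}
candidate_set = build_candidate_set(servers)
```

The candidate set, keyed by server identification information, is what would be transmitted back to the target terminal.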

CLOUD-BASED SYSTEMS FOR OPTIMIZED MULTI-DOMAIN PROCESSING OF INPUT PROBLEMS USING MACHINE LEARNING SOLVER TYPE SELECTION

Various embodiments of the present disclosure provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for determining optimized solutions to input problems in a containerized, cloud-based (e.g., serverless) manner. In one embodiment, an example method is provided. The method comprises: receiving a problem type of an input problem originating from a client computing entity; mapping the problem type to one or more selected solver types; generating one or more container instances of one or more compute containers, each compute container corresponding to a selected solver type; generating a problem output using the one or more container instances; and providing the problem output comprising a solution to the input problem to the client computing entity. In various embodiments, optimized solutions for input problems are determined using a cloud-based multi-domain solver system configured to dynamically allocate computing and processing resources between different solution-determining tasks.
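A minimal sketch of the dispatch flow, under invented names: solver functions stand in for container instances of compute containers, and `SOLVER_TYPE_MAP` is a hypothetical problem-type-to-solver-type mapping, not the disclosed one:

```python
# Hypothetical solver functions standing in for compute-container images.
def exact_min_solver(values):
    return {"solution": min(values), "runtime_cost": 1.0}

def sampling_min_solver(values):
    # Cheap heuristic: only inspects every other element.
    return {"solution": min(values[::2]), "runtime_cost": 0.3}

# Invented mapping from a problem type to its selected solver types.
SOLVER_TYPE_MAP = {"minimization": [exact_min_solver, sampling_min_solver]}

def solve(problem_type, values):
    solvers = SOLVER_TYPE_MAP[problem_type]
    # Each call stands in for one container instance of a compute container.
    outputs = [solver(values) for solver in solvers]
    # Provide the best solution found across the selected solver types.
    return min(outputs, key=lambda out: out["solution"])

result = solve("minimization", [5, 3, 8, 1])
```

In a serverless deployment, each registry entry would instead name a container image spun up on demand, which is how resources could be reallocated between solution-determining tasks.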

Managing performance optimization of applications in an information handling system (IHS)

Embodiments of systems and methods for managing performance optimization of applications executed by an Information Handling System (IHS) are described. In an illustrative, non-limiting embodiment, a method may include: identifying, by an IHS, a first application; assigning a first score to the first application based upon: (i) a user's presence state, (ii) a foreground or background application state, (iii) a power adaptor state, and (iv) a hardware utilization state, detected during execution of the first application; identifying, by the IHS, a second application; assigning a second score to the second application based upon: (i) another user's presence state, (ii) another foreground or background application state, (iii) another power adaptor state, and (iv) another hardware utilization state, detected during execution of the second application; and prioritizing performance optimization of the first application over the second application in response to the first score being greater than the second score.
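The four-state scoring could be sketched as follows; the weights and state encodings are illustrative assumptions, not the claimed scoring:

```python
def application_score(user_present, in_foreground, on_ac_power, hw_utilization):
    """Score one application; the weights are illustrative assumptions."""
    score = 0.0
    score += 2.0 if user_present else 0.0    # (i)  user's presence state
    score += 2.0 if in_foreground else 0.0   # (ii) foreground/background state
    score += 1.0 if on_ac_power else 0.0     # (iii) power adaptor state
    score += hw_utilization                  # (iv) hardware utilization, 0..1
    return score

def prioritize(app_a, app_b):
    """Return the name of the application to optimize first."""
    name_a, states_a = app_a
    name_b, states_b = app_b
    if application_score(*states_a) > application_score(*states_b):
        return name_a
    return name_b

first = ("video_editor", (True, True, True, 0.8))
second = ("indexer", (False, False, True, 0.4))
winner = prioritize(first, second)
```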

Scheduler for AMP architecture with closed loop performance and thermal controller

Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable dynamic voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
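The effort-to-recommendation mapping might be sketched like this; the DVFS table, the 0..1 effort scale, the clamp, and the core-type cutoff are all assumptions invented for illustration:

```python
# Hypothetical DVFS states (frequency in MHz), ordered low to high.
DVFS_STATES = [600, 1200, 1800, 2400]

def recommend(control_effort, thermal_limit):
    """Map a thread group's control effort (0..1) to a core type and a
    DVFS state, after the thermal/power manager clamps the effort."""
    effort = min(control_effort, thermal_limit)        # thermal limiting
    core_type = "performance" if effort >= 0.5 else "efficiency"
    index = min(int(effort * len(DVFS_STATES)), len(DVFS_STATES) - 1)
    return core_type, DVFS_STATES[index]
```

Note how the thermal clamp can demote a hot thread group from a performance core at a high DVFS state to an efficiency core at a lower one.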

METHOD AND SYSTEM FOR SELECTING OPTIMAL EDGE COMPUTING NODE IN INTERNET OF VEHICLE ENVIRONMENT

The present disclosure provides a method and system for selecting an optimal edge computing node in an Internet of vehicles (IoV) environment. The method includes: acquiring and analyzing properties of computing tasks of a vehicle in the IoV environment; acquiring and analyzing properties of different edge computing nodes; computing matching degrees between the properties of the computing tasks and the properties of the nodes; analyzing the computing demands of different tasks and assigning weights to the different types of matching degrees; and selecting the node with the optimal weighted sum of matching degrees as the optimal edge computing node to compute each of the vehicle's computing tasks.
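The weighted matching-degree selection could be sketched as below; the capacity-ratio matching degree, the property names, and the weights are illustrative assumptions (the disclosure does not fix a formula in the abstract):

```python
def matching_degree(task_demand, node_capacity):
    """Degree in [0, 1]: 1.0 once the node meets or exceeds the demand."""
    return min(node_capacity / task_demand, 1.0)

def select_optimal_node(task, nodes, weights):
    """Pick the node with the best weighted sum of matching degrees."""
    def weighted_sum(item):
        _, props = item
        return sum(weights[p] * matching_degree(task[p], props[p])
                   for p in task)
    return max(nodes.items(), key=weighted_sum)[0]

task = {"cpu": 4.0, "bandwidth": 100.0}    # a task's property demands
weights = {"cpu": 0.6, "bandwidth": 0.4}   # demand-derived weights
nodes = {
    "node-a": {"cpu": 2.0, "bandwidth": 200.0},
    "node-b": {"cpu": 8.0, "bandwidth": 100.0},
}
chosen = select_optimal_node(task, nodes, weights)
```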

METHOD AND SYSTEM FOR OPTIMIZING PARAMETER CONFIGURATION OF DISTRIBUTED COMPUTING JOB
20230042890 · 2023-02-09

The present disclosure relates to a method and system for optimizing a parameter configuration of a distributed computing job. The method includes: obtaining job programs of different distributed computing jobs, and determining a key parameter configuration set; obtaining a cluster status during execution of the distributed computing job, randomly generating a sample data set based on the key parameter configuration set and the cluster status, and establishing a performance prediction model; correcting the performance prediction model by using a multi-objective genetic algorithm and an optimization module configured with an optimal configuration selection strategy; obtaining a job program of a to-be-optimized distributed computing job and a cluster status during execution of the to-be-optimized distributed computing job, and determining a to-be-optimized key parameter configuration item combination; and inputting, to the performance prediction model, the to-be-optimized key parameter configuration item combination and the cluster status during execution of the to-be-optimized distributed computing job, and outputting a key parameter configuration item combination with a shortest execution time. The present disclosure can rapidly and effectively optimize the key parameter configuration.
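The final selection step might be sketched as follows. Two loud caveats: the toy linear predictor stands in for the trained performance prediction model, and exhaustive enumeration stands in for the multi-objective genetic algorithm, purely for brevity; the parameter names are invented:

```python
import itertools

def predict_execution_time(config, cluster_load):
    """Toy stand-in for the trained performance prediction model."""
    return config["partitions"] * 0.1 + config["memory_gb"] / 4.0 + cluster_load

def shortest_time_configuration(search_space, cluster_load):
    """Score every key-parameter combination and return the one with
    the shortest predicted execution time."""
    keys = sorted(search_space)
    best_cfg, best_time = None, float("inf")
    for combo in itertools.product(*(search_space[k] for k in keys)):
        cfg = dict(zip(keys, combo))
        t = predict_execution_time(cfg, cluster_load)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

space = {"partitions": [8, 16, 32], "memory_gb": [4, 8, 16]}
best_cfg, best_time = shortest_time_configuration(space, cluster_load=0.5)
```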

RESOURCE SCHEDULING METHOD AND RELATED APPARATUS
20230037783 · 2023-02-09

The present disclosure relates to resource scheduling methods and apparatuses. In one example method, a scheduling node receives a task. The scheduling node obtains a target execution duration level to which the task belongs, where the target execution duration level represents a time length, and the target execution duration level indicates to use a target compute module of a target compute node in multiple compute nodes to execute the task. The scheduling node sends the task to the target compute node.
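The level-to-node dispatch could be sketched minimally as follows; the level names and the node/module assignments are invented for illustration:

```python
# Invented mapping: target execution duration level -> (compute node, module).
DURATION_LEVEL_MAP = {
    "short":  ("node-1", "low-latency-module"),
    "medium": ("node-2", "general-module"),
    "long":   ("node-3", "batch-module"),
}

def schedule(task):
    """Send the task to the compute node and module that its target
    execution duration level indicates."""
    node, module = DURATION_LEVEL_MAP[task["duration_level"]]
    return {"task_id": task["id"], "node": node, "module": module}

placement = schedule({"id": "t-42", "duration_level": "short"})
```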

QUERY AND UPDATE OF PROCESSOR BOOST INFORMATION

A query operation is performed to obtain information for a select entity of a computing environment. The information includes boost information of one or more boost features currently available for the select entity. The one or more boost features are to be used to temporarily adjust one or more processing attributes of the select entity. The boost information obtained from performing the query operation is provided in an accessible location to be used to perform one or more actions to facilitate processing in the computing environment.
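A toy sketch of the query side, with an invented registry and feature names (the abstract does not name concrete boost features):

```python
# Invented registry of boost features per entity.
BOOST_REGISTRY = {
    "partition-1": {"speed_boost": True, "capacity_boost": False},
}

def query_boost_information(entity):
    """Return the boost features currently available for an entity,
    as the query operation would expose them in an accessible location."""
    features = BOOST_REGISTRY.get(entity, {})
    return sorted(name for name, available in features.items() if available)

available = query_boost_information("partition-1")
```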

SYSTEM FOR MONITORING AND OPTIMIZING COMPUTING RESOURCE USAGE OF CLOUD BASED COMPUTING APPLICATION
20230043579 · 2023-02-09

A system for monitoring and optimizing computing resource usage of a cloud-based computing application may include predicting a first performance metric for the job load capacity of the application at optimal job concurrency and optimal resource utilization. The system may include generating an alerting threshold based on the first performance metric. The system may further include, in response to a difference between the alerting threshold and a job load of the application within an interval exceeding a threshold, predicting a second performance metric for the job load capacity of the application at optimal job concurrency and optimal resource utilization. The system may further include, in response to a difference between the first performance metric and the second performance metric exceeding a difference threshold, updating the alerting threshold with the job load capacity at the optimal resource utilization rate corresponding to the second performance metric.
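The two-stage threshold update could be sketched as below; the gap values and function names are illustrative assumptions, and `predict_metric` stands in for whatever capacity predictor the system uses:

```python
def maybe_update_alerting_threshold(first_metric, observed_job_load,
                                    alerting_threshold, predict_metric,
                                    load_gap=10.0, metric_gap=5.0):
    """Re-predict capacity when the observed job load drifts far from the
    alerting threshold; adopt the new capacity when the two predictions
    diverge. Gap values are illustrative assumptions."""
    if abs(alerting_threshold - observed_job_load) > load_gap:
        second_metric = predict_metric()   # second capacity prediction
        if abs(first_metric - second_metric) > metric_gap:
            return second_metric           # capacity at optimal utilization
    return alerting_threshold
```

The guard on the first difference keeps the (possibly expensive) second prediction from running on every monitoring interval.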

SYSTEMS AND METHODS FOR UNIVERSAL AUTO-SCALING
20230040512 · 2023-02-09

Systems and methods for universal auto-scaling are disclosed. In one embodiment, a method may include: (1) monitoring, by an auto-scale computer program executed by a computer processor, a utilization level at each of a plurality of data layers in a data pod, wherein each data layer comprises at least one node; (2) comparing, by the auto-scale computer program, each of the utilization levels to a threshold; (3) identifying, by the auto-scale computer program, that one of the thresholds is met or exceeded; and (4) deploying, by the auto-scale computer program, an additional node to the data layer with the met or exceeded utilization level.
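Steps (1)-(4) can be sketched roughly as follows; the pod layout, the load/nodes utilization model, and the threshold values are invented for the example:

```python
def autoscale(data_pod, thresholds):
    """Scan every data layer in the pod and deploy an additional node to
    each layer whose per-node utilization meets or exceeds its threshold."""
    scaled_layers = []
    for layer, state in data_pod.items():
        utilization = state["load"] / state["nodes"]   # per-node utilization
        if utilization >= thresholds[layer]:
            state["nodes"] += 1                        # deploy an extra node
            scaled_layers.append(layer)
    return scaled_layers

pod = {"cache":   {"load": 180.0, "nodes": 2},
       "storage": {"load": 40.0,  "nodes": 2}}
limits = {"cache": 80.0, "storage": 80.0}
scaled = autoscale(pod, limits)
```

Because each layer carries its own threshold, the same loop serves heterogeneous data layers, which is what makes the scheme "universal" in the abstract's sense.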