G06F2209/508

Processing future-dated resource reservation requests

Computer systems and methods for managing resources are described. In an aspect, a method includes: providing, to a client device associated with an authenticated entity, an intraday transfer interface, the intraday transfer interface including a selectable option to issue a future-dated borrowed resource reservation request to set aside an amount of borrowed resources; receiving, from the client device, a signal representing the future-dated borrowed resource reservation request, the future-dated borrowed resource reservation request associated with an amount of borrowed resources to set aside and a date of release of such borrowed resources; detecting a trigger condition, the trigger condition including an end-of-day reconciliation of resource tracking data and, in response to detecting the trigger condition, evaluating the future-dated borrowed resource reservation request based on a current amount of borrowed resources; and when the evaluation of the future-dated borrowed resource reservation request indicates that the future-dated borrowed resource reservation request cannot be implemented, generating an error by sending an error message to a computing device and rejecting the future-dated borrowed resource reservation request.
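The end-of-day evaluation step described in the abstract could be sketched as follows. The field names, the availability rule (current borrowed resources minus amounts already set aside), and the error-message format are assumptions for illustration; the abstract does not specify them.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReservationRequest:
    # Hypothetical fields inferred from the abstract
    amount: float       # borrowed resources to set aside
    release_date: date  # date on which the set-aside is released

def evaluate_at_reconciliation(request: ReservationRequest,
                               current_borrowed: float,
                               already_reserved: float) -> tuple[bool, str]:
    """Evaluate a future-dated reservation when the end-of-day
    reconciliation trigger fires. Returns (accepted, message)."""
    available = current_borrowed - already_reserved
    if request.amount > available:
        # Cannot be implemented: reject and surface an error message
        return False, (f"error: requested {request.amount} exceeds "
                       f"available borrowed resources {available}")
    return True, "reservation accepted"
```

In this sketch the trigger condition is assumed to have been detected by the caller; only the evaluation and accept/reject outcome are modeled.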

DYNAMIC RESOURCE ALLOCATION IN A DISTRIBUTED SYSTEM

A prioritization system includes a memory that stores an access record with, for each of the users, an indication of a previous usage of computing applications. The memory stores a permission record with, for each of the users, an indication of the computing applications that the user is permitted to access. The memory stores user affinities that include, for each of the users, an affinity score corresponding to a predetermined ability level of the user to engage in an activity associated with one or more of the computing applications. The prioritization system determines a priority score for each of the users. In response to receiving a request for a priority of a first user of the users, the prioritization system provides a response with the priority score determined for the first user.
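A minimal sketch of the scoring and lookup described above, assuming a weighting in which prior usage is scaled by the affinity score and only permitted applications count; the abstract does not specify how the three records are combined.

```python
def priority_score(previous_usage: dict[str, int],
                   permitted: set[str],
                   affinity: dict[str, float]) -> float:
    """Combine the access record, permission record, and user
    affinities into a single priority score (illustrative rule)."""
    score = 0.0
    for app in permitted:
        # Weight prior usage of each permitted application by affinity
        score += previous_usage.get(app, 0) * affinity.get(app, 0.0)
    return score

def lookup_priority(user: str, records: dict[str, float]) -> float:
    # Respond to a request for a given user's priority
    return records[user]
```

A request for a user's priority is then answered from the precomputed record, matching the request/response step in the abstract.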

DETERMINING AVAILABLE MEMORY ON A MOBILE PLATFORM
20230036737 · 2023-02-02 ·

An application from a plurality of applications executing at one or more processors of a computing device may determine a plurality of memory metrics of the computing device. The application may determine information indicative of a predicted safe amount of memory available for allocation by the application based at least in part on the plurality of memory metrics. The application may adjust, based at least in part on that information, one or more characteristics of the application executing at the one or more processors to adjust an amount of memory allocated by the application.
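The prediction and adjustment steps could look like the following sketch. The metric names and the reserve heuristic (a fraction of cached memory plus a low-memory threshold) are assumptions; the abstract leaves the prediction method open.

```python
def predicted_safe_memory(metrics: dict[str, int]) -> int:
    """Estimate a safe allocation budget from device memory metrics.
    Hypothetical heuristic: free memory minus a reserve derived from
    cache pressure and a low-memory threshold."""
    free = metrics["free_bytes"]
    reserve = metrics["cached_bytes"] // 4 + metrics["low_memory_threshold"]
    return max(0, free - reserve)

def adjust_cache_size(current_cache: int, safe_budget: int) -> int:
    # Shrink one application characteristic (its cache) to fit the budget
    return min(current_cache, safe_budget)
```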

COMPUTING CLUSTER BRING-UP ON PUBLIC CLOUD INFRASTRUCTURE USING EXPRESSED INTENTS
20230036454 · 2023-02-02 ·

Methods, systems and computer program products for bringing up a computing cluster on a public cloud infrastructure using expressed intents (high-level descriptions of desired configuration) and asynchronously receiving configuration status messages from the public cloud infrastructure. The method includes a cloud management computing system transmitting to the public cloud infrastructure a first expressed intent for bringing up a computing cluster. The method further includes the cloud management computing system asynchronously receiving periodic status messages comprising cluster status data from the public cloud infrastructure, reflecting a current configuration state of the computing cluster. The system determines, based on the cluster status data, whether the first expressed intent for the computing cluster has been achieved.
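The intent-checking loop described above can be sketched as follows, with the asynchronous status channel modeled as a queue and the message fields assumed for illustration: the intent is treated as achieved once every desired key in the expressed intent matches the reported cluster state.

```python
import queue

def bring_up(intent: dict, status_queue: "queue.Queue[dict]",
             max_messages: int = 100) -> bool:
    """Consume asynchronously delivered status messages and check
    whether the reported configuration state satisfies the expressed
    intent. Message shape is an assumption for this sketch."""
    for _ in range(max_messages):
        status = status_queue.get()
        current = status.get("cluster_state", {})
        # Intent achieved when every desired key matches the state
        if all(current.get(k) == v for k, v in intent.items()):
            return True
    return False
```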

INPUT WORK ITEM FLOW METRIC FOR COMPUTING ENVIRONMENT TELEMETRY

A method is described. The method includes polling a queue a plurality of times over a plurality of intervals, where the queue feeds work items to a processor. The method includes determining, from the polling, respective work item flow metrics for the plurality of intervals. The method includes determining a performance setting for the processor based on the plurality of respective work item flow metrics.
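A compact sketch of the two determinations above, assuming the flow metric is the mean queue depth observed by polling within each interval, and that the performance setting is chosen by comparing the average metric against thresholds (both assumptions; the abstract does not fix a metric or a mapping).

```python
def flow_metrics(samples_per_interval: list[list[int]]) -> list[float]:
    """Compute one work-item flow metric per interval from the
    queue-depth samples gathered by polling (mean depth here)."""
    return [sum(s) / len(s) for s in samples_per_interval]

def performance_setting(metrics: list[float],
                        high: float = 8.0, low: float = 2.0) -> str:
    """Map the interval metrics to a processor performance setting.
    Thresholds are illustrative."""
    avg = sum(metrics) / len(metrics)
    if avg > high:
        return "high"
    if avg < low:
        return "low"
    return "nominal"
```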

OPTIMIZING PERFORMANCE OF A COMPUTING DEVICE IN A MIXED WORKLOAD ENVIRONMENT
20230029920 · 2023-02-02 ·

Performance of a computing device can be optimized in a mixed workload environment. A management service can be configured to capture telemetry data from web applications or containerized applications and use such telemetry data to detect a scenario. Based on the detected scenario, the management service can select optimized performance settings and cause the optimized performance settings to be applied within the browser or container in which the application is deployed. Machine learning techniques can be employed to detect and define optimized performance settings for a particular scenario.
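As a stand-in for the machine-learning step, the scenario detection and settings selection could be sketched with simple rules. The telemetry feature names, thresholds, scenario labels, and settings values below are all assumptions; the abstract only requires that telemetry be mapped to a scenario and the scenario to optimized settings.

```python
def detect_scenario(telemetry: dict[str, float]) -> str:
    """Classify a workload scenario from captured telemetry.
    Rule-based here for illustration; the abstract envisions
    machine learning techniques for this step."""
    if telemetry.get("gpu_util", 0.0) > 0.7:
        return "media_playback"
    if telemetry.get("cpu_util", 0.0) > 0.8:
        return "compute_heavy"
    return "idle"

# Optimized settings per detected scenario (assumed values)
SETTINGS = {
    "media_playback": {"power_plan": "balanced", "tab_throttling": False},
    "compute_heavy": {"power_plan": "performance", "tab_throttling": True},
    "idle": {"power_plan": "power_saver", "tab_throttling": True},
}
```

The selected settings would then be applied within the browser or container hosting the application, per the abstract.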

Systems and methods for automating security workflows in a distributed system using encrypted task requests

Methods and systems for automating execution of a workflow by integrating security applications of a distributed system into the workflow are provided. In embodiments, a system includes an application server in a first cloud, configured to receive a trigger to execute the workflow. The workflow includes tasks to be executed in a device of a second cloud. The application server sends a request to process the task to a task queue module. The task queue module places the task request in a queue, and a worker hosted in the device of the second cloud retrieves the task request from the queue and processes the task request by invoking a plugin. The plugin interacts with a security application of the device of the second cloud to execute the task, which yields task results. The task results are provided to the application server, via the worker and the task queue module.
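The worker-side flow described above can be sketched as follows, with the task queue modeled as an in-process queue and the plugin as a callable; the task-request fields and result shape are assumptions for illustration.

```python
import queue

def worker(task_queue: "queue.Queue[dict]",
           plugins: dict, results: list) -> None:
    """Worker hosted in the second cloud: retrieve one task request
    from the queue, invoke the named plugin (which would interact
    with the local security application), and report the result."""
    task = task_queue.get()
    plugin = plugins[task["plugin"]]
    results.append({"task_id": task["id"], "result": plugin(task["args"])})
```

In the system of the abstract, the results list would instead be relayed back to the application server via the task queue module.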

Information processing system and method for controlling information processing system

An information processing system includes an information processing apparatus and a management apparatus. A first processor of the information processing apparatus controls resource allocation to a first virtual machine that operates on the information processing apparatus and executes a virtual load balancer that distributes a first load to a plurality of second virtual machines. When a second load of the virtual load balancer exceeds a predetermined first threshold value, the first processor notifies the management apparatus of an occurrence of an overload. The first processor receives and executes an addition command for adding a resource allocated to the first virtual machine. A second processor of the management apparatus creates, upon being notified of the occurrence of the overload, the addition command based on resource information of the information processing apparatus and management information of the virtual load balancer. The second processor sends the addition command to the information processing apparatus.
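The overload check and the construction of the addition command could be sketched as below. The sizing rule (grant the requested vCPUs up to what the host has free) and the field names are assumptions; the abstract specifies only the inputs used.

```python
def check_overload(load: float, threshold: float) -> bool:
    # First processor: notify management when the load balancer's
    # load exceeds the predetermined first threshold value
    return load > threshold

def build_addition_command(resource_info: dict, lb_info: dict) -> dict:
    """Management apparatus: create an addition command from host
    resource information and load-balancer management information."""
    extra_vcpus = min(lb_info["requested_vcpus"], resource_info["free_vcpus"])
    return {"target_vm": lb_info["vm_id"], "add_vcpus": extra_vcpus}
```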

Risk score generation utilizing monitored behavior and predicted impact of compromise

A method includes monitoring user behavior in an enterprise system, identifying a given user of the enterprise system associated with a given portion of the monitored user behavior, determining a predicted impact of compromise of the given user on the enterprise system, generating a risk score for the given user based on the predicted impact of compromise and the given portion of the monitored user behavior, and identifying one or more remedial actions to reduce the risk score for the given user. The method also includes implementing, prior to detecting compromise of the given user, at least one of the remedial actions to modify a configuration of at least one asset in the enterprise system, the at least one asset comprising at least one of a physical computing resource and a virtual computing resource in the enterprise system.
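The score generation and remediation steps above might be sketched as follows. The multiplicative combination of the behavior component and the predicted impact, the threshold, and the action names are illustrative assumptions; the abstract does not prescribe a formula.

```python
def risk_score(behavior_score: float, impact_of_compromise: float) -> float:
    """Generate a risk score for a user from the monitored-behavior
    component and the predicted impact of compromise (assumed
    multiplicative combination)."""
    return behavior_score * impact_of_compromise

def remedial_actions(score: float, threshold: float = 50.0) -> list[str]:
    """Identify remedial actions expected to reduce the score;
    applied proactively, before any compromise is detected."""
    if score <= threshold:
        return []
    # Illustrative actions that modify asset configuration
    return ["reduce_privileges", "require_mfa"]
```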

Dynamic application migration across storage platforms

Embodiments of the present disclosure relate to load balancing application processing between storage platforms. Input/output (I/O) workloads can be anticipated during one or more time-windows. Each I/O workload can comprise one or more I/O operations corresponding to one or more applications. Processing of I/O operations for each application can be dynamically migrated to one or more storage platforms of a plurality of storage platforms based on the anticipated workload.
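A minimal sketch of the migration decision: given the anticipated per-application I/O load for a time-window, greedily assign each application to the storage platform with the most remaining capacity. The greedy policy and the data shapes are assumptions; the abstract does not specify a placement algorithm.

```python
def plan_migrations(anticipated: dict[str, float],
                    platform_capacity: dict[str, float]) -> dict[str, str]:
    """Assign each application's anticipated I/O workload to a
    storage platform (largest-remaining-capacity first, illustrative)."""
    remaining = dict(platform_capacity)
    plan = {}
    # Place the heaviest anticipated workloads first
    for app, load in sorted(anticipated.items(), key=lambda kv: -kv[1]):
        target = max(remaining, key=remaining.get)
        plan[app] = target
        remaining[target] -= load
    return plan
```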