Patent classifications
G06F9/5011
Performance monitoring in a distributed storage system
Methods and systems for monitoring performance in a distributed storage system are described. One example method includes identifying requests sent by clients to the distributed storage system, each request including request parameter values for request parameters; generating probe requests based on the identified requests, the probe requests including probe request parameter values for the request parameters that represent a statistical sample of the request parameter values included in the identified requests; sending the generated probe requests to the distributed storage system over a network, wherein the distributed storage system is configured to perform preparations for servicing each probe request in response to receiving the probe request; receiving responses to the probe requests from the distributed storage system; and outputting at least one performance metric value measuring a current performance state of the distributed storage system based on the received responses.
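The sampling step above can be sketched as follows. The request representation (a dict of parameter values) and the `probe` marker are assumptions for illustration; the abstract does not fix a data format.

```python
import random

def generate_probe_requests(observed_requests, sample_size, seed=0):
    """Draw a statistical sample of observed client requests to use as probes.

    `observed_requests` is a list of dicts mapping request parameters to
    values (an assumed shape). The returned probes carry the same parameter
    values as the sampled requests, tagged so the storage system performs
    only the preparations for servicing them.
    """
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    sample = rng.sample(observed_requests,
                        min(sample_size, len(observed_requests)))
    return [{**req, "probe": True} for req in sample]
```

Because the probes mirror the statistical distribution of real traffic, latency measured on their responses approximates the performance real clients currently experience.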
Network accuracy quantification method and system, device, electronic device and readable medium
Disclosed are a network accuracy quantification method, system, and device, an electronic device and a readable medium, which are applicable to a many-core chip. The method includes: determining a reference accuracy according to the total number of core resources of the many-core chip and the number of core resources required by each network to be quantified, where the number of core resources required by each network to be quantified is the number determined after that network is quantified; and determining a target accuracy corresponding to each network to be quantified according to the reference accuracy and the total number of core resources of the many-core chip.
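The abstract names the inputs (total core count, per-network required cores) but not the formulas. The sketch below uses simple proportional rules purely as placeholders, so the shape of the computation is concrete; the actual reference/target derivation in the patent may differ entirely.

```python
def quantification_accuracies(total_cores, required_cores):
    """Determine a reference accuracy and per-network target accuracies.

    `required_cores[i]` is the number of cores network i needs after it
    is quantified. Both rules below are ASSUMED placeholders, not the
    patented formulas.
    """
    # Assumed reference: fraction of the combined core demand the chip
    # can actually supply, capped at 1.0.
    reference = min(1.0, total_cores / sum(required_cores))
    # Assumed target per network: its demand as a share of the chip,
    # scaled by the reference accuracy.
    targets = [reference * min(1.0, r / total_cores) for r in required_cores]
    return reference, targets
```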
CPU utilization for service level I/O scheduling
One or more aspects of the present disclosure relate to service level input/output scheduling to control central processing unit (CPU) utilization. Input/output (I/O) operations are processed with one or more of a first CPU pool and a second CPU pool of two or more CPU pools. The second CPU pool processes I/O operations that are determined to stall any of the CPU cores.
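The two-pool routing rule can be sketched in a few lines. The stall predicate and the list-based pools are assumptions; the disclosure only states that stalling I/O is segregated into the second pool.

```python
def dispatch_io(op, first_pool, second_pool, stalls_core):
    """Service-level I/O scheduling sketch: route an I/O operation to one
    of two CPU pools.

    `stalls_core` is an assumed predicate flagging operations determined
    to stall a CPU core; those go to the second pool so the first pool's
    cores stay available for non-stalling I/O.
    """
    target = second_pool if stalls_core(op) else first_pool
    target.append(op)
    return target
```

Isolating stalling operations this way bounds the CPU utilization impact of slow I/O on the latency-sensitive pool.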
Image forming apparatus that generates a function execution module for an operating-system-independent environment
The present invention provides an image forming apparatus comprising: at least one first module and at least one second module configured to execute functions corresponding respectively to the at least one first module; a first control unit configured to notify, to a corresponding second module, a request accepted by the at least one first module; and a second control unit configured to control in accordance with the notification from the first control unit, the corresponding second module, wherein the at least one first module is activated at all times from when the image forming apparatus is activated, and the at least one second module is generated as a container of an execution environment that is independent of an operating system and whose activation state is controlled by an instruction from the second control unit.
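The notify-and-activate flow of the first control unit can be sketched as below. The request shape and the dict of running containers are illustrative assumptions; the patent only specifies that first modules run at all times while second modules are generated as OS-independent containers on demand.

```python
def notify_second_module(request, running_containers):
    """First-control-unit sketch: forward a request accepted by an
    always-running first module to its corresponding second module.

    `running_containers` maps a function name to the pending requests of
    that second module's container (assumed shape). A container entry is
    generated on first use, mirroring on-demand activation.
    """
    name = request["function"]  # which second module handles this request
    if name not in running_containers:
        # Second module not yet active: generate its container entry.
        running_containers[name] = []
    running_containers[name].append(request)  # second control unit acts on this
    return running_containers
```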
Synthesizing a resource request to obtain resource identifier based on extracted unified model, user requirement and policy requirements to allocate resources
Resource allocation problems involve identification of a resource, selection according to certain criteria, and the offering of resources to the requester. Identification of required resources may involve matching the type of resource, selecting based on user requirements and policy criteria, and offering the resource through an assignment system. An apparatus and a method are provided that enable identification and selection of resources. The method includes receiving a resource allocation request for the allocation of a resource, the resource allocation request specifying a set of user requirements. The method includes receiving an operator policy associated with the resource, the operator policy including one or more policy requirements. The method includes synthesizing a resource request based on the resource allocation request and the operator policy. Synthesizing the resource request based on the resource allocation request and the operator policy comprises combining the user requirements with the one or more policy requirements.
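The synthesis step can be sketched as a merge of the two requirement sets. The dict shapes and the precedence rule (operator policy wins on conflict) are assumptions; the abstract says only that the two sets are combined.

```python
def synthesize_resource_request(allocation_request, operator_policy):
    """Combine a request's user requirements with operator policy
    requirements into a single synthesized resource request.

    Both inputs are dicts of requirement name -> value (assumed shape).
    On a conflicting key, the operator policy takes precedence in this
    sketch; the patent does not specify a precedence rule.
    """
    synthesized = dict(allocation_request["user_requirements"])
    synthesized.update(operator_policy["policy_requirements"])
    return {"requirements": synthesized}
```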
System for evaluation and weighting of resource usage activity
Embodiments of the present invention provide systems and methods for evaluating and weighting resource usage activity data. The system may establish a communicable link to a user device via a user application to receive resource activity data and historical data from one or more users or systems via multiple communication channels. The system may evaluate the historical data and determine evaluation criteria based on the perceived chance of loss associated with particular metadata characteristics, and use the evaluation criteria as weighted metrics for determining an overall evaluation score for the user based on indications from the resource activity data that the user has conducted resource transfers with entities or channels identified in the historical data.
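The weighted-metric scoring can be sketched as a dot product over metadata characteristics. The dict shapes are assumptions for illustration; the abstract does not define a data model.

```python
def evaluation_score(activity, criteria_weights):
    """Weighted evaluation score over resource usage activity.

    `activity` maps a metadata characteristic (e.g. an entity or channel
    seen in the historical data) to an observed transfer count;
    `criteria_weights` maps the same characteristics to weights derived
    from the perceived chance of loss. Both shapes are assumed.
    """
    return sum(criteria_weights.get(k, 0.0) * count
               for k, count in activity.items())
```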
Dynamic allocation and re-allocation of learning model computing resources
This disclosure describes techniques for improving allocation of computing resources to computation of machine learning tasks, including on massive computing systems hosting machine learning models. A method includes a computing system, based on a computational metric trend and/or a predicted computational metric of a past task model, allocating a computing resource for computing of a machine learning task by a current task model prior to runtime of the current task model; computing the machine learning task by executing a copy of the current task model; quantifying a computational metric of the copy of the current task model; determining a computational metric trend based on the computational metric; deriving a predicted computational metric of the copy of the current task model based on the computational metric; and, based on the computational metric trend, changing allocation of a computing resource for computing of the machine learning task by the current task model.
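The trend-driven re-allocation loop can be sketched as follows. The trend estimator (difference of the last two samples), the target threshold, and the step size are assumptions; the disclosure only states that allocation changes based on a computational metric trend of the executing model copy.

```python
def adjust_allocation(current_cores, metric_history, target, step=1):
    """Change a task model's resource allocation from a metric trend.

    `metric_history` holds computational metrics (e.g. per-step latency)
    quantified from a copy of the current task model. The trend here is
    simply the difference of the last two samples (an assumed estimator).
    """
    if len(metric_history) < 2:
        return current_cores  # not enough samples to estimate a trend
    trend = metric_history[-1] - metric_history[-2]
    if metric_history[-1] > target and trend >= 0:
        return current_cores + step            # worsening: add resources
    if metric_history[-1] < target and trend <= 0:
        return max(1, current_cores - step)    # headroom: release resources
    return current_cores
```

The same function also serves the pre-runtime case in the abstract: a predicted metric derived from a past task model's history can seed `metric_history` before the current model runs.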
Paravirtual storage layer for a container orchestrator in a virtualized computing system
An example method of managing storage for a containerized application executing in a virtualized computing system having a cluster of hosts and a virtualization layer executing thereon, is described. The method includes receiving, at a supervisor container orchestrator, a request for a first persistent volume lifecycle operation from a guest container orchestrator, the supervisor container orchestrator being part of an orchestration control plane integrated with the virtualization layer and configured to manage a guest cluster and virtual machines (VMs), supported by the virtualization layer, in which the guest cluster executes, the guest container orchestrator being part of the guest cluster; and sending, in response to the first persistent volume lifecycle operation, a request for a second persistent volume lifecycle operation from the supervisor container orchestrator to a storage provider of the virtualized computing system to cause the storage provider to perform an operation on a storage volume.
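The request translation at the supervisor container orchestrator can be sketched as below. The request shapes and the `storage_provider` callable are assumptions; the patent specifies only that the first lifecycle request from the guest triggers a second lifecycle request to the storage provider.

```python
def handle_guest_pv_request(guest_request, storage_provider):
    """Paravirtual storage layer sketch: the supervisor container
    orchestrator receives a persistent-volume lifecycle request from the
    guest container orchestrator and issues a second request to the
    storage provider, which performs the operation on a storage volume.
    """
    second_request = {
        "operation": guest_request["operation"],  # e.g. create / delete
        "volume": guest_request["volume"],
    }
    return storage_provider(second_request)
```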
Cognitive processing resource allocation
A processor may run a background process to identify a first task being initiated by a first user on a device, where the first task is associated with a first application. The processor may identify the first user of the device. The processor may analyze one or more interactions of the first user associated with the first application on the device. The processor may allocate, based at least in part on identification of the first user, identification of the first task, or analysis of the one or more interactions of the first user, computing resources to one or more hardware components on the device.
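One way to picture the allocation decision is a lookup keyed on the identified user and application, biased by interaction volume. Every name and threshold below is an illustrative assumption; the abstract does not describe concrete data structures.

```python
def allocate_resources(user_id, task, interactions, profiles):
    """Pick a hardware resource allocation from user, task, and
    interaction signals.

    `profiles` maps (user, application) pairs to a preferred allocation
    (assumed structure), falling back to a default when no history
    exists. Heavier recent interaction biases toward more CPU share.
    """
    key = (user_id, task["application"])
    default = {"cpu_share": 0.25, "gpu": False}
    base = profiles.get(key, default)
    if len(interactions) > 10:  # assumed threshold for "heavy" use
        base = {**base, "cpu_share": min(1.0, base["cpu_share"] * 2)}
    return base
```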
Cloud access method for IoT devices, and devices performing the same
A cloud access method of an internet of things (IoT) device and devices performing the cloud access method are disclosed. The cloud access method using a cloud proxy function includes receiving a first resource retrieval request of a client device from a cloud, extracting, from the first resource retrieval request, a device identification (ID) of a device including a resource for which a resource retrieval is requested, and transmitting a second resource retrieval request of the client device to the device based on the device ID.
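The proxy step (extract the device ID from the first request, then build the second request for that device) can be sketched as below. The dict-based request encoding is an assumption; the abstract does not specify a wire format.

```python
def forward_resource_retrieval(first_request):
    """Cloud-proxy sketch: extract the device ID from a client's first
    resource retrieval request and build the second resource retrieval
    request to transmit to that device.

    The request shape ({"device_id": ..., "resource": ...}) is assumed
    for illustration only.
    """
    device_id = first_request["device_id"]
    second_request = {
        "resource": first_request["resource"],
        "via": "cloud-proxy",  # assumed marker for the proxied hop
    }
    return device_id, second_request
```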