Patent classifications
G06F9/505
MULTIPLE LOCKING OF RESOURCES AND LOCK SCALING IN CONCURRENT COMPUTING
Methods and systems are provided for dividing the resources of running processes into individually locked partitions, and for indirectly mapping keys of process resources to the lock of each partition. In computing systems implementing concurrent processing, applications may generate and destroy concurrently running processes with high frequency. Real-time security monitoring may cause the computing system to run monitoring processes that collect large volumes of data regarding system events occurring in the context of various other processes, causing threads of the security monitoring application to frequently write and read the in-memory resources of those processes. Indirect mapping of lock acquisition across a pool of locks provides scalable alleviation of the lock contention and thread blocking that result from computational concurrency, while handling read and write requests which arise at unpredictable times from kernel-space monitoring processes, and which target unpredictable resources of monitored user-space processes.
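The partitioning described above resembles classic lock striping: resource keys are hashed to a fixed pool of partition locks, so two threads block each other only when their keys land in the same partition. A minimal sketch, assuming an in-memory key/value store of process resources; the class and method names are illustrative, not from the patent.

```python
import hashlib
import threading


class StripedLockTable:
    """Resources are split across partitions, each guarded by its own
    lock; a key is mapped indirectly (via a stable hash) to one lock."""

    def __init__(self, num_partitions=16):
        self._locks = [threading.Lock() for _ in range(num_partitions)]
        self._partitions = [dict() for _ in range(num_partitions)]

    def _index(self, key):
        # Indirect mapping: key -> stable hash -> partition index.
        digest = hashlib.blake2b(repr(key).encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % len(self._locks)

    def write(self, key, value):
        i = self._index(key)
        with self._locks[i]:  # contend only on this one partition
            self._partitions[i][key] = value

    def read(self, key):
        i = self._index(key)
        with self._locks[i]:
            return self._partitions[i].get(key)
```

Scaling the partition count trades memory for reduced contention, which is the "lock scaling" aspect of the title.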
HYBRID COMPUTING SYSTEM MANAGEMENT
A method, a system, and a computer program product for hybrid computing system management are proposed. In the method, workload information associated with a set of application server instances running in a first computing system is obtained by a server controller in response to a scaling request, received from a request controller, for changing the number of instances in the set of application server instances. The set of application server instances serves at least one application running in a second computing system. A scaling decision indicating whether to change the number of instances in the set of application server instances is determined by a predictor based on the workload information from the server controller. The second computing system is enabled by the request controller to handle requests associated with the at least one application for the set of application server instances based on the scaling decision.
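The predictor's role can be sketched as a function that approves or rejects the requested change based on observed workload. The thresholds and the capacity model are assumptions for illustration; the abstract does not specify the prediction logic.

```python
def scaling_decision(requested_delta, workload, capacity_per_instance,
                     current_instances):
    """Approve a scale-up only if workload exceeds current capacity,
    and a scale-down only if the remaining instances could still
    absorb the workload. All names here are illustrative."""
    if requested_delta > 0:
        approve = workload > current_instances * capacity_per_instance
    elif requested_delta < 0:
        remaining = current_instances + requested_delta
        approve = workload <= remaining * capacity_per_instance
    else:
        approve = False
    return {"approved": approve, "change": requested_delta if approve else 0}
```

The request controller would then route application requests according to the returned decision.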
TECHNIQUES FOR MANAGING CONTAINER-BASED SOFTWARE SERVICES
One embodiment of the present invention sets forth a technique for executing one or more services in a technology stack. The technique includes deploying a first set of containers within an environment, wherein each container included in the first set of containers includes a first service that implements a first interface and a first shim that implements a second interface. The technique also includes transmitting a first request associated with the second interface to a first container included in the first set of containers, wherein the first request is processed by an instance of the first shim and an instance of the first service executing within the first container.
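The shim described above acts as an adapter: it exposes the second interface to callers while translating each request into a call on the first interface implemented by the service. A minimal sketch, with all class and field names assumed for illustration.

```python
class NativeService:
    """Stand-in for the 'first service', speaking the first interface."""

    def handle(self, text: str) -> str:
        return text.upper()


class Shim:
    """Implements the container's externally visible 'second interface'
    and forwards work to the first-interface service instance that
    runs in the same container."""

    def __init__(self, service: NativeService):
        self._service = service

    def process_request(self, request: dict) -> dict:
        # Translate a second-interface request into a first-interface call.
        result = self._service.handle(request["body"])
        return {"status": 200, "body": result}
```

Because every container pairs a service instance with a shim instance, callers only ever depend on the second interface, and services behind differing first interfaces remain interchangeable.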
DISTRIBUTED STORAGE SYSTEM AND VOLUME MANAGEMENT METHOD
In a distributed storage system that has a plurality of computer nodes, each having processors and a storage drive, and that provides a volume, each of the plurality of computer nodes provides a sub-volume, and the processor of each computer node manages the settings of that node's sub-volume. The volume can be configured from a plurality of sub-volumes provided by the plurality of computer nodes, and each sub-volume includes a plurality of logical storage areas formed by allocating physical storage areas of the storage drive to them. The plurality of computer nodes move the logical storage areas between sub-volumes that belong to the same volume and that are provided by different computer nodes.
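The migration step can be sketched as moving a logical area's mapping from one node's sub-volume to another within the same volume. The data model below is a toy stand-in (dicts in place of physical extents); all names are assumptions.

```python
class SubVolume:
    """One node's slice of a volume: logical areas backed by (here,
    simulated) physical storage areas of that node's drive."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.areas = {}  # area_id -> data standing in for a physical extent


def move_area(volume, area_id, src_node, dst_node):
    """Relocate a logical storage area between sub-volumes of the
    same volume provided by different computer nodes."""
    src, dst = volume[src_node], volume[dst_node]
    dst.areas[area_id] = src.areas.pop(area_id)
```

Because only the logical-to-physical mapping moves, the volume's address space as seen by clients is unchanged.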
WORKLOAD AWARE VIRTUAL PROCESSING UNITS
A processing unit is configured differently based on an identified workload, and each configuration of the processing unit is exposed to software (e.g., to a device driver) as a different virtual processing unit. Using these techniques, a processing system is able to provide different configurations of the processing unit to support different types of workloads, thereby conserving system resources. Further, by exposing the different configurations as different virtual processing units, the processing system is able to use existing device drivers or other system infrastructure to implement the different processing unit configurations.
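The exposure of per-workload configurations as distinct virtual devices can be sketched as a registry keyed by workload type, where each entry is published under its own device name so that unmodified drivers can target it. The configuration fields and naming scheme are assumptions for illustration.

```python
# Each configuration of one physical unit is exposed as a separate
# virtual processing unit; fields here are illustrative only.
CONFIGS = {
    "graphics": {"compute_units": 8, "fixed_function": True},
    "ml":       {"compute_units": 32, "fixed_function": False},
}


def virtual_units(physical_unit_id):
    """Publish one virtual device name per workload configuration of
    the given physical processing unit."""
    return {f"vpu:{physical_unit_id}:{workload}": cfg
            for workload, cfg in CONFIGS.items()}
```

A driver then binds to, say, `vpu:0:ml` exactly as it would to a physical device, which is how existing infrastructure is reused.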
Allocation of resources for a plurality of hosts
A method is presented for enabling allocation of resources for a plurality of hosts. The method is performed by a server (1) and comprises identifying (S100) a service running on one or more of the plurality of hosts, determining (S140) a stretch factor for a recurring load pattern for the service running on the one or more of the plurality of hosts, and storing (S150) the identified service together with the determined stretch factor. A server, a computer program and a computer program product are also presented.
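The abstract does not define the stretch factor; one plausible reading, used purely as an assumption in the sketch below, is how far the recurring pattern's peak "stretches" above its mean, which would tell an allocator how much headroom the service needs.

```python
def stretch_factor(load_samples):
    """Hypothetical definition: peak-to-mean ratio of one period of
    the recurring load pattern. This interpretation is an assumption,
    not taken from the patent."""
    mean = sum(load_samples) / len(load_samples)
    return max(load_samples) / mean if mean else 1.0


registry = {}


def store_service(service_id, load_samples):
    """Store the identified service together with its stretch factor,
    mirroring steps S140 and S150."""
    registry[service_id] = stretch_factor(load_samples)
```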
Method, apparatus, client terminal, and server for data processing
Embodiments of the present specification provide a method, an apparatus, a client terminal, and a server for data processing. The method includes: selecting, based on a data attribute of to-be-processed data, a target coordinating server from a plurality of coordinating servers, the plurality of coordinating servers belonging to a plurality of server clusters respectively; and sending a data processing request to the target coordinating server, such that the server cluster to which the target coordinating server belongs preferentially processes the data processing request, the data processing request being directed to the to-be-processed data.
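The selection step can be sketched as matching the data's attribute against what each coordinating server's cluster advertises. The record layout and the first-match policy are assumptions for illustration.

```python
def select_coordinator(data_attribute, coordinators):
    """Pick the first coordinating server whose cluster advertises
    the attribute of the to-be-processed data; the dict shape here
    is illustrative only."""
    for c in coordinators:
        if data_attribute in c["attributes"]:
            return c["server"]
    return None


def send_request(data_attribute, payload, coordinators):
    """Route the data processing request to the selected target."""
    target = select_coordinator(data_attribute, coordinators)
    return {"to": target, "payload": payload}
```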
AI ETHICS SCORES IN AUTOMATED ORCHESTRATION DECISION-MAKING
One example method includes receiving an orchestration automation request for an asset, identifying an ethics rule that applies to the asset, comparing the ethics rule to asset values contained in an AI ethics datastore, based on the comparing, identifying a list of assets that conform to the ethics rule, and when the asset appears in the list of assets that conform to the ethics rule, automatically placing the asset at an entity of a computing infrastructure.
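The rule check and conditional placement can be sketched as filtering the datastore with the ethics rule as a predicate, then placing the asset only if it survives the filter. The datastore shape and rule representation are assumptions; the abstract does not specify them.

```python
def conforming_assets(rule, datastore):
    """datastore maps asset -> attribute values; rule is a predicate
    over those values (both shapes assumed for illustration)."""
    return [asset for asset, attrs in datastore.items() if rule(attrs)]


def place(asset, rule, datastore, entity):
    """Automatically place the asset at the entity only when it
    appears in the list of assets conforming to the ethics rule."""
    if asset in conforming_assets(rule, datastore):
        return f"{asset}@{entity}"
    return None  # placement blocked by the ethics rule
```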
CITY MANAGEMENT SUPPORT APPARATUS, CITY MANAGEMENT SUPPORT METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
The city management support apparatus is an apparatus supporting management of a city in which a plurality of services sharing a physical resource are provided. The city management support apparatus receives an input of information on a provision status of the resource, and receives an input of a service definition for each of the plurality of services. The city management support apparatus calculates a time transition of dependency of the plurality of services on the resource based on the service definition for each of the plurality of services, and detects a competition for acquisition of the resource among the plurality of services based on the time transition of the dependency. The city management support apparatus generates a proposed amendment to the service definition for at least one of the plurality of services so as to mitigate the competition for acquisition of the resource among the plurality of services.
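The detection step can be sketched by representing each service's "time transition of dependency" as per-time-slot demand on the shared resource, then flagging the slots where total demand exceeds what the resource can provide. The data shapes are assumptions for illustration.

```python
def detect_competition(dependencies, capacity):
    """dependencies: service -> list of per-time-slot demand on the
    shared resource; returns the time slots where aggregate demand
    exceeds the resource's capacity (shapes assumed)."""
    slots = len(next(iter(dependencies.values())))
    totals = [sum(series[t] for series in dependencies.values())
              for t in range(slots)]
    return [t for t, total in enumerate(totals) if total > capacity]
```

A proposed amendment would then shift or shrink one service's demand in the flagged slots until the list comes back empty.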
TASK ALLOCATION ACROSS PROCESSING UNITS OF A DISTRIBUTED SYSTEM
An organization's distributed data storage and processing system produces an enormous volume of source data (such as log files or other statistics). The organization uses a data item processing system to process the source data in prioritized chunks, and to further assign the data items within the chunks to different processing units based on estimates of processing completion time. In this way, it becomes feasible to process the source data for analysis by consumer clients within a reasonable amount of time, and the aggregate use of the processing units is made more efficient.
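The chunk-to-unit assignment can be sketched with a standard greedy heuristic: hand each chunk to the unit that will finish earliest according to the completion-time estimates, taking longer chunks first (the longest-processing-time heuristic). This is an illustrative strategy, not necessarily the patented one.

```python
import heapq


def assign_chunks(estimates, num_units):
    """estimates: chunk -> estimated processing time. Greedily assign
    each chunk to the processing unit with the earliest projected
    finish time, largest chunks first."""
    heap = [(0.0, unit) for unit in range(num_units)]  # (finish_time, unit)
    heapq.heapify(heap)
    assignment = {unit: [] for unit in range(num_units)}
    for chunk, est in sorted(estimates.items(), key=lambda kv: -kv[1]):
        finish, unit = heapq.heappop(heap)
        assignment[unit].append(chunk)
        heapq.heappush(heap, (finish + est, unit))
    return assignment
```

Balancing projected finish times is what makes the aggregate use of the processing units more efficient and keeps end-to-end latency within a reasonable bound.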