Patent classifications
G06F9/5066
ACCESSING TOPOLOGICAL MAPPING OF CORES
A method, computer program product, and system include a processor(s) issuing an instruction that includes processing core information, namely locations of processing cores of the computing system (logical cores and/or physical cores), and an operator selection. The processor(s) sets security parameters for the information returned by the instruction, which is topological information for mapping the logical cores to the physical cores. The processor(s) obtains the topological information and utilizes an operating system to map the logical cores to the physical cores.
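As a rough illustration of how an operating system might consume such topological information, the sketch below groups logical cores by the physical core they map to. The dictionary standing in for the instruction's result is an assumption for illustration, not the patented format.

```python
# Hedged sketch: group logical core IDs by physical core, given a hypothetical
# logical-to-physical mapping such as the instruction described above might return.
from collections import defaultdict

def group_logical_by_physical(topology):
    """Group logical core IDs by the physical core they map to."""
    by_physical = defaultdict(list)
    for logical_id, physical_id in topology.items():
        by_physical[physical_id].append(logical_id)
    return dict(by_physical)

if __name__ == "__main__":
    # Assumed topology: logical core -> physical core (SMT siblings share a physical core).
    topology = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
    print(group_logical_by_physical(topology))   # {0: [0, 1], 1: [2, 3], 2: [4, 5]}
```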
CPU utilization for service level I/O scheduling
One or more aspects of the present disclosure relate to service level input/output (I/O) scheduling to control central processing unit (CPU) utilization. I/O operations are processed by one or more of a first CPU pool and a second CPU pool of two or more CPU pools. The second CPU pool processes those I/O operations determined to stall any of the CPU cores.
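A minimal sketch of the two-pool idea, using thread pools as stand-ins for CPU pools; the stall heuristic and pool sizes are assumptions, not the claimed policy.

```python
# Route I/O operations to one of two pools; operations expected to stall a core
# go to the second pool so they cannot starve the first.
from concurrent.futures import ThreadPoolExecutor

fast_pool = ThreadPoolExecutor(max_workers=4)   # first CPU pool: non-stalling I/O
stall_pool = ThreadPoolExecutor(max_workers=2)  # second CPU pool: I/O expected to stall

def is_stalling(io_op):
    # Assumed heuristic: large synchronous transfers are treated as stalling.
    return io_op.get("bytes", 0) > 1_000_000 and io_op.get("sync", False)

def submit_io(io_op, handler):
    pool = stall_pool if is_stalling(io_op) else fast_pool
    return pool.submit(handler, io_op)

if __name__ == "__main__":
    future = submit_io({"bytes": 4096, "sync": True}, lambda op: f"done {op['bytes']} bytes")
    print(future.result())
```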
Virtual machine deployment method and OMM virtual machine
This application describes a virtual machine deployment method and an operation and maintenance management (OMM) virtual machine. The method includes: obtaining, by an OMM virtual machine, a quantity and a specification of service virtual machines created in virtual network function application software to which the OMM virtual machine belongs; and determining, by the OMM virtual machine based on load that needs to be carried by the application software and the quantity and the specification of the service virtual machines, a module to be configured for each service virtual machine. The described implementations avoid or reduce waste of virtual machine resources.
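One way such a determination could look in code is sketched below: given the quantity and specification (here, a capacity number) of the service virtual machines and the load each module must carry, assign modules to VMs. The capacity figures, module names, and greedy rule are assumptions for illustration.

```python
# Assign application modules to existing service VMs based on load and VM capacity.
def assign_modules(vm_specs, module_loads):
    """vm_specs: {vm_name: capacity}; module_loads: {module_name: required capacity}."""
    assignment = {vm: [] for vm in vm_specs}
    remaining = dict(vm_specs)
    # Place the heaviest modules first, each on the VM with the most spare capacity.
    for module, load in sorted(module_loads.items(), key=lambda kv: -kv[1]):
        vm = max(remaining, key=remaining.get)
        if remaining[vm] < load:
            raise ValueError(f"not enough VM capacity for module {module!r}")
        assignment[vm].append(module)
        remaining[vm] -= load
    return assignment

if __name__ == "__main__":
    print(assign_modules({"svc-vm-1": 8, "svc-vm-2": 4},
                         {"signaling": 5, "media": 4, "logging": 1}))
    # {'svc-vm-1': ['signaling', 'logging'], 'svc-vm-2': ['media']}
```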
Computation graph mapping in heterogeneous computer system
The present disclosure relates to a method for scheduling a computation graph on heterogeneous computing resources. The method comprises generating an augmented computation graph that includes a first set of replica nodes corresponding to a first node in the computation graph and a second set of replica nodes corresponding to a second node, wherein the replica nodes of the first set are connected by edges to the replica nodes of the second set according to the dependency between the first node and the second node. The method further comprises adapting the augmented computation graph to include performance values for the edges, the replica nodes of the first set, and the replica nodes of the second set, and determining a path across the adapted computation graph via one replica node of the first set and one replica node of the second set based on the performance values.
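A toy version of that idea is sketched below for two dependent nodes and two devices: each node gets one replica per device, replicas carry compute costs, edges carry transfer costs, and the cheapest combination is chosen. The device names and cost figures are invented, and the full method generalizes this to a path search over the whole adapted graph.

```python
# Enumerate one replica per device for nodes A and B and pick the cheapest placement.
import itertools

node_cost = {   # per-replica (node, device) compute cost
    ("A", "cpu"): 5, ("A", "gpu"): 2,
    ("B", "cpu"): 4, ("B", "gpu"): 1,
}
edge_cost = {   # transfer cost along the edge between the chosen replicas
    ("cpu", "cpu"): 0, ("cpu", "gpu"): 3,
    ("gpu", "cpu"): 3, ("gpu", "gpu"): 0,
}

def best_placement():
    candidates = []
    for dev_a, dev_b in itertools.product(["cpu", "gpu"], repeat=2):
        total = node_cost[("A", dev_a)] + edge_cost[(dev_a, dev_b)] + node_cost[("B", dev_b)]
        candidates.append((total, dev_a, dev_b))
    return min(candidates)

if __name__ == "__main__":
    total, dev_a, dev_b = best_placement()
    print(f"A -> {dev_a}, B -> {dev_b}, cost = {total}")   # A -> gpu, B -> gpu, cost = 3
```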
Surrogate process creation technique for high process-per-server scenarios
A system and method for launching parallel processes on a server configured to process a number of parallel processes. A request is received from a parallel application to start a number of parallel processes. In response to this request, a launcher creates a surrogate, which inherits communication channels from the launcher. The surrogate executes activities related to the launch of the parallel processes and then launches them. Once the parallel processes are launched, the surrogate is terminated.
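A minimal POSIX-only sketch of the surrogate pattern, with os.fork and subprocess standing in for the launcher's mechanics; the pipe protocol and worker commands are assumptions.

```python
# Launcher forks a surrogate; the surrogate inherits the pipe, launches the workers,
# reports back, and terminates.
import os
import subprocess
import sys

def launch_via_surrogate(worker_cmds):
    read_fd, write_fd = os.pipe()          # channel inherited by the surrogate
    pid = os.fork()
    if pid == 0:                           # surrogate process
        os.close(read_fd)
        procs = [subprocess.Popen(cmd) for cmd in worker_cmds]
        os.write(write_fd, b"%d workers launched\n" % len(procs))
        os.close(write_fd)
        os._exit(0)                        # surrogate exits once the workers are running
    os.close(write_fd)                     # launcher keeps only the read end
    with os.fdopen(read_fd) as status:
        print(status.read().strip())
    os.waitpid(pid, 0)                     # reap the surrogate, not the workers

if __name__ == "__main__":
    launch_via_surrogate([[sys.executable, "-c", "print('worker running')"]] * 2)
```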
Balancing data partitions among dynamic services in a cloud environment
A method includes identifying, by a first instance of a service, a first number of data partitions of a data source to be processed by the service and a second number of instances of the service available to process the first number of data partitions. The method further includes separating the first number of data partitions into a first set of data partitions and a second set of data partitions in view of the second number of instances of the service, determining a target number of data partitions from the first set of data partitions to be claimed by each of the second number of instances of the service, and claiming, by the first instance of the service, the target number of data partitions from the first set of data partitions and up to one data partition from the second set of data partitions.
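The claiming arithmetic reads directly as an integer division with a remainder; a small sketch, assuming partitions are identified by contiguous integer IDs:

```python
# Each instance claims an equal share from the evenly divisible set, plus at most
# one partition from the leftover set.
def claim_partitions(partition_ids, instance_count, instance_index):
    target = len(partition_ids) // instance_count           # equal share per instance
    first_set = partition_ids[: target * instance_count]    # evenly divisible portion
    second_set = partition_ids[target * instance_count :]   # leftover portion
    claimed = list(first_set[instance_index * target : (instance_index + 1) * target])
    if instance_index < len(second_set):                     # up to one extra partition
        claimed.append(second_set[instance_index])
    return claimed

if __name__ == "__main__":
    parts = list(range(10))                                  # 10 partitions, 3 instances
    for i in range(3):
        print(i, claim_partitions(parts, 3, i))
    # 0 [0, 1, 2, 9]   1 [3, 4, 5]   2 [6, 7, 8]
```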
SERVER SIDE CROSSFADING FOR PROGRESSIVE DOWNLOAD MEDIA
Systems and methods are provided to implement and facilitate cross-fading, interstitials, and other effects/processing of two or more media elements in a personalized media delivery service. Effects or crossfade processing can occur on the broadcast, publisher, or server side, yet still be personalized to a specific user in a manner that minimizes processing on the downstream side or client device. The cross-fade can be implemented after decoding, processing, re-encoding, and rechunking the relevant chunks of each component clip. Alternatively, the cross-fade or other effect can be implemented on the relevant chunks in the compressed domain, thus avoiding any loss of quality from re-encoding. A large-scale personalized content delivery service can limit the processing to essentially the first and last chunks of any file, there being no need to process the full clip.
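For the decoded-domain case, the boundary operation reduces to blending samples from the last chunk of one clip with the first chunk of the next. A minimal sketch, assuming a linear fade over equal-length PCM buffers (the actual effect processing may differ):

```python
# Linearly crossfade two equal-length sample buffers (at least two samples each).
def crossfade(tail_a, head_b):
    n = len(tail_a)
    assert n == len(head_b) and n >= 2
    return [a * (1 - i / (n - 1)) + b * (i / (n - 1))
            for i, (a, b) in enumerate(zip(tail_a, head_b))]

if __name__ == "__main__":
    print(crossfade([1.0, 1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0, 0.0]))
    # [1.0, 0.75, 0.5, 0.25, 0.0]
```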
TECHNIQUES FOR DISTRIBUTED PROCESSING TASK PORTION ASSIGNMENT
Various embodiments are generally directed to techniques for assigning portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system. An apparatus to assign processor component cores to perform task portions includes a processor component; an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; and a core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance point between compute time and power consumption needed to execute the instances of the task portion routines. Other embodiments are described and claimed.
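A compact sketch of selecting cores against such a balance point, assuming a simple weighted score over per-core compute time and power figures; the weighting and the figures are illustrative, not the claimed method.

```python
# Pick the needed number of cores that best match a compute-time / power trade-off.
def select_cores(available_cores, needed, balance):
    """balance near 1 favors fast compute; balance near 0 favors low power."""
    def score(core):
        return balance * core["time"] + (1 - balance) * core["power"]
    return sorted(available_cores, key=score)[:needed]

if __name__ == "__main__":
    cores = [
        {"id": "base-0", "time": 1.0, "power": 9.0},
        {"id": "sub-3",  "time": 4.0, "power": 2.0},
        {"id": "sub-7",  "time": 3.0, "power": 3.0},
    ]
    print([c["id"] for c in select_cores(cores, needed=2, balance=0.3)])   # ['sub-3', 'sub-7']
```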
DISTRIBUTED TASK SYSTEM AND SERVICE PROCESSING METHOD BASED ON INTERNET OF THINGS
A distributed task system based on the internet of things and a related service processing method are provided. The system can include a trigger for receiving data sent by a smart device or a user in the internet of things, a task scheduling module, and a plurality of processing units. The task scheduling module can instantiate a service processing flow according to the data received by the trigger, and can sequentially schedule and start multiple processing units according to the service processing flow. Each processing unit can execute a certain stage of the service processing flow and send the execution result of that stage back to the task scheduling module. The task scheduling module can then notify the smart device or the user of the final execution result from the processing unit that executes the final stage of the service processing flow.
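A skeleton of the trigger-to-scheduler-to-processing-unit flow, with placeholder stage functions and a print-based notification standing in for the IoT messaging:

```python
# Each processing unit handles one stage and reports back to the scheduler, which
# finally notifies the originating device or user.
def stage_parse(data):
    return {"device": data["device"], "reading": float(data["payload"])}

def stage_evaluate(result):
    result["alert"] = result["reading"] > 30.0
    return result

PROCESSING_UNITS = [stage_parse, stage_evaluate]   # one unit per stage of the flow

def schedule(trigger_data, notify):
    result = trigger_data
    for unit in PROCESSING_UNITS:
        result = unit(result)          # execution result returns to the scheduler
    notify(result)                     # final-stage result goes back to the originator

if __name__ == "__main__":
    schedule({"device": "thermo-1", "payload": "31.5"}, notify=print)
```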
FPGA acceleration for serverless computing
In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
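A rough sketch of the two-step placement, assuming hypothetical host attributes (free capacity and available FPGA kernels) and a capacity-based initial criterion:

```python
# Initial placement of the whole task, then supplemental placement of any function
# the first host's FPGAs cannot accelerate.
def place_task(task_functions, hosts):
    first = max(hosts, key=lambda h: h["free_capacity"])     # assumed initial criterion
    placement = {fn: first["name"] for fn in task_functions}
    for fn, kernel in task_functions.items():
        if kernel and kernel not in first["fpga_kernels"]:
            second = next(h for h in hosts if kernel in h["fpga_kernels"])
            placement[fn] = second["name"]
    return placement

if __name__ == "__main__":
    hosts = [
        {"name": "host-a", "free_capacity": 8, "fpga_kernels": {"crypto"}},
        {"name": "host-b", "free_capacity": 3, "fpga_kernels": {"video-encode"}},
    ]
    task = {"authenticate": "crypto", "transcode": "video-encode", "log": None}
    print(place_task(task, hosts))
    # {'authenticate': 'host-a', 'transcode': 'host-b', 'log': 'host-a'}
```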