G06F9/3555

AUTOMATED PREDICTIVE INFRASTRUCTURE SCALING
20230116810 · 2023-04-13 ·

Methods, apparatus, and processor-readable storage media for automated predictive infrastructure scaling are provided herein. An example computer-implemented method includes generating infrastructure scaling predictions by processing, using a motion-based model, historical data pertaining to a number of requests for resources for a given time interval and historical data pertaining to a rate of change in the number of requests; determining a trend based on moving average values pertaining to the historical data; determining a utilization target related to the resources based on the trend; calculating a standard deviation for resource demand based on historical utilization data pertaining to the resources; separating the standard deviation into zones related to levels of utilization of the resources; identifying one of the zones for infrastructure scaling in a future time interval based on the utilization target; updating the predictions by fitting the predictions to the identified zone; and performing automated actions based on the updated predictions.
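
A minimal Python sketch of the claimed pipeline (moving-average trend, utilization target, standard-deviation zones, zone fitting); the function name, window size, and zone count are hypothetical illustrations, not the patented method:

```python
import statistics

def predict_scaling(request_counts, window=3, num_zones=3):
    """Sketch: trend from moving averages, a utilization target from the
    trend, and a prediction fitted to a standard-deviation zone."""
    # Moving average over a trailing window smooths the request counts.
    moving_avg = [statistics.mean(request_counts[max(0, i - window + 1):i + 1])
                  for i in range(len(request_counts))]
    # Trend: change between the last two moving-average values.
    trend = moving_avg[-1] - moving_avg[-2]
    # Utilization target: last smoothed value extrapolated by the trend.
    target = moving_avg[-1] + trend
    # Standard deviation of historical demand, split into equal-width zones.
    sigma = statistics.stdev(request_counts)
    zone_width = (2 * sigma) / num_zones if sigma else 1.0
    lowest = min(request_counts)
    zone = min(int(max(target - lowest, 0) // zone_width), num_zones - 1)
    # Fit the prediction to the identified zone's midpoint.
    fitted = lowest + (zone + 0.5) * zone_width
    return trend, target, zone, fitted
```

For a steadily growing load such as `[10, 12, 14, 16, 18, 20]`, the sketch extrapolates the trend and snaps the prediction to the highest-utilization zone.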

Physical Quantity Detection Device
20220318010 · 2022-10-06 ·

A physical quantity detection device is provided that can improve arithmetic resolution while preventing an increase in memory capacity. A physical quantity detection device 100 according to the present invention includes: a physical quantity detection sensor that detects a physical quantity of a measurement target gas; a storage unit 120 that records a correction amount corresponding to a detection value of the physical quantity detection sensor; and an arithmetic unit 110 that performs output adjustment of the detection value using the detection value and the correction amount. The resolution of the storage unit 120 is lower than the arithmetic resolution of the arithmetic unit 110.
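
One plausible reading of how arithmetic resolution can exceed storage resolution is interpolation between coarsely stored correction amounts; this Python sketch (function name, table layout, and step size are assumptions, not from the publication) illustrates the idea:

```python
def corrected_output(raw, correction_table, table_step):
    """Interpolate a coarse correction table (storage resolution lower
    than arithmetic resolution) to finely correct a detection value."""
    # Index of the stored entry at or below the raw detection value.
    idx = min(int(raw // table_step), len(correction_table) - 2)
    frac = (raw - idx * table_step) / table_step
    # Linear interpolation between adjacent stored correction amounts.
    corr = correction_table[idx] * (1 - frac) + correction_table[idx + 1] * frac
    return raw + corr
```

With only three stored corrections at a step of 10, the arithmetic unit can still produce a distinct correction for every intermediate detection value.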

FORWARD TENSOR AND ACTIVATION SCALING FOR LOWER PRECISION NEURAL NETWORKS
20230205544 · 2023-06-29 ·

A processing device is provided which comprises memory configured to store data and a processor configured to execute a forward activation of a neural network using a low-precision floating point (FP) format, scale up values of numbers represented in the low-precision FP format, and process the scaled-up values of the numbers as non-zero values for the numbers. The processor is configured to scale up the values of one or more numbers, via scaling parameters, to a scaled-up value equal to or greater than a floor of a dynamic range of the low-precision FP format. The scaling parameters are, for example, static parameters or, alternatively, parameters determined during execution of the neural network.
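
A hedged Python sketch of the scaling step: values that would flush to zero below the format's floor are scaled up by a power of two (which preserves mantissa bits in binary FP) until the smallest magnitude is at or above the floor. The function name and the dynamic-scale search are illustrative assumptions:

```python
def scale_for_low_precision(values, floor, scale=None):
    """Scale values up so the smallest non-zero magnitude lands at or
    above the low-precision format's dynamic-range floor."""
    smallest = min(abs(v) for v in values if v != 0)
    if scale is None:
        # Dynamically determined parameter; a static one could be passed in.
        scale = 1.0
        while smallest * scale < floor:
            scale *= 2.0   # power-of-two scaling preserves mantissas
    return [v * scale for v in values], scale
```

For example, with a floor of 6.1e-5 (roughly the smallest normal FP16 value), an activation of 1e-6 survives as a non-zero number after scaling.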

AUTOSCALING AND THROTTLING IN AN ELASTIC CLOUD SERVICE

Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
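
The foreground throttling behavior described (lower the load without overshooting, minimize oscillation, reduce the throttle quickly) can be sketched as a multiplicative-decrease, fast-restore loop; the class name, capacity model, and step sizes below are illustrative assumptions:

```python
class DynamicThrottle:
    """Local throttle sketch: cut the admitted load multiplicatively when
    overloaded, restore it quickly once the load subsides."""
    def __init__(self, capacity=100):
        self.capacity = capacity   # sustainable load (assumed known)
        self.limit = capacity      # currently admitted load

    def update(self, observed_load):
        if observed_load > self.capacity:
            # Multiplicative decrease lowers load without overshooting.
            self.limit = max(1, self.limit // 2)
        else:
            # Reduce the throttle quickly once load subsides.
            self.limit = min(self.capacity, self.limit * 2)
        return self.limit
```

Halving on overload and doubling on recovery converges in a few steps either way, which keeps oscillation short-lived; centralized autoscaling would adjust `capacity` itself in the background.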

System and method for managing execution of processes on a cluster of processing devices

Disclosed is a method and system for managing execution of processes on a cluster of processing devices by a supervising device. The method comprises receiving memory consumption information from each of a plurality of processing devices executing a plurality of processes. The method further comprises receiving information related to swapping of a new process from at least one processing device of the processing devices while memory available on the at least one processing device is insufficient to execute the new process. The method further comprises terminating either the new process being swapped or a process executing on the at least one processing device. The method further comprises instructing another processing device having sufficient memory available to execute whichever of the new process being swapped or the process executing on the at least one processing device is terminated on the at least one processing device.
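
A minimal sketch of the supervisor's placement step, assuming devices report free memory as a dictionary (the data model and names are hypothetical, not from the disclosure):

```python
def reassign_terminated(devices, proc_name, mem_needed):
    """Supervisor sketch: find another device with sufficient free memory
    for the terminated process and instruct it to run that process."""
    for dev, info in devices.items():
        if info['free'] >= mem_needed:
            info['procs'][proc_name] = mem_needed
            info['free'] -= mem_needed
            return dev                  # device instructed to execute it
    return None                         # no device has sufficient memory
```

The first-fit choice here is an assumption; the claim only requires that the chosen device has sufficient memory available.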

Ordered Event Stream Event Retention
20220035709 · 2022-02-03 ·

Retention of events of an ordered event stream is disclosed. Expiration of events stored in a segment of an ordered event stream (OES) can be desirable. New events are added to a head of an OES segment, and pruning events from a tail of the OES segment can be valuable. Processing applications can register a processing scheme for a segment, e.g., at-least-once processing, exactly-once processing, etc., and can generate checkpoints indicating a degree of advancement in processing events of the segment. The ordered event stream can determine a cut point indicative of a progress point before which events of an OES can be marked as ready for expiration. However, events that are marked for expiration can be retained to allow processing based on a checkpoint, e.g., expiration of the event can be refused until there is an assurance the event was read by the processing application.
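
A sketch of the cut-point logic, assuming checkpoints are stream positions keyed by application name (a representation the abstract does not specify):

```python
def cut_point(checkpoints, tail, head):
    """Events before the returned position may expire; the slowest
    registered reader's checkpoint gates actual deletion."""
    if not checkpoints:
        return tail                 # no readers registered: retain everything
    slowest = min(checkpoints.values())
    # Clamp to the segment bounds so expiration never passes the head.
    return max(tail, min(slowest, head))
```

Taking the minimum over all checkpoints is what provides the assurance that every registered application has read an event before it is allowed to expire.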

ACCESSING DATA IN MULTI-DIMENSIONAL TENSORS
20170220352 · 2017-08-03 ·

Methods, systems, and apparatus, including an apparatus for processing an instruction for accessing an N-dimensional tensor, the apparatus including multiple tensor index elements and multiple dimension multiplier elements, where each of the dimension multiplier elements has a corresponding tensor index element. The apparatus includes one or more processors configured to obtain an instruction to access a particular element of an N-dimensional tensor, where the N-dimensional tensor has multiple elements arranged across each of the N dimensions, and where N is an integer that is equal to or greater than one; determine, using one or more tensor index elements of the multiple tensor index elements and one or more dimension multiplier elements of the multiple dimension multiplier elements, an address of the particular element; and output data indicating the determined address for accessing the particular element of the N-dimensional tensor.
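
The index-times-multiplier address computation corresponds to a standard row-major stride calculation; a hedged Python sketch (the row-major layout and the base/element-size parameters are assumptions for illustration):

```python
def tensor_element_address(indices, dims, base=0, elem_size=1):
    """Dimension multipliers are row-major strides; the address is the
    base plus each tensor index times its corresponding multiplier."""
    multipliers = [1] * len(dims)
    for i in range(len(dims) - 2, -1, -1):
        multipliers[i] = multipliers[i + 1] * dims[i + 1]
    # Sum of index * multiplier over every dimension.
    offset = sum(idx * m for idx, m in zip(indices, multipliers))
    return base + offset * elem_size
```

For a 2x3x4 tensor, element (1, 2, 3) sits at offset 1*12 + 2*4 + 3*1 = 23.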

AN APPARATUS AND METHOD FOR SPECULATIVELY VECTORISING PROGRAM CODE
20220236990 · 2022-07-28 ·

An apparatus and method are provided for speculatively vectorising program code. The apparatus includes processing circuitry for executing program code, the program code including an identified code region comprising at least a plurality of speculative vector memory access instructions. Execution of each speculative vector memory access instruction is employed to perform speculative vectorisation of a series of scalar memory access operations using a plurality of lanes of processing. Tracking storage is used to maintain, for each speculative vector memory access instruction, tracking information providing an indication of a memory address being accessed within each lane. Checking circuitry then references the tracking information during execution of the identified code region by the processing circuitry, in order to detect any inter-lane memory hazard resulting from the execution of the plurality of speculative vector memory access instructions.
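
A simplified Python model of the hazard check, assuming one vector load and one vector store whose lanes correspond to consecutive scalar iterations (the per-lane address lists stand in for the tracking storage):

```python
def inter_lane_hazard(load_addrs, store_addrs):
    """A load in a later lane reading an address stored by an earlier
    lane is a hazard: scalar execution would have seen the store first,
    but the vectorised load ran before it."""
    for i, st in enumerate(store_addrs):
        for j in range(i + 1, len(load_addrs)):
            if load_addrs[j] == st:
                return True
    return False
```

Real checking circuitry would compare address ranges rather than exact addresses, but the lane-ordering rule is the same.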

LOGIC SCALING SETS FOR CLOUD-LIKE ELASTICITY OF LEGACY ENTERPRISE APPLICATIONS
20210409347 · 2021-12-30 ·

Methods, systems, and computer-readable storage media for determining, by an instance manager and from a pattern associated with a system executing within a landscape, that a status of the system is to change to scaled-in, the pattern being absent any reference to instances of systems executed within landscapes; in response, identifying, by the instance manager and from a logic scaling set that is associated with the system, one or more instances of the system that are able to be scaled-in; selecting, by the instance manager, at least one instance of the one or more instances; and executing, by the instance manager, scaling of the system based on the at least one instance.
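
A minimal sketch of the division of labor the claim describes: the pattern only carries a status, while the logic scaling set determines which instances may actually be removed. Names and data shapes here are hypothetical:

```python
def scale_in(pattern_status, scalable_set, running):
    """The pattern says the status becomes 'scaled-in' but names no
    instances; the logic scaling set supplies the removable ones."""
    if pattern_status != 'scaled-in':
        return list(running), None
    candidates = [i for i in running if i in scalable_set]
    if not candidates:
        return list(running), None      # nothing in the set may be removed
    victim = candidates[-1]             # select at least one instance
    return [i for i in running if i != victim], victim
```

Keeping instance knowledge out of the pattern is what lets the same pattern drive scaling for legacy systems whose instance topology varies per landscape.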

VNF LIFECYCLE MANAGEMENT METHOD AND APPARATUS
20210389970 · 2021-12-16 ·

In a method of managing the lifecycle of a virtual network function (VNF), a VNF manager sends a pre-notification to the VNF to notify the VNF in advance that the VNF is to enter a target stage of the lifecycle. After receiving a response to the pre-notification from the VNF, the VNF manager controls the VNF to enter the target stage of the lifecycle.
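
The pre-notification handshake can be sketched in a few lines of Python; the class and method names, the `'ack'` response, and the preparation step are illustrative assumptions, not the claimed interface:

```python
class VNF:
    """Simplified VNF that prepares for a lifecycle stage when pre-notified."""
    def on_pre_notification(self, target_stage):
        self.prepared_for = target_stage    # e.g. drain traffic, flush state
        return 'ack'

class VNFManager:
    """Sends the pre-notification first; only after the VNF responds does
    the manager drive it into the target lifecycle stage."""
    def enter_stage(self, vnf, target_stage):
        if vnf.on_pre_notification(target_stage) == 'ack':
            vnf.stage = target_stage
            return True
        return False
```

The point of the advance notice is that the VNF gets a chance to prepare (or object) before the stage transition, rather than being notified after the fact.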