G06F9/463

User defined logical spread placement groups for computing resources within a computing environment

Customers of a computing resource service provider may transmit requests to instantiate compute instances associated with a plurality of logical partitions. The compute instances may be executed by a server computer system associated with a particular logical partition of the plurality of logical partitions. For example, a compute service may determine a set of server computer systems that are capable of executing the compute instances based at least in part on placement information and/or a diversity constraint of the plurality of logical partitions.

Identifying firmware functions executed in a call chain prior to the occurrence of an error condition

Technologies are disclosed for identifying firmware functions that were executed in a call chain prior to the occurrence of an error condition, such as an assert or an exception. In particular, a search is made from an instruction pointer (“IP”) for a memory address containing a signature identifying a firmware module. The firmware module that includes a function that generated the error condition can be identified based on the memory address. The name of the function that generated the error condition can be identified using a function mapping file. Previous functions in the same call chain are identified and their names determined using the function mapping file. Output can then be generated that includes the name of the firmware module that includes the function that generated the error condition, the name of the function that generated the error condition, and the names of other functions in the same call chain.
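The backward search from the instruction pointer can be illustrated with a flat memory image. The signature bytes, mapping-file format, and function names below are hypothetical stand-ins for whatever the firmware actually uses.

```python
SIGNATURE = b"_FW_MOD_"  # hypothetical module signature placed at the module base

def find_module_base(memory: bytes, ip: int) -> int:
    """Scan downward from the instruction pointer for the memory address
    containing the module signature, identifying the firmware module."""
    for addr in range(ip, -1, -1):
        if memory[addr:addr + len(SIGNATURE)] == SIGNATURE:
            return addr
    raise ValueError("no module signature found below the instruction pointer")

def resolve_function(function_map, offset):
    """function_map: sorted (start_offset, name) pairs from a function
    mapping file. Returns the name of the function covering the offset."""
    name = None
    for start, fn in function_map:
        if start <= offset:
            name = fn
        else:
            break
    return name

# simulated image: 64 bytes of padding, then a module starting with its signature
memory = bytes(64) + SIGNATURE + bytes(120)
base = find_module_base(memory, ip=150)
func = resolve_function(
    [(0, "init"), (40, "read_sensor"), (80, "assert_handler")], 150 - base)
```

Repeating the lookup for each saved return address on the stack would recover the names of the earlier functions in the same call chain.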

CHARACTERIZING OPERATION OF SOFTWARE APPLICATIONS HAVING LARGE NUMBER OF COMPONENTS
20210224102 · 2021-07-22

An aspect of the present disclosure facilitates characterizing the operation of software applications having a large number of components. In one embodiment, a digital processing system receives first data indicating invocation types and corresponding invocation counts at an entry component for multiple block durations, where the entry component causes execution of internal components of the software application. The system also receives second data indicating values for a processing metric at the internal components for the same block durations. The system then constructs, for each internal component, a corresponding component model correlating the values for the processing metric at the internal component indicated in the second data to the invocation types and invocation counts of the entry component indicated in the first data. The component models can aid in the performance management of the software application.
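One plausible form for such a component model is a linear fit: the processing metric in each block duration is modeled as a weighted sum of the entry-component invocation counts by type. The patent does not specify the model class, so the least-squares formulation below is an assumption.

```python
def solve(A, b):
    """Solve the small linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_component_model(invocations, metrics):
    """invocations: one dict {invocation_type: count} per block duration.
    metrics: observed processing-metric value per block duration.
    Returns an estimated per-invocation cost for each type, via the
    normal equations X^T X w = X^T y."""
    types = sorted({t for block in invocations for t in block})
    X = [[block.get(t, 0) for t in types] for block in invocations]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X)))
            for j in range(len(types))] for i in range(len(types))]
    Xty = [sum(X[k][i] * metrics[k] for k in range(len(X)))
           for i in range(len(types))]
    return dict(zip(types, solve(XtX, Xty)))

# synthetic data: reads cost 1 ms, writes cost 3 ms at this internal component
blocks = [{"read": 10, "write": 2}, {"read": 4, "write": 8}, {"read": 6, "write": 6}]
cpu_ms = [10 * 1.0 + 2 * 3.0, 4 * 1.0 + 8 * 3.0, 6 * 1.0 + 6 * 3.0]
model = fit_component_model(blocks, cpu_ms)
```

Given such a model per internal component, a performance manager can predict the metric at each component from the invocation mix observed at the entry component.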

Initialization of Parameters for Machine-Learned Transformer Neural Network Architectures

An online system trains a transformer architecture with an initialization method that allows the transformer architecture to be trained without normalization layers or learning rate warmup, resulting in significant improvements in computational efficiency for transformer architectures. Specifically, an attention block included in an encoder or a decoder of the transformer architecture generates the set of attention representations by applying a key matrix to the input key, a query matrix to the input query, and a value matrix to the input value to generate an output, and applying an output matrix to the output to generate the set of attention representations. The initialization method may be performed by scaling the parameters of the value matrix and the output matrix by a factor inversely proportional to the number of encoders or the number of decoders.
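A minimal sketch of that initialization is shown below. The Xavier-style base initialization and the exact exponent of the depth-dependent factor (here an inverse square root of the layer count) are assumptions; the abstract only states that the value and output matrices are scaled by a factor inverse to the number of encoders or decoders.

```python
import math
import random

def init_matrix(rows, cols, scale=1.0):
    # Xavier-uniform initialization, optionally shrunk by a scale factor
    bound = scale * math.sqrt(6.0 / (rows + cols))
    return [[random.uniform(-bound, bound) for _ in range(cols)]
            for _ in range(rows)]

def init_attention_block(d_model, num_layers):
    """Initialize one attention block's projections. Only the value and
    output matrices receive the depth-dependent scaling."""
    factor = num_layers ** -0.5  # assumed form of the inverse scaling
    return {
        "W_k": init_matrix(d_model, d_model),
        "W_q": init_matrix(d_model, d_model),
        "W_v": init_matrix(d_model, d_model, scale=factor),
        "W_o": init_matrix(d_model, d_model, scale=factor),
    }

params = init_attention_block(d_model=8, num_layers=16)
```

Shrinking only the value and output projections damps each residual branch's contribution at initialization, which is what lets training proceed without normalization layers or warmup.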

Task execution with non-blocking calls
11132218 · 2021-09-28

Techniques are disclosed relating to task execution with non-blocking calls. A computer system may receive a request to perform an operation comprising a plurality of tasks, each of which corresponds to a node in a graph. A particular one of the plurality of tasks specifies a call to a downstream service. The computer system may maintain a plurality of task queues, each of which is associated with a thread pool. The computer system may enqueue, in an order specified by the graph, the plurality of tasks in one or more of the plurality of task queues. The computer system may process the plurality of tasks. Such processing may include a thread of the pool associated with the particular queue in which the particular task is enqueued performing a non-blocking call to the downstream service. After processing the plurality of tasks, the computer system may return a result of performing the operation.
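The flow above can be sketched with `asyncio`: tasks form a DAG, each task waits for its upstream tasks, and the call to the downstream service is awaited rather than blocking a worker thread. This is an illustrative single-queue reduction of the multi-queue, thread-pool design; the names are assumptions.

```python
import asyncio

async def downstream_service(name):
    # stands in for a non-blocking network call to a downstream service
    await asyncio.sleep(0)
    return f"{name}:ok"

async def run_operation(graph):
    """graph: dict mapping each task to the list of its upstream tasks.
    Runs tasks in the order the graph allows and collects their results."""
    results = {}
    done = {task: asyncio.Event() for task in graph}

    async def run(task):
        for dep in graph[task]:
            await done[dep].wait()          # respect the graph order
        results[task] = await downstream_service(task)  # non-blocking call
        done[task].set()

    await asyncio.gather(*(run(t) for t in graph))
    return results

# "d" depends on "b" and "c", which both depend on "a"
graph = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
result = asyncio.run(run_operation(graph))
```

While one task is awaiting the downstream service, the event loop is free to advance other enqueued tasks, which is the benefit the patent ascribes to non-blocking calls.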

RULE ENGINE OPTIMIZATION VIA PARALLEL EXECUTION
20210201317 · 2021-07-01

A first graph that includes a plurality of containers is accessed. The containers each contain one or more rules that each have corresponding computer code. The containers are configured for sequential execution by a rule engine. The computer code corresponding to the one or more rules in each of the containers is electronically scanned. Based on the electronic scan, an interdependency among the rules is determined. Based on the determined interdependency, a second graph is generated. The second graph includes all of the rules of the containers, but not the containers themselves. At least some of the rules are configured for parallel execution by the rule engine.
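The interdependency analysis can be sketched as follows: treat each rule's scanned code as a set of variables it reads and writes, make a rule depend on any earlier rule whose writes overlap its reads or writes, and group independent rules into stages that the engine may run in parallel. The read/write abstraction is an assumed simplification of the electronic code scan.

```python
def build_stages(rules):
    """rules: ordered dict mapping rule name -> (reads, writes) sets,
    in the original sequential container order. Returns a list of stages;
    rules within a stage have no interdependency and can run in parallel."""
    names = list(rules)
    deps = {}
    for i, name in enumerate(names):
        reads, writes = rules[name]
        # depend on every earlier rule whose writes touch our reads or writes
        deps[name] = {earlier for earlier in names[:i]
                      if rules[earlier][1] & (reads | writes)}
    stages, placed = [], set()
    while len(placed) < len(names):
        stage = [n for n in names if n not in placed and deps[n] <= placed]
        stages.append(stage)
        placed.update(stage)
    return stages

rules = {
    "r1": ({"order"}, {"risk"}),
    "r2": ({"user"}, {"score"}),
    "r3": ({"risk", "score"}, {"decision"}),
}
stages = build_stages(rules)
```

Here `r1` and `r2` touch disjoint data and form one parallel stage, while `r3` reads both of their outputs and must run after them; the containers themselves no longer appear in the resulting graph.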

Self-Optimizing Computation Graphs
20210200575 · 2021-07-01

A method includes receiving code of an application, the code structured as a plurality of instructions in a computation graph that corresponds to operational logic of the application. The method also includes processing the code according to an iterative learning process. The iterative learning process includes determining whether to adjust an exploration rate associated with the iterative learning process based on a state of a computing environment. Additionally, the process includes executing the plurality of instructions of the computation graph according to an execution policy that indicates certain instructions to be executed in parallel. The process also includes determining an execution time for executing the plurality of instructions of the computation graph according to the execution policy, and, based on the execution time and the exploration rate, adjusting the execution policy to reduce the execution time in a subsequent iteration.
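The iterative loop reads like an exploration/exploitation search over execution policies. The epsilon-greedy scheme and decay schedule below are assumptions used for illustration; timing is simulated by a cost function rather than real execution.

```python
import random

def optimize_policy(candidates, time_model, iterations=50,
                    exploration=0.5, seed=0):
    """Iteratively pick an execution policy: with probability `exploration`
    try a random candidate policy, otherwise re-run the best one found.
    Keep whichever policy yields the lower measured execution time."""
    rng = random.Random(seed)
    best = candidates[0]
    best_time = time_model(best)
    for _ in range(iterations):
        policy = rng.choice(candidates) if rng.random() < exploration else best
        t = time_model(policy)          # stands in for a timed execution
        if t < best_time:
            best, best_time = policy, t
        exploration *= 0.95             # decay as the environment stabilizes
    return best, best_time

# candidate policies: degree of parallelism for the graph's instructions;
# the toy cost model trades parallel speedup against coordination overhead
best, t = optimize_policy([1, 2, 4, 8],
                          time_model=lambda p: 100 / p + 5 * p)
```

Adjusting the exploration rate based on the computing environment's state (as the claim describes) would replace the fixed decay with a state-dependent update.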

VISUAL CONFORMANCE CHECKING OF PROCESSES

Systems and methods for determining conformance of a process based on a process model of the process and an event log of an execution of the process are provided. The process model is divided into one or more control regions and reachable nodes are determined for each node in the process model. Conformance of the process is determined by comparing transitions from source activities to destination activities in the event log with the reachable nodes based on the one or more control regions.
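A reduced sketch of the comparison step: compute each node's reachable set in the process model and flag event-log transitions whose destination activity is not reachable from its source. The control-region partitioning is omitted here, so this is only the reachability core of the described method.

```python
def reachable(model):
    """model: dict mapping node -> list of successor nodes.
    Returns, for each node, the set of nodes reachable from it."""
    out = {}
    for start in model:
        seen, stack = set(), list(model[start])
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(model.get(n, []))
        out[start] = seen
    return out

def conformance_violations(model, trace):
    """Compare each source -> destination transition in the event-log trace
    against the reachable nodes of the source activity."""
    reach = reachable(model)
    return [(a, b) for a, b in zip(trace, trace[1:])
            if b not in reach.get(a, set())]

model = {"start": ["check"], "check": ["approve", "reject"],
         "approve": ["end"], "reject": ["end"], "end": []}
violations = conformance_violations(model, ["start", "check", "approve", "end"])
bad = conformance_violations(model, ["start", "approve", "check", "end"])
```

The conforming trace yields no violations, while the second trace's backward jump from `approve` to `check` is flagged because `check` is not reachable from `approve`.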

METHOD AND PROCESS OF CREATING QUALIFIABLE PARAMETER DATA ITEM (PDI) TO DEFINE THE FUNCTION OF A POWER SYSTEM CONTROLLER

A method and system of designing control logic for an avionics system, the method and system including receiving a functional requirement defining desired control logic for a desired control system; designing, by a user in a user interface (UI) of a toolset, the desired control logic comprising an arrangement of predefined library blocks to enable the functional requirement in the desired control system; and generating, by the toolset, a data file representative of the desired control logic to enable the functional requirement during run-time operation in the avionics system.

System and method for multiqueued access to cloud storage

Systems and methods are disclosed herein for multithreaded access to cloud storage. An exemplary method comprises creating a plurality of mount points by mounting, by a hardware processor, a plurality of file systems on a computer system; creating an image file on each of the plurality of mount points; instantiating, for each of the plurality of mount points, a block device on the image file; creating a union virtual block device that creates one or more stripes from each block device; delegating a request for accessing the union virtual block device, received from a client, to one or more of the block devices; and merging a result of the request from each of the one or more block devices and providing the result to the client.
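The striping and merge steps can be modeled in isolation. The stripe size, the round-robin mapping, and the list-backed "devices" below are illustrative assumptions; a real implementation operates on kernel block devices over the mounted file systems.

```python
STRIPE = 4  # blocks per stripe (illustrative)

def locate(block, num_devices):
    """Map a logical block of the union virtual block device to
    (device index, local block offset) using round-robin striping."""
    stripe, offset = divmod(block, STRIPE)
    device = stripe % num_devices
    local = (stripe // num_devices) * STRIPE + offset
    return device, local

def read_range(devices, start, count):
    """Delegate each requested block to its backing device and merge
    the per-device results back into one in-order answer."""
    return [devices[d][i] for d, i in
            (locate(b, len(devices)) for b in range(start, start + count))]

# simulate two per-mount-point block devices and fill them through the map
devices = [[None] * 8 for _ in range(2)]
for b in range(16):
    d, i = locate(b, 2)
    devices[d][i] = b
data = read_range(devices, 2, 6)
```

A request spanning a stripe boundary (here blocks 2 through 7) is split across both devices, and the merge step reassembles the pieces in logical order for the client.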