G06F9/463

Simulation using accelerated models
11443088 · 2022-09-13

Simulation of a circuit design using accelerated models can include determining, using computer hardware, that a design unit of a circuit design specified in a hardware description language is a prime block and determining, using the computer hardware, an output vector corresponding to an output of the prime block. Using the computer hardware, contents of the prime block can be replaced with an accelerated simulation model specified in a high level language, wherein the accelerated simulation model can determine a value for the output of the prime block as a function of values of one or more inputs of the prime block using the output vector. Using the computer hardware, the circuit design can be elaborated and compiled into object code that is executable to simulate the circuit design.
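
As a rough illustration of the idea, suppose the prime block is a small combinational unit whose outputs can be precomputed into an output vector; the accelerated model then reduces simulation of the block to an indexed lookup on input values. The function names and truth-table encoding below are assumptions for illustration, not details from the patent.

```python
# Sketch of an accelerated simulation model: the prime block's behavior is
# captured as an output vector (a truth-table lookup), so simulation replaces
# HDL evaluation of the block with a single indexed read.
# All names here are illustrative, not taken from the patent.

def build_output_vector(logic_fn, num_inputs):
    """Precompute the block's output for every input combination."""
    return [logic_fn(*((i >> b) & 1 for b in range(num_inputs)))
            for i in range(1 << num_inputs)]

def accelerated_model(output_vector, inputs):
    """Evaluate the block by indexing the output vector with the input values."""
    index = 0
    for bit, value in enumerate(inputs):
        index |= (value & 1) << bit
    return output_vector[index]

# Example prime block: a 2-input XOR gate.
xor_vector = build_output_vector(lambda a, b: a ^ b, 2)
```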

ELECTRONIC SYSTEM FOR AUTHORIZATION AND USE OF CROSS-LINKED RESOURCE INSTRUMENTS

Embodiments of the invention are directed to systems, methods, and computer program products for authorization and use of cross-linked resource instruments. As such, the system results in increased flexibility of resource transfers by enabling a user to establish a virtual link between an account and a resource instrument not originally associated with said account. The user may then complete a transaction using a preferred account, despite the preferred resource instrument being lost, damaged, or otherwise ineffective. Furthermore, the system may activate cross-link requests in real-time, allowing a user to rapidly complete a transaction from any location.
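
The core data relationship can be sketched as a registry that records which accounts an instrument has been virtually linked to; authorization then checks that link at transaction time. The class and method names below are hypothetical, chosen only to illustrate the flow.

```python
# Illustrative sketch of cross-linking a resource instrument (e.g. a card)
# to an account it was not originally issued for. The data model and method
# names are assumptions for illustration, not the patented system.

class CrossLinkRegistry:
    def __init__(self):
        self.links = {}  # instrument_id -> set of linked account_ids

    def activate_crosslink(self, instrument_id, account_id):
        """Establish a virtual link in real time."""
        self.links.setdefault(instrument_id, set()).add(account_id)

    def authorize(self, instrument_id, account_id):
        """A transaction is authorized only if the instrument is linked
        to the requested account."""
        return account_id in self.links.get(instrument_id, set())

registry = CrossLinkRegistry()
registry.activate_crosslink("card-123", "acct-A")
```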

Adaptive program task scheduling to blocking and non-blocking queues

Techniques are disclosed relating to scheduling program tasks in a server computer system. An example server computer system is configured to maintain first and second sets of task queues that have different performance characteristics, and to collect performance metrics relating to processing of program tasks from the first and second sets of task queues. Based on the collected performance metrics, the server computer system is further configured to update a scheduling algorithm for assigning program tasks to queues in the first and second sets of task queues. In response to receiving a particular program task associated with a user transaction, the server computer system is also configured to select the first set of task queues for the particular program task, and to assign the particular program task to a particular task queue in the first set of task queues.
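
A minimal sketch of the loop described above: tasks are routed between two queue sets, latency metrics are collected per set, and the routing policy is periodically re-derived from those metrics. The latency-threshold policy and all names here are illustrative assumptions, not the claimed scheduling algorithm.

```python
# Metric-driven queue selection between two queue sets with different
# performance characteristics (e.g. blocking vs non-blocking).
# The policy shown (average-latency threshold, shortest-queue pick) is a
# stand-in for illustration only.
from collections import deque

class AdaptiveScheduler:
    def __init__(self, latency_threshold_ms=10.0):
        self.blocking_queues = [deque() for _ in range(2)]     # first set
        self.nonblocking_queues = [deque() for _ in range(2)]  # second set
        self.latencies = {"blocking": [], "nonblocking": []}
        self.latency_threshold_ms = latency_threshold_ms

    def record_metric(self, queue_set, latency_ms):
        """Collect a processing-latency sample for one queue set."""
        self.latencies[queue_set].append(latency_ms)

    def update_policy(self):
        """Re-derive the scheduling threshold from observed latencies."""
        samples = self.latencies["blocking"]
        if samples:
            self.latency_threshold_ms = sum(samples) / len(samples)

    def assign(self, task, is_user_transaction):
        """User-transaction tasks go to the first (blocking) set;
        within the chosen set, pick the shortest queue."""
        queues = (self.blocking_queues if is_user_transaction
                  else self.nonblocking_queues)
        queue = min(queues, key=len)
        queue.append(task)
        return queue

sched = AdaptiveScheduler()
chosen = sched.assign("txn-task", is_user_transaction=True)
sched.record_metric("blocking", 4.0)
sched.update_policy()
```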

DERIVING COMPONENT STATISTICS FOR A STREAM ENABLED APPLICATION
20220083448 · 2022-03-17

A technique for generating component usage statistics involves associating components with blocks of a stream-enabled application. When the streaming application is executed, block requests may be logged by Block ID in a log. The frequency of component use may be estimated by analyzing the block request log with the block associations.
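
The estimation step can be sketched as a join between the block-request log and the component-to-block associations: each logged Block ID contributes a count to every component mapped to that block. The data shapes below are assumptions for illustration.

```python
# Map components to the blocks they occupy, log block requests by Block ID
# during streaming execution, then estimate component usage by joining the
# log against the associations. Names and data are illustrative only.
from collections import Counter

# Association table: block_id -> components contained in that block.
block_components = {
    1: ["login"],
    2: ["login", "editor"],
    3: ["editor"],
}

def log_request(log, block_id):
    """Record one block request by Block ID."""
    log.append(block_id)

def component_usage(log, associations):
    """Estimate component use frequency from the block request log."""
    usage = Counter()
    for block_id in log:
        for component in associations.get(block_id, []):
            usage[component] += 1
    return usage

request_log = []
for bid in (1, 2, 2, 3):
    log_request(request_log, bid)
```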

Devices and methods for parallelized recursive block decoding
11294674 · 2022-04-05

A decoder for determining an estimate of a vector of information symbols carried by a signal received through a transmission channel represented by a channel matrix is provided. The decoder includes a block division unit configured to divide the vector of information symbols into two or more sub-vectors, each sub-vector being associated with a block level, and two or more processors configured to determine, in parallel, candidate sub-vectors and to store the candidate sub-vectors in a first stack. Each processor is configured to determine at least a candidate sub-vector by applying a symbol estimation algorithm and to store each candidate sub-vector with a decoding metric and the block level associated with the candidate sub-vector. The decoding metric is lower than or equal to a decoding metric threshold. A processor among the two or more processors is configured to determine at least a candidate vector from candidate sub-vectors stored in the first stack, the candidate vector being associated with a cumulated decoding metric, and to update the decoding metric threshold from the cumulated decoding metric.
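
A heavily simplified sketch of that flow: the received vector is split into sub-vectors (block levels), candidate sub-vectors whose metric stays under a threshold are produced in parallel and pushed to a shared stack, and full candidate vectors are then assembled while the threshold is tightened by each cumulated metric. The nearest-symbol estimator over a {-1, +1} alphabet and all names are toy assumptions, not the claimed symbol estimation algorithm.

```python
# Simplified parallel block-decoding sketch (two block levels, toy metric).
from concurrent.futures import ThreadPoolExecutor

ALPHABET = (-1, 1)

def estimate_candidates(sub_vector, block_level, threshold):
    """Toy estimator: try each constant-symbol guess and keep candidates
    whose squared-distance metric is <= the threshold."""
    candidates = []
    for symbol in ALPHABET:
        guess = [symbol] * len(sub_vector)
        metric = sum((g - y) ** 2 for g, y in zip(guess, sub_vector))
        if metric <= threshold:
            candidates.append((metric, block_level, guess))
    return candidates

def decode(received, num_blocks=2, threshold=10.0):
    size = len(received) // num_blocks
    subs = [received[i * size:(i + 1) * size] for i in range(num_blocks)]
    stack = []  # shared first stack of (metric, block_level, sub_vector)
    with ThreadPoolExecutor(max_workers=num_blocks) as pool:
        for result in pool.map(
                lambda args: estimate_candidates(args[1], args[0], threshold),
                enumerate(subs)):
            stack.extend(result)
    # Assemble full candidates and tighten the threshold from each
    # cumulated metric (combination step assumes two block levels here).
    per_level = {}
    for metric, level, guess in stack:
        per_level.setdefault(level, []).append((metric, guess))
    best, best_metric = None, threshold
    for m0, g0 in per_level.get(0, []):
        for m1, g1 in per_level.get(1, []):
            cumulated = m0 + m1
            if cumulated <= best_metric:
                best, best_metric = g0 + g1, cumulated
    return best, best_metric
```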

SHARED DATA FABRIC PROCESSING CLIENT RESET SYSTEM AND METHOD

A processing system that includes a shared data fabric resets a first client processor while operating a second client processor. The first client processor is instructed to stop making requests to one or more devices of the shared data fabric. Status communications are blocked between the first client processor and a memory controller, the second client processor, or both, such that the first client processor enters a temporary offline state. The first client processor is indicated as being non-coherent. Accordingly, when the first client processor is reset, some errors and efficiency losses due to messages sent during or prior to the reset are prevented.
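
The reset sequence can be sketched as a small state machine: the target client is quiesced, its status traffic is blocked so it appears offline, it is marked non-coherent, and only then is it reset while other clients keep running. The states and step names below are illustrative assumptions.

```python
# Sketch of the ordered reset sequence for one client of a shared data
# fabric; a second client is left untouched throughout. Names are
# illustrative, not from the patent.

class FabricClient:
    def __init__(self, name):
        self.name = name
        self.coherent = True
        self.history = []  # ordered record of reset-sequence steps

    def quiesce(self):
        """Stop issuing requests to devices on the data fabric."""
        self.history.append("quiesced")

    def block_status_comms(self):
        """Block status messages so the client enters a temporary
        offline state."""
        self.history.append("offline")

    def mark_non_coherent(self):
        self.coherent = False
        self.history.append("non-coherent")

    def reset(self):
        # Safe now: no in-flight status messages can reach this client.
        self.coherent = True
        self.history.append("reset")

def reset_client(target):
    target.quiesce()
    target.block_status_comms()
    target.mark_non_coherent()
    target.reset()

first, second = FabricClient("client-0"), FabricClient("client-1")
reset_client(first)
```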

BLOCK PROCESSING METHOD, NODE, AND SYSTEM

Embodiments of this disclosure disclose a block processing method, a node, and a system, to improve the speed of block generation and the performance of transaction processing. One method includes: with a first node being a leader node and a second node being a follower node, packaging, by the first node, first transaction information in a transaction queue of the first node into a candidate block, and broadcasting the candidate block through the blockchain; performing, by the first node, verification on the first transaction information in the candidate block to generate a first verification result, and executing the first transaction information in the candidate block to generate a first transaction execution result; broadcasting, by the first node, a first node processing result comprising the first verification result and the first transaction execution result through the blockchain; receiving, by the first node, a second node processing result broadcast by the second node through the blockchain, the second node processing result comprising: a second verification result generated by the second node by performing verification on the first transaction information in the candidate block, and a second transaction execution result generated by executing the first transaction information in the candidate block by the second node; and performing, by the first node, consensus on the candidate block according to the first node processing result and the second node processing result, saving, by the first node, the candidate block in response to a consensus on the candidate block being reached successfully, and saving, by the first node, the first transaction execution result in response to the first transaction information being executed successfully.
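
A condensed sketch of that leader/follower flow: the leader packages queued transactions into a candidate block, both nodes independently verify and execute it, results are exchanged, and the block is saved only when the exchanged processing results agree. Hashing stands in for real execution, and all names are illustrative assumptions.

```python
# Toy leader/follower block processing: package, verify + execute on each
# node, exchange results, save on matching results (consensus).
import hashlib

def process_block(candidate_block):
    """Verify and execute the transactions; return a processing result."""
    verification = all(isinstance(tx, str) and tx for tx in candidate_block)
    execution = hashlib.sha256("|".join(candidate_block).encode()).hexdigest()
    return verification, execution

class Node:
    def __init__(self):
        self.saved_blocks = []
        self.saved_results = []

    def try_consensus(self, candidate_block, own_result, peer_result):
        verified, executed = own_result
        if verified and own_result == peer_result:  # consensus reached
            self.saved_blocks.append(candidate_block)
            self.saved_results.append(executed)
            return True
        return False

leader, follower = Node(), Node()
candidate = ["tx-1", "tx-2"]              # leader packages the queue
leader_result = process_block(candidate)  # leader verifies + executes
follower_result = process_block(candidate)  # follower does the same
reached = leader.try_consensus(candidate, leader_result, follower_result)
```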

Blockchain read/write data processing method, apparatus, and server
11106488 · 2021-08-31

Implementations of the present specification describe a computer-implemented method, medium, and system. In one computer-implemented method, a data reading request sent by a client device is received, where the data reading request includes a code value. When the code value is matched in first code value configuration data, a location value corresponding to the code value is obtained based on the first code value configuration data, where the first code value configuration data includes at least one code value that corresponds to a location value. When the location value satisfies a location value determining condition, block data identified by the location value is obtained. A reading result is sent to the client device based on the block data obtained.
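
The read path described above reduces to two lookups and a check: code value to location value via the configuration data, a validity condition on the location value, then location value to block data. The data shapes and the height-based condition below are assumptions for illustration.

```python
# Sketch of the blockchain read path: code value -> location value ->
# block data, with a location-value check in between. All data and the
# max-height condition are illustrative, not from the patent.

code_value_config = {"order-001": 7, "order-002": 12}   # code -> location
block_store = {7: b"block-at-7", 12: b"block-at-12"}    # location -> data

def handle_read(code_value, max_height=10):
    location = code_value_config.get(code_value)
    if location is None:
        return None                   # code value not in configuration data
    if location > max_height:
        return None                   # location value fails the condition
    return block_store.get(location)  # block data identified by the location
```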

EXTENDED ASYNCHRONOUS DATA MOVER FUNCTIONS COMPATIBILITY INDICATION

A method is provided that is executable by a processor of a computer. Note that the processor is communicatively coupled to a memory of the computer, and the memory stores a response block of a call command. In implementing the method, the processor defines a sub-functions field in the response block of the call command. Further, the processor indicates that a set of functions of a set of instructions is installed and available at an interface based on a corresponding sub-functions flag within the sub-functions field being set. Note that the interface is also executing on the computer and that the set of functions is represented by the corresponding sub-functions flag. The processor further indicates that the set of functions of the set of instructions is not installed based on the corresponding sub-functions flag not being set.
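
A sub-functions field of this kind is naturally modeled as a bitmask: each bit flags whether the corresponding function of the instruction set is installed and available. The bit positions and names below are hypothetical, chosen only to illustrate the set/not-set indication.

```python
# Sub-functions field as a bitmask in a response block: a set flag means
# the corresponding function is installed and available; a clear flag
# means it is not. Bit assignments are illustrative assumptions.

SUBFUNC_MOVE = 1 << 0     # hypothetical sub-function bits
SUBFUNC_COMPARE = 1 << 1
SUBFUNC_CLEAR = 1 << 2

def is_installed(subfunctions_field, flag):
    """A set flag indicates the function is installed and available."""
    return bool(subfunctions_field & flag)

# Response block reporting move and clear installed, compare not installed.
response_block_field = SUBFUNC_MOVE | SUBFUNC_CLEAR
```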