G06F9/3836

Systems and methods for controlling machine operations within a multi-dimensional memory space
11526357 · 2022-12-13 ·

Systems and methods for controlling machine operations are provided. A number of data entries are organized into a stack. Each data entry includes a type, a flag, a length, and a value or pointer entry. For each data entry in the stack, the type of data is determined from the type entry, the presence of an address or value is determined by the respective flag entry, and a length of the address or value is determined from the respective length entry. The data to be utilized or an address for the same at a particular electronic storage area is provided at the respective value or pointer entry, which may be specified by a space definition pushed onto the stack.
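The type/flag/length/value layout described above resembles a tagged (TLV-style) stack entry whose payload is either an immediate value or a pointer into a storage area. A minimal Python sketch of that idea, with all names (`StackEntry`, `resolve`, the flag constants) as illustrative assumptions rather than the patent's actual terms:

```python
from dataclasses import dataclass

# Assumed flag encodings: the abstract only says the flag distinguishes
# an address from a value.
IS_VALUE = 0      # payload holds the data itself
IS_POINTER = 1    # payload holds an address into a storage area

@dataclass
class StackEntry:
    entry_type: int   # type of the data (e.g. integer, string, ...)
    flag: int         # IS_VALUE or IS_POINTER
    length: int       # length in bytes of the value or address
    payload: int      # the value itself, or an address

def resolve(entry: StackEntry, storage: dict) -> int:
    """Return the data for an entry, dereferencing pointers via `storage`."""
    if entry.flag == IS_POINTER:
        return storage[entry.payload]
    return entry.payload

storage = {0x1000: 42}   # a toy "electronic storage area"
stack = [
    StackEntry(entry_type=1, flag=IS_VALUE, length=4, payload=7),
    StackEntry(entry_type=1, flag=IS_POINTER, length=8, payload=0x1000),
]
print([resolve(e, storage) for e in stack])  # [7, 42]
```

In this sketch the storage area dictionary plays the role of the "particular electronic storage area" that a space definition on the stack would select.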

DECOUPLED ACCESS-EXECUTE PROCESSING
20220391214 · 2022-12-08 ·

An apparatus comprises first instruction execution circuitry, second instruction execution circuitry, and a decoupled access buffer. Instructions of an ordered sequence of instructions are issued to one of the first and second instruction execution circuitry for execution in dependence on whether the instruction has a first type label or a second type label. An instruction with the first type label is an access-related instruction which determines at least one characteristic of a load operation to retrieve a data value from a memory address. Instruction execution by the first instruction execution circuitry of instructions having the first type label is prioritised over instruction execution by the second instruction execution circuitry of instructions having the second type label. Data values retrieved from memory as a result of execution of the first type instructions are stored in the decoupled access buffer.
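The decoupled access-execute scheme above can be caricatured in a few lines: instructions are steered by their type label, the access side runs ahead of the execute side, and loaded values pass through a buffer. This is a toy software model under assumed structure, not the hardware mechanism itself:

```python
from collections import deque

# Toy model: "access" instructions are load addresses; "execute"
# instructions are functions consuming buffered values.
memory = {0x10: 5, 0x14: 9}
access_queue, execute_queue = deque(), deque()
decoupled_access_buffer = deque()

program = [
    ("access", 0x10),                 # first type label: load-related
    ("access", 0x14),
    ("execute", lambda a, b: a + b),  # second type label: consumes operands
]

# Issue: steer each instruction by its type label.
for label, op in program:
    (access_queue if label == "access" else execute_queue).append(op)

# The access side is prioritised: it drains fully, filling the buffer.
while access_queue:
    decoupled_access_buffer.append(memory[access_queue.popleft()])

# The execute side then consumes values from the decoupled access buffer.
fn = execute_queue.popleft()
result = fn(decoupled_access_buffer.popleft(), decoupled_access_buffer.popleft())
print(result)  # 14
```

The point of the buffer is visible even here: the execute side never touches memory directly, so it cannot stall on a load.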

Monitoring Apparatus, Device, Method, and Computer Program and Corresponding System

Examples relate to a monitoring apparatus, a monitoring device, a monitoring method, and to a corresponding computer program and system. The monitoring apparatus is configured to obtain a first compute kernel to be monitored and to obtain one or more second compute kernels. The monitoring apparatus is configured to provide instructions, using interface circuitry, to control circuitry of a computing device comprising a plurality of execution units, to instruct the control circuitry to execute the first compute kernel using a first slice of the plurality of execution units and to execute the one or more second compute kernels concurrently with the first compute kernel using one or more second slices of the plurality of execution units, and to instruct the control circuitry to provide information on a change of a status of at least one hardware counter associated with the first slice that is caused by the execution of the first compute kernel. The monitoring apparatus is configured to determine information on the execution of the first compute kernel based on the information on the change of the status of the at least one hardware counter.
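The measurement idea, reading the delta of a per-slice hardware counter while other kernels run concurrently on other slices, can be sketched as follows. Every name here (`Slice`, `monitor`, the counter semantics) is an assumption for illustration:

```python
# Toy model: a "slice" of execution units exposes one hardware counter,
# and running a kernel on it advances that counter by the kernel's cost.
class Slice:
    def __init__(self):
        self.hw_counter = 0          # e.g. cycles or instructions retired
    def run(self, kernel):
        self.hw_counter += kernel()  # toy: kernel returns its "cost"

def monitor(first_kernel, second_kernels, num_slices=4):
    slices = [Slice() for _ in range(num_slices)]
    before = slices[0].hw_counter
    slices[0].run(first_kernel)                   # first slice: monitored kernel
    for s, k in zip(slices[1:], second_kernels):  # other slices: concurrent work
        s.run(k)
    # Only the first slice's counter change is attributed to the first kernel.
    return slices[0].hw_counter - before

delta = monitor(lambda: 100, [lambda: 30, lambda: 50])
print(delta)  # 100
```

Because the counter is associated with the first slice only, the concurrent kernels on the other slices do not pollute the measurement, which is the isolation property the abstract relies on.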

Parallel processing for malware detection

Client devices detect malware based on a ruleset received from a security server. To evaluate a current ruleset, an administrative client device initiates a ruleset evaluation of the malware detection ruleset. A security server partitions stored malware samples into a group of evaluation lists based on an evaluation policy. The security server then creates scanning nodes on an evaluation server according to the evaluation policy. The scanning nodes scan the malware samples of the evaluation lists using the rulesets and associate each malware sample with a rule of the ruleset based on the detections, if any. The security server analyzes the associations and optimizes the ruleset and stored malware samples. The security server sends the optimized ruleset to client devices such that they more efficiently detect malware samples.
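The evaluation flow, partition samples into lists, scan each list on a node, and associate each detected sample with the matching rule, can be outlined in Python. The partitioning policy, rule representation, and function names are assumptions; real rulesets would use a proper rule engine:

```python
# Partition stored samples into evaluation lists (toy round-robin policy).
def partition(samples, num_lists):
    return [samples[i::num_lists] for i in range(num_lists)]

# One "scanning node": associate each sample with the first rule that
# detects it, if any.
def scan_node(samples, ruleset):
    hits = {}
    for sample in samples:
        for rule_id, rule in ruleset.items():
            if rule(sample):
                hits[sample] = rule_id
                break
    return hits

# Toy ruleset: rules are predicates over sample names.
ruleset = {"R1": lambda s: "trojan" in s, "R2": lambda s: "worm" in s}
samples = ["trojan.a", "worm.b", "benign.c", "trojan.d"]

associations = {}
for evaluation_list in partition(samples, num_lists=2):  # one list per node
    associations.update(scan_node(evaluation_list, ruleset))
print(associations)  # {'trojan.a': 'R1', 'benign.c'... omitted if undetected}
```

The resulting sample-to-rule associations are what the security server would analyze to prune redundant rules before pushing the optimized ruleset to clients.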

System and method to dynamically and automatically share resources of coprocessor AI accelerators
11521042 · 2022-12-06 ·

A system and method are provided for dynamically and automatically sharing resources of a coprocessor AI accelerator based on workload changes during training and inference of a plurality of neural networks. The method comprises receiving a plurality of requests from each neural network and from high-performance computing (HPC) applications through a dynamic adaptive scheduler module. The dynamic adaptive scheduler module morphs the received requests into threads, dimensions, and memory sizes. The morphed requests are then received from the dynamic adaptive scheduler module through client units; each neural network application is mapped to at least one of the client units on a graphics processing unit (GPU) host. The morphed requests are then received from the plurality of client units through a plurality of server units, and finally from the plurality of server units through one or more coprocessors.
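The request path, scheduler morphs a request into threads/dimensions/memory, client units forward it to server units, server units dispatch to coprocessors, can be sketched as below. Class names, the morphing formula, and the round-robin dispatch policy are all assumptions for illustration:

```python
# Morph an application request into an execution descriptor
# (threads, dimensions, memory size). The formula is invented.
def morph(request):
    return {"threads": request["batch"] * 32,
            "dims": request["shape"],
            "memory": request["batch"] * request["shape"][0] * 4}

class ServerUnit:
    """Receives morphed requests and dispatches them to coprocessors."""
    def __init__(self, coprocessors):
        self.coprocessors = coprocessors
        self.next = 0
    def dispatch(self, descriptor):
        device = self.coprocessors[self.next % len(self.coprocessors)]
        self.next += 1                 # toy round-robin sharing policy
        return device, descriptor

class ClientUnit:
    """One per neural-network application, mapped onto a GPU host."""
    def __init__(self, server):
        self.server = server
    def submit(self, request):
        return self.server.dispatch(morph(request))

server = ServerUnit(coprocessors=["gpu0", "gpu1"])
clients = [ClientUnit(server), ClientUnit(server)]
device, desc = clients[0].submit({"batch": 2, "shape": (128, 128)})
print(device, desc["threads"])  # gpu0 64
```

A real scheduler would adapt the dispatch policy to workload changes rather than round-robin, but the layered client/server/coprocessor indirection is the structural point.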

Operating system architecture for microkernel generations support

Computer operating systems are provided that allow applications to remain compatible across different OS generations. Example operating systems are designed using an adapted COM (ACOM) component architecture with immutable interfaces and interface specifications within the same generation, allowing freedom in how component code is implemented. This includes: a modular microkernel that itself comprises an interface bus component; the ability for the OS to concurrently run microkernels of various generations; and the ability to create new components by reusing (containing/delegating or aggregating) other binary components. A special marshalling mechanism reduces header size by allowing an executable file to hold a single address pointer to a system interface instance for dynamic function importing, without the need to recompile earlier application executables against the latest versions of the OS system libraries.
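The two architectural ideas, interfaces that are immutable within a generation, and new components built by delegating to existing binary components rather than recompiling them, can be illustrated loosely in Python. The interface and class names are invented for the sketch:

```python
# "Immutable interface": once published for a generation, its methods
# and signatures never change, so callers never need recompilation.
class IStorageV1:
    def read(self, key): raise NotImplementedError

class DiskStorage(IStorageV1):
    """An existing 'binary' component implementing the interface."""
    def __init__(self): self.data = {"a": 1}
    def read(self, key): return self.data[key]

class CachingStorage(IStorageV1):
    """A new component created by containment/delegation: it wraps an
    existing component instead of reimplementing or recompiling it."""
    def __init__(self, inner: IStorageV1):
        self.inner, self.cache = inner, {}
    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.inner.read(key)
        return self.cache[key]

component = CachingStorage(DiskStorage())
print(component.read("a"))  # 1 (fetched from the inner component, then cached)
```

Because both components satisfy the same frozen `IStorageV1` contract, either can stand behind the interface bus without clients knowing which generation implemented it.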

Systems and methods for customization of workflow design

Disclosed here are systems and methods that allow users, upon detecting an error within a running workflow, either to 1) pause the workflow and directly correct its design before resuming it, or 2) pause the workflow, correct the erred action within the workflow, resume the workflow, and afterwards apply the correction to the workflow's design. The disclosure includes functionality that pauses a single workflow, and other relevant workflows, as soon as the error is detected and while it is being corrected. The disclosed systems and methods improve communication technology between the networks and servers of separate parties that rely on and/or depend on the successful execution of other workflows.
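The pause-on-error, correct-in-place, resume-from-the-failed-step loop can be sketched as a small state machine. The `Workflow` class and its fields are assumptions, not the disclosed system's actual design:

```python
class Workflow:
    """Toy workflow: a mutable list of actions plus a program counter,
    so the design can be edited while the run is paused."""
    def __init__(self, actions):
        self.actions = actions   # the workflow "design"
        self.pc = 0              # index of the next action to run
        self.paused = False
        self.log = []

    def run(self, value):
        self.paused = False
        while self.pc < len(self.actions) and not self.paused:
            try:
                value = self.actions[self.pc](value)
                self.log.append(value)
                self.pc += 1
            except Exception:
                self.paused = True   # pause at the erred action
        return value

wf = Workflow([lambda x: x + 1, lambda x: x / 0, lambda x: x * 2])
wf.run(1)                            # fails at the second action and pauses
wf.actions[1] = lambda x: x / 2      # correct the design while paused
result = wf.run(wf.log[-1])          # resume from the corrected action
print(result)  # 2.0
```

Because `pc` is not reset, the resumed run continues from the corrected action rather than replaying the steps that already succeeded.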

System and method for the segmentation of a processor architecture platform solution

Aspects of the present disclosure involve systems, methods, devices, and the like for segmentation of a processor architecture platform. In one embodiment, a system and method are introduced that enable the use of a segmented platform in an extended network. The segmented platform is introduced for processing using standardized plugins, enabling the use of the processing and services available on the segmented network. In another embodiment, processing on the segmented platform can include the integration of microservices for the completion of a transaction.

FINE-GRAINED IMAGE RECOGNITION METHOD AND APPARATUS USING GRAPH STRUCTURE REPRESENTED HIGH-ORDER RELATION DISCOVERY
20220382553 · 2022-12-01 ·

Embodiments of the present disclosure provide a fine-grained image recognition method and apparatus using graph-structure-represented high-order relation discovery. The method includes: inputting an image to be classified into a multi-stage convolutional neural network feature extractor and extracting two layers of network feature maps in the last stage; constructing a hybrid high-order attention module from the network feature maps and forming a high-order feature vector pool from the hybrid high-order attention module; using each vector in the vector pool as a node and utilizing semantic similarity among the high-order features to group the nodes into representative vector nodes; performing global pooling on the representative vector nodes to obtain classification vectors; and obtaining a fine-grained classification result from the classification vectors through a fully connected layer and a classifier.
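The tail of the pipeline, treating pooled feature vectors as nodes, grouping them by semantic similarity, pooling each group into a representative, then classifying, can be sketched with NumPy. The shapes, the greedy cosine-similarity grouping rule, and the random weights are all assumptions; the real method's attention module is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
vector_pool = rng.standard_normal((8, 16))   # 8 high-order feature nodes

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Greedy grouping by similarity: a node joins the first group whose
# anchor it resembles, otherwise it starts a new group.
groups = []
for v in vector_pool:
    for g in groups:
        if cosine(v, g[0]) > 0.2:
            g.append(v)
            break
    else:
        groups.append([v])

# Each group's mean is its representative vector node.
representatives = np.stack([np.mean(g, axis=0) for g in groups])
classification_vector = representatives.mean(axis=0)   # global pooling

W = rng.standard_normal((16, 5))   # stand-in fully connected layer, 5 classes
prediction = int(np.argmax(classification_vector @ W))
print(prediction)
```

In the actual method the grouping is driven by learned semantic similarity among high-order features rather than a fixed threshold, but the node-group-pool-classify flow is the same.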

Multi-Mode Integrated Circuits With Balanced Energy Consumption
20220382360 · 2022-12-01 ·

Aspects of the disclosure include methods, systems, and apparatus, including computer-readable storage media, for multi-mode integrated circuits with balanced energy consumption. A method includes determining, by one or more processors and based at least on a maximum energy threshold for a planned multi-mode system having one or more processing units, a respective number of operations that can be performed per clock cycle by the processing units in each operating mode. The system is configured to consume the same amount of energy per clock cycle in each operating mode, but to perform more operations in operating modes corresponding to operations performed on smaller bit-width operands.
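The balancing rule reduces to simple arithmetic: fix a per-cycle energy budget and derive how many operations each mode may perform so every mode spends the same energy per clock cycle. The budget and per-operation energies below are invented numbers for illustration:

```python
# Assumed per-cycle energy budget (the "maximum energy threshold").
MAX_ENERGY_PER_CYCLE = 64.0   # picojoules, invented

# Assumed per-operation energy, scaling roughly with operand bit-width.
energy_per_op = {"int8": 1.0, "int16": 2.0, "fp32": 4.0}

# Operations per clock cycle allowed in each operating mode so that every
# mode consumes (at most) the same energy per cycle.
ops_per_cycle = {mode: int(MAX_ENERGY_PER_CYCLE // e)
                 for mode, e in energy_per_op.items()}
print(ops_per_cycle)  # {'int8': 64, 'int16': 32, 'fp32': 16}
```

As the abstract states, the smaller-bit-width modes get proportionally more operations per cycle while the energy drawn per cycle stays constant across modes.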