G06F9/30069

Down-sampling of negative signals used in training machine-learned model

In an example embodiment, skip logic using downsampling is applied to negative signals in a training data set fed to a machine-learning algorithm to train a machine-learned model. By downsampling the negatively labeled pieces of training data, the technical problem of biasing the machine-learned model towards negative cases is overcome.
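The downsampling described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the function name, the 0/1 labels, and the fixed keep ratio are all assumptions made for the example.

```python
import random

def downsample_negatives(examples, keep_ratio, seed=0):
    """Keep every positive example but only a random fraction of the
    negatives, so negative labels do not dominate training.
    `examples` is a list of (features, label) pairs, label 1 = positive."""
    rng = random.Random(seed)
    kept = []
    for features, label in examples:
        if label == 1 or rng.random() < keep_ratio:
            kept.append((features, label))
    return kept

# 10 positives among 1000 examples: after downsampling the negatives,
# the class balance is far less skewed toward the negative label.
data = [([i], 1 if i % 100 == 0 else 0) for i in range(1000)]
balanced = downsample_negatives(data, keep_ratio=0.1)
```

Seeding the generator keeps the skip decisions reproducible across training runs, which matters when comparing models trained on the same downsampled set.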

NPU IMPLEMENTED FOR FUSION-ARTIFICIAL NEURAL NETWORK TO PROCESS HETEROGENEOUS DATA PROVIDED BY HETEROGENEOUS SENSORS
20230347934 · 2023-11-02 ·

A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.
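The claimed scheduling behavior — every operation executed in the fixed order given by the compiler's data locality information, with intermediates held in on-chip memory — can be sketched in software. The dictionary-based "machine code", the `conv`/`relu` operations, and the sensor names are hypothetical stand-ins for the hardware structures in the claim.

```python
def run_fusion_ann(machine_code, sensor_inputs, ops):
    """Execute all operations of a fused network in the predetermined
    sequence from the data-locality information; a dict stands in for
    the on-chip memory holding operation data."""
    on_chip = dict(sensor_inputs)            # heterogeneous sensor data
    for step in machine_code["locality"]:    # predetermined sequence
        op = ops[step["op"]]                 # PE or SFU operation
        args = [on_chip[name] for name in step["in"]]
        on_chip[step["out"]] = op(*args)     # result stays on-chip
    return on_chip

# Hypothetical two-sensor fusion: an elementwise "convolution-like"
# combine on the processing element, then a ReLU on the SFU.
ops = {
    "conv": lambda a, b: [x + y for x, y in zip(a, b)],
    "relu": lambda a: [max(0, x) for x in a],
}
code = {"locality": [
    {"op": "conv", "in": ["cam", "lidar"], "out": "fused"},
    {"op": "relu", "in": ["fused"], "out": "act"},
]}
mem = run_fusion_ann(code, {"cam": [1, -3], "lidar": [0, 1]}, ops)
```

The point of the fixed sequence is that the scheduler never needs to discover dependencies at run time; the compiler has already encoded them in the locality information.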

BRANCH PREDICTION METHOD, BRANCH PREDICTION APPARATUS, PROCESSOR, MEDIUM, AND DEVICE
20230350683 · 2023-11-02 ·

A branch prediction method includes obtaining an instruction block containing an instruction, performing detection on the instruction block according to branch instruction information stored in a branch target buffer of a branch predictor of a processor, and in response to detecting that the instruction is a branch instruction, detecting a type of the branch instruction. The method further includes, in response to the type of the branch instruction being a type other than a target type, searching for a predicted jump address of the branch instruction in the branch target buffer, and, in response to the type of the branch instruction being the target type, searching for the predicted jump address of the branch instruction in other address areas of the branch predictor. The target type includes at least one of a function call instruction type, a function return instruction type, or a loop instruction type.
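The type-dependent lookup in the claim can be sketched as a small dispatch: ordinary branches consult the branch target buffer, while the target types (call, return, loop) consult their own predictor areas. The table names and the use of a return-address stack for returns are assumptions for illustration, not details from the claim.

```python
TARGET_TYPES = {"call", "return", "loop"}

def predict_jump(instr, btb, other_areas):
    """Return the predicted jump address for a detected branch.
    `btb` maps pc -> target; `other_areas` maps each target type to
    its own predictor area (dict, or a list acting as a return stack)."""
    kind = instr["type"]
    if kind not in TARGET_TYPES:
        return btb.get(instr["pc"])          # ordinary branch -> BTB
    area = other_areas[kind]
    if kind == "return":
        return area[-1] if area else None    # top of return-address stack
    return area.get(instr["pc"])             # call / loop tables

btb = {0x10: 0x40}
other = {"call": {0x20: 0x100}, "return": [0x24], "loop": {0x30: 0x30}}
```

Routing returns to a dedicated stack rather than the BTB is the usual motivation for such a split: a single return instruction can have many dynamic targets, which a pc-indexed buffer predicts poorly.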

METHOD TO OPTIMIZE STORAGE PARTITION REDISCOVERY

Disclosed is a storage management system comprising: sending, by a user device manager running at a user space of an operating system, a first request for partition table data to a block device; receiving, by the user device manager, first partition data of the block device; sending, by the user device manager, a second request for partition data of the block device to a kernel of the operating system; receiving, by the user device manager, second partition data from the kernel, wherein the second partition data is associated with the block device and cached by the kernel; determining whether the first partition data and the second partition data are identical; and in response to determining that the first partition data is different from the second partition data, performing a device discovery operation on the block device.
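The optimization amounts to a compare-before-rediscover check: read the partition table from the device, compare it with the kernel's cached copy, and run the expensive discovery only on a mismatch. A minimal sketch, with the reader and discovery callbacks passed in as hypothetical stand-ins for the user device manager's actual I/O paths:

```python
def maybe_rediscover(block_dev, read_partition_table, kernel_cache, discover):
    """Trigger device discovery for `block_dev` only when the partition
    data read from the device differs from the kernel-cached copy."""
    first = read_partition_table(block_dev)   # first request: the device
    second = kernel_cache[block_dev]          # second request: the kernel
    if first != second:
        discover(block_dev)                   # partitions changed
        return True
    return False                              # identical: skip discovery

calls = []
cache = {"sda": ["p1", "p2"]}
read = lambda dev: ["p1", "p2"]
maybe_rediscover("sda", read, cache, calls.append)   # no change, no discovery
```

Skipping discovery when the two copies are identical is the entire saving: rediscovery typically tears down and re-creates device nodes, which is far costlier than one table read and comparison.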

NPU implemented for artificial neural networks to process fusion of heterogeneous data received from heterogeneous sensors
11511772 · 2022-11-29 ·

A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.

Systems and methods to skip inconsequential matrix operations

Disclosed embodiments relate to systems and methods to skip inconsequential matrix operations. In one example, a processor includes decode circuitry to decode an instruction having fields to specify an opcode and locations of first source, second source, and destination matrices, the opcode indicating that the processor is to multiply each element at row M and column K of the first source matrix with a corresponding element at row K and column N of the second source matrix, and accumulate a resulting product with previous contents of a corresponding element at row M and column N of the destination matrix, the processor to skip multiplications that, based on detected values of corresponding multiplicands, would generate inconsequential results; scheduling circuitry to schedule execution of the instruction; and execution circuitry to execute the instruction as per the opcode.
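The skip condition in the claim is easy to demonstrate in software: a product whose multiplicand is zero cannot change the accumulator, so the multiply-accumulate can be elided. A sketch of the M·K·N loop with zero-operand skipping (the skip counter is added here only to make the elision visible):

```python
def matmul_skip_zeros(a, b, c):
    """C[m][n] += sum_k A[m][k] * B[k][n], skipping every multiply whose
    multiplicand is zero, since its product is inconsequential."""
    skipped = 0
    for m in range(len(a)):
        for k in range(len(b)):
            if a[m][k] == 0:              # whole row of products is zero
                skipped += len(b[0])
                continue
            for n in range(len(b[0])):
                if b[k][n] == 0:          # single zero multiplicand
                    skipped += 1
                    continue
                c[m][n] += a[m][k] * b[k][n]
    return c, skipped

out, skipped = matmul_skip_zeros([[0, 2], [1, 0]],
                                 [[3, 0], [4, 5]],
                                 [[0, 0], [0, 0]])
```

Hoisting the `a[m][k] == 0` test outside the inner loop mirrors the hardware intuition: detecting one zero operand early saves an entire row of multiply-accumulates.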

METHOD PERFORMED BY A MICROCONTROLLER FOR MANAGING A NOP INSTRUCTION AND CORRESPONDING MICROCONTROLLER
20220253314 · 2022-08-11 ·

Disclosed herein is a method for managing NOP instructions in a microcontroller, the method comprising duplicating all jump instructions causing a NOP instruction to form a new instruction set; inserting an internal NOP instruction into each of the jump instructions; when a jump instruction is executed, executing a subsequent instruction of the new instruction set; and executing the internal NOP instruction when an execution of the subsequent instruction is skipped.

METHOD AND APPARATUS OF OPERATING A NEURAL NETWORK

Disclosed is a method and apparatus of operating a neural network. The neural network operation method includes receiving data for the neural network operation, verifying whether competition occurs between a first data traversal path corresponding to a first operation device and a second data traversal path corresponding to a second operation device, determining first operand data and second operand data from among the data using a result of the verifying and a priority between the first data traversal path and the second data traversal path, and performing the neural network operation based on the first operand data and the second operand data.

HASHING FOR DEDUPLICATION THROUGH SKIPPING SELECTED DATA
20220245104 · 2022-08-04 ·

A system calculates a fingerprint across a data set by identifying a data set to hash, the data set comprising a set of data blocks; identifying data within the data set to skip; generating, by a hash engine, a hash for each data block in the set of data blocks except for the data within the data set to skip; and compressing the data.
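The hash-with-skip step can be shown directly. This sketch uses SHA-256 as a stand-in for whatever fingerprint the hash engine computes, and represents the data to skip as a set of block indices; both are assumptions for the example.

```python
import hashlib

def fingerprint_blocks(blocks, skip_indices):
    """Generate a fingerprint for every data block except those marked
    to be skipped (e.g. data excluded from deduplication)."""
    hashes = {}
    for i, block in enumerate(blocks):
        if i in skip_indices:
            continue                       # selected data is not hashed
        hashes[i] = hashlib.sha256(block).hexdigest()
    return hashes

blocks = [b"aaa", b"bbb", b"aaa"]
fps = fingerprint_blocks(blocks, skip_indices={1})
```

Blocks with equal content produce equal fingerprints, which is what lets a deduplication layer detect and coalesce them without comparing the raw bytes.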

Skip-over offset branch prediction

A system includes a branch predictor and a processing circuit configured to perform a plurality of operations including storing a skip-over offset value in the branch predictor. The skip-over offset value defines a number of search addresses of the branch predictor to be skipped. The operations further include searching the branch predictor for a branch prediction. Responsive to finding the branch prediction, the searching of the branch predictor is re-indexed based on the skip-over offset value associated with the branch prediction.
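The re-indexing behavior can be sketched as a linear search that, after each hit, jumps its search address forward by the stored skip-over offset — addresses the predictor knows contain no branches are never probed. The dict-based predictor and the `sso` field name are illustrative assumptions.

```python
def search_with_skip(predictor, start, end):
    """Search predictor entries from `start` to `end`.  On a hit,
    re-index the search past the entry's skip-over offset (sso),
    the number of search addresses known to hold no branches."""
    addr, hits, probed = start, [], 0
    while addr < end:
        probed += 1
        entry = predictor.get(addr)
        if entry is not None:
            hits.append((addr, entry["target"]))
            addr += 1 + entry["sso"]       # skip-over offset re-index
        else:
            addr += 1
    return hits, probed

pred = {2: {"target": 40, "sso": 3}, 8: {"target": 50, "sso": 0}}
hits, probed = search_with_skip(pred, 0, 10)
```

With the offset of 3 stored at address 2, the search probes 7 addresses instead of 10; the saving grows with the size of the branch-free regions the offsets describe.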