G06F9/30079

CONTROLLER WITH CACHING AND NON-CACHING MODES

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the first cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode, the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
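
As a rough sketch only (not the claimed circuit), the two modes can be modeled in a few lines of Python; the class and constant names below are invented for illustration:

```python
# A minimal sketch of the two modes, assuming dict-backed memories; the names
# CacheController, CFG_NONCACHING, and CFG_CACHING do not come from the abstract.
CFG_NONCACHING = 0  # "first value": bypass the first memory
CFG_CACHING = 1     # "second value": fill the first memory on a miss

class CacheController:
    def __init__(self, second_memory):
        self.config_register = CFG_CACHING
        self.first_memory = {}            # models the cache (first memory)
        self.second_memory = second_memory

    def read(self, address):
        if self.config_register == CFG_NONCACHING:
            # Non-caching mode: forward the request, do not fill the cache.
            return self.second_memory[address]
        # Caching mode: serve from the first memory, filling it on a miss.
        if address not in self.first_memory:
            self.first_memory[address] = self.second_memory[address]
        return self.first_memory[address]

ctrl = CacheController(second_memory={0x100: 42})
ctrl.config_register = CFG_NONCACHING
assert ctrl.read(0x100) == 42 and 0x100 not in ctrl.first_memory
ctrl.config_register = CFG_CACHING
assert ctrl.read(0x100) == 42 and 0x100 in ctrl.first_memory
```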

METADATA PREDICTOR

Embodiments for a metadata predictor are disclosed. An index pipeline generates indices in an index buffer, where the indices are used for reading out a memory device. A prediction cache is populated with metadata of instructions read from the memory device. A prediction pipeline generates a prediction using the metadata of the instructions from the prediction cache, the populating of the prediction cache with the metadata of the instructions being performed asynchronously to the operation of the prediction pipeline.
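
A toy model of the decoupled pipelines, assuming a dict stands in for the memory device and a deque for the index buffer; all names are illustrative:

```python
# Sketch of the decoupled index/prediction pipelines; nothing here is taken
# from the claims, and the "prediction" is a trivial placeholder.
from collections import deque

memory_device = {addr: f"metadata@{addr:#x}" for addr in range(0, 64, 16)}

index_buffer = deque()
prediction_cache = {}

def index_pipeline(addresses):
    # Stage 1: generate indices into the index buffer.
    for addr in addresses:
        index_buffer.append(addr)

def populate_prediction_cache():
    # Runs asynchronously to the prediction pipeline: drains the index
    # buffer and reads instruction metadata out of the memory device.
    while index_buffer:
        addr = index_buffer.popleft()
        prediction_cache[addr] = memory_device[addr]

def prediction_pipeline(addr):
    # Stage 2: form a prediction from cached metadata.
    meta = prediction_cache.get(addr)
    return ("predict-taken", meta) if meta else ("no-prediction", None)

index_pipeline([0x00, 0x10])
populate_prediction_cache()       # may run ahead of / behind predictions
print(prediction_pipeline(0x10))  # ('predict-taken', 'metadata@0x10')
```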

SYSTEM AND METHOD FOR HALTING PROCESSING CORES IN A MULTICORE SYSTEM

A multicore computing device includes a memory and a processor coupled to the memory. The processor includes plural cores and a multiple input multiple output (MIMO) block coupled to the cores. The MIMO block receives a halt request from a first core of the cores, transmits a core-halt request to one or more other cores other than the first core, to halt execution of the one or more other cores, and permits the first core to lock with a shared resource.
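
The halt flow might be sketched as follows, with threading primitives standing in for hardware signals; MimoBlock and its members are hypothetical names, not the patent's interfaces:

```python
# Illustrative-only sketch: a central block receives a halt request from one
# core, broadcasts core-halt requests to the others, then lets the requester
# lock the shared resource.
import threading

class MimoBlock:
    def __init__(self, core_ids):
        self.halted = {c: threading.Event() for c in core_ids}
        self.shared_resource_lock = threading.Lock()

    def request_halt(self, requester):
        # Broadcast a core-halt request to every core except the requester.
        for core, ev in self.halted.items():
            if core != requester:
                ev.set()          # models delivering the core-halt request
        # Permit the requesting core to lock the shared resource.
        self.shared_resource_lock.acquire()
        return self.shared_resource_lock

mimo = MimoBlock(core_ids=[0, 1, 2, 3])
lock = mimo.request_halt(requester=0)
assert mimo.halted[1].is_set() and not mimo.halted[0].is_set()
lock.release()  # the requester releases the shared resource when done
```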

INSERTING A PROXY READ INSTRUCTION IN AN INSTRUCTION PIPELINE IN A PROCESSOR
20220365780 · 2022-11-17

Inserting a proxy read instruction in an instruction pipeline in a processor is disclosed. A scheduler circuit is configured to recognize when a produced value generated by execution of a producer instruction in the instruction pipeline will not be available through a data forwarding path to be consumed for processing of a subsequent consumer instruction. In this case, the scheduler circuit is configured to insert a proxy read instruction in the instruction pipeline to cause execution of an operation that generates the same produced value as was generated by the previous execution of the producer instruction. Thus, the produced value remains available in the instruction pipeline and can again be forwarded through a data forwarding path to an earlier stage of the instruction pipeline to be consumed by the consumer instruction, which may avoid a pipeline stall.
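
One way to picture the insertion decision is a scheduler with a fixed-size forwarding window, as in this invented sketch (the window size, instruction tuples, and proxy_read marker are all assumptions):

```python
# Toy scheduler for the proxy-read idea: produced values are forwardable for
# a fixed number of pipeline slots, after which a proxy read re-generates them.
FORWARD_WINDOW = 2

def schedule(instructions):
    """instructions: list of (name, produces, consumes_from_slot_or_None)."""
    pipeline = []
    for name, produces, consumes in instructions:
        if consumes is not None:
            distance = len(pipeline) - consumes
            if distance > FORWARD_WINDOW:
                # The produced value has aged out of the forwarding path:
                # insert a proxy read that re-generates the same value so the
                # consumer can take it off the data forwarding path.
                pipeline.append(("proxy_read", pipeline[consumes][1]))
        pipeline.append((name, produces))
    return pipeline

prog = [
    ("mul", "r1", None),   # producer writes r1 in slot 0
    ("nop", None, None),
    ("nop", None, None),
    ("add", "r2", 0),      # consumes slot 0's value: too far to forward
]
for slot in schedule(prog):
    print(slot)            # a proxy_read for r1 appears just before the add
```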

Processor and pipeline processing method for processing multiple threads including wait instruction processing

A pipeline processing unit includes a fetch unit that fetches the instruction for the thread having the execution right, a decoding unit that decodes the instruction fetched by the fetch unit, and a computation execution unit that executes the instruction decoded by the decoding unit. When the WAIT instruction for the thread having the execution right is executed, an instruction holding unit holds instruction fetch information on a processing target instruction to be processed immediately after the WAIT instruction. An execution target thread selection unit selects a thread to be executed based on a wait command and, in response to the wait state started by execution of the WAIT instruction being canceled, processes the processing target instruction from its decoding based on the instruction fetch information held in the instruction holding unit.
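
A simplified two-thread model of the WAIT handling, where restoring a saved program counter stands in for resuming the held instruction from its decode stage; every name here is invented:

```python
# Sketch only: WAIT stashes fetch information for the next instruction so
# that, when the wait state is canceled, that instruction resumes without a
# fresh fetch (modeled by restoring the saved program counter).
class Pipeline:
    def __init__(self, programs):
        self.programs = programs            # thread id -> list of instrs
        self.pc = {t: 0 for t in programs}  # per-thread program counters
        self.holding_unit = {}              # thread id -> held fetch info
        self.waiting = set()

    def step(self, thread):
        if thread in self.waiting:
            return f"t{thread}: waiting"
        instr = self.programs[thread][self.pc[thread]]
        self.pc[thread] += 1
        if instr == "WAIT":
            # Hold fetch info for the instruction right after the WAIT.
            self.holding_unit[thread] = self.pc[thread]
            self.waiting.add(thread)
            return f"t{thread}: entered wait"
        return f"t{thread}: decode+execute {instr}"

    def wake(self, thread):
        # Wait state canceled: restore the held fetch info and resume the
        # processing target instruction.
        self.waiting.discard(thread)
        self.pc[thread] = self.holding_unit.pop(thread)
        return self.step(thread)

p = Pipeline({0: ["add", "WAIT", "sub"], 1: ["mul"]})
print(p.step(0))  # t0: decode+execute add
print(p.step(0))  # t0: entered wait
print(p.step(1))  # t1: decode+execute mul
print(p.wake(0))  # t0: decode+execute sub
```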

Heterogeneous microprocessor for energy-scalable sensor inference using genetic programming

A heterogeneous microprocessor configured to perform classification on an input signal. The heterogeneous microprocessor includes a die with a central processing unit (CPU), a programmable feature-extraction accelerator (FEA), and a classifier. The FEA is configured to perform feature extraction on the input signal to generate feature data. The classifier is configured to perform classification on the feature data, and the CPU is configured to provide processing after classification. The FEA may be configured with a plurality of Gene-Computation (GC) Cores. The FEA may be configured for genetic programming with gene depth constraints, gene number constraints, and base function constraints. The classifier may be a support-vector machine accelerator (SVMA). The SVMA may include training data based on error-affected feature data. The heterogeneous microprocessor may also include an automatic-programming & classifier training module, which may be configured to receive input-output feature data and training labels and to generate gene code and a classifier model.
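
The three constraints on the genetic program can be illustrated with a toy expression-tree gene; the base-function set, depth limit, and gene count below are arbitrary stand-ins, not values from the patent:

```python
# Illustrative constrained genetic programming for feature extraction: genes
# are expression trees limited by a base-function set, a maximum depth, and a
# maximum gene count. All constraint values are invented.
import random

BASE_FUNCTIONS = {"add": lambda a, b: a + b,
                  "sub": lambda a, b: a - b,
                  "abs": lambda a, _: abs(a)}
MAX_GENE_DEPTH = 3
MAX_GENES = 4

def random_gene(depth=0, rng=random.Random(0)):
    # Leaf: an index into the input-signal window.
    if depth >= MAX_GENE_DEPTH or rng.random() < 0.3:
        return ("x", rng.randrange(4))
    fn = rng.choice(sorted(BASE_FUNCTIONS))     # base-function constraint
    return (fn, random_gene(depth + 1, rng), random_gene(depth + 1, rng))

def eval_gene(gene, window):
    # One "GC core" evaluating one gene over a window of the input signal.
    if gene[0] == "x":
        return window[gene[1]]
    fn, left, right = gene
    return BASE_FUNCTIONS[fn](eval_gene(left, window), eval_gene(right, window))

genes = [random_gene() for _ in range(MAX_GENES)]   # gene-number constraint
window = [0.5, -1.0, 2.0, 0.25]                     # input-signal samples
features = [eval_gene(g, window) for g in genes]    # feature data -> classifier
print(features)
```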

NPU IMPLEMENTED FOR ARTIFICIAL NEURAL NETWORKS TO PROCESS FUSION OF HETEROGENEOUS DATA RECEIVED FROM HETEROGENEOUS SENSORS
20230045552 · 2023-02-09

A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.
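
A minimal sketch of the scheduler following compiler-supplied data locality information, with plain functions standing in for the processing element and SFU; the tuple format of the locality list is an assumption:

```python
# Sketch: a compiler-supplied sequence fixes the order in which convolution
# and special function operations read and write on-chip memory.
def convolution(x):          # stands in for the processing element (PE)
    return [v * 2 for v in x]

def special_function(x):     # stands in for the SFU (e.g. an activation)
    return [max(v, 0) for v in x]

OPS = {"conv": convolution, "sfu": special_function}

def run_fusion_ann(data_locality_info, sensor_inputs, on_chip_memory):
    # The scheduler processes every operation in the predetermined sequence
    # given by the machine code's data locality information.
    for op, src, dst in data_locality_info:
        operand = on_chip_memory.get(src, sensor_inputs.get(src))
        on_chip_memory[dst] = OPS[op](operand)
    return on_chip_memory[dst]

locality = [("conv", "camera", "buf0"),   # fuse two heterogeneous sensors
            ("conv", "radar", "buf1"),
            ("sfu", "buf0", "buf2")]
out = run_fusion_ann(locality, {"camera": [1, -2], "radar": [3]}, {})
print(out)  # [2, 0]: conv then SFU, in the compiler-fixed order
```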

DISTRIBUTED USER MODE PROCESSING
20230094639 · 2023-03-30

A first processing unit, such as a graphics processing unit (GPU), includes pipelines that execute commands and a scheduler to schedule one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
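
The CPU-free command chaining might look like the following toy model, in which the "second commands" program an invented DmaEngine with the first commands' results:

```python
# Hypothetical sketch: once the first command(s) finish, the GPU-side
# scheduler launches the follow-on command(s), which program a DMA engine to
# write the results back, all without a CPU round-trip.
class DmaEngine:
    def __init__(self, memory):
        self.memory = memory
    def write_block(self, dst, block):
        self.memory[dst] = block      # models a DMA write to memory

class GpuScheduler:
    def __init__(self, dma):
        self.dma = dma
    def submit(self, first_commands, second_commands):
        # First commands: received from the CPU's user mode driver.
        results = [cmd() for cmd in first_commands]
        # Second commands run on completion, with no CPU notification;
        # here they program the DMA engine with the produced results.
        for cmd in second_commands:
            cmd(self.dma, results)

memory = {}
gpu = GpuScheduler(DmaEngine(memory))
gpu.submit(
    first_commands=[lambda: sum(range(10))],
    second_commands=[lambda dma, r: dma.write_block(0x2000, r)],
)
print(memory)  # {8192: [45]} - results landed without CPU involvement
```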

NPU IMPLEMENTED FOR ARTIFICIAL NEURAL NETWORKS TO PROCESS FUSION OF HETEROGENEOUS DATA RECEIVED FROM HETEROGENEOUS SENSORS
20220348229 · 2022-11-03

A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.

Efficient streaming based lazily-evaluated machine learning framework

Methods, systems, and computer program products are herein provided for lazy evaluation of input data by a machine learning (ML) framework. An ML pipeline receives input data and compiles a chain of operators into a chain of dataviews configured for lazy evaluation of the input data. Each dataview in the chain represents a computation over data as a non-materialized view of the data. The ML pipeline receives a request for column data and selects a chain of delegates comprising one or more delegates for one or more dataviews in the chain to fulfill the request. The ML pipeline processes the input data with the selected chain of delegates. The ML pipeline performs delegate chaining on a dataview. A feature value for a feature column of the dataview is determined based on the delegate chaining and provided to an ML algorithm to predict column data.
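
A compact sketch of delegate chaining over non-materialized dataviews; DataView and get_getter are illustrative names, not the framework's actual API:

```python
# Sketch of lazy dataview/delegate chaining: each dataview composes a per-row
# delegate over its source, so no intermediate column is ever materialized.
class DataView:
    """A non-materialized view: a computation over data, applied on demand."""
    def __init__(self, source, transform):
        self.source, self.transform = source, transform

    def get_getter(self, column):
        # Delegate chaining: wrap the upstream delegate for this column.
        upstream = (self.source.get_getter(column)
                    if isinstance(self.source, DataView)
                    else lambda row: self.source[row][column])
        return lambda row: self.transform(upstream(row))

raw = [{"len": 3.0}, {"len": 9.0}]                    # input data
pipeline = DataView(DataView(raw, lambda v: v * 10),  # scale operator
                    lambda v: min(v, 50.0))           # clip operator

getter = pipeline.get_getter("len")              # select the chain of delegates
features = [getter(i) for i in range(len(raw))]  # evaluated only on request
print(features)  # [30.0, 50.0] - computed lazily, row by row
```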