G06F15/80

ACCESSING TOPOLOGICAL MAPPING OF CORES

A method, computer program product, and system include a processor(s) issuing an instruction that includes processing core information, which comprises locations of processing cores of the computing system (logical cores and/or physical cores), and an operator selection. The processor(s) sets security parameters for the information returned by the instruction, which is topological information for mapping the logical cores to the physical cores. The processor(s) obtains the topological information and utilizes an operating system to map the logical cores to the physical cores.
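
As a rough illustration of how an operating system might consume such topology data, the C sketch below uses a hypothetical query_core_topology() routine as a stand-in for the topology-reporting instruction; the interleaved placement it reports is invented for the example.

```c
#include <stdio.h>
#include <stddef.h>

#define NUM_CORES 8

typedef struct {
    int logical_id;   /* core number as seen by software     */
    int physical_id;  /* core location reported by the query */
} core_map_entry;

/* Hypothetical stand-in for the topology-reporting instruction:
 * here it simply reports an interleaved placement across two
 * physical clusters of NUM_CORES / 2 cores each. */
static size_t query_core_topology(core_map_entry *out, size_t max) {
    size_t n = (max < NUM_CORES) ? max : NUM_CORES;
    for (size_t i = 0; i < n; i++) {
        out[i].logical_id  = (int)i;
        out[i].physical_id = (int)((i % 2) * (NUM_CORES / 2) + i / 2);
    }
    return n;
}

int main(void) {
    core_map_entry map[NUM_CORES];
    size_t n = query_core_topology(map, NUM_CORES);
    for (size_t i = 0; i < n; i++)   /* the OS-side mapping step */
        printf("logical core %d -> physical core %d\n",
               map[i].logical_id, map[i].physical_id);
    return 0;
}
```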

Neural network processor based on application specific synthesis specialization parameters

Neural network processors that have been customized based on application-specific synthesis specialization parameters, and related methods, are described. Certain example neural network processors and methods described in the present disclosure expose several major synthesis specialization parameters that can be used for specializing a microarchitecture instance of a neural network processor to specific neural network models, including: (1) aligning the native vector dimension to the parameters of the model to minimize padding and waste during model evaluation, (2) increasing lane widths to drive up intra-row-level parallelism, or (3) increasing matrix multiply tiles to exploit sub-matrix parallelism for large neural network models.
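
The padding trade-off behind parameter (1) can be made concrete with a short calculation; the sketch below compares illustrative native vector dimensions against an arbitrary model dimension (the numbers are not from the disclosure).

```c
#include <stdio.h>

/* Round a model dimension up to the next multiple of the native
 * vector dimension; the difference is padded (wasted) work. */
static int padded(int model_dim, int native_dim) {
    return ((model_dim + native_dim - 1) / native_dim) * native_dim;
}

int main(void) {
    int model_dim = 300;              /* e.g. an embedding width  */
    int vecs[] = { 64, 128, 256 };    /* candidate native widths  */
    for (int i = 0; i < 3; i++) {
        int p = padded(model_dim, vecs[i]);
        printf("V=%3d: pad %d -> %d, waste %.1f%%\n",
               vecs[i], model_dim, p,
               100.0 * (p - model_dim) / p);
    }
    return 0;
}
```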

Built-in self-test for a programmable vision accelerator of a system on a chip

In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce the programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller when performing dynamic region-based data movement operations.
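
As one concrete illustration, the following sketch emulates the transposed load with a stride parameter in plain C: a column of a row-major matrix is gathered into a contiguous buffer standing in for a vector register. The function name and layout are assumptions for the example.

```c
#include <stdio.h>

#define ROWS 4
#define COLS 4

/* Gather 'count' elements starting at 'base', stepping by 'stride'
 * elements, emulating a strided (transposing) vector load. */
static void transposed_load(const int *base, int stride,
                            int count, int *vreg) {
    for (int i = 0; i < count; i++)
        vreg[i] = base[i * stride];
}

int main(void) {
    int m[ROWS * COLS], col[ROWS];
    for (int i = 0; i < ROWS * COLS; i++)
        m[i] = i;                         /* row-major fill       */
    transposed_load(&m[1], COLS, ROWS, col);  /* gather column 1  */
    for (int i = 0; i < ROWS; i++)
        printf("col[%d] = %d\n", i, col[i]);
    return 0;
}
```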

PROCESSOR AND CONTROL METHOD OF PROCESSOR

A processor includes: an address generating unit that, when an instruction decoded by a decoding unit is an instruction to execute arithmetic processing in parallel a plurality of times on a plurality of operand sets, each including a plurality of operands that are objects of the arithmetic processing, generates an address set corresponding to each of the operand sets of the arithmetic processing for each time, based on a certain address displacement with respect to the plurality of operands included in each of the operand sets; a plurality of instruction queues that hold the generated address sets corresponding to the respective operand sets, in correspondence with respective processing units; and a plurality of processing units that perform the arithmetic processing in parallel on the operand sets obtained based on the respective address sets output by the plurality of instruction queues.
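
A minimal sketch of the address-generation scheme, assuming two source operands and one destination per operand set and a fixed displacement between successive iterations (the base addresses and displacement below are illustrative):

```c
#include <stdio.h>
#include <stdint.h>

#define OPERANDS 3   /* e.g. two sources and one destination      */
#define TIMES    4   /* arithmetic executed in parallel 4 times   */

int main(void) {
    uint64_t base[OPERANDS] = { 0x1000, 0x2000, 0x3000 };
    uint64_t disp = 0x40;              /* fixed address displacement */
    uint64_t queue[TIMES][OPERANDS];   /* per-unit instruction queues */

    /* One address set per iteration: base plus t displacements. */
    for (int t = 0; t < TIMES; t++)
        for (int o = 0; o < OPERANDS; o++)
            queue[t][o] = base[o] + (uint64_t)t * disp;

    for (int t = 0; t < TIMES; t++)
        printf("set %d: src0=0x%llx src1=0x%llx dst=0x%llx\n", t,
               (unsigned long long)queue[t][0],
               (unsigned long long)queue[t][1],
               (unsigned long long)queue[t][2]);
    return 0;
}
```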

Exchange between stacked die

Two or more die are stacked together in a stacked integrated circuit device. Each of the processors on these die is able to communicate with other processors on its die by sending data over the switching fabric of its respective die. The mechanism for sending data between processors on the same die (i.e. intradie communication) is reused for sending data between processors on different die (i.e. interdie communication). The reuse of the mechanism is enabled by assigning each processor a vertical neighbour on its opposing die. Each processor has an interdie connection that connects it to the output exchange bus of its neighbour. A processor is able to borrow the output exchange bus of its neighbour by sending data along the output exchange bus of its neighbour.
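
The neighbour-pairing idea can be modeled in a few lines; the toy below pairs each tile with the same tile index on the opposing die and picks which output exchange bus carries a send. It is a simulation of the routing decision, not of the exchange fabric itself.

```c
#include <stdio.h>

typedef struct { int die, tile; } proc_id;

/* The vertical neighbour is the same tile index on the opposing die
 * (assuming a two-die stack). */
static proc_id neighbour(proc_id p) {
    proc_id n = { 1 - p.die, p.tile };
    return n;
}

/* Intradie send: data enters the fabric on the sender's own bus.
 * Interdie send: the sender borrows its neighbour's output bus. */
static void send(proc_id src, proc_id dst, int data) {
    proc_id bus = (src.die == dst.die) ? src : neighbour(src);
    printf("d%dt%d -> d%dt%d: %d via output bus of d%dt%d\n",
           src.die, src.tile, dst.die, dst.tile, data,
           bus.die, bus.tile);
}

int main(void) {
    proc_id a = { 0, 2 }, b = { 0, 3 }, c = { 1, 1 };
    send(a, b, 42);   /* intradie: own bus              */
    send(a, c, 7);    /* interdie: neighbour's bus, die 1 */
    return 0;
}
```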

Electronic device including main processor and systolic array processor and operating method of electronic device

Disclosed is an electronic device which includes a main processor and a systolic array processor, where the systolic array processor includes processing elements, a kernel data memory that provides a kernel data set to the processing elements, a data memory that provides an input data set to the processing elements, and a controller that provides commands to the processing elements. The main processor translates source code associated with the systolic array processor into commands of the systolic array processor, calculates a switching activity value based on the commands, and stores the translated commands and the switching activity value in a machine learning module, which is based on the systolic array processor.
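
The abstract does not fix how the switching activity value is computed; one plausible reading counts the bits that toggle between consecutive command words. The sketch below implements that Hamming-distance assumption over an invented command stream.

```c
#include <stdio.h>
#include <stdint.h>

/* Count set bits (Kernighan's method). */
static int popcount32(uint32_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* Total bit toggles between consecutive command words: a simple
 * proxy for the switching activity the commands will induce. */
static int switching_activity(const uint32_t *cmds, int count) {
    int total = 0;
    for (int i = 1; i < count; i++)
        total += popcount32(cmds[i - 1] ^ cmds[i]);
    return total;
}

int main(void) {
    uint32_t cmds[] = { 0x00000011, 0x000000F1, 0x00000011, 0x0000FF11 };
    printf("switching activity = %d bit toggles\n",
           switching_activity(cmds, 4));
    return 0;
}
```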

Graphics processors and graphics processing units having dot product accumulate instruction for hybrid floating point format

Described herein is a graphics processing unit (GPU) comprising a first processing cluster to perform parallel processing operations, the parallel processing operations including a ray tracing operation and a matrix multiply operation, and a second processing cluster coupled to the first processing cluster, wherein the first processing cluster includes a floating-point unit to perform floating-point operations, the floating-point unit being configured to process an instruction using a bfloat16 (BF16) format with a multiplier to multiply second and third source operands while an accumulator adds a first source operand to the output of the multiplier.
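
A scalar model of the described multiply-accumulate step is sketched below: BF16 values are treated as the upper 16 bits of an IEEE-754 float, the second and third source operands are multiplied, and the first source operand is added in single precision. Truncating conversion is used for brevity; the hardware's rounding may differ.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t bf16;

static bf16 float_to_bf16(float f) {     /* keep the upper 16 bits */
    uint32_t u;
    memcpy(&u, &f, 4);
    return (bf16)(u >> 16);
}

static float bf16_to_float(bf16 b) {     /* zero-fill the low bits */
    uint32_t u = (uint32_t)b << 16;
    float f;
    memcpy(&f, &u, 4);
    return f;
}

/* dst = src1 + src2 * src3, with BF16 multiplicands and FP32 add. */
static float bf16_mul_acc(float src1, bf16 src2, bf16 src3) {
    return src1 + bf16_to_float(src2) * bf16_to_float(src3);
}

int main(void) {
    bf16 a = float_to_bf16(1.5f), b = float_to_bf16(2.25f);
    printf("acc = %f\n", bf16_mul_acc(10.0f, a, b));  /* 10 + 1.5*2.25 */
    return 0;
}
```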

Virtualized Multicore Systems With Extended Instruction Heterogeneity
20230237009 · 2023-07-27

A system on a chip may include a plurality of data plane processor cores sharing a common instruction set architecture. At least one of the data plane processor cores is specialized to perform a particular function via extensions to the otherwise common instruction set architecture. Such systems on a chip may have reduced physical complexity, cost, and time-to-market, and may provide improvements in core utilization and reductions in system power consumption.
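
As a loose illustration of scheduling against such heterogeneity, the sketch below gives each core an extension bitmask over a common base ISA and places a task on the first core whose mask covers the task's requirements; the extension names and masks are invented.

```c
#include <stdio.h>
#include <stdint.h>

#define EXT_NONE   0u
#define EXT_CRYPTO (1u << 0)   /* hypothetical crypto extension */
#define EXT_DSP    (1u << 1)   /* hypothetical DSP extension    */

typedef struct { int id; uint32_t ext; } core;

/* Return the first core whose extension mask covers 'need';
 * every core already implements the common base ISA. */
static int pick_core(const core *cores, int n, uint32_t need) {
    for (int i = 0; i < n; i++)
        if ((cores[i].ext & need) == need)
            return cores[i].id;
    return -1;   /* no core implements the required extensions */
}

int main(void) {
    core cores[] = { {0, EXT_NONE}, {1, EXT_DSP}, {2, EXT_CRYPTO} };
    printf("crypto task -> core %d\n", pick_core(cores, 3, EXT_CRYPTO));
    printf("plain  task -> core %d\n", pick_core(cores, 3, EXT_NONE));
    return 0;
}
```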