G06F15/8015

Smart performance of spill fill data transfers in computing environments
10956359 · 2021-03-23

A mechanism is described for facilitating smart spill/fill data transfers in computing environments. A method of embodiments, as described herein, includes dividing a kernel into regions including low pressure regions and high pressure regions, where the low pressure regions are associated with low use of registers hosted by a processor of a computing device, while the high pressure regions are associated with high use of the registers. The method may further include transferring data between memory and the registers based on at least one of the low pressure regions and the high pressure regions.
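
To make the region-based idea concrete, here is a minimal Python sketch of splitting a kernel by register pressure and placing spill/fill transfers at region boundaries; the threshold, data shapes, and names are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of region-based spill/fill placement (all names hypothetical).
# Each instruction index carries the number of registers live at that point;
# the kernel is split into contiguous low/high pressure regions, and data is
# moved between memory and registers at region boundaries.

NUM_REGISTERS = 16  # assumed register file size

def split_into_regions(live_counts, threshold=NUM_REGISTERS):
    """Group consecutive instruction indices into low/high pressure regions."""
    regions = []
    for i, live in enumerate(live_counts):
        kind = "high" if live >= threshold else "low"
        if regions and regions[-1][0] == kind:
            regions[-1][1].append(i)
        else:
            regions.append((kind, [i]))
    return regions

def place_spill_fill(regions):
    """Spill excess values to memory entering a high-pressure region,
    and fill them back when pressure drops again."""
    actions = []
    for kind, idxs in regions:
        if kind == "high":
            actions.append(("spill", idxs[0]))      # registers -> memory
            actions.append(("fill", idxs[-1] + 1))  # memory -> registers
    return actions

live = [4, 6, 18, 20, 19, 7, 5, 17, 18, 6]
regions = split_into_regions(live)
print(regions)
print(place_spill_fill(regions))
```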

ARCHITECTURE TO SUPPORT SYNCHRONIZATION BETWEEN CORE AND INFERENCE ENGINE FOR MACHINE LEARNING
20210081846 · 2021-03-18

A system to support a machine learning (ML) operation comprises a core configured to receive and interpret commands into a set of instructions for the ML operation and a memory unit configured to maintain data for the ML operation. The system further comprises an inference engine having a plurality of processing tiles, each comprising an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform tasks of the ML operation on the data in the OCM. The system also comprises an instruction streaming engine configured to distribute the instructions to the processing tiles to control their operations and to synchronize data communication between the core and the inference engine so that data transmitted between them correctly reaches the corresponding processing tiles while ensuring coherence of data shared and distributed among the core and the OCMs.
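
A minimal Python sketch of the synchronization idea, assuming token-tagged transfers and simplified tile/engine classes (all names hypothetical): data must land in a tile's OCM before an instruction that consumes it is dispatched.

```python
# The streaming engine tags each core-to-tile transfer with a sequence token;
# a tile only runs an instruction once the transfers it depends on have
# landed in its on-chip memory (OCM).

class Tile:
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.ocm = {}  # on-chip memory: token -> data

    def execute(self, instr, needed_tokens):
        assert all(t in self.ocm for t in needed_tokens), "data not yet coherent"
        print(f"tile {self.tile_id}: {instr} on {[self.ocm[t] for t in needed_tokens]}")

class StreamingEngine:
    def __init__(self, tiles):
        self.tiles = tiles
        self.next_token = 0

    def push_data(self, tile_id, data):
        token = self.next_token
        self.next_token += 1
        self.tiles[tile_id].ocm[token] = data  # core -> OCM transfer
        return token

    def dispatch(self, tile_id, instr, tokens):
        self.tiles[tile_id].execute(instr, tokens)

tiles = {i: Tile(i) for i in range(4)}
eng = StreamingEngine(tiles)
tok = eng.push_data(2, [1.0, 2.0, 3.0])  # data reaches the right tile first
eng.dispatch(2, "matmul", [tok])         # then the instruction that uses it
```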

GRAPH STREAMING NEURAL NETWORK PROCESSING SYSTEM AND METHOD THEREOF

Disclosed herein is a graph streaming neural network processing system comprising a first processor array, a second processor, and a thread scheduler. The thread scheduler dispatches a thread of a first node to the first processor array or the second processor, wherein the thread is executed to generate output data comprising a data unit stored in a private data buffer of the second processor. The thread scheduler determines that the data unit is sufficient for executing a thread of a second node. The second node is dependent on the output data generated by execution of a plurality of threads of the first node. Upon determining that the data unit is sufficient, the thread scheduler dispatches the thread of the second node. The thread scheduler determines to dispatch a subsequent thread of the first node for execution when a predefined threshold buffer size is available on the private data buffer.
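
A minimal Python sketch of the dispatch rule, with buffer capacity, unit counts, and names as illustrative assumptions: consumer threads launch once enough data units are buffered, and producer threads launch only while threshold space remains.

```python
# Buffer-driven dispatch: downstream (second-node) threads launch as soon as
# enough data units are buffered, and upstream (first-node) threads launch
# only while the private data buffer has threshold space free.

BUFFER_CAPACITY = 4      # private data buffer size, in data units
UNITS_PER_CONSUMER = 2   # data units one second-node thread needs

buffer = []              # private data buffer of the second processor

def try_dispatch_producer(tid):
    # dispatch a first-node thread only if threshold space is free
    if BUFFER_CAPACITY - len(buffer) >= 1:
        buffer.append(f"unit-from-thread-{tid}")
        print(f"dispatched producer thread {tid}")
        return True
    return False

def try_dispatch_consumer():
    # dispatch a second-node thread once its input units are sufficient
    if len(buffer) >= UNITS_PER_CONSUMER:
        inputs = [buffer.pop(0) for _ in range(UNITS_PER_CONSUMER)]
        print(f"dispatched consumer on {inputs}")
        return True
    return False

for tid in range(6):  # interleave producers and consumers
    try_dispatch_producer(tid)
    try_dispatch_consumer()
```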

High bandwidth memory system with distributed request broadcasting masters

A system comprises a processor and a plurality of memory units. The processor is coupled to each of the plurality of memory units by a plurality of network connections. The processor includes a plurality of processing elements arranged in a two-dimensional array and a corresponding two-dimensional communication network communicatively connecting each of the plurality of processing elements to other processing elements on same axes of the two-dimensional array. Each processing element that is located along a diagonal of the two-dimensional array is configured as a request broadcasting master for a respective group of processing elements located along a same axis of the two-dimensional array.
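
A minimal Python sketch of the diagonal-master assignment; the row-based grouping and names are assumptions for illustration.

```python
# Each processing element (PE) on the diagonal of an N x N array acts as the
# request-broadcasting master for the group of PEs sharing its axis.

N = 4  # N x N array of processing elements

# The diagonal element (r, r) masters row r (row grouping assumed here).
groups = {(r, r): [(r, c) for c in range(N)] for r in range(N)}

for master, members in groups.items():
    print(f"master {master} broadcasts memory requests for {members}")
```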

APPARATUSES, METHODS, AND SYSTEMS FOR VECTOR PROCESSOR ARCHITECTURE HAVING AN ARRAY OF IDENTICAL CIRCUIT BLOCKS

Systems, methods, and apparatuses relating to a vector processor architecture having an array of identical circuit blocks are described. In one embodiment, a processor includes a single centralized circuit comprising an instruction decoder and a controller, and a plurality of circuit slices that each comprise an arithmetic logic unit, a multiplier, a register file, a local memory, a same plurality of logic circuits, and a packed data datapath in between. Each circuit slice includes a physical port that provides a unique identification value distinguishing that circuit slice from the other circuit slices. The controller is to broadcast a same configuration value to the plurality of circuit slices to cause a first circuit slice to enable a first logic circuit and a second logic circuit based on its unique identification value and the configuration value, and to cause a second circuit slice to enable the same first logic circuit and disable the same second logic circuit based on its unique identification value and the configuration value.
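
A minimal Python sketch of ID-conditioned configuration: one configuration value is broadcast to every slice, and each slice combines it with its hard-wired ID to decide which logic circuits to enable or disable. The even/odd policy and names are illustrative assumptions.

```python
# The same configuration value reaches every slice; each slice's decision
# differs only because its unique ID differs.

class Slice:
    def __init__(self, uid):
        self.uid = uid              # unique ID from a physical port
        self.enabled = set()

    def configure(self, config):
        self.enabled = {"logic_A"}  # every slice enables circuit A
        if config["enable_B_for_even_ids"] and self.uid % 2 == 0:
            self.enabled.add("logic_B")  # only some slices enable circuit B

slices = [Slice(uid) for uid in range(4)]
config = {"enable_B_for_even_ids": True}  # one value broadcast to all slices
for s in slices:
    s.configure(config)
    print(f"slice {s.uid}: {sorted(s.enabled)}")
```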

FACILITATING DATA PROCESSING USING SIMD REDUCTION OPERATIONS ACROSS SIMD LANES

Various embodiments are provided for facilitating data processing by one or more processors in a computing system. An instruction to be executed may be obtained. The instruction is a single instruction multiple data (SIMD) reduction operation of an operand vector with a plurality of vector elements. The SIMD reduction operation may be executed to produce a result vector with a plurality of alternative vector elements. One or more reduction functions may be performed on each pair of vector elements from the plurality of vector elements of the operand vector, and the result of the one or more reduction functions may be placed in a corresponding vector element of the result vector.
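
A minimal sketch of a pairwise cross-lane reduction, using NumPy to stand in for the SIMD unit; the choice of addition as the reduction function is an assumption.

```python
import numpy as np

def simd_pairwise_reduce(operand, op=np.add):
    """Reduce adjacent lane pairs: result[i] = op(operand[2i], operand[2i+1])."""
    return op(operand[0::2], operand[1::2])

v = np.array([1, 2, 3, 4, 5, 6, 7, 8])
print(simd_pairwise_reduce(v))  # [ 3  7 11 15]
```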

Selectable reconfiguration for dynamically reconfigurable IP cores

Systems and methods for reconfiguration of a hardened intellectual property (IP) block in an integrated circuit (IC) device are provided. Reconfiguration of the hardened IP block in the IC device may transition between functions supported by the hardened IP block. A transition may occur as a pre-configured profile is selected to reconfigure the hardened IP block. Further, configuration data associated with each of the pre-configured profiles of the hardened IP block may be generated and storage space to store the configuration data may be created. Additionally, reconfiguration control logic to read and implement the configuration data in hard IP design primitives may also be generated.
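
A minimal Python sketch of profile-based reconfiguration; the profiles, fields, and class names are hypothetical.

```python
# Pre-generated configuration data is stored per profile, and reconfiguration
# control logic replays the selected profile into the hard IP's primitives.

PROFILES = {
    "pcie_gen3": {"lane_width": 8,  "rate": "8GT/s"},
    "pcie_gen4": {"lane_width": 16, "rate": "16GT/s"},
}

class HardIP:
    def __init__(self):
        self.primitives = {}

    def apply(self, config):
        self.primitives.update(config)  # reconfiguration control logic

ip = HardIP()
ip.apply(PROFILES["pcie_gen3"])  # transition to one supported function
print(ip.primitives)
ip.apply(PROFILES["pcie_gen4"])  # select another stored profile
print(ip.primitives)
```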

Networked Computer With Embedded Rings
20200311020 · 2020-10-01

One aspect of the invention provides a computer comprising a plurality of interconnected processing nodes arranged in a ladder configuration comprising a plurality of facing pairs of processing nodes. The processing nodes of each pair are connected to each other by two links. A processing node in each pair is connected to a corresponding processing node in an adjacent pair by at least one link. The processing nodes are programmed to operate the ladder configuration to transmit data around two embedded one-dimensional rings formed by respective sets of processing nodes and links, each ring using all processing nodes in the ladder once only.
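
A minimal Python sketch that enumerates two node-covering rings in a ladder of facing pairs; the specific link assignment and traversal directions are assumptions for illustration.

```python
# A ladder of n facing pairs (A_i, B_i): each pair is joined by two links, and
# adjacent pairs are joined along each side. Two one-dimensional rings are
# embedded, each visiting every node in the ladder exactly once.

def ladder_rings(n_pairs):
    a = [("A", i) for i in range(n_pairs)]
    b = [("B", i) for i in range(n_pairs)]
    # Ring 1: up the A side, across the top rung, down the B side, and back
    # across the bottom rung, so every node is visited once.
    ring1 = a + b[::-1]
    # Ring 2: the same cycle over the pairs' second links, traversed in the
    # opposite direction, so both rings can carry data concurrently.
    ring2 = ring1[::-1]
    return ring1, ring2

r1, r2 = ladder_rings(4)
print(" -> ".join(f"{s}{i}" for s, i in r1 + r1[:1]))
print(" -> ".join(f"{s}{i}" for s, i in r2 + r2[:1]))
```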

Networked Computer With Multiple Embedded Rings
20200311528 · 2020-10-01

A computer comprising a plurality of interconnected processing nodes arranged in multiple stacked layers forming a multi-face prism is provided. Each face of the prism comprises multiple stacked pairs of nodes, the nodes of each pair connected by at least two intralayer links. Each node is connected to a corresponding node in an adjacent pair by an interlayer link, and the corresponding nodes are connected by respective interlayer links to form respective edges. Each pair forms part of a layer, each layer comprising multiple nodes, each node connected to its neighbouring nodes in the layer by at least one of the intralayer links to form a ring. Data is transmitted around paths formed by respective sets of nodes and links, each path having a first portion between first and second endmost layers, and a second portion provided between the second and first endmost layers and comprising one of the edges.
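
A minimal Python sketch of the prism layout, with layer and face counts as assumptions: each layer forms a ring over intralayer links, and corresponding nodes in adjacent layers are joined by interlayer links that stack into the prism's edges.

```python
# Build the link structure of a stacked-layer prism of processing nodes.

N_LAYERS = 4         # stacked layers
NODES_PER_LAYER = 3  # a 3-face prism: each layer is a triangle

intralayer, interlayer = [], []
for layer in range(N_LAYERS):
    for pos in range(NODES_PER_LAYER):
        nxt = (pos + 1) % NODES_PER_LAYER
        intralayer.append(((layer, pos), (layer, nxt)))  # ring within a layer
        if layer + 1 < N_LAYERS:
            # link to the corresponding node in the adjacent layer (an edge)
            interlayer.append(((layer, pos), (layer + 1, pos)))

print(f"{len(intralayer)} intralayer links, {len(interlayer)} interlayer links")
```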