G06F8/457

Handling Interrupts from a Virtual Function in a System with a Multi-Die Reconfigurable Processor

A system is presented that includes a communication link, a runtime processor, and a reconfigurable processor. The reconfigurable processor is adapted for generating an interrupt to the runtime processor in response to a predetermined event and includes first and second dies arranged in a package, having respective first and second arrays of coarse-grained reconfigurable (CGR) units, and respective first and second communication link interfaces coupled to the communication link. The runtime processor is adapted for configuring the first and second communication link interfaces to provide access to the first and second arrays of coarse-grained reconfigurable units from first and second physical function drivers and from at least one virtual function driver, and the reconfigurable processor is adapted for sending the interrupt to the first or to the second physical function driver and for sending the interrupt to a virtual function driver of the at least one virtual function driver.
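As a rough illustration of the interrupt routing this abstract describes, the sketch below models physical function (PF) and virtual function (VF) drivers as plain objects: an interrupt raised on a die reaches that die's PF driver and, when the triggering event belongs to a virtual function, its VF driver as well. All class and method names are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of interrupt routing from a multi-die reconfigurable
# processor to PF/VF drivers; names are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Driver:
    name: str
    received: list = field(default_factory=list)

    def handle_interrupt(self, event: str) -> None:
        self.received.append(event)

@dataclass
class ReconfigurableProcessor:
    pf_drivers: dict  # die index -> physical function driver
    vf_drivers: dict  # virtual function id -> virtual function driver

    def raise_interrupt(self, die: int, vf_id=None, event="event"):
        # The interrupt goes to the PF driver of the die that observed
        # the predetermined event...
        self.pf_drivers[die].handle_interrupt(event)
        # ...and, if the event belongs to a virtual function, also to
        # that virtual function's driver.
        if vf_id is not None:
            self.vf_drivers[vf_id].handle_interrupt(event)

pf0, pf1 = Driver("pf0"), Driver("pf1")
vf0 = Driver("vf0")
rp = ReconfigurableProcessor(pf_drivers={0: pf0, 1: pf1}, vf_drivers={0: vf0})
rp.raise_interrupt(die=1, vf_id=0, event="execution_complete")
print(pf1.received, vf0.received)  # ['execution_complete'] ['execution_complete']
```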

Systems And Methods For Processor Circuits

A processor circuit includes a first front-end circuit for scheduling first instructions for a first program and a second front-end circuit for scheduling second instructions for a second program. A back-end processing circuit processes first operations in the first instructions and second operations in the second instructions. A multi-program scheduler circuit causes the first front-end circuit to schedule processing of the first operations on the back-end processing circuit and causes the second front-end circuit to schedule processing of the second operations on the back-end processing circuit. A processor generator system includes a processor designer that creates specifications for a processor using workloads for a program, a processor generator that generates a first processor instance using the specifications, a processor optimizer that generates a second processor instance using the workloads, and a co-designer that modifies the program using the second processor instance.
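The multi-program scheduling described above can be caricatured as arbitration between two front-end instruction queues feeding one shared back end. The round-robin policy below is an assumption for illustration; the patent does not specify an arbitration scheme.

```python
# Hypothetical sketch: a multi-program scheduler interleaving operations
# from two front-end streams onto one shared back-end processing circuit.
from collections import deque

def multi_program_schedule(stream_a, stream_b):
    """Round-robin arbitration (an illustrative choice) between two front
    ends; returns the order in which the back end processes operations."""
    queues = [deque(stream_a), deque(stream_b)]
    backend_log = []
    turn = 0
    while any(queues):
        q = queues[turn % 2]
        if q:  # skip a front end whose queue has drained
            backend_log.append(q.popleft())
        turn += 1
    return backend_log

print(multi_program_schedule(["a1", "a2"], ["b1", "b2", "b3"]))
# ['a1', 'b1', 'a2', 'b2', 'b3']
```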

LABEL PROPAGATION IN A DISTRIBUTED SYSTEM

Data are maintained in a distributed computing system that describe a graph. The graph represents relationships among items. The graph has a plurality of vertices that represent the items and a plurality of edges connecting the plurality of vertices. At least one vertex of the plurality of vertices includes a set of label values indicating the at least one vertex's strength of association with a label from a set of labels. The set of labels describes possible characteristics of an item represented by the at least one vertex. At least one edge of the plurality of edges includes a set of label weights for influencing label values that traverse the at least one edge. A label propagation algorithm is executed for a plurality of the vertices in the graph in parallel for a series of synchronized iterations to propagate labels through the graph.
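A minimal single-machine sketch of the propagation step: vertices hold per-label scores, edge weights scale label values as they cross an edge, and each synchronized iteration computes all updates from the previous iteration's scores before applying them. The data layout and the max-combiner are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch of synchronized label propagation on a small graph.
def propagate(labels, edges, seeds, iterations=2):
    """labels: list of label names; edges: {(u, v): weight} (directed);
    seeds: {vertex: {label: value}} initial per-vertex label scores."""
    scores = {v: dict(s) for v, s in seeds.items()}
    for _ in range(iterations):
        # Synchronized superstep: gather weighted label values from the
        # previous iteration, then apply every update at once.
        incoming = {v: {l: 0.0 for l in labels} for v in scores}
        for (u, v), w in edges.items():
            for l in labels:
                incoming[v][l] += w * scores[u].get(l, 0.0)
        for v in scores:
            for l in labels:
                scores[v][l] = max(scores[v].get(l, 0.0), incoming[v][l])
    return scores

# A seeded "music" label flows A -> B -> C, attenuated by edge weights.
seeds = {"A": {"music": 1.0}, "B": {}, "C": {}}
edges = {("A", "B"): 0.5, ("B", "C"): 0.5}
result = propagate(["music"], edges, seeds, iterations=2)
print(result["B"]["music"], result["C"]["music"])  # 0.5 0.25
```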

Configurable Access to a Multi-Die Reconfigurable Processor by a Virtual Function

A data processing system is presented that includes a communication link, a runtime processor, and one or more reconfigurable processors. A reconfigurable processor includes first and second dies arranged in a package, having respective K and L arrays of coarse-grained reconfigurable (CGR) units, and respective first and second communication link interfaces coupled to the communication link. The runtime processor is adapted for configuring the first communication link interface to provide access to the K arrays of CGR units of the first die through the communication link from a first physical function driver and from up to M virtual function drivers, and for configuring the second communication link interface to provide access to the K arrays of CGR units of the first die and to the L arrays of CGR units of the second die through the communication link from a second physical function driver and from up to N virtual function drivers.
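The asymmetric access configuration can be pictured as a map built by the runtime processor: interface 1 exposes only the first die's arrays, interface 2 exposes both dies' arrays. The concrete values (K = L = 2 arrays, M = N = 2 virtual functions) and all identifiers below are illustrative assumptions.

```python
# Hypothetical sketch of the access map a runtime processor might build
# when configuring the two communication link interfaces.
K_ARRAYS = ["die0_arr0", "die0_arr1"]  # K = 2 arrays on the first die
L_ARRAYS = ["die1_arr0", "die1_arr1"]  # L = 2 arrays on the second die

def configure_interfaces(k_arrays, l_arrays, m, n):
    return {
        # Interface 1: first die's arrays only, PF1 plus up to M VFs.
        "if1": {"pf": "pf1",
                "vfs": [f"if1_vf{i}" for i in range(m)],
                "arrays": list(k_arrays)},
        # Interface 2: arrays of both dies, PF2 plus up to N VFs.
        "if2": {"pf": "pf2",
                "vfs": [f"if2_vf{i}" for i in range(n)],
                "arrays": list(k_arrays) + list(l_arrays)},
    }

cfg = configure_interfaces(K_ARRAYS, L_ARRAYS, m=2, n=2)
print(len(cfg["if1"]["arrays"]), len(cfg["if2"]["arrays"]))  # 2 4
```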

Software acceleration platform for supporting decomposed, on-demand network services
11169787 · 2021-11-09

An example embodiment may involve obtaining one or more blueprint files. The blueprint files may collectively define a system of processing nodes, a call flow involving a sequence of messages exchanged by the processing nodes, and message formats of the messages exchanged by the processing nodes. The example embodiment may also involve compiling the blueprint files into machine executable code. The machine executable code may be capable of: representing the processing nodes as decomposed, dynamically invoked units of logic, and transmitting the sequence of messages between the units of logic in accordance with the message formats. The units of logic may include a respective controller and one or more respective workers for each type of processing node.
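One way to picture the blueprint-to-code path: a blueprint declares node types, a call flow, and message formats, and "compiling" it yields one controller plus N workers per node type and a function that replays the flow between them. The dictionary schema and all names below are hypothetical, chosen only to make the decomposition concrete.

```python
# Hypothetical sketch: compiling a blueprint into decomposed units of logic
# (a controller and workers per node type) and replaying its call flow.
blueprint = {
    "nodes": {"ingress": 2, "session": 1},           # node type -> worker count
    "flow": [("ingress", "session", "attach_req")],  # (src, dst, message type)
}

def compile_blueprint(bp):
    units = {name: {"controller": f"{name}-ctrl",
                    "workers": [f"{name}-w{i}" for i in range(n)]}
             for name, n in bp["nodes"].items()}

    def run_flow():
        # Messages travel between node types; a worker of the destination
        # type handles each one (here, simply the first worker).
        log = []
        for src, dst, msg in bp["flow"]:
            handler = units[dst]["workers"][0]
            log.append((units[src]["controller"], handler, msg))
        return log

    return units, run_flow

units, run_flow = compile_blueprint(blueprint)
print(run_flow())  # [('ingress-ctrl', 'session-w0', 'attach_req')]
```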

EXTENDING APPLICATION LIFECYCLE MANAGEMENT TO USER-CREATED APPLICATION PLATFORM COMPONENTS

The examples described herein extend application lifecycle management (ALM) processes (e.g., create, update, delete, retrieve, import, export, uninstall, publish) to user-created application platform components. First and second components are generated within an application platform. The first component is customized at least by indicating whether the first component is subject to localization, defining a layering of the first component, and indicating whether the first component is protected from downstream modification. The second component is customized in accordance with customizing the first component, and is further customized by defining a dependency of the second component on the first component. The components are deployed in a target environment with metadata representing the customizations and enabling the ALM processes.
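A toy model of the customizations the abstract lists: each component carries metadata for localization, layering, and protection, may declare a dependency on another component, and deployment orders components so dependencies land first. Field names and the topological-sort deploy step are illustrative assumptions.

```python
# Hypothetical sketch of user-created components with ALM metadata and a
# dependency-respecting deploy order; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    localizable: bool = False      # subject to localization?
    layer: str = "base"            # layering of the component
    protected: bool = False        # protected from downstream modification?
    depends_on: list = field(default_factory=list)

def deploy_order(components):
    """Topologically order components so each deploys after its dependencies."""
    by_name = {c.name: c for c in components}
    ordered, seen = [], set()

    def visit(c):
        if c.name in seen:
            return
        seen.add(c.name)
        for dep in c.depends_on:
            visit(by_name[dep])
        ordered.append(c.name)

    for c in components:
        visit(c)
    return ordered

first = Component("base_table", localizable=True, layer="solution", protected=True)
second = Component("form_view", depends_on=["base_table"])
print(deploy_order([second, first]))  # ['base_table', 'form_view']
```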

Programming a Coarse Grained Reconfigurable Array through Description of Data Flow Graphs

An assembly language program for a coarse-grained reconfigurable array (CGRA) includes: dispatch interface information indicating operations to be performed via a dispatch interface of the CGRA to receive an input; memory interface information indicating operations to be performed via one or more memory interfaces of the CGRA; tile memory information indicating memory variables referring to memory locations to be implemented in tile memories of the CGRA; and a flow description specifying one or more synchronous data flows, through the memory locations referenced via the memory variables in the tile memory information, to produce a result from the input using the CGRA.
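The four parts of such a program description can be bundled as one structure, as sketched below with a simple consistency check: every variable a flow references must map to a tile memory location. The field names and string syntax are hypothetical, not the patent's assembly language.

```python
# Hypothetical sketch of the four parts of a CGRA program description.
from dataclasses import dataclass

@dataclass
class CGRAProgram:
    dispatch_ops: list  # operations at the dispatch interface (receive input)
    memory_ops: list    # operations at the CGRA's memory interfaces
    tile_vars: dict     # memory variable -> tile memory location
    flows: list         # synchronous data flows through tile_vars

def validate(prog):
    """Every variable referenced by a flow must have a tile memory location."""
    return all(v in prog.tile_vars for flow in prog.flows for v in flow)

prog = CGRAProgram(
    dispatch_ops=["recv input -> x"],
    memory_ops=["load weights -> w"],
    tile_vars={"x": "tile0.mem", "w": "tile1.mem", "y": "tile2.mem"},
    flows=[("x", "w", "y")],  # combine x and w, producing result y
)
print(validate(prog))  # True
```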

Methods and devices for computing a memory size for software optimization

Methods and devices are provided for computing a tile size for software optimization. A method includes: receiving, by a computing device, information indicative of one or more of a set of loop bounds and a set of data shapes; processing, by the computing device, the information to determine a computation configuration implementable by a compiler, said processing including evaluating at least the computation configuration based on a build cost model representative of a data transfer cost and a data efficiency of the computation configuration; and transmitting, by the computing device, instructions directing the compiler to implement the computation configuration.
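To make the cost-model idea concrete, here is a toy scorer that trades off data-transfer cost against how fully each tile is used relative to the loop bound, then picks the cheapest candidate tile size. The formulas and weights are illustrative assumptions, not the patented model.

```python
# Hypothetical sketch: scoring candidate tile sizes with a toy cost model
# combining data-transfer cost and tile-utilization efficiency.
import math

def tile_cost(loop_bound, tile, bytes_per_elem=4, transfer_cost_per_byte=1.0):
    n_tiles = math.ceil(loop_bound / tile)
    # Total bytes moved if every tile is fetched in full.
    transfer = n_tiles * tile * bytes_per_elem * transfer_cost_per_byte
    # Efficiency: fraction of fetched elements that lie inside the loop bound.
    efficiency = loop_bound / (n_tiles * tile)
    return transfer / efficiency  # lower is better

def best_tile(loop_bound, candidates):
    return min(candidates, key=lambda t: tile_cost(loop_bound, t))

# 25 divides 100 exactly, so it wastes no fetched data and wins here.
print(best_tile(100, [16, 25, 32, 64]))  # 25
```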

API GATEWAY SELF PACED MIGRATION

Disclosed herein are system, method, and computer program product embodiments for self-paced migration of an application programming interface (API) gateway. An embodiment operates by applying a policy chain comprising a first set of policies to an API request received at a first API gateway. The embodiment forwards the API request to a second API gateway and applies, at the second API gateway, a virtual policy chain comprising a second set of policies to the API request. The embodiment then forwards the API request back to the first API gateway and routes the API request to a corresponding backend API.
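The request path can be sketched as two gateways, each a composition of policies: the request passes the first gateway's chain, then the second gateway's virtual chain, and is finally routed to the backend. The policies (auth, rate limiting) and all names are hypothetical placeholders.

```python
# Hypothetical sketch of the two-gateway policy-chain flow during a
# self-paced API gateway migration; names are illustrative.
def make_gateway(policies):
    """A gateway applies its policy chain to a request, in order."""
    def apply(request):
        for policy in policies:
            request = policy(request)
        return request
    return apply

add_auth = lambda req: {**req, "auth": "checked"}  # first-chain policy
rate_limit = lambda req: {**req, "rate": "ok"}     # virtual-chain policy

gateway_one = make_gateway([add_auth])     # first API gateway
gateway_two = make_gateway([rate_limit])   # second API gateway (virtual chain)

def handle(request):
    request = gateway_one(request)  # first set of policies
    request = gateway_two(request)  # forwarded: virtual policy chain
    # Forwarded back to the first gateway, which routes to the backend API.
    return {**request, "routed_to": "backend-api"}

print(handle({"path": "/v1/orders"}))
```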
