G06F13/4027

IC die to IC die interconnect using error correcting code and data path interleaving
11545467 · 2023-01-03

A multi-chip module includes a first Integrated Circuit (IC) die and a second IC die. The first IC die includes an array of first bond pads, a plurality of first code group circuits, and first interleaved interconnections between the plurality of first code group circuits and the array of first bond pads, the first interleaved interconnections including a first interleaving pattern causing data from different code group circuits to be coupled to adjacent first bond pads. The second IC die includes a second array of bond pads that electrically couples to the array of first bond pads, a plurality of second code group circuits, and second interleaved interconnections between the plurality of second code group circuits and the array of second bond pads, the second interleaved interconnections including a second interleaving pattern causing data from different code groups to be coupled to adjacent second bond pads.
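The interleaving idea can be illustrated with a minimal sketch: bits from different code groups are assigned to bond pads in round-robin order, so a defect spanning two adjacent pads corrupts at most one bit per code group, which a single-error-correcting code can repair. Group counts, bit widths, and all names below are illustrative assumptions, not taken from the patent.

```python
# Sketch: map bits from different ECC code groups onto a linear array
# of bond pads so that physically adjacent pads carry data from
# different code groups. Sizes are illustrative.

NUM_GROUPS = 4      # code group circuits per die (assumed)
BITS_PER_GROUP = 8  # data bits each group drives (assumed)

def interleaved_pad_map(num_groups, bits_per_group):
    """Return pad_index -> (group, bit) using round-robin interleaving."""
    mapping = {}
    for pad in range(num_groups * bits_per_group):
        mapping[pad] = (pad % num_groups, pad // num_groups)
    return mapping

pads = interleaved_pad_map(NUM_GROUPS, BITS_PER_GROUP)
# Adjacent pads always belong to different code groups:
assert all(pads[p][0] != pads[p + 1][0] for p in range(len(pads) - 1))
```

With this pattern, a short-circuit between any two neighboring pads produces at most a single-bit error in each affected code group.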

VRAN with PCIe fronthaul

Systems, methods and computer software are disclosed for fronthaul. In one embodiment a method is disclosed, comprising: providing a virtual Radio Access Network (vRAN) having a centralized unit (CU) and a distributed unit (DU); and interconnecting the CU and DU over an Input/Output (I/O) bus using Peripheral Component Interconnect-Express (PCIe); wherein the CU and the DU include a PCI to optical converter and an optical to PCI converter.

Heterogeneous computation and hierarchical memory image sensing pipeline

A system on a chip, including a first domain having a first processor, a first local memory coupled to the first processor, wherein the first local memory has a first memory format, and a first sub-network coupled to the first processor; a second domain having a second processor, a second local memory coupled to the second processor, and a second sub-network coupled to the second processor, wherein the second local memory has a second memory format which differs from the first memory format; a multi-tier network coupled to the first sub-network and the second sub-network; a global memory coupled to the multi-tier network; and a multi-port DDR controller coupled to the global memory to receive, transmit and share the first local memory having the first memory format and the second local memory having the second memory format based on a predetermined criterion.

DYNAMICALLY SCALABLE AND PARTITIONED COPY ENGINE

An apparatus to facilitate a dynamically scalable and partitioned copy engine is disclosed. The apparatus includes a processor comprising copy engine hardware circuitry to facilitate copying surface data in memory and comprising: a plurality of copy front-end hardware circuitry to generate a plurality of surface data sub-blocks, wherein a number of the plurality of copy front-end hardware circuitry corresponds to a number of partitions configured for the processor, with each partition associated with a single copy front-end hardware circuitry; a plurality of copy back-end hardware circuitry to operate in parallel to process the plurality of surface data sub-blocks to perform memory accesses, wherein subsets of the plurality of copy back-end hardware circuitry are each associated with the single copy front-end hardware circuitry associated with each partition; and a connectivity matrix hardware circuitry to communicably connect the plurality of copy front-end hardware circuitry to the plurality of copy back-end hardware circuitry.
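The split between front-end and back-end circuitry can be sketched in software terms: a front-end divides a surface copy into sub-blocks, and its associated back-ends perform the memory accesses for those sub-blocks, potentially in parallel. Partition counts, block sizes, and function names below are illustrative assumptions only.

```python
# Sketch: a copy front-end splits a surface into sub-blocks and fans
# them out to its back-ends, which perform the actual memory accesses.
# In hardware the back-ends would run in parallel; here they run in
# sequence for clarity.

def split_surface(surface, num_subblocks):
    """Front-end: divide a flat surface into roughly equal sub-blocks."""
    n = len(surface)
    size = -(-n // num_subblocks)  # ceiling division
    return [surface[i:i + size] for i in range(0, n, size)]

def backend_copy(subblock, dest, offset):
    """Back-end: perform the memory accesses for one sub-block."""
    dest[offset:offset + len(subblock)] = subblock

src = list(range(16))
dst = [0] * 16
offset = 0
for sb in split_surface(src, 4):  # a connectivity matrix routes these
    backend_copy(sb, dst, offset)
    offset += len(sb)
assert dst == src
```

Scaling the number of front-ends with the number of partitions, as the abstract describes, keeps each partition's copy traffic isolated while still sharing the pool of back-ends through the connectivity matrix.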

Network interface device and host processing device
11537541 · 2022-12-27

A network interface device comprises a plurality of components configured to process a flow of data one after another. A control component is configured to provide one or more control messages in said flow, said one or more control messages being provided to said plurality of components one after another such that a configuration of one or more of said components is changed.
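The in-band reconfiguration scheme can be sketched as follows: because a control message travels through the same ordered flow as the data, each component reconfigures itself exactly when the message reaches it, so no data already in flight ahead of the message sees the new configuration. Component and message names are hypothetical.

```python
# Sketch: a control message flows through the pipeline components in
# order, reconfiguring each one as it passes. Names are illustrative.

class Component:
    def __init__(self, name):
        self.name = name
        self.config = "default"

    def process(self, item):
        if isinstance(item, dict) and item.get("type") == "control":
            # In-band reconfiguration: apply the new setting, then
            # forward the message unchanged to the next component.
            self.config = item["config"]
        return item

pipeline = [Component("parser"), Component("filter"), Component("dma")]

def send(item):
    for comp in pipeline:  # components process the flow one after another
        item = comp.process(item)
    return item

send({"type": "control", "config": "fast-path"})
assert all(c.config == "fast-path" for c in pipeline)
```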

Inter cluster snoop latency reduction

In one embodiment, a cache coherent system includes one or more agents (e.g., coherent agents) that may cache data used by the system. The system may include a point of coherency in a memory controller in the system, and thus the agents may transmit read requests to the memory controller to coherently read data. The point of coherency may determine if the data is cached in another agent, and may transmit a copy back request to the other agent if the other agent has modified the data. The system may include an interconnect between the agents and the memory controller. At a point on the interconnect at which traffic from the agents converges, a copy back response may be converted to a fill for the requesting agent.
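The conversion step at the convergence point can be sketched as a small message transform: when a copy back response for an address arrives and a read to that address is pending, the response is rewritten as a fill addressed to the requester, skipping the round trip through the memory controller. The message shapes below are hypothetical.

```python
# Sketch: at the point where agent traffic converges on the
# interconnect, convert a copy back response (modified data returned
# by a caching agent) into a fill for the original requester.

def converge(message, pending_reads):
    """Convert a copy back response into a fill when a read is pending."""
    if message["kind"] == "copy_back_rsp" and message["addr"] in pending_reads:
        requester = pending_reads.pop(message["addr"])
        return {"kind": "fill", "addr": message["addr"],
                "data": message["data"], "dest": requester}
    return message  # anything else passes through unchanged

pending = {0x1000: "agent0"}
msg = {"kind": "copy_back_rsp", "addr": 0x1000, "data": b"\xaa"}
out = converge(msg, pending)
assert out["kind"] == "fill" and out["dest"] == "agent0"
```

The latency saving comes from answering the requester at the convergence point rather than forwarding the modified data to the memory controller and back.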

Experience convergence for multiple platforms

A processor may launch a common layer including a browser and a shell. The processor may identify a request for data required by the software platform. The processor may determine a context in which the request was generated by detecting, by the shell, whether the software platform is an on-premise solution or a cloud solution; examining, by the shell, the request to determine whether it requires local resources or cloud-based resources; and designating, by the shell, the context as a cloud-based context or an on-premise context based on the detecting and the examining. In response to the determining, the processor may perform processing related to a request generated in the on-premise context using at least one local resource stored in a local data store; and communicate, by the browser, with at least one cloud-based resource to perform processing related to a request generated in the cloud-based context.
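The shell's detect-examine-designate logic can be sketched as a small dispatcher: the platform type and the request's resource needs determine the context, and the context determines whether the local data store or a cloud resource handles the request. All field names and the dispatch structure are illustrative assumptions.

```python
# Sketch: the shell designates a context for each request, and the
# common layer dispatches to a local store or a cloud resource
# accordingly. Names are hypothetical.

def designate_context(platform, request):
    """Shell: detect the platform type, examine the request's resource
    needs, and designate the context."""
    if platform == "on-premise" and request.get("needs") == "local":
        return "on-premise"
    return "cloud"

def handle(platform, request, local_store, cloud_fetch):
    context = designate_context(platform, request)
    if context == "on-premise":
        return local_store[request["key"]]   # local data store
    return cloud_fetch(request["key"])       # browser talks to the cloud

store = {"cfg": "local-value"}
assert handle("on-premise", {"needs": "local", "key": "cfg"},
              store, lambda k: "cloud-value") == "local-value"
assert handle("cloud", {"needs": "remote", "key": "cfg"},
              store, lambda k: "cloud-value") == "cloud-value"
```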

PCIE-BASED COMMUNICATIONS METHOD AND APPARATUS
20220405229 · 2022-12-22

A PCIe-based communications method includes: a root complex writes identity information of a second node into a first node and writes routing table information into a third node, where the first node is a source node of first data, the second node is a destination node of the first data, and the third node is a node through which the first data arrives at the second node.
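The configuration and forwarding roles can be sketched as follows: the root complex writes the destination's identity into the source node and a routing entry into the intermediate node, after which data from the first node reaches the second node by way of the third. The node names and data structures are illustrative, not from the specification.

```python
# Sketch: root complex programs the source node's destination identity
# and the intermediate node's routing table; data then hops
# source -> intermediate -> destination.

nodes = {n: {"dest_id": None, "routing": {}, "inbox": []}
         for n in ("first", "second", "third")}

def root_complex_configure(src, dst, via):
    nodes[src]["dest_id"] = dst           # identity info of the second node
    nodes[via]["routing"][dst] = dst      # routing table in the third node

def forward(node, target, payload):
    if node == target:
        nodes[node]["inbox"].append(payload)
    else:
        next_hop = nodes[node]["routing"][target]
        forward(next_hop, target, payload)

def send(src, payload):
    dst = nodes[src]["dest_id"]
    forward("third", dst, payload)        # first data traverses the third node

root_complex_configure("first", "second", "third")
send("first", b"hello")
assert nodes["second"]["inbox"] == [b"hello"]
```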

Interface Bus Combining
20220405227 · 2022-12-22

Circuits and methods enabling common control of an agent device by two or more buses, particularly MIPI RFFE serial buses. In essence, the invention provides flagging signals designating completed register write operations to denote which of two registers is active, such that synchronization is accomplished in a clock-free manner. One embodiment includes at least two decoders, each including a common register and a bus (S/P) decoder coupled to a respective bus and to the common register. The S/P decoder asserts a write-complete signal when a write operation to a corresponding common register is completed. A multiplexer has at least two selectable input bus ports coupled to the common registers within the at least two decoders. A selection circuit selects an input bus port of the multiplexer in response to the assertion of a last write-complete signal from the S/P decoders.
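The clock-free selection behavior can be sketched behaviorally: each decoder owns a register and a write-complete flag, completing a write clears the other decoders' flags and sets its own, and the multiplexer selects whichever register was most recently fully written. All names and the flag-clearing mechanism below are illustrative assumptions.

```python
# Sketch: two bus decoders each own a register; the write-complete
# flag marks which register was written last, and the multiplexer
# selects that one without any shared clock.

class Decoder:
    def __init__(self):
        self.register = 0
        self.write_complete = False

    def write(self, value, others):
        self.register = value
        # Only the last completed write stays marked active.
        for d in others:
            d.write_complete = False
        self.write_complete = True

def mux_select(decoders):
    """Return the register of the decoder whose write completed last."""
    for d in decoders:
        if d.write_complete:
            return d.register
    return decoders[0].register  # default before any write completes

a, b = Decoder(), Decoder()
a.write(0x11, [b])
b.write(0x22, [a])
assert mux_select([a, b]) == 0x22
```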

SYSTEMS AND METHODS FOR REDUCING CONGESTION ON NETWORK-ON-CHIP

Systems or methods of the present disclosure may provide a programmable logic device including a network-on-chip (NoC) to facilitate data transfer between one or more main intellectual property components (main IP) and one or more secondary intellectual property components (secondary IP). To reduce or prevent excessive congestion on the NoC, the NoC may include one or more traffic throttlers that may receive feedback from a data buffer, a main bridge, or both and adjust data injection rate based on the feedback. Additionally, the NoC may include a data mapper to enable data transfer to be remapped from a first destination to a second destination if congestion is detected at the first destination.
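The two congestion mechanisms can be sketched together: a throttler lowers its injection rate multiplicatively when buffer-occupancy feedback crosses a threshold and recovers gradually otherwise, while a mapper redirects traffic from a congested destination to an alternate one. The thresholds, rates, and class names below are illustrative assumptions only.

```python
# Sketch: a traffic throttler adjusts injection rate from buffer
# feedback; a data mapper remaps a congested destination.

class TrafficThrottler:
    def __init__(self, max_rate=1.0, threshold=0.75):
        self.rate = max_rate
        self.max_rate = max_rate
        self.threshold = threshold

    def feedback(self, buffer_occupancy):
        """Adjust injection rate from the buffer fill level (0.0..1.0)."""
        if buffer_occupancy > self.threshold:
            self.rate = max(0.1, self.rate * 0.5)            # back off
        else:
            self.rate = min(self.max_rate, self.rate * 1.1)  # recover

class DataMapper:
    def __init__(self, primary, alternate):
        self.primary, self.alternate = primary, alternate
        self.route = {primary: primary}

    def on_congestion(self):
        self.route[self.primary] = self.alternate  # remap destination

t = TrafficThrottler()
t.feedback(0.9)
assert t.rate == 0.5
```

The multiplicative-decrease, gradual-increase shape mirrors common congestion-control practice; the patent itself does not specify the adjustment policy.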