Patent classifications
G06F2213/0026 (PCI express)
Front End Traffic Handling In Modular Switched Fabric Based Data Storage Systems
Systems, methods, apparatuses, and software for data storage systems are provided herein. In one example, a data storage system includes storage drives that each comprise a PCIe interface and are configured to store and retrieve data on associated storage media responsive to data transactions received over a switched PCIe fabric. The data storage system includes processors each configured to manage only an associated subset of the storage drives over the switched PCIe fabric. A first processor is configured to identify, within its network buffer, first data packets received over an associated network interface as comprising a storage operation for at least one of the storage drives managed by a second processor, and to responsively transfer the first data packets into a network buffer of the second processor.
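As a sketch of the forwarding decision this abstract describes, the C fragment below models a hypothetical drive-ownership table and per-processor network buffers. All names, sizes, and the static partitioning are assumptions for illustration, not the patent's implementation.

```c
/* Sketch: a front-end processor checks whether an incoming packet
 * targets a drive it owns; if not, it forwards the packet into the
 * owning processor's network buffer. All types/names are invented. */
#include <stdio.h>
#include <string.h>

#define NUM_PROCESSORS 2
#define NUM_DRIVES     8
#define BUF_SLOTS      16
#define PKT_SIZE       64

struct net_buffer {
    unsigned char slots[BUF_SLOTS][PKT_SIZE];
    int head;
};

/* Which processor manages each drive (static partitioning assumed). */
static const int drive_owner[NUM_DRIVES] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static struct net_buffer bufs[NUM_PROCESSORS];

static void enqueue(struct net_buffer *b, const unsigned char *pkt)
{
    memcpy(b->slots[b->head % BUF_SLOTS], pkt, PKT_SIZE);
    b->head++;
}

/* Called by processor `self` for each packet arriving on its NIC. */
static void handle_packet(int self, const unsigned char *pkt, int target_drive)
{
    int owner = drive_owner[target_drive];
    if (owner == self) {
        enqueue(&bufs[self], pkt);   /* local storage operation      */
    } else {
        enqueue(&bufs[owner], pkt);  /* hand off over the PCIe fabric */
        printf("proc %d forwarded packet for drive %d to proc %d\n",
               self, target_drive, owner);
    }
}

int main(void)
{
    unsigned char pkt[PKT_SIZE] = { 0 };
    handle_packet(0, pkt, 2);   /* drive 2 is local to processor 0 */
    handle_packet(0, pkt, 5);   /* drive 5 belongs to processor 1  */
    return 0;
}
```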
HIGH PERFORMANCE INTERCONNECT LINK LAYER
Transaction data is identified and a flit is generated to include three or more slots and a floating field to be used as an extension of any one of two or more of the slots. In another aspect, the flit is to include two or more slots, a payload, and a cyclic redundancy check (CRC) field to be encoded with a 16-bit CRC value generated based on the payload. The flit is sent over a serial data link to a device for processing, based at least in part on the three or more slots.
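The sketch below frames a flit payload and computes a 16-bit CRC over it, as the abstract describes. The 22-byte payload (a 176-bit payload within a 192-bit flit) and the CRC-16-CCITT polynomial 0x1021 are illustrative assumptions; the abstract fixes only the CRC width.

```c
/* Sketch of framing a flit whose payload (three slots plus a floating
 * field) is covered by a 16-bit CRC. Widths and polynomial are assumed. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAYLOAD_BYTES 22   /* e.g., a 176-bit payload of a 192-bit flit */

struct flit {
    uint8_t  payload[PAYLOAD_BYTES]; /* three slots + floating field */
    uint16_t crc;                    /* covers the payload only      */
};

static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    struct flit f;
    memset(f.payload, 0xA5, sizeof f.payload); /* dummy slot contents */
    f.crc = crc16_ccitt(f.payload, sizeof f.payload);
    printf("flit CRC-16: 0x%04X\n", f.crc);
    /* The receiver recomputes the CRC over the payload and compares. */
    return 0;
}
```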
REDIRECTION OF LANE RESOURCES
An apparatus includes a pass-through module that includes connector pins to connect with at least one active portion of a motherboard connector and to separately connect with at least one inactive portion of the motherboard connector. A routing function on the pass-through module redirects a set of bidirectional lanes from the connector pins connected to the active portion of the motherboard connector to the connector pins connected to the inactive portion of the motherboard connector to enable a connection of the set of bidirectional lanes to at least one other motherboard resource connected to the inactive portion of the motherboard connector.
INFORMATION PROCESSING APPARATUS
An information processing device having a processor and memory, and including one or more accelerators and one or more storage devices, wherein: the information processing device has one network for connecting the processor, the accelerators, and the storage devices; the storage devices have an initialization interface for accepting an initialization instruction from the processor, and an I/O issuance interface for issuing an I/O command; and the processor notifies the accelerators of the address of the initialization interface or the address of the I/O issuance interface.
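A minimal C sketch of the notification step, with invented register names and addresses: the processor keeps the initialization interface to itself and hands the accelerator only the address of the I/O issuance interface, so the accelerator can issue I/O commands directly.

```c
/* Sketch: the processor passes an accelerator the address of a storage
 * device's I/O issuance interface. Layout and names are invented. */
#include <stdint.h>
#include <stdio.h>

struct storage_dev {
    uint64_t init_if_addr;  /* initialization interface (processor only) */
    uint64_t io_if_addr;    /* I/O issuance interface                    */
};

struct accelerator {
    uint64_t io_doorbell;   /* where the accelerator issues I/O commands */
};

static void notify_accelerator(struct accelerator *acc,
                               const struct storage_dev *dev)
{
    /* In hardware this would be an MMIO write over the shared network. */
    acc->io_doorbell = dev->io_if_addr;
}

int main(void)
{
    struct storage_dev ssd = { .init_if_addr = 0x9000, .io_if_addr = 0x9100 };
    struct accelerator acc = { 0 };

    /* The processor initializes the device via init_if_addr (elided),
     * then passes only the I/O issuance address to the accelerator.   */
    notify_accelerator(&acc, &ssd);
    printf("accelerator issues I/O via 0x%llx\n",
           (unsigned long long)acc.io_doorbell);
    return 0;
}
```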
MOTHERBOARD MODULE HAVING SWITCHABLE PCI-E LANE
A motherboard module having switchable PCI-E lanes includes a CPU, a first PCI-E slot, a second PCI-E slot, a first switch, and a second switch. The 1st to a-th processor pin sets of the CPU are switchably electrically connected, via the first switch, to the 1st to a-th first PCI-E pin sets of the first PCI-E slot or to the (2N−a+1)-th to 2N-th second PCI-E pin sets of the second PCI-E slot, to form PCI-E lanes whose number is a. The (a+1)-th to 2N-th processor pin sets of the CPU are connected to the input terminal of the second switch, and the output terminal of the second switch is switchably electrically connected to the (a+1)-th to 2N-th first PCI-E pin sets of the first PCI-E slot or to the 1st to (2N−a)-th second PCI-E pin sets of the second PCI-E slot, to form PCI-E lanes whose number is 2N−a, wherein 1 < a < 2N.
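To make the arithmetic concrete: with 2N = 16 and a = 8, routing both switches to the first slot yields one x16 slot, while splitting them yields x8/x8. The C model below (names and values invented) computes the lane allocation for each switch setting.

```c
/* Worked model of the two-switch lane routing with 2N = 16, a = 8. */
#include <stdio.h>

#define TWO_N 16
#define A      8   /* 1 < a < 2N */

enum route { TO_SLOT1, TO_SLOT2 };

static void lane_counts(enum route sw1, enum route sw2,
                        int *slot1, int *slot2)
{
    *slot1 = *slot2 = 0;
    /* switch 1 carries processor pin sets 1..a       */
    if (sw1 == TO_SLOT1) *slot1 += A;         else *slot2 += A;
    /* switch 2 carries processor pin sets (a+1)..2N  */
    if (sw2 == TO_SLOT1) *slot1 += TWO_N - A; else *slot2 += TWO_N - A;
}

int main(void)
{
    int s1, s2;
    lane_counts(TO_SLOT1, TO_SLOT1, &s1, &s2);
    printf("both to slot 1 : x%d / x%d\n", s1, s2);  /* x16 / x0 */
    lane_counts(TO_SLOT1, TO_SLOT2, &s1, &s2);
    printf("split          : x%d / x%d\n", s1, s2);  /* x8  / x8 */
    lane_counts(TO_SLOT2, TO_SLOT2, &s1, &s2);
    printf("both to slot 2 : x%d / x%d\n", s1, s2);  /* x0  / x16 */
    return 0;
}
```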
Wide Elastic Buffer
A receiving device uses an elastic buffer that is wider than the number of data elements transferred in each cycle. To compensate for frequency differences between the transmitter and the receiver, the transmitting device periodically sends a skip request with a default number of skip data elements. If the elastic buffer is filling, the receiving device ignores one or more of the skip data elements. If the elastic buffer is emptying, the receiving device adds one or more skip data elements to the skip request. To maintain the ordering of data despite the manipulation of the skip data elements, two rows of the wide elastic buffer are read at a time. This allows construction of a one-row result from any combination of the data elements of the two rows. The column pointers are adjusted appropriately, to ensure that they continue to point to the next data to be read.
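A sketch of the skip-adjustment rule in C, with invented buffer depth and watermark thresholds; the two-row read and column-pointer bookkeeping are omitted for brevity.

```c
/* Sketch: on a skip request carrying a default number of skip elements,
 * the receiver drops an element when the buffer is filling and inserts
 * an extra one when it is emptying. Thresholds/names are invented. */
#include <stdio.h>

#define DEPTH         64
#define DEFAULT_SKIPS  4
#define HIGH_WATER    48
#define LOW_WATER     16

/* Returns how many skip elements to actually keep in the stream. */
static int adjust_skips(int fill_level)
{
    if (fill_level > HIGH_WATER)
        return DEFAULT_SKIPS - 1;   /* filling: ignore a skip element */
    if (fill_level < LOW_WATER)
        return DEFAULT_SKIPS + 1;   /* emptying: add a skip element   */
    return DEFAULT_SKIPS;           /* on track: keep the default     */
}

int main(void)
{
    const int levels[] = { 8, 32, 56 };
    for (int i = 0; i < 3; i++)
        printf("fill %2d -> keep %d skip elements\n",
               levels[i], adjust_skips(levels[i]));
    return 0;
}
```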
Surveillance Camera Upgrade via Removable Media having Deep Learning Accelerator and Random Access Memory
Systems, devices, and methods related to a deep learning accelerator and memory are described. For example, a removable medium (e.g., a memory card or a USB drive) may include a deep learning accelerator configured to execute instructions with matrix operands, an interface to receive a video stream, and random access memory to buffer a portion of the video stream as an input to an artificial neural network and to store the instructions executable by the deep learning accelerator and the matrices of the artificial neural network. Such a removable medium can replace an existing removable medium used in a surveillance camera to record video or images. The deep learning accelerator can execute the instructions to generate analytics of the buffered portion using the artificial neural network, enabling a surveillance camera upgraded via the removable medium to provide intelligent services based on the analytics.
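The sketch below models only the data path: a frame buffered in the medium's RAM is run through a stand-in matrix-vector routine representing one accelerator instruction with matrix operands. All sizes and names are invented.

```c
/* Architectural sketch: the removable medium buffers video in its RAM
 * and the on-board accelerator runs ANN instructions over it. */
#include <stdio.h>

#define FRAME_PIXELS 16   /* tiny stand-in for a buffered frame */
#define NEURONS       4

static float ram_frame[FRAME_PIXELS];          /* buffered video input */
static float weights[NEURONS][FRAME_PIXELS];   /* stored ANN matrices  */

/* Stand-in for one accelerator instruction with matrix operands. */
static void dla_matvec(const float w[NEURONS][FRAME_PIXELS],
                       const float *in, float *out)
{
    for (int n = 0; n < NEURONS; n++) {
        out[n] = 0.0f;
        for (int p = 0; p < FRAME_PIXELS; p++)
            out[n] += w[n][p] * in[p];
    }
}

int main(void)
{
    float analytics[NEURONS];
    for (int p = 0; p < FRAME_PIXELS; p++) ram_frame[p] = 1.0f;
    for (int n = 0; n < NEURONS; n++)
        for (int p = 0; p < FRAME_PIXELS; p++) weights[n][p] = 0.25f;

    dla_matvec(weights, ram_frame, analytics);  /* generate analytics */
    printf("analytics[0] = %.2f\n", analytics[0]);
    return 0;
}
```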
Configurable client hardware
Various systems and methods for configuring a pluggable computing device are described herein. A pluggable computing device may be configured to be compatible with a pluggable host system using a default communication channel to obtain configuration settings and configure a programmable logic device on the pluggable computing device. The pluggable computing device may perform chain of trust processing on the pluggable host system. The pluggable computing device may be disposed on a compute card, which may include a heat sink in a particular configuration.
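As an illustration of chain-of-trust processing, the toy C sketch below accepts each stage only if its measurement matches the value recorded by the previous stage. The XOR "measurement" is a stand-in for a real cryptographic hash or signature check, and all data here is invented.

```c
/* Toy chain-of-trust walk: every link is verified by its predecessor. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct stage {
    const char *name;
    const char *image;      /* stand-in for firmware/boot code        */
    uint8_t     expected;   /* expected measurement of the NEXT stage */
};

static uint8_t measure(const char *image)
{
    uint8_t m = 0;
    for (size_t i = 0; i < strlen(image); i++)
        m ^= (uint8_t)image[i];     /* toy hash, NOT cryptographic */
    return m;
}

static int verify_chain(const struct stage *chain, int n)
{
    for (int i = 0; i + 1 < n; i++) {
        if (measure(chain[i + 1].image) != chain[i].expected) {
            printf("chain broken at %s\n", chain[i + 1].name);
            return 0;
        }
    }
    return 1;   /* every link verified by its predecessor */
}

int main(void)
{
    struct stage chain[] = {
        { "root-of-trust", "rot",  measure("boot") },
        { "boot",          "boot", measure("os")   },
        { "os",            "os",   0               },
    };
    printf("host %s\n", verify_chain(chain, 3) ? "trusted" : "rejected");
    return 0;
}
```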
SERVICE MESH ARCHITECTURE FOR INTEGRATION WITH ACCELERATOR SYSTEMS
A processing apparatus can include a memory device having a user space for executing user applications. The processing apparatus can further include infrastructure communication circuitry that can receive a request from a user application executing in the user space. The infrastructure communication circuitry can perform a service mesh operation, in response to the request, without a sidecar proxy. Other systems and methods are described.
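A sketch of the sidecar-less path in C, with an invented service registry: the discovery and routing that a sidecar proxy would normally perform happen inline, in the role of the infrastructure communication circuitry.

```c
/* Sketch: a user-space request is resolved and routed inline rather
 * than detouring through a per-application sidecar proxy process.
 * The registry and all names are invented for illustration. */
#include <stdio.h>
#include <string.h>

struct service { const char *name; const char *endpoint; };

static const struct service registry[] = {
    { "billing", "10.0.0.5:8443" },
    { "catalog", "10.0.0.7:8443" },
};

/* Plays the role of the infrastructure communication circuitry:
 * service discovery and routing done inline, no sidecar hop. */
static const char *mesh_resolve(const char *service)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, service) == 0)
            return registry[i].endpoint;
    return NULL;
}

int main(void)
{
    const char *ep = mesh_resolve("billing");   /* user-app request */
    printf("billing -> %s (routed without sidecar)\n", ep ? ep : "?");
    return 0;
}
```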
Enabling a multi-chip daisy chain topology using peripheral component interconnect express (PCIe)
A system-on-chip (SoC) may be configured to enable a multi-chip daisy chain topology using peripheral component interconnect express (PCIe). The SoC may include a processor, a local memory, a root complex operably connected to the processor and the local memory, and a multi-function endpoint controller. The root complex may obtain forwarding information to configure routing of transactions to one or more PCIe endpoint functions or to the local memory. The root complex may initialize, based on the forwarding information, access between a host and the one or more PCIe endpoint functions. The multi-function endpoint controller may obtain a descriptor and endpoint information to configure outbound portals for transactions to at least one remote host. The multi-function endpoint controller may establish a communication path between the host and a function out of a plurality of functions.
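One way to picture the forwarding information is an address-range table consulted by the root complex. The C sketch below (ranges, names, and the table layout are assumptions) routes a transaction either to local memory or to an outbound portal toward the next chip in the chain.

```c
/* Sketch of the forwarding step: match an inbound address against
 * ranges mapping to local memory or to an endpoint function that
 * forwards down the daisy chain. All values are invented. */
#include <stdint.h>
#include <stdio.h>

enum target { LOCAL_MEMORY, EP_FUNCTION };

struct fwd_rule {
    uint64_t    base, size;
    enum target target;
    int         func;       /* endpoint function number, if any */
};

static const struct fwd_rule table[] = {
    { 0x00000000, 0x40000000, LOCAL_MEMORY, -1 },  /* 1 GiB local DRAM   */
    { 0x80000000, 0x10000000, EP_FUNCTION,   0 },  /* portal to next SoC */
    { 0x90000000, 0x10000000, EP_FUNCTION,   1 },
};

static void route(uint64_t addr)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        const struct fwd_rule *r = &table[i];
        if (addr >= r->base && addr < r->base + r->size) {
            if (r->target == LOCAL_MEMORY)
                printf("0x%llx -> local memory\n", (unsigned long long)addr);
            else
                printf("0x%llx -> endpoint function %d\n",
                       (unsigned long long)addr, r->func);
            return;
        }
    }
    printf("0x%llx -> unsupported request\n", (unsigned long long)addr);
}

int main(void)
{
    route(0x00001000);   /* lands in local memory    */
    route(0x80000040);   /* forwarded down the chain */
    return 0;
}
```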