G06F15/7803

BOARD DEVICE OF SINGLE BOARD COMPUTER
20230004200 · 2023-01-05

A board device of a single board computer includes a baseboard with front and back areas; at least one small-outline dual in-line memory module installed in the front area; a processor frame installed in the front area and sized for an LGA 1200 processor socket; a cooling module combined with the processor frame, installed in the front area, and sized as a cooling module of an LGA 2011 processor socket; and a bottom frame portion disposed in the back area. Four locking members pass through four board holes of the baseboard to lock the bottom frame portion and the cooling module to the baseboard. The four board holes have a hole distance complying with the hole-distance specification of the four slots of the cooling module of the LGA 2011 processor socket.

Computing system with hardware reconfiguration mechanism and method of operation thereof
11494322 · 2022-11-08

A method of operation of a computing system includes: providing a first cluster having a first kernel unit for managing a first reconfigurable hardware device; analyzing an application descriptor associated with an application; generating a first bitstream based on the application descriptor for loading the first reconfigurable hardware device, the first bitstream for implementing at least a first portion of the application; and implementing a first fragment with the first bitstream in the first cluster.
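The claimed method steps can be sketched as a short illustration. This is not the patent's implementation; all class and function names (`AppDescriptor`, `Cluster`, `generate_bitstream`, and so on) are hypothetical stand-ins, and a string stands in for the bitstream that a real synthesis flow would produce.

```python
# Illustrative sketch of the claimed steps; every identifier is hypothetical.
from dataclasses import dataclass, field

@dataclass
class AppDescriptor:
    app_name: str
    portions: list          # parts of the application to map to hardware

@dataclass
class Cluster:
    kernel_unit: str        # manages the first reconfigurable hardware device
    loaded_bitstreams: list = field(default_factory=list)

def analyze(descriptor):
    # Analyze the application descriptor: pick a portion to accelerate.
    return descriptor.portions[0]

def generate_bitstream(portion):
    # A real flow would run synthesis/place-and-route; a string stands in here.
    return f"bitstream({portion})"

def implement_fragment(cluster, bitstream):
    # The kernel unit loads the bitstream into its reconfigurable device.
    cluster.loaded_bitstreams.append(bitstream)
    return len(cluster.loaded_bitstreams) - 1   # fragment id

cluster = Cluster(kernel_unit="kernel-0")       # provide the first cluster
desc = AppDescriptor("filter-app", ["fir-stage"])
frag = implement_fragment(cluster, generate_bitstream(analyze(desc)))
```

The sketch only mirrors the ordering of the claim: analyze the descriptor, generate a bitstream for a portion of the application, then implement a fragment in the cluster.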

SYSTEMS AND METHODS TO CONFIGURE FRONT PANEL HEADER

In one aspect, a device may include at least one processor programmed with instructions to power on the device responsive to an electrical connection of two pins on a front panel header of a system board and, based on powering on the device responsive to the electrical connection of the two pins, present a basic input/output system (BIOS) setup screen on a display. The BIOS setup screen may provide one or more options for a person to configure pinouts of the front panel header. The processor may also be programmed with instructions to save the person's configuration of the pinouts based on user input provided via the BIOS setup screen and, responsive to a subsequent startup of the device, apply the configuration of the pinouts of the front panel header for operation of the device.
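The described flow — power on via a pin short, optionally edit the pinout in BIOS setup, then apply the saved configuration on later startups — can be modeled as a small state sketch. All identifiers and pin assignments below are hypothetical and are not taken from the patent or from any real firmware interface.

```python
# Hypothetical model of the described BIOS flow; pin labels are illustrative.
DEFAULT_PINOUT = {1: "PWR_SW+", 2: "PWR_SW-", 3: "RESET+", 4: "RESET-"}

class Device:
    def __init__(self):
        self.saved_pinout = dict(DEFAULT_PINOUT)  # persisted configuration
        self.active_pinout = None                 # configuration in effect

    def power_on(self, shorted_pins, user_pinout=None):
        # Shorting the power-switch pins powers the device on and, in the
        # described aspect, can present the BIOS setup screen.
        if shorted_pins == (1, 2) and user_pinout is not None:
            # The person reconfigures header pinouts via the setup screen;
            # the configuration is saved for later startups.
            self.saved_pinout.update(user_pinout)
        # On any startup, the saved configuration is applied for operation.
        self.active_pinout = dict(self.saved_pinout)

dev = Device()
dev.power_on((1, 2), user_pinout={3: "HDD_LED+", 4: "HDD_LED-"})  # configure
dev.power_on((1, 2))                       # subsequent startup: config applied
```

The point of the sketch is the persistence step: the configuration saved during one boot is what gets applied on every subsequent boot.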

Load reduced memory module
09826638 · 2017-11-21

The embodiments described herein describe technologies for memory systems. One implementation of a memory system includes a motherboard substrate with multiple module sockets, at least one of which is populated with a memory module. A first set of data lines is disposed on the motherboard substrate and coupled to the module sockets. The first set of data lines includes a first subset of point-to-point data lines coupled between a memory controller and a first socket and a second subset of point-to-point data lines coupled between the memory controller and a second socket. A second set of data lines is disposed on the motherboard substrate and coupled between the first socket and the second socket. The first and second sets of data lines can make up a memory channel.

Node card utilizing a same connector to communicate pluralities of signals

A system and method for provisioning of modular compute resources within a system design are provided. In one embodiment, a node card or a system board may be used.

Heterogeneous miniaturization platform

A method of forming an electrical device is provided that includes forming microprocessor devices on a microprocessor die; forming memory devices on a memory device die; forming component devices on a component die; and forming a plurality of packaging devices on a packaging die. A plurality of each of said microprocessor devices, memory devices, component devices and packaging components is transferred to a supporting substrate, wherein the packaging components electrically interconnect the memory devices, component devices and microprocessor devices in individualized groups. The supporting substrate is then sectioned to provide said individualized groups of memory devices, component devices and microprocessor devices interconnected by a packaging component.

Neural Network Accelerator in DIMM Form Factor
20220051089 · 2022-02-17

The technology relates to a neural network dual in-line memory module (NN-DIMM), a microelectronic system comprising a CPU and a plurality of the NN-DIMMs, and a method of transferring information between the CPU and the plurality of the NN-DIMMs. The NN-DIMM may include a module card having a plurality of parallel edge contacts adjacent to an edge of a slot connector thereof and configured to have the same command and signal interface as a standard dual in-line memory module (DIMM). The NN-DIMM may also include a deep neural network (DNN) accelerator affixed to the module card, and a bridge configured to transfer information between the DNN accelerator and the plurality of parallel edge contacts via a DIMM external interface.

Storage system with a memory blade that generates a computational result for a storage device

One embodiment is a storage system having one or more compute blades to generate and use data and one or more memory blades to generate a computational result. The computational result is generated by a computational function that transforms the data generated and used by the one or more compute blades. One or more storage devices are in communication with and remotely located from the one or more compute blades. The one or more storage devices store and serve the data for the one or more compute blades.

Mainboard and server

A mainboard and a server are provided. The mainboard includes: a board body, a preset number of Purley platform central processors, and one or more memories. The preset number of Purley platform central processors and the one or more memories are installed on the board body. The Purley platform central processors are sequentially connected with each other, and each of the memories is connected to one of the Purley platform central processors. Each of the memories is configured to receive to-be-burned data inputted from outside and transmit the to-be-burned data to the Purley platform central processor connected with the memory. Each of the Purley platform central processors is configured to burn the to-be-burned data when receiving it from the connected memory, so as to have a function corresponding to the to-be-burned data.
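The one-memory-per-processor burn flow can be sketched minimally. This is an illustration of the claimed data path only; the class names and payload strings are hypothetical, and storing the data in an attribute stands in for the actual burn operation.

```python
# Minimal sketch of the claimed burn flow; all names are illustrative.
class CPU:
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.burned = None

    def burn(self, data):
        # Stand-in for burning the received data into the processor.
        self.burned = data

class Memory:
    def __init__(self, cpu):
        self.cpu = cpu            # each memory is connected to one CPU

    def receive(self, data):
        # Receive to-be-burned data from outside and transmit it to the
        # connected processor, which burns it.
        self.cpu.burn(data)

cpus = [CPU(i) for i in range(2)]          # a "preset number" of processors
memories = [Memory(c) for c in cpus]       # one memory per processor
for mem, payload in zip(memories, ["fw-a", "fw-b"]):
    mem.receive(payload)
```

Each processor ends up holding exactly the data that was delivered to its own memory, which is the point of the per-processor pairing in the claim.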

Convolutional neural networks on hardware accelerators

A hardware acceleration component is provided for implementing a convolutional neural network. The hardware acceleration component includes an array of N rows and M columns of functional units, an array of N input data buffers configured to store input data, and an array of M weights data buffers configured to store weights data. Each of the N input data buffers is coupled to a corresponding one of the N rows of functional units. Each of the M weights data buffers is coupled to a corresponding one of the M columns of functional units. Each functional unit in a row is configured to receive the same set of input data from the input data buffer coupled to that row. Each functional unit in a column is configured to receive the same set of weights data from the weights data buffer coupled to that column. Each of the functional units is configured to perform a convolution of the received input data and the received weights data, and the M columns of functional units are configured to provide M planes of output data.
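The dataflow of the array can be emulated in a few lines: row buffers broadcast input along rows, column buffers broadcast weights down columns, and each column's results form one output plane. The sizes, the 1-D "valid" convolution, and the buffer contents below are illustrative choices, not details from the patent.

```python
# Toy emulation of the N-row x M-column functional-unit array.
# All sizes and data are illustrative; cross-correlation form for simplicity.
N, M, L, K = 3, 4, 8, 3                    # rows, columns, input len, kernel len

input_bufs = [[float(n + i) for i in range(L)] for n in range(N)]   # per-row buffers
weight_bufs = [[float(m - j) for j in range(K)] for m in range(M)]  # per-column buffers

def functional_unit(x, w):
    # One functional unit: 1-D "valid" convolution of its input and weights.
    return [sum(x[i + j] * w[j] for j in range(K)) for i in range(L - K + 1)]

# Every unit in row n receives input_bufs[n]; every unit in column m receives
# weight_bufs[m]. Collecting a column's N results yields one output plane,
# so the M columns provide M planes of output data.
planes = [[functional_unit(input_bufs[n], weight_bufs[m]) for n in range(N)]
          for m in range(M)]
```

The nesting mirrors the claim: `planes[m][n]` is the output of the functional unit at row `n`, column `m`, and the outer index runs over the M output planes.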