G06F13/40

Apparatus and methods for in data path compute operations
11550742 · 2023-01-10

The present disclosure includes apparatuses and methods for in data path compute operations. An example apparatus includes an array of memory cells. Sensing circuitry is selectably coupled to the array. A plurality of shared input/output (I/O) lines provides a data path. The plurality of shared I/O lines selectably couples a first subrow of a row of the array via the sensing circuitry to a first compute component in the data path to move a first data value from the first subrow to the first compute component and a second subrow of the respective row via the sensing circuitry to a second compute component to move a second data value from the second subrow to the second compute component. An operation is performed on the first data value from the first subrow using the first compute component substantially simultaneously with movement of the second data value from the second subrow to the second compute component.
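As a rough software analogy of the overlap described above, the sketch below interleaves compute on one subrow's data value with "movement" of the next subrow's value; in the actual apparatus these steps occur substantially simultaneously in hardware, whereas this sequential sketch only illustrates the pipelining idea. All function names are illustrative, not from the patent.

```python
# Sequential analogy of overlapping compute with data movement between
# subrows and compute components. `move` stands in for transfer over the
# shared I/O lines, `compute` for the in-data-path operation.

def pipelined(subrows, move, compute):
    """Compute on subrow k's value while subrow k+1's value is moved.

    `subrows` must be non-empty; returns the per-subrow compute results.
    """
    results = []
    current = move(subrows[0])             # move the first data value
    for nxt in subrows[1:]:
        moved = move(nxt)                  # movement of the next value...
        results.append(compute(current))   # ...overlaps this compute step
        current = moved
    results.append(compute(current))       # finish the last subrow
    return results
```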

Input/output terminal and electronic device comprising same

Disclosed are an input/output terminal for connecting an electronic device to an external device, and an electronic device comprising the input/output terminal. The input/output terminal comprises: a signal pin allowing signals to be transmitted/received between the electronic device and the external device; a ground pin connected to a ground part of the electronic device; and a resistant material disposed at the end part of the signal pin or the ground pin. Various other embodiments are also possible.

Secondary processor device ownership assignment system
11593120 · 2023-02-28

A secondary processor device ownership assignment system includes a chassis that houses devices, a secondary processing system, a central processing system that includes an integrated switch device coupled to each of the devices and to the secondary processing system, and a device ownership subsystem coupled to the central processing system. The device ownership subsystem accesses device information for a subset of the devices that will be owned by the secondary processing system, and configures the device information for that subset such that the subset of the devices is hidden from an operating system provided by the central processing system. The secondary processing system then reconfigures the device information for the subset of the devices such that those devices are accessible by the secondary processing system.
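The ownership flow above can be sketched as a small state transform over per-device records: a flag hides a device from the host OS's enumeration while marking it as owned by the secondary processor. The field names (`hidden_from_os`, `owner`) are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch of the device-ownership assignment flow: devices
# assigned to the secondary processing system are flagged so the central
# processing system's OS skips them during enumeration.

def assign_ownership(devices, secondary_ids):
    """Mark a subset of devices as owned by the secondary processor.

    `devices` maps a device ID to its device-information record; records
    for IDs in `secondary_ids` are flagged as hidden from the host OS.
    """
    for dev_id, info in devices.items():
        if dev_id in secondary_ids:
            info["hidden_from_os"] = True   # invisible to the host OS
            info["owner"] = "secondary"
        else:
            info["hidden_from_os"] = False
            info["owner"] = "central"
    return devices

def visible_to_os(devices):
    """Device IDs the central OS is allowed to enumerate."""
    return [d for d, i in devices.items() if not i["hidden_from_os"]]
```

For example, assigning `{"gpu0"}` out of `{"nic0", "gpu0", "nvme0"}` leaves only `nic0` and `nvme0` visible to the host OS.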

Methods and apparatus for fabric interface polling
11593288 · 2023-02-28

Methods and apparatus for efficient data transmit and receive operations using polling of memory queues associated with an interconnect fabric interface. In one embodiment, Non-Transparent Bridge (NTB) technology is used to transact the data transmit/receive operations, and a hardware accelerator card is used to implement a notification mechanism that optimizes receive queue polling. The accelerator card comprises a notification address configured to signal the presence of data, and a notification acknowledgement region configured to store flags associated with memory receive queues. In one implementation, the interconnect fabric is based on PCIe technology, scaling up to very large fabrics and numbers of hosts/devices for use in ultra-high-performance applications such as, for example, data centers and computing clusters.
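The notification mechanism can be sketched as follows: a single notification word tells the host that *some* queue has data, and a flag region records *which* receive queues need service, so the host checks one location instead of polling every queue. This is a minimal sketch under assumed names (`doorbell`, `flags`), not the patented implementation.

```python
# Illustrative sketch of a notification address plus per-queue
# acknowledgement flags for efficient receive-queue polling.

class NotificationRegion:
    def __init__(self, num_queues):
        self.doorbell = 0                   # notification address (one word)
        self.flags = [False] * num_queues   # per-queue acknowledgement flags

    def signal(self, queue_id):
        """Device side: mark a queue as ready and ring the doorbell."""
        self.flags[queue_id] = True
        self.doorbell = 1

    def poll(self):
        """Host side: return ready queue IDs; cheap when doorbell is clear."""
        if not self.doorbell:
            return []                       # no queue has data: one read, done
        ready = [q for q, f in enumerate(self.flags) if f]
        for q in ready:                     # acknowledge serviced queues
            self.flags[q] = False
        self.doorbell = 0
        return ready
```

The design choice this illustrates: the common idle case costs a single memory read of the doorbell, rather than a scan of all receive queues.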

Adjusting wireless docking resource usage

Adjusting wireless docking resource usage, including identifying, at a client information handling system (IHS), a configuration policy, the client IHS wirelessly connected to a docking station, the docking station providing wireless connections to respective peripheral computing components; processing, at the client IHS, the configuration policy, including identifying configuration rules of the configuration policy for performing computer-implemented actions of throttling resource utilization between the client IHS and the docking station; identifying, at the client IHS, when the client IHS is wirelessly connected to the docking station, a first presence state of a user with respect to the client IHS; and determining, at the client IHS, that the first presence state indicates that the user of the client IHS is not actively engaged with the client IHS, and in response, applying the configuration rules to perform computer-implemented actions of throttling resource utilization between the client IHS and the docking station.
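The presence-driven throttling can be sketched as a lookup from presence state to resource limits, applied only while the client IHS is docked. The rule names and throttle values below are hypothetical placeholders, not from the disclosure.

```python
# Minimal sketch of presence-based throttling of docking resources.
# Policy values are illustrative assumptions.

POLICY = {
    "present":     {"wifi_link_pct": 100, "display_refresh_hz": 60},
    "not_present": {"wifi_link_pct": 25,  "display_refresh_hz": 30},
}

def apply_policy(presence_state, docked):
    """Return the resource limits to apply for the current presence state.

    Throttling only applies while the client IHS is wirelessly docked;
    an undocked client, or an actively engaged user, keeps full resources.
    """
    if not docked or presence_state == "present":
        return POLICY["present"]
    return POLICY["not_present"]
```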

Edge component redirect for IoT analytics groups

Disclosed are various examples of providing edge component redirection for IoT analytics groups. In some embodiments, an Internet-of-Things (IoT) analytics group is identified. The IoT analytics group includes an IoT device that communicates through an interface device of a first edge computing device. A grouping interface policy is generated to specify a bus redirect from the first edge computing device to a second edge computing device. The grouping interface policy is transmitted for implementation using the edge computing devices.

Methods and systems for loosely coupled PCIe service proxy over an IP network

PCIe devices installed in host computers communicating with service nodes can provide virtualized and high availability PCIe functions to host computer workloads. The PCIe device can receive a PCIe TLP encapsulated in a PCIe DLLP via a PCIe bus. The TLP includes a TLP address value, a TLP requester identifier, and a TLP type. The PCIe device can terminate the PCIe transaction by sending a DLLP ACK message to the host computer in response to receiving the TLP. The TLP can be used to create a workload request capsule that includes a request type indicator, an address offset, and a workload request identifier. A workload request packet that includes the workload request capsule can be sent to a virtualized service endpoint. The service node, implementing the virtualized service endpoint, receives a workload response packet that includes the workload request identifier and a workload response payload.
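The TLP-to-capsule step can be sketched as rebasing the TLP address into an offset within the virtualized function's region and carrying the requester identifier forward so the eventual workload response can be matched to its request. The field names and the base-address subtraction are assumptions for illustration, not the patented format.

```python
# Sketch of building a workload request capsule from a terminated TLP.

def make_capsule(tlp, bar_base):
    """Build a workload request capsule from a received PCIe TLP.

    `tlp` is a dict with the TLP type, address value, and requester
    identifier; `bar_base` is the assumed base address of the region
    backing the virtualized function.
    """
    return {
        "req_type": tlp["type"],              # request type indicator
        "offset": tlp["address"] - bar_base,  # address offset in the region
        "req_id": tlp["requester_id"],        # workload request identifier
    }
```

A service node receiving the capsule can then service the request and echo `req_id` in the workload response packet to correlate it with the original TLP.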

Data storage system for improving data throughput and decode capabilities

Systems and methods for storing data are described. A system can comprise a controller, a plurality of physical non-volatile memory devices, and a bus comprising a plurality of input/output (I/O) lines. The controller is configured to receive data, encode the received data into a codeword, and transfer, in parallel, different portions of the codeword to different devices among the plurality of physical non-volatile memory devices.
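The encode-then-stripe flow can be sketched as splitting the codeword into per-device portions that would be transferred over the I/O lines in parallel. The even split is an assumption, and the encoder is stubbed out: a real controller would add ECC parity at that step.

```python
# Illustrative sketch of striping an encoded codeword across the
# physical non-volatile memory devices.

def encode(data: bytes) -> bytes:
    """Stand-in encoder: a real controller would add ECC parity here."""
    return data  # identity encoding for the sketch

def stripe(codeword: bytes, num_devices: int):
    """Split a codeword into per-device portions for parallel transfer."""
    chunk = -(-len(codeword) // num_devices)  # ceiling division
    return [codeword[i * chunk:(i + 1) * chunk] for i in range(num_devices)]
```

For example, an 8-byte codeword striped across 4 devices yields four 2-byte portions, one per device.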

POWER ADAPTER
20180004266 · 2018-01-04

A power adapter includes a universal serial bus (USB) interface, a control unit, and a charging circuit. The USB interface is connected to an electronic device to receive level signals from the electronic device. The control unit is electrically connected to the USB interface to output a control signal according to the level signals received by the USB interface. The charging circuit is electrically connected to the control unit and receives an external power signal, and determines whether or not to output the external power signal to the electronic device according to the control signal output by the control unit.

IMPLEMENTING COHERENT ACCELERATOR FUNCTION ISOLATION FOR VIRTUALIZATION

A method, system and computer program product are provided for implementing coherent accelerator function isolation for virtualization in an input/output (IO) adapter in a computer system. A coherent accelerator provides accelerator function units (AFUs); each AFU is adapted to operate independently of the other AFUs to perform a computing task that can be implemented within application software on a processor. The AFU has access to system memory bound to the application software and is adapted to make copies of that memory within an AFU memory-cache in the AFU. As part of this memory coherency domain, the AFU memory-cache and the processor memory-cache are each adapted to be aware of changes to data held in either cache, as well as to data changed in memory of which the respective cache contains a copy.
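The coherency-domain behavior can be sketched as two caches over one shared memory, where a write through either cache invalidates the stale copy in the other, so both stay aware of changes to data they hold. This is a toy write-through/snoop-invalidate sketch with assumed names, not the CAPI protocol itself.

```python
# Toy sketch of a shared coherency domain between a processor cache and
# an AFU cache: writes invalidate the peer's stale copies.

class CoherentCache:
    def __init__(self, memory, peers=None):
        self.memory = memory           # shared system memory (dict)
        self.lines = {}                # locally cached copies
        self.peers = peers or []       # other caches in the domain

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.memory[addr]   # fill from memory
        return self.lines[addr]

    def write(self, addr, value):
        self.memory[addr] = value      # write through to shared memory
        self.lines[addr] = value
        for peer in self.peers:        # snoop: invalidate stale copies
            peer.lines.pop(addr, None)
```

In this sketch, after the AFU cache writes an address, the processor cache's next read re-fetches the updated value instead of returning its stale copy.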