H04L12/40123

ALL-CONNECTED BY VIRTUAL WIRES NETWORK OF DATA PROCESSING NODES
20170222945 · 2017-08-03

Embodiments of the present disclosure generally relate to a cloud computing network and a method of transferring information among processing nodes in a cloud computing network. In one embodiment, a cloud computing network is disclosed herein. The cloud computing network includes a plurality of motherboards arranged in racks. Each motherboard includes a central hub and a plurality of processing nodes coupled to the central hub. Each processing node is configured to access memory or storage space of another processing node on the same motherboard through the intermediation of the hub. This access is called a communication between a pair of processing nodes. The communication comprises a string of information transmitted between the processing nodes. The string of information has a plurality of frames, and each frame includes a plurality of time slots, wherein each time slot is allotted to a specific node pair.
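
The frame/time-slot scheme described above can be sketched as a static schedule. This is a hypothetical illustration only, not the patented implementation: the node names, the slot count, and the round-robin assignment of node pairs to slots are all assumptions.

```python
from itertools import combinations

def build_frame_schedule(nodes, slots_per_frame):
    """Assign each time slot in a frame to a specific node pair.

    Every unordered pair of nodes gets slots in round-robin order,
    so each pair has a dedicated "virtual wire" through the hub.
    Returns a list of (slot_index, (node_a, node_b)) tuples.
    """
    pairs = list(combinations(nodes, 2))  # every communicating node pair
    return [(slot, pairs[slot % len(pairs)]) for slot in range(slots_per_frame)]

# Four nodes on one motherboard yield six distinct node pairs.
schedule = build_frame_schedule(["n0", "n1", "n2", "n3"], slots_per_frame=6)
for slot, pair in schedule:
    print(slot, pair)
```

With four nodes there are exactly six pairs, so a six-slot frame gives every pair one slot; a longer frame would simply cycle through the pairs again.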

Method for configuring devices in a daisy chain communication configuration

A master device, daisy-chained devices, and a method for configuring the daisy-chained devices are provided. The master device generates a signal having a pre-determined base frequency and outputs the generated signal to the first device in the daisy chain communication configuration. Each daisy-chained device receives an input signal, having an input frequency, from the previous daisy-chained device. Each daisy-chained device generates an output signal having an output frequency different from, and based on, the input frequency of the received signal, and outputs the output signal to the following daisy-chained device. Each daisy-chained device further determines an address of a communication interface, for exchanging data with the master device, based on the input frequency of the received signal. For example, the output frequency of the output signal is half the input frequency of the received signal.
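
Using the frequency-halving example from the abstract, the address derivation can be sketched as follows. The specific rule (address = number of halvings from the base frequency) and the example frequencies are assumptions for illustration; the abstract only states that the address is based on the input frequency.

```python
import math

def derive_address(base_freq_hz, input_freq_hz):
    """Hypothetical rule: a device's address is the number of times the
    master's base frequency was halved before reaching it."""
    return round(math.log2(base_freq_hz / input_freq_hz))

def simulate_chain(base_freq_hz, num_devices):
    """Propagate the clock down the chain; each device halves its input
    and records the address it derived from the frequency it received."""
    freq = base_freq_hz
    addresses = []
    for _ in range(num_devices):
        addresses.append(derive_address(base_freq_hz, freq))
        freq /= 2  # output to the next device is half the input frequency
    return addresses

print(simulate_chain(8_000_000, 4))  # → [0, 1, 2, 3]
```

Because each hop changes the frequency deterministically, every device can compute a unique address without any explicit enumeration messages from the master.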

Communication apparatus and communication system

A communication apparatus installed on a vehicle as a master apparatus includes: a slave port communicating with an on-vehicle control apparatus; two or more master ports paired with two or more slave apparatuses installed on the vehicle and communicating with the two or more slave apparatuses over different channels based on the Distributed System Interface (DSI) protocol; two or more buffer memories provided corresponding to the two or more master ports; and a control section that sorts and stores commands addressed to the two or more slave apparatuses from the on-vehicle control apparatus into the respective buffer memories and, upon receiving a trigger instructing transmission of the commands from the on-vehicle control apparatus, reads the commands from the two or more buffer memories and simultaneously transmits them from the respective master ports.
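
The buffer-then-trigger scheme can be sketched in a few lines. The port numbering, command strings, and the `wire` list standing in for the physical channels are illustrative assumptions, not the DSI protocol itself.

```python
class DsiMaster:
    """Sketch of the buffered-command scheme: commands are sorted into
    per-port buffers as they arrive, then one command per port is
    transmitted simultaneously when the trigger is received."""

    def __init__(self, num_ports):
        self.buffers = {port: [] for port in range(num_ports)}
        self.wire = []  # stands in for the physical channels

    def store(self, port, command):
        # The control section sorts each command into its port's buffer.
        self.buffers[port].append(command)

    def trigger(self):
        # On the trigger, read one command per non-empty buffer and
        # transmit them all in the same cycle.
        burst = {p: buf.pop(0) for p, buf in self.buffers.items() if buf}
        self.wire.append(burst)
        return burst

m = DsiMaster(num_ports=2)
m.store(0, "set_squib_arm")
m.store(1, "read_status")
print(m.trigger())
```

Buffering decouples the control apparatus's command stream from channel timing, which is what lets the master ports fire in lockstep on a single trigger.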

Streaming On Diverse Transports
20220191060 · 2022-06-16

In some examples, a transport agnostic source includes a streaming device to stream video on diverse transport topologies including isochronous and non-isochronous transports. In some examples, a transport agnostic sink includes a receiving device to receive streamed video from diverse transport topologies including isochronous and non-isochronous transports.
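
The transport-agnostic split can be sketched as an interface boundary: the source streams against an abstract transport and never inspects whether the underlying topology is isochronous. The class names and the `isochronous` flag are assumptions for illustration.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Minimal transport abstraction; `isochronous` marks transports
    with reserved, time-guaranteed bandwidth."""
    isochronous: bool

    @abstractmethod
    def send(self, frame: bytes) -> None: ...

class IsochronousTransport(Transport):
    isochronous = True
    def __init__(self):
        self.delivered = []
    def send(self, frame):
        self.delivered.append(frame)

class BestEffortTransport(Transport):
    isochronous = False
    def __init__(self):
        self.delivered = []
    def send(self, frame):
        self.delivered.append(frame)

class StreamingSource:
    """Transport-agnostic source: streams the same frames over
    whichever transport it is bound to."""
    def __init__(self, transport: Transport):
        self.transport = transport
    def stream(self, frames):
        for frame in frames:
            self.transport.send(frame)

sink_transport = IsochronousTransport()
source = StreamingSource(sink_transport)
source.stream([b"frame0", b"frame1"])
print(len(sink_transport.delivered))  # → 2
```

Swapping `IsochronousTransport` for `BestEffortTransport` changes nothing in `StreamingSource`, which is the essence of the transport-agnostic claim.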

Priority-arbitrated access to a set of one or more computational engines

The present invention discloses a method for managing priority-arbitrated access to a set of one or more computational engines of a physical computing device. The method includes providing a multiplexer module and a network bus in the physical computing device, wherein the multiplexer module is connected to the network bus. The method further includes receiving, by the multiplexer module, a first data processing request from a driver and inferring, by the multiplexer module, a first priority class from the first data processing request according to at least one property of the first data processing request. The method further includes manipulating, by the multiplexer module, a priority according to which the physical computing device handles data associated with the first data processing request in relation to data associated with other data processing requests, wherein the priority is determined by the first priority class.
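
The inference-and-arbitration step can be sketched with a priority queue. The `origin` request property and the class mapping are illustrative assumptions; the abstract only says the class is inferred from at least one property of the request.

```python
import heapq
from itertools import count

class PriorityMultiplexer:
    """Sketch: infer a priority class from a property of each incoming
    request and hand requests to the computational engine in class
    order, preserving arrival order within a class."""

    CLASS_BY_ORIGIN = {"interactive": 0, "batch": 2}  # lower = served first

    def __init__(self):
        self._queue = []
        self._seq = count()  # tie-breaker preserving arrival order

    def submit(self, request):
        # Infer the priority class from the request's 'origin' property;
        # unknown origins fall into a middle class.
        priority_class = self.CLASS_BY_ORIGIN.get(request.get("origin"), 1)
        heapq.heappush(self._queue, (priority_class, next(self._seq), request))

    def next_for_engine(self):
        return heapq.heappop(self._queue)[2]

mux = PriorityMultiplexer()
mux.submit({"origin": "batch", "payload": "train"})
mux.submit({"origin": "interactive", "payload": "infer"})
print(mux.next_for_engine()["payload"])  # → infer
```

Even though the batch request arrived first, the inferred class reorders service, which is the "manipulating a priority" step of the claim in miniature.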

VERIFICATION OF FIELDBUS NETWORK CONNECTED DEVICES IN A WIND TURBINE SUB-ASSEMBLY
20230318874 · 2023-10-05

The present invention relates to verification of connections between a computing unit, such as a test computer, and a device, e.g. a sensor or transducer, in a fieldbus network of a sub-assembly of a wind turbine. The fieldbus network comprises a plurality of network components. One of the network components comprises a program element. The computing unit is arranged for forwarding a request to the program element to verify the connection from the computing unit to a selected device, the request comprising a sequence of selected network components between the computing unit and the selected device. The program element is arranged to establish in a sequenced manner a communication connection between the selected network components and the selected device, and to provide a response as to whether or not the connection exists.
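
The program element's sequenced check can be sketched as a hop-by-hop walk over the requested component sequence. The topology map and component names are hypothetical; the abstract specifies only that the connection is established in a sequenced manner and a yes/no response is returned.

```python
def verify_connection(topology, sequence, device):
    """Walk the requested sequence of network components hop by hop and
    report whether a path to the selected device exists.

    `topology` maps each component to the components it is wired to;
    `sequence` is the ordered list of components from the computing
    unit toward the device, as named in the verification request.
    """
    path = sequence + [device]
    for hop, next_hop in zip(path, path[1:]):
        if next_hop not in topology.get(hop, ()):
            return False  # link between consecutive components is missing
    return True

# Hypothetical sub-assembly wiring: test PC -> switch -> coupler -> sensor
topology = {
    "test_pc": ["switch_1"],
    "switch_1": ["coupler_a"],
    "coupler_a": ["pitch_sensor"],
}
print(verify_connection(topology, ["test_pc", "switch_1", "coupler_a"],
                        "pitch_sensor"))  # → True
```

Because the check is sequenced, a negative response also localizes the fault: the first hop that fails identifies the missing or miswired link.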

Negotiated bridge assurance in a stacked chassis

An information handling system includes multiple data ports, a memory, and a processor. Each of the data ports enables a separate communication link of a plurality of communication links for the information handling system. The memory stores data to indicate whether the information handling system supports bridge assurance on each of the communication links. In response to the bridge assurance being supported in the information handling system, the processor provides a message across a first link of the communication links. The message indicates that bridge assurance is supported in the information handling system. The processor also determines whether an acknowledgement message has been received. In response to the acknowledgement message being received, the processor enables the bridge assurance on the first link.
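
The advertise/acknowledge handshake can be sketched per link. The message dictionaries and field names are assumptions; the abstract describes only a support message, an acknowledgement, and enabling on receipt.

```python
class SwitchLink:
    """Sketch of negotiated bridge assurance on one communication link:
    advertise support, then enable only after the peer acknowledges."""

    def __init__(self, supports_bridge_assurance):
        self.supports = supports_bridge_assurance
        self.enabled = False

    def advertise(self):
        # Send a support message only if this system supports the feature.
        return {"type": "ba_support"} if self.supports else None

    def on_ack(self, ack):
        # Enable bridge assurance on this link once the peer acknowledges.
        if self.supports and ack and ack.get("type") == "ba_ack":
            self.enabled = True

local = SwitchLink(supports_bridge_assurance=True)
msg = local.advertise()
if msg is not None:
    peer_ack = {"type": "ba_ack"}  # the peer acknowledged the advertisement
    local.on_ack(peer_ack)
print(local.enabled)  # → True
```

Gating on the acknowledgement is the point of the negotiation: a link to a peer that never answers stays in its default state instead of being blocked by a one-sided bridge assurance configuration.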

SYSTEM ON CHIP (SOC) FOR SEAT BOXES ON A TRANSPORTATION VEHICLE AND ASSOCIATED METHODS THEREOF
20230344672 · 2023-10-26

Methods and systems are provided for a transportation vehicle. One method includes detecting, by a first system on chip (“SOC”) of a seat box on a transportation vehicle, that a first seat device is operational and that usage of a second SOC of the seat box by a second seat device is below a first threshold level, the first SOC being operationally coupled to the second SOC by a peripheral link, and the seat box providing a network connection to the first seat device and the second seat device; allocating resources of the first SOC and the second SOC to the first seat device; and modifying usage of the second SOC by the first seat device in response to a change in resource usage of the second SOC.
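
The threshold check that drives the allocation can be sketched as a small policy function. The load fractions, the high/low marks, and the return format are illustrative assumptions; the claim states only that a below-threshold second SOC has its resources allocated to the first seat device.

```python
def allocate_seat_resources(first_soc_load, second_soc_load,
                            busy_mark=0.8, idle_mark=0.2):
    """Hypothetical policy mirroring the claim: when the first SOC is
    busy and the second SOC's usage by its own seat device is below the
    idle mark, the second SOC's spare capacity is also allocated to the
    first seat device over the peripheral link.

    Loads are fractions in [0, 1]; the marks are illustrative values.
    """
    if first_soc_load > busy_mark and second_soc_load < idle_mark:
        return {"soc1": "seat_device_1", "soc2": "seat_device_1 (borrowed)"}
    return {"soc1": "seat_device_1", "soc2": "seat_device_2"}

print(allocate_seat_resources(first_soc_load=0.9, second_soc_load=0.1))
```

Re-running the policy as loads change covers the final claim step: when the second seat device's usage rises again, the borrowed capacity reverts to it.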