Patent classifications
H04L49/3036
Simplified low profile module with light guide for pendant, surface mount, wall mount and stand alone luminaires
A luminaire having a waveguide suspended beneath a mounting element. The waveguide has a first surface proximal to the mounting element, a second surface distal to the mounting element, and an edge between the first and the second surfaces. At least one cavity extends into the waveguide from the first surface to the second surface. An LED component is coupled to the waveguide so as to emit light into the cavity. LED support structures are also disclosed.
Chip module, communication system, and port allocation method
A chip module has a plurality of first ports; some or all of the first ports are first selection ports, and each first selection port may act as a write port or a read port. The chip module further includes a first control module. The first control module controls, based on a transmit/receive requirement of the chip module, the first selection port to be switched to a read port or a write port, to match the transmit/receive requirement of the chip module. Because the first selection port may selectively act as a read port or a write port, switching can be performed based on an operating state of the chip module, increasing the read/write bandwidth. The first control module controls an operating state of the first selection port, to flexibly adjust the quantity of read ports and the quantity of write ports of the chip module.
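The mode-switching behaviour this abstract describes can be sketched in Python. All names below (PortMode, FirstControlModule, rebalance) are illustrative, not taken from the patent, and the assignment policy is one possible choice.

```python
from enum import Enum

class PortMode(Enum):
    READ = "read"
    WRITE = "write"

class FirstControlModule:
    """Illustrative controller that switches selection ports between
    read and write modes to match a transmit/receive requirement."""

    def __init__(self, num_selection_ports):
        # All selection ports start out as read ports.
        self.ports = [PortMode.READ] * num_selection_ports

    def rebalance(self, reads_needed, writes_needed):
        """Switch ports so the mode counts match the requirement."""
        if reads_needed + writes_needed > len(self.ports):
            raise ValueError("requirement exceeds available selection ports")
        # Assign the first `reads_needed` ports to read mode and the
        # next `writes_needed` ports to write mode.
        for i in range(len(self.ports)):
            if i < reads_needed:
                self.ports[i] = PortMode.READ
            elif i < reads_needed + writes_needed:
                self.ports[i] = PortMode.WRITE

    def count(self, mode):
        return sum(1 for p in self.ports if p is mode)
```

A module that is mostly transmitting would call `rebalance` with a high write count, then rebalance again when its operating state changes.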
TECHNOLOGIES FOR USING A HARDWARE QUEUE MANAGER AS A VIRTUAL GUEST TO HOST NETWORKING INTERFACE
Technologies for using a hardware queue manager as a virtual guest to host networking interface include a compute node configured to receive a pointer corresponding to each of one or more available receive buffers from a guest processor core of at least one processor of the compute node that has been allocated to a virtual guest managed by the compute node. The compute node is further configured to enqueue the received pointer of each of the one or more available receive buffers into an available buffer queue and facilitate access to the available receive buffers to at least a portion of a plurality of virtual switch processor cores. Each of the virtual switch processor cores comprises another processor core of the plurality of processor cores that has been allocated to a virtual switch of the compute node. Other embodiments are described herein.
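The available-buffer-queue flow in this abstract can be modelled as a small producer/consumer sketch: the guest posts pointers to free receive buffers, and virtual-switch cores claim them when delivering packets. The class and method names are hypothetical, not from the patent.

```python
from collections import deque

class HardwareQueueManagerSketch:
    """Toy model of an available buffer queue: a guest core enqueues
    pointers to free receive buffers, and virtual-switch cores pop
    a pointer whenever they have a packet to deliver to the guest."""

    def __init__(self):
        self.available = deque()   # pointers to free guest receive buffers
        self.delivered = []        # (pointer, packet) pairs handed to the guest

    def post_buffer(self, pointer):
        # Guest side: make a free receive buffer available.
        self.available.append(pointer)

    def deliver(self, packet):
        # Virtual-switch side: claim a free buffer and fill it.
        if not self.available:
            return None            # no free buffer to receive into
        pointer = self.available.popleft()
        self.delivered.append((pointer, packet))
        return pointer
```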
RDMA Data Transmission System, RDMA Data Transmission Method, and Network Device
A remote direct memory access (RDMA) data transmission system includes a first network device in a first host and a second network device in a second host. The first network device may create a shared send queue (SSQ) used by a plurality of processes run by the first host, obtain an RDMA data transmission message of a first process from the SSQ, and encapsulate a first identifier corresponding to the first process into a first packet in which the RDMA data transmission message is encapsulated. The second network device is configured to encapsulate the first identifier into a second packet in which a feedback message is encapsulated.
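The identifier-encapsulation step can be sketched as follows: several processes post messages to one shared send queue, and the sending device tags each outgoing packet with the originating process's identifier so the receiver can mirror it into its feedback. This is a minimal sketch under assumed names (SharedSendQueue, make_feedback), not the patent's wire format.

```python
from collections import deque

class SharedSendQueue:
    """Toy shared send queue (SSQ) used by multiple processes."""

    def __init__(self):
        self.queue = deque()

    def post(self, process_id, message):
        # A process enqueues an RDMA data transmission message.
        self.queue.append((process_id, message))

    def next_packet(self):
        # The network device encapsulates the process identifier into
        # the packet together with the message.
        process_id, message = self.queue.popleft()
        return {"id": process_id, "payload": message}

def make_feedback(packet):
    # Receiver side: echo the identifier into the feedback packet so
    # the first host can match it to the right process.
    return {"id": packet["id"], "payload": "ack"}
```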
Buffer Optimization in Modular Switches
In a packet network of ingress nodes and egress nodes connected by a fabric, transmit queues are associated with a hash table that stores packet descriptors. When new packets are received in the ingress nodes, credits are obtained from the egress nodes that reflect the capacities of the transmit queues to accommodate the new packets. The credits are consumed by transmitting at least a portion of the new packets from the ingress nodes to the egress nodes via the fabric and storing descriptors of the new packets in the hash table. To transmit the packets in order by sequence number, when a desired packet sequence number is found by a hash lookup, the packet having that sequence number is forwarded through the egress nodes.
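The in-order-delivery part of this mechanism, holding out-of-order arrivals in a hash table keyed by sequence number and releasing them only when the next expected number is present, can be sketched in a few lines. The sketch models only the hash lookup and release loop, not the credit exchange; the names are illustrative.

```python
class ReorderBuffer:
    """Toy in-order delivery via a hash table of packet descriptors
    keyed by sequence number."""

    def __init__(self):
        self.table = {}       # sequence number -> packet descriptor
        self.next_seq = 0     # next sequence number to forward
        self.released = []    # packets forwarded, in order

    def receive(self, seq, packet):
        self.table[seq] = packet
        # Forward the contiguous run now available at the head.
        while self.next_seq in self.table:
            self.released.append(self.table.pop(self.next_seq))
            self.next_seq += 1
```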
DISTRIBUTED FUNCTION-SPECIFIC BUFFER ARRANGEMENT IN A COMMUNICATION LAYER
An apparatus, system, and method are provided for a distributed function-specific buffer arrangement in a communication layer. A first module included in the layered communication architecture reserves a buffer in association with communicating a PDU. The buffer is included in a set of buffers allocated in association with the first module. The first module provides the PDU to a second module included in the layered communication architecture. Providing the PDU to the second module includes storing the PDU to the buffer and transferring ownership of the buffer to the second module. The second module accesses the PDU from a memory space associated with the buffer and releases the ownership of the buffer based on successfully accessing the PDU.
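The reserve/hand-off/release cycle this abstract describes can be sketched as a small buffer pool where a PDU moves between modules by transferring ownership rather than copying. The class and module names here are assumptions for illustration.

```python
class Buffer:
    def __init__(self, buffer_id):
        self.buffer_id = buffer_id
        self.owner = None
        self.data = None

class BufferPool:
    """Toy function-specific buffer pool: the producing module
    reserves a buffer, stores the PDU, transfers ownership to the
    consuming module, which releases the buffer after reading it."""

    def __init__(self, size):
        self.free = [Buffer(i) for i in range(size)]

    def reserve(self, module):
        buf = self.free.pop()
        buf.owner = module
        return buf

    def hand_off(self, buf, pdu, new_owner):
        buf.data = pdu           # store the PDU in the buffer's memory space
        buf.owner = new_owner    # transfer ownership; no copy is made
        return buf

    def release(self, buf, module):
        assert buf.owner == module, "only the current owner may release"
        buf.owner, buf.data = None, None
        self.free.append(buf)
```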
System and a method of analysing a plurality of data packets
A system and a method for analyzing a plurality of data packets, where the data packets are analyzed to determine which of a number of subsequent process(es) is/are to further analyze the data packets. Information identifying the subsequent process(es) is added to a FIFO. When a data packet is of an unknown type and not immediately recognizable, a storage location is reserved in the FIFO, and the data packet is fed to a separate characterizing process that derives the information relating to the relevant process(es); this information is subsequently fed to the reserved storage location in the FIFO, so that the order of data packets represented in the FIFO is the order of receipt of the data packets. From the FIFO, information is fed to a work list or storage of the relevant subsequent processes, which process the pertaining data packets. This processing may also be in the chronological order of receipt of the data packets.
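The reserved-slot mechanism, holding a place in the FIFO for a packet whose handling process is not yet known and filling it in later, can be sketched as follows; the names and the slot-id scheme are illustrative, not from the patent.

```python
class OrderPreservingFifo:
    """Toy FIFO that reserves a slot for a not-yet-characterized
    packet and dispatches entries only in order of receipt."""

    PENDING = object()  # placeholder for an unresolved slot

    def __init__(self):
        self.entries = {}   # slot id -> process name (or PENDING)
        self.head = 0       # next slot id to dispatch
        self.tail = 0       # next slot id to assign

    def _append(self, value):
        slot = self.tail
        self.entries[slot] = value
        self.tail += 1
        return slot

    def push_known(self, process):
        return self._append(process)

    def reserve(self):
        # Unknown packet type: hold its place in receipt order.
        return self._append(self.PENDING)

    def fill(self, slot, process):
        # The characterizing process reports back into the slot.
        self.entries[slot] = process

    def dispatch(self):
        # Release only the contiguous resolved run at the head, so
        # dispatch order is always receipt order.
        out = []
        while self.head in self.entries and self.entries[self.head] is not self.PENDING:
            out.append(self.entries.pop(self.head))
            self.head += 1
        return out
```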
ETHERNET INTERFACE MODULE
An Ethernet interface module comprises a first full duplex port, a second full duplex port, a first path coupling the first full duplex port and the second full duplex port, a second path coupling the second full duplex port and the first full duplex port, a first queue disposed in the first path, a second queue disposed in the second path, a third path comprising at least a portion of the first queue coupling the receive and transmit portions of the first port, a fourth path comprising at least a portion of the second queue coupling the receive and transmit portions of the second port, and execution apparatus operable, responsive to a command, to alter the state of said Ethernet interface module, to alter the contents of a received frame to produce a return frame comprising modified fields of the received frame, or both.
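The return-frame step, producing a reply from a received frame with some fields modified, can be sketched minimally; the frame representation and the address swap are assumptions for illustration, not the patent's frame handling.

```python
def make_return_frame(frame, modifications):
    """Toy return-frame builder: copy the received frame, swap the
    address fields so the frame heads back to the sender, and apply
    the commanded field modifications."""
    ret = dict(frame)
    ret["dst"], ret["src"] = frame["src"], frame["dst"]
    ret.update(modifications)   # fields altered per the command
    return ret
```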
Optical switching
A network node comprises an optical input, an optical output, a random-access queue, and a processing system. It receives a data packet at the optical input and determines whether to process it as a guaranteed-service packet or as a statistically-multiplexed packet. A guaranteed-service packet is output within a predetermined maximum time of receipt, optionally within a data container comprising container control information. A statistically-multiplexed packet is queued. The node determines a set of statistically-multiplexed packets that would fit a gap between two guaranteed-service packets, selects one of the packets, and outputs it between the two guaranteed-service packets.
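The gap-filling selection can be sketched as follows. Best-fit (largest packet that fits) is one possible selection policy; the abstract only says one fitting packet is selected, and the names here are illustrative.

```python
def fill_gap(gap_duration, queued_packets):
    """Toy gap filler: from the queued statistically-multiplexed
    packets, pick the largest one that fits the gap between two
    guaranteed-service packets, removing it from the queue."""
    fitting = [p for p in queued_packets if p["duration"] <= gap_duration]
    if not fitting:
        return None                 # nothing fits; the gap stays empty
    chosen = max(fitting, key=lambda p: p["duration"])
    queued_packets.remove(chosen)
    return chosen
```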