Patent classifications
H04L49/9047
OPEN AND SAFE MONITORING SYSTEM FOR AUTONOMOUS DRIVING PLATFORM
In one embodiment, a system for operating an autonomous driving vehicle (ADV) includes a number of modules. These modules include at least a perception module to perceive a driving environment surrounding the ADV and a planning module to plan a path to drive the ADV to navigate the driving environment. The system further includes a bus coupled to the modules and a sensor processing module communicatively coupled to the modules over the bus. The sensor processing module includes a bus interface coupled to the bus, a sensor interface to be coupled to a first set of one or more sensors mounted on the ADV, a message queue to store messages published by the sensors, and a message handler to manage the messages stored in the message queue. The messages may be subscribed to by at least one of the modules to allow the modules to monitor operations of the sensors.
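The publish/subscribe relationship between sensors and monitoring modules can be sketched as follows; this is a minimal illustration, and all class and field names are assumptions rather than terms from the patent:

```python
from collections import deque

class SensorMessageQueue:
    """Minimal sketch of the sensor processing module's message queue:
    sensors publish readings; modules such as planning or perception
    subscribe so they can monitor sensor operation."""

    def __init__(self, capacity=64):
        self.queue = deque(maxlen=capacity)  # message handler drops oldest when full
        self.subscribers = []                # callbacks of subscribing modules

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, sensor_id, payload):
        msg = {"sensor": sensor_id, "data": payload}
        self.queue.append(msg)
        for callback in self.subscribers:    # deliver to every subscribed module
            callback(msg)

# a planning module subscribing to monitor a front lidar
received = []
mq = SensorMessageQueue()
mq.subscribe(received.append)
mq.publish("lidar_front", [0.5, 1.2])
```

A real implementation would carry the messages over the bus interface; here delivery is a direct callback for brevity.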
DILATED CONVOLUTION USING SYSTOLIC ARRAY
In one example, a non-transitory computer readable medium stores instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to: load a first weight data element of an array of weight data elements from a memory into a systolic array; select a subset of input data elements from the memory into the systolic array to perform first computations of a dilated convolution operation, the subset being selected based on a rate of the dilated convolution operation and coordinates of the first weight data element within the array of weight data elements; and control the systolic array to perform the first computations based on the first weight data element and the subset to generate first output data elements of an output data array. An example of a compiler that generates the instructions is also provided.
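The core idea, selecting a per-weight subset of inputs from the dilation rate and the weight's coordinate, can be shown with a 1-D sketch. This is an illustrative reference model, not the patented hardware instruction set:

```python
def select_inputs_for_weight(inputs, weight_index, rate, num_outputs, stride=1):
    """For one weight element, pick the input elements that a 1-D dilated
    convolution multiplies by that weight. 'rate' is the dilation rate;
    'weight_index' is the weight's coordinate within the kernel."""
    offset = weight_index * rate  # dilation spreads the kernel taps apart
    return [inputs[o * stride + offset] for o in range(num_outputs)]

def dilated_conv1d(inputs, weights, rate):
    """Accumulate one weight's partial products per pass, mirroring how a
    systolic array would reuse a single loaded weight across outputs."""
    num_outputs = len(inputs) - (len(weights) - 1) * rate
    outputs = [0] * num_outputs
    for k, w in enumerate(weights):
        subset = select_inputs_for_weight(inputs, k, rate, num_outputs)
        for o, x in enumerate(subset):  # first computations for this weight
            outputs[o] += w * x
    return outputs

# kernel [1, 1] with rate 2 sums input elements two positions apart
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1], rate=2))  # → [4, 6, 8]
```

With `rate=1` the same code reduces to an ordinary convolution, which is a useful sanity check on the subset-selection rule.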
DATA PACKET PROCESSING METHOD AND APPARATUS, AND DEVICE
Embodiments of the present invention disclose a data packet processing method and apparatus, and a device. The method includes: when a first data packet is received, determining a first cache queue that is in a first buffer and that is used to store the first data packet; buffering the first data packet in a second buffer if a state of the first cache queue is an invalid state, where a data amount of the first data packet is less than a capacity of the second buffer, and the state of the first cache queue is set to the invalid state when a current data amount of the first buffer reaches a capacity of the first buffer; and when a data amount of the second buffer reaches the capacity of the second buffer, sending all data packets in the second buffer to a control plane device.
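The two-buffer flow can be sketched as below. For brevity, data amounts are counted in packets rather than bytes, and all names are illustrative assumptions:

```python
class PacketAggregator:
    """Sketch of the two-buffer scheme: once the first buffer fills, its
    cache queue is marked invalid and packets accumulate in a second
    buffer, which is flushed to the control plane when it fills."""

    def __init__(self, first_capacity, second_capacity):
        self.first, self.first_capacity = [], first_capacity
        self.second, self.second_capacity = [], second_capacity
        self.queue_valid = True
        self.sent_to_control_plane = []

    def receive(self, packet):
        if self.queue_valid:
            self.first.append(packet)
            if len(self.first) >= self.first_capacity:
                self.queue_valid = False  # queue enters the invalid state
        else:
            self.second.append(packet)
            if len(self.second) >= self.second_capacity:
                # send all buffered packets to the control plane at once
                self.sent_to_control_plane.extend(self.second)
                self.second.clear()

agg = PacketAggregator(first_capacity=2, second_capacity=3)
for p in ["p1", "p2", "p3", "p4", "p5"]:
    agg.receive(p)
```

Batching the flush amortizes the cost of crossing to the control plane instead of forwarding each overflow packet individually.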
Multi-destination traffic handling optimizations in a network device
When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease an enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to handle multi-destination traffic more optimally, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication.
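The threshold-driven throttle and its reversal amount to a hysteresis loop, sketched below with assumed threshold values and names:

```python
class GarbageCollectionGovernor:
    """Sketch: when buffer space awaiting garbage collection exceeds a high
    threshold, hold back multicast enqueues; restore normal admission once
    the backlog shrinks past a low threshold (hysteresis avoids flapping)."""

    def __init__(self, high_threshold, low_threshold):
        self.high, self.low = high_threshold, low_threshold
        self.multicast_throttled = False

    def update_backlog(self, backlog):
        if backlog > self.high:
            self.multicast_throttled = True    # action taken
        elif backlog <= self.low:
            self.multicast_throttled = False   # action reversed

    def admit(self, is_multicast):
        return not (is_multicast and self.multicast_throttled)

gov = GarbageCollectionGovernor(high_threshold=100, low_threshold=40)
gov.update_backlog(120)
blocked = not gov.admit(is_multicast=True)     # multicast held back
unicast_ok = gov.admit(is_multicast=False)     # other classes unaffected
gov.update_backlog(30)
restored = gov.admit(is_multicast=True)        # backlog acceptable again
```

Keeping the release threshold below the trigger threshold prevents the device from oscillating between throttled and unthrottled states at the boundary.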
PRIORITY-BASED FLOW CONTROL
Some embodiments provide a method for a hardware forwarding element. The method adds a received packet to a buffer. The method determines whether adding the packet to the buffer causes the buffer to pass one of multiple flow control thresholds, each of which corresponds to a different packet priority. When adding the packet to the buffer causes the buffer to pass a particular flow control threshold corresponding to a particular priority, the method generates a flow control message for the particular priority.
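The per-priority threshold check can be sketched as a pure function; the threshold values and the dictionary layout are assumptions for illustration:

```python
def check_flow_control(buffer_fill, packet_size, thresholds):
    """Sketch of per-priority flow control: each priority has its own
    buffer threshold, and adding a packet that pushes the buffer past a
    threshold generates a flow-control message for that priority.
    'thresholds' maps priority -> fill-level threshold in bytes."""
    new_fill = buffer_fill + packet_size
    messages = [prio for prio, limit in sorted(thresholds.items())
                if buffer_fill <= limit < new_fill]  # thresholds this packet crosses
    return new_fill, messages

# priority 0 has the lowest threshold, so it pauses earliest
fill, msgs = check_flow_control(buffer_fill=900, packet_size=200,
                                thresholds={0: 1000, 1: 1500, 2: 2000})
```

A single large packet can cross several thresholds at once, in which case one message per crossed priority is generated.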
BUFFER ALLOCATION FOR PARALLEL PROCESSING OF DATA
Examples described herein relate to receiving, at a network interface, an allocation of a first group of one or more buffers to store data to be processed by a Message Passing Interface (MPI). Based on a received packet including an indicator that permits the network interface to select a buffer for the received packet, the network interface stores a portion of the received packet in a buffer of the first group. The indicator can permit the network interface to select a buffer and store the received packet irrespective of a tag and sender associated with the received packet. In some examples, based on a second received packet including an indicator that does not permit storage irrespective of a tag and source, the network interface is to store a portion of the second received packet in a buffer of a second group of one or more buffers, wherein that buffer corresponds to the tag and source associated with the second received packet.
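The indicator-driven choice between the two buffer groups can be sketched as below; the field names and pool layout are illustrative assumptions, not the MPI wire format:

```python
def select_buffer(packet, anytag_pool, matched_pool):
    """Sketch of the two buffer groups: a packet whose indicator permits
    tag/sender-independent placement takes any free buffer from the first
    group; otherwise it must land in the second-group buffer keyed by its
    (tag, source) pair."""
    if packet["any_buffer_ok"]:
        return anytag_pool.pop()  # any free first-group buffer will do
    return matched_pool[(packet["tag"], packet["source"])]  # exact match

anytag = ["bufA", "bufB"]
matched = {(7, "rank3"): "bufC"}
b1 = select_buffer({"any_buffer_ok": True, "tag": 1, "source": "rank0"},
                   anytag, matched)
b2 = select_buffer({"any_buffer_ok": False, "tag": 7, "source": "rank3"},
                   anytag, matched)
```

Letting the network interface pick any free buffer avoids a per-packet (tag, source) lookup on the fast path, at the cost of the host later matching buffers back to requests.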
PROCESSING OF ETHERNET PACKETS AT A PROGRAMMABLE INTEGRATED CIRCUIT
Methods, systems, and computer programs are presented for processing Ethernet packets at a Field Programmable Gate Array (FPGA). One programmable integrated circuit includes: an internal network on chip (iNOC) comprising rows and columns; clusters, coupled to the iNOC, comprising a network access point (NAP) and programmable logic; and an Ethernet controller coupled to the iNOC. When the controller operates in packet mode, each complete inbound Ethernet packet is sent from the controller to one of the NAPs via the iNOC, where two or more NAPs are configurable to receive the complete inbound Ethernet packets from the controller. The controller is also configurable to operate in quad segment interface (QSI) mode, where each inbound Ethernet packet is broken into segments that are sent from the controller to different NAPs via the iNOC, such that two or more NAPs receive the segments of a single inbound Ethernet packet.
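The difference between the two delivery modes can be sketched as a dispatch function. The hashing and round-robin spraying below are assumptions used only to make the modes concrete:

```python
def distribute_packet(packet, naps, mode, segment_size=4):
    """Sketch of the controller's two delivery modes over the iNoC:
    'packet' mode hands the whole Ethernet packet to a single NAP;
    'qsi' mode splits it into segments spread across NAPs round-robin
    (for brevity, assumes at most len(naps) segments per packet)."""
    if mode == "packet":
        return {naps[hash(packet) % len(naps)]: packet}
    segments = [packet[i:i + segment_size]
                for i in range(0, len(packet), segment_size)]
    return {naps[i % len(naps)]: seg for i, seg in enumerate(segments)}

qsi_out = distribute_packet(b"abcdefgh", ["nap0", "nap1"], mode="qsi")
pkt_out = distribute_packet(b"abcd", ["nap0"], mode="packet")
```

Spreading segments across NAPs lets several clusters consume one packet in parallel, while packet mode keeps each packet's processing local to one cluster.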
Buffer shortage management system
A buffer shortage management system includes a first networking device with ports included in a buffer pool. A second networking device forwards a first data flow to the first networking device through LAG port(s) in the buffer pool. The first networking device determines that a total buffer pool utilization has reached a threshold that affects a second data flow received through a non-LAG port on the first networking device. The first networking device then identifies that the first data flow and third data flow(s) are received through the LAG port(s), determines that the first data flow has a higher utilization of the buffer pool than the third data flow(s) and, in response, transmits a first buffer shortage notification to the second networking device that causes the second networking device to modify its forwarding parameters to redirect the first data flow from the first networking device to a third networking device.
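The selection of which LAG-received flow to redirect can be sketched as follows; the flow-record layout and the 90% threshold are illustrative assumptions:

```python
def pick_flow_to_redirect(flows, pool_used, pool_capacity, threshold=0.9):
    """Sketch of the shortage check: once total buffer-pool utilization
    crosses the threshold, pick the LAG-received flow using the most
    buffer space and return its name so the upstream device can be
    notified to redirect it."""
    if pool_used / pool_capacity < threshold:
        return None                                # no shortage yet
    lag_flows = [f for f in flows if f["via_lag"]]  # only LAG flows are candidates
    return max(lag_flows, key=lambda f: f["buffer_use"])["name"]

flows = [
    {"name": "flow1", "via_lag": True,  "buffer_use": 500},
    {"name": "flow2", "via_lag": False, "buffer_use": 900},  # non-LAG: protected
    {"name": "flow3", "via_lag": True,  "buffer_use": 300},
]
victim = pick_flow_to_redirect(flows, pool_used=950, pool_capacity=1000)
```

Only flows arriving through LAG ports are eligible because their upstream device has an alternate path (the third networking device) to redirect them to.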
TECHNIQUES FOR HANDLING MESSAGE QUEUES
Techniques are disclosed relating to handling queues. A server-based platform, in some embodiments, accesses queue information that includes performance attributes for a plurality of queues storing one or more messages corresponding to one or more applications. In some embodiments, the platform assigns, based on the performance attributes, a corresponding set of the plurality of queues to each of a plurality of processing nodes of the platform. In some embodiments, the assigning of a corresponding set of queues to a given one of the plurality of processing nodes causes instantiation of: a first set of one or more dequeuing threads and a second set of one or more processing threads. The dequeuing threads may be executable to dequeue one or more messages stored in the corresponding set of queues. The processing threads may be executable to perform one or more tasks specified in the dequeued one or more messages.
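A performance-attribute-driven assignment of queues to processing nodes can be sketched with a greedy load balancer. Using message rate as the attribute, and the greedy heuristic itself, are assumptions for illustration; a real node would then instantiate its dequeuing and processing threads for the assigned set:

```python
import heapq

def assign_queues(queues, num_nodes):
    """Sketch of performance-aware assignment: greedily give each queue to
    the node with the least assigned load, using the queue's message rate
    as its performance attribute."""
    heap = [(0, n, []) for n in range(num_nodes)]  # (load, node_id, queues)
    heapq.heapify(heap)
    for q in sorted(queues, key=lambda q: -q["rate"]):  # heaviest first
        load, node, assigned = heapq.heappop(heap)      # least-loaded node
        assigned.append(q["name"])
        heapq.heappush(heap, (load + q["rate"], node, assigned))
    return {node: assigned for _, node, assigned in heap}

assignment = assign_queues(
    [{"name": "qA", "rate": 10}, {"name": "qB", "rate": 6},
     {"name": "qC", "rate": 5}],
    num_nodes=2)
```

Sorting heaviest-first before the greedy placement keeps the per-node loads close to balanced, which in turn sizes each node's thread sets sensibly.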
METHOD FOR PROCESSING NETWORK PACKETS AND ELECTRONIC DEVICE THEREFOR
An electronic device includes wireless communication circuitry, a processor including a plurality of cores, and a memory. The processor receives a packet of a first session associated with a first core among the plurality of cores, identifies whether a core associated with the first session is changed to a second core different from the first core, sets pending information based on an amount of packets which are pending in a first packet queue of the first core when it is identified that the core is changed to the second core, stores data corresponding to the received packet of the first session in a pending buffer of the memory, and inserts the data corresponding to the received packet of the first session, stored in the pending buffer, into a packet queue of the second core.
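The migration flow, parking packets in a pending buffer while a session moves between cores and then draining them into the new core's queue, can be sketched as below; all names are illustrative assumptions:

```python
from collections import deque

class SessionSteering:
    """Sketch of the migration flow: packets that arrive while a session
    is moving from one core to another are parked in a pending buffer,
    then inserted into the new core's packet queue in arrival order."""

    def __init__(self):
        self.core_queues = {1: deque(), 2: deque()}
        self.pending = deque()
        self.session_core = 1
        self.migrating = False

    def receive(self, packet):
        if self.migrating:
            self.pending.append(packet)  # hold until migration completes
        else:
            self.core_queues[self.session_core].append(packet)

    def start_migration(self, new_core):
        self.migrating, self.session_core = True, new_core

    def finish_migration(self):
        while self.pending:              # drain in arrival order
            self.core_queues[self.session_core].append(self.pending.popleft())
        self.migrating = False

s = SessionSteering()
s.receive("pkt1")                 # delivered to the first core
s.start_migration(new_core=2)
s.receive("pkt2")                 # parked in the pending buffer
s.finish_migration()
s.receive("pkt3")                 # delivered directly to the second core
```

Draining the pending buffer before resuming direct delivery preserves in-order processing across the core change.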