Patent classifications
H04L49/9042
METHOD AND DEVICE FOR FORWARDING DATA MESSAGES
The present application discloses a method and device for forwarding a data message. A specific embodiment of the method comprises: receiving the data message and reading a data context length value of a first row in the data message; determining whether the data context length value is less than or equal to a maximum segment size in a single transmission according to a transmission control protocol; reading data from the data message in segments in response to the data context length value being less than or equal to the maximum segment size in the single transmission according to the transmission control protocol; reading data from the data message in rows in response to the data context length value being greater than the maximum segment size in the single transmission according to the transmission control protocol; and storing the read data in a user buffer, and sending the data in the user buffer to a terminal if the data in the user buffer exceeds a preset capacity threshold. According to this embodiment, data messages can be forwarded quickly and efficiently.
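The decision and buffering flow described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the MSS value, the flush threshold, and the representation of a message as a list of byte rows are all assumptions.

```python
# Hypothetical sketch of the forwarding decision; names and values are illustrative.
MSS = 1460              # assumed maximum segment size for a single TCP transmission
FLUSH_THRESHOLD = 4096  # assumed preset capacity threshold for the user buffer

def forward(message, send):
    """Read a data message segment-wise or row-wise and flush a user buffer.

    message: list of byte rows; send: callable that forwards bytes to the terminal.
    """
    user_buffer = bytearray()
    first_row_len = len(message[0])
    if first_row_len <= MSS:
        # First-row length fits in one segment: read the message in MSS-sized segments.
        flat = b"".join(message)
        chunks = [flat[i:i + MSS] for i in range(0, len(flat), MSS)]
    else:
        # First-row length exceeds the MSS: read the message row by row.
        chunks = message
    for chunk in chunks:
        user_buffer.extend(chunk)
        if len(user_buffer) > FLUSH_THRESHOLD:
            send(bytes(user_buffer))   # forward buffered data to the terminal
            user_buffer.clear()
    if user_buffer:
        send(bytes(user_buffer))       # flush whatever remains at the end
```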
Data processing device and data processing system
A data processing device includes a first CPU (Central Processing Unit), a first memory, a CAN (Controller Area Network) controller and a system bus coupled to the first CPU, the first memory and the CAN controller, wherein the CAN controller comprises a receive buffer that stores a plurality of messages each of which has a different ID, and a DMA (Direct Memory Access) controller that selects the latest message among messages having a first ID stored in the receive buffer and transfers the selected latest message to the first memory, wherein the message is one of CAN, CAN FD and CAN XL messages.
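The per-ID latest-message selection can be modeled in a few lines. This is an illustrative sketch only; representing the receive buffer as an arrival-ordered list and the first memory as a dictionary are assumptions, not the patent's data structures.

```python
# Illustrative model of a DMA controller that transfers only the most recent
# message per CAN ID from the receive buffer to the first memory.
def transfer_latest(receive_buffer, first_memory):
    """receive_buffer: list of (can_id, payload) tuples in arrival order.

    For each ID, only the latest message survives in first_memory (a dict),
    because later arrivals with the same ID overwrite earlier ones.
    """
    for can_id, payload in receive_buffer:
        first_memory[can_id] = payload
```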
SWITCH AND DATA ACCESSING METHOD THEREOF
A switch for transmitting data packets between at least one source node and at least one target node is provided. The switch includes a storage unit, a control unit, at least one receiving port and at least one transmitting port. The storage unit includes a plurality of storage blocks and configured to cache the data packets. The control unit is configured to manage the storage blocks. The switch receives and caches the data packets transmitted from the at least one source node via the receiving port and transmits the cached data packets to the at least one target node via the transmitting port. A data accessing method adapted for the switch is also provided.
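The block-managed packet cache can be sketched as below. The block size, the free-list allocation policy, and the per-packet index table are assumptions for illustration; the patent only specifies a storage unit of blocks managed by a control unit.

```python
# A minimal sketch of a block-based packet cache, assuming a fixed block size.
BLOCK_SIZE = 256

class BlockStore:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks
        self.free = list(range(num_blocks))   # control unit's free-block list
        self.packets = {}                     # packet id -> list of block indices

    def cache(self, pkt_id, data):
        """Split a received packet across free storage blocks."""
        needed = -(-len(data) // BLOCK_SIZE)  # ceiling division
        if needed > len(self.free):
            raise MemoryError("no free storage blocks")
        idxs = [self.free.pop() for _ in range(needed)]
        for n, i in enumerate(idxs):
            self.blocks[i] = data[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE]
        self.packets[pkt_id] = idxs

    def transmit(self, pkt_id):
        """Reassemble the cached packet and release its blocks."""
        idxs = self.packets.pop(pkt_id)
        data = b"".join(self.blocks[i] for i in idxs)
        for i in idxs:
            self.blocks[i] = None
            self.free.append(i)
        return data
```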
Digital Signal Processing Over Data Streams
The techniques and systems described herein are directed to providing deep integration of digital signal processing (DSP) operations with a general-purpose query processor. The techniques and systems provide a unified query language for processing tempo-relational and signal data, provide mechanisms for defining DSP operators, and support incremental computation in both offline and online analysis. The techniques and systems include receiving streaming data, aggregating and performing uniformity processing to generate a uniform signal, and storing the uniform signal in a batched columnar representation. Data can be copied from the batched columnar representation to a circular buffer, where DSP operations are applied to the data. Incremental processing can avoid redundant processing. Improvements to the functioning of a computer are provided by reducing the amount of data that needs to be passed back and forth between separate query databases and DSP processors, and by reducing the latency of processing and/or memory usage.
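The copy-then-process step over a circular buffer can be illustrated with a simple windowed DSP operation. The choice of a moving average, the window size, and the use of a plain list as the "batched columnar representation" are assumptions for illustration only.

```python
from collections import deque

def windowed_average(column, window):
    """Copy samples from a columnar batch into a circular buffer and emit a
    moving average each time the buffer holds a full window of samples."""
    ring = deque(maxlen=window)   # circular buffer of the last `window` samples
    out = []
    for sample in column:
        ring.append(sample)       # overwrites the oldest sample when full
        if len(ring) == window:
            out.append(sum(ring) / window)
    return out
```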
METHOD FOR ACCESSING SYSTEM MEMORY AND ASSOCIATED PROCESSING CIRCUIT WITHIN A NETWORK CARD
The present invention provides a method for accessing a system memory, wherein the method includes the steps of: reading a descriptor from the system memory, where the descriptor includes a buffer start address field and a buffer size field, wherein the buffer start address field includes a start address of a buffer in the system memory, and the buffer size field indicates a size of the buffer; receiving multiple packets, and writing the multiple packets into the buffer; modifying the descriptor according to the multiple packets stored in the buffer to generate a modified descriptor, wherein the modified descriptor only comprises information of part of the multiple packets or does not comprise information of any one of the multiple packets; and writing the modified descriptor into the system memory.
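The descriptor round-trip can be sketched as follows. The field layout and the per-packet-length bookkeeping are assumptions; the patent only names a buffer start address field, a buffer size field, and a modified descriptor that omits information for some or all packets.

```python
# Illustrative descriptor handling for the read / write / modify / write-back steps.
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    buffer_start: int            # start address of the buffer in system memory
    buffer_size: int             # size of the buffer
    packet_info: list = field(default_factory=list)

def receive_packets(desc, memory, packets, report_all=False):
    """Write packets into the descriptor's buffer, then return a modified
    descriptor carrying information for only part of the packets (or none)."""
    offset = desc.buffer_start
    for pkt in packets:
        memory[offset:offset + len(pkt)] = pkt   # write packet into the buffer
        offset += len(pkt)
    # Omitting per-packet info reduces how much descriptor state must be
    # written back to system memory.
    reported = [len(p) for p in packets] if report_all else []
    return Descriptor(desc.buffer_start, desc.buffer_size, reported)
```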
Decoupled packet and data processing rates in switch devices
Continuing to integrate more aggregate bandwidth and higher radix into switch devices is an economic imperative, because it creates value for both the supplier and the customer in large data center environments, which are an increasingly important part of the marketplace. While new silicon processes continue to shrink transistor and other chip feature dimensions, process technology cannot be relied upon as a key driver of power reduction. The transition from 28 nm to 16 nm is a special case in which FinFET provides additional power scaling, but subsequent FinFET nodes are not expected to deliver power reductions substantial enough to meet the desired increases in integration. The disclosed switch architecture attacks the power consumption problem by controlling the rate at which power-consuming activities occur.
System and method for exchanging information among exchange applications
In a system and method for accessing messages in a data store in a gateway, a data frame request, which is a structured SQL query, is received at the gateway. The received data frame request is applied to the gateway data store, which stores messages. A data frame is generated that comprises messages from the data store that are responsive to the received data frame request, the data frame having a format that is readable by a character editor.
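The gateway exchange above can be sketched with SQLite standing in for the gateway data store. The schema and the tab-separated layout of the data frame are assumptions; the patent only requires that the request be a structured SQL query and that the frame be readable by a character editor.

```python
import sqlite3

def handle_data_frame_request(conn, sql_query):
    """Apply a structured SQL query to the gateway store and return a
    character-editor-readable data frame: one tab-separated line per message."""
    rows = conn.execute(sql_query).fetchall()
    return "\n".join("\t".join(str(v) for v in row) for row in rows)
```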
Caching of service decisions
Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classification operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations.
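The cache-first flow can be sketched as below. The header tuple used as the cache key, the identifier scheme, and the stand-in classifier are all illustrative assumptions, not the patent's data structures.

```python
import itertools

_ids = itertools.count(1)
flow_cache = {}   # header tuple -> assigned service operation identifier

def classify(headers):
    """Stand-in for the full series of packet classification operations (assumed)."""
    return next(_ids)

def process_packet(headers):
    """Return (service operation identifier, cache_hit), consulting the cache
    first so subsequent packets skip the classification operations entirely."""
    if headers in flow_cache:
        return flow_cache[headers], True       # cache hit: no classification
    service_op_id = classify(headers)          # full classification, once per flow
    flow_cache[headers] = service_op_id        # store cache entry with the id
    return service_op_id, False
```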
Method for storing and retrieving packets in high bandwidth and low latency packet processing devices
A packet processor includes a header processor and a packet memory. A receive direct memory access block is configured to receive a packet with a header and a payload and to route the header to the header processor and to route the payload to the packet memory such that the header processor begins processing of the header while the payload is loaded into packet memory.
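The receive-DMA split can be sketched in a few lines: the header is routed to the header processor while the payload streams into packet memory, so header processing can begin before the payload finishes loading. The fixed header length and the callable/list interfaces are illustrative assumptions.

```python
HEADER_LEN = 14  # assumed fixed header length (e.g. an Ethernet-style header)

def rx_dma(packet, header_processor, packet_memory):
    """Split a raw packet and route each part to its destination."""
    header, payload = packet[:HEADER_LEN], packet[HEADER_LEN:]
    header_processor(header)        # header processing starts immediately
    packet_memory.append(payload)   # payload loaded into packet memory
    return header, payload
```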
Apparatus and method for use in a spacewire-based network
An apparatus for use in a SpaceWire-based network is configured to send and receive data packets, and process data included in a received data packet. A header of the received data packet is stored in a buffer while the data is being processed, a processed data packet including the stored header and the processed data is generated, and the processed data packet is transmitted. The header of the received data packet may be modified, and the modified header attached to the processed data to generate the processed data packet. When the data packet is received via a first port, the processed data packet may be transmitted via the first port, or may be transmitted via a second port.
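The header-buffering flow can be sketched as follows. The one-byte header, the flag-bit modification, and the processing callable are assumptions for illustration; the patent specifies only that the header is buffered while the data is processed and may be modified before being attached to the processed data.

```python
def handle_packet(packet, process):
    """Buffer the header while the data is processed, modify the header,
    then attach it to the processed data to form the outgoing packet."""
    header, data = packet[:1], packet[1:]   # assume a 1-byte SpaceWire-style header
    buffered = bytearray(header)            # header held in a buffer
    processed = process(data)               # data processed meanwhile
    buffered[0] |= 0x80                     # assumed modification: set a flag bit
    return bytes(buffered) + processed      # processed data packet to transmit
```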