Patent classifications
H04L12/861
METHOD OF COMMUNICATING DATA PACKETS WITHIN DATA COMMUNICATION SYSTEMS
A data communication system is provided. The data communication system includes at least one transmitter that is operable to communicate data packets via a data communication network and/or a data carrier to at least one receiver. The at least one transmitter is operable to include within at least one of the data packets a plurality of mutually different types of data having mutually different priorities. Optionally, the data communication system is operable to communicate to the at least one receiver information that is indicative of the one or more priorities of the plurality of mutually different types of data. Optionally, the data communication system is operable to communicate the information that is indicative of the mutually different priorities of the plurality of mutually different types of data within the at least one of the data packets.
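One way to carry mutually different types of data, each with its own priority, inside a single packet is a length-prefixed field layout. The sketch below assumes an illustrative format (a one-byte part count, then per-part priority, length, and payload); the actual packet layout is not specified by the abstract.

```python
import struct

def pack_packet(parts):
    """Pack (priority, payload_bytes) parts into one packet.

    Layout (an illustrative assumption, not the patented format):
    [count:1][for each part: priority:1, length:2, payload]
    The priority of each data type travels inside the packet itself.
    """
    out = bytearray(struct.pack("!B", len(parts)))
    for priority, payload in parts:
        out += struct.pack("!BH", priority, len(payload))
        out += payload
    return bytes(out)

def unpack_packet(packet):
    """Recover the (priority, payload) parts at the receiver."""
    count, = struct.unpack_from("!B", packet, 0)
    offset, parts = 1, []
    for _ in range(count):
        priority, length = struct.unpack_from("!BH", packet, offset)
        offset += 3
        parts.append((priority, packet[offset:offset + length]))
        offset += length
    return parts
```

Because the priorities ride in the packet, the receiver can recover them without out-of-band signaling.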
IC CARD, PORTABLE ELECTRONIC APPARATUS, AND IC CARD PROCESSING APPARATUS
An IC card has a communication unit, a storage unit, and a controller. The communication unit communicates with an external apparatus. A communication buffer for communication between the communication unit and the external apparatus is set in the storage unit. If the size of a buffer used in communication is designated by the external apparatus, the controller sets a receive buffer that stores reception data and a transmit buffer that stores transmission data in the communication buffer, and notifies the external apparatus of the size of the set receive buffer and the size of the set transmit buffer.
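The buffer negotiation described above can be sketched as follows; the 50/50 receive/transmit split and the clamping of the requested size to the card's storage capacity are illustrative assumptions, not requirements from the abstract.

```python
def set_buffers(requested_size, storage_capacity, rx_fraction=0.5):
    """Split the communication buffer into a receive and a transmit buffer.

    requested_size: buffer size designated by the external apparatus.
    storage_capacity: memory the card can actually dedicate (assumed limit).
    Returns the sizes the card reports back to the external apparatus.
    """
    size = min(requested_size, storage_capacity)
    rx_size = int(size * rx_fraction)
    tx_size = size - rx_size
    return {"rx_size": rx_size, "tx_size": tx_size}
```

Reporting both sizes back lets the external apparatus size its own transfers to what the card can actually hold.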
REDUCING NETWORK LATENCY DURING LOW POWER OPERATION
In an embodiment, a method includes identifying a core of a multicore processor to which an incoming packet that is received in a packet buffer is to be directed, and if the core is powered down, transmitting a first message to cause the core to be powered up prior to arrival of the incoming packet at a head of the packet buffer. Other embodiments are described and claimed.
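The timing decision behind this scheme can be sketched simply: send the wake-up message once the packet's remaining queueing delay no longer exceeds the core's power-up latency. The per-packet service time and wake latency below are illustrative parameters, not values from the embodiment.

```python
def should_wake_now(queue_position, service_time_us, wake_latency_us):
    """Decide whether to send the power-up message for the target core.

    queue_position: packets ahead of the incoming packet in the buffer.
    service_time_us: assumed time to drain one packet from the buffer.
    wake_latency_us: assumed time the core needs to power up.
    Waking when time-to-head <= wake latency means the core is up
    before the packet reaches the head of the packet buffer.
    """
    time_to_head_us = queue_position * service_time_us
    return time_to_head_us <= wake_latency_us
```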
Packet processing at a server
A server processes received real-time transport protocol packets from a first device to obtain sequentially ordered packets at a first buffer. The server decodes the sequentially ordered packets to obtain decoded packets at a decoder. The server encodes the decoded packets to obtain encoded packets at an encoder. The server transmits the encoded packets from the encoder to a storage unit. The server fetches the encoded packets from the storage unit at a first interval using a second buffer. The server transmits the encoded packets from the second buffer to a second device at a second interval.
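The first stage of that pipeline, turning out-of-order RTP arrivals into sequentially ordered packets, can be sketched with a small reorder buffer. This is a minimal sketch: a real jitter buffer would also handle 16-bit sequence-number wraparound, loss timeouts, and playout timing.

```python
import heapq

class ReorderBuffer:
    """First buffer: reorder RTP packets by sequence number before decoding."""

    def __init__(self):
        self._heap = []
        self._next_seq = None

    def push(self, seq, payload):
        """Accept a packet in whatever order the network delivered it."""
        heapq.heappush(self._heap, (seq, payload))

    def pop_ready(self):
        """Release packets in sequence order, stopping at the first gap."""
        out = []
        while self._heap:
            seq, payload = self._heap[0]
            if self._next_seq is None or seq == self._next_seq:
                heapq.heappop(self._heap)
                self._next_seq = seq + 1
                out.append((seq, payload))
            else:
                break
        return out
```

The decode/encode/store/fetch stages then operate on this ordered stream at their own intervals.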
Combining with variable limited buffer rate matching
Methods, systems, and devices for wireless communications are described. A user equipment (UE) may receive a first transmission including encoded bits of a data packet and a second transmission including some or all of the encoded bits of the data packet. The first transmission may be associated with a first limited buffer rate matching (LBRM) configuration, and the second transmission may be associated with a second LBRM configuration. The UE may process the first transmission and the second transmission based on the first LBRM configuration associated with the first transmission being different from the second LBRM configuration associated with the second transmission.
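The combining step can be illustrated generically: each transmission's circular buffer is limited to its own LBRM length, so only the bits actually covered by each transmission contribute soft information. The LLR model and buffer lengths below are assumptions for illustration; this is not the 3GPP rate-matching procedure.

```python
def combine_llrs(llrs_a, len_a, llrs_b, len_b, mother_len):
    """Soft-combine LLRs from two transmissions with different
    limited-buffer lengths (len_a, len_b <= mother_len).

    Bits beyond a transmission's LBRM length were never sent in that
    transmission, so they add nothing to the combined soft buffer.
    """
    combined = [0.0] * mother_len
    for i, llr in enumerate(llrs_a[:len_a]):
        combined[i] += llr
    for i, llr in enumerate(llrs_b[:len_b]):
        combined[i] += llr
    return combined
```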
Technologies for packet forwarding on ingress queue overflow
Technologies for packet forwarding under ingress queue overflow conditions include a computing device configured to receive a network packet from another computing device, determine whether a global packet buffer of the network interface controller (NIC) is full, and determine, in response to a determination that the global packet buffer is full, whether to forward all the global packet buffer entries. The computing device is additionally configured to compare, in response to a determination not to forward all the global packet buffer entries, a selection filter to one or more characteristics of the received network packet and forward, in response to a determination that the selection filter matches the one or more characteristics of the received network packet, the received network packet to a predefined output. Other embodiments are described herein.
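The overflow decision tree above can be sketched as a single function. Representing the packet and the selection filter as dicts of header fields is an assumed representation for illustration.

```python
def forward_on_overflow(packet, buffer_full, forward_all, selection_filter):
    """Decide the fate of a received packet.

    packet, selection_filter: dicts of header fields (assumed model).
    Returns "enqueue" (buffer has room), "forward" (to the predefined
    output), or "drop".
    """
    if not buffer_full:
        return "enqueue"
    if forward_all:
        return "forward"
    # Forward only packets whose characteristics match the filter.
    matches = all(packet.get(k) == v for k, v in selection_filter.items())
    return "forward" if matches else "drop"
```

Matching on a filter rather than forwarding everything lets the device preserve only the traffic classes worth diverting when the global buffer overflows.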
CLOUD SERVER SYSTEM
Provided is a cloud server system comprising a plurality of multi-root input/output virtualized PCIE switches (MR-IOV switches) that are interconnected with each other. The cloud server system based on the MR-IOV PCIE switch in the present invention can meet the design requirements of cloud servers well, with a high performance-to-power-consumption ratio, strong overall service capability, low cost, low power consumption, and high energy efficiency. I/O virtualization is realized in the architecture, thus maximally preserving the performance of the server.
Transport stream packet header compression
A demultiplexer 630 routes only transport stream packets with a single packet identifier value to each physical layer pipe. A header compression unit 620 replaces the packet identifier of the transport stream packet with a short packet identifier, one bit in length, indicating at least whether the transport stream packet is a NULL packet.
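Because each physical layer pipe carries only one packet identifier (PID) value, the 13-bit PID is redundant per pipe and can be compressed to a single bit distinguishing NULL packets from data packets. A sketch of that idea (the MPEG-2 TS NULL PID is 0x1FFF; the function names are illustrative):

```python
NULL_PID = 0x1FFF  # MPEG-2 transport stream null-packet PID

def compress_pid(pid):
    """Replace the 13-bit PID with a 1-bit flag: 1 = NULL packet, 0 = data.

    Valid only because the demultiplexer routes packets with a single
    PID value (plus NULL packets) to each physical layer pipe.
    """
    return 1 if pid == NULL_PID else 0

def decompress_pid(flag, pipe_pid):
    """Restore the full PID from the 1-bit flag and the pipe's known PID."""
    return NULL_PID if flag else pipe_pid
```

The receiver only needs to know which PID its pipe carries to reconstruct every header losslessly.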
Data processing apparatus and data processing method
A data processing apparatus includes a shared buffer; an issuing unit that issues a write address for writing incoming data to the shared buffer; a receiving unit that receives a returned read address for the data read from the shared buffer; a monitoring buffer that saves information indicating the use status of each address of the shared buffer; and a monitoring unit that monitors write address issuance and returned read address reception, changes the information for the write address from an unused state to a used state when the write address is issued, and changes the information for a returned read address from a used state to an unused state when the returned read address is received. The monitoring unit determines that an address of the shared buffer is overlapping when the information for the write address already indicates a used state at the time the write address is issued.
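The monitoring buffer and monitoring unit can be sketched as a per-address used/unused table; the class and method names below are illustrative, not from the apparatus.

```python
class AddressMonitor:
    """Track used/unused state of shared-buffer addresses and flag
    overlapping allocation of the same address."""

    def __init__(self, num_addresses):
        # The monitoring buffer: one use-status flag per shared-buffer address.
        self.used = [False] * num_addresses

    def on_write_issued(self, addr):
        """Mark addr used; return True if it was already in use (overlap)."""
        overlapping = self.used[addr]
        self.used[addr] = True
        return overlapping

    def on_read_returned(self, addr):
        """Mark addr unused once its read address is returned."""
        self.used[addr] = False
```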
TECHNOLOGIES FOR COORDINATING ACCESS TO DATA PACKETS IN A MEMORY
Technologies for coordinating access to packets include a network device. The network device is to establish a ring in a memory of the network device. The ring includes a plurality of slots. The network device is also to allocate cores to each of an input stage, an output stage, and a worker stage. The worker stage is to process data in a data packet with an associated worker function. The network device is also to add, with the input stage, an entry to a slot in the ring representative of a data packet received with a network interface controller of the network device, access, with the worker stage, the entry in the ring to process at least a portion of the data packet, and provide, with the output stage, the processed data packet to the network interface controller for transmission.
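The input/worker/output pipeline over a ring of slots can be sketched as below. This is a single-threaded illustration; the actual design runs each stage on its own allocated cores and would coordinate slots with atomic sequence counters rather than plain Python lists.

```python
class PacketRing:
    """Ring of slots shared by an input stage, a worker stage, and an
    output stage (illustrative sketch)."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.head = 0  # next slot the input stage fills
        self.tail = 0  # next slot the output stage drains

    def input_stage(self, packet):
        """Add an entry representing a packet received from the NIC."""
        self.slots[self.head % len(self.slots)] = {"pkt": packet, "done": False}
        self.head += 1

    def worker_stage(self, worker_fn):
        """Process every pending entry with the associated worker function."""
        for i in range(self.tail, self.head):
            entry = self.slots[i % len(self.slots)]
            if entry and not entry["done"]:
                entry["pkt"] = worker_fn(entry["pkt"])
                entry["done"] = True

    def output_stage(self):
        """Return processed packets, in order, for NIC transmission."""
        out = []
        while self.tail < self.head:
            entry = self.slots[self.tail % len(self.slots)]
            if not entry["done"]:
                break
            out.append(entry["pkt"])
            self.slots[self.tail % len(self.slots)] = None
            self.tail += 1
        return out
```

Keeping the packet descriptor in a ring slot lets all three stages hand off work without copying the packet itself.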