Patent classifications
H04L49/9094
Data Transmission Method and System
The present application discloses a method and a system for transmitting data. A method embodiment comprises: acquiring, by a data receiver, the most recent shared memory block index of a shared memory segment, the shared memory segment being used by a data transmitter and the data receiver to transmit data; determining whether the most recent shared memory block index is consistent with the shared memory block index corresponding to the data most recently read by the data receiver; and deciding, according to the determination, whether to read the data in the shared memory block corresponding to the most recent shared memory block index. According to the present application, when the data receiving process processes data less frequently than the data transmitting process produces it, the data receiving process directly reads the most recent data and abandons the outdated data that was not processed in time, without affecting other data receiving processes that process data at a higher frequency. Accordingly, the extremely stringent real-time requirements for data processing by a process in, for example, the control system of an autonomous vehicle are satisfied, improving the security and stability of the system.
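The receiver-side logic described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class names and the in-process ring of blocks stand in for a real shared memory segment, and each receiver tracks only the index of the block it last read, jumping straight to the newest block when the indices differ.

```python
class SharedSegment:
    """Stand-in for a shared memory segment: a ring of blocks plus the
    index of the most recently written block, maintained by the transmitter."""
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks
        self.latest_index = -1  # no data written yet

    def write(self, data):
        self.latest_index = (self.latest_index + 1) % len(self.blocks)
        self.blocks[self.latest_index] = data


class Receiver:
    """Each receiver keeps its own last-read index, so a slow receiver
    skipping stale blocks does not affect faster receivers."""
    def __init__(self, segment):
        self.segment = segment
        self.last_read_index = -1  # index of the block last read

    def read_latest(self):
        latest = self.segment.latest_index
        if latest == self.last_read_index:
            return None  # indices consistent: nothing new to read
        # Indices differ: skip any outdated blocks and read the newest one.
        self.last_read_index = latest
        return self.segment.blocks[latest]
```

A slow receiver that falls three blocks behind simply reads the newest block on its next call, implicitly abandoning the two it never processed.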
Transport stream packet header compression
A demultiplexer 630 routes only transport stream packets bearing a single packet identifier value to each physical layer pipe. A header compression unit 620 replaces the packet identifier of the transport stream packet with a short packet identifier, one bit in length, indicating at least whether the transport stream packet is a NULL packet.
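The compression works because each pipe carries only one data packet identifier, so the 13-bit PID is redundant: one bit suffices to distinguish that PID from a NULL packet. A hedged sketch of this substitution (function names are illustrative; only the NULL PID value 0x1FFF is standard MPEG-TS):

```python
NULL_PID = 0x1FFF  # the standard MPEG-TS null-packet PID

def compress_pid(pid):
    """Replace the 13-bit PID with a 1-bit short PID: 1 for a NULL
    packet, 0 for the single data PID carried on this pipe."""
    return 1 if pid == NULL_PID else 0

def decompress_pid(short_pid, pipe_pid):
    """Restore the full PID at the receiver, which knows the single
    data PID assigned to the physical layer pipe."""
    return NULL_PID if short_pid == 1 else pipe_pid
```

Round-tripping any PID through `compress_pid` and `decompress_pid` recovers the original, provided the receiver knows the pipe's assigned data PID.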
Packet Processing At A Computer
A computer stores packets from a first device at a first buffer. The computer decodes the packets to obtain decoded packets at a decoder. The computer encodes the decoded packets to obtain encoded packets at an encoder. The computer transmits the encoded packets from the encoder to a storage unit. The computer fetches the encoded packets from the storage unit using a second buffer. The computer causes a transmitter to transmit the encoded packets from the second buffer to a second device.
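The packet path above can be modeled as a simple staged pipeline. The sketch below is purely illustrative: the decode and encode steps are trivial placeholders, and the class and method names are assumptions, not taken from the patent.

```python
class PacketPath:
    """Toy model of the buffer -> decode -> encode -> store -> fetch ->
    transmit path described in the abstract."""
    def __init__(self):
        self.first_buffer = []
        self.storage_unit = []
        self.second_buffer = []
        self.transmitted = []

    def receive(self, packets):
        # Store packets from the first device at the first buffer.
        self.first_buffer.extend(packets)

    def decode(self, packet):
        return packet.lower()   # placeholder decode step

    def encode(self, packet):
        return packet.upper()   # placeholder encode step

    def process(self):
        # Decode then re-encode each buffered packet, sending the
        # encoded packets to the storage unit.
        while self.first_buffer:
            decoded = self.decode(self.first_buffer.pop(0))
            self.storage_unit.append(self.encode(decoded))

    def fetch_and_transmit(self):
        # Fetch from the storage unit via the second buffer, then
        # transmit to the second device.
        self.second_buffer = self.storage_unit[:]
        self.transmitted.extend(self.second_buffer)
        self.second_buffer.clear()
```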
TECHNIQUES FOR WARMING UP A NODE IN A DISTRIBUTED DATA STORE
In various embodiments, a node manager configures a "new" node as a replacement for an "unavailable" node that was previously included in a distributed data store. First, the node manager identifies a source node that stores client data that was also stored in the unavailable node. Subsequently, the node manager configures the new node to operate as a slave of the source node and streams the client data from the source node to the new node. Finally, the node manager configures the new node to operate as one of multiple master nodes in the distributed data store. Advantageously, by configuring the node to implement a hybrid of a master-slave replication scheme and a master-master replication scheme, the node manager enables the distributed data store to process client requests without interruption while automatically restoring the previous level of redundancy provided by the distributed data store.
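The three-step warm-up sequence can be sketched as below. All names are assumptions for illustration; streaming is modeled as a dictionary copy, and the promotion to master happens only after the copy completes, which is what lets the other masters keep serving requests throughout.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.role = "new"
        self.data = {}


class NodeManager:
    """Hypothetical manager implementing the hybrid replication scheme:
    the new node is briefly a slave, then joins the masters."""
    def replace_unavailable_node(self, source, new_node):
        # 1. Configure the new node as a slave of the source node.
        new_node.role = "slave"
        # 2. Stream the client data from the source node to the new node.
        new_node.data.update(source.data)
        # 3. Promote the new node to operate as one of the master nodes
        #    once replication has caught up.
        new_node.role = "master"
        return new_node
```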
Radio communication apparatus
A radio receiving apparatus for receiving variable-length RLC PDU data in an RLC layer includes a buffer memory sectioned into a plurality of areas, each sized to the predetermined maximum data length of the RLC PDU data. By referring to the sequence number SN included in each received RLC PDU, the radio receiving apparatus stores RLC PDU data having an identical sequence number SN into an identical area, and assembles RLC SDU data on the basis of the RLC PDU data stored in each area.
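The reassembly scheme above can be sketched as follows, assuming (as an illustration, not from the patent) that PDU segments carry an in-SDU ordering index alongside the sequence number SN: segments sharing an SN land in the same area, and the SDU is assembled by concatenating that area's segments in order.

```python
class RlcReassemblyBuffer:
    """Toy model of the buffer memory sectioned into per-SN areas."""
    def __init__(self):
        self.areas = {}  # SN -> {segment_index: payload}

    def store_pdu(self, sn, segment_index, payload):
        # PDUs with an identical SN are stored into an identical area,
        # regardless of arrival order.
        self.areas.setdefault(sn, {})[segment_index] = payload

    def assemble_sdu(self, sn):
        # Assemble the SDU from the PDU segments stored in the SN's area.
        segments = self.areas.get(sn, {})
        return b"".join(segments[i] for i in sorted(segments))
```

Because each SN has its own area, out-of-order arrival across different SDUs never interleaves their payloads.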
METHOD AND DEVICE FOR FORWARDING DATA MESSAGES
The present application discloses a method and device for forwarding a data message. A specific embodiment of the method comprises: receiving the data message and reading the data content length value of the first row in the data message; determining whether the data content length value is less than or equal to the maximum segment size of a single transmission according to the transmission control protocol; reading data from the data message in segments in response to the data content length value being less than or equal to the maximum segment size; reading data from the data message in rows in response to the data content length value being greater than the maximum segment size; and storing the read data in a user buffer, and sending the data in the user buffer to a terminal when the data in the user buffer exceeds a preset capacity threshold. According to this embodiment, data messages can be forwarded quickly and efficiently.
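The branching read strategy and the buffered send can be sketched as below. The MSS and threshold values and the helper names are assumptions for illustration; only the control flow (segment reads vs. row reads, flush on exceeding the threshold) follows the embodiment described above.

```python
MSS = 1460               # assumed maximum segment size of a single TCP send
FLUSH_THRESHOLD = 8192   # assumed preset capacity threshold of the user buffer

def forward_message(rows, send):
    """rows: list of byte strings, one per row of the data message.
    send: callable that transmits buffered data to the terminal."""
    user_buffer = bytearray()
    if len(rows[0]) <= MSS:
        # First-row length fits in one segment: read the message in
        # MSS-sized segments.
        message = b"".join(rows)
        chunks = [message[i:i + MSS] for i in range(0, len(message), MSS)]
    else:
        # First-row length exceeds the MSS: read the message row by row.
        chunks = rows
    for chunk in chunks:
        user_buffer.extend(chunk)
        if len(user_buffer) > FLUSH_THRESHOLD:
            send(bytes(user_buffer))  # flush the user buffer to the terminal
            user_buffer.clear()
    if user_buffer:
        send(bytes(user_buffer))      # send any remainder
```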
Reliable transport offloaded to network devices
Examples described herein relate to a reliable transport protocol for packet transmission using an Address Family of an eXpress Data Path (AF_XDP) queue framework, wherein the AF_XDP queue framework is to provide a queue for received packet receipt acknowledgements (ACKs). In some examples, an AF_XDP socket is to connect a service with a driver for the network device, one or more queues are associated with the AF_XDP socket, and at least one of the one or more queues comprises a waiting queue for received packet receipt ACKs. In some examples, at least one of the one or more queues is to identify one or more packets for which ACKs have been received. In some examples, the network device is to re-transmit a packet identified by a descriptor in the waiting queue based on non-receipt of an ACK associated with the packet from a receiver.
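The waiting-queue behavior can be modeled in a few lines. This is a simplified sketch under stated assumptions, not the AF_XDP descriptor machinery itself: packets are tracked by an assumed sequence number, and a descriptor leaves the waiting queue when its ACK arrives, while packets whose ACKs have not arrived within a timeout are flagged for retransmission.

```python
class WaitingQueue:
    """Toy model of the waiting queue for received-packet ACKs."""
    def __init__(self):
        self.pending = {}  # packet sequence number -> send timestamp

    def on_send(self, seq, now):
        # A descriptor for the sent packet enters the waiting queue.
        self.pending[seq] = now

    def on_ack(self, seq):
        # ACK received: the packet's descriptor leaves the waiting queue.
        self.pending.pop(seq, None)

    def packets_to_retransmit(self, now, timeout):
        # Non-receipt of an ACK within the timeout triggers retransmission.
        return [seq for seq, sent in self.pending.items()
                if now - sent >= timeout]
```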
System and method for preserving order of data processed by processing engines
A device includes an input processing unit and an output processing unit. The input processing unit dispatches first data to one of a group of processing engines, records an identity of the one processing engine in a location in a first memory, reserves one or more corresponding locations in a second memory, causes the first data to be processed by the one processing engine, and stores the processed first data in one of the locations in the second memory. The output processing unit receives second data, assigns an entry address corresponding to a location in an output memory to the second data, transfers the second data and the entry address to one of a group of second processing engines, causes the second data to be processed by the second processing engine, and stores the processed second data to the location in the output memory.
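The key ordering idea, reserving a location in the output memory in dispatch order before any engine runs, can be sketched as below. This is a simplified synchronous model with assumed names; in the actual device the engines complete asynchronously and possibly out of order, but each result still lands in its pre-reserved slot, so reading the slots in order preserves the dispatch order.

```python
class OrderPreservingDispatcher:
    """Toy model: reserve the output slot at dispatch time, fill it when
    the chosen engine finishes."""
    def __init__(self):
        self.output_memory = []  # slots filled possibly out of order
        self.engine_log = []     # first memory: which engine got each item

    def dispatch(self, data, engine_id, process):
        slot = len(self.output_memory)
        self.output_memory.append(None)   # reserve location in second memory
        self.engine_log.append(engine_id) # record identity of chosen engine
        result = process(data)            # engine processes the data
        self.output_memory[slot] = result # store into the reserved location
        return slot
```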
Fixed HS-DSCH or E-DCH allocation for VOIP (or HS-DSCH without HS-SCCH/E-DCH without E-DPCCH)
In order to reduce the HS-SCCH overhead, a fixed time allocation approach could be used. In that case, the scheduling time of each VoIP user is semi-static and thus there is no need to transmit e.g. HS-SCCH toward the UE for the initial transmissions, if the UE knows when to receive data on the HS-DSCH and what transport format is used. There are at least two ways of implementing this: 1) HS-SCCH/E-DPCCH signalling to indicate parameters of a first transmission, with subsequent transmissions using the same parameters (and HS-SCCH/E-DPCCH always sent when changes needed), or 2) fixed allocation, RRC signalling used to allocate users and tell the default transport parameters.
Reception according to a data transfer protocol of data directed to any of a plurality of destination entities
A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; and a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to respective destination identities; and a response former arranged for: receiving a message requesting a response indicating the availability of received data to each of a group of destination identities; and forming such a response; wherein the system is arranged to, in dependence on receiving the said message.