H04L12/865

INTERFACE APPARATUS BETWEEN TSN-DEVICES AND NON-TSN-DEVICES
20210258264 · 2021-08-19

A method for transmitting a first data packet from a receiving input-buffer to a receiving output-buffer, the first data packet in the receiving input-buffer having a non-TSN format and the first data packet in the receiving output-buffer being TSN-compliant, includes the steps of: analysing the first data packet, which has been retrieved from a non-TSN device, in the receiving input-buffer; adding a first data packet time to the first data packet according to a Precision Time Protocol (PTP); adding a predefined first data packet priority level to the first data packet according to a Priority Code Point (PCP) of an 802.1Q tag; transmitting the first data packet to the receiving output-buffer; and sending the first data packet according to the first data packet priority level to a TSN-compliant device.
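The two "adding" steps above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the field layout follows the standard 802.1Q tag (TPID, then PCP/DEI/VID), and an 8-byte nanosecond timestamp stands in for a real PTP clock; the helper names are assumptions.

```python
import struct
import time

TPID_8021Q = 0x8100  # standard 802.1Q Tag Protocol Identifier

def add_8021q_tag(payload: bytes, pcp: int, vlan_id: int = 0) -> bytes:
    """Prepend a 4-byte 802.1Q tag: TPID, then PCP(3) | DEI(1) | VID(12)."""
    assert 0 <= pcp <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (pcp << 13) | vlan_id           # DEI bit left at 0
    return struct.pack("!HH", TPID_8021Q, tci) + payload

def add_ptp_timestamp(packet: bytes, ts_ns=None) -> bytes:
    """Prepend an 8-byte nanosecond timestamp (stand-in for a PTP clock)."""
    if ts_ns is None:
        ts_ns = time.time_ns()
    return struct.pack("!Q", ts_ns) + packet

# A non-TSN payload retrieved from the input buffer, tagged for the output buffer:
raw = b"sensor-reading"
tagged = add_ptp_timestamp(add_8021q_tag(raw, pcp=5))
```

In a real interface the PCP value would come from the predefined priority table for the sending non-TSN device, and the timestamp from a synchronized PTP hardware clock.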

Methods and apparatus for virtualized hardware optimizations for user space networking

Methods and apparatus for efficient data transfer within a user space network stack. Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel, and non-kernel). For example, unlike traditional “socket” based communication, disclosed embodiments can transfer data directly between the kernel and user space domains. Direct transfer reduces the per-byte and per-packet costs relative to socket based communication. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack).

Information processing apparatus and verification system
11088960 · 2021-08-10

An information processing apparatus includes a processor that obtains a flow table from a switch apparatus that processes packets by using the flow table. The processor creates, for each flow registered with the flow table, a verification packet based on identification information that identifies each flow. The processor determines a number of verification packets based on a count value that represents a number of actual packets that have arrived at the switch apparatus. The processor generates the number of verification packets for each flow by copying the verification packet created for each flow. The processor determines transmission order of the generated verification packets based on the count value for each flow and time information that represents a time when a final actual packet of each flow has arrived at the switch apparatus.
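The per-flow generation and ordering logic can be sketched as follows. The `FlowEntry` shape and the exact ordering rule (replay flows whose final packet arrived earlier first) are assumptions drawn from the abstract, not the patent's claimed data model.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    flow_id: str         # identification information for the flow
    count: int           # count value: actual packets seen by the switch
    last_arrival: float  # time the final actual packet arrived

def build_verification_packets(flows):
    """Copy one template verification packet per flow `count` times, then
    order the copies so flows whose final packet arrived earlier replay first."""
    batches = []
    for f in flows:
        template = f"verify:{f.flow_id}"   # stand-in for a real crafted packet
        batches.append((f.last_arrival, [template] * f.count))
    batches.sort(key=lambda b: b[0])       # assumed transmission-order rule
    return [pkt for _, copies in batches for pkt in copies]
```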

TS OPERATION FOR RTA SESSION MANAGEMENT
20210306271 · 2021-09-30

A wireless local area network (WLAN) station and protocol configured to support communicating real-time application (RTA) packets that are sensitive to communication delays, as well as non-real-time packets, over a network supporting traffic stream (TS) operations in which RTA and non-RTA traffic coexist. Stations can request establishing a traffic stream with neighboring stations, which can accept or deny the TS for the RTA stream. Additional information can be passed in requesting the stream, or by the responder when denying the stream.
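The accept/deny exchange can be sketched with a minimal responder. The message fields (`tsid`, `required_airtime_ms`) and the admission rule are assumptions for illustration; the actual protocol would carry 802.11 TSPEC-style parameters.

```python
def respond_to_ts_request(request, airtime_available_ms):
    """Accept the RTA traffic stream if its airtime demand fits the
    responder's budget; otherwise deny and return additional information."""
    if request["required_airtime_ms"] <= airtime_available_ms:
        return {"status": "accept", "tsid": request["tsid"]}
    return {"status": "deny", "tsid": request["tsid"],
            "reason": "insufficient-airtime",
            "available_ms": airtime_available_ms}
```

The extra fields on the deny path mirror the abstract's point that the responder can pass additional information back when denying the stream.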

LATENCY BASED FORWARDING OF PACKETS WITH DESTINATION POLICIES

Latency Based Forwarding (LBF) techniques are presented for the management of the latencies, or delays, of packets forwarded over nodes, such as routers, of a network. In addition to a network header indicating a destination node for receiving the packet, a packet also includes an LBF header indicating the packet's accumulated delay since leaving the sender, together with a maximum and a minimum latency for the entire journey from the sender to the receiver. When a packet is received at a node, based on the accumulated delay, the maximum latency, and the minimum latency, the node places the packet in a forwarding queue to manage the delays between the sender and the receiver. The LBF header can also indicate a policy for the forwarding node to use when determining the enqueueing of the packet.
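One plausible enqueueing rule based on the three LBF header fields can be sketched as below. The patent leaves the policy open, so the queue names, the per-hop cost, and the decision thresholds are all assumptions.

```python
def choose_queue(accumulated_ms: float, min_latency_ms: float,
                 max_latency_ms: float, hops_remaining: int) -> str:
    """Pick a local forwarding queue from the LBF header fields."""
    slack = max_latency_ms - accumulated_ms       # budget left to the deadline
    if slack <= 0:
        return "drop"                             # deadline already missed
    if accumulated_ms < min_latency_ms and hops_remaining <= 1:
        return "delay"                            # too early: hold the packet
    # Less slack per remaining hop means more urgency; assume ~1 ms per hop.
    if slack / max(hops_remaining, 1) < 1.0:
        return "expedite"
    return "normal"
```

Dividing the remaining slack by the remaining hop count is one simple way to turn end-to-end bounds into a per-node decision, which is the core idea the abstract describes.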

Method and apparatus for transmitting and obtaining uplink HARQ feedback

Methods and apparatus for transmitting and obtaining uplink HARQ feedback are provided. The method of transmitting uplink HARQ feedback includes: obtaining a service priority ordered list associated with a plurality of downlink subframes; generating to-be-sent HARQ feedback by bundling HARQ feedback associated with the downlink subframes in descending order of service priority, based on the service priority ordered list; and successively assigning uplink subframes to the to-be-sent HARQ feedback for transmission to a base station. With this method, user equipment may preferentially transmit HARQ feedback for low-latency subframes to the base station. When the base station determines that transmission of latency-sensitive service data is faulty, the downlink subframe carrying that data is preferentially re-transmitted, thereby shortening the delivery time of the latency-sensitive service data.
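The priority-ordered bundling step can be sketched as follows. The input shapes, the bundle size, and the logical-AND bundling rule (one NACK fails the whole bundle, as in LTE HARQ bundling) are assumptions for illustration.

```python
def bundle_feedback(subframe_feedback, priority_order, bundle_size=2):
    """Bundle per-subframe ACK/NACK bits in descending service priority.

    subframe_feedback: {downlink_subframe_index: ack_bool}
    priority_order: subframe indices, highest service priority first
    Returns one bool per bundle; bundle k maps to uplink subframe k."""
    ordered = [subframe_feedback[i] for i in priority_order]
    bundles = []
    for start in range(0, len(ordered), bundle_size):
        group = ordered[start:start + bundle_size]
        bundles.append(all(group))   # logical AND: one NACK fails the bundle
    return bundles
```

Because high-priority subframes land in the earliest bundles, their feedback reaches the base station first, which is what lets latency-sensitive data be re-transmitted sooner.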

In-vehicle apparatus, information processing unit, information processing method, and non-transitory computer readable storage medium that stores program

An in-vehicle apparatus includes a processor configured to obtain first transmission data with a first communication address as a destination and second transmission data with a second communication address as a destination from one or more applications, transmit the first transmission data to a relay unit at a first timing among a plurality of timings set at an interval of a predetermined cycle corresponding to a buffer size of the relay unit to which the in-vehicle apparatus is connected, and transmit the second transmission data to the relay unit at a second timing among the plurality of timings, the second timing being different from the first timing.
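The timing scheme above amounts to binding each destination address to its own slot within a fixed cycle. A minimal sketch, with the slot-assignment policy (a deterministic hash of the address) and the parameter names assumed:

```python
def slot_for(address: str, cycle_ms: float, num_slots: int) -> float:
    """Map a destination communication address to a fixed transmission
    offset within the cycle, so different destinations use different timings."""
    index = sum(address.encode()) % num_slots   # assumed assignment policy
    return index * (cycle_ms / num_slots)
```

In the apparatus described, the cycle length would be chosen to match the relay unit's buffer size, so spreading destinations across slots prevents the buffer from being overrun by bursts to one address.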

System and Method for Latency Critical Quality of Service Using Continuous Bandwidth Control

A system and method are provided for a bandwidth manager for packetized data designed to arbitrate access between multiple, high bandwidth, ingress channels (sources) to one, lower bandwidth, egress channel (sink). The system calculates which source to grant access to the sink on a word-by-word basis and intentionally corrupts/cuts packets if a source ever loses priority while sending. Each source is associated with a ranking that is recalculated every data word. When a source buffer sends enough words to have its absolute rank value increase above that of another source buffer waiting to send, the system “cuts” the current packet by forcing the sending buffer to stop mid-packet and selects a new, lower ranked, source buffer to send. When there are multiple requesting source buffers with the same rank, the system employs a weighted priority randomized scheduler for buffer selection.
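The word-granular arbitration with packet cutting can be sketched as below. The rank formula (words sent scaled by a per-source weight) and the class shape are assumptions; the abstract only fixes that ranks are recomputed every word and that an in-flight packet is cut when a waiting source attains a strictly better rank.

```python
import random

class Arbiter:
    def __init__(self, weights):
        self.weights = weights                   # per-source rank scaling
        self.sent = {s: 0 for s in weights}      # words sent so far

    def rank(self, src):
        # Lower rank wins; rank grows as a source consumes bandwidth.
        return self.sent[src] * self.weights[src]

    def select(self, waiting, current=None):
        """Pick the source for the next word; may cut `current` mid-packet.
        Returns (source, was_cut)."""
        candidates = set(waiting) | ({current} if current else set())
        best = min(self.rank(s) for s in candidates)
        tied = [s for s in candidates if self.rank(s) == best]
        if current in tied:
            return current, False                # keep sending, no cut
        # Weighted randomized pick among equally ranked waiting sources.
        choice = random.choices(tied, [1 / self.weights[s] for s in tied])[0]
        return choice, current is not None       # True => packet was cut
```

The cut case is what distinguishes this scheme from ordinary packet-granular arbitration: a low-latency source never waits longer than one word for a higher-ranked sender to be preempted.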

INTELLIGENT QUEUING OF RULES BASED COMMAND INVOCATIONS
20210185146 · 2021-06-17

Constraint-based command invocations are dynamically queued in a cloud queue so that aspects of remote user devices may be remotely controlled with reduced exposure to inconvenient remotely issued commands. By monitoring conditions that may trigger command invocations, verifying the rules of associated constraints prior to queuing command invocations, evaluating parameters to prioritize command invocations in a dynamic issuing order within the cloud queue, and examining factors and re-verifying previously verified rules when determining whether to transmit a command from a command invocation located at the transmission position of the cloud queue, the systems and methods herein provide a constrained environment within which user devices may be remotely controlled, relatively free from unexpected cloud-caused encumbrances during inopportune moments.
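The verify-queue-reverify flow can be sketched with a priority heap. The rule and command representations are assumptions; the essential pattern from the abstract is that constraints are checked once at admission and checked again at the moment of transmission.

```python
import heapq

class CommandQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                            # tie-breaker for stable order

    def enqueue(self, command, priority, rules, state):
        """Admit a command only if all constraint rules pass right now."""
        if all(rule(state) for rule in rules):
            heapq.heappush(self._heap, (priority, self._seq, command, rules))
            self._seq += 1
            return True
        return False

    def transmit_next(self, state):
        """Re-verify rules at the head of the queue before transmitting;
        commands whose constraints no longer hold are discarded."""
        while self._heap:
            _, _, command, rules = heapq.heappop(self._heap)
            if all(rule(state) for rule in rules):
                return command
        return None                              # nothing currently valid
```

Re-verification at the transmission position is what prevents a command that was valid when queued (say, "unlock the door while the owner is home") from firing after conditions have changed.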

Quality of Service Management in a Distributed Storage System

One or more computing devices may comprise congestion management circuitry, one or more client file system request buffers, and DESS interface circuitry. The congestion management circuitry is operable to determine an amount of congestion in the DESS. The one or more client file system request buffers is/are operable to queue first client file system requests of a first priority level and second client file system requests of a second priority level, wherein the first priority level is higher priority than the second priority level. The DESS interface circuitry is operable to control a rate at which the first client file system requests and second client file system requests are fetched from the one or more client file system request buffers based on the amount of congestion in the DESS, on the first priority level, and on the second priority level.
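The congestion- and priority-dependent fetch rate can be illustrated with a simple rate curve. The abstract only fixes the inputs (congestion plus the two priority levels), so the exponential throttle below is an assumption chosen to show the intended shape: as congestion rises, lower-priority requests are cut back more sharply than higher-priority ones.

```python
def fetch_rate(base_rate: float, congestion: float, priority: int) -> float:
    """Requests/sec to fetch from a buffer of the given priority level.

    congestion is normalized to [0, 1]; priority 1 is the highest level."""
    assert 0.0 <= congestion <= 1.0 and priority >= 1
    throttle = (1.0 - congestion) ** priority    # steeper cut for low priority
    return base_rate * throttle
```

At 50% congestion this gives priority-1 buffers half their base rate but priority-2 buffers only a quarter, so higher-priority client file system requests keep flowing as the DESS saturates.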