Method of guaranteeing jitter upper bound for a network without time-synchronization
11742972 · 2023-08-29
Assignee
Inventors
CPC classification
H04L47/283
ELECTRICITY
International classification
Abstract
In a method of guaranteeing a jitter upper bound for a network without time-synchronization, which guarantees a jitter upper bound for a flow that is transmitted from a source to a destination through a network, the network guarantees a latency upper bound of the flow, a buffer located between the network and the destination holds a packet of the flow for a predetermined buffer holding interval and then outputs the packet, and the jitter upper bound is set to an arbitrary value including 0 (zero).
Claims
1. A method of guaranteeing a jitter upper bound for a network without time-synchronization, which guarantees a jitter upper bound for a flow that is transmitted from a source to a destination through a network, the method comprising: setting a jitter upper bound to an arbitrary value; determining buffer holding intervals; holding, at a buffer located between the network and the destination, packets of the flow for the determined buffer holding intervals; and outputting, by the buffer, the packets of the flow, wherein the network guarantees a latency upper bound of the flow, wherein the arbitrary value can be as low as 0 (zero), wherein each packet includes a relative time-stamp in which a departure time from the source or an arrival time to the network is recorded, wherein the buffer holding intervals are determined using the relative time-stamp, and wherein the buffer is configured to: hold a first packet of the flow for a buffer holding interval (m−W); and hold an n.sup.th packet of the flow until a time max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)}, where a.sub.n is an arrival time when the n.sup.th packet of the flow arrives at the network, a.sub.1 is an arrival time when the first packet of the flow arrives at the network, b.sub.n is a departure time when the n.sup.th packet departs from the network, c.sub.1 is a buffer-out time when the first packet departs from the buffer, U is the latency upper bound, W is a latency lower bound provided by the network, m is a buffering parameter, W≤m≤U, and n>1.
2. The method of guaranteeing a jitter upper bound for a network without time-synchronization of claim 1, wherein the jitter upper bound is U−m, and zero jitter is implemented by setting m=U.
3. The method of guaranteeing a jitter upper bound for a network without time-synchronization of claim 2, wherein the arbitrary value is set to 0 (zero).
4. A method of guaranteeing a jitter upper bound for a network without time-synchronization, which guarantees a jitter upper bound for a flow that is transmitted from a source to a destination through a network, the method comprising: setting a jitter upper bound to an arbitrary value; determining buffer holding intervals; holding, at a buffer located at a boundary of the network, packets of the flow for the determined buffer holding intervals; and outputting, by the buffer, the packets of the flow, wherein the network guarantees a latency upper bound of the flow, wherein the arbitrary value can be as low as 0 (zero), wherein each packet includes a relative time-stamp in which a departure time from the source or an arrival time to the network is recorded, wherein the buffer holding intervals are determined using the relative time-stamp, and wherein the buffer is configured to: hold a first packet of the flow for a buffer holding interval (m−W); and hold an n.sup.th packet of the flow until a time max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)}, where a.sub.n is an arrival time when the n.sup.th packet of the flow arrives at the network, a.sub.1 is an arrival time when the first packet of the flow arrives at the network, b.sub.n is a departure time when the n.sup.th packet departs from the network, c.sub.1 is a buffer-out time when the first packet departs from the buffer, U is the latency upper bound, W is a latency lower bound provided by the network, m is a buffering parameter, W≤m≤U, and n>1.
5. The method of guaranteeing a jitter upper bound for a network without time-synchronization of claim 4, wherein the jitter upper bound is U−m, and zero jitter is implemented by setting m=U.
6. The method of guaranteeing a jitter upper bound for a network without time-synchronization of claim 5, wherein the arbitrary value is set to 0 (zero).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The above and other features and advantages will become more apparent to those of ordinary skill in the art by describing in detail exemplary embodiments with reference to the attached drawings.
(7) In the following description, the same or similar elements are labeled with the same or similar reference numbers.
DETAILED DESCRIPTION
(8) The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
(9) The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes”, “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. In addition, a term such as a “unit”, a “module”, a “block” or like, when used in the specification, represents a unit that processes at least one function or operation, and the unit or the like may be implemented by hardware or software or a combination of hardware and software.
(10) Reference herein to a layer formed “on” a substrate or other layer refers to a layer formed directly on top of the substrate or other layer or to an intermediate layer or intermediate layers formed on the substrate or other layer. It will also be understood by those skilled in the art that structures or shapes that are “adjacent” to other structures or shapes may have portions that overlap or are disposed below the adjacent features.
(11) In this specification, the relative terms, such as “below”, “above”, “upper”, “lower”, “horizontal”, and “vertical”, may be used to describe the relationship of one component, layer, or region to another component, layer, or region, as shown in the accompanying drawings. It is to be understood that these terms are intended to encompass not only the directions indicated in the figures, but also the other directions of the elements.
(12) Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
(14) The core idea of the present disclosure is to buffer a first packet of a flow for an appropriate time interval based on a latency upper bound provided by a network, and to buffer all subsequent packets of the flow so that inter-arrival intervals thereof become similar to inter-departure intervals.
(15) In the prior art, a time-stamp is generally used for latency/jitter measurement or for clock synchronization; in the present disclosure, however, it is used as a tool for guaranteeing a jitter upper bound.
(16) Hereinafter, a method of guaranteeing a jitter upper bound for a deterministic network without time-synchronization according to an embodiment of the present disclosure will be described.
(18) In the present disclosure, no time-synchronization or slot allocation between the nodes of the network is required; instead, the following three components are required: 1) a network 10 that guarantees latency upper bounds; 2) a buffer 20 before a destination, which can hold the packets destined for the destination for a predetermined interval; and 3) packets with relative time-stamps inscribed with the clock of the source, or of the network ingress interface.
(19) A relative time-stamp means that the source of the traffic recording the time-stamp need not be synchronized with any other node. In the present disclosure, a mechanism records in the packet a time-stamp of the departure instance from the source or of the arrival instance (a.sub.n) to the network. For example, the time-stamp function of TCP or RTP may be used. Alternatively, a time-stamp implemented in a lower layer, for example the MAC layer, may also be used. In addition, there is no need to share a synchronized clock between the source and the destination; it is sufficient to know the difference between time-stamps, namely information about the relative arrival instance.
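The point that only time-stamp differences matter can be illustrated with a short sketch (not part of the disclosure; the clock offset value is hypothetical): subtracting two time-stamps written on the same unsynchronized source clock cancels the offset, recovering the relative arrival interval (a.sub.n−a.sub.1).

```python
# Hypothetical, fixed offset between the source clock and the buffer clock;
# the buffer never learns this value.
SOURCE_CLOCK_OFFSET = 4096.0

def source_timestamp(arrival_instance):
    """Relative time-stamp written at the source / network ingress."""
    return arrival_instance + SOURCE_CLOCK_OFFSET

# True arrival instances a_1, a_2, a_3 of three packets (in ms).
arrivals = [0.0, 1.25, 2.5]
stamps = [source_timestamp(a) for a in arrivals]

# The buffer only ever subtracts time-stamps, so the unknown offset cancels
# and (a_n - a_1) is recovered exactly.
relative = [s - stamps[0] for s in stamps]
print(relative)  # [0.0, 1.25, 2.5]
```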
(20) Also, the destination may be a small-sized time-synchronized network, like a TSN synchronous network.
(21) The network 10 may have any size, topology or input pattern as long as it guarantees the latency upper bound (U) of the flow. In addition, the network 10 provides a latency lower bound (W) to the flow; the latency lower bound (W) may be caused by, for example, transmission and propagation delays within the network.
(22) The time when an n.sup.th packet of the flow is input to the network 10 is called an arrival instance (a.sub.n). The time when the n.sup.th packet is output from the network 10 is called a departure instance (b.sub.n). For example, a.sub.1 and b.sub.1 are arrival and departure instances of the first packet of the flow, respectively. It is assumed that the buffering parameter m is a value between W and U, that is, W≤m≤U.
(23) The buffer 20 may hold packets of the flow according to predefined intervals. To determine the buffer holding interval, a time-stamp in each packet may be used as described later.
(24) It may be assumed that the buffer 20 is able to support as many flows as are destined for the destination. In addition, if the buffer 20 is not suitable to be placed within an end station, the buffering function may be added at the boundary of the network 10.
(26) The basic rule for determining the buffer holding interval is as follows. The buffer 20 holds the first packet for an interval (m−W), for m, W≤m≤U. The buffer-out instance of the first packet c.sub.1 is (b.sub.1+m−W). The buffer 20 holds the n.sup.th packet until instance max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)}, for n>1.
(27) To this end, the buffer 20 needs information on W, U, b.sub.1, c.sub.1, b.sub.n and (a.sub.n−a.sub.1). Here, W and U can be provided by the network, and b.sub.1 and b.sub.n can easily be obtained by the buffer with its own clock. The buffer 20 needs to keep a record of the buffer-out instance c.sub.1. The time difference (a.sub.n−a.sub.1) can be calculated from the difference of the time-stamps written at the source. As such, the buffer 20 does not need to know the exact values of a.sub.n or a.sub.1, so the source clock does not need to be synchronized with the buffer clock.
(28) Algorithm 1 below is an algorithm for determining a buffer holding interval according to an embodiment of the present disclosure.
(29) Algorithm 1 Buffer holding interval decision
01: procedure BUFFER (m, PKT)  ▷ m is a preset value based on W, U, and the jitter bound, W ≤ m ≤ U; PKT is a packet just received by the buffer
02:   if first packet in the flow then
03:     hold the packet with the interval (m − W)  ▷ the buffer-out instance of the first packet c.sub.1 is then (b.sub.1 + m − W)
04:     TRANSMIT the packet at the decided instance
05:     while there is a packet already arrived before the first packet
06:       hold the packet until the instance max {b.sub.n, c.sub.1 + (a.sub.n − a.sub.1)}  ▷ say this is the n.sup.th packet, n > 1
07:       TRANSMIT the packet at the decided instance
08:     end while
09:   else if the first packet has already arrived then
10:     hold the n.sup.th packet until the instance max {b.sub.n, c.sub.1 + (a.sub.n − a.sub.1)}  ▷ n > 1
11:     TRANSMIT the packet at the decided instance
12:   else wait for another packet arrival
13:   end if
(30) The implementation of lines 6 and 10 in Algorithm 1 is feasible since max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)} is, by definition, greater than or equal to b.sub.n, the departure instance of the n.sup.th packet from the network.
(31) The buffer 20 has to be able to identify the first packet of a flow, in order to identify the relative time-stamp values representing the instances b.sub.1 and a.sub.1. If a flag in the header indicates that the packet is indeed the first packet, or if a FIFO property is guaranteed in the network, this is trivial. Alternatively, a sequence number written in the packet header, such as the one in RTP, would work as well.
(32) If some of the earlier packets (e.g., the 2.sup.nd or 3.sup.rd packet of a flow) arrive at the buffer sooner than the first packet, they are held until the first packet's buffer-out instance plus the corresponding interval, as specified in lines 5 and 6 of Algorithm 1.
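The holding rule, including the out-of-order case above, can be sketched in Python (an illustration, not the patent's implementation; representing a packet as a (sequence number, relative time-stamp, network departure instance) tuple is an assumption of the sketch):

```python
def buffer_out_instants(packets, m, W):
    """Return the buffer-out instant c_n for each packet of one flow.

    packets : list of (seq, rel_stamp, b_n), in order of arrival at the buffer;
              rel_stamp is the source time-stamp, b_n the network departure
              instance on the buffer's own clock
    m       : buffering parameter, W <= m <= U
    W       : latency lower bound of the network
    """
    c1 = None          # buffer-out instance of the first packet
    stamp1 = None      # relative time-stamp of the first packet
    held = []          # packets that arrived before the first packet
    out = {}

    for seq, stamp, b_n in packets:
        if seq == 1:
            # line 03: hold the first packet for the interval (m - W)
            c1 = b_n + (m - W)
            stamp1 = stamp
            out[1] = c1
            # lines 05-08: release packets that arrived before the first one
            for s2, st2, b2 in held:
                out[s2] = max(b2, c1 + (st2 - stamp1))
            held = []
        elif c1 is not None:
            # line 10: n-th packet, n > 1
            out[seq] = max(b_n, c1 + (stamp - stamp1))
        else:
            # line 12: first packet not seen yet; wait
            held.append((seq, stamp, b_n))
    return out
```

For example, with W=1 and m=3, a first packet leaving the network at b.sub.1=2 is released at c.sub.1=4, and a second packet with (a.sub.2−a.sub.1)=5 and b.sub.2=6 is released at max {6, 4+5}=9, whether or not it overtakes the first packet inside the network.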
(34) According to the present disclosure described above, it may be found that the following theorems hold.
Theorem 1 (Upper Bound of the End to End (E2E) Buffered Latency)
(35) The latency from the packet arrival to the buffer-out instance (c.sub.n−a.sub.n) is upper bounded by (m+U−W).
(36) Proof. By definition, c.sub.n=max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)}. Since b.sub.n≤a.sub.n+U and m≥W, the first argument is at most a.sub.n+m+U−W. Since c.sub.1=(b.sub.1+m−W)≤a.sub.1+U+m−W, the second argument is also at most a.sub.n+m+U−W. Therefore c.sub.n−a.sub.n≤m+U−W.
Theorem 2 (Lower Bound of the E2E Buffered Latency)
(37) The latency from the packet arrival to the buffer-out instance (c.sub.n−a.sub.n) is lower bounded by m.
(38) Proof. By definition, c.sub.n≥c.sub.1+(a.sub.n−a.sub.1), so c.sub.n−a.sub.n≥c.sub.1−a.sub.1=(b.sub.1−a.sub.1)+m−W≥W+m−W=m, since the network latency b.sub.1−a.sub.1 is at least W.
(39) The jitter, or the latency difference between a pair of packets, can be defined as follows.
Definition (Jitter)
(40) The jitter between the i.sup.th packet and the j.sup.th packet of a flow is defined as follows:
r.sub.ij=|(c.sub.i−a.sub.i)−(c.sub.j−a.sub.j)|.
Theorem 3 (Upper Bound of the Jitter)
(41) The jitter is upper bounded by (U−m).
(42) Proof. Define r.sub.n=r.sub.n1=(c.sub.n−a.sub.n)−(c.sub.1−a.sub.1). Recall that c.sub.1=(b.sub.1+m−W) and c.sub.n=max {b.sub.n, c.sub.1+(a.sub.n−a.sub.1)}. From the definition, r.sub.n=max {b.sub.n−a.sub.n−(c.sub.1−a.sub.1), 0}. Since b.sub.n−a.sub.n≤U and c.sub.1−a.sub.1=(b.sub.1−a.sub.1)+m−W≥m, it follows that 0≤r.sub.n≤U−m.
(43) The jitter between packets i and j, r.sub.ij can be rewritten such as r.sub.ij=|(c.sub.i−c.sub.1)−(a.sub.i−a.sub.1)−(c.sub.j−c.sub.1)+(a.sub.j−a.sub.1)|=|r.sub.i−r.sub.j|. Since 0≤r.sub.i≤U−m and 0≤r.sub.j≤U−m, the jitter r.sub.ij≤U−m, for any i, j>0.
(44) For example, assume a flow requesting an end-to-end buffered latency upper bound of 10 ms and a jitter upper bound of 1 ms. From Theorem 1, (m+U−W)=10 ms, and from Theorem 3, (U−m)=1 ms. From these equations, U=5.5 ms+W/2 and m=4.5 ms+W/2. As such, during the call setup process, based on the flow's requested specifications, the network and the buffer may assign U=(5.5 ms+W/2) as the actual network latency upper bound and m=(4.5 ms+W/2) as the buffering parameter.
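The arithmetic of this example can be expressed directly; a minimal sketch (the function name and interface are illustrative, not from the disclosure):

```python
def assign_parameters(latency_bound, jitter_bound, W):
    """Solve m + U - W = latency_bound and U - m = jitter_bound for (U, m)."""
    U = (latency_bound + jitter_bound + W) / 2
    m = U - jitter_bound
    assert W <= m <= U, "requested bounds are infeasible for this W"
    return U, m

# The example from the text: 10 ms buffered latency bound, 1 ms jitter bound.
# This yields U = 5.5 ms + W/2 and m = 4.5 ms + W/2 for any feasible W.
print(assign_parameters(10.0, 1.0, 0.0))  # (5.5, 4.5)
```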
(45) As an extreme case, absolute synchronization may be desired, i.e., the inter-departure times of the output packets (c.sub.n−c.sub.n−1) are exactly the same as the inter-arrival times (a.sub.n−a.sub.n−1), so that the jitter equals 0 (zero). In this case, absolute synchronization is achieved by setting m=U; the buffered latency upper bound then becomes 2U−W, which is close to 2U when W is negligible.
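This zero-jitter case can be checked numerically; a minimal sketch, assuming uniformly random network latencies in [W, U] (the delay model is an assumption for illustration only):

```python
import random

def buffer_out(a, b, m, W):
    """Buffer-out instances c_n from arrival instances a and network-departure
    instances b, per the basic holding rule of the disclosure."""
    c = [b[0] + (m - W)]  # c_1 = b_1 + m - W
    for n in range(1, len(a)):
        c.append(max(b[n], c[0] + (a[n] - a[0])))
    return c

random.seed(7)
W, U = 1.0, 4.0
a = [2.0 * n for n in range(10)]              # packet arrivals to the network
b = [x + random.uniform(W, U) for x in a]     # network latencies within [W, U]
c = buffer_out(a, b, m=U, W=W)                # m = U: the zero-jitter setting

# Inter-departure intervals equal inter-arrival intervals: zero jitter.
assert all(abs((c[n] - c[0]) - (a[n] - a[0])) < 1e-9 for n in range(len(a)))
```

With m=U the second argument of the maximum always dominates, so every packet departs the buffer exactly (a.sub.n−a.sub.1) after the first one.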
(46) Hereinafter, a simple network with a small number of time-sensitive flows will be considered to check the performance of the present disclosure.
(47) The traffic is composed of three classes with characteristics summarized in Table 2.
(48) TABLE 2
Symbol   Traffic type         {Packet length, Maximum burst size, Arrival rate} of a flow
A        Audio                {2 Kbit, 2 Kbit, 1.6 Mbps}
V        Video                {12 Kbit, 360 Kbit, 11 Mbps}
C        Command & Control    {2.4 Kbit, 2.4 Kbit, 480 Kbps}
(49) The video flow emits a larger burst than the other types of flows, and the number of C&C flows is more than double the number of the other two types of flows combined.
(50) The flow characteristics are further simplified such that the audio flows (A flows) emit a 256-byte packet every 1.25 ms, the video flows (V flows) emit a 30*1500-byte burst with a 33 ms period, and the C&C flows (C flows) emit a 300-byte packet every 5 ms. As shown in
(51) Link capacities for all the links are set to 1 Gbps, which is common in Gigabit Ethernet nowadays. With the notation {Packet length, Max burst, Arrival rate} for a flow, the audio flows' parameter set is {256 B, 256 B, 256 B/1.25 ms}, equivalent to {2 Kbit, 2 Kbit, 1.6 Mbps}; the video flows' parameter set is {1500 B, 30*1500 B, 30*1500 B/33 ms}, equivalent to {12 Kbit, 360 Kbit, 11 Mbps}; and the C&C flows' parameter set is {300 B, 300 B, 300 B/5 ms}, equivalent to {2.4 Kbit, 2.4 Kbit, 480 Kbps}.
(52) Given the topology, the input flows and their destinations, three types of solutions that guarantee a latency upper bound are considered: the TSN synchronous approach, DiffServ, and the FAIR framework. The three types of traffic are assumed to have the same high priority.
(53) Considering the problems to be solved by the present disclosure, the FAIR or DiffServ framework, which guarantees the latency upper bound, may be selected.
(54) The FAIR framework may guarantee a 0.2164 ms latency upper bound; therefore, the parameter U equals 0.2164 ms. The sum of the packet transmission times in the four nodes, four times 2.4 μs, is the latency lower bound, so the parameter W equals 9.6 μs. If the parameter m is set equal to U, 0.2164 ms, then from Theorem 1 the end-to-end buffered latency upper bound is (m+U−W), which is equal to 0.4304 ms, and the jitter upper bound U−m is zero. That is, perfect synchronization can be achieved without the time-synchronization functions specified in the TSN standard, and simultaneously the latency upper bound requirement (10 ms in this case) is satisfied by a wide margin, which is a much better result than TSN.
(55) CPRI is expected to maintain maximum end-to-end latencies of less than 100 to 250 μs and jitter within 65 to 100 ns in a 10 G Ethernet network. Scaling down to a 1 G network, it is possible to infer a latency upper bound of about 1 to 2.5 ms and a jitter upper bound of 0.65 to 1 μs.
(56) If the jitter upper bound is set to 1 μs, then again from Theorems 1 and 3, with the FAIR framework, m=U−1 μs=0.2154 ms, m+U−W=0.4294 ms, and U−m=1 μs. As such, the latency upper bound of 0.4294 ms meets even the stringent CPRI-derived requirement of 1 ms.
(57) The DiffServ framework, or simple strict-priority scheduling for high-priority traffic with preemption, can guarantee 0.8584 ms. It is also possible to achieve zero jitter with the DiffServ framework, with a buffer, at the cost of roughly doubling the end-to-end buffered latency upper bound to 1.7144 ms. This is similar to the upper bound of the TSN synchronous approach, which is 1.712 ms. With a 1 μs jitter upper bound, the buffered latency upper bound is 1.7134 ms.
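The DiffServ figures above follow from Theorems 1 and 3; a quick check (taking W as 2.4 μs, the value consistent with the stated 1.7144 ms and 1.7134 ms bounds, is an inference rather than an explicit statement of the text):

```python
U = 0.8584   # ms, DiffServ latency upper bound for the high-priority traffic
W = 0.0024   # ms, assumed latency lower bound (inferred from the stated results)

m = U                                   # zero-jitter setting, m = U
assert round(m + U - W, 4) == 1.7144    # buffered latency upper bound, ms

m = U - 0.001                           # 1 us jitter upper bound, m = U - 1 us
assert round(m + U - W, 4) == 1.7134    # buffered latency upper bound, ms
```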
(58) Hereinafter, the result of simulating the network of
(59) In the simulation, all the flows emit their maximum burst at the start of the simulation, in order to observe the worst-case behavior at the early stage of the simulation. A single run of the simulation lasts 16.5 seconds. The C&C flows, the audio flows, and the video flows produce 3,300, 13,200, and 15,000 packets, respectively, in a single run, and 100 runs are performed. All C&C flows with the same input port and output port were observed, and the result was obtained from the latencies observed for 6,600,000 packets.
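The per-flow packet counts follow from the emission periods over the 16.5-second run; a quick check (the figure of 20 observed C&C flows is inferred from the 6,600,000 total, not stated explicitly in the text):

```python
RUN_MS = 16_500                      # one run lasts 16.5 seconds

cc_pkts    = RUN_MS // 5             # one 300 B packet every 5 ms
audio_pkts = int(RUN_MS / 1.25)      # one 256 B packet every 1.25 ms
video_pkts = (RUN_MS // 33) * 30     # a 30-packet burst every 33 ms

assert (cc_pkts, audio_pkts, video_pkts) == (3300, 13200, 15000)

# 100 runs of 3,300 packets per C&C flow; 6,600,000 observed packets
# implies 20 observed C&C flows.
assert 6_600_000 // (cc_pkts * 100) == 20
```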
(61) Referring to
(63) Referring to
(64) While the present disclosure has been described with reference to the embodiments illustrated in the figures, the embodiments are merely examples, and it will be understood by those skilled in the art that various changes in form and other equivalent embodiments are possible. Therefore, the technical scope of the disclosure is defined by the technical idea of the appended claims. The drawings and the foregoing description give examples of the present invention. The scope of the present invention, however, is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims.