Adaptive jitter buffer
09942119 · 2018-04-10
Assignee
Inventors
CPC classification
H04J3/0632
ELECTRICITY
H04L25/05
ELECTRICITY
H04M1/2535
ELECTRICITY
International classification
H04L7/00
ELECTRICITY
Abstract
The present disclosure relates to an adaptive jitter buffer for buffering audio data received via a network. The adaptive jitter buffer comprises an adaptive audio sample buffer, which comprises an adaptive resampler that receives a number of audio samples of the audio data and that outputs a first number of audio samples, which are resampled from the received number of audio samples according to a resampling factor, an audio sample buffer that buffers audio samples, wherein the outputted first number of audio samples are written to the audio sample buffer during an input access event and a second number of audio samples are read from the audio sample buffer during an output access event, and an audio sample buffer fill quantity controller that controls a fill quantity of the audio sample buffer based on controlling the resampling factor of the adaptive resampler.
Claims
1. An adaptive jitter buffer for buffering audio data received via a packet-switched network, comprising: adaptive-resampler circuitry that receives a number of audio samples of the audio data and that outputs a first number of audio samples, which are resampled from the received number of audio samples according to a resampling factor, an audio sample buffer that buffers audio samples, wherein the outputted first number of audio samples are written to the audio sample buffer during an input access event and a second number of audio samples are read from the audio sample buffer during an output access event, and audio-sample-buffer-fill-quantity-controller circuitry that controls a fill quantity of the audio sample buffer, wherein the audio-sample-buffer-fill-quantity-controller circuitry controls the fill quantity of the audio sample buffer based on controlling the resampling factor of the adaptive-resampler circuitry.
2. The adaptive jitter buffer according to claim 1, wherein the audio-sample-buffer-fill-quantity-controller circuitry comprises: audio sample buffer fill quantity estimator circuitry that estimates an average instantaneous fill quantity of the audio sample buffer during an observation event, jitter estimator circuitry that estimates a jitter, audio sample buffer target fill quantity determiner circuitry that determines a target fill quantity of the audio sample buffer in dependence of the estimated jitter, and adaptive-resampler-controller circuitry that controls the resampling factor of the adaptive-resampler circuitry such that the fill quantity of the audio sample buffer approaches the determined target fill quantity of the audio sample buffer.
3. The adaptive jitter buffer according to claim 2, wherein the audio sample buffer fill quantity estimator circuitry estimates the average instantaneous fill quantity of the audio sample buffer during the observation event based on calculating a weighted average of the fill quantities of the audio sample buffer between pairs of temporally adjacent access events that occurred between the observation event and a temporally adjacent previous observation event.
4. The adaptive jitter buffer according to claim 3, wherein the calculation of the weighted average includes weights that depend on the normalized temporal distances between the pairs of temporally adjacent access events.
5. The adaptive jitter buffer according to claim 3, wherein a number of input access events between the observation event and the temporally adjacent previous observation event is in the range between 2 and 20.
6. The adaptive jitter buffer according to claim 2, wherein the audio sample buffer target fill quantity determiner circuitry determines the target fill quantity of the audio sample buffer based on determining a fill quantity variance corridor of the audio sample buffer, wherein the target fill quantity of the audio sample buffer is determined to be located within the fill quantity variance corridor of the audio sample buffer.
7. The adaptive jitter buffer according to claim 6, wherein the audio sample buffer target fill quantity determiner circuitry determines the fill quantity variance corridor of the audio sample buffer based on determining a first portion of the fill quantity variance corridor of the audio sample buffer independent of a jitter and a second portion of the fill quantity variance corridor of the audio sample buffer in dependence of the estimated jitter.
8. The adaptive jitter buffer according to claim 2, wherein the audio sample buffer target fill quantity determiner circuitry determines the target fill quantity of the audio sample buffer based on determining a safety distance between the fill quantity variance corridor of the audio sample buffer and a buffer underrun condition of the audio sample buffer.
9. The adaptive jitter buffer according to claim 2, wherein the audio sample buffer target fill quantity determiner circuitry performs a temporal smoothing of the determined target fill quantity of the audio sample buffer.
10. The adaptive jitter buffer according to claim 2, wherein the adaptive-resampler controller circuitry controls the resampling factor of the adaptive-resampler circuitry based on determining a corridor surrounding the determined target fill quantity of the audio sample buffer, wherein the control is performed differently depending on whether the estimated average instantaneous fill quantity of the audio sample buffer is inside or outside the corridor.
11. The adaptive jitter buffer according to claim 2, wherein the adaptive-resampler controller circuitry controls the resampling factor of the adaptive-resampler circuitry such that the outputted first number of audio samples is effectively a fractional number.
12. The adaptive jitter buffer according to claim 2, wherein the audio data comprises a first audio channel and a second audio channel, wherein the adaptive-resampler circuitry resamples audio samples from the first audio channel and audio samples from the second audio channel, and wherein the adaptive-resampler controller circuitry controls the resampling factor of the adaptive-resampler circuitry to be the same for the audio samples from the first audio channel and the audio samples from the second audio channel.
13. The adaptive jitter buffer according to claim 1, further comprising: a packet buffer that reorders packets, which comprise an encoded version of the audio data, when the packets are received in an incorrect temporal order via the packet-switched network.
14. The adaptive jitter buffer according to claim 13, further comprising: an audio data decoder that decodes the encoded version of the audio data, wherein the audio data decoder comprises a first audio data loss concealer that artificially generates audio data when a packet was lost or could not be reordered.
15. The adaptive jitter buffer according to claim 14, further comprising a second audio data loss concealer that artificially generates audio data samples when a buffer underrun condition of the audio sample buffer occurs.
16. A device for receiving audio data, comprising: an input that receives the audio data; an adaptive jitter buffer that includes an adaptive audio sample buffer, wherein the adaptive audio sample buffer comprises: first circuitry that receives a number of audio samples of the audio data and that outputs a first number of audio samples, which are resampled from the received number of audio samples according to a resampling factor, an audio sample buffer that buffers audio samples, wherein the outputted first number of audio samples are written to the audio sample buffer during an input access event and a second number of audio samples are read from the audio sample buffer during an output access event, and second circuitry that controls the fill quantity of the audio sample buffer, wherein the second circuitry controls the fill quantity of the audio sample buffer based on controlling the resampling factor of the first circuitry; and an output that performs audio playback of the audio data by requesting the second number of audio samples via the output access event.
17. A method to buffer audio data received via a network, comprising: receiving audio samples of the audio data; sampling a first number of audio samples from the received audio samples based on a resampling factor; storing the first number of audio samples in an audio sample buffer during an input access event, the audio sample buffer having a fill quantity defined as a current number of audio samples stored in the audio sample buffer; outputting a second number of audio samples from the audio sample buffer during an output access event; and modifying the fill quantity of the audio sample buffer by modifying the resampling factor such that the fill quantity approaches a target fill quantity that is based on an estimated delay of adjacent input access events and adjacent output access events.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
(1) These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter. In the following drawings:
DETAILED DESCRIPTION
(14) Embodiments described herein are for an adaptive jitter buffer. It is noted that the focus of this description will be primarily on audio aspects, because humans are more sensitive to temporal delays and signal interruptions in audio signals than in video signals.
(15) The proposed adaptive jitter buffer is suitable for different kinds of VOIP softphones and terminal devices. However, in some embodiments, the selection of the adaptive resampler may be designed to focus on the preservation of binaural cues in the context of binaural telephony.
(16) Connected Terminal Devices and End-to-End Delay in VOIP Communication
(17) In VOIP based communication, two audio devices (devices A and B in
(18) Naturally, instead of a simple peer-to-peer communication, more than two devices may be involved. For the sake of simplicity, however, the focus will be on a peer-to-peer audio communication link within this specification.
(19) Sometimes there is a temporal delay between the moment when the one communication partner emits an audio signal and the moment when the other partner listens to the emitted signal. This delay is caused by numerous technical procedures such as, e.g., buffering of audio samples, transmission delays, and audio signal processing.
(20) Typically, a VOIP communication is full duplex, that is, the communication partners on both sides can interactively interrupt (whether this happens by intention or not) the connected partner by talking while the other partner talks. In this context, a communication session can be degraded in terms of communication quality if the end-to-end delay of the VOIP communication session is too high. If that is the case, the number of verbal interruptions increases and the communication partners tend to perceive the communication as annoying and may even end the communication. As a result, designing a VOIP terminal application to achieve a low end-to-end delay is desirable.
(21) The Nature and the Impact of Network Jitter
(22) VOIP transmission schemes, in most cases, rely on the so-called User Datagram Protocol (UDP). However, due to the widespread use of firewalls and routers in local private home networks, applications may also employ the Transmission Control Protocol (TCP). Developers of a VOIP application often design the underlying protocol to be hidden from the users and negotiated by the VOIP application without the need for any user interaction.
(23) In both cases, UDP and TCP, due to the non-deterministic nature of the transport medium, the world-wide web (WWW), packets emitted by one side of the communication usually arrive in time, but may also arrive with a significant delay (denoted as the network jitter). In the case of UDP, packets may also get lost during the transmission (denoted as a frame loss). If a packet does not arrive in time, audible artifacts may occur due to gaps in the audio signal caused by the loss of the audio samples comprised by the lost packet.
(24) The impact of network jitter is demonstrated in the following by comparing
(25) In
(26) As shown in the figure, audio playback and the arrival of packets may be realized in independent threads, denoted here as audio thread and receive thread.
(27) The packets arrive from the network in the context of the receive thread periodically. The incoming packets are denoted as packets 1, 2, 3, 4 and 5 in the example shown in
(28) After arrival, the packets are transferred to the audio thread, in which the audio samples, which were recorded on the side of the connected partner, are reconstructed to provide a continuous stream of audio data that is played back by a loudspeaker. In most applications, the audio thread may also work based on packets of audio samples, because the VOIP terminal is operated in a non-real-time system such as a PC, laptop, or mobile handheld device. It is noted that the sizes of the packets arriving via the network and of the packets that are used to feed data to the loudspeaker are in general not equal, which, however, is neglected for the sake of simplicity here.
(29) No problem arises in the example shown in
(30) In
(31) In this case, packet 3 has been transmitted via a different route over the transport medium WWW compared to packets 2 and 4. As a consequence, packet 3 arrives after packet 4. Without use of any jitter buffer, the playback can proceed until the stream of samples can no longer be continued at the moment at which packet 3 should be played back. The consequence is that an acoustic artifact (dropout) may be audible.
(32) In order to compensate for the shown packet delays, a certain budget of audio samples can be stored at the receiver side, which is realized in the jitter buffer. The larger this storage of audio samples is, the higher can be the delay which can be compensated without the occurrence of any audible acoustic artifacts. However, if the storage is too large, the end-to-end delay of the communication becomes too high, diminishing the overall communication quality.
(33) The network jitter characteristics observed in real applications are in general strongly time-varying. An example with strongly variable network jitter is a typical WIFI router used in many households nowadays. Often, packets are not transmitted via the WIFI transmission link for a couple of hundred milliseconds if a microwave oven producing disturbances in the frequency band used by WIFI is operated, or if a Bluetooth link is used in parallel. Therefore, a suitable jitter buffer should be managed and should adapt to the instantaneous network quality observed by the VOIP communication application. Such a jitter buffer is denoted as an adaptive jitter buffer in this specification.
(34) Other Causes for Jitter
(35) When targeting the development of a VOIP softphone, the described network jitter is usually mentioned as the main source of packet delays. However, other sources of jitter can be observed in practice:

Soundcard jitter: Most of today's VOIP terminal applications are realized on PCs, laptops, or mobile handheld devices, such as smartphones, which have in common that the operating system is not suited for applications with hard real-time constraints, because response times cannot be guaranteed. In order to overcome this problem for audio processing (which relies on response times to avoid audio dropouts), audio samples are grouped to form buffers used for audio playback and recording, which should be large enough to avoid audio dropouts. The operating system typically requests these buffers to be filled or emptied by the application on a regular time basis. However, requests for new buffers for playback and recording may not occur on a regular time basis. This behavior is denoted as the soundcard jitter in this specification.

Sample rate drifts: Given two independent clock rates on the two sides involved in a VOIP communication, there is always a slight drift in sample rates. This drift is generally due to slight deviations of the oscillator frequency in the employed hardware components. On PCs, laptops, and smartphones, however, the range of combinations of sample rates and processing buffer sizes is in some cases limited. Since the number of audio samples in a buffer is assumed to be constant and of integer value, there might be cases in which the effective sample rate, which is determined by the relation of the input and output buffer sizes of the involved fixed resamplers, is slightly different from the intended sample rate.
(36) The isolated measurement of network and soundcard jitters as well as sample rate drifts is hard to achieve, since the symptoms of these characteristics are identical.
(37) Definition of Terms
(38) At first, some terms shall be defined: Circular buffer: Normally, a buffer that is used in audio applications contains some kind of data memory that is accessed in a FIFO manner. In the adaptive jitter buffer, this memory may be realized as a circular buffer for storing audio samples (see
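As a rough illustration of such a FIFO circular buffer for audio samples, one might sketch the following (the class name, the mono layout, and the use of float samples are assumptions for illustration, not taken from the source):

```python
from array import array

class CircularAudioBuffer:
    """FIFO ring buffer for PCM audio samples (illustrative sketch, mono float)."""

    def __init__(self, capacity: int):
        self._buf = array("f", [0.0] * capacity)
        self._capacity = capacity
        self._read = 0   # index of the next sample to read
        self._count = 0  # current fill quantity in samples

    @property
    def fill_quantity(self) -> int:
        return self._count

    def write(self, samples) -> bool:
        """Input access event: append samples; returns False on overrun."""
        if self._count + len(samples) > self._capacity:
            return False  # buffer overrun condition
        for s in samples:
            self._buf[(self._read + self._count) % self._capacity] = s
            self._count += 1
        return True

    def read(self, n: int):
        """Output access event: remove and return n samples; None on underrun."""
        if n > self._count:
            return None  # buffer underrun condition; concealment would act here
        out = [self._buf[(self._read + i) % self._capacity] for i in range(n)]
        self._read = (self._read + n) % self._capacity
        self._count -= n
        return out
```

The fill quantity of this structure is exactly the quantity that the controller described later tries to steer toward a target value.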
Temporal events: The adaptive jitter buffer proposed in this specification makes use of an audio sample buffer that is influenced from within three independent threads. Any action to make use of the audio sample buffer from within each thread is a temporal event, during which the audio sample buffer is accessed or specific states of the audio sample buffer are observed and estimated. The following temporal events are considered:

Input access event K.sub.in: During an input access event, audio samples are written to the audio sample buffer. This event is caused by the arrival of a packet from the IP network. The mathematical notation for the time shall in the following be given as t.sub.in(K.sub.in). Here, it is assumed that during each input access event, a specific number of N.sub.in samples is transferred to the audio sample buffer, which depends on the frame size related to the packet that arrives from the network. Sequences of input access events are denoted as K.sub.in,i with i=0, 1, 2, . . . .

Output access event K.sub.out: An output access event occurs if the audio playback functionality requests audio samples from the audio sample buffer for audio playback. The mathematical term for the time of a specific output access event shall be t.sub.out(K.sub.out). It is assumed that a specific number of N.sub.out samples is transferred from the audio sample buffer to the operating system for playback during each output access event. Sequences of output access events are denoted as K.sub.out,i with i=0, 1, 2, . . . .

Observation event K.sub.obs: An observation event is a reoccurring, e.g., periodic, event, during which temporal changes of the audio sample buffer fill quantity and the network status are analyzed. The mathematical term for the time at which an observation event occurs shall be t.sub.obs(K.sub.obs). Sequences of observation events are denoted as K.sub.obs,i with i=0, 1, 2, . . . .
Note that since the described access events do not necessarily occur periodically, the basis for the analysis of temporal aspects in the control and analysis of the audio sample buffer are sequences of timestamps t.sub.in(K.sub.in), t.sub.out(K.sub.out), and t.sub.obs(K.sub.obs).
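Given such timestamp sequences, a very simple proxy for the network jitter can be computed from the deviations of packet inter-arrival times from the nominal frame period. This helper is illustrative only and is not the estimator described later in this specification:

```python
def interarrival_jitter(t_in, nominal_period):
    """Deviation of consecutive input-event inter-arrival times from the
    nominal frame period. Values near zero indicate an undisturbed link;
    large positive values indicate delayed packets (network jitter)."""
    deltas = [b - a for a, b in zip(t_in, t_in[1:])]
    return [d - nominal_period for d in deltas]
```

For example, timestamps [0.0, 0.020, 0.040, 0.075] with a 20 ms nominal period yield deviations of roughly [0, 0, 15 ms], exposing the late third inter-arrival.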
Special Considerations for Binaural Telephony (HD-Audio-3D)
(39) HD-Audio-3D (also denoted as binaural telephony) is expected by many to be the next emerging technology in communication. The benefit of HD-Audio-3D in comparison to conventional HD-Voice communication lies in the use of a binaural instead of a monaural audio signal: Audio contents are captured and played back by binaural terminals involving at least two microphones and two speakers, yielding an exact acoustical reproduction of what the remote communication partner really hears. Binaural telephony is listening to the audio ambience with the ears of the remote speaker. The pure content of the recorded speech is extended by the capturing of the acoustical ambience. In contrast to the transmission of stereo contents, which allows a left-right location of sound sources, the virtual representation of room acoustics in binaural signals is based on differences in the time of arrival of the signals reaching the left and the right ear as well as attenuation and filtering effects caused by the human head, the body, and the ears, allowing the location of sources also in the vertical direction.
(40) In order to allow a realistic reproduction of the acoustic ambience, which is the goal in binaural telephony, all signal processing components involved in the transmission of the audio content from the recording to the playback side should preserve the binaural cues inherent to the audio signal recorded with a sophisticated binaural recording device. In this context, the binaural cues are defined as the characteristics of the relations between the two channels of the binaural signal, which are commonly mainly expressed as the Interaural Time Differences (ITD) and the Interaural Level Differences (ILD) (see J. Blauert. Spatial hearing: The psychophysics of human sound localization. The MIT press, Cambridge, Mass., 1983).
(41) The ITD cues influence the perception of the spatial location of acoustic events at low frequencies due to the time differences between the arrival of an acoustic wavefront at the left and the right human ear. Often, these cues are also denoted as phase differences between the two channels of the binaural signal. Human perception is rather sensitive to these cues, and even a very slight shift of a fraction of a millisecond between the left and the right signal has a significant impact on the perceived location of an acoustic event. This is rather intuitive, because with the known speed of sound of approximately c=340 m/s, a wavefront typically propagates from one ear to the other in approximately 0.7 milliseconds (distance approximately 25 cm).
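The quoted figure can be checked directly from the stated approximations:

```python
c = 340.0                 # approximate speed of sound in m/s (from the text)
d = 0.25                  # approximate ear-to-ear propagation distance in m
itd_ms = d / c * 1000.0   # maximum interaural time difference in milliseconds
# itd_ms is roughly 0.74 ms, consistent with the ~0.7 ms quoted above
```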
(42) In contrast to this, the ILD binaural cues have a strong impact on the human perception at high frequencies. The ILD cues are due to the shadowing and attenuation effects caused by the human head given signals arriving from a specific direction: The level tends to be higher at that side of the head which points into the direction of the origin of the acoustic event.
(43) Design Goals for Adaptive Jitter Buffers
(44) Design goals for the adaptive jitter buffer exemplarily described in the following are: Avoid underrun conditions to avoid audio artifacts. Avoid overrun conditions to avoid audio artifacts. Avoid a high average buffer fill quantity to achieve operation with a minimum end-to-end delay. Do not lose any parts of the audio signal; it is better to increase the delay for a short moment than to lose important parts of a conversation. If a buffer fill quantity adaptation is used, it should not be audible. Binaural cues should be preserved (if the application is to be suitable for HD-Audio-3D).
(45) In summary, the adaptive jitter buffer should be controlled such as to avoid losses of audio samples given the minimum necessary fill quantity (and therefore delay) in all conditions.
(46) Involved Jitter Model
(47) In some embodiments, a model of the jitter (e.g., the network jitter) may be realized using a variety of different methods, see, e.g., M. Yajnik et al., Measurement and modelling of the temporal dependence in packet loss, in proceedings of IEEE Joint Conference on the IEEE Computer and Communications Societies, New York, N.Y., USA, vol. 1, pages 342 to 352, March 1999.
(48) It may not be possible to exactly describe the circumstances often found in practice by means of a mathematical model, because the actual behavior may depend on the realizations of functionality in routers, operating systems, drivers, PCs, etc.
(49) Nevertheless, both good situations, in which almost no jitter can be observed, and bad situations, in which a large portion of the packets is lost, may be observed. Certainly, a more accurate classification would be useful to operate most effectively in all situations.
(50) Here, only a simple model is assumed: If packet losses start to occur, it is likely that this state does not change immediately. As a conclusion, given that a buffer underrun has occurred, it is likely that additional underruns will be seen, which should be prevented or reduced. And if, after a phase with many losses, a good state is observed for a specific time, it is assumed to be likely that the network will remain in a good state, with low network jitter and few losses, for a longer time.
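This two-state assumption might be sketched as a small state machine. The hold time before trusting the link again is an illustrative tuning parameter, not a value taken from the source:

```python
class SimpleJitterModel:
    """Two-state heuristic matching the assumption above: after a loss or
    underrun the network is treated as 'bad' until a clean phase of
    GOOD_HOLD seconds has been observed."""

    GOOD_HOLD = 10.0  # assumed: seconds of loss-free operation before returning to 'good'

    def __init__(self):
        self.state = "good"
        self._last_loss = None

    def report_loss(self, now: float) -> None:
        # A loss just occurred: assume further losses are likely.
        self.state = "bad"
        self._last_loss = now

    def report_ok(self, now: float) -> None:
        # A packet arrived in time: return to 'good' only after a sustained clean phase.
        if self.state == "bad" and now - self._last_loss >= self.GOOD_HOLD:
            self.state = "good"
```

A buffer controller could, for instance, keep a larger safety margin while the model reports the "bad" state.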
(51) Top-Down Functionality Description
(52) The proposed adaptive jitter buffer shall be described in the following according to a top-down procedure. The setup and the involved adaptive jitter buffer were already shown in
(53) In
(54) Two main components are shown in the figure: an adaptive packet buffer 20 (denoted as adaptive buffer reordering/loss detection block in the figure) and an adaptive audio sample buffer 50. The adaptive buffer reordering/loss detection block 20 is used in cases where the communication terminal is operated such that audio data may arrive in the wrong order. On the one hand, it should buffer incoming packets to compensate for the case where packets arrive in the wrong order. On the other hand, it should not buffer too long, in order to reduce the end-to-end delay, and it should detect if packets arrive outside the temporal horizon in which a reordering is possible. As a result, the adaptive buffer reordering/loss detection block 20 may be utilized if the VOIP communication involves the UDP transport protocol. In TCP, packets may arrive late but are generally never lost.
(55) Packets which have arrived are fed into the audio data decoder 30 (denoted as decoder/frame loss concealment block in the figure) to be transformed into a sequence of samples (conversion to PCM block 40 in the figure). From here, the samples are transferred to the adaptive audio sample buffer 50. In case that a packet is lost or arrives outside the temporal horizon in which a buffer reordering would be possible, instead of decoding a frame in the decoder/frame loss concealment block 30, the decoder is requested to perform a frame loss concealment (or audio data loss concealment), in which it generates replacement audio data. Even in this case, the output from the decoder/frame loss concealment block 30 is fed into the adaptive audio sample buffer block (via the conversion to PCM block 40).
(56) If a new frame of audio samples is requested by the audio thread, samples are taken from the adaptive audio sample buffer 50. In case of a buffer underrun, the adaptive audio sample buffer 50 cannot provide the requested audio samples. In this case, the blind frame loss concealment block 60 (also denoted as second audio data loss concealer in this specification) at the output side of the adaptive audio sample buffer 50 acts to produce audio samples to conceal the underrun.
(57) It is important to distinguish between the decoder-based frame loss concealment in the decoder/frame loss concealment block 30 and the blind frame loss concealment in the blind frame loss concealment block 60 at the output side of the adaptive audio sample buffer 50. In general, a decoder can perform very efficient frame loss concealment, since internal states of the decoder engine are available that can support the creation of a signal mimicking the signal within the lost frame to hide interruptions of the signal. In contrast to this, the blind frame loss concealment block 60 performs frame loss concealment with no additional information about the signal to be produced; it creates something from nothing and is therefore denoted as blind.
(58) The decoder/frame loss concealment block 30 acts whenever it is obvious that a frame will not arrive in time. It is limited to the length of the sequence that would have been produced by the frame had it arrived. In contrast to this, the blind frame loss concealment in the blind frame loss concealment block 60 acts if there is a buffer underrun in the adaptive audio sample buffer 50.
(59) A configuration of the contribution of delay to the adaptive buffer reordering/loss detection block 20 and the adaptive audio sample buffer 50 depends on the application and the network constraints and should be realized such that it adapts to the measured network jitter and condition of the transfer medium.
(60) The actual realization of the decoder and the frame loss concealment to be executed within the decoder/frame loss concealment block 30 should be according to the type of application to be realized. The adaptive jitter buffer 10 may be efficient if it has full control of the adaptive audio sample buffer 50 and the decoder/frame loss concealment block 30, because this allows decoding packets and performing the frame loss concealment whenever best to avoid buffer underruns and to achieve a minimum end-to-end delay.
(61) The functional blocks of the adaptive audio sample buffer 50 are shown in
(62) On the right side of the figure, audio samples arrive, in the context of the input access event, at the adaptive audio sample buffer 50 from the decoder/frame loss concealment block 30 to be fed into an audio sample buffer 52, which is realized, in this example, as a circular buffer. At the input, an adaptive resampler 51 applies a waveform modification in order to influence the fill quantity of the audio sample buffer 52. The adaptive resampler 51 is controlled by specifying a resampling factor .sub.rs, which describes the relation between the number of audio samples received by the adaptive resampler 51 and the number of audio samples output by the adaptive resampler 51. If the fill quantity of the audio sample buffer 52 shall be reduced, the number of received audio samples is greater than the number of output audio samples, whereas the relation is inverted if the fill quantity of the audio sample buffer 52 is to be increased.
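A minimal resampler obeying this input/output relation might look as follows. Linear interpolation is a placeholder assumption; a resampler intended to preserve binaural cues, as discussed above, would typically use a higher-quality polyphase filter instead:

```python
def resample_linear(samples, factor):
    """Output approximately len(samples) * factor samples via linear
    interpolation. factor > 1 produces more output samples than were
    received (raising the buffer fill quantity); factor < 1 produces
    fewer (draining it), matching the relation described above."""
    n_in = len(samples)
    n_out = max(1, round(n_in * factor))
    out = []
    for k in range(n_out):
        # Position of output sample k on the input time axis.
        pos = k * (n_in - 1) / max(1, n_out - 1)
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, n_in - 1)]
        out.append(samples[i] * (1.0 - frac) + frac * nxt)
    return out
```

With factor 1.0 the signal passes through unchanged; small deviations from 1.0 slowly stretch or shrink the waveform without audible pitch steps.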
(63) On the left side, audio samples are taken from the audio sample buffer 52 to be passed to the audio playback routines in the context of an output access event. In some embodiments, a number of samples to be destroyed may be specified, which are then removed from the audio sample buffer 52 in order to reduce its fill quantity quickly.
(64) In this example, both input as well as output access events occur in independent threads asynchronously. In addition, an average instantaneous fill quantity of the audio sample buffer 52 is determined by observing the instantaneous fill quantity thereof on a regular basis in the context of an observation event.
(65) In order to measure the temporal evolution of the fill quantity of the audio sample buffer 52, every single input, output and observation event is assigned to a specific timestamp.
(66) Given that a buffer underrun has occurred, an indicator is fed into the blind frame loss concealment block 60 to artificially produce and output audio samples to prevent audible audio artifacts.
(67) Another element of the adaptive audio sample buffer 50 in
(68) In this example, the audio sample buffer fill quantity controller 53 performs the following tasks: Estimate an average instantaneous fill quantity of the audio sample buffer 52, using an audio sample buffer fill quantity estimator 54. Estimate a jitter, using a jitter estimator 55. Determine a target fill quantity of the audio sample buffer 52 in dependence of the estimated jitter, using an audio sample buffer target fill quantity determiner 56. Provide updates of the resampling factor .sub.rs, and, in some embodiments, of the number of samples to be destroyed, using an adaptive resampler controller 57.
(69) These tasks and the respective blocks as well as the adaptive resampler 51 will be described in detail in the following.
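The last of these tasks, updating the resampling factor so that the fill quantity approaches the target, can be caricatured as a clamped proportional controller. The gain and the limits below are invented for illustration; the control actually described in this specification uses a corridor around the target fill quantity rather than a plain proportional law:

```python
def update_resampling_factor(avg_fill, target_fill, gain=0.0001,
                             limits=(0.98, 1.02)):
    """Illustrative proportional update: nudge the resampling factor so
    the buffer fill quantity drifts toward the target. If the buffer is
    too empty, the factor exceeds 1.0 (more output samples, fill rises);
    if too full, it drops below 1.0 (fewer output samples, fill falls).
    The clamp keeps the waveform modification inaudible."""
    factor = 1.0 + gain * (target_fill - avg_fill)
    return max(limits[0], min(limits[1], factor))
```

For example, with a target of 480 samples and an estimated average fill of 400 samples, the factor becomes 1.008, gently refilling the buffer.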
(70) Realization of the Average Instantaneous Fill Quantity Estimation
(71) Measuring an average instantaneous fill quantity of the audio sample buffer 52 may be a complex task because, from a signal processing point of view, the measured values of the fill quantity during the input and output access events can be interpreted as severely disturbed signals, i.e., audio samples are written to and taken from the audio sample buffer 52 as frames of samples at specific moments in time and not continuously over time.
(72) In order to explain this in more detail, plots are given in the figures.
(73) The plot shown in
(74) A more complex and realistic example is given in
(75) In this example, the average instantaneous fill quantity of the audio sample buffer 52 slowly decreases resulting in buffer underrun conditions after a while. Recall that a buffer underrun condition occurs if there are not enough audio samples in the audio sample buffer 52 to fill a complete output buffer given that an output access event occurs.
(76) In
(77) In all examples, the curve of the possible fill quantities of the audio sample buffer 52 is located in a corridor.
(78) In this example, in order to determine the average instantaneous fill quantity of the audio sample buffer 52, during every input and output access event, the time period for which the fill quantity of the audio sample buffer 52 has remained in its current state before this access event is stored in a circular buffer. This circular buffer (not shown in the figures) is referred to in the following as the fill quantity short-term memory.
(79) Given a specific event K_ev, with K_ev representing either an input or an output access event, K_ev = K_in or K_ev = K_out, the instantaneous fill quantity of the audio sample buffer 52 present when starting the access event (prior to any output or input access operation) is F(K_ev).
(80) The time period for which the fill quantity of the audio sample buffer 52 was constant is then derived from the most recent timestamp among all access events K_past that occurred in the past, either in the context of an input or an output access event. In this example, it is computed as:
Δt(K_ev) = t_ev(K_ev) − max(t_past(K_past)).
(81) The term K_past, in this context, represents all access events K_in,i and K_out,i that occurred before the current access event K_ev. An efficient realization of this operation can be achieved by simply storing the timestamp of each input and output access event to be propagated from access event to access event.
(82) Between different observation events, all pairs of fill quantities F(K_ev) and time periods Δt(K_ev) are stored in the fill quantity short-term memory.
(83) Once an observation event occurs, all pairs of entries in the fill quantity short-term memory are taken into account to derive the average instantaneous fill quantity of the audio sample buffer 52 as
(84) F̄(K_obs) = ( Σ_i F(K_ev,i) · Δt(K_ev,i) ) / ( Σ_i Δt(K_ev,i) ),
with K_ev,i representing the pairs ⟨F(K_ev,i), Δt(K_ev,i)⟩ of fill quantities and time periods stored in the fill quantity short-term memory for all access events since the last observation event.
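The time-weighted averaging over the pairs stored in the fill quantity short-term memory can be sketched as follows; this is an illustrative reading of the described estimator, with hypothetical names:

```python
def average_fill_quantity(pairs):
    """Time-weighted average of the instantaneous fill quantity.

    pairs: list of (fill_quantity, time_period) tuples collected in the
    fill quantity short-term memory since the last observation event,
    where time_period is how long the buffer stayed at fill_quantity.
    """
    total_time = sum(dt for _, dt in pairs)
    if total_time == 0.0:
        # no access events since the last observation event
        return 0.0
    return sum(f * dt for f, dt in pairs) / total_time
```

For example, a buffer that held 100 samples for 20 ms and 200 samples for 20 ms averages to 150 samples.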
(85) In some embodiments, the event rates for observation events as well as for input and output access events should be chosen such that multiple input and output access events occur within the time period between temporally adjacent observation events. However, care should be taken not to introduce significant delays in the evaluation of the instantaneous fill quantity of the audio sample buffer 52, in order to avoid unstable controller behavior in the adaptive resampler controller 57.
(86) Realization of the Jitter Estimation
(87) An estimation of the jitter shall be described next. It involves different jitter classifications. In particular, it distinguishes between a long-term jitter J_long, a considerable jitter J_cons, and an instantaneous jitter J_inst. In the following, jitters are estimated in units of seconds.
(88) The overall jitter of the system typically comprises network jitter and soundcard jitter. Sample rate drifts are not detected as jitters but may be compensated by the adaptive resampler controller 57 implicitly. The estimate of the instantaneous jitter is derived from the timestamps given for the input or output access events. Between temporally adjacent input and output access events, the time period that has passed is measured as:
Δt(K_in,i) = t_in(K_in,i) − t_in(K_in,i−1)
for each input access event and
Δt(K_out,i) = t_out(K_out,i) − t_out(K_out,i−1)
for each output access event.
(89) This per-frame time period is transformed into a jitter estimate by subtracting the expected temporal delay between adjacent access events. This expected delay is computed based on the buffer size N.sub.out for output access events and N.sub.in for input access events as well as the sample rate f.sub.S. (It is noted that it is assumed that the input and output sample rate of the jitter buffer are identical. If another sample rate conversion is utilized here, it may be handled outside the audio sample buffer 52):
(90) J(K_in,i) = Δt(K_in,i) − N_in / f_S
for input access events and
(91) J(K_out,i) = Δt(K_out,i) − N_out / f_S
for output access events.
(92) The overall instantaneous jitter is then computed as the sum of jitters from both the input access events and the output access events as
(93) J_inst = J(K_in,i) + J(K_out,i)
(94) In that term, the output jitter is due to the soundcard jitter of the local audio device whereas the input jitter covers the network jitter and the soundcard jitter introduced by the audio device on the side of the connected communication partner.
(95) Given an instantaneous jitter J_inst for a specific access event, the long-term jitter can be updated as
(96) J_long = max(J_long, J_inst).
(97) In parallel to this, each instantaneous jitter value J_inst is fed into a long-term jitter analyzer. This analyzer collects instantaneous jitter values for long time periods such as, e.g., 5 seconds. After this time period has expired, the maximum value which occurred during this long-term time frame is then used to replace the current value of the long-term jitter J_long as
(98) J_long = max_i ( J_inst,i ),
with the maximum taken over all instantaneous jitter values observed within the long-term time frame.
(99) As a consequence, on the one hand, the estimated long-term jitter J_long may quickly follow larger values in case a new maximum instantaneous jitter J_inst is observed. On the other hand, the application may wait for a long time period (e.g., 5 seconds) during which the maximum jitter values are observed and, if suitable, reduce the maximum jitter values accordingly.
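The combination of instantaneous jitter measurement, the quick max-tracking update, and the windowed replacement described above can be sketched as follows. This is a simplified, single-event-stream illustration (the disclosure tracks input and output access events separately); the class and parameter names are assumptions:

```python
class JitterEstimator:
    """Estimates instantaneous and long-term jitter from event timestamps."""

    def __init__(self, frame_size, sample_rate, window=5.0):
        self.expected = frame_size / sample_rate  # expected inter-event delay
        self.window = window        # long-term analysis period in seconds
        self.last_ts = None
        self.window_start = None
        self.window_max = 0.0
        self.j_long = 0.0

    def on_access_event(self, ts):
        j_inst = 0.0
        if self.last_ts is not None:
            # jitter = measured inter-event delay minus expected delay
            j_inst = (ts - self.last_ts) - self.expected
        self.last_ts = ts
        # quickly follow new maxima ...
        self.j_long = max(self.j_long, j_inst)
        # ... but replace by the windowed maximum once the window expires,
        # which lets the estimate decay again after quiet periods
        if self.window_start is None:
            self.window_start = ts
        self.window_max = max(self.window_max, j_inst)
        if ts - self.window_start >= self.window:
            self.j_long = self.window_max
            self.window_start = ts
            self.window_max = 0.0
        return j_inst
```

With 160-sample frames at 16 kHz, an event arriving 12 ms after its predecessor yields an instantaneous jitter of 2 ms.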
(100) Derivation of the Target Fill Quantity of the Audio Sample Buffer 52
(101) For the derivation of the target fill quantity of the audio sample buffer 52, at first, a fill quantity variance corridor is defined, in which the curve of the fill quantities of the audio sample buffer 52 is expected to be located in the future, given the currently measured long-term jitter J_long and the system setup.
(102) In this example, the fill quantity variance corridor is composed of two parts following
(103) ΔF = ΔF_0 + ΔF_jit,
with ΔF_0 as the fill quantity variance corridor of the audio sample buffer 52 given for the special case that there is absolutely no jitter.
(104) The other part in the equation to compute the overall fill quantity variance corridor is ΔF_jit. It depends on the previously measured long-term jitter J_long. It is composed of a maximum increase of the fill quantity of the audio sample buffer 52, which depends on the buffer size N_in in case of an input access event, and a maximum decrease of the fill quantity of the audio sample buffer 52, which depends on the buffer size N_out in case of an output access event, both of which may occur during the time of the maximum jitter J_long,
(105) ΔF_jit = γ_in · ⌈ (J_long · f_S) / N_in ⌉ · N_in + γ_out · ⌈ (J_long · f_S) / N_out ⌉ · N_out.
(106) In this context, γ_in and γ_out are constants which help to tune the setting for this configuration to take into account sample rate drifts and similar effects in practice with, e.g., γ_in = γ_out = 1.5.
(107) In the next step, a safety distance F_safe is specified to define a distance between the computed fill quantity variance corridor ΔF of the audio sample buffer 52 and the buffer underrun condition (if the fill quantity is equal to zero, a buffer underrun occurs). Typically, this safety distance F_safe can be specified directly and is subject to tuning procedures, the current configuration, and expected network conditions.
(108) The described fill quantity variance corridor and its subcomponents as well as the safety distance are shown in a plot similar to those discussed above.
(109) The target fill quantity is finally given as
(110) F̃_target = F_safe + ΔF
and is shown as the stippled line in the corresponding figure.
(111) Considering the temporal evolution of the target fill quantity, the given equations yield a new target fill quantity for each observation event; the target fill quantity should therefore be written as a function of the event as F̃_target(K_obs).
(112) From one observation event to the next, however, the target fill quantity is not allowed to take arbitrary values. Instead, a temporal smoothing may be employed to compute the effective target fill quantity F_target(K_obs) from F̃_target(K_obs) as follows: Given that the new target fill quantity F̃_target(K_obs) has a lower value than the effective target fill quantity from the previous observation event, the effective target fill quantity is given as
(113) F_target(K_obs,i) = β · F_target(K_obs,i−1) + (1 − β) · F̃_target(K_obs,i),
with β = 0.9, where the index i indicates the temporal dependence. The target fill quantity may be limited to not exceed half of the overall length of the audio sample buffer 52 in order to avoid buffer overrun conditions.
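The asymmetric smoothing described above can be sketched as follows. The immediate adoption of increasing targets is an assumption consistent with the stated motivation (reacting quickly to worsening jitter), since the text only specifies the smoothing for decreasing targets; all names are hypothetical:

```python
def update_target_fill(prev_effective, new_target, beta=0.9, max_fill=None):
    """Temporal smoothing of the target fill quantity.

    Decreases of the target are followed only slowly (exponential
    smoothing with beta = 0.9) to avoid rash buffer shrinking;
    increases are adopted immediately (assumption) so the buffer can
    react quickly to worsening jitter.
    """
    if new_target < prev_effective:
        effective = beta * prev_effective + (1.0 - beta) * new_target
    else:
        effective = new_target
    if max_fill is not None:
        # cap at half the buffer length to avoid overrun conditions
        effective = min(effective, max_fill / 2.0)
    return effective
```

A target dropping from 1000 to 500 samples is thus only moved to 950 in the first observation event.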
Derivation of the Resampling Factor ρ_rs and the Number of Samples to be Destroyed
(114) Given the determined target fill quantity F_target(K_obs) of the audio sample buffer 52 and a measure of the average instantaneous fill quantity F̄(K_obs) as determined during each observation event, the resampling factor ρ_rs and, in some embodiments, the number of samples to be destroyed from the audio sample buffer 52 are derived.
(115) The resampling factor ρ_rs is defined as the relation of the number of output audio samples to the number of input audio samples of the adaptive resampler 51. The number of audio samples to be fed into (and, thus, received by) the adaptive resampler 51 is N_in and the number of output audio samples is Ñ_in. The resampling factor ρ_rs is hence
(116) ρ_rs = Ñ_in / N_in,
with ρ_rs > 1 if the fill quantity of the audio sample buffer 52 is supposed to be increased and ρ_rs < 1 if the fill quantity is supposed to be decreased.
(117) In order to explain the determination of the resampling factor ρ_rs, let us first review the example for a typical temporal evolution of the estimated average instantaneous fill quantity of the audio sample buffer 52 given a determined target fill quantity of the audio sample buffer 52, which is depicted in the corresponding figure.
(118) In that figure, the temporal evolution of the determined target fill quantity F_target(K_obs) of the audio sample buffer 52, which is constant, and that of the estimated average instantaneous fill quantity F̄(K_obs) of the audio sample buffer 52 are shown as the solid and the dotted curve, respectively.
(119) At the beginning, the curves deviate significantly. In order to steer the estimated average instantaneous fill quantity of the audio sample buffer 52 into the direction of the determined target fill quantity, the resampling factor .sub.rs is adjusted such that the fill quantity of the audio sample buffer 52 slowly increases to finally reach the optimal value.
(120) Note that the shown steering process is a control engineering problem. However, as the determination of the average instantaneous fill quantity of the audio sample buffer 52 is a rather slow process, a high controller delay can be expected. This can lead to instabilities in the control loop, which should be avoided. Hence, special care should be taken to carefully adjust the parameters as described in the following.
(121) For a realization of the adaptive resampler controller 57, the resampling factor ρ_rs is expressed in terms of a value Δ_N = Ñ_in − N_in, which can be derived from the resampling factor ρ_rs as
Δ_N = (ρ_rs − 1) · N_in
and which describes the value by which the number of output samples deviates from the number of input (received) samples of the adaptive resampler 51.
(122) In order to cover different aspects, Δ_N is composed of three subcomponents,
Δ_N = Δ_N,var + Δ_N,const + Δ_N,slow.
(123) In this example, the aspects to be covered are the following:
1. Given there is a sample rate drift between the two connected partners, the adaptive audio sample buffer 50 should compensate for it. For this purpose, Δ_N, here, contains a constant offset Δ_N,const, which compensates for the sample rate drift. Δ_N,const could be set up at the beginning of a VOIP connection and afterwards remain constant, since in general the sample rate drift does not change over time.
2. If the estimated average instantaneous fill quantity of the audio sample buffer 52 deviates strongly from the determined target fill quantity, in most cases there has been a modification of the network status. It is thus very likely that the network status has just switched from good to bad. In this case, the fill quantity of the audio sample buffer 52 should be changed quickly. The component Δ_N,var may vary significantly from time to time and can help to change the fill quantity quickly.
3. If the measured fill quantity of the audio sample buffer 52 is already very similar to the determined target fill quantity, the fill quantity of the audio sample buffer 52 should be steered carefully in order to avoid overshooting (due to the mentioned delay of the fill quantity estimation) and fill quantity oscillations. Therefore, the component Δ_N,slow is chosen here in the range of −1 ≤ Δ_N,slow ≤ +1.
(124) The determined sample modification values may be fractional values (double precision) and are not restricted to integer values. This, however, depends on the adaptive resampler 51 supporting fractional output (refer to the next section). Allowing fractional numbers of output samples may increase the performance of the described control loop.
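The relation between the sample modification value and the resampling factor can be illustrated with a small conversion helper; the function names are hypothetical, and the mapping follows the output/input relation defined above:

```python
def resampling_factor(delta_n, n_in):
    """Resampling factor (output count / input count) that makes the
    adaptive resampler emit delta_n samples more (or, if negative,
    fewer) than the n_in samples it receives per frame."""
    return (n_in + delta_n) / n_in

def delta_from_factor(rho_rs, n_in):
    """Inverse mapping: delta_n = (rho_rs - 1) * n_in."""
    return (rho_rs - 1.0) * n_in
```

For a 100-sample input frame, emitting one extra sample corresponds to a resampling factor of 1.01, and dropping one sample to 0.99.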
(125) In order to derive the sample modification components, a corridor around the determined target fill quantity F_target(K_obs) is given by an upper corridor limit F_+ and a lower corridor limit F_−. Here, the distance from the lower corridor limit F_− to the determined target fill quantity is smaller than that of the upper corridor limit F_+ to the determined target fill quantity. This is by intention and due to the fact that it is more problematic if the fill quantity of the audio sample buffer 52 is too low than if it is too high, because in the former case, buffer underruns may occur.
(126) The corridor limits F_− and F_+ are computed based on the previously determined jitter as
(127) F_+ = F_target(K_obs) + β_+ · J_long · f_S and F_− = F_target(K_obs) − β_− · J_long · f_S,
with β_− < β_+ to achieve the described desired asymmetry of the corridor.
(128) In addition to that, from one observation event to the next, the gradient of the estimated average instantaneous fill quantity of the audio sample buffer 52 may also be determined in order to detect whether the fill quantity curve moves towards the determined target fill quantity or away from it.
(129) Depending on whether the estimated average instantaneous fill quantity of the audio sample buffer 52 is inside or outside the given corridor, different strategies are followed: If the estimated average instantaneous fill quantity of the audio sample buffer 52 is outside the corridor, a value for Δ_N,var is directly computed from the distance of the estimated average instantaneous fill quantity to the upper or lower corridor limit,
(130) Δ_N,var = κ_− · (F_− − F̄(K_obs)) if F̄(K_obs) < F_−, and Δ_N,var = −κ_+ · (F̄(K_obs) − F_+) if F̄(K_obs) > F_+,
with κ_+ and κ_− as constants that are subject to tuning procedures.
(131) The value of Δ_N,var should be limited to avoid audible artifacts such as pitch modifications caused by the adaptive resampler 51. Therefore, part of Δ_N,var may be transformed, in some embodiments, into the number of samples to be destroyed from the audio sample buffer 52.
(132) The component Δ_N,const is modified in case the estimated average instantaneous fill quantity of the audio sample buffer 52 tends to point away from the determined target fill quantity. If this is the case, the value of Δ_N,const is increased by 1.
(133) Δ_N,slow does not play any role in this case, because the other values are significantly higher. It is therefore chosen as Δ_N,slow = 0.
(134) If the estimated average instantaneous fill quantity of the audio sample buffer 52 is inside the corridor, a non-zero value of Δ_N,var is slowly decreased towards zero as follows:
(135) Δ_N,var(K_obs,i) = α · Δ_N,var(K_obs,i−1), with 0 < α < 1.
(136) In this context, Δ_N,var is written as a function of the index of the observation event K_obs,i to show the temporal aspect.
(137) The value of Δ_N,const may be modified as before by increasing or decreasing it by 1. This, however, happens if the curve of the estimated average instantaneous fill quantity of the audio sample buffer 52 has previously crossed the line of the determined target fill quantity (highlighted by marker P1 in the corresponding figure).
(138) The value Δ_N,slow is set in order to slowly move the estimated average instantaneous fill quantity of the audio sample buffer 52 closer to the determined target fill quantity. It is chosen as
(139) Δ_N,slow = ζ_+ if F̄(K_obs) < F_target(K_obs), and Δ_N,slow = −ζ_− otherwise,
with ζ_+ and ζ_− as constants that are subject to tuning procedures.
(140) The overall number of audio samples Δ_N to be modified by the adaptive resampler 51 should be limited such that its absolute value does not exceed a specific value. This limitation may reduce the speed at which the estimated average instantaneous fill quantity of the audio sample buffer 52 follows the determined target fill quantity, but it may increase the overall audio quality as it reduces the degree of pitch modification introduced by the adaptive resampler 51. In one embodiment, the value of Δ_N is limited to not exceed 4% of the frame size N_in.
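The corridor-based strategy of the preceding paragraphs, together with the limiting step, might be sketched as follows. This is a rough illustration, not the patented controller: the tuning constants are invented, the constant-offset adaptation is omitted for brevity, and all names are hypothetical:

```python
def control_step(avg_fill, target, lim_lo, lim_hi, state,
                 k_hi=0.05, k_lo=0.1, slow_step=0.5, decay=0.8,
                 frame_size=960):
    """One observation-event update of the sample modification value.

    state carries delta_var and delta_const between observation events.
    All constants are illustrative tuning values, not from the patent.
    """
    if avg_fill < lim_lo:
        # below the corridor: push the fill quantity up quickly
        state["delta_var"] = k_lo * (lim_lo - avg_fill)
        delta_slow = 0.0
    elif avg_fill > lim_hi:
        # above the corridor: drain quickly
        state["delta_var"] = -k_hi * (avg_fill - lim_hi)
        delta_slow = 0.0
    else:
        # inside the corridor: decay the fast component, steer gently
        state["delta_var"] *= decay
        delta_slow = slow_step if avg_fill < target else -slow_step
    delta_n = state["delta_var"] + state["delta_const"] + delta_slow
    # limit |delta_n| to 4 % of the frame size to keep pitch shifts small
    limit = 0.04 * frame_size
    return max(-limit, min(limit, delta_n))
```

Each return value would then be converted into a resampling factor for the next input frame.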
(141) Realization of the Adaptive Resampler 51
(142) Different approaches may be utilized to realize the adaptive resampler 51. The overall goal of these approaches is to increase or decrease the number of output audio samples in relation to the number of audio samples that have arrived as packets via the network link. This is explained based on two examples:
(143) Assuming that 100 samples arrive within each VOIP packet, a reduction of the fill quantity of the audio sample buffer 52 can be achieved if, instead of the 100 audio samples, only 99 audio samples are fed into the audio sample buffer 52. The purpose of the adaptive resampler 51 is to reduce the number of audio samples without any audio artifacts becoming audible.
(144) On the other hand, assuming that 100 audio samples arrive within each VOIP packet, an increase of the fill quantity of the audio sample buffer 52 can be achieved if, instead of the 100 audio samples, 101 audio samples are fed into the audio sample buffer 52. The purpose of the adaptive resampler 51 is to increase the number of audio samples without any audio artifacts becoming audible.
(145) Different approaches for this kind of task have been proposed in the context of time stretching for voice and audio, such as the Waveform Similarity Overlap-Add (WSOLA) approach (see W. Verhelst and M. Roelands, "An overlap-add technique based on waveform similarity (WSOLA) for high quality time-scale modification of speech," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP-93), vol. 2, pp. 554-557, 1993), different approaches to realize a phase vocoder (see M. Dolson, "The phase vocoder: A tutorial," Computer Music Journal, 10(4):14-27, 1986), and approaches that exploit segments with audio pauses to reduce the number of audio samples.
(146) Besides the capability to accurately steer the reduction of the number of audio samples, it is important that the time stretching is not audible, neither for operation without any time stretching impact (that is, the number of input (received) audio samples of the adaptive resampler 51 is identical to the number of output samples) nor given that a massive reduction of the number of audio samples is achieved. In this context, most approaches realizing a time stretching based on a phase vocoder are not suitable.
(147) Also, a high granularity of control of the exact number of audio samples to be output by the adaptive resampler 51 should be achieved to stabilize the control of the fill quantity of the audio sample buffer 52, as described in the previous sections. If the number of output samples is only given with integer resolution, the adaptive resampler controller 57 may continuously adapt the resampling factor ρ_rs, which can lead to delay oscillations. The adaptive resampler 51 should therefore, in some embodiments, be able to output a fractional number of audio samples.
(148) And finally, targeting binaural signals, it is important that, given a stereo adaptive jitter buffer, the waveform manipulation approaches preserve the binaural cues, in particular the interchannel time difference (ITD).
(149) In various embodiments, a modified resampling filter composed of lowpass filters and decimation functionality is employed to fulfill the defined constraints. Under certain conditions, it continuously outputs a specific number of audio samples which deviates from the number of input audio samples. Also, fractional numbers of output audio samples are possible. (It is noted that even in embodiments where only integer numbers of audio samples can be output, a fractional output rate can be achieved by varying the number of output audio samples over time; e.g., given an output rate of 100.5, the adaptive resampler 51 may output 100 samples when called for the first time and 101 samples when called for the second time.)
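The fractional output rate achieved by varying the integer output count over time, as in the 100.5 example above, can be sketched with a simple carry accumulator (hypothetical names):

```python
class FractionalOutput:
    """Achieves a fractional average output rate with integer frames by
    carrying the fractional remainder from call to call."""

    def __init__(self, rate):
        self.rate = rate      # desired average samples per call, e.g. 100.5
        self.carry = 0.0

    def next_count(self):
        total = self.rate + self.carry
        n = int(total)        # integer number of samples to emit this call
        self.carry = total - n
        return n
```

A rate of 100.5 then alternates between 100 and 101 samples per call, averaging exactly 100.5.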
(150) The proposed resampling filter causes almost no audible artifacts for speech signals in a wide range of resampling factors ρ_rs, and the binaural cues can be preserved since the left and right channel signals are processed identically. One possible drawback of this approach is that, for music, slight variations of the pitch of the modified signal may be perceivable.
(151) A concrete realization of the adaptive resampler 51 follows the method proposed in M. Pawig, G. Enzner, and P. Vary, "Adaptive sampling rate correction for acoustic echo control in Voice-over-IP," IEEE Transactions on Signal Processing, vol. 58, no. 1, pp. 189-199, January 2010, in which an upsampling functionality combined with a Lagrange interpolator provides the required degree of freedom considering the output rate as well as an efficient implementation and high (non-audible) quality. The original implementation is slightly extended in order to also allow fractional sample output.
(152) Modification of the Fill Quantity of the Audio Sample Buffer 52 by Other Means
(153) In addition to the resampling factor ρ_rs, which controls the adaptive resampler 51, the adaptive resampler controller 57 may provide the number of samples to be destroyed from the audio sample buffer 52.
(154) In order to actually destroy the proposed number of audio samples, methods other than the adaptive resampler 51 may be followed. One such method would be to search for pauses in the audio sample buffer 52 and to remove or extend these pauses. Care has to be taken, however, that these pauses are removed identically in both channels in case of binaural signals in order to preserve the binaural cues.
(155) Example for the Temporal Evolution of the Fill Quantity of the Audio Sample Buffer 52
(156) An example of the temporal evolution of the fill quantity of the audio sample buffer 52 during operation is shown in the corresponding figure.
(157) In the figure, the instantaneous fill quantity of the audio sample buffer 52 is shown on the y-axis whereas the time is shown on the x-axis. The solid, light gray curve represents the instantaneous fill quantity of the audio sample buffer 52 just before new samples are written thereto; it is therefore also denoted as the Min. fill quantity. Analogous to the Min. fill quantity, the stippled, dark gray curve represents the instantaneous fill quantity of the audio sample buffer 52 just before samples are read therefrom; it is therefore also denoted as the Max. fill quantity. The dotted curve represents the determined target fill quantity.
(158) During operation, after the start-up (marker Q1), the audio sample buffer 52 is controlled to a stable state in which the safety distance to a buffer underflow condition is kept and the delay is reasonably low (marker Q2). After approximately one third of the operation, a buffer underrun condition occurs (marker Q3). As a consequence, the jitter estimator 55 detects a large jitter, and the adaptive resampler controller 57 steers the audio sample buffer 52 into a secure state, in which a large distance to a possible further buffer underrun condition is achieved. Then, after a while during which no large jitter has been observed, the fill quantity of the audio sample buffer 52 is steered back to lower values (marker Q4) in order to minimize the end-to-end delay.
(159) Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed embodiments, from a study of the drawings, the disclosure, and the appended claims.
(160) In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.
(161) A single unit or device may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
(162) Any reference signs in the claims should not be construed as limiting the scope.
(163) The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
(164) These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.