DATA COMMUNICATION
20170272259 · 2017-09-21
Inventors
CPC classification
H04L49/9057 (Electricity)
H04L12/08 (Electricity)
H04L47/32 (Electricity)
International classification
H04L12/08 (Electricity)
Abstract
Systems and methods utilizing a packet gate to improve communication performance with respect to a resource shared for data communication are disclosed. In embodiments, a packet gate is utilized with respect to a shared resource to improve the effective throughput and reduce packet losses with respect to a plurality of data flows sharing the resource. In operation of embodiments, data packets are disassembled into chunks and encoded, such as using forward error correction, for transmission through a switching fabric, wherein at the egress of the switching fabric the packet gate tracks the number of chunks of a packet that have been received and, when a sufficient number of chunks have been received, drops all subsequent chunks of that packet. The admitted encoded chunks are passed through the shared resource, wherein the chunks are decoded and reassembled into the packet at the output of the shared resource of embodiments.
Claims
1. A method for data communication, the method comprising: monitoring, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate; passing, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows; and dropping, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
2. The method of claim 1, further comprising: receiving the data packet from a first data source; breaking the data packet into a plurality of chunks; and encoding the plurality of chunks using a redundant data encoding technique to provide a plurality of encoded chunks, wherein the chunks of the data packet received at the packet gate and passed to the shared resource comprise encoded chunks of the plurality of encoded chunks.
3. The method of claim 2, wherein the redundant data encoding technique comprises a forward error correction (FEC) encoding technique.
4. The method of claim 2, wherein the redundant data encoding technique comprises an erasure recovery code that requires a number of encoded chunks greater than a number of the plurality of chunks the data packet is broken into in order to provide a determined probability of recovery of the data packet.
5. The method of claim 2, further comprising: selecting an amount of data encoding overhead utilized by the data encoding technique based upon an incoming data rate of the plurality of data flows and a rate of data output by the shared resource.
6. The method of claim 5, wherein the selecting an amount of data encoding overhead comprises: increasing the amount of data encoding overhead utilized by the data encoding technique as the shared resource approaches a capacity limit.
7. The method of claim 6, wherein the shared resource comprises a buffer and the capacity limit comprises a buffer capacity of the buffer.
8. The method of claim 2, further comprising: receiving the encoded chunks passed by the packet gate to the shared resource from an output of the shared resource; decoding the encoded chunks to recover the data packet; and passing the data packet recovered from the encoded chunks to a data sink.
9. The method of claim 8, further comprising: receiving a second data packet from a second data source, wherein the second data packet comprises a data packet of a second data flow of the plurality of data flows; breaking the second data packet into a second plurality of chunks; encoding the second plurality of chunks using the redundant data encoding technique to provide a second plurality of encoded chunks; monitoring, by the packet gate, a number of second encoded chunks of the second plurality of encoded chunks of the second data packet received at the packet gate; passing, by the packet gate, the second encoded chunks on to the shared resource until a specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; dropping, by the packet gate, all second encoded chunks of the second data packet after the specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; receiving the second encoded chunks passed by the packet gate to the shared resource from the output of the shared resource; decoding the second encoded chunks to recover the second data packet; and passing the second data packet recovered from the second encoded chunks to the data sink.
10. The method of claim 9, wherein the passing and dropping of the encoded chunks and the second encoded chunks by the packet gate increases an effective throughput of the shared resource.
11. An apparatus for data communication, the apparatus comprising: means for monitoring, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate; means for passing, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows; and means for dropping, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
12. The apparatus of claim 11, further comprising: means for receiving the data packet from a first data source; means for breaking the data packet into a plurality of chunks; and means for encoding the plurality of chunks using a redundant data encoding technique to provide a plurality of encoded chunks, wherein the chunks of the data packet received at the packet gate and passed to the shared resource comprise encoded chunks of the plurality of encoded chunks.
13. The apparatus of claim 12, further comprising: means for selecting an amount of data encoding overhead utilized by the data encoding technique based upon an incoming data rate of the plurality of data flows and a rate of data output by the shared resource.
14. The apparatus of claim 12, further comprising: means for receiving the encoded chunks passed by the packet gate to the shared resource from an output of the shared resource; means for decoding the encoded chunks to recover the data packet; and means for passing the data packet recovered from the encoded chunks to a data sink.
15. The apparatus of claim 14, further comprising: means for receiving a second data packet from a second data source, wherein the second data packet comprises a data packet of a second data flow of the plurality of data flows; means for breaking the second data packet into a second plurality of chunks; means for encoding the second plurality of chunks using the redundant data encoding technique to provide a second plurality of encoded chunks; means for monitoring, by the packet gate, a number of second encoded chunks of the second plurality of encoded chunks of the second data packet received at the packet gate; means for passing, by the packet gate, the second encoded chunks on to the shared resource until a specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; means for dropping, by the packet gate, all second encoded chunks of the second data packet after the specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; means for receiving the second encoded chunks passed by the packet gate to the shared resource from the output of the shared resource; means for decoding the second encoded chunks to recover the second data packet; and means for passing the second data packet recovered from the second encoded chunks to the data sink.
16. A non-transitory computer-readable medium having program code recorded thereon, the program code comprising: program code for causing a computer to: monitor, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate; pass, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows; and drop, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
17. The non-transitory computer-readable medium of claim 16, wherein the program code is further for causing the computer to: receive the data packet from a first data source; break the data packet into a plurality of chunks; and encode the plurality of chunks using a redundant data encoding technique to provide a plurality of encoded chunks, wherein the chunks of the data packet received at the packet gate and passed to the shared resource comprise encoded chunks of the plurality of encoded chunks.
18. The non-transitory computer-readable medium of claim 17, wherein the program code is further for causing the computer to: select an amount of data encoding overhead utilized by the data encoding technique based upon an incoming data rate of the plurality of data flows and a rate of data output by the shared resource.
19. The non-transitory computer-readable medium of claim 17, wherein the program code is further for causing the computer to: receive the encoded chunks passed by the packet gate to the shared resource from an output of the shared resource; decode the encoded chunks to recover the data packet; and pass the data packet recovered from the encoded chunks to a data sink.
20. The non-transitory computer-readable medium of claim 19, wherein the program code is further for causing the computer to: receive a second data packet from a second data source, wherein the second data packet comprises a data packet of a second data flow of the plurality of data flows; break the second data packet into a second plurality of chunks; encode the second plurality of chunks using the redundant data encoding technique to provide a second plurality of encoded chunks; monitor, by the packet gate, a number of second encoded chunks of the second plurality of encoded chunks of the second data packet received at the packet gate; pass, by the packet gate, the second encoded chunks on to the shared resource until a specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; drop, by the packet gate, all second encoded chunks of the second data packet after the specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; receive the second encoded chunks passed by the packet gate to the shared resource from the output of the shared resource; decode the second encoded chunks to recover the second data packet; and pass the second data packet recovered from the second encoded chunks to the data sink.
21. An apparatus for data communication, the apparatus comprising: at least one processor; and a memory coupled to the at least one processor, wherein the at least one processor is configured: to monitor, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate; to pass, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows; and to drop, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
22. The apparatus of claim 21, wherein the at least one processor is further configured: to receive the data packet from a first data source; to break the data packet into a plurality of chunks; and to encode the plurality of chunks using a redundant data encoding technique to provide a plurality of encoded chunks, wherein the chunks of the data packet received at the packet gate and passed to the shared resource comprise encoded chunks of the plurality of encoded chunks.
23. The apparatus of claim 22, wherein the redundant data encoding technique comprises a forward error correction (FEC) encoding technique.
24. The apparatus of claim 22, wherein the redundant data encoding technique comprises an erasure recovery code that requires a number of encoded chunks greater than a number of the plurality of chunks the data packet is broken into in order to provide a determined probability of recovery of the data packet.
25. The apparatus of claim 22, wherein the at least one processor is further configured: to select an amount of data encoding overhead utilized by the data encoding technique based upon an incoming data rate of the plurality of data flows and a rate of data output by the shared resource.
26. The apparatus of claim 25, wherein the at least one processor configured to select an amount of data encoding overhead is further configured: to increase the amount of data encoding overhead utilized by the data encoding technique as the shared resource approaches a capacity limit.
27. The apparatus of claim 26, wherein the shared resource comprises a buffer and the capacity limit comprises a buffer capacity of the buffer.
28. The apparatus of claim 22, wherein the at least one processor is further configured: to receive the encoded chunks passed by the packet gate to the shared resource from an output of the shared resource; to decode the encoded chunks to recover the data packet; and to pass the data packet recovered from the encoded chunks to a data sink.
29. The apparatus of claim 28, wherein the at least one processor is further configured: to receive a second data packet from a second data source, wherein the second data packet comprises a data packet of a second data flow of the plurality of data flows; to break the second data packet into a second plurality of chunks; to encode the second plurality of chunks using the redundant data encoding technique to provide a second plurality of encoded chunks; to monitor, by the packet gate, a number of second encoded chunks of the second plurality of encoded chunks of the second data packet received at the packet gate; to pass, by the packet gate, the second encoded chunks on to the shared resource until a specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; to drop, by the packet gate, all second encoded chunks of the second data packet after the specified number of second encoded chunks of the second data packet are passed on to provide recovery of the second data packet from the passed second encoded chunks; to receive the second encoded chunks passed by the packet gate to the shared resource from the output of the shared resource; to decode the second encoded chunks to recover the second data packet; and to pass the second data packet recovered from the second encoded chunks to the data sink.
30. The apparatus of claim 29, wherein passing and dropping of the encoded chunks and the second encoded chunks by the packet gate increases an effective throughput of the shared resource.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
DETAILED DESCRIPTION
[0025] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various possible configurations and is not intended to limit the scope of the disclosure. Rather, the detailed description includes specific details for the purpose of providing a thorough understanding of the inventive subject matter. It will be apparent to those skilled in the art that these specific details are not required in every case and that, in some instances, well-known structures and components are shown in block diagram form for clarity of presentation.
[0026] This disclosure relates generally to providing or participating in data communications utilizing a shared resource, wherein communication performance is improved with respect to the shared resource utilizing a packet gate operable in accordance with the concepts herein. For example, a packet gate is utilized with respect to a shared resource to improve the effective throughput and/or reduce packet losses with respect to a plurality of data flows sharing the resource.
[0028] The configuration of system 100 illustrated in
[0029] It should be appreciated that, although three data sources and three data sinks are shown in the illustrated embodiment of system 100, different numbers of data sources and/or data sinks may be provided in accordance with the concepts herein. Moreover, there is no requirement that the number of data sources and the number of data sinks be the same. It should also be appreciated that the particular configuration of data sources and/or data sinks may differ from that of the illustrated embodiment. For example, the data sources may include wireline and/or wireless data sources, multiple instances of a same type or configuration of data source, etc. Similarly, the data sinks may include multiple instances of the same type or configuration of data sinks, a number of differently configured data sinks, a single data sink, etc.
[0030] Irrespective of the particular configuration of system 100, one or more resources may be shared with respect to data flows between the data sources and one or more data sinks, whereby the sharing of the resource is subject to data packet losses. For example, as shown in the further detail provided in
[0031] In operation according to the embodiment illustrated in
[0032] It should be appreciated that, although the embodiment illustrated in
[0033] As shown in the embodiment of
[0034] Graph 200 of
[0035] Embodiments implemented in accordance with concepts of the disclosure improve the effective throughput of a shared resource, such as buffer 131, and reduce packet losses associated with its shared use without requiring an increase with respect to attributes of the shared resource, such as without requiring increased buffer size.
[0036] The embodiment of system 300 shown in
[0037] The illustrated embodiment of system 300 includes encoders 310-1 through 310-3 disposed in the data paths between each data source and the switching and routing fabric coupling the data sources to the shared resource, packet gate 320 disposed between the switching and routing fabric and the input to the shared resource, and decoder 331 disposed between the output of the shared resource and the data packet destination. It should be appreciated that, although the illustrated embodiment of system 300 shows packet gate 320 as being separate from data sink 330, packet gates implemented according to the concepts herein may be provided in configurations different than that shown, such as to be fully or partially integrated into a data sink. Similarly, although the shared resource (e.g., buffer 131) and corresponding decoder 331 are shown in the illustrated embodiment of system 300 as being integrated with data sink 330, this functionality may be provided in configurations different than that shown, such as to be fully or partially separated from a data sink. Also, although a single encoder or decoder is shown with respect to a particular data path, it should be appreciated that embodiments may implement different numbers of encoders and/or decoders, such as to provide a plurality of encoders/decoders operable to perform different coding techniques. Additionally or alternatively, a different number of packet gates may be provided with respect to a data sink than shown, such as to provide a plurality of packet gates where a plurality of shared resources are implemented with respect to a data sink.
[0038] Encoders 310-1 through 310-3 provide data redundancy encoding, such as through the use of forward error correction (FEC) encoding, with respect to the data of the respective flows. For example, encoders 310-1 through 310-3 may implement one or more erasure codes (e.g., tornado codes, low-density parity-check codes, Reed-Solomon coding, fountain codes, RAPTOR codes, RAPTORQ codes, and maximum distance separable (MDS) codes) whereby source data is broken into fragments (e.g., k source fragments for each source object such as data packets or other blocks of source data) and additional repair fragments (e.g., r repair fragments for each source object) are generated to provide a total number of fragments (e.g., n=k+r) greater than the source fragments. Accordingly, encoders 310-1 through 310-3 are shown as including data packet disassembly blocks, as may be operable to break the source data into the aforementioned fragments, and encoder blocks, as may be operable to perform the aforementioned data coding.
[0039] Correspondingly, decoder 331 provides decoding of the source data from the encoded data. For example, where FEC encoding is utilized as described above, decoder 331 may operate to recover the source object using any combination of k number of fragments (i.e., any combination of source fragments and/or repair fragments totaling k in number), or possibly k+x where x is some small integer value (e.g., 1 or 2) where a non-MDS code is used. Accordingly, decoder 331 is shown as including a decoder block, as may be operable to perform the aforementioned data decoding, and a packet assembly block, as may be operable to reassemble source objects from the decoded fragments.
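The disassembly, encoding, decoding, and reassembly steps described in the two paragraphs above can be sketched as follows. This is a minimal illustration only: a single XOR repair chunk (r=1, able to recover at most one lost chunk) stands in for the RAPTORQ-style codes named in the text, and all function names are hypothetical, not taken from the disclosure.

```python
def disassemble(packet: bytes, k: int) -> list[bytes]:
    """Break a packet into k equal-size source chunks, zero-padding the tail."""
    size = -(-len(packet) // k)  # ceiling division
    padded = packet.ljust(k * size, b"\x00")
    return [padded[i * size:(i + 1) * size] for i in range(k)]

def xor_chunks(chunks: list[bytes]) -> bytes:
    """Bytewise XOR of equal-length chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def encode(packet: bytes, k: int) -> list[bytes]:
    """k source chunks plus one XOR repair chunk (n = k + 1 total)."""
    source = disassemble(packet, k)
    return source + [xor_chunks(source)]

def decode(received: dict[int, bytes], k: int, length: int) -> bytes:
    """Reassemble the packet from any k of the n = k + 1 chunks.

    `received` maps chunk id -> chunk data; ids 0..k-1 are source
    chunks and id k is the repair chunk.
    """
    source = [received.get(i) for i in range(k)]
    if None in source:  # one source chunk lost: rebuild it via XOR
        missing = source.index(None)
        source[missing] = xor_chunks(list(received.values()))
    return b"".join(source)[:length]
```

With k=4, any four of the five chunks suffice: losing one source chunk in the fabric, the decoder XORs the remaining three source chunks with the repair chunk to rebuild it, then trims the padding.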
[0040] Use of the aforementioned encoding facilitates a high probability of recovery of the data from some specified portion of the total number of encoded fragments, wherein the specified portion of encoded fragments is configured to provide data recovery to a certain probability of success. For example, perfect recovery codes, such as MDS codes, facilitate recovery of the source data using any combination of k fragments (i.e., any combination of a number of source fragments and/or a number of repair fragments totaling k) to a very high probability (e.g., 100% probability of recovery). Similarly, some near perfect recovery codes, such as RAPTOR codes and RAPTORQ codes, facilitate recovery of the source data using any combination of k+x fragments (i.e., any combination of a number of source fragments and/or a number of repair fragments totaling k+x) to a high probability (e.g., 99.99% probability of recovery where x=1, 99.999% probability of recovery where x=2, etc.). In providing the foregoing data encoding, embodiments herein utilize RAPTORQ encoding in light of RAPTORQ being a near perfect erasure recovery code that provides a high probability of data recovery with very small encoding and decoding complexity, and thus is particularly well suited for implementation in some system configurations, such as SoC systems.
[0041] In operation of system 300 of the illustrated embodiment, data packets from a data source go through a “Packet Disassembly” process of a respective encoder 310 where the packets are broken up into smaller fixed size chunks suitable for transmission over the switching and routing fabric. FEC encoding is then applied by the respective encoder 310 to the foregoing chunks (e.g., using the aforementioned RAPTORQ encoding), whereby the encoding technique utilized allows recovery of data with some loss of data chunks in transmission. The encoded chunks are then sent into switching and routing fabric 120 to be routed to an appropriate data sink, such as data sink 330 (e.g., host processor, operating system, application, etc.).
[0042] Packet gate 320 of the illustrated embodiment, provided between the egress of the switching and routing fabric and an input of the shared resource, operates to keep track of the number of chunks of a packet that have been received. When logic of packet gate 320 determines that a specified number of chunks of a packet are received that are sufficient for the decoder to recover the packet with a high probability (e.g., k chunks or k+x chunks, a known number established by the encoding technique implemented), the packet gate drops all subsequent chunks of that packet. The chunks that are not dropped by the packet gate are passed through buffer 131 (i.e., the shared resource) for processing downstream by the respective decoder 331. Accordingly, at the output of the shared resource, the received chunks are processed by a "Packet Assembly" process of decoder 331 and the original packet assembled by decoder 331. The packet is then passed to data packet destination 132 of data sink 330.
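The per-packet gating logic described above can be sketched as a small stateful class. This is an assumption-laden illustration, not the disclosed implementation: the class and method names, the integer packet-id key, and the explicit cleanup hook are all hypothetical.

```python
class PacketGate:
    """Admit chunks of each packet until `needed` (e.g., k or k + x)
    have been passed on toward the shared resource, then drop every
    further chunk of that packet."""

    def __init__(self, needed: int):
        self.needed = needed
        self.admitted: dict[int, int] = {}  # packet id -> chunks passed

    def on_chunk(self, packet_id: int) -> bool:
        """Return True to pass the chunk on, False to drop it."""
        count = self.admitted.get(packet_id, 0)
        if count >= self.needed:
            return False  # decoder already has enough chunks; drop
        self.admitted[packet_id] = count + 1
        return True

    def on_packet_done(self, packet_id: int) -> None:
        """Free tracking state once the packet has been recovered."""
        self.admitted.pop(packet_id, None)
```

For example, with `needed=4`, the first four chunks of a packet are passed to the shared resource and the fifth and later chunks are dropped, while chunks of other packets are counted independently.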
[0043]
[0044] At block 401 of the illustrated flow, a data packet to be provided to data sink 330 is received from a data source by encoder 310. An exemplary format of a received data packet is shown in
[0045] Logic of encoder 310 operates to disassemble the received data packet into chunks (e.g., k data packet portions of equal size) at block 402. An exemplary format of the resulting chunks is shown in
[0046] The chunks of source data are provided to coding logic of encoder 310 for encoding the chunks using a redundant data coding technique (e.g., FEC encoding, such as RAPTORQ) at block 403 of the illustrated embodiment. For example, the coding logic may operate to generate a number of repair chunks (e.g., r) providing redundant data from which the data packet can be recovered from any combination of a predetermined number (e.g., k or k+x) of source chunks and repair chunks. It should be appreciated that, in operation according to embodiments, the value of the chunk identification field may exceed the chunk count when the chunk contains repair symbols generated by the encoding technique.
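The relationship between the chunk identification field and the chunk count can be illustrated with a small fixed header. The 8-byte layout below is an assumption for illustration only and does not reflect any format defined in the disclosure: a chunk id at or beyond the source-chunk count k marks a repair chunk.

```python
import struct

# Hypothetical 8-byte chunk header (network byte order):
# 32-bit packet id, 16-bit chunk id, 16-bit source-chunk count k.
HEADER = struct.Struct("!IHH")

def pack_header(packet_id: int, chunk_id: int, k: int) -> bytes:
    return HEADER.pack(packet_id, chunk_id, k)

def is_repair_chunk(header: bytes) -> bool:
    """Chunk ids 0..k-1 are source chunks; ids >= k carry repair symbols."""
    _, chunk_id, k = HEADER.unpack(header)
    return chunk_id >= k
```

So with k=4, a chunk carrying id 5 would be recognized as a repair chunk, consistent with the identification field exceeding the chunk count.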
[0047] At block 404 of flow 400 shown in
[0048] At block 405 of the illustrated flow, a forwarded encoded chunk to be provided to data sink 330 is received from encoder 310 by packet gate 320. In providing intelligent gating operation according to concepts herein, logic of packet gate 320 operates to track the number of received encoded chunks for each packet, as shown at block 406 of the illustrated embodiment. For example, packet gate 320 may track the number of received encoded chunks using a database or table as illustrated in
[0049] Packet gate 320 of embodiments operates to pass encoded chunks on to the shared resource (e.g., buffer 131 of the embodiment illustrated in
[0050] It should be appreciated that, although the illustrated flow of
[0051] The encoded chunks passed to the shared resource of the embodiment illustrated in
[0052] The chunks of encoded data, as may comprise source chunks and/or repair chunks, provided through the shared resource are provided to decoding logic of decoder 331 for decoding the chunks using a redundant data coding technique (e.g., FEC encoding, such as RAPTORQ) at block 411 of the illustrated embodiment. For example, the decoding logic may operate to regenerate a packet from some portion of source chunks (e.g., some number of the k source chunks) and/or some number of repair chunks (e.g., some number of the r repair chunks), wherein the total number of encoded chunks (e.g., k or k+x) used to regenerate the data packet is determined by the particular coding technique utilized.
[0053] Thereafter, at block 412 of the illustrated embodiment, the recovered packets are forwarded by decoder 331 to data packet destination 132 for normal operation of the data packet destination. It should be appreciated that, although the illustrated embodiment reduces data packet losses with respect to buffer 131, as may be shared among a number of data flows directed to data packet destination 132, utilizing packet gate 320 and associated encoding and decoding, data processing as performed by data packet destination 132 may be performed without modification to accommodate the use of the packet gate. That is, operation of the packet gate of embodiments is transparent with respect to the data sources and data packet destination.
[0054] It should be appreciated that, although the operations of the aforementioned encoder, packet gate, and decoder are shown in flow 400 of the illustrated embodiment as being performed serially, some or all such operations or portions thereof may be performed in parallel. For example, the decoder may be receiving encoded chunks forwarded by the packet gate while the packet gate continues to receive forwarded encoded chunks and perform analysis with respect thereto. Similarly, the packet gate may perform operations to drop additional received encoded chunks while the decoder continues to receive previously forwarded encoded chunks. Accordingly, it can be appreciated that the operations shown in the illustrated embodiment of flow 400 may be performed in an order different than that shown.
[0055] It should also be appreciated that multiple instances of some or all of the operations of flow 400 may be performed, whether in parallel, serially, or independently. For example, the operations illustrated as being performed by encoder 310 (blocks 401-404) may be performed in parallel by a plurality of encoders (e.g., any or all of encoders 310-1 through 310-3) associated with data sources providing data to data sink 330. Additionally or alternatively, multiple instances of the operations of flow 400 may be performed in parallel, such as to provide reduction in data packet losses for a plurality of shared resources using a packet gate in accordance with the concepts herein.
[0056] In accordance with the foregoing operation of flow 400, performance improvements are gained by adding extra overhead using a redundant data encoder (e.g., FEC encoder). For example, using a near perfect coding technique with low encoding and decoding complexity, such as RAPTORQ, the system can tolerate larger data losses and still perform very well, such as to maintain an effective throughput of 1 (i.e., no packet loss) even when the ratio of input to output rate of the shared resource reaches 1.
[0057] In the aforementioned use of redundant data coding with a packet gate implementation, it can be appreciated that embodiments introduce an additional design parameter, wherein the additional design parameter is the number of repair symbols generated by the redundant data encoder. This parameter, together with the shared resource attributes (e.g., buffer size) and the input and output data rates, defines the performance of the system. Thus, although it seems counter-intuitive that performance improvements can be gained by adding extra overhead using a redundant encoder, systems implementing packet gates in accordance with the concepts herein can tolerate larger data losses and still perform very well.
[0058] The following analysis illustrates the gains that can be achieved by implementations in accordance with the concepts herein. In analyzing the performance of a system implementation in accordance with embodiments herein, the system may be modeled as a simple M/M/1/K queue with an input data rate of λ, as shown in
where P.sub.k denotes the packet loss probability of the M/M/1/K queue and δ denotes the fractional overhead of repair data added by the encoder. If δ is chosen such that (1+δ)(1−P.sub.k)=1, then the effective packet loss rate after the decoder becomes 0.
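For the M/M/1/K model above, the loss probability and the overhead δ satisfying (1+δ)(1−P.sub.k)=1 can be computed directly. The sketch below uses the standard closed-form M/M/1/K blocking probability; treating P.sub.k as independent of the added overhead is a simplification for illustration.

```python
def mm1k_loss_probability(rho, K):
    """Packet loss (blocking) probability of an M/M/1/K queue with
    offered load rho = lambda / mu and total capacity K."""
    if rho == 1.0:
        return 1.0 / (K + 1)          # limiting value at rho = 1
    return ((1.0 - rho) * rho**K) / (1.0 - rho**(K + 1))

def required_overhead(rho, K):
    """Overhead delta such that (1 + delta) * (1 - P_k) = 1, i.e. the
    repair data exactly compensates the expected loss."""
    p_k = mm1k_loss_probability(rho, K)
    return p_k / (1.0 - p_k)
```

For example, at rho = 1 with K = 4 the loss probability is 0.2, so an overhead of delta = 0.25 satisfies the condition and drives the post-decoder packet loss rate to 0 under this model.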
[0059] The graphs of
[0060] Accordingly, embodiments herein operate to select an amount of data encoding overhead to utilize based upon the incoming data rate and the rate of data output by the shared resource. Embodiments may thus dynamically select an amount of data encoding overhead to implement, such as to implement no or little data encoding overhead when the shared resource is not near its capacity and to increase the data encoding overhead as the shared resource approaches its capacity limit (e.g., buffer or channel throughput limit).
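A dynamic selection policy of the kind described above might be sketched as follows. The function name, the cap on overhead, and the use of the M/M/1/K loss estimate as the load model are all assumptions for illustration, not the disclosed implementation.

```python
def select_overhead(input_rate, output_rate, K, max_delta=1.0):
    """Choose an encoding overhead from the current load on the shared
    resource: negligible overhead well below capacity, rising as the
    load approaches the capacity limit."""
    rho = input_rate / output_rate
    # Estimate the expected loss at this load via the M/M/1/K model.
    if rho == 1.0:
        p_k = 1.0 / (K + 1)
    else:
        p_k = ((1.0 - rho) * rho**K) / (1.0 - rho**(K + 1))
    # Overhead satisfying (1 + delta) * (1 - p_k) = 1, capped at max_delta.
    delta = p_k / (1.0 - p_k) if p_k < 1.0 else max_delta
    return min(delta, max_delta)
```

Under this policy, a lightly loaded resource receives essentially no repair overhead, while the overhead grows as the input rate approaches (or exceeds) the output rate of the shared resource.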
[0061] Although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present disclosure.
[0062] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0063] The functional blocks and modules in
[0064] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Skilled artisans will also readily recognize that the order or combination of components, methods, or interactions that are described herein are merely examples and that the components, methods, or interactions of the various aspects of the present disclosure may be combined or performed in ways other than those illustrated and described herein.
[0065] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0066] The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of tangible storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0067] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. Computer-readable storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, a connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, or digital subscriber line (DSL), then the coaxial cable, fiber optic cable, twisted pair, or DSL are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0068] As used herein, including in the claims, the term “and/or,” when used in a list of two or more items, means that any one of the listed items can be employed by itself, or any combination of two or more of the listed items can be employed. For example, if a composition is described as containing components A, B, and/or C, the composition can contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C) or any of these in any combination thereof.
[0069] The previous description of the disclosure is provided to enable any person skilled in the art to make or use embodiments in accordance with concepts of the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.