Efficient peak-to-average-power reduction for OFDM and MIMO-OFDM
11671151 · 2023-06-06
Assignee
Inventors
CPC classification
H04B7/0456
ELECTRICITY
H04L27/2634
ELECTRICITY
H04L27/26362
ELECTRICITY
H04L27/2621
ELECTRICITY
Y02D30/70
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
Abstract
Certain aspects of the present disclosure generally relate to wireless communications. In some aspects, a wireless device reduces a peak-to-average power ratio (PAPR) of a discrete-time orthogonal frequency division multiplexing (OFDM) transmission by selecting a signal with low PAPR from a set of candidate discrete-time OFDM signals. The wireless device may generate a partial-update discrete-time OFDM signal by performing a sparse transform operation on a base data symbol sequence, and then linearly combine the partial-update discrete-time OFDM signal with a base discrete-time OFDM signal to produce an updated discrete-time OFDM signal, which is added to the set of candidate discrete-time OFDM signals. Numerous other aspects are provided.
Claims
1. A method for reducing a peak-to-average power ratio (PAPR) of a discrete-time orthogonal frequency division multiplexing (OFDM) signal by selecting a signal with low PAPR from a plurality of candidate discrete-time OFDM signals, the method comprising: generating a partial update discrete-time OFDM signal by performing a sparse invertible transform operation on a base data symbol sequence; and linearly combining a base discrete-time OFDM signal and the partial update discrete-time OFDM signal to produce an updated discrete-time OFDM signal, the updated discrete-time OFDM signal being designated as one of the plurality of candidate discrete-time OFDM signals; wherein the base discrete-time OFDM signal is generated by performing a dense invertible transform operation on the base data symbol sequence or is selected from a previous updated discrete-time OFDM signal.
2. The method of claim 1, wherein the sparse invertible transform operation comprises at least one of a sparse inverse fast Fourier transform (IFFT), a wavelet-based approximate IFFT, a sparse matrix-vector multiplication, a sparse-matrix sparse-vector multiplication, or a matrix sparse-vector multiplication.
3. The method of claim 1, further comprising generating at least one additional partial update discrete-time OFDM signal by at least one of linearly combining a first partial update discrete-time OFDM signal with a second partial update discrete-time OFDM signal, or multiplying the partial update discrete-time OFDM signal by a complex-valued scaling factor.
4. The method of claim 1, wherein performing the sparse invertible transform operation includes at least one of: performing a component-wise multiplication of the base data symbol sequence with a sparse weight matrix to generate a sparse update symbol sequence, and performing an invertible transform operation on the sparse update symbol sequence; employing the sparse weight matrix to select at least one block of elements in a dense invertible transform operator to produce a sparse invertible transform operator, and using the sparse invertible transform operator to operate on the base data symbol sequence; or selecting at least one block of elements in the dense invertible transform operator to produce the sparse invertible transform operator, selecting at least one element in the base data symbol sequence to produce the sparse update symbol sequence, and using the sparse invertible transform operator to operate on the sparse update symbol sequence.
5. The method of claim 1, wherein performing the sparse invertible transform operation comprises optimizing the sparse invertible transform operation to run on a graphics processing unit.
6. The method of claim 1, wherein the PAPR comprises a sum of PAPRs scaled with weights, each weight comprising a measure of PAPR sensitivity for a corresponding antenna or node.
7. The method of claim 1, further comprising transmitting side information indicating a selected one of the plurality of candidate discrete-time OFDM signals to enable a receiver to decode the selected one of the plurality of candidate discrete-time OFDM signals.
8. An apparatus for reducing a peak-to-average power ratio (PAPR) of a discrete-time signal by selecting a signal with low PAPR from a set of candidate discrete-time signals, comprising: a memory; and one or more processors operatively coupled to the memory, the one or more processors configured to: generate a partial update discrete-time signal by performing a sparse transform operation on a base data symbol sequence; and linearly combine a base discrete-time signal and the partial update discrete-time signal to produce an updated discrete-time signal, the updated discrete-time signal being included in the set of candidate discrete-time signals; wherein the base discrete-time signal is generated by performing a dense invertible transform operation on the base data symbol sequence or is selected from a previous updated discrete-time signal.
9. The apparatus of claim 8, wherein the sparse transform operation comprises at least one of a sparse inverse fast Fourier transform (IFFT), a wavelet-based approximate IFFT, a sparse matrix-vector multiplication, a sparse-matrix sparse-vector multiplication, or a matrix sparse-vector multiplication.
10. The apparatus of claim 8, wherein the one or more processors are configured to generate at least one additional partial update discrete-time signal by at least one of linearly combining a first partial update discrete-time signal with a second partial update discrete-time signal, or multiplying the partial update discrete-time signal by a complex-valued scaling factor.
11. The apparatus of claim 8, wherein performing a sparse transform operation comprises at least one of: performing a component-wise multiplication of the base data symbol sequence with a sparse weight matrix to generate a sparse update symbol sequence, and performing an invertible transform operation on the sparse update symbol sequence; employing the sparse weight matrix to select at least one block of elements in a dense invertible transform operator to produce a sparse transform operator, and using the sparse transform operator to operate on the base data symbol sequence; or selecting at least one block of elements in the dense invertible transform operator to produce the sparse transform operator, selecting at least one element in the base data symbol sequence to produce the sparse update symbol sequence, and using the sparse transform operator to operate on the sparse update symbol sequence.
12. The apparatus of claim 8, wherein the PAPR comprises a sum of PAPRs scaled with weights, each weight comprising a measure of PAPR sensitivity for at least one of a corresponding antenna or node.
13. The apparatus of claim 8, wherein performing the sparse transform operation comprises optimizing the sparse transform operation to run on a graphics processing unit.
14. The apparatus of claim 8, wherein the one or more processors are configured to provide for transmitting side information indicating a selected one of the candidate discrete-time signals to enable a receiver to decode the selected one of the candidate discrete-time signals.
15. A non-transitory computer-readable medium storing one or more instructions for reducing a peak-to-average power ratio (PAPR) of a transmitted discrete-time signal by selecting a signal with low PAPR from a set of candidate discrete-time signals, the one or more instructions, when executed by one or more processors, causing the one or more processors to: generate a partial update discrete-time signal by performing a sparse transform operation on a base data symbol sequence; and linearly combine a base discrete-time signal and the partial update discrete-time signal to produce an updated discrete-time signal, the updated discrete-time signal being included in the set of candidate discrete-time signals; wherein the base discrete-time signal is generated by performing a dense invertible transform operation on the base data symbol sequence or is selected from a previous updated discrete-time signal.
16. The non-transitory computer-readable medium of claim 15, wherein the sparse transform operation comprises at least one of a sparse inverse fast Fourier transform (IFFT), a wavelet-based approximate IFFT, a sparse matrix-vector multiplication, a sparse-matrix sparse-vector multiplication, or a matrix sparse-vector multiplication.
17. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to generate at least one additional partial update discrete-time signal by at least one of linearly combining a first partial update discrete-time signal with a second partial update discrete-time signal, or multiplying the partial update discrete-time signal by a complex-valued scaling factor.
18. The non-transitory computer-readable medium of claim 15, wherein performing a sparse transform operation comprises at least one of: performing a component-wise multiplication of the base data symbol sequence with a sparse weight matrix to generate a sparse update symbol sequence, and performing an invertible transform operation on the sparse update symbol sequence; employing the sparse weight matrix to select at least one block of elements in a dense invertible transform operator to produce a sparse transform operator, and using the sparse transform operator to operate on the base data symbol sequence; or selecting at least one block of elements in the dense invertible transform operator to produce the sparse transform operator, selecting at least one element in the base data symbol sequence to produce the sparse update symbol sequence, and using the sparse transform operator to operate on the sparse update symbol sequence.
19. The non-transitory computer-readable medium of claim 15, wherein the PAPR comprises a sum of PAPRs scaled with weights, each weight comprising a measure of PAPR sensitivity for at least one of a corresponding antenna or node.
20. The non-transitory computer-readable medium of claim 15, wherein performing the sparse transform operation comprises optimizing the sparse transform operation to run on a graphics processing unit.
21. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to provide for transmitting side information indicating a selected one of the candidate discrete-time signals to enable a receiver to decode the transmitted discrete-time signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings described below. Throughout the drawings and detailed description, like reference characters may be used to identify like elements appearing in one or more of the drawings.
(9) It is contemplated that elements described in one aspect may be beneficially utilized on other aspects without specific recitation.
DETAILED DESCRIPTION
(10) The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
(11) Aspects of the telecommunication systems are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
(12) By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
(13) Accordingly, in one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a non-transitory computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
(14) Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
(16) The baseband data processor 101 couples original data symbols (e.g., bit sequences that have been converted into modulation symbols) to the SLM precoder 102, which selects SLM weights that reduce the PAPR of the discrete-time OFDM transmission signal and applies the selected weights to the original data symbols. For example, the SLM precoder 102 may select, from a set of candidate weight matrices, the weight matrix that, when applied to the original data symbols, results in the discrete-time OFDM signal having the lowest PAPR. Alternatively, the SLM precoder 102 may compute the PAPR of the discrete-time OFDM signal corresponding to each candidate weight matrix, compare each PAPR to a threshold value, and select a weight matrix that provides a PAPR below the threshold. In either case, the SLM precoder 102 outputs a weighted data set comprising the selected weight matrix multiplied component-wise with the original data symbols.
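The selection just described can be sketched in a few lines; the following is an illustrative NumPy model rather than the claimed implementation, and it makes the simplifying assumption that each candidate weight matrix acts as a per-subcarrier phase-rotation sequence (the names `papr_db`, `candidates`, and `weighted_data` are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 64                 # scheduled subcarriers
U = 8                  # number of candidate weight matrices
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK data

# Candidate weight matrices modeled as per-subcarrier phase rotations
candidates = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(U, N)))
candidates[0] = 1.0    # index 0: identity weights (unmodified signal)

signals = np.fft.ifft(candidates * symbols, axis=1)   # one candidate signal per row
paprs = np.array([papr_db(s) for s in signals])
best = int(np.argmin(paprs))                          # lowest-PAPR candidate
weighted_data = candidates[best] * symbols            # output of the SLM precoder
```

Because index 0 is the unmodified signal, the selected candidate's PAPR can never exceed that of the original transmission.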
(17) The transform precoder 103 performs transform precoding on the weighted data set. The transform precoder 103 may be an SC-FDMA precoder comprising one or more DFT modules; in the case of an M-point DFT, a block of M input samples is transformed into frequency-domain symbols. The spatial mapper 104 assigns at least one source of the original data symbols to a plurality of antennas. Mapping the data to the respective antennas (ports) is referred to as spatial mapping, and the spatial mapper 104 may be called a layer mapper. The MIMO precoder 105 applies a spatial precoding matrix, such as spatial multiplexing weights computed from channel state information (CSI) or MIMO weights retrieved from a codebook. For example, the MIMO precoder 105 performs precoding on multiple layers output by the spatial (or layer) mapper 104. The subcarrier mapper 106 maps the precoded data to the appropriate (e.g., scheduled) subcarriers. The subcarrier mapper 106 may be called a resource-element mapper and can comprise a plurality of subcarrier mapper modules, such as one for each layer or antenna. The IDFT module 107 converts the mapped frequency-domain symbols into discrete-time OFDM signals; it may comprise a separate IDFT for each layer or antenna and may provide an oversampled IDFT. The CP appender 108 adds a CP to each discrete-time OFDM signal. The DAC/RF module 109 converts the digital signal to analog and transmits the analog signals in the radio channel.
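The final digital steps of this chain, the IDFT (107) and CP append (108), admit a compact sketch; this is a minimal NumPy model assuming a 64-subcarrier symbol and a 16-sample cyclic prefix (both values are arbitrary choices for illustration):

```python
import numpy as np

def ofdm_modulate(freq_symbols, cp_len=16):
    """IDFT module (107) followed by CP appender (108) for one OFDM symbol."""
    x = np.fft.ifft(freq_symbols)            # frequency domain -> discrete time
    return np.concatenate([x[-cp_len:], x])  # prepend the last cp_len samples as CP

rng = np.random.default_rng(1)
N, CP = 64, 16
syms = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)
tx = ofdm_modulate(syms, cp_len=CP)          # length N + CP discrete-time signal
```

The cyclic prefix is, by construction, an exact copy of the tail of the useful symbol.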
(18) In
(19) In some transmitter configurations, more than one SLM precoder 102 can be provided, such as multiple SLM precoders positioned at different locations in the transmitter chain. In the transmitter configurations disclosed herein, some of the blocks depicted in the Figures can be optional. For example, the transform precoder 103 can be optional, as can the spatial mapper 104 and the MIMO precoder 105; in some aspects, a transmitter is provided without the transform precoder 103, spatial mapper 104, and MIMO precoder 105. It should also be appreciated that transmitter configurations can be provided in accordance with aspects of the invention that comprise transmitter blocks not explicitly depicted herein. A transmitter employed in the invention may comprise encoding, bit-shifting, spreading, scrambling, and/or interleaving blocks, and the SLM precoder 102 can be configured to perform its functions while accommodating such encoding, bit-shifting, spreading, scrambling, and/or interleaving. A transmitter may also comprise one or more additional or alternative invertible transform operations, and the SLM precoder 102 can be adapted accordingly to perform its operations as disclosed herein.
(20) Referring to
(21) The RF/ADC module 201 receives and converts received radio signals to digital baseband signals. The CP remover 202 removes the CP of each received discrete-time OFDM signal. The DFT module 203 transforms (e.g., demodulates) the discrete-time OFDM signal to frequency-domain symbols. The channel estimator/equalizer 204 estimates the propagation channel (e.g., derives CSI) and performs frequency-domain equalization. Subcarrier demapper 205 separates the frequency-domain data into subcarrier data (which may correspond to different scheduled transmission channels). The spatial de-multiplexer (de-MUX) 206 is optionally provided to decode the data based on the precoding applied to the transmitted data. For example, a decoder in the spatial de-MUX 206 may employ a codebook index shared by the transmitter and receiver to select a decoding matrix. The spatial de-MUX 206 may perform spatial de-multiplexing to discriminate between per-antenna data. The transform decoder 207 performs transform decoding on the data; for example, if the transform precoder 103 comprises a DFT module, the transform decoder 207 comprises an IDFT module. The transform-decoded data symbols are processed by the SLM decoder 208, which removes SLM weights from the received data symbols.
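The CP-removal (202) and DFT (203) steps invert the transmit-side IDFT and CP append. A minimal round-trip sketch over an ideal, distortionless channel (a simplifying assumption; real reception would also involve the equalization described above), with an assumed 64-point transform and 16-sample CP:

```python
import numpy as np

def ofdm_demodulate(rx, n_fft, cp_len):
    """CP remover (202) followed by DFT module (203)."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft])

rng = np.random.default_rng(2)
N, CP = 64, 16
syms = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)

x = np.fft.ifft(syms)
tx = np.concatenate([x[-CP:], x])        # transmit side: IDFT + CP append
recovered = ofdm_demodulate(tx, N, CP)   # ideal channel: original symbols recovered
```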
(22) The SLM decoder 208 may receive an index (possibly a codebook index) corresponding to the selected weight matrix employed by the SLM precoder 102 in the transmitter. For example, the index may be transmitted as side information in a control channel (e.g., a physical uplink control channel or a physical uplink shared channel), derived from a syndrome in the received signal, or otherwise conveyed to the receiver. The SLM decoder 208 may determine the selected weight matrix blindly. In some aspects, the SLM decoder 208 performs decoding using different possible codes or code segments until it identifies the selected weight matrix. The SLM precoder 102 and decoder 208 may employ orthogonal SLM codes. When the SLM decoder 208 identifies the selected weight matrix, it removes the weights (i.e., the SLM sequence) from the received data. The data symbol estimator 209 determines the original data symbol from the SLM-decoded data.
(24) Transmitters and receivers disclosed herein can comprise client-side, server-side, and/or intermediate (e.g., relay) devices. Client-side devices can include UEs, access terminals, user terminals, Internet-of-Things (IoT) devices, wireless local area network (WLAN) devices, wireless personal area network (WPAN) devices, unmanned aerial vehicles, and intelligent transportation system (ITS) nodes. Many client-side devices are battery powered and may have limited access to computational resources, and thus will benefit from improved power-efficiency and low computational complexity such as provided for uplink communications in the aspects disclosed herein. Client-side devices can be configured to perform Cooperative-MIMO in a distributed antenna configuration comprising other client-side devices, relays, and/or server-side devices. MIMO precoding can entail additional challenges for power efficiency and can increase computational overhead. Client-side devices that have cost, power, and/or computational processing restrictions will benefit from PAPR-reduction schemes with reduced computational processing.
(25) Server-side devices can comprise base transceiver stations, which are also referred to as EnodeB's, small cells, femtocells, metro cells, remote radio heads, mobile base stations, cell towers, wireless access points, wireless routers, wireless hubs, network controllers, network managers, radio access network (RAN) nodes, HetNet nodes, wireless wide area network (WWAN) nodes, distributed antenna systems, massive-MIMO nodes, and cluster managers. In some aspects, server-side devices can comprise client devices and/or relays configured to operate in server-side mode. Dense deployments of server-side devices often entail power, computer-processing, and/or cost constraints. Such devices will benefit from PAPR-reduction schemes with reduced computational processing disclosed herein.
(26) Intermediate devices can include fixed and/or mobile relays. Intermediate devices can comprise client devices and/or server-side devices, such as those disclosed herein. An intermediate device can include a remote radio head having a wireless backhaul and/or fronthaul. In ad-hoc, mesh, and other distributed network topologies, intermediate devices can provide for improving network coverage and performance. Intermediate devices include mobile ad hoc network (MANET) nodes, peer-to-peer nodes, gateway nodes, vehicular ad hoc network (VANET) nodes, smart phone ad hoc network (SPAN) nodes, Cloud-relay nodes, geographically distributed MANET nodes, flying ad hoc network (FANET) nodes, airborne relay nodes, etc. Intermediate devices may be battery-powered, solar-powered, or otherwise have limited available power. Similarly, intermediate devices may have cost constraints and/or limited computer processing capabilities. Such devices will benefit from PAPR-reduction schemes with reduced computational processing disclosed herein.
(28) One or more input data streams are mapped 301 to a number N_t of layers corresponding to multiple MIMO transmission channels, such as MIMO subspace channels. Data in each layer 1-N_t is mapped 302.1-302.N_t to a plurality N of OFDM subcarrier frequencies, such as in accordance with scheduling information that assigns N subcarriers to a transmitter. The mapping 302.1-302.N_t can comprise partitioning the data symbols into N_t blocks of size N. Data selection 303.1-303.N provides for selecting a set of N_t data symbols corresponding to each frequency, f_1 to f_N. For each frequency f_1 to f_N, a corresponding data symbol is collected from each of the aforementioned N_t blocks. Data symbols arranged in each process 303.1-303.N can be formatted into N blocks of size N_t.
(29) A block of N_t data symbols d(f_1) corresponding to frequency f_1 is processed for each of the N_t antennas (e.g., shown as Antenna 1-Antenna N_t). This is performed for each frequency up to f_N. For simplicity, it is assumed that the number of transmit antennas equals the number of layers. However, different antenna configurations can be employed, such as wherein the number of antennas is greater than N_t.
(30) Processing for Antenna 1 can comprise applying a PAPR-reduction weight matrix (which may comprise a phase rotation sequence) to each of the data blocks d(f_1)-d(f_N) 304.1,1-304.1,N-304.N_t,1-304.N_t,N. Weight matrices W_1(f_1)-W_1(f_N) can be employed for Antenna 1, and W_Nt(f_1)-W_Nt(f_N) can be employed for Antenna N_t. Each data block resulting from the product of a weight matrix W_j(f_n) (indexed by antenna (j) and frequency (n)) with a data symbol block d(f_n) 304.1,1-304.1,N-304.N_t,1-304.N_t,N is denoted as d̂(f_n).
(31) Each data symbol block d(f_n) corresponding to each antenna (1 to N_t) is multiplied by a MIMO precoding vector s_j(f_n) indexed by antenna (j) and frequency (n) 305.1,1-305.1,N-305.N_t,1-305.N_t,N to produce a corresponding precoded symbol value. Thus, for each antenna, N precoded symbol values are produced, corresponding to the set of N symbol blocks d(f_n), n=1, . . . , N, of size N_t. Each of the N precoded symbol values comprises a linear combination of the N_t data symbols of the block d(f_n) of the corresponding subcarrier frequency f_n. The N precoded symbol values for each antenna are mapped 306.1-306.N_t to input bins of a set of IFFTs 307.1-307.N_t, which generate a discrete-time MIMO-OFDM signal for each of the antennas 1-N_t.
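The per-subcarrier precoding and per-antenna IFFT described above can be sketched as follows; the precoding matrices are random placeholders standing in for s_j(f_n), and the dimensions (4 antennas, 64 subcarriers) are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, N = 4, 64    # layers/antennas and subcarriers

# d[n] is the block d(f_n) of Nt data symbols for subcarrier f_n
d = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(N, Nt))
# S[n] is a placeholder Nt x Nt precoding matrix for subcarrier f_n
S = rng.standard_normal((N, Nt, Nt)) + 1j * rng.standard_normal((N, Nt, Nt))

# Each precoded value is a linear combination of the Nt symbols of its subcarrier
precoded = np.einsum('nij,nj->ni', S, d)      # shape (N, Nt)
# One IFFT per antenna (307.1-307.Nt) over that antenna's N precoded values
time_signals = np.fft.ifft(precoded, axis=0)  # column j: antenna j's MIMO-OFDM signal
```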
(32) Selection of the weight matrices W_j(f_n) in
(33) A data mapper 401 can map one or more input data streams to resource blocks and layers. Optionally, data may be processed by a multiplier 402 configured to multiply the data with one or more weights, such as an initial weight set W^(0). Multiplier 402 might be configured to scramble the data, spread the data with any type of spreading code and/or multiple-access code, and/or perform any type of transform precoding (such as SC-FDMA precoding). Data symbols output by the mapper 401 or the multiplier 402 are input to a plurality N_t of processing branches, wherein each branch corresponds to one of the N_t antennas. The processing branches can be implemented in a serial or parallel architecture of processors, or combinations thereof, and may employ a centralized processor, a distributed set of processors, or a combination thereof.
(34) A first branch comprises a first path, through Invertible Transform 404.1, that generates an initial base discrete-time MIMO-OFDM signal, and a second path, through Sparse Matrix Multiplier 407.1 and Invertible Transform 409.1, that generates one or more (U) partial-update discrete-time MIMO-OFDM signals. Linear Combiner 405.1 sums at least one partial-update discrete-time MIMO-OFDM signal with a base discrete-time MIMO-OFDM signal to produce an updated discrete-time MIMO-OFDM signal, which is analyzed in a PAPR measurement module 406.1 to measure the signal's PAPR. A MIMO Precoder 403.1 provides a set of MIMO precoding weights to the invertible transforms 404.1 and 409.1. A similar process is performed in each of the remaining N_t−1 (physical or logical) processing branches.
(35) An N_t-th branch comprises a first path through Invertible Transform 404.N_t, which produces an initial base discrete-time MIMO-OFDM signal, and a second path through Sparse Matrix Multiplier 407.N_t and Invertible Transform 409.N_t, which produces one or more (U) partial-update discrete-time MIMO-OFDM signals. Linear Combiner 405.N_t sums at least one partial-update discrete-time MIMO-OFDM signal with a base discrete-time MIMO-OFDM signal to produce an updated discrete-time MIMO-OFDM signal, which is analyzed in a PAPR measurement module 406.N_t to measure the signal's PAPR. A MIMO Precoder 403.N_t provides a set of MIMO precoding weights to the invertible transforms 404.N_t and 409.N_t.
(36) Because each of the N_t branches operates similarly, a description of only the first branch is provided herein for simplicity. Linear Combiner 405.1 might store and/or read a base discrete-time MIMO-OFDM signal y^(u) from memory 415.1. In one aspect, the initial base discrete-time MIMO-OFDM signal is the only base discrete-time MIMO-OFDM signal employed in the Linear Combiner 405.1. In other aspects, an updated discrete-time MIMO-OFDM signal may be designated as a base discrete-time MIMO-OFDM signal. PAPR measurement module 406.1 may store and/or read a PAPR (e.g., PAPR^(u)) and/or update index u to memory 415.1. Index u can be a codebook index corresponding to a weight matrix W^(u) in a weight codebook. PAPR measurement module 406.1 may store an updated discrete-time MIMO-OFDM signal, its PAPR, and the corresponding update index to the memory, such as in response to comparing its PAPR to a previous PAPR measurement or some threshold value. PAPR measurement module 406.1 might designate an updated discrete-time MIMO-OFDM signal having a low PAPR as a base discrete-time MIMO-OFDM signal, and may delete any previously written data from the Memory 415.1. Based on PAPR measurements (such as PAPR^(u), and possibly index u, read from Memory 415.1), the Sparse Matrix Multiplier 407.1 might select a weight matrix W^(u) from Memory 410.
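The efficiency of this branch structure rests on the linearity of the IDFT: when only K of N subcarriers change between candidates, the transmitter can add a partial-update signal to a stored base signal instead of recomputing a dense transform. A minimal sketch of that identity (for simplicity the sparse transform is emulated here with a full IFFT over a mostly-zero input; an actual sparse implementation would exploit the K nonzero entries, and the 90-degree rotation is an arbitrary example update):

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 64, 4
d = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # base data symbol sequence

base = np.fft.ifft(d)     # dense invertible transform: base discrete-time signal

# Sparse update symbol sequence: rotate only K of the N entries by 90 degrees
idx = rng.choice(N, size=K, replace=False)
delta = np.zeros(N, dtype=complex)
delta[idx] = (np.exp(1j * np.pi / 2) - 1) * d[idx]

partial = np.fft.ifft(delta)   # only K inputs are nonzero (the sparse-transform target)
updated = base + partial       # linear combination -> updated discrete-time signal

# Equivalent dense recomputation, for verification only
d_new = d.copy()
d_new[idx] *= np.exp(1j * np.pi / 2)
```

Since `updated` equals the full IFFT of the rotated sequence, each additional candidate costs only the (sparse) update transform plus a vector addition.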
(37) Stored values, such as u and its corresponding PAPR^(u), can be read from the memory 415.1 by the module 406.1 and communicated to a PAPR aggregator 411 configured to collect PAPR and weight index values (and possibly other data) from the N_t branches. Each branch's module 406.1-406.N_t might communicate to the aggregator 411 data corresponding to all U PAPRs, only those PAPRs below a predetermined threshold, or a predetermined number of lowest PAPRs.
(38) A PAPR weighting module 412 may optionally be provided for scaling each PAPR with a weight value corresponding to the branch from which it was received. For example, for a PAPR-sensitive branch, the weight might be 1, and for a branch having low PAPR sensitivity, the weight might be zero. The weighted PAPR values are then processed in a weight selector 413, which can select a best weight set for use by all the branches. For example, for each index u, the weight selector 413 can sum the corresponding weighted PAPR values from all the branches to generate an aggregate weighted-PAPR metric. The best weight set index u (0≤u≤U) can be selected as the index whose aggregate weighted-PAPR metric has the smallest value. The weight selector 413 then communicates the best weight set index u (or the corresponding weights W^(u)) to the processing branches shown in
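The aggregation and selection across branches can be sketched as follows; the branch count, candidate count, and specific weight values are assumptions chosen for illustration, and the candidate signals are again modeled as phase-rotated OFDM symbols:

```python
import numpy as np

def papr(x):
    """Linear-scale peak-to-average power ratio."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

rng = np.random.default_rng(5)
Nt, N, U = 4, 64, 8    # branches, subcarriers, candidate weight sets

# Candidate discrete-time signals per branch: phase-rotated data, then IFFT
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(Nt, 1, N))
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(Nt, U, N)))
cands = np.fft.ifft(phases * data, axis=2)

paprs = np.array([[papr(cands[b, u]) for u in range(U)] for b in range(Nt)])

# Branch weights: PAPR sensitivity (1 = highly sensitive, 0 = insensitive)
w = np.array([1.0, 1.0, 0.2, 0.0])
metric = (w[:, None] * paprs).sum(axis=0)   # aggregate weighted-PAPR per index u
best_u = int(np.argmin(metric))             # single index shared by all branches
```

Note that a branch with weight zero contributes nothing to the metric, which is how PAPR-insensitive nodes give the selector extra degrees of freedom.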
(39) In aspects in which PAPR weighting 412 is employed, each branch weight comprises a measure of the branch antenna's (or corresponding network node's) sensitivity to PAPR. For example, a normalized branch weight near one can correspond to a high PAPR sensitivity, whereas a normalized branch weight near zero can correspond to a low PAPR sensitivity. A battery-powered node can have a higher branch weight than a node with line power, since power efficiency is likely more critical to the operation of a battery-powered device. It is advantageous to schedule one or more line-powered nodes to operate in a cluster with a set of battery-powered nodes in a distributed antenna system, since the low branch weights of line-powered nodes can provide for additional degrees of freedom, which affords lower PAPR for battery-powered nodes. This enables weight selection 413 to provide lower PAPR for PAPR-sensitive nodes by allowing for high PAPR for nodes that are not as PAPR-sensitive.
(40) In some aspects, PAPR weighting module 412 might compute one or more of the branch weights based on each corresponding node's battery life (which can comprise battery wear, battery charge level, percentage of full charge, remaining device run time, battery status (e.g., charging or discharging), and combinations thereof) reported to the module 412. Devices with low battery life can be provided with higher corresponding branch weights than devices with high battery life. Each branch weight might correspond to the inverse of the branch's battery charge level. The PAPR weighting module 412 might compute the branch weights based on a power-scaling factor assigned to each device (e.g., devices transmitting with higher power might have higher corresponding branch weights), a session duration assigned to each device (e.g., devices that are scheduled or otherwise expected to have a longer session, such as based on their type of data service or the file size they are transmitting, might have higher corresponding branch weights), a priority level (such as based on emergency or non-emergency links), a subscription level, some other metric(s), or a combination thereof. It is advantageous to schedule one or more nodes with low PAPR sensitivity to operate in a cluster with a set of nodes with high PAPR sensitivity in a distributed antenna system, since the low branch weights of low-PAPR-sensitive nodes can provide for additional degrees of freedom, which enables lower PAPR for the nodes with high PAPR sensitivity.
(41)
(42) The first Invertible Transform 504 operates on a data symbol vector X to produce an initial base discrete-time OFDM signal: x=𝒪X, where 𝒪 is an invertible transform operator. The operator 𝒪 can comprise an inverse DFT matrix F.sup.H, which may be implemented via a fast transform. The computational complexity of a complex N-point IFFT with oversampling factor K comprises (KN/2)log.sub.2(KN) complex multiplications and KN log.sub.2(KN) complex additions. The operator 𝒪 can comprise one or more additional matrix operators, which usually increases the computational complexity. For example, MIMO Precoder 508 can provide a set of MIMO precoding weights to the Invertible Transform 504. The Invertible Transform 504 can generate a precoding matrix S from the precoding weights and multiply the data symbol vector X, and the product SX can be transformed by F.sup.H: x=F.sup.H(SX).
(43) Sparse Matrix Multiplier 507 can employ a set of length-N sparse weight vectors w to multiply the symbol vector X=[X.sub.0 X.sub.1 . . . X.sub.N-1].sup.T before processing by the second Invertible Transform 509. In some aspects, N×N diagonal weight matrices W may be employed. A sparse diagonal matrix W comprises one or more diagonal elements having zero value. In one aspect, a first weight vector corresponding to a first symbol position can be w.sup.(1,0, . . . ,0)=[1, 0, . . . , 0], a second weight vector corresponding to a second symbol position can be w.sup.(0,1, . . . ,0)=[0, 1, . . . , 0], . . . , and an N.sup.th weight vector corresponding to an N.sup.th symbol position can be w.sup.(0,0, . . . ,1)=[0, 0, . . . , 1].
(44) A set of sparse partial-update symbol matrices (e.g., sequences) w.sup.( . . . )X can be computed (e.g., w.sup.( . . . )X is computed as w.sup.( . . . )∘X, where “∘” denotes element-wise multiplication). Each partial-update symbol matrix is the result of a Hadamard product (also known as the Schur product, entry-wise product, or component-wise product), which takes two matrices (w.sup.( . . . ) and X) of the same dimension and produces another matrix (w.sup.( . . . )X) in which each element i,j is the product of elements i,j of the original two matrices: (w.sup.( . . . )X).sub.i,j=(w.sup.( . . . )).sub.i,j(X).sub.i,j. It should be appreciated that variations and alternatives of this disclosure can exploit the associative, distributive, and/or commutative properties of the Hadamard product.
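The Hadamard-product construction of partial-update symbol sequences can be sketched as follows. This is an illustrative example with assumed parameters (N=8, a QPSK-like constellation), using one-hot weight vectors so that each partial-update sequence isolates one symbol position.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
# Hypothetical QPSK data symbol vector X
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# One-hot sparse weight vectors: row n is w^(0,...,1,...,0)
W = np.eye(N)
# Hadamard (element-wise) product: row n is w^(n) ∘ X
partial_updates = W * X

# Each partial-update sequence is zero except at its symbol position
assert np.count_nonzero(partial_updates[3]) == 1
assert partial_updates[3][3] == X[3]
```

Because each row has a single non-zero entry, any transform applied to it touches only one column of the transform matrix, which is the source of the complexity savings described in the following paragraphs of the disclosure.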
(45) In some aspects, a multiplication may be performed via addition or subtraction to arrive at the equivalent result. Various corresponding bit-level operations may be employed to effect multiplication in the aspects disclosed herein. Multiplication can be performed by mapping constellation points of an input symbol sequence to another set of constellation points according to a weight sequence.
(46) The second Invertible Transform 509 operates upon each sparse matrix w.sup.( . . . )X (which is a partial update to data vector X) with operator 𝒪 to produce a corresponding partial-update discrete-time OFDM signal x.sup.( . . . ). In one aspect, Invertible Transform 509 generates precoding matrix S from precoding weights received from the MIMO Precoder 508 and then computes operator 𝒪=(F.sup.HS). The operator 𝒪 may be stored in memory and used to operate on each sparse matrix w.sup.( . . . )X. This results in the operation: x.sup.( . . . )=(F.sup.HS)(w.sup.( . . . )X). In another aspect, an operator 𝒪=F.sup.HSw.sup.( . . . ) is generated for each sparse weight matrix w.sup.( . . . ) and may be stored in memory. The Invertible Transform 509 can select stored operators from memory to operate on the data vector X, such as to perform the operation x.sup.( . . . )=(F.sup.HSw.sup.( . . . ))X. This operator is a sparse matrix, so sparse matrix-vector multiplication (SpMV) techniques may be exploited. In one aspect, F.sup.HS is computed and stored, and for each w.sup.( . . . ), a corresponding column of F.sup.HS is read, followed by multiplication with X.
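The column-read shortcut described in the paragraph above can be checked numerically. This sketch uses simplifying assumptions: S is a toy diagonal precoder, F.sup.H is realized as an inverse DFT matrix without oversampling, and all names are illustrative.

```python
import numpy as np

N = 8
rng = np.random.default_rng(1)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)

F_H = np.conj(np.fft.fft(np.eye(N))).T / N            # inverse DFT matrix
S = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))  # toy precoder
A = F_H @ S                                           # dense operator F^H S

n = 3
w = np.zeros(N)
w[n] = 1.0                                            # one-hot sparse weight
# Full operation A @ (w ∘ X) ...
x_full = A @ (w * X)
# ... equals reading one stored column of A and scaling by X[n]
x_sparse = A[:, n] * X[n]
assert np.allclose(x_full, x_sparse)
```

The sparse path costs N complex multiplications (one column scaled by one symbol) instead of a full N×N matrix-vector product, which is the partial invertible transform operation the disclosure refers to.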
(47) The operators disclosed herein can be multiplied by scaling factors and may be used to generate scaled partial-update discrete-time OFDM signals x.sup.( . . . ). The linearity property of invertible transforms can be exploited in combination with scaling factors to reduce the number of invertible transform computations. The Invertible Transform 509 may store partial-update discrete-time OFDM signals x.sup.( . . . ) in memory and supply scaled versions of such signals to the linear combiner 505.
(48) The sparseness of w.sup.( . . . ) provides for simplification of the operations by reducing the required number of complex multiplications and additions, resulting in a partial invertible transform operation. For example, the zero values in w.sup.( . . . )X allow for reducing the number of complex multiplications and additions in operator 𝒪=(F.sup.HS) acting upon w.sup.( . . . )X compared to the full transform operation required to produce the initial base discrete-time OFDM signal. An updated discrete-time OFDM signal is produced by summing a partial-update discrete-time OFDM signal with the base discrete-time OFDM signal. This sum may comprise another KN (or fewer) complex additions. Similarly, the operator 𝒪=(F.sup.HSw.sup.( . . . )) has reduced complexity due to zero values in w.sup.( . . . ), and is referred to herein as a partial invertible transform operation. This approach can be adapted for other linear transform operations. For example, the operator 𝒪=TF.sup.HS and its variants can be simplified by virtue of the sparseness of w.sup.( . . . ), where T and S each represent any number of invertible transform operators. T and S can comprise one or more operators, such as spreading, pre-coding, permutation, block coding, space-time coding, and/or constellation mapping operators. F.sup.H may comprise any invertible transform operator, such as a wavelet transform, a fractional Fourier transform, etc.
(49) It should be appreciated that the first and second Invertible Transforms 504 and 509 can comprise common structure. An Invertible Transform circuit, processor, and/or code segment can operate as the first Invertible Transform 504 employing a full invertible transform operation to produce the initial base discrete-time OFDM signal and operate as the second Invertible Transform 509 employing partial invertible transform operations to produce partial-update discrete-time OFDM signals, the partial invertible transform operations each comprising fewer multiplications and additions than the full invertible transform operation.
(50) The partial-update discrete-time OFDM signals x.sup.( . . . ) produced by the Invertible Transform 509, along with scaling factors a, can be stored in memory 502 for subsequent processing. Invertible Transform 509 and/or Linear Combiner 505 may generate new partial-update discrete-time OFDM signals x.sup.( . . . ) by scaling and/or combining previously generated partial-update discrete-time OFDM signals x.sup.( . . . ). Pre-computed partial discrete-time OFDM signals x.sup.( . . . ), each corresponding to a different one of the N symbol positions in X, can be selected and multiplied by scaling factor(s) a to produce new partial-update discrete-time OFDM signals x.sup.( . . . ). Aspects disclosed herein can exploit the linearity of invertible transforms to provide low-complexity partial updates to OFDM signals (which include spread-OFDM signals and MIMO-precoded OFDM signals).
ax.sub.1.sup.( . . . )(t)+bx.sub.2.sup.( . . . )(t)↔aX.sub.1(ω)+bX.sub.2(ω)
where a and b are scalar values, x.sub.1.sup.( . . . )(t) and x.sub.2.sup.( . . . )(t) are length-KN partial-update discrete-time OFDM signals, and X.sub.1(ω) and X.sub.2(ω) are length-N sparse partial-update symbol matrices (e.g., X.sub.1(ω)=w.sub.1.sup.( . . . )∘X and X.sub.2(ω)=w.sub.2.sup.( . . . )∘X, where w.sub.1.sup.( . . . ) and w.sub.2.sup.( . . . ) are length-N sparse weight vectors with non-zero values corresponding to the same or different symbol positions in X).
(51) For sparse weight vectors w.sup.( . . . ) having a predetermined or adaptable symbol constellation of weight values, the scaling factors a and b can be selected according to the symbol constellation and employed as described above to produce corresponding partial-update discrete-time OFDM signals. For example, if x.sup.(1,0, . . . ,0) is generated by a partial invertible transform corresponding to the sparse weight vector w.sup.(1,0, . . . ,0), then x.sup.(a,0, . . . ,0) corresponding to w.sup.(a,0, . . . ,0) is produced from the product x.sup.(a,0, . . . ,0)=ax.sup.(1,0, . . . ,0). Instead of performing an additional transform operation, x.sup.(a,0, . . . ,0) is produced by performing KN or fewer complex multiplications. New partial-update discrete-time OFDM signals can be generated from sums of partial-update discrete-time OFDM signals. For example, implementation of the scaling factor of (a+b) can be achieved by the following summation of previously computed signals x.sup.(a,0, . . . ,0) and x.sup.(b,0, . . . ,0): x.sup.(a+b,0, . . . ,0)=x.sup.(a,0, . . . ,0)+x.sup.(b,0, . . . ,0), which can comprise KN or fewer complex additions instead of a transform operation.
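The scaling and summation shortcuts described above follow directly from the linearity of the transform, and can be verified with a small numerical sketch. Assumptions here are illustrative: N=8, K=4, and simple tail zero-padding stands in for a full oversampled IFFT implementation.

```python
import numpy as np

N, K = 8, 4
rng = np.random.default_rng(2)
X = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def partial_update(w):
    # K-times oversampled IFFT (tail zero-padding as a simple stand-in)
    return np.fft.ifft(w * X, n=K * N)

w1 = np.zeros(N, dtype=complex)
w1[0] = 1.0                                        # w^(1,0,...,0)
a = 0.5 - 0.5j
x_1 = partial_update(w1)                           # computed and stored once
x_a = a * x_1                                      # KN multiplications, no transform
assert np.allclose(x_a, partial_update(a * w1))    # matches a fresh transform

# Sums of stored signals likewise replace transform operations
b = -1.0 + 0.25j
x_sum = a * x_1 + b * x_1                          # x^(a+b,0,...,0)
assert np.allclose(x_sum, partial_update((a + b) * w1))
```

Each assertion confirms that scaling or summing previously computed partial-update signals reproduces what a new invertible transform operation would produce, at the cost of only KN multiplications or additions.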
(52) The Linear Combiner 505 is configured to sum each partial-update discrete-time OFDM signal with a base discrete-time OFDM signal to produce an updated discrete-time OFDM signal. The following addition is performed:
y.sup.(u)=y.sup.(0)+x.sup.(u)
where x.sup.(u) is a u.sup.th partial-update discrete-time OFDM signal, y.sup.(0) is a base discrete-time OFDM signal, and y.sup.(u) is an updated discrete-time OFDM signal corresponding to index u. Linear Combiner 505 may store values y.sup.(u), y.sup.(0), and x.sup.(u) in memory 502, and may read values y.sup.(0) and x.sup.(u) from memory 502. Linear Combiner 505 may generate new x.sup.(u) values as described herein.
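The combining step y.sup.(u)=y.sup.(0)+x.sup.(u) can be illustrated numerically. This is a sketch under assumed parameters (N=8, K=4, BPSK-like symbols, simple zero-padded IFFT); the weight value −2 is a hypothetical choice that flips the sign of one data symbol.

```python
import numpy as np

N, K = 8, 4
rng = np.random.default_rng(3)
X = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)

ift = lambda v: np.fft.ifft(v, n=K * N)   # oversampled IFFT stand-in

y0 = ift(X)                               # base discrete-time OFDM signal
n = 2
w = np.zeros(N, dtype=complex)
w[n] = -2.0                               # X[n] + (-2)X[n] = -X[n]: sign flip
x_u = ift(w * X)                          # partial-update signal
y_u = y0 + x_u                            # updated candidate: KN additions

# The candidate equals the full transform of the updated symbol vector
X_updated = X.copy()
X_updated[n] = -X[n]
assert np.allclose(y_u, ift(X_updated))
```

Once x.sup.(u) is available (from a stored column read or a stored scaled signal), each new candidate costs only KN complex additions rather than a full transform.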
(53) In one aspect, an initial (u=0) iteration includes writing the initial base discrete-time MIMO-OFDM signal (denoted by y.sup.(0)) to the memory. Linear Combiner 505 can read y.sup.(0) from memory 502 and combine it with an x.sup.(u) generated by Invertible Transform 509. Linear Combiner 505 might store the resulting sum y.sup.(u) in the memory 502 for subsequent use by the Linear Combiner 505 and/or PAPR Measurement Module 506.
(54) PAPR Measurement Module 506 computes the PAPR of y.sup.(u) and compares it to a previous PAPR and/or at least one PAPR threshold value. Based on the comparison, the signal y.sup.(u) and/or y.sup.(0) can be selected for further processing herein or may be selected as the signal to be transmitted. For example, the Linear Combiner 505 or Invertible Transform 509 can generate new x.sup.(u) values (such as by scaling and/or linear combining of previously generated x.sup.(u) values) based on the PAPR, and the Linear Combiner 505 combines the new x.sup.(u) with y.sup.(u) or a previous y.sup.(u). In some aspects, the PAPR Measurement Module 506 designates y.sup.(u) as the base value y.sup.(0) to be updated in subsequent iterations, or PAPR Measurement Module 506 might select a previous value y.sup.(u). PAPR Measurement Module 506 can instruct Linear Combiner 505 to read values (e.g., x.sup.(u), y.sup.(u), y.sup.(0)) from memory for subsequent processing. PAPR Measurement Module 506 might instruct Sparse Matrix Multiplier 507 to read values (e.g., W.sup.(u), a) to generate new weights.
(55) PAPR Measurement Module 506 can comprise a peak detector, which is sometimes called a peak-hold circuit or a full-wave rectifier. The peak detector monitors a voltage and retains its peak value. A peak detector circuit tracks or follows an input voltage until the extreme point is reached and holds that value as the input decreases. This may be performed in a digital circuit or a processor programmed to determine a maximum value from a data set corresponding to the discrete-time signal under test. The peak detector may identify a signal having minimum peak power among U discrete signals by finding the signal having the smallest maximum value among the KN samples. PAPR Measurement Module 506 may perform algorithmic operations on digital data to determine PAPR. A cumulative distribution function (CDF) or a complementary cumulative distribution function (CCDF) can be used as a performance measure for PAPR. CCDF represents the probability that the PAPR of an OFDM symbol exceeds a given threshold, PAPR.sub.0, and is denoted as CCDF=Pr(PAPR>PAPR.sub.0). PAPR can comprise peak, CDF, CCDF, and/or crest factor (which is the ratio of peak value to RMS value of a waveform). Other PAPR performance measures may be used.
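A common digital formulation of the PAPR and CCDF metrics described above is sketched here. The parameters (N=64, K=4, QPSK-like symbols, an 8 dB threshold, 2000 trials) are illustrative assumptions, not values specified by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(5)

def papr_db(x):
    # Ratio of peak instantaneous power to mean power, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, K, trials = 64, 4, 2000
paprs = np.empty(trials)
for t in range(trials):
    X = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)
    x = np.fft.ifft(X, n=K * N)      # oversampled discrete-time OFDM signal
    paprs[t] = papr_db(x)

# Empirical CCDF: Pr(PAPR > PAPR_0) over the ensemble of OFDM symbols
papr0 = 8.0
ccdf = np.mean(paprs > papr0)
assert 0.0 <= ccdf <= 1.0
```

The per-symbol `papr_db` value corresponds to the crest-factor-squared metric in dB, and the CCDF is the ensemble statistic typically plotted to compare PAPR-reduction schemes.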
(56) The I/O 501 can comprise a processor configured for writing data received from components and/or other nodes to the memory 502 and reading the data from the memory 502 to be transmitted to components and/or other nodes. I/O circuitry 501 can comprise one or more wireless (e.g., radio, optical, or some other wireless technology) and/or wired (e.g., cable, fiber, or some other wire-line technology) transceivers. The I/O 501 can communicate PAPR to a PAPR Aggregator component (in the node or external to the node), which is then processed for weight selection. The I/O 501 can receive selected weights (or corresponding indices) from a weight set selector, and store the data in the memory 502 for use by the OFDM transmitter. For example, the Sparse Matrix Multiplier 507 can read the selected weights from the memory 502. The I/O 501 can communicate baseband OFDM signals (e.g., y.sup.(u)) and/or other data (including side information, such as index u) to radio transceiver circuitry for processing and transmission.
(57) CSI estimator 510 can measure received pilot signals and estimate CSI therefrom. The CSI may be stored in the memory 502 for use by the MIMO Precoder 508 and/or MIMO precoders in other nodes, which can select or generate precoding weights therefrom. CSI may be used by PAPR Weighting module 412 to generate PAPR scaling weights.
(58)
(59) For a first block of data symbols X, partial-update discrete-time OFDM signals generated in 512 can be stored to memory. Step 512 can further comprise generating additional partial-update discrete-time OFDM signals by scaling and/or linearly combining previously generated partial-update discrete-time OFDM signals. When large symbol constellations are used for SLM weights, step 512 can scale partial-update discrete-time OFDM signals to produce new partial-update discrete-time OFDM signals so no additional operations of F.sup.HS are required. The symmetry of such constellations can be exploited to reduce the number of operations. Step 512 can combine partial-update discrete-time OFDM signals to generate new partial-update discrete-time OFDM signals without requiring additional operations of F.sup.HS. Thus, the number of F.sup.HS operations can be independent of constellation size and the number U of candidate signals.
(60) Linear combining 513 comprises summing at least one partial-update discrete-time OFDM signal with a base discrete-time OFDM signal to produce a new updated (or candidate) discrete-time OFDM signal. The base discrete-time OFDM signal may be an initial base discrete-time OFDM signal or a previous updated discrete-time OFDM signal. The candidate discrete-time OFDM signals (including the base) and an index u corresponding to each candidate discrete-time OFDM signal may be stored in memory.
(61) A PAPR 514 is computed for each candidate discrete-time OFDM signal and possibly stored such that it is indexed by u. A decision process 515 comprises comparing the PAPR to a threshold and/or at least one previous PAPR, and possibly storing the current PAPR in memory indexed by u. The decision 515 can direct whether to perform subsequent iterations. The decision 515 may comprise denoting the current candidate discrete-time OFDM signal as the base discrete-time OFDM signal to be used in a subsequent iteration. The decision 515 may select for output the discrete-time OFDM signal and/or associated data (e.g., weights, index, etc.) corresponding to the best PAPR or a PAPR below the threshold.
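The generate, combine, measure, and decide steps (512-515) can be sketched end to end. This is a simplified single-branch sketch under stated assumptions: one-hot sign-flip updates as the candidate set, a plain oversampled IFFT standing in for the sparse column read, and linear-scale PAPR as the selection metric; names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 64, 4
X = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)

ift = lambda v: np.fft.ifft(v, n=K * N)
papr = lambda x: (np.abs(x) ** 2).max() / (np.abs(x) ** 2).mean()

y_base = ift(X)                          # one full transform for the base signal
best_y, best_papr, best_u = y_base, papr(y_base), None
for u in range(N):                       # candidate partial updates
    w = np.zeros(N, dtype=complex)
    w[u] = -2.0                          # flips symbol u: X[u] -> -X[u]
    x_u = ift(w * X)                     # 512: generate partial update
    y_u = y_base + x_u                   # 513: linearly combine with the base
    p = papr(y_u)                        # 514: PAPR measurement
    if p < best_papr:                    # 515: decision
        best_y, best_papr, best_u = y_u, p, u

assert best_papr <= papr(y_base)
```

In a full implementation each x_u would come from a stored column read or from scaling previously stored partial updates, so only the base signal requires a complete transform; the loop then costs KN additions plus a PAPR measurement per candidate.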
(62) If a subsequent iteration is performed, subsequent partial updates to the base signal are selected or adapted 516. Select/Adapt 516 can control the function of Generate 512 and/or Linear combine 513. For example, based on the current PAPR (and previous PAPRs), Select/Adapt 516 can select the partial update to be summed with the base signal, and optionally which base signal to use. Select/Adapt 516 can select the index n and/or scaling factor a.sub.n corresponding to the update. Such data-dependent updating can provide faster convergence in some cases (e.g., for stationary signals) than algorithms that use data-independent updating schedules. A step size for updating the scaling factor a.sub.n can be selected to improve convergence and/or stability. A new scaled partial-update discrete-time OFDM signal can be produced by scaling a previous discrete-time OFDM signal and/or combining discrete-time OFDM signals. The step size may be constant or may be variable based on one or more measurement criteria. Conditions on the step size can be derived to provide convergence in the mean and the mean-square sense. Step sizes and other parameters can be stored in the memory.
(63)
(64) As in
(65)
(66) Aspects disclosed herein can provide for optimizing sparse operations (such as sparse matrix-vector multiplication) on graphics processing units (GPUs) using model-driven compile- and run-time strategies. By way of illustration,
(67) The shared memory 612 is present in each SM 610.1-610.N and is organized into banks. Bank conflict occurs when multiple addresses belonging to the same bank are accessed at the same time. Each SM 610.1-610.N also has a set of registers 614.1-614.M. The constant and texture memories are read-only regions in the global memory space and they have on-chip read-only caches. Accessing constant cache 620 is faster, but it has only a single port and hence it is beneficial when multiple processor cores load the same value from the cache. Texture cache 624 has higher latency than constant cache 620, but it does not suffer greatly when memory read accesses are irregular, and it is also beneficial for accessing data with two-dimensional (2D) spatial locality.
(68) The GPU computing architecture can employ a single instruction multiple threads (SIMT) model of execution. The threads in a kernel are executed in groups called warps, where a warp is a unit of execution. The scalar SPs within an SM share a single instruction unit and the threads of a warp are executed on the SPs. All the threads of a warp execute the same instruction and each warp has its own program counter. Each thread can access memories at different levels in the hierarchy, and the threads have a private local memory space and register space. The threads in a thread block can share a shared memory space, and the GPU dynamic random access memory (DRAM) is accessible by all threads in a kernel.
(69) For memory-bound applications, such as matrix-vector multiplication, it is advantageous to optimize memory performance, such as reducing the memory footprint and implementing processing strategies that better tolerate memory access latency. Many optimization strategies have been developed to handle the indirect and irregular memory accesses of sparse matrix-vector multiplication. SpMV-specific optimizations depend heavily on the structural properties of the sparse matrix, and the problem is often formulated as one in which these properties are known only at run-time. However, sparse matrices in the present disclosure benefit from a well-defined structure that is known before run-time, and this structure can remain the same for many data sets. This simplifies the problem and thereby enables better-performing solutions. When a sparse weight vector is employed, the matrix-vector multiplication can be modeled as SpMV with a corresponding sparse operator matrix. For example, matrix elements that multiply only zero-value vector elements can be set to zero to provide a sparse matrix. If the sparse weight vector w is predetermined and independent of the data symbols X and the operator matrix, then the structural properties of the sparse operator matrix are known before run-time, and the hardware and software acceleration strategies can be more precisely defined.
(70) The optimal memory access pattern is also dependent on the manner in which threads are mapped for computation and on the number of threads involved in global memory access, as involving more threads assists in hiding the global memory access latency. Consequently, thread mapping schemes have been developed to ensure optimized memory access. Memory optimization may be based on the compressed sparse row (CSR) format, and the CSR storage format can be adapted to suit the GPU architecture.
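The CSR layout referred to above can be made concrete with a small hypothetical example: the matrix values, column indices, and row-pointer arrays for a toy sparse operator, plus a row-by-row SpMV that exhibits the access pattern GPU thread mappings optimize.

```python
import numpy as np

# Toy sparse operator (illustrative values only)
A = np.array([[0, 2, 0, 0],
              [1, 0, 0, 3],
              [0, 0, 0, 0],
              [0, 4, 5, 0]], dtype=float)

# Build CSR arrays: non-zero values, their column indices, and
# indptr giving each row's extent within the value array
data, indices, indptr = [], [], [0]
for row in A:
    nz = np.nonzero(row)[0]
    indices.extend(int(j) for j in nz)
    data.extend(row[nz])
    indptr.append(len(data))

def spmv(data, indices, indptr, x):
    # One output element per row; on a GPU, one thread (or thread
    # group) would typically handle each row's extent
    y = np.zeros(len(indptr) - 1)
    for r in range(len(y)):
        for k in range(indptr[r], indptr[r + 1]):
            y[r] += data[k] * x[indices[k]]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
assert indptr == [0, 1, 3, 3, 5]
assert np.allclose(spmv(data, indices, indptr, x), A @ x)
```

When the sparse weight structure is fixed before run-time, as in this disclosure, the indptr and indices arrays never change across data blocks, so they can be cached on-chip and the irregular-access penalty of general SpMV largely disappears.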
(71) Some aspects can exploit synchronization-free parallelism. In SpMV computation, the parallelism available across rows enables distribution of computations corresponding to a row or a set of rows to a thread block as opposed to allocating one thread to perform the computation corresponding to one row and a thread block to handle a set of rows. A useful access strategy for global memory is the hardware-optimized coalesced access pattern when consecutive threads of a half-warp access consecutive elements. For example, when all the words requested by the threads of a half-warp lie within the same memory segment, and if consecutive threads access consecutive words, then all the memory requests of the half-warp are coalesced into one memory transaction.
(72) One strategy maps multiple threads per row such that consecutive threads access consecutive non-zero elements of the row in a cyclic fashion to compute partial products corresponding to the non-zero elements. The threads mapped to a row can compute the output vector element corresponding to the row from the partial products through parallel sum reduction. The partial products can be stored in shared memory as they are accessed only by threads within a thread block.
(73) Some techniques exploit data locality and reuse. The input and output vectors can exhibit data reuse in SpMV computation. The reuse of output vector elements can be achieved by exploiting synchronization-free parallelism with optimized thread mapping, which ensures that partial contributions to each output vector element are computed only by a certain set of threads and the final value is written only once. The reuse pattern of input vector elements depends on the non-zero access pattern of the sparse matrix.
(74) Exploiting data reuse of the input vector elements within a thread or among threads within a thread block can be achieved by caching the elements in on-chip memories. The on-chip memory may be, for example, texture (hardware) cache, registers, or shared memory (software) cache. Utilizing registers or shared memory to cache input vector elements can include identifying portions of a vector that are reused, which in turn, requires the identification of dense sub-blocks in the sparse matrix. For a predetermined set of sparse weight vectors, this information is already known. Preprocessing of the sparse matrix can be performed to extract dense sub-blocks, and a block storage format can be implemented that suits the GPU architecture (e.g., enables fine-grained thread-level parallelism). If the sequence length of the data symbols does not vary, then the sub-block size remains constant, which avoids the memory access penalty for reading block size and block index, as is typically required in SpMV optimizations.
(75) Techniques described herein can include tuning configuration parameters, such as varying the number of threads per thread block used for execution and/or varying the number of threads handling a row. To achieve high parallelism and to meet latency constraints, the SpMV can include multiple buffers. In one aspect, SpMV may include two sparse matrix buffers, two pointer buffers, and two output buffers. The two sparse matrix buffers are configured in alternate buffer mode for buffering sparse matrix coefficients, the two pointer buffers are configured in alternate buffer mode for buffering pointers representing non-zero coefficient start positions in each column of the sparse matrix, and the two output buffers are configured in alternate buffer mode to output the calculation result from one output buffer while the other output buffer is used to buffer the calculation result.
(76) Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
(77) The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
(78) The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a client-side, server-side, and/or intermediate device. In the alternative, the processor and the storage medium may reside as discrete components in a client-side, server-side, and/or intermediate device.
(79) In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies, such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).